Activity
From 07/18/2019 to 08/16/2019
08/16/2019
- 09:57 PM Bug #40773: qa: 'ceph osd require-osd-release nautilus' fails
- https://github.com/ceph/ceph/pull/29715
- 07:37 PM Bug #40773: qa: 'ceph osd require-osd-release nautilus' fails
- http://pulpito.ceph.com/pdonnell-2019-08-15_13:19:31-fs-wip-pdonnell-testing-20190814.222632-distro-basic-smithi/
...
- 07:16 PM Documentation #41316 (Fix Under Review): doc: update documentation for LazyIO
- 03:44 PM Documentation #41316 (Resolved): doc: update documentation for LazyIO
- Update the documentation with usage info about the LazyIO methods: lazyio_propagate() and lazyio_synchronize(). Also ...
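For context, a minimal sketch of how these calls are meant to be used from the python binding, assuming it exposes lazyio(), lazyio_propagate() and lazyio_synchronize() mirroring the libcephfs C API; the path is illustrative.
```python
import cephfs

# Hedged sketch (not the documented example); method names assume the python
# binding mirrors ceph_lazyio(), ceph_lazyio_propagate() and ceph_lazyio_synchronize().
fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()
fd = fs.open(b'/shared/data.bin', 'w', 0o644)

fs.lazyio(fd, 1)                 # relax strict cache coherency on this fd
fs.write(fd, b'hello', 0)
fs.lazyio_propagate(fd, 0, 5)    # explicitly make this fd's writes visible to other clients

fs.lazyio_synchronize(fd, 0, 0)  # pull in writes propagated by other clients (and the file size)

fs.close(fd)
fs.unmount()
fs.shutdown()
```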
- 06:52 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth, I've discussed this with Doug and we'll be assigning this to you.
Sidharth Anupkrishnan wrote:
> Nice!...
- 06:25 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Nice!
I have a doubt regarding how we could use consistent hashing for the 2nd case: "export_ephemeral_random" pinni...
- 04:04 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Here are some scripts shared by Dan from CERN that can be used to manually test random subtree pinning: https://githu...
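For reference, manual pinning today goes through the ceph.dir.pin vxattr (presumably what those scripts drive and what the proposed ephemeral pins would automate); a minimal sketch, assuming a mounted CephFS at an example path:
```python
import os

# Illustrative only: pin each user directory to an MDS rank via ceph.dir.pin.
# Mount point and layout are assumptions; -1 reverts to the default policy.
MOUNT = '/mnt/cephfs'
home = os.path.join(MOUNT, 'home')

for i, d in enumerate(sorted(os.listdir(home))):
    rank = i % 2  # spread user directories across two active MDS ranks
    os.setxattr(os.path.join(home, d), 'ceph.dir.pin', str(rank).encode())

os.setxattr(home, 'ceph.dir.pin', b'-1')  # the parent itself stays balancer-controlled
```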
- 05:38 PM Support #40906: Full CephFS causes hang when accessing inode.
- I've sent an e-mail to Zheng with a link to download the logs due to sensitive info. The client requested the file 0....
- 05:33 PM Bug #41319 (Can't reproduce): ceph.in: pool creation fails with "AttributeError: 'str' object has...
- ...
- 04:42 PM Bug #41192 (In Progress): mds: atime not being updated persistently
- I've dropped the PR for now, as Zheng pointed out that atime is not actually tied to Fr caps after all, but rather to...
- 12:52 PM Bug #40298 (Resolved): cephfs-shell: 'rmdir *' does not remove all directories
- The issue is resolved in this PR https://github.com/ceph/ceph/commit/4c968b1f30faab9f9013dee95043ccf5f38f5d20.
- 11:41 AM Feature #41311 (Resolved): deprecate CephFS inline_data support
- I sent a proposal to the various ceph mailing lists to deprecate inline_data support for Octopus. At this point, we m...
- 10:51 AM Bug #41310 (Resolved): client: lazyio synchronize does not get file size
- LazyIO synchronize fails to do the task of making the propagated writes by other clients/fds visible to the current f...
- 08:08 AM Backport #37761 (In Progress): mimic: mds: deadlock when setting config value via admin socket
08/15/2019
- 06:25 PM Feature #41302: mds: add ephemeral random and distributed export pins
- It's worth noting that the only difference between the two options is that export_ephemeral_distributed is not hierar...
- 06:15 PM Feature #41302 (Resolved): mds: add ephemeral random and distributed export pins
- Background: export pins [1] are an effective way to distribute metadata load for large workloads without the metadata...
- 04:22 PM Support #40906: Full CephFS causes hang when accessing inode.
- It seems that after some time (hours, days, weeks) things get fixed, but it sure would be nice to know how to get it i...
- 01:46 AM Support #40906: Full CephFS causes hang when accessing inode.
- please provide logs of both ceph-fuse and mds during accessing the bad file.
- 01:55 PM Bug #41242 (Fix Under Review): mds: re-introudce mds_log_max_expiring to control expiring concurr...
- 09:50 AM Bug #40939: mds: map client_caps been inserted by mistake
- keep the script happy (alternatively, we could delete #41098)
- 09:09 AM Backport #41283 (Rejected): nautilus: cephfs-shell: No error message is printed on ls of invalid ...
- 09:08 AM Backport #41276 (Resolved): nautilus: qa: malformed job
- https://github.com/ceph/ceph/pull/30038
- 09:07 AM Backport #41269 (Resolved): nautilus: cephfs-shell: Convert files path type from string to bytes
- https://github.com/ceph/ceph/pull/30057
- 09:07 AM Backport #41268 (Rejected): nautilus: cephfs-shell: onecmd throws TypeError
- 07:31 AM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- I think the first time I used the standard mimic workflow of @mds fail@ and once all MDSs are stopped, @fs remove@. T...
08/14/2019
- 10:23 PM Bug #41031 (Pending Backport): qa: malformed job
- 10:06 PM Bug #40430 (Pending Backport): cephfs-shell: No error message is printed on ls of invalid directo...
- 10:04 PM Bug #41164 (Pending Backport): cephfs-shell: onecmd throws TypeError
- 10:03 PM Bug #41163 (Pending Backport): cephfs-shell: Convert files path type from string to bytes
- 09:24 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- > [14:11:31] <@gregsfortytwo> batrick: looking at tracker.ceph.com/issues/41228 and it's got a lot going on but part...
- 09:10 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- Can you include exactly what commands you ran? Did you still have clients mounted while deleting the FS?
- 04:22 PM Bug #41192 (Fix Under Review): mds: atime not being updated persistently
- 04:18 PM Feature #41220: mgr/volumes: add test case for blacklisted clients
- Related: test ceph-mgr getting blacklisted. It should recover somehow (probably close the libcephfs handle and get a ...
- 04:17 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- PR here: https://github.com/ceph/ceph/pull/29642
It occurs to me though that now that we have a way to get to the ...
- 04:14 PM Feature #16656 (Fix Under Review): mount.ceph: enable consumption of ceph keyring files
- 01:26 PM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
- https://github.com/ceph/ceph/pull/29664
- 12:08 PM Feature #41209: mds: create a configurable snapshot limit
- Zheng Yan wrote:
> Milind Changire wrote:
> > Thanks for the hint Zheng.
> >
> > How could the MDS return the st...
- 11:09 AM Feature #41209: mds: create a configurable snapshot limit
- Milind Changire wrote:
> Thanks for the hint Zheng.
>
> How could the MDS return the status "too many snaps" to t...
- 10:43 AM Feature #41209: mds: create a configurable snapshot limit
- Thanks for the hint Zheng.
How could the MDS return the status "too many snaps" to the caller ?
There's no error ...
- 08:07 AM Bug #41242 (Closed): mds: re-introudce mds_log_max_expiring to control expiring concurrency manually
- In some cases, a huge number of mds segments could be expired concurrently, which might bring very heavy loads to OSDs and we c...
- 04:22 AM Backport #40944 (In Progress): nautilus: mgr: failover during in qa testing causes unresponsive c...
- https://github.com/ceph/ceph/pull/29649
- 02:12 AM Support #40906: Full CephFS causes hang when accessing inode.
- Okay, we had another data corruption incident, so I took some time to try looking deeper into the problem. I did some...
08/13/2019
- 08:01 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- I tried again, this time with a replicated pool and just one MDS. I think it's too early to draw definitive conclusio...
- 02:54 PM Bug #41140: mds: trim cache more regularly
- I believe this problem may be particularly severe when the main data pool is an EC pool. I am trying the same thing w...
- 01:17 PM Bug #41228 (Resolved): mon: deleting a CephFS and its pools causes MONs to crash
- Disclaimer: I am not entirely sure if this is strictly related to CephFS or a general problem when deleting pools wit...
- 11:14 AM Bug #41133 (In Progress): qa/tasks: update thrasher design
- 06:46 AM Feature #41220 (New): mgr/volumes: add test case for blacklisted clients
- see qa/tasks/cephfs/test_volumes.py
- 06:44 AM Bug #41218: mgr/volumes: retry spawning purge threads on failure
- thanks -- those were cached in my browser :P
- 05:16 AM Bug #41218 (Resolved): mgr/volumes: retry spawning purge threads on failure
- seen here: http://qa-proxy.ceph.com/teuthology/pdonnell-2019-08-07_15:57:31-fs-wip-pdonnell-testing-20190807.132723-d...
- 05:18 AM Bug #41219 (Resolved): mgr/volumes: send purge thread (and other) health warnings to `ceph status`
- so as to easily identify issues rather than scanning log files. Also, log any critical error(s) in cluster log.
- 01:34 AM Feature #41209: mds: create a configurable snapshot limit
- mds.0 has snaptable, which contains information about all snapshots. Each mds has a snapclient, which also caches sna...
08/12/2019
- 09:00 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- Jeff Layton wrote:
> That's certainly another possibility. I'm not sure it's any easier though.
>
> We'd have to ...
- 06:18 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- That's certainly another possibility. I'm not sure it's any easier though.
We'd have to scrape and parse the outpu...
- 06:05 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- Jeff Layton wrote:
> My current thinking is to link in libceph-common, create a context and fetch the keys using the...
- 05:37 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- My current thinking is to link in libceph-common, create a context and fetch the keys using the same C++ routines tha...
- 03:49 PM Feature #16656 (In Progress): mount.ceph: enable consumption of ceph keyring files
- 03:46 PM Bug #41187 (Rejected): src/mds/Server.cc: 958: FAILED assert(in->snaprealm)
- Sorry Jan, snapshots are not stable in Luminous and we don't spend time looking at snapshot related failures for that...
- 03:42 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- Each file will have an object in the default data pool (the data pool used at file system creation time) with an exte...
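A rough sketch of inspecting that per-file object with python-rados: the first object is named <inode-hex>.00000000 and its 'parent' xattr holds the encoded backtrace. The pool name and inode number below are placeholders.
```python
import rados

ino = 0x10000000001                    # e.g. from `stat -c %i` on the file
objname = '%x.%08x' % (ino, 0)         # first object: <inode-hex>.00000000

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('cephfs_data')        # default data pool of the fs
    backtrace = ioctx.get_xattr(objname, 'parent')   # encoded inode backtrace
    print('%s: %d bytes of backtrace' % (objname, len(backtrace)))
    ioctx.close()
finally:
    cluster.shutdown()
```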
- 01:35 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- I had the same issue before our MDS and Mons died.
Journal was producing 2 files a few TB big and the metadata pool ...
- 01:34 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- referenced: https://tracker.ceph.com/issues/41026
- 01:28 PM Bug #41204 (New): CephFS pool usage 3x above expected value and sparse journal dumps
- I am in the process of copying about 230 million small and medium-sized files to a CephFS and I have three active MDS...
- 02:03 PM Backport #41098: luminous: mds: map client_caps been inserted by mistake
- (snapshots are not supported/stable in luminous)
- 01:43 PM Backport #41098 (Rejected): luminous: mds: map client_caps been inserted by mistake
- 01:59 PM Backport #40162: mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- Backport PR here:
https://github.com/ceph/ceph/pull/29609
- 12:59 PM Backport #40162 (In Progress): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- 01:56 PM Feature #41209 (Resolved): mds: create a configurable snapshot limit
- Add a new config option that imposes a limit on the number of snapshots in a directory. Zheng has found in the past t...
- 01:31 PM Bug #40821 (In Progress): osdc: objecter ops output does not have useful time information
- 12:29 PM Bug #41192: mds: atime not being updated persistently
- Now that I've done a bit more investigation, I think there are actually two parts to this bug:
1) the MDS only upd...
- 06:54 AM Backport #40894 (In Progress): nautilus: mds: cleanup truncating inodes when standby replay mds t...
- https://github.com/ceph/ceph/pull/29591
- 05:32 AM Cleanup #41185 (Fix Under Review): mds: reorg FSMapUser header
08/09/2019
- 06:03 PM Bug #41192: mds: atime not being updated persistently
- Tracepoints from adding and removing caps for the inode:...
- 05:26 PM Bug #41192: mds: atime not being updated persistently
- ...maybe Fw too?
- 05:16 PM Bug #41192 (Won't Fix): mds: atime not being updated persistently
- xfstest generic/192 fails with kcephfs. It basically:
mounts the fs
creates a file and records the atime
waits a...
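A rough python sketch of the same check, assuming a CephFS mount with atime/strictatime semantics; the path and timing are illustrative.
```python
import os
import time

path = '/mnt/cephfs/atime-test'
with open(path, 'w') as f:
    f.write('data')
before = os.stat(path).st_atime

time.sleep(5)
with open(path) as f:
    f.read()
after = os.stat(path).st_atime

# generic/192 expects the post-read atime to be newer and to persist
assert after > before, 'atime was not updated'
```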
- 02:18 PM Bug #41187 (Rejected): src/mds/Server.cc: 958: FAILED assert(in->snaprealm)
- We saw this on a cluster where two out of three mds servers needed to be rebooted. After the reboot both mds dumped c...
- 11:03 AM Cleanup #41185 (Resolved): mds: reorg FSMapUser header
- 10:16 AM Feature #41182 (Resolved): mgr/volumes: add `fs subvolume extend/shrink` commands
- To extend and shrink a subvolume, set the desired quota.
And during shrink, maybe error out if the desired shrunk size i...
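A minimal sketch of what extend/shrink amounts to in terms of the quota vxattr; the path and sizes are examples, and rejecting a shrink below current usage is the behaviour suggested above.
```python
import os

subvol = '/mnt/cephfs/volumes/group0/subvol0'

os.setxattr(subvol, 'ceph.quota.max_bytes', str(20 * 2**30).encode())  # extend to 20 GiB

used = int(os.getxattr(subvol, 'ceph.dir.rbytes'))  # recursive bytes currently used
new_size = 5 * 2**30
if new_size < used:
    raise ValueError('refusing to shrink below current usage')
os.setxattr(subvol, 'ceph.quota.max_bytes', str(new_size).encode())
```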
- 10:02 AM Cleanup #41181 (Fix Under Review): mds: reorg FSMap header
- 09:55 AM Cleanup #41181 (Resolved): mds: reorg FSMap header
- 09:12 AM Bug #41069 (Closed): nautilus: test_subvolume_group_create_with_desired_mode fails with "Assertio...
- The test passed in a more recent nautilus test run that included the mgr/volumes backport PR https://github.com/ceph...
- 08:04 AM Cleanup #41178 (Fix Under Review): mds: reorg DamageTable header
- 07:54 AM Cleanup #41178 (Resolved): mds: reorg DamageTable header
- 06:12 AM Bug #39395 (Resolved): ceph: ceph fs auth fails
08/08/2019
- 01:49 PM Bug #41164 (Fix Under Review): cephfs-shell: onecmd throws TypeError
- 01:41 PM Bug #41164 (Resolved): cephfs-shell: onecmd throws TypeError
- cmd2 (0.9.15)
Python 3.7.4
Fedora 30
Due to changes in recent cmd module, on any command the following error is ... - 01:17 PM Bug #41163 (Fix Under Review): cephfs-shell: Convert files path type from string to bytes
- 01:09 PM Bug #41163 (Resolved): cephfs-shell: Convert files path type from string to bytes
- 09:36 AM Bug #41140: mds: trim cache more regularly
- I have the following settings now, which seem to work okay-ish:...
- 04:56 AM Bug #41034: cephfs-journal-tool: NetHandler create_socket couldn't create socket
- I forgot to say: the tool functions work, the warning/error doesn't affect its functionality but it's confusing if you...
- 04:55 AM Bug #41034: cephfs-journal-tool: NetHandler create_socket couldn't create socket
- the cephfs-journal-tool, but I also see this with cephfs-data-scan
but I made a mistake, it's Ubuntu 16.04, on the...
08/07/2019
- 09:48 PM Bug #41141 (Fix Under Review): mds: recall capabilities more regularly when under cache pressure
- 09:47 PM Bug #41140 (Fix Under Review): mds: trim cache more regularly
- 04:53 PM Bug #41140: mds: trim cache more regularly
- We did much the same thing in the OSD. Previously we trimmed in a single thread at regular intervals, but now we tri...
- 09:39 AM Bug #41140: mds: trim cache more regularly
- Janek Bevendorff wrote:
> This may be obvious, but to put the whole thing into context: this cache trimming issue ca...
- 09:29 AM Bug #41140: mds: trim cache more regularly
- This may be obvious, but to put the whole thing into context: this cache trimming issue can make a CephFS permanently...
- 09:07 PM Bug #41034 (Need More Info): cephfs-journal-tool: NetHandler create_socket couldn't create socket
- Can you please elaborate on this, which journal tool are you talking about?
If possible, could you provide steps to...
- 08:03 PM Bug #40927 (Resolved): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is ...
- 08:02 PM Feature #40617 (Resolved): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- 08:02 PM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
- 08:02 PM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- 02:42 PM Bug #41144: mount.ceph: doesn't accept "strictatime"
- Changing subject. "nostrictatime" seems to be intercepted by /bin/mount, so the mount helper doesn't need to handle it.
- 07:37 AM Bug #41148 (Fix Under Review): client: _readdir_cache_cb() may use the readdir_cache already clear
- 07:31 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
- segmentation fault: inode=0x0
- 07:25 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
- huanwen ren wrote:
> Zheng Yan wrote:
> > I think getattr does not affect parent directory inode's completeness
> ...
- 07:22 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
- Zheng Yan wrote:
> I think getattr does not affect parent directory inode's completeness
From the log, there is a...
- 07:00 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
- I think getattr does not affect parent directory inode's completeness
- 06:56 AM Bug #41148 (Resolved): client: _readdir_cache_cb() may use the readdir_cache already clear
- Calling function A means to get dir information from the cache, but in the while loop,
the contents of readdir_cach...
- 04:55 AM Bug #41147: mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn->first)
- this is temporarily fixed by wiping the session table
- 04:34 AM Bug #41147 (Duplicate): mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn...
- After creating a new FS and running it for 2 days, my MDS is in a crash loop. I didn't try anything yet so far as to...
08/06/2019
- 10:36 PM Bug #41140: mds: trim cache more regularly
- Dan van der Ster wrote:
> FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is eno...
- 05:01 PM Bug #41140: mds: trim cache more regularly
- FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is enough, and increasing the thr...
- 04:42 PM Bug #41140 (Resolved): mds: trim cache more regularly
- Under -create- workloads that result in the acquisition of a lot of capabilities, the MDS can't trim the cache fast e...
- 07:56 PM Bug #41144 (Resolved): mount.ceph: doesn't accept "strictatime"
- The cephfs mount helper doesn't support either strictatime or nostrictatime. It should intercept those options and se...
- 04:46 PM Bug #41141 (Resolved): mds: recall capabilities more regularly when under cache pressure
- If a client is doing a large parallel create workload, the MDS may not recall capabilities fast enough and the client...
- 02:46 AM Bug #41133 (Closed): qa/tasks: update thrasher design
- * Make the Thrasher class abstract by adding _do_thrash abstract function.
* Change OSDThrasher, RBDMirrorThrasher, ...
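A minimal sketch of that shape; anything beyond the names mentioned in this ticket is an assumption.
```python
from abc import ABC, abstractmethod

class Thrasher(ABC):
    def __init__(self):
        self.exception = None  # common bookkeeping shared by all thrashers

    @abstractmethod
    def _do_thrash(self):
        """Inject one round of failures; subclasses decide what to thrash."""

class OSDThrasher(Thrasher):
    def _do_thrash(self):
        pass  # e.g. mark a random OSD down/out, then revive it

class RBDMirrorThrasher(Thrasher):
    def _do_thrash(self):
        pass  # e.g. kill and restart rbd-mirror daemons
```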
- 02:45 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
08/05/2019
- 11:09 PM Backport #41129 (Resolved): mimic: qa: power off still resulted in client sending session close
- https://github.com/ceph/ceph/pull/30233
- 11:09 PM Backport #41128 (Resolved): nautilus: qa: power off still resulted in client sending session close
- https://github.com/ceph/ceph/pull/29983
- 11:07 PM Backport #41118 (Rejected): nautilus: cephfs-shell: add CI testing with flake8
- 11:07 PM Backport #41114 (Resolved): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- https://github.com/ceph/ceph/pull/31283
- 11:07 PM Backport #41113 (Resolved): nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- https://github.com/ceph/ceph/pull/30032
- 11:06 PM Backport #41112 (Rejected): nautilus: cephfs-shell: cd with no args has no effect
- 11:06 PM Backport #41108 (Resolved): mimic: mds: disallow setting ceph.dir.pin value exceeding max rank id
- https://github.com/ceph/ceph/pull/29940
- 11:06 PM Backport #41107 (Resolved): nautilus: mds: disallow setting ceph.dir.pin value exceeding max rank id
- https://github.com/ceph/ceph/pull/29938
- 11:06 PM Backport #41106 (Resolved): nautilus: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/32245
- 11:06 PM Backport #41105 (Rejected): nautilus: cephfs-shell: flake8 blank line and indentation error
- 11:05 PM Backport #41101 (Rejected): luminous: tools/cephfs: memory leak in cephfs/Resetter.cc
- 11:05 PM Backport #41100 (Resolved): mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
- https://github.com/ceph/ceph/pull/29915
- 11:05 PM Backport #41099 (Resolved): nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
- https://github.com/ceph/ceph/pull/29879
- 11:05 PM Backport #41098 (Rejected): luminous: mds: map client_caps been inserted by mistake
- 11:05 PM Backport #41097 (Resolved): mimic: mds: map client_caps been inserted by mistake
- https://github.com/ceph/ceph/pull/29833
- 11:04 PM Backport #41096 (Resolved): nautilus: mds: map client_caps been inserted by mistake
- https://github.com/ceph/ceph/pull/29878
- 11:04 PM Backport #41095 (Resolved): nautilus: qa: race in test_standby_replay_singleton_fail
- https://github.com/ceph/ceph/pull/29832
- 11:04 PM Backport #41094 (Resolved): mimic: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_...
- https://github.com/ceph/ceph/pull/29812
- 11:04 PM Backport #41093 (Resolved): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
- https://github.com/ceph/ceph/pull/29811
- 11:04 PM Backport #41089 (Rejected): nautilus: cephfs-shell: Multiple flake8 errors
- 11:04 PM Backport #41088 (Resolved): mimic: qa: AssertionError: u'open' != 'stale'
- https://github.com/ceph/ceph/pull/29751
- 11:03 PM Backport #41087 (Resolved): nautilus: qa: AssertionError: u'open' != 'stale'
- https://github.com/ceph/ceph/pull/29750
- 04:05 PM Backport #40326: nautilus: mds: evict stale client when one of its write caps are stolen
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28583
merged
- 04:04 PM Backport #40324: nautilus: ceph_volume_client: d_name needs to be converted to string before using
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28609
merged
- 04:03 PM Backport #40839: nautilus: cephfs-shell: TypeError in poutput
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29156
merged
- 04:03 PM Backport #40842: nautilus: ceph-fuse: mount does not support the fallocate()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29157
merged
- 04:02 PM Backport #40843: nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29158
merged
- 04:02 PM Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29186
merged
- 04:01 PM Backport #40438: nautilus: getattr on snap inode stuck
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29231
merged
- 04:00 PM Backport #40440: nautilus: mds: cannot switch mds state from standby-replay to active
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29233
merged
- 03:59 PM Backport #40443: nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29343
merged
- 03:58 PM Backport #40445: nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29344
merged
- 02:14 PM Bug #41072 (In Progress): scheduled cephfs snapshots (via ceph manager)
- 12:54 PM Bug #41072: scheduled cephfs snapshots (via ceph manager)
- - also, interface for fetching snap metadata (`flushed` state, etc...)...
- 12:23 PM Bug #41072 (Resolved): scheduled cephfs snapshots (via ceph manager)
- outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
Specify a snapshot schedule on any (sub)direct...
- 01:47 PM Bug #40989 (Resolved): qa: RECENT_CRASH warning prevents wait_for_health_clear from completing
- 01:46 PM Feature #40986 (Fix Under Review): cephfs qos: implement cephfs qos base on tokenbucket algorighm
- 01:45 PM Backport #41000 (New): luminous: client: failed to drop dn and release caps causing mds stary sta...
- Zheng, please take this one. The backport is non-trivial.
- 01:14 PM Feature #41074 (Resolved): pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
- outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
mirror scheduled (and temporary) snapshots a r...
- 01:13 PM Feature #41073 (Rejected): cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- Introduce an rsync-like tool (cephfs-sync) for mirroring scheduled and temp cephfs snapshots to sync targets. sync tar...
- 01:02 PM Backport #41070 (In Progress): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- 11:56 AM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- https://github.com/ceph/ceph/pull/29490
- 01:00 PM Backport #41071 (In Progress): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes...
- 11:58 AM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
- https://github.com/ceph/ceph/pull/29490
- 10:50 AM Bug #41069 (Need More Info): nautilus: test_subvolume_group_create_with_desired_mode fails with "...
- Seen here in nautilus run: http://qa-proxy.ceph.com/teuthology/yuriw-2019-07-30_20:57:10-fs-wip-yuri-testing-2019-07-...
- 05:23 AM Bug #41066: mds: skip trim mds cache if mdcache is not opened
- https://github.com/ceph/ceph/pull/29481
- 05:18 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
- ```
2019-07-24 14:51:28.028198 7f6dc2543700 1 mds.0.940446 active_start
2019-07-24 14:51:39.452890 7f6dc2543700 1...
- 12:29 AM Backport #41001 (In Progress): mimic: client: failed to drop dn and release caps causing mds star...
- 12:20 AM Backport #41002 (In Progress): nautilus:client: failed to drop dn and release caps causing mds st...
08/01/2019
- 05:46 PM Support #40906: Full CephFS causes hang when accessing inode.
- Please confirm that I understand the process so that I can give it a try.
Thanks!
- 05:45 PM Bug #41049: adding ceph secret key to kernel failed: Invalid argument.
- reason: accidentally double base64 encoded
MISSING old warning!: secret is not valid base64: Invalid argument.
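A toy illustration of that failure mode (the key bytes are fabricated): a secret that was base64-encoded twice still decodes cleanly once, so nothing complains until the kernel rejects the key, whereas the old explicit warning caught it immediately.
```python
import base64

secret = bytes(range(16))               # stand-in for the raw key
once = base64.b64encode(secret)         # what the keyring/secretfile should hold
twice = base64.b64encode(once)          # what gets pasted by mistake

assert base64.b64decode(twice) == once      # decodes "successfully"...
assert base64.b64decode(once) == secret     # ...but only a single decode yields the real key
```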
- 05:32 PM Bug #41049 (New): adding ceph secret key to kernel failed: Invalid argument.
- Fresh Nautilus Cluster.
Fresh cephfs.
This is not a common base64 error
mount -t ceph 10.3.2.1:6789:/ /mnt/ -o n...
07/31/2019
- 06:40 PM Feature #40811 (Pending Backport): mds: add command that modify session metadata
- 06:34 PM Bug #40927 (Pending Backport): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph...
- 05:37 PM Bug #41034 (Resolved): cephfs-journal-tool: NetHandler create_socket couldn't create socket
- Ubuntu 18.04 with ceph Nautilus repo
Journal tool is broken:
2019-07-31 19:36:56.879 7f57bf308700 -1 NetHandler cre...
- 05:33 PM Bug #40999 (Pending Backport): qa: AssertionError: u'open' != 'stale'
- 05:10 PM Bug #41031 (Resolved): qa: malformed job
- /ceph/teuthology-archive/pdonnell-2019-07-31_00:35:45-fs-wip-pdonnell-testing-20190730.205527-distro-basic-smithi/416...
- 04:59 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
- Please seek help on ceph-users. Provide more information about your cluster and how the error came about.
- 04:00 PM Bug #41026: MDS process crashes on 14.2.2
- ...
- 01:15 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
- MDS processes on Ubuntu 18.04 Nautilus 14.2.2 are crashing, unable to recover
-7> 2019-07-31 13:29:46.888 7fb36a...
- 03:32 PM Bug #39395: ceph: ceph fs auth fails
- merged https://github.com/ceph/ceph/pull/28666
- 09:08 AM Bug #40960: client: failed to drop dn and release caps causing mds stary stacking.
- some more background of this issue is under
https://tracker.ceph.com/issues/38679#note-9
- 02:49 AM Bug #41006 (Fix Under Review): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before...
07/30/2019
- 10:13 PM Backport #40845: nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29159
merged
- 06:35 PM Bug #39947 (Pending Backport): cephfs-shell: add CI testing with flake8
- 05:07 PM Bug #40951 (Rejected): (think this is bug) Why CephFS metadata pool usage is only growing?
- NAB
- 12:09 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Bingo - mds_log_max_segments.
In Luminous the description for this option is empty:...
- 01:25 PM Bug #41006: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
- looks like discontiguous free inode numbers can trigger the crash
- 08:56 AM Bug #41006 (Resolved): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
- Running cephfs-data-scan scan_links on a test 14.2.2 cluster I get this assertion:...
- 05:51 AM Feature #5520 (In Progress): osdc: should handle namespaces
- 01:42 AM Backport #41002 (Resolved): nautilus:client: failed to drop dn and release caps causing mds stary...
- https://github.com/ceph/ceph/pull/29478
- 01:41 AM Backport #41001 (Resolved): mimic: client: failed to drop dn and release caps causing mds stary s...
- https://github.com/ceph/ceph/pull/29479
- 01:38 AM Backport #41000 (Resolved): luminous: client: failed to drop dn and release caps causing mds star...
- https://github.com/ceph/ceph/pull/29830
07/29/2019
- 09:53 PM Bug #40603 (Pending Backport): mds: disallow setting ceph.dir.pin value exceeding max rank id
- 09:49 PM Bug #40965 (Resolved): client: don't report ceph.* xattrs via listxattr
- 09:47 PM Bug #40939 (Pending Backport): mds: map client_caps been inserted by mistake
- 09:46 PM Bug #40960 (Pending Backport): client: failed to drop dn and release caps causing mds stary stack...
- 09:10 PM Bug #37681: qa: power off still resulted in client sending session close
- backport note: also need fix for https://tracker.ceph.com/issues/40999
- 08:09 PM Bug #37681 (Pending Backport): qa: power off still resulted in client sending session close
- 09:09 PM Bug #40999 (Fix Under Review): qa: AssertionError: u'open' != 'stale'
- 09:06 PM Bug #40999 (Resolved): qa: AssertionError: u'open' != 'stale'
- ...
- 08:10 PM Bug #40968 (Pending Backport): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stal...
- 06:17 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Konstantin Shalygin wrote:
> ??The MDS tracks opened files and capabilities in the MDS journal. That would explain t... - 04:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
> ??The MDS tracks opened files and capabilities in the MDS journal. That would explain t...
- 04:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- 05:38 PM Cleanup #40992 (Pending Backport): cephfs-shell: Multiple flake8 errors
- 11:54 AM Cleanup #40992: cephfs-shell: Multiple flake8 errors
- Not ignoring E501; instead limiting line length to 100....
- 06:59 AM Cleanup #40992 (Fix Under Review): cephfs-shell: Multiple flake8 errors
- 06:48 AM Cleanup #40992 (Resolved): cephfs-shell: Multiple flake8 errors
- After ignoring E501 and W503 flake8 errors, the following needs to be fixed:...
07/27/2019
- 05:25 AM Support #40906: Full CephFS causes hang when accessing inode.
- Okay, I'll give it a shot next week on some of my smaller directories.
Just to be sure I understand the process.
...
07/26/2019
- 10:18 PM Bug #40474 (Pending Backport): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 10:18 PM Bug #40474 (Resolved): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 10:16 PM Bug #40476 (Pending Backport): cephfs-shell: cd with no args has no effect
- 10:14 PM Bug #40695 (Resolved): mds: rework PurgeQueue on_error handler to avoid mds_lock state check
- 10:12 PM Cleanup #40787 (Resolved): mds: reorg CInode header
- 10:11 PM Bug #40936 (Pending Backport): tools/cephfs: memory leak in cephfs/Resetter.cc
- 10:10 PM Bug #40967 (Pending Backport): qa: race in test_standby_replay_singleton_fail
- 07:47 PM Bug #40989 (Resolved): qa: RECENT_CRASH warning prevents wait_for_health_clear from completing
- ...
- 06:36 PM Bug #40965 (Fix Under Review): client: don't report ceph.* xattrs via listxattr
- 06:34 PM Bug #40927 (Fix Under Review): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph...
- 05:08 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- The MDS tracks opened files and capabilities in the MDS journal. That would explain the space usage in the metadata p...
- 09:37 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- After another bunch of simulations I called `cache drop` on the MDS via the admin socket. Pool usage *198.8MB* -> *2.8MB*.
...
- 06:45 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Setting aside the kernel client, I made some tests with Samba VFS - this is a Luminous client.
First, I just copied...
- 03:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- +10MBytes for last ~24H.
Actual pool's usage:
fs_data:
!fs_data_26.07.19.png!
fs_meta:
!fs_meta_26.07.19.png!
- 03:23 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Patrick, this is definitely Luminous 12.2.12. My actual question is why metadata usage (bytes) is growing and new obj...
- 04:06 PM Feature #40986: cephfs qos: implement cephfs qos base on tokenbucket algorighm
- I think there are two kinds of design:
1. all clients use the same QoS setting, just as the implementation
in this...
- 03:58 PM Feature #40986 (Fix Under Review): cephfs qos: implement cephfs qos base on tokenbucket algorighm
- The basic idea is as follows:
Set QoS info as one of the dir's xattrs;
All clients that can access the same dirs ...
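A minimal token-bucket sketch of the idea: a per-client limiter whose rate and burst would come from the directory's QoS xattr. All numbers and names here are illustrative.
```python
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = float(rate), float(burst)  # tokens/sec, capacity
        self.tokens, self.last = self.burst, time.monotonic()

    def consume(self, n=1):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True   # request may proceed
        return False      # over the limit: caller should wait and retry

limiter = TokenBucket(rate=1000, burst=2000)  # e.g. 1000 IOPS with bursts to 2000
while not limiter.consume():
    time.sleep(0.001)
```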
- 10:04 AM Feature #18537: libcephfs cache invalidation upcalls
- Jeff Layton wrote:
> I looked at this, but I think the real solution to this problem is to just prevent ganesha from...
- 06:15 AM Backport #40445 (In Progress): nautilus: mds: MDCache::cow_inode does not cleanup unneeded client...
- https://github.com/ceph/ceph/pull/29344
- 06:12 AM Backport #40443 (In Progress): nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH whe...
- https://github.com/ceph/ceph/pull/29343
- 04:49 AM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
- ACK
- 02:42 AM Support #40906: Full CephFS causes hang when accessing inode.
- after deleting the omap key, use the scrub_path asok command to repair the parent directory
- 02:06 AM Support #40906: Full CephFS causes hang when accessing inode.
- Is deleting the RADOS object for the inode going to cause more problems with the MDS because they get out of sync?
07/25/2019
- 11:12 PM Feature #40617 (Pending Backport): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- 11:11 PM Cleanup #40866 (Resolved): mds: reorg Capability header
- 11:09 PM Bug #40836 (Pending Backport): cephfs-shell: flake8 blank line and indentation error
- 11:06 PM Cleanup #40742 (Resolved): mds: reorg CDir header
- 10:18 PM Bug #40968 (Fix Under Review): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stal...
- 10:15 PM Bug #40968 (Resolved): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_write_...
- /ceph/teuthology-archive/pdonnell-2019-07-25_06:25:06-fs-wip-pdonnell-testing-20190725.023305-distro-basic-smithi/414...
- 09:43 PM Bug #40967 (Fix Under Review): qa: race in test_standby_replay_singleton_fail
- 09:39 PM Bug #40967 (Resolved): qa: race in test_standby_replay_singleton_fail
- ...
- 07:51 PM Bug #40965 (Resolved): client: don't report ceph.* xattrs via listxattr
- The convention with filesystems that have virtual xattrs is to not report them via listxattr(). A lot of archiving to...
- 06:53 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- I see now that you're using 12.2.12? That wouldn't be the cause then I guess. The PR which backports the relevant cha...
- 06:48 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Did you just upgrade your cluster to Nautilus? The data usage stats changed recently so that omaps in the metadata po...
- 09:45 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Found another cluster with this Ceph version. Data usage 10x more, but meta is not much.
Metadata pool in this clust...
- 09:37 AM Bug #40951 (Rejected): (think this is bug) Why CephFS metadata pool usage is only growing?
- Can't find a clue: single MDS fs without actual usage, but metadata is only growing (screenshot attached). MDS with ...
- 06:46 PM Bug #40939 (Fix Under Review): mds: map client_caps been inserted by mistake
- 08:52 AM Bug #40939 (Resolved): mds: map client_caps been inserted by mistake
- SnapRealm.h: SnapRealm::remove_cap()
if the map client_caps does not have key client, 'client_caps[client]' will insert key cl...
- 06:18 PM Bug #40936 (Fix Under Review): tools/cephfs: memory leak in cephfs/Resetter.cc
- 06:24 AM Bug #40936: tools/cephfs: memory leak in cephfs/Resetter.cc
- function -- Resetter::_write_reset_event(Journaler *journaler)
- 06:22 AM Bug #40936 (Resolved): tools/cephfs: memory leak in cephfs/Resetter.cc
- there is a memory leak in cephfs/Resetter.cc:
LogEvent *le = new EResetJournal;
it forgets to use 'delete' to release ...
- 06:02 PM Bug #40932 (Need More Info): ceph-fuse: rm a dir failed, print "non-empty" error
- does that work?
- 03:25 AM Bug #40932: ceph-fuse: rm a dir failed, print "non-empty" error
- use scrub_path asok command to fix it
- 02:10 AM Bug #40932 (Need More Info): ceph-fuse: rm a dir failed, print "non-empty" error
- #rm -rf dir1
print "non-empty" error, but "ls dir1" is empty. - 05:53 PM Bug #40877 (Fix Under Review): client: client should return EIO when it's unsafe reqs have been d...
- 05:51 PM Bug #40960 (Fix Under Review): client: failed to drop dn and release caps causing mds stary stack...
- 02:32 PM Bug #40960 (Resolved): client: failed to drop dn and release caps causing mds stary stacking.
- when a client gets notification from the MDS that a file has been deleted (via
getting CEPH_CAP_LINK_SHARED cap for inode wi...
- 02:47 PM Bug #40927: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is run as non-...
- Patrick Donnelly wrote:
> Also need a ticket to allow setting the uid/gid of the subvolume (Feature).
Done. http:...
- 02:43 PM Bug #40171 (Resolved): mds: reset heartbeat during long-running loops in recovery
- 02:43 PM Backport #40222 (Resolved): mimic: mds: reset heartbeat during long-running loops in recovery
- 02:37 PM Backport #40222: mimic: mds: reset heartbeat during long-running loops in recovery
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28918
merged
- 02:42 PM Backport #40875 (Resolved): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- 02:37 PM Backport #40875: mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29187
merged
- 02:41 PM Backport #39685 (Resolved): mimic: ceph-fuse: client hang because its bad session PipeConnection ...
- 02:36 PM Backport #39685: mimic: ceph-fuse: client hang because its bad session PipeConnection to mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29200
merged
- 02:40 PM Backport #38099 (Resolved): mimic: mds: remove cache drop admin socket command
- 02:35 PM Backport #38099: mimic: mds: remove cache drop admin socket command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29210
merged
- 02:38 PM Bug #38270 (Resolved): kcephfs TestClientLimits.test_client_pin fails with "client caps fell belo...
- 02:38 PM Backport #38687 (Resolved): mimic: kcephfs TestClientLimits.test_client_pin fails with "client ca...
- 02:34 PM Backport #38687: mimic: kcephfs TestClientLimits.test_client_pin fails with "client caps fell bel...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29211
merged
- 02:19 PM Feature #40959 (Resolved): mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
- ... by passing optional arguments --uid and --gid to `ceph fs subvolume/subvolume group create` commands.
- 01:41 PM Documentation #40957 (Resolved): doc: add section to manpage for recover_session= option
- Now that the recover_session= mount option has been added to the testing branch in the kernel, we need to update the ...
- 08:56 AM Backport #40944 (Resolved): nautilus: mgr: failover during in qa testing causes unresponsive clie...
- https://github.com/ceph/ceph/pull/29649
- 03:14 AM Support #40934 (New): can't get connection to external cephfs frmo kubernetes pod
- I need to get clear step by step instructions on how to setup cephfs provisioning to external cephfs cluster without ...
07/24/2019
- 11:19 PM Bug #40931 (Rejected): Can't connect to my kubernetes pod
- I'm not sure I have enough context to help here. Also, this question should begin on ceph-users, not here.
- 11:11 PM Bug #40931 (Rejected): Can't connect to my kubernetes pod
- New version is not allowed to connect to my external to kubernetes ceph cluster. It is OK if connecting to rdbms, but...
- 06:59 PM Feature #40929 (Resolved): pybind/mgr/mds_autoscaler: create mgr plugin to deploy and configure M...
- Create a new mgr plugin that adds and removes MDSs in response to degraded file systems. The plugin should monitor ch...
- 06:23 PM Bug #40927: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is run as non-...
- Also need a ticket to allow setting the uid/gid of the subvolume (Feature).
- 04:36 PM Bug #40927 (Resolved): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is ...
- ...
- 04:55 PM Support #40906 (New): Full CephFS causes hang when accessing inode.
- Reopening this as I see now that Zheng asked you to create the ticket.
- 04:53 PM Support #40906 (Rejected): Full CephFS causes hang when accessing inode.
- This discussion should move to the ceph-users list: https://lists.ceph.io/postorius/lists/ceph-users.ceph.io/
- 01:48 AM Support #40906: Full CephFS causes hang when accessing inode.
- The log does not contain information about the bad inode.
To use rados to delete inode
1. find inode number o...
- 11:51 AM Bug #24868 (Resolved): doc: Incomplete Kernel mount debugging
- This is fixed by https://github.com/ceph/ceph/pull/28900
- 11:14 AM Bug #40864: cephfs-shell: rmdir doesn't complain when directory is not empty
- After this PR[https://github.com/ceph/ceph/pull/28514]
It does print an error message, but the message is not approp...
- 11:03 AM Bug #40863: cephfs-shell: rmdir with -p attempts to delete non-dir files as well
07/23/2019
- 04:14 PM Support #40906: Full CephFS causes hang when accessing inode.
- The complete logs were 81GB, so I just filtered on the client session. If you need some more complete logs, let me kn...
- 04:10 PM Support #40906 (New): Full CephFS causes hang when accessing inode.
- Our CephFS filled up and we are having trouble operating on a directory that has a file with a bad inode state. We wo...
- 03:57 PM Backport #40440 (In Progress): nautilus: mds: cannot switch mds state from standby-replay to active
- 03:53 PM Backport #40438 (In Progress): nautilus: getattr on snap inode stuck
- 03:51 PM Backport #40437 (In Progress): mimic: getattr on snap inode stuck
- 03:47 PM Backport #40218 (In Progress): luminous: TestMisc.test_evict_client fails
- 03:46 PM Backport #40219 (In Progress): mimic: TestMisc.test_evict_client fails
- 03:45 PM Backport #40163 (In Progress): luminous: mount: key parsing fail when doing a remount
- 03:44 PM Backport #40165 (In Progress): mimic: mount: key parsing fail when doing a remount
- 03:43 PM Backport #40162 (Need More Info): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "par...
- non-trivial
- 03:41 PM Backport #39223 (In Progress): mimic: mds: behind on trimming and "[dentry] was purgeable but no ...
- 03:36 PM Backport #39215 (In Progress): mimic: mds: there is an assertion when calling Beacon::shutdown()
- 03:25 PM Backport #39212 (In Progress): mimic: MDSTableServer.cc: 83: FAILED assert(version == tid)
- 03:22 PM Backport #39210 (In Progress): mimic: mds: mds_cap_revoke_eviction_timeout is not used to initial...
- 03:19 PM Backport #38875 (In Progress): mimic: mds: high debug logging with many subtrees is slow
- 03:15 PM Backport #38709 (In Progress): mimic: qa: kclient unmount hangs after file system goes down
- 02:07 PM Bug #40867 (Pending Backport): mgr: failover during in qa testing causes unresponsive client warn...
- 01:03 PM Bug #40867: mgr: failover during in qa testing causes unresponsive client warnings
- Patrick Donnelly wrote:
> Sage's whitelist PR: https://github.com/ceph/ceph/pull/29169
>
> As I said in issue des...
- 12:11 PM Backport #38687 (In Progress): mimic: kcephfs TestClientLimits.test_client_pin fails with "client...
- 12:09 PM Backport #38643 (Need More Info): mimic: fs: "log [WRN] : failed to reconnect caps for missing in...
- not trivial to backport because in master we have:...
- 12:04 PM Backport #38444 (Need More Info): mimic: mds: drop cache does not timeout as expected
- see luminous backport https://github.com/ceph/ceph/pull/27342
- 12:03 PM Backport #38350 (Need More Info): luminous: mds: decoded LogEvent may leak during shutdown
- extensive changes that do not apply cleanly - elevated risk
- 12:03 PM Backport #38349 (Need More Info): mimic: mds: decoded LogEvent may leak during shutdown
- extensive changes that do not apply cleanly - elevated risk
- 12:01 PM Backport #38339: mimic: mds: may leak gather during cache drop
- The luminous backport https://github.com/ceph/ceph/pull/27342 involved cherry-picking a large number of commits. Assi...
- 12:00 PM Backport #38339 (Need More Info): mimic: mds: may leak gather during cache drop
- non-trivial - the fix touches code that appears to have been refactored for nautilus
- 11:57 AM Backport #38099 (In Progress): mimic: mds: remove cache drop admin socket command
- 11:52 AM Backport #37906 (Need More Info): mimic: make cephfs-data-scan reconstruct snaptable
- non-trivial feature backport - assigning to dev lead for disposition
- 11:50 AM Backport #37761 (Need More Info): mimic: mds: deadlock when setting config value via admin socket
- The master PR has two commits. The first commit touches the following files:
src/common/config_obs_mgr.h
src/comm...
- 11:48 AM Backport #37637 (Need More Info): luminous: client: support getfattr ceph.dir.pin extended attribute
- non-trivial feature backport - assigning to dev lead for disposition
- 11:47 AM Backport #37636 (Need More Info): mimic: client: support getfattr ceph.dir.pin extended attribute
- non-trivial feature backport - assigning to dev lead for disposition
- 10:22 AM Backport #39685 (In Progress): mimic: ceph-fuse: client hang because its bad session PipeConnecti...
- 08:28 AM Backport #40900 (Resolved): nautilus: mds: only evict an unresponsive client when another client ...
- https://github.com/ceph/ceph/pull/30031
- 08:28 AM Backport #40899 (Resolved): mimic: mds: only evict an unresponsive client when another client wan...
- https://github.com/ceph/ceph/pull/30239
- 08:25 AM Backport #40898 (Rejected): nautilus: cephfs-shell: Error messages are printed to stdout
- 08:25 AM Backport #40897 (Resolved): nautilus: ceph_volume_client: fs_name must be converted to string bef...
- https://github.com/ceph/ceph/pull/30030
- 08:25 AM Backport #40896 (Rejected): mimic: ceph_volume_client: fs_name must be converted to string before...
- https://github.com/ceph/ceph/pull/30238
- 08:25 AM Backport #40895 (Resolved): nautilus: pybind: Add standard error message and fix print of path as...
- https://github.com/ceph/ceph/pull/30026
- 08:25 AM Backport #40894 (Resolved): nautilus: mds: cleanup truncating inodes when standby replay mds trim...
- https://github.com/ceph/ceph/pull/29591
- 08:24 AM Backport #40892 (Resolved): luminous: mds: cleanup truncating inodes when standby replay mds trim...
- https://github.com/ceph/ceph/pull/31286
- 08:23 AM Backport #40887 (Resolved): nautilus: ceph_volume_client: to_bytes converts NoneType object str
- https://github.com/ceph/ceph/pull/30030
- 08:23 AM Backport #40886 (Rejected): mimic: ceph_volume_client: to_bytes converts NoneType object str
- 02:41 AM Backport #40875 (In Progress): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- https://github.com/ceph/ceph/pull/29187
- 02:29 AM Bug #40877 (Resolved): client: client should return EIO when it's unsafe reqs have been dropped w...
- when the session is closed, the client will drop unsafe reqs and cannot confirm whether its requests had been complet...
- 02:21 AM Backport #40874 (In Progress): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- 02:20 AM Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- https://github.com/ceph/ceph/pull/29186
07/22/2019
- 11:42 PM Backport #40875 (Resolved): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- https://github.com/ceph/ceph/pull/29187
- 11:41 PM Backport #40874 (Resolved): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- https://github.com/ceph/ceph/pull/29186
- 11:34 PM Bug #40775 (Pending Backport): /src/include/xlist.h: 77: FAILED assert(_size == 0)
- 11:30 PM Bug #40477 (Pending Backport): mds: cleanup truncating inodes when standby replay mds trim log se...
- 11:29 PM Bug #40411 (Pending Backport): pybind: Add standard error message and fix print of path as byte o...
- 11:28 PM Bug #40800 (Pending Backport): ceph_volume_client: to_bytes converts NoneType object str
- 11:28 PM Bug #40369 (Pending Backport): ceph_volume_client: fs_name must be converted to string before usi...
- 11:26 PM Bug #40202 (Pending Backport): cephfs-shell: Error messages are printed to stdout
- 11:25 PM Feature #17854 (Pending Backport): mds: only evict an unresponsive client when another client wan...
- 09:10 PM Bug #40784 (Fix Under Review): mds: metadata changes may be lost when MDS is restarted
- 08:01 PM Bug #40873 (Duplicate): qa: expected MDS_CLIENT_LATE_RELEASE in tasks.cephfs.test_client_recovery...
- This needs to be whitelisted:
/ceph/teuthology-archive/pdonnell-2019-07-20_01:58:35-fs-wip-pdonnell-testing-20190719.231...
- 06:19 PM Bug #40867: mgr: failover during in qa testing causes unresponsive client warnings
- Sage's whitelist PR: https://github.com/ceph/ceph/pull/29169
As I said in issue description, I'd prefer if we clea...
- 03:26 PM Bug #40867 (Resolved): mgr: failover during in qa testing causes unresponsive client warnings
- ...
- 01:48 PM Bug #23262 (Resolved): kclient: nofail option not supported
- 01:48 PM Backport #39233 (Resolved): mimic: kclient: nofail option not supported
- 01:48 PM Bug #38832 (Resolved): mds: fail to resolve snapshot name contains '_'
- 01:48 PM Backport #39472 (Resolved): mimic: mds: fail to resolve snapshot name contains '_'
- 01:47 PM Bug #39645 (Resolved): mds: output lock state in format dump
- 01:47 PM Backport #39669 (Resolved): mimic: mds: output lock state in format dump
- 01:46 PM Feature #39403 (Resolved): pybind: add the lseek() function to pybind of cephfs
- 01:46 PM Backport #39679 (Resolved): mimic: pybind: add the lseek() function to pybind of cephfs
- 01:25 PM Cleanup #40866 (Fix Under Review): mds: reorg Capability header
- 01:21 PM Cleanup #40866 (Resolved): mds: reorg Capability header
- 12:54 PM Backport #39689 (Resolved): mimic: mds: error "No space left on device" when create a large numb...
- 12:22 PM Feature #24463: kclient: add btime support
- Patches merged for v5.3.
- 12:21 PM Feature #24463 (Resolved): kclient: add btime support
- 11:19 AM Backport #40168 (Resolved): mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" ...
- 11:18 AM Bug #40211 (Resolved): mds: fix corner case of replaying open sessions
- 11:18 AM Backport #40342 (Resolved): mimic: mds: fix corner case of replaying open sessions
- 11:14 AM Bug #40028 (Resolved): mds: avoid trimming too many log segments after mds failover
- 11:14 AM Backport #40042 (Resolved): mimic: avoid trimming too many log segments after mds failover
- 10:46 AM Bug #40864 (Resolved): cephfs-shell: rmdir doesn't complain when directory is not empty
- Passing rmdir a non-empty directory doesn't remove the directory, as expected. But not printing anything mislea...
- 10:40 AM Bug #40863 (Resolved): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
- Going through do_rmdir in cephfs-shell, I don't see any checks that stop the method from deleting non-directory files...
- 09:49 AM Backport #40845 (In Progress): nautilus: MDSMonitor: use stringstream instead of dout for mds rep...
- 08:21 AM Backport #40845 (Resolved): nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
- https://github.com/ceph/ceph/pull/29159
- 09:48 AM Backport #40843 (In Progress): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
- 08:20 AM Backport #40843 (Resolved): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
- https://github.com/ceph/ceph/pull/29158
- 09:47 AM Backport #40842 (In Progress): nautilus: ceph-fuse: mount does not support the fallocate()
- 08:20 AM Backport #40842 (Resolved): nautilus: ceph-fuse: mount does not support the fallocate()
- https://github.com/ceph/ceph/pull/29157
- 09:47 AM Backport #40839 (In Progress): nautilus: cephfs-shell: TypeError in poutput
- 08:20 AM Backport #40839 (Resolved): nautilus: cephfs-shell: TypeError in poutput
- https://github.com/ceph/ceph/pull/29156
- 09:20 AM Bug #40861 (Resolved): cephfs-shell: -p doesn't work for rmdir
- ...
- 09:18 AM Bug #40860 (Resolved): cephfs-shell: raises incorrect error when regfiles are passed to be deleted
- Steps to reproduce the bug -...
- 08:23 AM Backport #40858 (Rejected): luminous: ceph_volume_client: python program embedded in test_volume_...
- 08:23 AM Backport #40857 (Resolved): nautilus: ceph_volume_client: python program embedded in test_volume_...
- https://github.com/ceph/ceph/pull/30030
- 08:23 AM Backport #40856 (Rejected): mimic: ceph_volume_client: python program embedded in test_volume_cli...
- 08:23 AM Backport #40855 (Rejected): luminous: test_volume_client: test_put_object_versioned is unreliable
- 08:23 AM Backport #40854 (Resolved): nautilus: test_volume_client: test_put_object_versioned is unreliable
- https://github.com/ceph/ceph/pull/30030
- 08:22 AM Backport #40853 (Resolved): mimic: test_volume_client: test_put_object_versioned is unreliable
- https://github.com/ceph/ceph/pull/30236
- 08:21 AM Backport #40844 (Resolved): mimic: MDSMonitor: use stringstream instead of dout for mds repaired
- https://github.com/ceph/ceph/pull/30235
- 08:20 AM Backport #40841 (Resolved): mimic: ceph-fuse: mount does not support the fallocate()
- https://github.com/ceph/ceph/pull/30228
- 06:18 AM Bug #40836 (Fix Under Review): cephfs-shell: flake8 blank line and indentation error
- 06:06 AM Bug #40836 (Resolved): cephfs-shell: flake8 blank line and indentation error
- Following are the flake8 errors...
07/19/2019
07/18/2019
- 09:05 PM Bug #39405 (Pending Backport): ceph_volume_client: python program embedded in test_volume_client....
- 09:04 PM Bug #39510 (Pending Backport): test_volume_client: test_put_object_versioned is unreliable
- 06:06 PM Feature #40036 (Resolved): mgr / volumes: support asynchronous subvolume deletes
- 06:06 PM Backport #40796 (Resolved): nautilus: mgr / volumes: support asynchronous subvolume deletes
- 05:35 PM Bug #40821 (Resolved): osdc: objecter ops output does not have useful time information
- ...
- 01:22 PM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- kernel data structure for this...
- 07:52 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- "echo 2 > /proc/sys/vm/drop_caches" can release the reference and work around the issue....
- 06:16 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- Analyzing the log more, it seems to be an overflow in ll_ref.
From the log below, it is pretty clear the pattern is 2 _ll...
- 01:00 PM Feature #40811 (Fix Under Review): mds: add command that modify session metadata
- 07:34 AM Feature #40811 (Resolved): mds: add command that modify session metadata
- 10:52 AM Feature #40617 (Fix Under Review): mgr/volumes: Add `ceph fs subvolumegroup getpath` command