Activity
From 09/02/2020 to 10/01/2020
10/01/2020
- 08:08 PM Bug #47642 (Resolved): nautilus: qa/suites/{kcephfs, multimds}: client kernel "testing" builds fo...
- 05:05 PM Bug #47689: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unresponsive c...
- /a/teuthology-2020-10-01_07:01:02-rados-master-distro-basic-smithi/5485885
- 03:31 PM Bug #43762: pybind/mgr/volumes: create fails with TypeError
- Jos Collin wrote:
> Victoria Martinez de la Cruz wrote:
> > Adding more context to this
> >
> > This happened af...
- 06:36 AM Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending p...
- Patrick Donnelly wrote:
> Sounds good. Please write up a PR for this Xiubo.
Sure, will do.
- 02:17 AM Bug #43902: qa: mon_thrash: timeout "ceph quorum_status"
- /ceph/teuthology-archive/pdonnell-2020-09-29_05:23:34-fs-wip-pdonnell-testing-20200929.022151-distro-basic-smithi/547...
09/30/2020
- 09:18 PM Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending p...
- Sounds good. Please write up a PR for this Xiubo.
- 01:42 AM Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending p...
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > @Patrick,
> >
> > Maybe the MDS shouldn't report the WRN to monito...
- 09:10 PM Bug #47307: mds: throttle workloads which acquire caps faster than the client can release
- Dan van der Ster wrote:
> Are you sure that the defaults for recalling aren't overly conservative?
Yes, the proba...
- 05:54 PM Bug #47689: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unresponsive c...
- /a/teuthology-2020-09-30_07:01:02-rados-master-distro-basic-smithi/5483508/
- 03:41 PM Fix #46645 (Resolved): librados|libcephfs: use latest MonMap when creating from CephContext
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:02 PM Bug #47698: mds crashed in try_remove_dentries_for_stray after touching file in strange directory
- b1 was no longer there after we followed the recover_dentries procedure, so it is gone.
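For reference, a minimal sketch of the recover_dentries procedure referred to above, assuming the standard disaster-recovery tooling (fs name and rank are placeholders; run it only against an offline rank):
    cephfs-journal-tool --rank=<fsname>:0 event recover_dentries summary
    cephfs-journal-tool --rank=<fsname>:0 journal reset    # only if the journal itself is damaged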
- 02:43 PM Bug #47698: mds crashed in try_remove_dentries_for_stray after touching file in strange directory
- try deleting 'd1' using 'rados rmomapkey'. If you have debug_mds=10, it should be easy to get d1's parent dirfrag (co...
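A sketch of that suggestion, assuming the usual CephFS metadata layout (pool name, dirfrag object, and omap key below are placeholders; dentry keys are normally the file name plus a "_head" suffix):
    rados -p cephfs_metadata listomapkeys 10000000000.00000000
    rados -p cephfs_metadata rmomapkey 10000000000.00000000 d1_head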
- 01:21 PM Bug #47698: mds crashed in try_remove_dentries_for_stray after touching file in strange directory
- Here is the `b1` dir at the start of this issue:...
- 01:19 PM Bug #47698: mds crashed in try_remove_dentries_for_stray after touching file in strange directory
- After finishing the following, the MDS started:...
- 01:03 PM Bug #47698 (New): mds crashed in try_remove_dentries_for_stray after touching file in strange dir...
- We had a directory "b1" which appeared empty but could not be rmdir'd.
The directory also had a very large size, als...
- 07:27 AM Bug #47693 (In Progress): qa: snap replicator tests
- 07:24 AM Bug #47693 (Rejected): qa: snap replicator tests
- add tests for snap replicator component
requires PR#36276
- 07:11 AM Backport #46479 (Resolved): octopus: mds: send scrub status to ceph-mgr only when scrub is runnin...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36047
m...
09/29/2020
- 09:46 PM Backport #46479: octopus: mds: send scrub status to ceph-mgr only when scrub is running (or pause...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36047
merged
- 08:08 PM Bug #47689 (Resolved): rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unr...
- ...
- 07:20 PM Backport #47605 (In Progress): nautilus: mds: purge_queue's _calculate_ops is inaccurate
- 05:05 PM Backport #47020 (In Progress): nautilus: client: shutdown race fails with status 141
- 04:56 PM Backport #46960 (In Progress): nautilus: cephfs-journal-tool: incorrect read_offset after finding...
- 02:17 PM Bug #47307: mds: throttle workloads which acquire caps faster than the client can release
- Are you sure that the defaults for recalling aren't overly conservative?
Today debugging a situation with 2 heavy ...
- 02:08 PM Bug #47307 (In Progress): mds: throttle workloads which acquire caps faster than the client can r...
- 02:10 PM Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending p...
- Xiubo Li wrote:
> @Patrick,
>
> Maybe the MDS shouldn't report the WRN to monitor when revoking the "Fwbl" caps ?...
- 08:58 AM Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending p...
- @Patrick,
Maybe the MDS shouldn't report the WRN to monitor when revoking the "Fwbl" caps ? Since it may need to f...
- 08:21 AM Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending p...
- While flushing the 0x200000007d5 inode, there are also many other inodes doing the flush on the same osd.6 at the same...
- 03:21 AM Bug #47565: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending p...
- From 5451587/remote/smithi110/log/ceph-client.1.30354.log.gz:
We can see that the client.4606 has received the rev...
- 02:45 AM Bug #47565 (In Progress): qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x2000000...
- 02:07 PM Bug #47682: MDS can't release caps faster than clients taking caps
- Dan, see: #47307
- 01:51 PM Bug #47682 (Rejected): MDS can't release caps faster than clients taking caps
- with more effective tuning I think we can manage. cancelling this ticket.
- 10:23 AM Bug #47682: MDS can't release caps faster than clients taking caps
- Our current config is:
mds_recall_global_max_decay_threshold 200000
mds_recall_max_decay_threshold 100000
mds_re...
- 10:10 AM Bug #47682: MDS can't release caps faster than clients taking caps
- Update:
* the central cache freelist eventually decreases after an hour or so.
* I suppose the bigger issue is tha...
- 08:06 AM Bug #47682 (Rejected): MDS can't release caps faster than clients taking caps
- We have a workload in which a kernel client is stat'ing all files in an FS. This workload triggered a few issues:
...
09/28/2020
- 07:30 PM Backport #47014 (Resolved): octopus: librados|libcephfs: use latest MonMap when creating from Cep...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36705
m...
- 07:30 PM Backport #47013 (Resolved): nautilus: librados|libcephfs: use latest MonMap when creating from Ce...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36704
m...
- 02:51 PM Backport #47013: nautilus: librados|libcephfs: use latest MonMap when creating from CephContext
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36704
merged
- 07:29 PM Bug #47563: qa: kernel client closes session improperly causing eviction due to timeout
- I have a patch I'm testing now that seems to also anecdotally fix some of the umount hangs I've seen lately during xf...
- 05:50 PM Bug #47563: qa: kernel client closes session improperly causing eviction due to timeout
- Patrick Donnelly wrote:
>
> I _think_ the concern is that the client could conceivably dirty the cap the MDS just ...
- 05:39 PM Bug #47563: qa: kernel client closes session improperly causing eviction due to timeout
- Jeff Layton wrote:
> Doesn't look like libcephfs does anything saner:
>
> [...]
>
> ...and it looks like the t...
- 05:07 PM Bug #47563: qa: kernel client closes session improperly causing eviction due to timeout
- Doesn't look like libcephfs does anything saner:...
- 04:51 PM Bug #47563: qa: kernel client closes session improperly causing eviction due to timeout
- Jeff Layton wrote:
> Hmm, ok. This may be related to another bug I've been chasing where umount hangs waiting for th...
- 04:39 PM Bug #47563 (In Progress): qa: kernel client closes session improperly causing eviction due to tim...
- 04:34 PM Bug #47563: qa: kernel client closes session improperly causing eviction due to timeout
- Hmm, ok. This may be related to another bug I've been chasing where umount hangs waiting for the session to close. I ...
- 06:50 PM Bug #47006 (Resolved): mon: required client features adding/removing
- 06:49 PM Feature #47148 (Resolved): mds: get rid of the mds_lock when storing the inode backtrace to meta ...
- 06:47 PM Tasks #47047 (Resolved): client: release the client_lock before copying data in all the reads
- 06:47 PM Bug #47039 (Resolved): client: mutex lock FAILED ceph_assert(nlock > 0)
- 06:42 PM Bug #47679 (New): kceph: kernel does not open session with MDS importing subtree
- ...
- 06:24 PM Bug #47678: mgr: include/interval_set.h: 466: ceph_abort_msg("abort() called")
- https://pulpito.ceph.com/teuthology-2020-09-21_04:15:02-multimds-master-distro-basic-smithi/5454314/
Seems to be a...
- 06:17 PM Bug #47678 (New): mgr: include/interval_set.h: 466: ceph_abort_msg("abort() called")
- ...
- 06:11 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Another: /ceph/teuthology-archive/pdonnell-2020-09-26_05:47:56-fs-wip-pdonnell-testing-20200926.000836-distro-basic-s...
- 04:33 PM Feature #47034: mds: readdir for snapshot diff
- Hey Zheng,
CephFS snapshot mirror would make use of the rctime approach. That needs PR https://github.com/ceph/ceph/pu...
- 03:03 PM Bug #47642 (Fix Under Review): nautilus: qa/suites/{kcephfs, multimds}: client kernel "testing" b...
- 01:40 PM Bug #47662 (Fix Under Review): mds: try to replicate hot dir to restarted MDS
09/27/2020
- 10:59 PM Backport #47014: octopus: librados|libcephfs: use latest MonMap when creating from CephContext
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36705
merged
- 10:41 AM Bug #47662 (Resolved): mds: try to replicate hot dir to restarted MDS
- A hot dir would be replicated to other active MDSes, but if the replica MDS restarted, the auth MDS won't replicate this dir ag...
09/25/2020
- 04:40 PM Feature #15070 (Resolved): mon: client: multifs: auth caps on client->mon connections to limit th...
- 02:59 PM Bug #47652: teuthology's misc.sudo_write_file is incompatible with vstart_runner
- > The compatibility was broken by this teuthology PR, since it makes
"this teuthology PR": https://github.com/cep... - 02:58 PM Bug #47652 (Fix Under Review): teuthology's misc.sudo_write_file is incompatible with vstart_runner
- 02:41 PM Bug #47652 (Resolved): teuthology's misc.sudo_write_file is incompatible with vstart_runner
- Here's the traceback -...
- 02:32 PM Feature #46059: vstart_runner.py: optionally rotate logs between tests
- Got some time to work on this finally. Fixed the PR after some scrutiny, ceph API tests pass for this PR now.
- 08:57 AM Backport #47622 (In Progress): nautilus: various quota failures
- 08:43 AM Bug #47643: mds: Segmentation fault in thread 7fcff3078700 thread_name:md_log_replay
- Patrick Donnelly wrote:
> > #x 0x5628d800
>
> I'm not sure this double-deref is indicating anything. Are you sure...
- 12:22 AM Cleanup #47325 (Resolved): client: remove unnecessary client_lock for objecter->write()
- 12:20 AM Bug #40613 (New): kclient: .handle_message_footer got old message 1 <= 648 0x558ceadeaac0 client_...
- This one is back:...
09/24/2020
- 07:33 PM Bug #46823 (Resolved): nautilus: kceph w/ testing branch: mdsc_handle_session corrupt message mds...
- Fixed upstream.
- 07:17 PM Backport #47622 (Need More Info): nautilus: various quota failures
- 07:16 PM Backport #47623 (In Progress): octopus: various quota failures
- 05:29 PM Bug #47643 (Need More Info): mds: Segmentation fault in thread 7fcff3078700 thread_name:md_log_re...
- > #x 0x5628d800
I'm not sure this double-deref is indicating anything. Are you sure that's a pointer? Would you no...
- 04:43 PM Bug #47643 (Need More Info): mds: Segmentation fault in thread 7fcff3078700 thread_name:md_log_re...
- In ceph-14.2.11.394+g9cbbc473c0 (downstream build but mds sources are the same as v14.2.11) we got a report about the...
- 04:33 PM Bug #47642 (Resolved): nautilus: qa/suites/{kcephfs, multimds}: client kernel "testing" builds fo...
- As described in https://tracker.ceph.com/issues/47540, kernel "testing" builds for CentOS 7 are unavailable. This is ...
- 11:31 AM Bug #47591 (Can't reproduce): TestNFS: test_exports_on_mgr_restart: command failed with status 32...
- The mount command does not fail with latest builds: http://pulpito.front.sepia.ceph.com/varsha-2020-09-24_10:49:55-ra...
- 07:29 AM Bug #46769: qa: Refactor cephfs creation/removal code.
- Based on comment https://github.com/ceph/ceph/pull/36368#pullrequestreview-458486627, retaining the behavior of clean...
- 03:44 AM Backport #47608 (In Progress): octopus: mds: OpenFileTable::prefetch_inodes during rejoin can cau...
- https://github.com/ceph/ceph/pull/37383
- 03:43 AM Backport #47609 (In Progress): nautilus: mds: OpenFileTable::prefetch_inodes during rejoin can ca...
- https://github.com/ceph/ceph/pull/37382
09/23/2020
- 07:06 PM Bug #45835: mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of-memory
- Dan van der Ster wrote:
> The fix was merged. Something needed to start the backports process?
@Dan, the "backpor...
- 07:05 PM Bug #46583 (Resolved): mds slave request 'no_available_op_found'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:04 PM Backport #47623 (Resolved): octopus: various quota failures
- https://github.com/ceph/ceph/pull/37369
- 07:04 PM Backport #47622 (Resolved): nautilus: various quota failures
- https://github.com/ceph/ceph/pull/37231
- 03:56 PM Backport #46790 (Rejected): nautilus: mds slave request 'no_available_op_found'
- This isn't really necessary for backport.
- 01:31 PM Backport #46790 (Need More Info): nautilus: mds slave request 'no_available_op_found'
- non-trivial conflicts
@Patrick, could you help find the right assignee for this?
- 03:56 PM Backport #46789 (Rejected): octopus: mds slave request 'no_available_op_found'
- This isn't really necessary for backport.
- 01:30 PM Backport #46789 (Need More Info): octopus: mds slave request 'no_available_op_found'
- non-trivial conflicts
@Patrick, could you help find the right assignee for this?
- 03:06 PM Bug #47224 (Pending Backport): various quota failures
- 01:22 PM Backport #47608 (Need More Info): octopus: mds: OpenFileTable::prefetch_inodes during rejoin can ...
- extensive changeset with non-trivial conflicts
- 11:17 AM Backport #47608 (Resolved): octopus: mds: OpenFileTable::prefetch_inodes during rejoin can cause ...
- https://github.com/ceph/ceph/pull/37383
- 01:19 PM Backport #47604 (In Progress): octopus: mds: purge_queue's _calculate_ops is inaccurate
- 11:15 AM Backport #47604 (Resolved): octopus: mds: purge_queue's _calculate_ops is inaccurate
- https://github.com/ceph/ceph/pull/37372
- 01:12 PM Backport #47601 (In Progress): octopus: mgr/nfs: Cluster creation throws 'NoneType' object has no...
- 11:14 AM Backport #47601 (Resolved): octopus: mgr/nfs: Cluster creation throws 'NoneType' object has no at...
- https://github.com/ceph/ceph/pull/37371
- 01:09 PM Backport #47260 (In Progress): octopus: client: FAILED assert(dir->readdir_cache[dirp->cache_inde...
- 01:08 PM Backport #47255 (In Progress): octopus: client: Client::open() pass wrong cap mask to path_walk
- 01:02 PM Backport #47253 (In Progress): octopus: mds: fix possible crash when the MDS is stopping
- 01:02 PM Backport #47247 (In Progress): octopus: qa: Replacing daemon mds.a as rank 0 with standby daemon ...
- 12:53 PM Backport #47151 (In Progress): octopus: pybind/mgr/volumes: add debugging for global lock
- 12:52 PM Backport #47147 (In Progress): octopus: pybind/mgr/nfs: Test mounting of exports created with nfs...
- 12:51 PM Backport #47095 (Need More Info): octopus: mds: provide alternatives to increase the total cephfs...
- non-trivial feature
- 12:50 PM Backport #47089 (In Progress): octopus: After restarting an mds, its standby-replay mds remained i...
- 12:49 PM Backport #47085 (In Progress): octopus: common: validate type CephBool cause 'invalid command json'
- 12:30 PM Backport #47083 (In Progress): octopus: mds: 'forward loop' when forward_all_requests_to_auth is set
- 12:25 PM Feature #47266 (Closed): add a subcommand to change caps in a simpler and clear way
- 12:13 PM Bug #47006 (Fix Under Review): mon: required client features adding/removing
- 12:07 PM Backport #47021 (In Progress): octopus: client: shutdown race fails with status 141
- 12:06 PM Backport #47018 (In Progress): octopus: mds: kcephfs parse dirfrag's ndist is always 0
- 12:06 PM Backport #47016 (In Progress): octopus: mds: fix the decode version
- 12:05 PM Backport #46942 (In Progress): octopus: mds: segv in MDCache::wait_for_uncommitted_fragments
- 12:05 PM Backport #46940 (In Progress): octopus: mds: memory leak during cache drop
- 12:02 PM Backport #46859 (In Progress): octopus: mds: do not raise "client failing to respond to cap relea...
- 12:01 PM Backport #46857 (In Progress): octopus: qa: add debugging for volumes plugin use of libcephfs
- 12:01 PM Backport #46855 (In Progress): octopus: client: static dirent for readdir is not thread-safe
- 11:59 AM Backport #46463 (In Progress): octopus: mgr/volumes: fs subvolume clones stuck in progress when l...
- 11:54 AM Backport #46094 (Need More Info): octopus: cephfs-shell: set proper return value for the tool
- non-trivial conflicts
- 11:18 AM Bug #44408 (Resolved): qa: after the cephfs qa test case quit the mountpoints still exist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:17 AM Backport #47609 (Rejected): nautilus: mds: OpenFileTable::prefetch_inodes during rejoin can cause...
- https://github.com/ceph/ceph/pull/37382
- 11:17 AM Bug #46269 (Resolved): ceph-fuse: ceph-fuse process is terminated by the logrotate task and what ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:15 AM Backport #47605 (Resolved): nautilus: mds: purge_queue's _calculate_ops is inaccurate
- https://github.com/ceph/ceph/pull/37481
- 11:10 AM Backport #47087 (In Progress): octopus: mds: recover files after normal session close
- 11:02 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Hi Jeff,
Have finished code in MDS, and for now I didn't handle the lookup version case. All the version related ...
- 01:24 AM Feature #47162 (Fix Under Review): mds: handle encrypted filenames in the MDS for fscrypt
- 08:21 AM Backport #47178 (Resolved): nautilus: qa: after the cephfs qa test case quit the mountpoints stil...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36863
m... - 08:19 AM Backport #47152 (Resolved): nautilus: pybind/mgr/volumes: add debugging for global lock
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36828
m... - 08:19 AM Backport #46948 (Resolved): nautilus: qa: Fs cleanup fails with a traceback
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36714
m... - 08:19 AM Backport #46592 (Resolved): nautilus: ceph-fuse: ceph-fuse process is terminated by the logratote...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36181
m...
09/22/2020
- 10:27 PM Bug #47591 (Resolved): TestNFS: test_exports_on_mgr_restart: command failed with status 32: 'sudo...
- a/mgfritch-2020-09-21_20:24:35-rados:cephadm-wip-mgfritch-testing-2020-09-21-1034-distro-basic-smithi/5457554/teuthol...
- 08:09 PM Backport #47095: octopus: mds: provide alternatives to increase the total cephfs subvolume snapsh...
- https://tracker.ceph.com/issues/47158 depends on the backport for this issue.
A simple cherry pick is throwing con...
- 08:05 PM Backport #47158: octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve...
- Also depends on the backport of https://tracker.ceph.com/issues/47095
- 07:49 PM Backport #47178: nautilus: qa: after the cephfs qa test case quit the mountpoints still exist
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36863
merged
- 06:15 PM Bug #47582: MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- Thank you for the provided information.
I will test the MDS failover in a day. Quick question regarding "mds_log_m...
- 04:14 PM Bug #47582: MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- Heilig IOS wrote:
> Still no changes. The "mds_log_max_segments" didn't help. The MDS failover is running for 30 min...
- 04:13 PM Bug #47582 (Rejected): MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- (This discussion should move to ceph-users.)
- 02:40 PM Bug #47582: MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- Still no changes. The "mds_log_max_segments" didn't help. The MDS failover is running for 30 minutes already. What el...
- 02:04 PM Bug #47582: MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- I decreased it with these commands:...
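The exact commands are elided above; lowering the setting typically looks like the following (not necessarily what was run here; 128 is the shipped default, and the daemon name is a placeholder):
    ceph config set mds mds_log_max_segments 128
    ceph config show mds.<name> | grep mds_log_max_segments    # verify the running value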
- 01:52 PM Bug #47582: MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- Heilig IOS wrote:
> Current value: mds_log_max_segments = 100000
That's the root cause. The value should be small...
- 01:47 PM Bug #47582: MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- Current value: mds_log_max_segments = 100000
- 01:34 PM Bug #47582: MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- what is the value of "mds log max segments" config
- 01:24 PM Bug #47582: MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- I have this issue right now. No, there is no "mds behind on trim" warning.
- 01:11 PM Bug #47582: MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- were there "mds behind on trim" warning
- 12:16 PM Bug #47582 (Rejected): MDS failover takes 10-15 hours: Ceph MDS stays in "up:replay" state for hours
- We have 9 nodes Ceph cluster. Ceph version is 15.2.5. The cluster has 175 OSD (HDD) + 3 NVMe for cache tier for "ceph...
- 04:07 PM Backport #47254: nautilus: client: Client::open() pass wrong cap mask to path_walk
- regression: https://tracker.ceph.com/issues/47224
- 04:07 PM Backport #47255: octopus: client: Client::open() pass wrong cap mask to path_walk
- regression: https://tracker.ceph.com/issues/47224
- 03:21 PM Feature #47490: Integration of dashboard with volume/nfs module
- Volume/nfs module doc: https://docs.ceph.com/docs/master/cephfs/fs-nfs-exports
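For context, a rough sketch of that interface as documented around this time (cluster and fs names are placeholders; the argument list has varied across releases, so treat this as an assumption and check the linked doc):
    ceph nfs cluster create cephfs mynfs
    ceph nfs export create cephfs myfs mynfs /cephfs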
- 03:02 PM Feature #47490 (In Progress): Integration of dashboard with volume/nfs module
- 09:35 AM Feature #47490: Integration of dashboard with volume/nfs module
- Exports and nfs clusters cannot be managed by dashboard and volumes/nfs interface at the same time. Xattrs can be use...
- 02:55 PM Feature #47587 (In Progress): pybind/mgr/nfs: add Rook support
- 02:10 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Jeff Layton wrote:
> Xiubo Li wrote:
> > Hi Jeff,
> >
> > There is another case for lookup:
> >
> > If the MD...
- 12:02 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Xiubo Li wrote:
> Hi Jeff,
>
> There is another case for lookup:
>
> If the MDS is old version, such as all th... - 11:52 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
I think the MDS should treat these names as opaque. The client should never need to look up a dentry by the binary cr...
- 02:35 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- 02:35 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Hi Jeff,
There is another case for lookup:
If the MDS is old version, such as all the dentries is under `ceph_f...
- 12:34 PM Bug #47224 (Fix Under Review): various quota failures
09/21/2020
- 09:53 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- All right, I'm going to shove some more debug information in Objecter and Monitor.
- 12:40 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Xiubo Li wrote:
> Patrick Donnelly wrote:
> > Xiubo Li wrote:
> > > Hi Patrick,
> > >
> > > For this let's add ...
- 09:04 PM Bug #47526 (Resolved): qa: RuntimeError: FSCID 2 not in map
- 09:02 PM Bug #36389: untar encounters unexpected EPERM on kclient/multimds cluster with thrashing
- ...
- 08:32 PM Bug #47565 (Resolved): qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d...
- ...
- 07:47 PM Bug #47563 (Resolved): qa: kernel client closes session improperly causing eviction due to timeout
- ...
- 05:08 PM Bug #45835 (Pending Backport): mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of...
- 03:47 PM Bug #45835: mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of-memory
- The fix was merged. Something needed to start the backports process?
- 03:21 PM Backport #47152: nautilus: pybind/mgr/volumes: add debugging for global lock
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36828
merged - 03:20 PM Backport #46948: nautilus: qa: Fs cleanup fails with a traceback
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36714
merged - 03:20 PM Backport #46592: nautilus: ceph-fuse: ceph-fuse process is terminated by the logratote task and w...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36181
merged
- 01:51 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Jeff Layton wrote:
> Xiubo Li wrote:
> > Ceph has its own base64 encode/decode logic already in src/common/armor.c,...
- 01:42 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Xiubo Li wrote:
> Ceph has its own base64 encode/decode logic already in src/common/armor.c, which is the same with ...
- 01:39 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- I am planning to append a `fscrypt.alternate_name : ${raw_ciphertext}` pair to the xattr map when doing the create d...
- 04:07 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Ceph has its own base64 encode/decode logic already in src/common/armor.c, which is the same as what the kernel does.
09/20/2020
- 11:01 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Jeff Layton wrote:
> Xiubo Li wrote:
> >
> > Yeah, this looks good.
> >
> > BTW, what the alternat_name will s...
09/19/2020
- 05:56 PM Bug #47389: ceph fs volume create fails to create pool
- Hi Joshua,
I tried on the master and it works for me. The HEAD was at 240c46a75a44cb9363cf994cb264e9d7048c98a1 dat...
- 12:29 AM Bug #47512 (Pending Backport): mgr/nfs: Cluster creation throws 'NoneType' object has no attribut...
- 12:27 AM Bug #47423 (Resolved): volume rm throws Permission denied error
- 12:24 AM Bug #47353 (Pending Backport): mds: purge_queue's _calculate_ops is inaccurate
09/18/2020
- 11:26 PM Bug #47518 (Resolved): qa: spawn MDS daemons before creating file system
- 11:22 PM Backport #47249 (In Progress): octopus: mon: deleting a CephFS and its pools causes MONs to crash
- 11:19 PM Backport #47248 (In Progress): nautilus: mon: deleting a CephFS and its pools causes MONs to crash
- 04:43 PM Bug #47499: Simultaneous MDS and OSD crashes when answering to client
- It just happened again on a different MDS with a different client and I found something in common. In all the crashes...
- 04:12 PM Bug #47526 (Fix Under Review): qa: RuntimeError: FSCID 2 not in map
- 02:02 AM Bug #47526 (Resolved): qa: RuntimeError: FSCID 2 not in map
- ...
- 03:42 PM Backport #46786 (In Progress): octopus: client: in _open() the open ref maybe decreased twice, bu...
- 03:41 PM Backport #46783 (In Progress): octopus: mds/CInode: Optimize only pinned by subtrees check
- 03:29 PM Backport #46637 (In Progress): octopus: mds: optimize ephemeral rand pin
- 02:56 PM Backport #46636 (In Progress): octopus: mds: null pointer dereference in MDCache::finish_rollback
- 02:53 PM Backport #46634 (In Progress): octopus: mds forwarding request 'no_available_op_found'
- 11:55 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Xiubo Li wrote:
>
> Yeah, this looks good.
>
> BTW, what the alternat_name will store ? The full ciphertext bin... - 11:04 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Zheng Yan wrote:
> Jeff Layton wrote:
[...]
> > I think that approach will give us the most flexibility going forw...
- 10:39 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Jeff Layton wrote:
> Xiubo Li wrote:
> >
> > Yeah, right.
> >
> > If the master key is absent, for the ->looku...
- 11:04 AM Backport #47259 (In Progress): nautilus: client: FAILED assert(dir->readdir_cache[dirp->cache_ind...
- 10:59 AM Backport #47254 (In Progress): nautilus: client: Client::open() pass wrong cap mask to path_walk
- 10:52 AM Backport #47252 (In Progress): nautilus: mds: fix possible crash when the MDS is stopping
- 10:49 AM Backport #47246 (In Progress): nautilus: qa: Replacing daemon mds.a as rank 0 with standby daemon...
- 01:29 AM Bug #47444 (Resolved): crash in FSMap::parse_role
09/17/2020
- 04:27 PM Bug #47518 (Fix Under Review): qa: spawn MDS daemons before creating file system
- 04:01 PM Bug #47518 (Resolved): qa: spawn MDS daemons before creating file system
- ...
- 03:52 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Jeff Layton wrote:
> Probably something like the last one. I think we're best off avoiding any logic that requires t... - 03:36 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Xiubo Li wrote:
>
> Yeah, right.
>
> If the master key is absent, for the ->lookup() the client will tell MDS t... - 05:19 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Xiubo Li wrote:
> Jeff Layton wrote:
> > Xiubo Li wrote:
[...]
> Yeah, since the long name case is rare, and ...
- 05:09 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Jeff Layton wrote:
> Xiubo Li wrote:
> > Hi Jeff,
> >
> > One question:
> >
> > Currently the ext4 will just ...
- 12:44 PM Bug #47515: pybind/snap_schedule: deactivating a schedule is ineffective
- (formatting fix)...
- 12:43 PM Bug #47515 (Resolved): pybind/snap_schedule: deactivating a schedule is ineffective
- Deactivating a snap schedule does not have any effect on the schedule. Scheduled snapshots still get created by the s...
- 10:51 AM Bug #47512 (Fix Under Review): mgr/nfs: Cluster creation throws 'NoneType' object has no attribut...
- 10:45 AM Bug #47512 (Resolved): mgr/nfs: Cluster creation throws 'NoneType' object has no attribute 'repla...
- ...
09/16/2020
- 02:02 PM Bug #47499 (New): Simultaneous MDS and OSD crashes when answering to client
- We observed 4 MDSes and 2 OSDs segfaulting simultaneously when answering to one client. All the six tracebacks report...
- 12:27 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Xiubo Li wrote:...
- 11:25 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Xiubo Li wrote:
>
> With this approach there seems no need to covert the ciphertext to base64-encode text when s... - 11:16 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Xiubo Li wrote:
> Hi Jeff,
>
> One question:
>
> Currently the ext4 will just store the ciphertext as the fina... - 08:10 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- For the 2nd approach, suggested by Zheng, more detail in my mind is:
If we will store both the "base64-encoded-pla...
- 06:13 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Hi Jeff,
One question:
Currently the ext4 will just store the ciphertext as the final filename to the disk, and...
- 02:22 AM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- From the source code, the encoded filename length will be roughly increased to 4/3 of the original filename....
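The 4/3 figure is plain base64 arithmetic: every 3 input bytes become 4 output bytes, so an n-byte name encodes to 4*ceil(n/3) bytes, and any name of roughly 190 bytes or more no longer fits within NAME_MAX (255). A quick check:
    printf '%s' 0123456789abcdef | base64 | tr -d '\n' | wc -c    # prints 24: 16 bytes -> 4*ceil(16/3)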
- 12:05 PM Bug #47423 (Fix Under Review): volume rm throws Permission denied error
- 07:05 AM Feature #47490 (Pending Backport): Integration of dashboard with volume/nfs module
- Currently, there are two ways to create exports with mgr/volume/nfs module and
dashboard. Both use the same code[1]... - 03:33 AM Backport #47090 (In Progress): nautilus: After restarting an mds, its standy-replay mds remained ...
- 03:30 AM Backport #47088 (In Progress): nautilus: mds: recover files after normal session close
- 03:27 AM Backport #47084 (Need More Info): nautilus: mds: 'forward loop' when forward_all_requests_to_auth...
- Zheng, the backport for this is non-trivial. Can you take a look?
- 03:25 AM Backport #47017 (In Progress): nautilus: mds: kcephfs parse dirfrag's ndist is always 0
- 02:46 AM Bug #47488: Apparent deadlock in tasks.mgr.dashboard.test_cephfs.CephfsTest.test_snapshots
- To progress this further we really need more/better logs. Created https://github.com/ceph/ceph/pull/37176 to assist i...
- 02:40 AM Bug #47488 (New): Apparent deadlock in tasks.mgr.dashboard.test_cephfs.CephfsTest.test_snapshots
- /a/yuriw-2020-09-02_17:33:04-rados-wip-yuri-master-baseline-9.2.2020-distro-basic-smithi/5400010...
- 01:49 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > Hi Patrick,
> >
> > For this let's add more debug logs to check wh...
09/15/2020
- 07:32 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Xiubo Li wrote:
> Hi Patrick,
>
> For this let's add more debug logs to check where it is stuck?
>
> I w... - 01:09 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Hi Patrick,
For this let's add more debug logs to check where it is stuck?
I went through the client_loc... - 12:54 PM Bug #47423 (In Progress): volume rm throws Permissioned denied error
- 11:22 AM Feature #47277: implement new mount "device" syntax for kcephfs
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick Donnelly wrote:
> > > Venky Shankar wrote:
> > > > Pat...
09/14/2020
- 10:00 PM Feature #47277: implement new mount "device" syntax for kcephfs
- There are other alternates too, fwiw (e.g.):
name@fs#/path
...or maybe just omit the ':' or anything to rep...
- 08:52 PM Feature #47277: implement new mount "device" syntax for kcephfs
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Venky Shankar wrote:
> > > Patrick Donnelly wrote:
> > > > Jef... - 12:33 PM Feature #47277: implement new mount "device" syntax for kcephfs
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick Donnelly wrote:
> > > Jeff Layton wrote:
> > > > Propo...
- 09:00 PM Bug #47423: volume rm throws Permission denied error
- Rishabh Dave wrote:
> Unlike @volume rm@, @fs fail@ does not fail -
>
> [...]
>
> @volume rm@ too runs @fs fai... - 03:12 PM Bug #47423: volume rm throws Permissioned denied error
- The issue with ticket assignee was because my page wasn't refreshed before hitting submit button.
- 03:11 PM Bug #47423: volume rm throws Permissioned denied error
- Unlike @volume rm@, @fs fail@ does not fail -...
- 02:39 PM Bug #47423: volume rm throws Permissioned denied error
- ...
- 12:41 PM Bug #47423: volume rm throws Permissioned denied error
- From what I see on master in my local repo, this issue (getting @Permissioned denied@ on @volume rm@) is not just lim...
- 08:51 AM Bug #47423: volume rm throws Permissioned denied error
- Kefu Chai wrote:
> i suspect that it is https://github.com/ceph/ceph/pull/32581 which broke `test_cluster_set_reset_... - 12:42 AM Bug #47423: volume rm throws Permissioned denied error
- i suspect that it is https://github.com/ceph/ceph/pull/32581 which broke `test_cluster_set_reset_user_config` in `tas...
- 08:30 PM Documentation #47449 (New): doc: complete ec pool configuration section with an example
- https://docs.ceph.com/docs/master/cephfs/createfs/#using-erasure-coded-pools-with-cephfs
The section should provid...
- 07:41 PM Bug #47444 (Fix Under Review): crash in FSMap::parse_role
- 07:19 PM Bug #47444 (In Progress): crash in FSMap::parse_role
- 05:19 PM Bug #47444 (Resolved): crash in FSMap::parse_role
- ...
- 01:34 PM Backport #47200 (In Progress): octopus: scheduled cephfs snapshots (via ceph manager)
09/13/2020
- 06:02 PM Bug #47423 (Triaged): volume rm throws Permission denied error
- 05:59 PM Bug #47389 (Triaged): ceph fs volume create fails to create pool
- 05:55 PM Feature #47277: implement new mount "device" syntax for kcephfs
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Jeff Layton wrote:
> > > Proposed syntax looks wrong in the des...
- 05:48 PM Bug #47379 (Rejected): mds: mark no warn on killed request
- PR was rejected
- 05:47 PM Bug #47353 (Fix Under Review): mds: purge_queue's _calculate_ops is inaccurate
09/12/2020
- 07:22 PM Backport #47248 (Need More Info): nautilus: mon: deleting a CephFS and its pools causes MONs to c...
- non-trivial because it depends on a series of changes to qa/tasks/cephfs/mount.py that have not been backported
- 07:22 PM Backport #47249 (Need More Info): octopus: mon: deleting a CephFS and its pools causes MONs to crash
- non-trivial because it depends on a series of changes to qa/tasks/cephfs/mount.py that have not been backported
- 06:55 PM Backport #47086 (Need More Info): nautilus: common: validate type CephBool cause 'invalid command...
- must be backported together with the fix for #47179
- 06:55 PM Backport #47085 (Need More Info): octopus: common: validate type CephBool cause 'invalid command ...
- must be backported together with the fix for #47179
- 11:54 AM Bug #47423 (Resolved): volume rm throws Permission denied error
- ...
09/11/2020
- 10:39 PM Bug #46985: common: validate type CephBool cause 'invalid command json'
- https://github.com/ceph/ceph/pull/37098 fixes a bug in https://github.com/ceph/ceph/pull/36459 and needs backport too.
- 03:03 AM Bug #46985: common: validate type CephBool cause 'invalid command json'
- This change causes the failure seen in #47179. Could we either revert it or modify it so it reinstates the old behavi...
09/10/2020
09/09/2020
- 05:59 PM Feature #47277: implement new mount "device" syntax for kcephfs
- One idea might be to just get rid of the ':' ?
name@fsname[.fscid]/path
...but that fsname/path looks like ...
- 01:40 PM Feature #47277: implement new mount "device" syntax for kcephfs
- Venky Shankar wrote:
>
> The "=" is a bit offputting. FWIW, mount helper tries to resolve (parse host/IP:port + ge... - 12:26 PM Feature #47162 (In Progress): mds: handle encrypted filenames in the MDS for fscrypt
- 10:57 AM Bug #47379 (Rejected): mds: mark no warn on killed request
- It is unnecessary to report slow requests for killed ones; otherwise they cause continuous false alarms.
09/08/2020
- 09:30 PM Bug #47367 (New): mgr/volumes: volumes plugin does not ensure passed in subvolume name does not h...
- The volumes plugin does not check and ensure that a subvolume name is passed as the parameter to calls that require t...
- 07:10 PM Feature #40401 (Fix Under Review): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume a...
- 01:53 PM Feature #47161 (Rejected): mds: add dedicated field to inode for fscrypt context
- Fair enough then. I'll keep working with this as an xattr for now. Let's go ahead and close this out then, and I'll r...
- 06:19 AM Bug #47353 (Resolved): mds: purge_queue's _calculate_ops is inaccurate
- ...
- 04:45 AM Bug #47268 (Resolved): pybind/snap_schedule: scheduled snapshots get pruned just after creation
09/07/2020
- 08:31 PM Backport #47317 (In Progress): nautilus: mds: CDir::_omap_commit(int): Assertion `committed_versi...
- 08:25 PM Backport #47316 (In Progress): octopus: mds: CDir::_omap_commit(int): Assertion `committed_versio...
- 08:25 PM Backport #46520 (In Progress): octopus: mds: deleting a large number of files in a directory caus...
- 08:28 AM Backport #46520: octopus: mds: deleting a large number of files in a directory causes the file sy...
- sorry, I made a mistake.
reset state to need more info.
- 08:27 AM Backport #46520 (Need More Info): octopus: mds: deleting a large number of files in a directory c...
- 08:22 AM Backport #46520 (In Progress): octopus: mds: deleting a large number of files in a directory caus...
- 10:20 AM Feature #47277: implement new mount "device" syntax for kcephfs
- Patrick Donnelly wrote:
> Jeff Layton wrote:
> > Proposed syntax looks wrong in the description. I meant this:
> >...
- 10:01 AM Backport #46524 (In Progress): octopus: non-head batch requests may hold authpins and locks
- 08:31 AM Backport #46522 (In Progress): octopus: mds: fix hang issue when accessing a file under a lost pa...
- 08:12 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- It was neither blocked by the "client_lock", nor by the RWRef's lock, because both kept working well:
For the ti...
- 08:02 AM Backport #46516 (In Progress): octopus: client: directory inode can not call release_callback
- 03:28 AM Cleanup #47160 (In Progress): qa/tasks/cephfs: Break up test_volumes.py
09/06/2020
- 10:21 AM Backport #47157: nautilus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improv...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36833
m...
- 10:21 AM Backport #46796: nautilus: mds: Subvolume snapshot directory does not save attribute "ceph.quota....
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36404
m...
- 09:54 AM Cleanup #47325 (Fix Under Review): client: remove unnecessary client_lock for objecter->write()
- 09:35 AM Cleanup #47325 (Resolved): client: remove unnecessary client_lock for objecter->write()
- 09:23 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > From /ceph/teuthology-archive/pdonnell-2020-09-03_02:04:14-fs-wip-pdo...
09/05/2020
- 09:10 PM Backport #47317 (Resolved): nautilus: mds: CDir::_omap_commit(int): Assertion `committed_version ...
- https://github.com/ceph/ceph/pull/37035
- 09:10 PM Backport #47316 (Resolved): octopus: mds: CDir::_omap_commit(int): Assertion `committed_version =...
- https://github.com/ceph/ceph/pull/37034
09/04/2020
- 09:08 PM Feature #40401 (In Progress): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and su...
- 06:59 PM Bug #47293 (Resolved): client: osdmap wait not protected by mounted mutex
- 02:54 AM Bug #47293 (Fix Under Review): client: osdmap wait not protected by mounted mutex
- 06:54 PM Bug #47307 (Triaged): mds: throttle workloads which acquire caps faster than the client can release
- 06:28 PM Bug #47307 (Resolved): mds: throttle workloads which acquire caps faster than the client can release
- A trivial "find" command on a large directory hierarchy will cause the client to receive caps significantly faster th...
- 05:53 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Xiubo Li wrote:
> From /ceph/teuthology-archive/pdonnell-2020-09-03_02:04:14-fs-wip-pdonnell-testing-20200903.000442... - 10:13 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- From /ceph/teuthology-archive/pdonnell-2020-09-03_02:04:14-fs-wip-pdonnell-testing-20200903.000442-distro-basic-smith...
- 08:07 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Set the "ceph.dir.subvolume" won't fetch the osdmap, only for the pool related xattrs....
- 03:23 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Actually, I think it's more likely the hang is in
>
> https://github.com/ceph/ceph/blob...
- 05:43 PM Bug #46882 (Resolved): client: mount abort hangs: [volumes INFO mgr_util] aborting connection fro...
- I don't think this issue exists in Octopus or Nautilus? I think this is fallout from Xiubo's work on breaking the cli...
- 05:41 PM Bug #46905 (Resolved): client: cluster [WRN] evicting unresponsive client smithi122:0 (34373), af...
- 05:29 PM Feature #47102 (Resolved): mds: add perf counter for cap messages
- 01:15 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Will start it next week.
- 10:26 AM Feature #47277: implement new mount "device" syntax for kcephfs
- I will start taking a look next week
- 06:14 AM Feature #47266: add a subcommand to change caps in a simpler and clear way
- Closing this ticket based on conversation with Patrick.
09/03/2020
- 11:18 PM Bug #47293 (In Progress): client: osdmap wait not protected by mounted mutex
- 06:12 PM Bug #47293 (Resolved): client: osdmap wait not protected by mounted mutex
- https://github.com/ceph/ceph/blob/master/src/client/Client.cc#L11619
Accessing the client members before acquiring...
- 06:37 PM Bug #47201 (Pending Backport): mds: CDir::_omap_commit(int): Assertion `committed_version == 0' f...
- 06:34 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Xiubo, I'd also suggest adding debugging entry/exit points for these methods. (If you're feeling motivated, debugging...
- 06:32 PM Bug #47294 (Triaged): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Actually, I think it's more likely the hang is in
https://github.com/ceph/ceph/blob/e4a37f6338cf39e76228492897c1f2...
- 06:29 PM Bug #47294 (Resolved): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- ...
- 05:48 PM Bug #47292 (In Progress): cephfs-shell: test_df_for_valid_file failure
- ...
- 05:35 PM Feature #47277: implement new mount "device" syntax for kcephfs
- Jeff Layton wrote:
> Proposed syntax looks wrong in the description. I meant this:
>
> [...]
>
> Note that if ...
- 04:45 PM Bug #42688 (Triaged): Standard CephFS caps do not allow certain dot files to be written
- 04:35 PM Cleanup #46802 (In Progress): mds: do not use asserts for RADOS failures
- 11:31 AM Bug #47268 (Fix Under Review): pybind/snap_schedule: scheduled snapshots get pruned just after cr...
- 09:31 AM Backport #46473 (In Progress): octopus: mds: make threshold for MDS_TRIM warning configurable
- 07:49 AM Backport #46943 (In Progress): nautilus: mds: segv in MDCache::wait_for_uncommitted_fragments
- 07:45 AM Backport #46941 (In Progress): nautilus: mds: memory leak during cache drop
- 07:38 AM Backport #46787 (In Progress): nautilus: client: in _open() the open ref maybe decreased twice, b...
- 07:35 AM Backport #46784 (In Progress): nautilus: mds/CInode: Optimize only pinned by subtrees check
- 07:26 AM Backport #46633 (In Progress): nautilus: mds forwarding request 'no_available_op_found'
09/02/2020
- 05:04 PM Feature #47277: implement new mount "device" syntax for kcephfs
- Proposed syntax looks wrong in the description. I meant this:...
- 05:02 PM Feature #47277 (Resolved): implement new mount "device" syntax for kcephfs
- Currently, a mount has to pass in a device string like this:
mon_addr1,mon_addr2:/path
It's problematic for...
- 04:41 PM Bug #47276: MDSMonitor: add command to rename file systems
- I think we should also rethink allowing "." in file system names. Jeff is about to open a ticket to change the mount ...
- 04:40 PM Bug #47276 (Resolved): MDSMonitor: add command to rename file systems
- We've added character restrictions on file system names but there's no mechanism for fixing a legacy file system name...
- 10:43 AM Bug #47268 (Resolved): pybind/snap_schedule: scheduled snapshots get pruned just after creation
- Sample run link with PR https://github.com/ceph/ceph/pull/34552: https://pulpito.ceph.com/vshankar-2020-09-02_08:40:2...
- 09:20 AM Feature #47266 (Closed): add a subcommand to change caps in a simpler and clear way
- I am not sure if there's a better way to do it but AFAIS changing permission flag or path within the cap isn't very c...
- 08:50 AM Feature #47264 (Resolved): "fs authorize" subcommand should work for multiple FSs too
- Currently assigning caps for a second FS to an already existing client (which holds caps for a different FS already) ...
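For example (fs and client names are placeholders): the first command works today, while the second, per this ticket, does not simply append caps for the new FS to the existing client:
    ceph fs authorize cephfs_a client.foo / rw    # grants client.foo caps for cephfs_a
    ceph fs authorize cephfs_b client.foo / rw    # should add cephfs_b caps, but currently does not work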
- 05:57 AM Feature #47148: mds: get rid of the mds_lock when storing the inode backtrace to meta pool
- Currently this will queue some of the encoding, except the encodings which need to access the CDir/CInode members in the finish...
- 05:54 AM Feature #47148 (Fix Under Review): mds: get rid of the mds_lock when storing the inode backtrace ...
- 05:09 AM Backport #47260 (Resolved): octopus: client: FAILED assert(dir->readdir_cache[dirp->cache_index] ...
- https://github.com/ceph/ceph/pull/37370
- 05:09 AM Backport #47259 (Resolved): nautilus: client: FAILED assert(dir->readdir_cache[dirp->cache_index]...
- https://github.com/ceph/ceph/pull/37232
- 05:05 AM Backport #47255 (Resolved): octopus: client: Client::open() pass wrong cap mask to path_walk
- https://github.com/ceph/ceph/pull/37369
- 05:05 AM Backport #47254 (Resolved): nautilus: client: Client::open() pass wrong cap mask to path_walk
- https://github.com/ceph/ceph/pull/37231
- 05:05 AM Backport #47253 (Resolved): octopus: mds: fix possible crash when the MDS is stopping
- https://github.com/ceph/ceph/pull/37368
- 05:05 AM Backport #47252 (Resolved): nautilus: mds: fix possible crash when the MDS is stopping
- https://github.com/ceph/ceph/pull/37229
- 05:04 AM Backport #47249 (Resolved): octopus: mon: deleting a CephFS and its pools causes MONs to crash
- https://github.com/ceph/ceph/pull/37256
- 05:04 AM Backport #47248 (Rejected): nautilus: mon: deleting a CephFS and its pools causes MONs to crash
- https://github.com/ceph/ceph/pull/37255
- 05:04 AM Backport #47247 (Resolved): octopus: qa: Replacing daemon mds.a as rank 0 with standby daemon mds...
- https://github.com/ceph/ceph/pull/37367
- 05:04 AM Backport #47246 (Resolved): nautilus: qa: Replacing daemon mds.a as rank 0 with standby daemon md...
- https://github.com/ceph/ceph/pull/37228
- 03:58 AM Backport #47158: octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve...
- Shyamsundar Ranganathan wrote:
> Conflicts (and also depends) with backports in https://github.com/ceph/ceph/pull/36...
- 03:56 PM Backport #47158 (Need More Info): octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume v...
- Changing status to reflect that issue is waiting for an external event.