Activity
From 08/13/2020 to 09/11/2020
09/11/2020
- 10:39 PM Bug #46985: common: validate type CephBool cause 'invalid command json'
- https://github.com/ceph/ceph/pull/37098 fixes a bug in https://github.com/ceph/ceph/pull/36459 and needs a backport too.
- 03:03 AM Bug #46985: common: validate type CephBool cause 'invalid command json'
- This change causes the failure seen in #47179. Could we either revert it or modify it so it reinstates the old behavi...
09/10/2020
09/09/2020
- 05:59 PM Feature #47277: implement new mount "device" syntax for kcephfs
- One idea might be to just get rid of the ':' ?
name@fsname[.fscid]/path
...but that fsname/path looks like ...
- 01:40 PM Feature #47277: implement new mount "device" syntax for kcephfs
- Venky Shankar wrote:
>
> The "=" is a bit offputting. FWIW, mount helper tries to resolve (parse host/IP:port + ge...
- 12:26 PM Feature #47162 (In Progress): mds: handle encrypted filenames in the MDS for fscrypt
- 10:57 AM Bug #47379 (Rejected): mds: mark no warn on killed request
- It is unnecessary to report slow requests for killed ones; otherwise they cause continuous false alarms.
09/08/2020
- 09:30 PM Bug #47367 (New): mgr/volumes: volumes plugin does not ensure passed in subvolume name does not h...
- The volumes plugin does not validate that the subvolume name passed as the parameter to calls that require t...
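A hedged sketch of the kind of name validation the ticket asks for; the function name and the accepted character set here are illustrative guesses, not the actual mgr/volumes code:

```python
import re

# Illustrative only: the real mgr/volumes validation rules may differ.
# Reject anything that is not a bare path component, so a caller cannot
# smuggle "grp/subvol" or "../escape" where a plain name is expected.
_NAME_RE = re.compile(r"^[A-Za-z0-9_][A-Za-z0-9_.-]*$")

def is_valid_subvolume_name(name: str) -> bool:
    """Return True only for a single component with no '/' or traversal."""
    return bool(_NAME_RE.match(name)) and name not in (".", "..")
```

For example, `is_valid_subvolume_name("sv1")` passes while `is_valid_subvolume_name("a/b")` is rejected.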
- 07:10 PM Feature #40401 (Fix Under Review): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume a...
- 01:53 PM Feature #47161 (Rejected): mds: add dedicated field to inode for fscrypt context
- Fair enough then. I'll keep working with this as an xattr for now. Let's go ahead and close this out then, and I'll r...
- 06:19 AM Bug #47353 (Resolved): mds: purge_queue's _calculate_ops is inaccurate
- ...
- 04:45 AM Bug #47268 (Resolved): pybind/snap_schedule: scheduled snapshots get pruned just after creation
09/07/2020
- 08:31 PM Backport #47317 (In Progress): nautilus: mds: CDir::_omap_commit(int): Assertion `committed_versi...
- 08:25 PM Backport #47316 (In Progress): octopus: mds: CDir::_omap_commit(int): Assertion `committed_versio...
- 08:25 PM Backport #46520 (In Progress): octopus: mds: deleting a large number of files in a directory caus...
- 08:28 AM Backport #46520: octopus: mds: deleting a large number of files in a directory causes the file sy...
- sorry, I made a mistake.
reset state to need more info.
- 08:27 AM Backport #46520 (Need More Info): octopus: mds: deleting a large number of files in a directory c...
- 08:22 AM Backport #46520 (In Progress): octopus: mds: deleting a large number of files in a directory caus...
- 10:20 AM Feature #47277: implement new mount "device" syntax for kcephfs
- Patrick Donnelly wrote:
> Jeff Layton wrote:
> > Proposed syntax looks wrong in the description. I meant this:
> >... - 10:01 AM Backport #46524 (In Progress): octopus: non-head batch requests may hold authpins and locks
- 08:31 AM Backport #46522 (In Progress): octopus: mds: fix hang issue when accessing a file under a lost pa...
- 08:12 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- It was blocked neither by the "client_lock" nor by the RWRef's lock, because both kept working well:
For the ti...
- 08:02 AM Backport #46516 (In Progress): octopus: client: directory inode can not call release_callback
- 03:28 AM Cleanup #47160 (In Progress): qa/tasks/cephfs: Break up test_volumes.py
09/06/2020
- 10:21 AM Backport #47157: nautilus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improv...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36833
m...
- 10:21 AM Backport #46796: nautilus: mds: Subvolume snapshot directory does not save attribute "ceph.quota....
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36404
m...
- 09:54 AM Cleanup #47325 (Fix Under Review): client: remove unnecessary client_lock for objecter->write()
- 09:35 AM Cleanup #47325 (Resolved): client: remove unnecessary client_lock for objecter->write()
- 09:23 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > From /ceph/teuthology-archive/pdonnell-2020-09-03_02:04:14-fs-wip-pdo...
09/05/2020
- 09:10 PM Backport #47317 (Resolved): nautilus: mds: CDir::_omap_commit(int): Assertion `committed_version ...
- https://github.com/ceph/ceph/pull/37035
- 09:10 PM Backport #47316 (Resolved): octopus: mds: CDir::_omap_commit(int): Assertion `committed_version =...
- https://github.com/ceph/ceph/pull/37034
09/04/2020
- 09:08 PM Feature #40401 (In Progress): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and su...
- 06:59 PM Bug #47293 (Resolved): client: osdmap wait not protected by mounted mutex
- 02:54 AM Bug #47293 (Fix Under Review): client: osdmap wait not protected by mounted mutex
- 06:54 PM Bug #47307 (Triaged): mds: throttle workloads which acquire caps faster than the client can release
- 06:28 PM Bug #47307 (Resolved): mds: throttle workloads which acquire caps faster than the client can release
- A trivial "find" command on a large directory hierarchy will cause the client to receive caps significantly faster th...
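The kind of throttle being proposed can be sketched as a simple token-bucket limiter; this is purely illustrative of the idea, and the class name, knobs, and defaults are not from the actual MDS implementation:

```python
import time

class CapAcquisitionThrottle:
    """Illustrative token bucket: allow at most `rate` cap grants per
    second, with bursts up to `burst`. Not the real MDS code."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def try_acquire(self, n: int = 1) -> bool:
        # Refill tokens for the time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False  # caller should defer the readdir/cap grant
```

With such a limiter the MDS could delay replies that would grant new caps once the client's acquisition rate outpaces its release rate.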
- 05:53 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Xiubo Li wrote:
> From /ceph/teuthology-archive/pdonnell-2020-09-03_02:04:14-fs-wip-pdonnell-testing-20200903.000442...
- 10:13 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- From /ceph/teuthology-archive/pdonnell-2020-09-03_02:04:14-fs-wip-pdonnell-testing-20200903.000442-distro-basic-smith...
- 08:07 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Setting "ceph.dir.subvolume" won't fetch the osdmap; that happens only for the pool-related xattrs....
- 03:23 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Actually, I think it's more likely the hang is in
>
> https://github.com/ceph/ceph/blob... - 05:43 PM Bug #46882 (Resolved): client: mount abort hangs: [volumes INFO mgr_util] aborting connection fro...
- I don't think this issue exists in Octopus or Nautilus? I think this is fallout from Xiubo's work on breaking the cli...
- 05:41 PM Bug #46905 (Resolved): client: cluster [WRN] evicting unresponsive client smithi122:0 (34373), af...
- 05:29 PM Feature #47102 (Resolved): mds: add perf counter for cap messages
- 01:15 PM Feature #47162: mds: handle encrypted filenames in the MDS for fscrypt
- Will start it next week.
- 10:26 AM Feature #47277: implement new mount "device" syntax for kcephfs
- I will start taking a look next week
- 06:14 AM Feature #47266: add a subcommand to change caps in a simpler and clear way
- Closing this ticket based on conversation with Patrick.
09/03/2020
- 11:18 PM Bug #47293 (In Progress): client: osdmap wait not protected by mounted mutex
- 06:12 PM Bug #47293 (Resolved): client: osdmap wait not protected by mounted mutex
- https://github.com/ceph/ceph/blob/master/src/client/Client.cc#L11619
Accessing the client members before acquiring...
- 06:37 PM Bug #47201 (Pending Backport): mds: CDir::_omap_commit(int): Assertion `committed_version == 0' f...
- 06:34 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Xiubo, I'd also suggest adding debugging entry/exit points for these methods. (If you're feeling motivated, debugging...
- 06:32 PM Bug #47294 (Triaged): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Actually, I think it's more likely the hang is in
https://github.com/ceph/ceph/blob/e4a37f6338cf39e76228492897c1f2...
- 06:29 PM Bug #47294 (Resolved): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- ...
- 05:48 PM Bug #47292 (In Progress): cephfs-shell: test_df_for_valid_file failure
- ...
- 05:35 PM Feature #47277: implement new mount "device" syntax for kcephfs
- Jeff Layton wrote:
> Proposed syntax looks wrong in the description. I meant this:
>
> [...]
>
> Note that if ...
- 04:45 PM Bug #42688 (Triaged): Standard CephFS caps do not allow certain dot files to be written
- 04:35 PM Cleanup #46802 (In Progress): mds: do not use asserts for RADOS failures
- 11:31 AM Bug #47268 (Fix Under Review): pybind/snap_schedule: scheduled snapshots get pruned just after cr...
- 09:31 AM Backport #46473 (In Progress): octopus: mds: make threshold for MDS_TRIM warning configurable
- 07:49 AM Backport #46943 (In Progress): nautilus: mds: segv in MDCache::wait_for_uncommitted_fragments
- 07:45 AM Backport #46941 (In Progress): nautilus: mds: memory leak during cache drop
- 07:38 AM Backport #46787 (In Progress): nautilus: client: in _open() the open ref maybe decreased twice, b...
- 07:35 AM Backport #46784 (In Progress): nautilus: mds/CInode: Optimize only pinned by subtrees check
- 07:26 AM Backport #46633 (In Progress): nautilus: mds forwarding request 'no_available_op_found'
09/02/2020
- 05:04 PM Feature #47277: implement new mount "device" syntax for kcephfs
- Proposed syntax looks wrong in the description. I meant this:...
- 05:02 PM Feature #47277 (Resolved): implement new mount "device" syntax for kcephfs
- Currently, a mount has to pass in a device string like this:
mon_addr1,mon_addr2:/path
It's problematic for...
- 04:41 PM Bug #47276: MDSMonitor: add command to rename file systems
- I think we should also rethink allowing "." in file system names. Jeff is about to open a ticket to change the mount ...
- 04:40 PM Bug #47276 (Resolved): MDSMonitor: add command to rename file systems
- We've added character restrictions on file system names but there's no mechanism for fixing a legacy file system name...
- 10:43 AM Bug #47268 (Resolved): pybind/snap_schedule: scheduled snapshots get pruned just after creation
- Sample run link with PR https://github.com/ceph/ceph/pull/34552: https://pulpito.ceph.com/vshankar-2020-09-02_08:40:2...
- 09:20 AM Feature #47266 (Closed): add a subcommand to change caps in a simpler and clear way
- I am not sure if there's a better way to do it, but AFAIS changing the permission flag or path within the cap isn't very c...
- 08:50 AM Feature #47264 (Resolved): "fs authorize" subcommand should work for multiple FSs too
- Currently assigning caps for a second FS to an already existing client (which holds caps for a different FS already) ...
- 05:57 AM Feature #47148: mds: get rid of the mds_lock when storing the inode backtrace to meta pool
- Currently this will queue some of the encoding, except the encodings which need to access the CDir/CInode members in the finish...
- 05:54 AM Feature #47148 (Fix Under Review): mds: get rid of the mds_lock when storing the inode backtrace ...
- 05:09 AM Backport #47260 (Resolved): octopus: client: FAILED assert(dir->readdir_cache[dirp->cache_index] ...
- https://github.com/ceph/ceph/pull/37370
- 05:09 AM Backport #47259 (Resolved): nautilus: client: FAILED assert(dir->readdir_cache[dirp->cache_index]...
- https://github.com/ceph/ceph/pull/37232
- 05:05 AM Backport #47255 (Resolved): octopus: client: Client::open() pass wrong cap mask to path_walk
- https://github.com/ceph/ceph/pull/37369
- 05:05 AM Backport #47254 (Resolved): nautilus: client: Client::open() pass wrong cap mask to path_walk
- https://github.com/ceph/ceph/pull/37231
- 05:05 AM Backport #47253 (Resolved): octopus: mds: fix possible crash when the MDS is stopping
- https://github.com/ceph/ceph/pull/37368
- 05:05 AM Backport #47252 (Resolved): nautilus: mds: fix possible crash when the MDS is stopping
- https://github.com/ceph/ceph/pull/37229
- 05:04 AM Backport #47249 (Resolved): octopus: mon: deleting a CephFS and its pools causes MONs to crash
- https://github.com/ceph/ceph/pull/37256
- 05:04 AM Backport #47248 (Rejected): nautilus: mon: deleting a CephFS and its pools causes MONs to crash
- https://github.com/ceph/ceph/pull/37255
- 05:04 AM Backport #47247 (Resolved): octopus: qa: Replacing daemon mds.a as rank 0 with standby daemon mds...
- https://github.com/ceph/ceph/pull/37367
- 05:04 AM Backport #47246 (Resolved): nautilus: qa: Replacing daemon mds.a as rank 0 with standby daemon md...
- https://github.com/ceph/ceph/pull/37228
- 03:58 AM Backport #47158: octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve...
- Shyamsundar Ranganathan wrote:
> Conflicts (and also depends) with backports in https://github.com/ceph/ceph/pull/36...
- 03:56 AM Backport #47158 (Need More Info): octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume v...
- Changing status to reflect that issue is waiting for an external event.
09/01/2020
- 01:53 PM Bug #47236 (New): Getting "Cannot send after transport endpoint shutdown" after changing subvolum...
- In an Ubuntu 20.04 environment with Ceph Nautilus (ceph version 14.2.11-99-gaf0268dc91 (af0268dc910f84b47655e83a83ca5...
08/31/2020
- 08:44 PM Bug #47224 (Resolved): various quota failures
- https://pulpito.ceph.com/pdonnell-2020-08-31_20:09:39-fs-master-distro-basic-smithi/
https://pulpito.ceph.com/pdon...
- 08:23 PM Bug #47202 (Pending Backport): qa: Replacing daemon mds.a as rank 0 with standby daemon mds.b" in...
- 08:22 PM Feature #47168 (Resolved): client: support getting ceph.dir.rsnaps vxattr
- 08:21 PM Bug #47125 (Pending Backport): mds: fix possible crash when the MDS is stopping
- 08:19 PM Bug #42365 (Pending Backport): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- 08:06 PM Bug #47182 (Pending Backport): mon: deleting a CephFS and its pools causes MONs to crash
- 11:58 AM Backport #47158: octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve...
- Conflicts (and also depends) with backports in https://github.com/ceph/ceph/pull/36803
Awaiting merge of the above...
- 11:30 AM Feature #3244: qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha CephFS...
- closed already via https://github.com/ceph/ceph/blob/master/qa/tasks/cephfs/test_nfs.py ?
- 09:15 AM Bug #47201 (Fix Under Review): mds: CDir::_omap_commit(int): Assertion `committed_version == 0' f...
08/30/2020
- 06:48 AM Bug #47201: mds: CDir::_omap_commit(int): Assertion `committed_version == 0' failed.
- Backporting note:
* octopus backport should be done together with #46273
* nautilus backport has no dependency an...
- 06:46 AM Backport #46520 (Need More Info): octopus: mds: deleting a large number of files in a directory c...
- setting "Need More Info" to prevent blind automated backport
- 05:45 AM Feature #47161: mds: add dedicated field to inode for fscrypt context
- Jeff Layton wrote:
> The prototype implementation uses an xattr, so I'm aware how that works, but there is more to t...
08/29/2020
- 12:49 AM Bug #47011 (Pending Backport): client: Client::open() pass wrong cap mask to path_walk
- 12:45 AM Bug #47202 (Fix Under Review): qa: Replacing daemon mds.a as rank 0 with standby daemon mds.b" in...
- 12:44 AM Bug #47202 (Resolved): qa: Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster...
- ...
- 12:21 AM Backport #46520: octopus: mds: deleting a large number of files in a directory causes the file sy...
- PR introduced a bug: https://tracker.ceph.com/issues/47201
- 12:20 AM Bug #47201 (Resolved): mds: CDir::_omap_commit(int): Assertion `committed_version == 0' failed.
- ...
08/28/2020
- 05:05 PM Bug #47182 (Fix Under Review): mon: deleting a CephFS and its pools causes MONs to crash
- 04:48 AM Bug #47182 (Resolved): mon: deleting a CephFS and its pools causes MONs to crash
- This is a clone of #41228. The bug is back in Octopus:
https://tracker.ceph.com/issues/41228#note-11
and in Nau...
- 02:42 PM Backport #47200 (Rejected): octopus: scheduled cephfs snapshots (via ceph manager)
- https://github.com/ceph/ceph/pull/37142
- 02:39 PM Bug #46278 (Resolved): mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:04 PM Backport #47058 (Resolved): nautilus: mgr/volumes: Clone operation uses source subvolume root dir...
- 01:04 PM Backport #47058: nautilus: mgr/volumes: Clone operation uses source subvolume root directory mode...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36744
Merged as part of PR https://github.com/ceph/ceph/...
- 10:38 AM Feature #47161: mds: add dedicated field to inode for fscrypt context
- The prototype implementation uses an xattr, so I'm aware how that works, but there is more to this than just setting ...
- 08:26 AM Feature #47161: mds: add dedicated field to inode for fscrypt context
- An xattr is the most suitable place, because the client can create a file and set the xattr at the same time. This is similar to s...
- 08:01 AM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- mds.node188-2 (rank 2) receives an MMDSCacheRejoin message from mds.node185-0 (rank 1), but mds.node188-2 is in the resolve sta...
- 07:42 AM Bug #41072 (Pending Backport): scheduled cephfs snapshots (via ceph manager)
- 01:30 AM Feature #47168: client: support getting ceph.dir.rsnaps vxattr
- v2: https://patchwork.kernel.org/patch/11742015/
08/27/2020
- 09:01 PM Backport #47178 (In Progress): nautilus: qa: after the cephfs qa test case quit the mountpoints s...
- 08:57 PM Backport #47178 (Resolved): nautilus: qa: after the cephfs qa test case quit the mountpoints stil...
- https://github.com/ceph/ceph/pull/36863
- 08:56 PM Bug #44408 (Pending Backport): qa: after the cephfs qa test case quit the mountpoints still exist
- 08:43 PM Backport #47157 (Resolved): nautilus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr...
- 07:32 PM Backport #46796 (Resolved): nautilus: mds: Subvolume snapshot directory does not save attribute "...
- 03:46 PM Bug #47172 (Resolved): mgr/nfs: Add support for RGW export
- Current interface for CephFS:
https://docs.ceph.com/en/latest/cephfs/fs-nfs-exports/
The "ceph nfs cluster crea...
- 03:42 PM Feature #45746: mgr/nfs: Add interface to update export
- Add an option to output export config in json format. Then user can use this json file to modify the existing export ...
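The suggested workflow, dump an export as JSON for the user to edit and then re-apply it, could look roughly like the round trip below; the export fields shown are a guessed shape, not the actual `ceph nfs export` schema:

```python
import json

# Hypothetical export config; the real mgr/nfs JSON fields may differ.
export = {
    "export_id": 1,
    "path": "/volumes/group/subvol",
    "pseudo": "/cephfs",
    "access_type": "RW",
}

# Dump it for the user to edit in a file...
text = json.dumps(export, indent=2)

# ...then load the edited file back and apply the changed fields.
edited = json.loads(text)
edited["access_type"] = "RO"  # the user's modification
```

The point of the JSON detour is that the same structure serves as both the output format and the input for updating an existing export.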
- 03:09 PM Bug #45745 (Rejected): mgr/nfs: Move enable pool to cephadm
- Because we need to create/enable the nfs-ganesha pool for all orchestrator backends, I think this should stay in the ...
- 01:51 PM Feature #47102 (Fix Under Review): mds: add perf counter for cap messages
- 10:56 AM Backport #47096: nautilus: mds: provide alternatives to increase the total cephfs subvolume snaps...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36804
m...
- 09:50 AM Feature #47168 (Resolved): client: support getting ceph.dir.rsnaps vxattr
- https://patchwork.kernel.org/patch/11740285/
08/26/2020
- 11:35 PM Backport #47096 (Resolved): nautilus: mds: provide alternatives to increase the total cephfs subv...
- 02:38 AM Backport #47096 (In Progress): nautilus: mds: provide alternatives to increase the total cephfs s...
- 09:23 PM Cleanup #47160: qa/tasks/cephfs: Break up test_volumes.py
- To be clear: this is as simple as breaking the volumes tests into separate classes in the same file. Then the yaml fr...
- 07:16 PM Cleanup #47160 (Resolved): qa/tasks/cephfs: Break up test_volumes.py
- test_volumes.py has become unwieldy, with a growing number of non-trivial tests for a growing set of features in mgr/...
- 07:49 PM Feature #47162 (Resolved): mds: handle encrypted filenames in the MDS for fscrypt
- Once you turn a filename into encrypted text, it can contain illegal and non-printable embedded characters. To mak...
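One common way to make arbitrary ciphertext safe as a directory entry is to encode it with a filename-safe alphabet. A minimal sketch, assuming a base64url-style scheme; this is not necessarily the encoding the MDS/fscrypt work settled on:

```python
import base64

def encode_encrypted_name(ciphertext: bytes) -> str:
    """Map raw ciphertext to a name containing no '/' and no NUL bytes.
    Illustrative only; the real fscrypt/MDS encoding may differ."""
    return base64.urlsafe_b64encode(ciphertext).rstrip(b"=").decode("ascii")

def decode_encrypted_name(name: str) -> bytes:
    """Reverse the encoding, restoring the stripped '=' padding."""
    pad = "=" * (-len(name) % 4)
    return base64.urlsafe_b64decode(name + pad)
```

The base64url alphabet uses `-` and `_` instead of `+` and `/`, which is what keeps the encoded name free of path separators.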
- 07:25 PM Feature #47161 (Rejected): mds: add dedicated field to inode for fscrypt context
- fscrypt requires that each encrypted inode contain an encryption context:
https://www.kernel.org/doc/html/late... - 06:44 PM Backport #47157 (In Progress): nautilus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxa...
- 05:48 PM Backport #47157 (Resolved): nautilus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr...
- https://github.com/ceph/ceph/pull/36833
- 05:48 PM Backport #47158 (Resolved): octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr,...
- https://github.com/ceph/ceph/pull/38612
- 05:47 PM Bug #47154 (Pending Backport): mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to im...
- 05:30 PM Bug #47154 (Resolved): mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxattr, to improve sn...
- Fix for tracker https://tracker.ceph.com/issues/46074 introduces the vxattr ceph.dir.subvolume that can be used to mar...
- 03:31 PM Backport #47152 (In Progress): nautilus: pybind/mgr/volumes: add debugging for global lock
- 03:28 PM Backport #47152 (Resolved): nautilus: pybind/mgr/volumes: add debugging for global lock
- https://github.com/ceph/ceph/pull/36828
- 03:28 PM Backport #47151 (Resolved): octopus: pybind/mgr/volumes: add debugging for global lock
- https://github.com/ceph/ceph/pull/37366
- 03:28 PM Fix #47149 (Pending Backport): pybind/mgr/volumes: add debugging for global lock
- 02:10 PM Fix #47149 (Resolved): pybind/mgr/volumes: add debugging for global lock
- To help diagnose deadlocks we believe to be happening.
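Debugging of this sort usually means wrapping the global lock so every acquire and release is logged with timing. A minimal sketch with hypothetical names, not the actual mgr/volumes change:

```python
import logging
import threading
import time

log = logging.getLogger(__name__)

class DebugLock:
    """Wrap a lock and log how long each holder waited and held it.
    Illustrative sketch only."""

    def __init__(self):
        self._lock = threading.Lock()
        self._acquired = 0.0

    def __enter__(self):
        t0 = time.monotonic()
        self._lock.acquire()
        self._acquired = time.monotonic()
        log.debug("lock acquired after %.3fs wait", self._acquired - t0)
        return self

    def __exit__(self, *exc):
        held = time.monotonic() - self._acquired
        log.debug("lock released after %.3fs held", held)
        self._lock.release()
```

Long wait or hold times in the log then point directly at the code path that is starving the other volume operations.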
- 12:02 PM Bug #47140 (Duplicate): mgr/volumes: unresponsive Client::abort_conn() when cleaning stale libcep...
- ACK. Thx
- 11:51 AM Bug #47140: mgr/volumes: unresponsive Client::abort_conn() when cleaning stale libcephfs handle
- @Venky looks like a duplicate of https://tracker.ceph.com/issues/46882
From the logs further down,
Job ID: 537667... - 07:25 AM Bug #47140: mgr/volumes: unresponsive Client::abort_conn() when cleaning stale libcephfs handle
- https://pulpito.ceph.com/vshankar-2020-08-26_05:34:12-fs-wip-pdonnell-testing-20200826.032941-distro-basic-smithi/537...
- 07:20 AM Bug #47140 (Duplicate): mgr/volumes: unresponsive Client::abort_conn() when cleaning stale libcep...
- Libcephfs connection pool in mgr (mgr_util) identifies stale filesystem handles and cleans them up by calling abort_c...
- 09:32 AM Feature #47148 (In Progress): mds: get rid of the mds_lock when storing the inode backtrace to me...
- 09:32 AM Feature #47148 (Resolved): mds: get rid of the mds_lock when storing the inode backtrace to meta ...
- The objecter->mutate() call may take a long time to finish. We can get rid of the mds_lock while doing this.
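Generically, the pattern proposed here, dropping the big lock across the slow objecter call and retaking it only for the completion work, can be sketched as follows; all names are stand-ins, not the MDS code:

```python
import threading

mds_lock = threading.Lock()  # stand-in for the MDS big lock

def slow_mutate(payload: bytes) -> int:
    """Stand-in for objecter->mutate(); imagine this blocks on RADOS."""
    return len(payload)

def store_backtrace(text: str) -> int:
    with mds_lock:
        # Encode everything that needs protected MDS state while locked.
        encoded = text.encode()
    # Issue the slow I/O *without* holding mds_lock...
    result = slow_mutate(encoded)
    with mds_lock:
        # ...and retake it only for the finish/completion step.
        return result
```

The design point is that the lock protects the encode and finish steps, while the long-running I/O in between runs lock-free.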
- 09:30 AM Backport #47081 (In Progress): nautilus: mds: decoding of enum types on big-endian systems broken
- 09:26 AM Backport #47080 (In Progress): octopus: mds: decoding of enum types on big-endian systems broken
- 09:20 AM Feature #20 (Resolved): client: recover from a killed session (w/ blacklist)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:17 AM Bug #44276 (Resolved): pybind/mgr/volumes: cleanup stale connection hang
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:16 AM Feature #45371 (Resolved): mgr/volumes: `protect` and `clone` operation in a single transaction
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:14 AM Backport #47147 (Resolved): octopus: pybind/mgr/nfs: Test mounting of exports created with nfs ex...
- https://github.com/ceph/ceph/pull/37365
- 09:05 AM Backport #46957 (Resolved): octopus: pybind/mgr/nfs: add interface for adding user defined config...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36635
m...
- 09:05 AM Backport #46795 (Resolved): octopus: mds: Subvolume snapshot directory does not save attribute "c...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36403
m...
- 09:05 AM Backport #46591 (Resolved): octopus: ceph-fuse: ceph-fuse process is terminated by the logrotate ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36195
m...
- 09:04 AM Backport #46528 (Resolved): octopus: mgr/volumes: `protect` and `clone` operation in a single tra...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36126
m...
- 09:04 AM Backport #46402 (Resolved): octopus: client: recover from a killed session (w/ blacklist)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35962
m...
- 09:04 AM Backport #46389 (Resolved): octopus: pybind/mgr/volumes: cleanup stale connection hang
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35962
m...
- 12:42 AM Backport #47059 (In Progress): octopus: mgr/volumes: Clone operation uses source subvolume root d...
- 12:35 AM Backport #46820 (In Progress): octopus: pybind/mgr/volumes: Add the ability to keep snapshots of ...
08/25/2020
- 10:10 PM Backport #46957: octopus: pybind/mgr/nfs: add interface for adding user defined configuration
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36635
merged
- 10:09 PM Backport #46795: octopus: mds: Subvolume snapshot directory does not save attribute "ceph.quota.m...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36403
merged
- 10:08 PM Backport #46591: octopus: ceph-fuse: ceph-fuse process is terminated by the logrotate task and wh...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36195
merged
- 10:08 PM Backport #46528: octopus: mgr/volumes: `protect` and `clone` operation in a single transaction
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36126
merged
- 10:07 PM Backport #46402: octopus: client: recover from a killed session (w/ blacklist)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35962
merged
- 10:07 PM Backport #46389: octopus: pybind/mgr/volumes: cleanup stale connection hang
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35962
merged
- 04:37 AM Bug #47125 (Fix Under Review): mds: fix possible crash when the MDS is stopping
- 04:32 AM Bug #47125 (In Progress): mds: fix possible crash when the MDS is stopping
- Months ago I hit one crash without any useful logs; it was possibly caused by this.
- 04:30 AM Bug #47125 (Resolved): mds: fix possible crash when the MDS is stopping
- While the MDS daemon is stopping, if it calls journaler->flush(), it may crash due to the onsafe parameter in J...
08/24/2020
- 08:20 PM Feature #46989 (Pending Backport): pybind/mgr/nfs: Test mounting of exports created with nfs expo...
- 07:37 PM Bug #47051 (Duplicate): fs/upgrade/volume_client: Command failed with status 124: 'sudo adjust-ul...
- 07:34 PM Bug #47051: fs/upgrade/volume_client: Command failed with status 124: 'sudo adjust-ulimits ceph-c...
- Older log with this occurrence: https://pulpito.ceph.com/pdonnell-2020-08-08_02:16:26-fs-wip-pdonnell-testing-2020080...
- 07:25 PM Bug #47051: fs/upgrade/volume_client: Command failed with status 124: 'sudo adjust-ulimits ceph-c...
- I thought this might be fixed by https://github.com/ceph/ceph/pull/36499 after earlier discussions with Neha but it s...
- 06:36 PM Bug #47015: mds: decoding of enum types on big-endian systems broken
- Ulrich Weigand wrote:
> Thanks for creating the backport requests!
>
> Would it make sense to also include this c...
- 11:20 AM Bug #47015: mds: decoding of enum types on big-endian systems broken
- Thanks for creating the backport requests!
Would it make sense to also include this commit for backporting:
https...
- 06:36 PM Backport #47081: nautilus: mds: decoding of enum types on big-endian systems broken
- Note to backporters: please include https://github.com/ceph/ceph/pull/35920
- 06:36 PM Backport #47080: octopus: mds: decoding of enum types on big-endian systems broken
- Note to backporters: please include https://github.com/ceph/ceph/pull/35920
- 01:51 PM Feature #47034: mds: readdir for snapshot diff
- cephfs-mirror could use this to read the changes to entire subtrees (not just a single directory) given a snapshot.
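What such a snapshot diff would surface can be illustrated with a small userspace comparison of two snapshot listings; this models the result cephfs-mirror would consume, not the proposed readdir-level protocol itself:

```python
def snap_diff(old_entries, new_entries):
    """Compare two snapshot directory listings given as {name: (size, mtime)}.
    Returns (added, removed, changed). Illustrative only; the actual MDS
    interface for snapshot diffs may differ."""
    old_names, new_names = set(old_entries), set(new_entries)
    added = sorted(new_names - old_names)
    removed = sorted(old_names - new_names)
    changed = sorted(n for n in old_names & new_names
                     if old_entries[n] != new_entries[n])
    return added, removed, changed
```

A mirror daemon could then transfer only the added and changed entries between two snapshots instead of rescanning the whole subtree.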
- 01:50 PM Bug #47009 (Resolved): TestNFS.test_cluster_set_reset_user_config: command failed with status 32:...
- backport PR: https://github.com/ceph/ceph/pull/36748
- 10:41 AM Bug #47009 (Pending Backport): TestNFS.test_cluster_set_reset_user_config: command failed with st...
- 01:48 PM Feature #47102: mds: add perf counter for cap messages
- Separate perf counters for revokes and releases.
- 01:30 AM Feature #47102 (Resolved): mds: add perf counter for cap messages
- 01:47 PM Feature #45747 (Resolved): pybind/mgr/nfs: add interface for adding user defined configuration
- backport pr: https://github.com/ceph/ceph/pull/36748
Note that this is an exception. I don't plan to include volum... - 01:47 PM Bug #47006 (Triaged): mon: required client features adding/removing
08/22/2020
- 07:45 PM Backport #47096 (Resolved): nautilus: mds: provide alternatives to increase the total cephfs subv...
- https://github.com/ceph/ceph/pull/36804
- 07:45 PM Backport #47095 (Resolved): octopus: mds: provide alternatives to increase the total cephfs subvo...
- https://github.com/ceph/ceph/pull/38553
- 07:43 PM Backport #47090 (Resolved): nautilus: After restarting an mds, its standby-replay mds remained in ...
- https://github.com/ceph/ceph/pull/37179
- 07:43 PM Backport #47089 (Resolved): octopus: After restarting an mds, its standby-replay mds remained in t...
- https://github.com/ceph/ceph/pull/37363
- 07:43 PM Backport #47088 (Resolved): nautilus: mds: recover files after normal session close
- https://github.com/ceph/ceph/pull/37178
- 07:43 PM Backport #47087 (Resolved): octopus: mds: recover files after normal session close
- https://github.com/ceph/ceph/pull/37334
- 07:43 PM Backport #47086 (Rejected): nautilus: common: validate type CephBool cause 'invalid command json'
- 07:43 PM Backport #47085 (Resolved): octopus: common: validate type CephBool cause 'invalid command json'
- https://github.com/ceph/ceph/pull/37362
- 07:43 PM Backport #47084 (Rejected): nautilus: mds: 'forward loop' when forward_all_requests_to_auth is set
- 07:43 PM Backport #47083 (Resolved): octopus: mds: 'forward loop' when forward_all_requests_to_auth is set
- https://github.com/ceph/ceph/pull/37360
- 07:42 PM Backport #47081 (Resolved): nautilus: mds: decoding of enum types on big-endian systems broken
- https://github.com/ceph/ceph/pull/36814
- 07:42 PM Backport #47080 (Resolved): octopus: mds: decoding of enum types on big-endian systems broken
- https://github.com/ceph/ceph/pull/36813
- 01:43 AM Bug #46988 (Pending Backport): mds: 'forward loop' when forward_all_requests_to_auth is set
- 01:42 AM Bug #46984 (Pending Backport): mds: recover files after normal session close
- 01:41 AM Bug #46976 (Pending Backport): After restarting an mds, its standby-replay mds remained in the "re...
- 01:39 AM Bug #46985 (Pending Backport): common: validate type CephBool cause 'invalid command json'
- 01:36 AM Bug #47015 (Pending Backport): mds: decoding of enum types on big-endian systems broken
- 12:01 AM Bug #47075: qa: FAIL: test_config_session_timeout
- Another test that failed because of too long a sleep:
/ceph/teuthology-archive/pdonnell-2020-08-19_23:50:59-multim...
08/21/2020
- 11:44 PM Bug #47075 (New): qa: FAIL: test_config_session_timeout
- ...
- 11:13 PM Feature #46074 (Pending Backport): mds: provide alternatives to increase the total cephfs subvolu...
- Wiring up mgr/volumes will happen in another ticket.
- 11:10 PM Bug #46882: client: mount abort hangs: [volumes INFO mgr_util] aborting connection from cephfs 'c...
- Another: /ceph/teuthology-archive/pdonnell-2020-08-21_07:42:41-fs-wip-pdonnell-testing-20200821.043335-distro-basic-s...
- 01:10 AM Tasks #47047 (Fix Under Review): client: release the client_lock before copying data in all the r...
- 01:09 AM Bug #47039 (Fix Under Review): client: mutex lock FAILED ceph_assert(nlock > 0)
- It is likely caused by my local code; I added more checking code for direct use of the client_lock.
08/20/2020
- 11:15 PM Backport #47059: octopus: mgr/volumes: Clone operation uses source subvolume root directory mode ...
- Awaiting backport for https://tracker.ceph.com/issues/46820, which conflicts with merge of backport https://github.co...
- 08:09 PM Backport #47059 (Resolved): octopus: mgr/volumes: Clone operation uses source subvolume root dire...
- https://github.com/ceph/ceph/pull/36803
- 11:12 PM Backport #46820: octopus: pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes ind...
- Awaiting merge of backport https://github.com/ceph/ceph/pull/36126 as it conflicts with commits for this patch.
- 11:07 PM Backport #47058 (In Progress): nautilus: mgr/volumes: Clone operation uses source subvolume root ...
- 08:09 PM Backport #47058 (Resolved): nautilus: mgr/volumes: Clone operation uses source subvolume root dir...
- -https://github.com/ceph/ceph/pull/36744-
https://github.com/ceph/ceph/pull/36833
- 08:54 PM Bug #41069 (Need More Info): nautilus: test_subvolume_group_create_with_desired_mode fails with "...
- Let's leave this as NI and see what the new debugging for mgr/volumes shows if it comes up again.
- 09:38 AM Bug #41069: nautilus: test_subvolume_group_create_with_desired_mode fails with "AssertionError: '...
- Tried 1k iterations on nautilus 14.2.11 but could not reproduce.
- 05:32 PM Bug #47009 (Fix Under Review): TestNFS.test_cluster_set_reset_user_config: command failed with st...
- 12:23 PM Bug #47009 (In Progress): TestNFS.test_cluster_set_reset_user_config: command failed with status ...
- 12:22 PM Bug #47009: TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo moun...
- ganesha log...
- 07:43 AM Bug #47009: TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo moun...
- /a/kchai-2020-08-19_06:47:30-rados-wip-kefu-testing-2020-08-19-1141-distro-basic-smithi/5359038/
- 07:22 AM Bug #47009: TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo moun...
- https://pulpito.ceph.com/swagner-2020-08-19_07:38:40-rados:cephadm-wip-swagner-testing-2020-08-18-1624-distro-basic-s...
- 02:41 PM Bug #47054 (New): mgr/volumes: Handle potential errors in readdir cephfs python binding
- Current implementation of the python binding in cephfs.pyx does not process errno in case of a nullptr return from re...
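The bug report above hinges on the POSIX readdir() contract (which the cephfs.pyx binding wraps via ceph_readdir): a null return means either end-of-directory or an error, and only errno distinguishes them. A hedged sketch of the missing check, using plain POSIX readdir rather than the actual libcephfs call:

```cpp
#include <cassert>
#include <cerrno>
#include <dirent.h>
#include <string>
#include <vector>

// readdir() returns nullptr both at end-of-stream and on failure; the only
// way to tell them apart is to clear errno before the call and inspect it
// afterwards. (The ceph_readdir wrapper mentioned in the ticket has the same
// contract; plain POSIX readdir is used here for illustration.)
bool list_dir(const std::string& path, std::vector<std::string>& names) {
    DIR* d = opendir(path.c_str());
    if (!d)
        return false;
    for (;;) {
        errno = 0;                       // distinguish "end" from "error" below
        struct dirent* de = readdir(d);
        if (!de) {
            int err = errno;             // nonzero means a real readdir failure
            closedir(d);
            return err == 0;
        }
        names.emplace_back(de->d_name);
    }
}
```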
- 01:25 PM Bug #47033 (Duplicate): client: inode ref leak
- 06:11 AM Bug #47033 (New): client: inode ref leak
- It fails immediately with the following trace.
/home/zhyan/Ceph/ceph/src/client/Client.cc: In function 'void Client::d...
- 11:23 AM Bug #46163 (Pending Backport): mgr/volumes: Clone operation uses source subvolume root directory ...
- 10:35 AM Backport #46821 (Resolved): nautilus: pybind/mgr/volumes: Add the ability to keep snapshots of su...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36448
m...
- 09:49 AM Bug #47051 (Duplicate): fs/upgrade/volume_client: Command failed with status 124: 'sudo adjust-ul...
- Hit the following error in fs/upgrade/volume_client test,...
- 09:09 AM Bug #46882: client: mount abort hangs: [volumes INFO mgr_util] aborting connection from cephfs 'c...
- Shyam spotted this issue in a recent mgr/volumes testing,
https://github.com/ceph/ceph/pull/35756#issuecomment-67667...
- 05:32 AM Feature #46059 (Fix Under Review): vstart_runner.py: optionally rotate logs between tests
- 05:32 AM Feature #46059: vstart_runner.py: optionally rotate logs between tests
- Raised https://github.com/ceph/ceph/pull/36732 since https://github.com/ceph/ceph/pull/35824 was reverted.
- 02:13 AM Feature #46059 (In Progress): vstart_runner.py: optionally rotate logs between tests
- Reverted by https://github.com/ceph/ceph/pull/36711 to fix api tests. Rishabh, please open a new PR.
- 02:51 AM Tasks #47047 (Resolved): client: release the client_lock before copying data in all the reads
- The memory copy could take a long time; we can unlock the client_lock before doing the copy.
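A hypothetical sketch of that idea (names and structure are illustrative, not the actual Client:: code): hold the lock only long enough to take a reference to the data, then run the potentially large copy with the lock released.

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <memory>
#include <mutex>
#include <vector>

// client_lock is held only long enough to grab a reference to the cached
// data; the memcpy then runs unlocked, so other client threads are not
// stalled behind a large copy.
struct Client {
    std::mutex client_lock;
    std::shared_ptr<std::vector<char>> cached;  // protected by client_lock

    std::size_t read(char* out, std::size_t len) {
        std::shared_ptr<std::vector<char>> snap;
        {
            std::scoped_lock l{client_lock};    // short critical section
            snap = cached;                      // bump refcount under the lock
        }
        if (!snap)
            return 0;
        std::size_t n = std::min(len, snap->size());
        std::memcpy(out, snap->data(), n);      // copy happens unlocked
        return n;
    }
};
```

The shared_ptr keeps the buffer alive even if another thread swaps out `cached` while the copy is in flight.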
- 01:50 AM Bug #47039 (In Progress): client: mutex lock FAILED ceph_assert(nlock > 0)
- I checked the whole libcephfs code and didn't find any suspicious code. And I have one enhancement about the clie...
08/19/2020
- 11:39 PM Bug #47039: client: mutex lock FAILED ceph_assert(nlock > 0)
- IMO, it is not safe to use client_lock.lock/.unlock directly without any check beforehand; if we use them we'd be...
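One way to get such a check, loosely modeled on ceph_mutex_is_locked_by_me (the names and details in this sketch are hypothetical, not the libcephfs implementation): a mutex wrapper that records its owner so code paths can assert they actually hold the lock before touching protected state.

```cpp
#include <atomic>
#include <cassert>
#include <mutex>
#include <thread>

// Records the owning thread on lock() and clears it on unlock(), so callers
// can assert ownership instead of calling lock/unlock blindly.
class CheckedMutex {
    std::mutex m;
    std::atomic<std::thread::id> owner{};
public:
    void lock() {
        m.lock();
        owner.store(std::this_thread::get_id());
    }
    void unlock() {
        owner.store(std::thread::id{});   // clear before releasing
        m.unlock();
    }
    bool is_locked_by_me() const {
        return owner.load() == std::this_thread::get_id();
    }
};
```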
- 11:33 PM Bug #47039: client: mutex lock FAILED ceph_assert(nlock > 0)
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > Introduced by https://github.com/ceph/ceph/pull/35410 ?
>
> I don'...
- 06:01 PM Bug #47039: client: mutex lock FAILED ceph_assert(nlock > 0)
- Xiubo Li wrote:
> Introduced by https://github.com/ceph/ceph/pull/35410 ?
I don't think so. The commit looks to b...
- 01:08 PM Bug #47039: client: mutex lock FAILED ceph_assert(nlock > 0)
- Introduced by https://github.com/ceph/ceph/pull/35410 ?
- 01:08 PM Bug #47039 (Resolved): client: mutex lock FAILED ceph_assert(nlock > 0)
- ...
- 11:30 PM Bug #47033: client: inode ref leak
- Xiubo Li wrote:
> With [1] and [2] I have run the test for a very long time and didn't see any errors.
>
> [1] htt...
- 11:28 PM Bug #47033 (Duplicate): client: inode ref leak
- 02:33 PM Bug #47033: client: inode ref leak
- With [1] and [2] I have run the test for a very long time and didn't see any errors.
[1] https://github.com/ceph/ce...
- 08:48 AM Bug #47033: client: inode ref leak
- good commit is c8b5f84f49ef74609ba3ea69dea0764ef925ae85
- 08:07 AM Bug #47033: client: inode ref leak
- Zheng Yan wrote:
> It can be easily reproduced by the following program.
>
> [...]
>
> pre-create testdir at root...
- 07:56 AM Bug #47033 (In Progress): client: inode ref leak
- I will take a look at this. Thanks :-)
- 07:29 AM Bug #47033 (Duplicate): client: inode ref leak
- It can be easily reproduced by the following program. ...
- 07:57 PM Bug #46496 (Resolved): pybind/mgr/volumes: subvolume operations throw exception if volume doesn't...
- 05:59 PM Backport #46793 (Rejected): nautilus: pybind/mgr/volumes: subvolume operations throw exception if...
- https://tracker.ceph.com/issues/46792#note-4
- 10:28 AM Backport #46793: nautilus: pybind/mgr/volumes: subvolume operations throw exception if volume doe...
- Please check https://tracker.ceph.com/issues/46792#note-3
- 05:59 PM Backport #46792 (Rejected): octopus: pybind/mgr/volumes: subvolume operations throw exception if ...
- Kotresh Hiremath Ravishankar wrote:
> The issue got introduced by the commit https://github.com/ceph/ceph/pull/32319...
- 10:27 AM Backport #46792: octopus: pybind/mgr/volumes: subvolume operations throw exception if volume does...
- The issue got introduced by the commit https://github.com/ceph/ceph/pull/32319/commits/a44de38b61d598fb0512ea48da0de4...
- 05:56 PM Bug #47006: mon: required client features adding/removing
- Jos Collin wrote:
> Patrick Donnelly wrote:
> > Can you elaborate on what the problem is? Give an example.
>
> [...
- 05:05 AM Bug #47006 (New): mon: required client features adding/removing
- Patrick Donnelly wrote:
> Can you elaborate on what the problem is? Give an example....
- 01:59 PM Bug #47041 (Resolved): MDS recall configuration options not documented yet
- <T1w> Hi, some of the "new" MDS recall configuration options mentioned on https://ceph.io/community/nautilus-cephfs/ ...
- 10:09 AM Backport #46948 (In Progress): nautilus: qa: Fs cleanup fails with a traceback
- 10:05 AM Backport #46947 (In Progress): octopus: qa: Fs cleanup fails with a traceback
- 07:46 AM Feature #47034 (New): mds: readdir for snapshot diff
- Make readdir return changed/removed dentries since a given snapshot.
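A tiny sketch of those semantics (the types and the name-to-version representation are hypothetical; the real feature would be computed by the MDS from dentry versions): given the dentry listing at the snapshot and the current listing, report changed and removed names.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <set>
#include <string>

// Hypothetical model: a directory listing maps dentry name -> version.
using Listing = std::map<std::string, uint64_t>;

// Report names that changed (created or modified) and names that were
// removed since the snapshot listing was taken.
void snapdiff(const Listing& snap, const Listing& now,
              std::set<std::string>& changed, std::set<std::string>& removed) {
    for (const auto& [name, ver] : now) {
        auto it = snap.find(name);
        if (it == snap.end() || it->second != ver)
            changed.insert(name);             // new since snap, or version bumped
    }
    for (const auto& e : snap)
        if (!now.count(e.first))
            removed.insert(e.first);          // present at snap, gone now
}
```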
08/18/2020
- 08:16 PM Backport #47014 (In Progress): octopus: librados|libcephfs: use latest MonMap when creating from ...
- 04:03 PM Backport #47014 (Resolved): octopus: librados|libcephfs: use latest MonMap when creating from Cep...
- https://github.com/ceph/ceph/pull/36705
- 08:13 PM Backport #47013 (In Progress): nautilus: librados|libcephfs: use latest MonMap when creating from...
- 04:02 PM Backport #47013 (Resolved): nautilus: librados|libcephfs: use latest MonMap when creating from Ce...
- https://github.com/ceph/ceph/pull/36704
- 04:57 PM Bug #47015 (Fix Under Review): mds: decoding of enum types on big-endian systems broken
- 04:26 PM Bug #47015 (Resolved): mds: decoding of enum types on big-endian systems broken
- When a struct member that has enum type needs to be encoded or
decoded, we need to use an explicit integer type, sin...
- 04:53 PM Bug #47012 (Need More Info): mds: MDCache.cc: 6418: FAILED ceph_assert(r == 0 || r == -2)
- 04:52 PM Bug #47012: mds: MDCache.cc: 6418: FAILED ceph_assert(r == 0 || r == -2)
- The mds.0 debug_ms log level = 1; the log is in the attachment.
- 03:21 PM Bug #47012: mds: MDCache.cc: 6418: FAILED ceph_assert(r == 0 || r == -2)
- Please try to reproduce it again with debug_ms = 1.
- 03:09 PM Bug #47012 (Need More Info): mds: MDCache.cc: 6418: FAILED ceph_assert(r == 0 || r == -2)
- My mds.0 service (standby; active MDS count: 4) crashes cyclically; each time the stack trace is as follows:
ceph versio...
- 04:52 PM Bug #47006 (Need More Info): mon: required client features adding/removing
- Can you elaborate on what the problem is? Give an example.
- 12:07 PM Bug #47006 (Resolved): mon: required client features adding/removing
- ...
- 04:38 PM Backport #47021 (Resolved): octopus: client: shutdown race fails with status 141
- https://github.com/ceph/ceph/pull/37358
- 04:37 PM Backport #47020 (Resolved): nautilus: client: shutdown race fails with status 141
- https://github.com/ceph/ceph/pull/41593
- 04:35 PM Backport #47018 (Resolved): octopus: mds: kcephfs parse dirfrag's ndist is always 0
- https://github.com/ceph/ceph/pull/37357
- 04:34 PM Backport #47017 (Resolved): nautilus: mds: kcephfs parse dirfrag's ndist is always 0
- https://github.com/ceph/ceph/pull/37177
- 04:34 PM Backport #47016 (Resolved): octopus: mds: fix the decode version
- https://github.com/ceph/ceph/pull/37356
- 04:28 PM Feature #46059 (Resolved): vstart_runner.py: optionally rotate logs between tests
- 04:12 PM Bug #47011 (Fix Under Review): client: Client::open() pass wrong cap mask to path_walk
- 02:23 PM Bug #47011 (Resolved): client: Client::open() pass wrong cap mask to path_walk
- 04:01 PM Fix #46645 (Pending Backport): librados|libcephfs: use latest MonMap when creating from CephContext
- 12:50 PM Bug #47009 (Resolved): TestNFS.test_cluster_set_reset_user_config: command failed with status 32:...
- ...
- 11:55 AM Feature #47005 (Fix Under Review): kceph: add metric for number of pinned capabilities and number...
- Patchwork link: https://patchwork.kernel.org/patch/11720599/
- 11:19 AM Feature #47005 (Resolved): kceph: add metric for number of pinned capabilities and number of dirs...
- 11:19 AM Feature #46866 (In Progress): kceph: add metric for number of pinned capabilities
- 11:17 AM Feature #46866: kceph: add metric for number of pinned capabilities
- The number of pinned capabilities always equals the total number of s_caps in the kclient.
- 03:40 AM Bug #43039 (Pending Backport): client: shutdown race fails with status 141
- 03:39 AM Bug #46868 (Resolved): client: switch to use ceph_mutex_is_locked_by_me always
- 03:35 AM Bug #46891 (Pending Backport): mds: kcephfs parse dirfrag's ndist is always 0
- 03:35 AM Bug #46926 (Pending Backport): mds: fix the decode version
08/17/2020
- 03:48 PM Bug #46985: common: validate type CephBool cause 'invalid command json'
- Just this commit needs to be backported:
common: fix validate type CephBool cause 'invalid command json'
Fixes: http...
- 08:52 AM Bug #46985 (Fix Under Review): common: validate type CephBool cause 'invalid command json'
- 02:07 AM Bug #46985 (Resolved): common: validate type CephBool cause 'invalid command json'
- ...
- 02:33 PM Bug #46883: kclient: ghost kernel mount
- Patrick Donnelly wrote:
> So there are two issues here:
>
> * umount should not use -l so we aren't papering over...
- 01:45 PM Bug #46883: kclient: ghost kernel mount
- So there are two issues here:
* umount should not use -l so we aren't papering over bugs. Use -f to umount. If -f ...
- 01:42 PM Bug #46887 (Need More Info): kceph: testing branch: hang in workunit by 1/2 clients during tree e...
- Would be good to adjust the qa code to fetch the stack if the process hangs. Get /sys/debug/fs/ceph files as well.
- 10:41 AM Feature #46989 (Fix Under Review): pybind/mgr/nfs: Test mounting of exports created with nfs expo...
- 10:37 AM Feature #46989 (Resolved): pybind/mgr/nfs: Test mounting of exports created with nfs export command
- 09:51 AM Bug #41069: nautilus: test_subvolume_group_create_with_desired_mode fails with "AssertionError: '...
- The code looks OK on both the master and nautilus branches. I ran 1000 iterations on master but didn't see the failure. I w...
- 08:44 AM Bug #46988 (Fix Under Review): mds: 'forward loop' when forward_all_requests_to_auth is set
- 08:44 AM Bug #46988 (Fix Under Review): mds: 'forward loop' when forward_all_requests_to_auth is set
- 08:38 AM Bug #46988 (Resolved): mds: 'forward loop' when forward_all_requests_to_auth is set
- 05:12 AM Bug #46868 (Fix Under Review): client: switch to use ceph_mutex_is_locked_by_me always
- 05:11 AM Tasks #46890 (Fix Under Review): client: add request lock support
08/16/2020
- 04:43 AM Bug #46976 (Fix Under Review): After restarting an mds, its standby-replay mds remained in the "re...
- 04:42 AM Bug #46984 (Fix Under Review): mds: recover files after normal session close
- 04:30 AM Bug #46984 (Resolved): mds: recover files after normal session close
- The client does not flush its cap releases before sending the session close request.
- 03:59 AM Bug #42365 (Fix Under Review): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
08/14/2020
- 06:45 PM Bug #44294: mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- This issue and #44295 were both fixed by the same PR, https://github.com/ceph/ceph/pull/33538, which was backported t...
- 02:00 PM Bug #46976: After restarting an mds, its standby-replay mds remained in the "resolve" state
- MDSRank::calc_recovery_set() should be called by MDSRank::resolve_start
- 09:24 AM Bug #46976 (Resolved): After restarting an mds, its standby-replay mds remained in the "resolve" s...
- In a multi-MDS, standby-replay-enabled Ceph cluster, after reducing a filesystem's MDS count and restarting an active mds, its ...
- 12:51 PM Backport #46957 (In Progress): octopus: pybind/mgr/nfs: add interface for adding user defined con...
08/13/2020
- 11:31 PM Backport #46860 (Resolved): nautilus: mds: do not raise "client failing to respond to cap release...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36513
m...
- 11:31 PM Backport #46858 (Resolved): nautilus: qa: add debugging for volumes plugin use of libcephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36512
m...
- 11:30 PM Backport #46856 (Resolved): nautilus: client: static dirent for readdir is not thread-safe
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36511
m...
- 08:53 PM Bug #41228 (Resolved): mon: deleting a CephFS and its pools causes MONs to crash
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:39 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- also /ceph/teuthology-archive/yuriw-2020-08-05_20:28:22-fs-wip-yuri2-testing-2020-08-05-1459-octopus-distro-basic-smi...
- 05:37 PM Bug #41228 (New): mon: deleting a CephFS and its pools causes MONs to crash
- This is back but in Octopus. The fix for #40011 doesn't fix this, apparently....
- 08:51 PM Bug #43517 (Resolved): qa: random subvolumegroup collision
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:49 PM Backport #46960 (Resolved): nautilus: cephfs-journal-tool: incorrect read_offset after finding mi...
- https://github.com/ceph/ceph/pull/37479
- 08:49 PM Backport #46959 (Resolved): octopus: cephfs-journal-tool: incorrect read_offset after finding mis...
- https://github.com/ceph/ceph/pull/37854
- 08:49 PM Bug #45662 (Resolved): pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:49 PM Backport #46957 (Resolved): octopus: pybind/mgr/nfs: add interface for adding user defined config...
- https://github.com/ceph/ceph/pull/36635
- 08:48 PM Bug #45910 (Resolved): pybind/mgr/volumes: volume deletion not always removes the associated osd ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:48 PM Bug #46277 (Resolved): pybind/mgr/volumes: get_pool_names may indicate volume does not exist if m...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:47 PM Bug #46565 (Resolved): mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:46 PM Backport #46948 (Resolved): nautilus: qa: Fs cleanup fails with a traceback
- https://github.com/ceph/ceph/pull/36714
- 08:46 PM Backport #46947 (Resolved): octopus: qa: Fs cleanup fails with a traceback
- https://github.com/ceph/ceph/pull/36713
- 08:46 PM Backport #46943 (Resolved): nautilus: mds: segv in MDCache::wait_for_uncommitted_fragments
- https://github.com/ceph/ceph/pull/36968
- 08:46 PM Backport #46942 (Resolved): octopus: mds: segv in MDCache::wait_for_uncommitted_fragments
- https://github.com/ceph/ceph/pull/37355
- 08:45 PM Backport #46941 (Resolved): nautilus: mds: memory leak during cache drop
- https://github.com/ceph/ceph/pull/36967
- 08:45 PM Backport #46940 (Resolved): octopus: mds: memory leak during cache drop
- https://github.com/ceph/ceph/pull/37354
- 08:44 PM Backport #46234 (Resolved): octopus: pybind/mgr/volumes: volume deletion not always removes the a...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36327
m...
- 06:38 PM Backport #46234: octopus: pybind/mgr/volumes: volume deletion not always removes the associated o...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36327
merged
- 08:43 PM Backport #46477 (Resolved): octopus: pybind/mgr/volumes: volume deletion should check mon_allow_p...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36327
m...
- 06:38 PM Backport #46477: octopus: pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36327
merged
- 08:43 PM Backport #46465 (Resolved): octopus: pybind/mgr/volumes: get_pool_names may indicate volume does ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36327
m...
- 06:38 PM Backport #46465: octopus: pybind/mgr/volumes: get_pool_names may indicate volume does not exist i...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36327
merged
- 08:43 PM Backport #46642 (Resolved): octopus: qa: random subvolumegroup collision
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36327
m...
- 06:38 PM Backport #46642: octopus: qa: random subvolumegroup collision
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36327
merged
- 08:43 PM Backport #46712 (Resolved): octopus: mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36299
m...
- 06:38 PM Backport #46712: octopus: mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36299
merged
- 08:43 PM Bug #46572 (Resolved): mgr/nfs: help for "nfs export create" and "nfs export delete" says "<attac...
- 08:43 PM Backport #46632 (Resolved): octopus: mgr/nfs: help for "nfs export create" and "nfs export delete...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36220
m...
- 06:37 PM Backport #46632: octopus: mgr/nfs: help for "nfs export create" and "nfs export delete" says "<at...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36220
merged
- 03:36 PM Bug #46926 (Fix Under Review): mds: fix the decode version
- 03:24 PM Bug #46926 (Resolved): mds: fix the decode version
- https://github.com/ceph/ceph/commit/3fac3b1236c4918e9640e38fe7f5f59efc0a23b9
the decode changes are reverted, but ...