Activity
From 07/23/2020 to 08/21/2020
08/21/2020
- 11:44 PM Bug #47075 (New): qa: FAIL: test_config_session_timeout
- ...
- 11:13 PM Feature #46074 (Pending Backport): mds: provide alternatives to increase the total cephfs subvolu...
- Wiring up mgr/volumes will happen in another ticket.
- 11:10 PM Bug #46882: client: mount abort hangs: [volumes INFO mgr_util] aborting connection from cephfs 'c...
- Another: /ceph/teuthology-archive/pdonnell-2020-08-21_07:42:41-fs-wip-pdonnell-testing-20200821.043335-distro-basic-s...
- 01:10 AM Tasks #47047 (Fix Under Review): client: release the client_lock before copying data in all the r...
- 01:09 AM Bug #47039 (Fix Under Review): client: mutex lock FAILED ceph_assert(nlock > 0)
- It should be caused by my local code. I added more check code for using the client_lock directly.
08/20/2020
- 11:15 PM Backport #47059: octopus: mgr/volumes: Clone operation uses source subvolume root directory mode ...
- Awaiting backport for https://tracker.ceph.com/issues/46820, which conflicts with merge of backport https://github.co...
- 08:09 PM Backport #47059 (Resolved): octopus: mgr/volumes: Clone operation uses source subvolume root dire...
- https://github.com/ceph/ceph/pull/36803
- 11:12 PM Backport #46820: octopus: pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes ind...
- Awaiting merge of backport https://github.com/ceph/ceph/pull/36126 as it conflicts with commits for this patch.
- 11:07 PM Backport #47058 (In Progress): nautilus: mgr/volumes: Clone operation uses source subvolume root ...
- 08:09 PM Backport #47058 (Resolved): nautilus: mgr/volumes: Clone operation uses source subvolume root dir...
- -https://github.com/ceph/ceph/pull/36744-
https://github.com/ceph/ceph/pull/36833
- 08:54 PM Bug #41069 (Need More Info): nautilus: test_subvolume_group_create_with_desired_mode fails with "...
- Let's leave this as NI and see what the new debugging for mgr/volumes shows if it comes up again.
- 09:38 AM Bug #41069: nautilus: test_subvolume_group_create_with_desired_mode fails with "AssertionError: '...
- Tried 1k iterations on nautilus 14.2.11 but could not reproduce.
- 05:32 PM Bug #47009 (Fix Under Review): TestNFS.test_cluster_set_reset_user_config: command failed with st...
- 12:23 PM Bug #47009 (In Progress): TestNFS.test_cluster_set_reset_user_config: command failed with status ...
- 12:22 PM Bug #47009: TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo moun...
- ganesha log...
- 07:43 AM Bug #47009: TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo moun...
- /a/kchai-2020-08-19_06:47:30-rados-wip-kefu-testing-2020-08-19-1141-distro-basic-smithi/5359038/
- 07:22 AM Bug #47009: TestNFS.test_cluster_set_reset_user_config: command failed with status 32: 'sudo moun...
- https://pulpito.ceph.com/swagner-2020-08-19_07:38:40-rados:cephadm-wip-swagner-testing-2020-08-18-1624-distro-basic-s...
- 02:41 PM Bug #47054 (New): mgr/volumes: Handle potential errors in readdir cephfs python binding
- Current implementation of the python binding in cephfs.pyx does not process errno in case of a nullptr return from re...
- 01:25 PM Bug #47033 (Duplicate): client: inode ref leak
- 06:11 AM Bug #47033 (New): client: inode ref leak
- It fails immediately with following trace.
/home/zhyan/Ceph/ceph/src/client/Client.cc: In function 'void Client::d...
- 11:23 AM Bug #46163 (Pending Backport): mgr/volumes: Clone operation uses source subvolume root directory ...
- 10:35 AM Backport #46821 (Resolved): nautilus: pybind/mgr/volumes: Add the ability to keep snapshots of su...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36448
m...
- 09:49 AM Bug #47051 (Duplicate): fs/upgrade/volume_client: Command failed with status 124: 'sudo adjust-ul...
- Hit the following error in fs/upgrade/volume_client test,...
- 09:09 AM Bug #46882: client: mount abort hangs: [volumes INFO mgr_util] aborting connection from cephfs 'c...
- Shyam spotted this issue in a recent mgr/volumes testing,
https://github.com/ceph/ceph/pull/35756#issuecomment-67667...
- 05:32 AM Feature #46059 (Fix Under Review): vstart_runner.py: optionally rotate logs between tests
- 05:32 AM Feature #46059: vstart_runner.py: optionally rotate logs between tests
- Raised https://github.com/ceph/ceph/pull/36732 since https://github.com/ceph/ceph/pull/35824 was reverted.
- 02:13 AM Feature #46059 (In Progress): vstart_runner.py: optionally rotate logs between tests
- Reverted by https://github.com/ceph/ceph/pull/36711 to fix api tests. Rishabh, please open a new PR.
- 02:51 AM Tasks #47047 (Resolved): client: release the client_lock before copying data in all the reads
- The memory copy could take a long time; we can just unlock the client_lock before doing the copy.
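A minimal sketch of that idea (hypothetical names such as read_into and cached_data; this is not the actual Client::_read() code): take client_lock only long enough to grab a refcounted handle on the buffered data, then run the potentially large memcpy unlocked while the refcount keeps the buffer alive.

    // Hypothetical sketch, not the real Client code.
    #include <algorithm>
    #include <cstring>
    #include <memory>
    #include <mutex>
    #include <vector>

    std::mutex client_lock;                                // stand-in for Client::client_lock
    std::shared_ptr<const std::vector<char>> cached_data;  // refcounted buffer, much like a bufferlist

    size_t read_into(char *buf, size_t len) {
      std::shared_ptr<const std::vector<char>> snap;
      {
        std::lock_guard<std::mutex> l(client_lock);
        snap = cached_data;            // cheap: only bumps the refcount under the lock
      }
      if (!snap)
        return 0;
      size_t n = std::min(len, snap->size());
      std::memcpy(buf, snap->data(), n);  // the long copy runs unlocked; the refcount keeps the data alive
      return n;
    }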
- 01:50 AM Bug #47039 (In Progress): client: mutex lock FAILED ceph_assert(nlock > 0)
- Checked the whole libcephfs code, didn't find any suspicious code about it. And I have one enhancement about the clie...
08/19/2020
- 11:39 PM Bug #47039: client: mutex lock FAILED ceph_assert(nlock > 0)
- IMO, it is not safe to use the client_lock.lock/.unlock directly without any check before it; if we use them we'd be...
- 11:33 PM Bug #47039: client: mutex lock FAILED ceph_assert(nlock > 0)
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > Introduced by https://github.com/ceph/ceph/pull/35410 ?
>
> I don'...
- 06:01 PM Bug #47039: client: mutex lock FAILED ceph_assert(nlock > 0)
- Xiubo Li wrote:
> Introduced by https://github.com/ceph/ceph/pull/35410 ?
I don't think so. The commit looks to b...
- 01:08 PM Bug #47039: client: mutex lock FAILED ceph_assert(nlock > 0)
- Introduced by https://github.com/ceph/ceph/pull/35410 ?
- 01:08 PM Bug #47039 (Resolved): client: mutex lock FAILED ceph_assert(nlock > 0)
- ...
- 11:30 PM Bug #47033: client: inode ref leak
- Xiubo Li wrote:
> With [1] and [2] I have run the test for very long time and didn't see any errors.
>
> [1] htt...
- 11:28 PM Bug #47033 (Duplicate): client: inode ref leak
- 02:33 PM Bug #47033: client: inode ref leak
- With [1] and [2] I have run the test for very long time and didn't see any errors.
[1] https://github.com/ceph/ce...
- 08:48 AM Bug #47033: client: inode ref leak
- good commit is c8b5f84f49ef74609ba3ea69dea0764ef925ae85
- 08:07 AM Bug #47033: client: inode ref leak
- Zheng Yan wrote:
> It can be easily reproduced by the following program.
>
> [...]
>
> pre-create testdir at root...
- 07:56 AM Bug #47033 (In Progress): client: inode ref leak
- I will take a look at this. Thanks :-)
- 07:29 AM Bug #47033 (Duplicate): client: inode ref leak
- It can be easily reproduced by the following program. ...
- 07:57 PM Bug #46496 (Resolved): pybind/mgr/volumes: subvolume operations throw exception if volume doesn't...
- 05:59 PM Backport #46793 (Rejected): nautilus: pybind/mgr/volumes: subvolume operations throw exception if...
- https://tracker.ceph.com/issues/46792#note-4
- 10:28 AM Backport #46793: nautilus: pybind/mgr/volumes: subvolume operations throw exception if volume doe...
- Please check https://tracker.ceph.com/issues/46792#note-3
- 05:59 PM Backport #46792 (Rejected): octopus: pybind/mgr/volumes: subvolume operations throw exception if ...
- Kotresh Hiremath Ravishankar wrote:
> The issue got introduced by the commit https://github.com/ceph/ceph/pull/32319...
- 10:27 AM Backport #46792: octopus: pybind/mgr/volumes: subvolume operations throw exception if volume does...
- The issue got introduced by the commit https://github.com/ceph/ceph/pull/32319/commits/a44de38b61d598fb0512ea48da0de4...
- 05:56 PM Bug #47006: mon: required client features adding/removing
- Jos Collin wrote:
> Patrick Donnelly wrote:
> > Can you elaborate on what the problem is? Give an example.
>
> [...
- 05:05 AM Bug #47006 (New): mon: required client features adding/removing
- Patrick Donnelly wrote:
> Can you elaborate on what the problem is? Give an example....
- 01:59 PM Bug #47041 (Resolved): MDS recall configuration options not documented yet
- <T1w> Hi, some of the "new" MDS recall configuration options mentioned on https://ceph.io/community/nautilus-cephfs/ ...
- 10:09 AM Backport #46948 (In Progress): nautilus: qa: Fs cleanup fails with a traceback
- 10:05 AM Backport #46947 (In Progress): octopus: qa: Fs cleanup fails with a traceback
- 07:46 AM Feature #47034 (New): mds: readdir for snapshot diff
- make readdir return changed/removed dentries since given snapshot
08/18/2020
- 08:16 PM Backport #47014 (In Progress): octopus: librados|libcephfs: use latest MonMap when creating from ...
- 04:03 PM Backport #47014 (Resolved): octopus: librados|libcephfs: use latest MonMap when creating from Cep...
- https://github.com/ceph/ceph/pull/36705
- 08:13 PM Backport #47013 (In Progress): nautilus: librados|libcephfs: use latest MonMap when creating from...
- 04:02 PM Backport #47013 (Resolved): nautilus: librados|libcephfs: use latest MonMap when creating from Ce...
- https://github.com/ceph/ceph/pull/36704
- 04:57 PM Bug #47015 (Fix Under Review): mds: decoding of enum types on big-endian systems broken
- 04:26 PM Bug #47015 (Resolved): mds: decoding of enum types on big-endian systems broken
- When a struct member that has enum type needs to be encoded or
decoded, we need to use an explicit integer type, sin...
- 04:53 PM Bug #47012 (Need More Info): mds: MDCache.cc: 6418: FAILED ceph_assert(r == 0 || r == -2)
- 04:52 PM Bug #47012: mds: MDCache.cc: 6418: FAILED ceph_assert(r == 0 || r == -2)
- the mds.0 debug_ms log level = 1, and log is in the attachment
- 03:21 PM Bug #47012: mds: MDCache.cc: 6418: FAILED ceph_assert(r == 0 || r == -2)
- please try to reproduce it again with debug_ms = 1
- 03:09 PM Bug #47012 (Need More Info): mds: MDCache.cc: 6418: FAILED ceph_assert(r == 0 || r == -2)
- My mds.0 service (standby, active mds: 4) crashes cyclically; each time the stack information is as follows:
ceph versio...
- 04:52 PM Bug #47006 (Need More Info): mon: required client features adding/removing
- Can you elaborate on what the problem is? Give an example.
- 12:07 PM Bug #47006 (Resolved): mon: required client features adding/removing
- ...
- 04:38 PM Backport #47021 (Resolved): octopus: client: shutdown race fails with status 141
- https://github.com/ceph/ceph/pull/37358
- 04:37 PM Backport #47020 (Resolved): nautilus: client: shutdown race fails with status 141
- https://github.com/ceph/ceph/pull/41593
- 04:35 PM Backport #47018 (Resolved): octopus: mds: kcephfs parse dirfrag's ndist is always 0
- https://github.com/ceph/ceph/pull/37357
- 04:34 PM Backport #47017 (Resolved): nautilus: mds: kcephfs parse dirfrag's ndist is always 0
- https://github.com/ceph/ceph/pull/37177
- 04:34 PM Backport #47016 (Resolved): octopus: mds: fix the decode version
- https://github.com/ceph/ceph/pull/37356
- 04:28 PM Feature #46059 (Resolved): vstart_runner.py: optionally rotate logs between tests
- 04:12 PM Bug #47011 (Fix Under Review): client: Client::open() pass wrong cap mask to path_walk
- 02:23 PM Bug #47011 (Resolved): client: Client::open() pass wrong cap mask to path_walk
- 04:01 PM Fix #46645 (Pending Backport): librados|libcephfs: use latest MonMap when creating from CephContext
- 12:50 PM Bug #47009 (Resolved): TestNFS.test_cluster_set_reset_user_config: command failed with status 32:...
- ...
- 11:55 AM Feature #47005 (Fix Under Review): kceph: add metric for number of pinned capabilities and number...
- Patchwork link: https://patchwork.kernel.org/patch/11720599/
- 11:19 AM Feature #47005 (Resolved): kceph: add metric for number of pinned capabilities and number of dirs...
- 11:19 AM Feature #46866 (In Progress): kceph: add metric for number of pinned capabilities
- 11:17 AM Feature #46866: kceph: add metric for number of pinned capabilities
- The number of pinned capabilities will always equal the total number of s_caps in the kclient.
- 03:40 AM Bug #43039 (Pending Backport): client: shutdown race fails with status 141
- 03:39 AM Bug #46868 (Resolved): client: switch to use ceph_mutex_is_locked_by_me always
- 03:35 AM Bug #46891 (Pending Backport): mds: kcephfs parse dirfrag's ndist is always 0
- 03:35 AM Bug #46926 (Pending Backport): mds: fix the decode version
08/17/2020
- 03:48 PM Bug #46985: common: validate type CephBool cause 'invalid command json'
- Just this commit needs to be backported:
common: fix validate type CephBool cause 'invalid command json'
Fixes: http...
- 08:52 AM Bug #46985 (Fix Under Review): common: validate type CephBool cause 'invalid command json'
- 02:07 AM Bug #46985 (Resolved): common: validate type CephBool cause 'invalid command json'
- ...
- 02:33 PM Bug #46883: kclient: ghost kernel mount
- Patrick Donnelly wrote:
> So there are two issues here:
>
> * umount should not use -l so we aren't papering over...
- 01:45 PM Bug #46883: kclient: ghost kernel mount
- So there are two issues here:
* umount should not use -l so we aren't papering over bugs. Use -f to umount. If -f ...
- 01:42 PM Bug #46887 (Need More Info): kceph: testing branch: hang in workunit by 1/2 clients during tree e...
- Would be good to adjust the qa code to fetch the stack if the process hangs. Get /sys/debug/fs/ceph files as well.
- 10:41 AM Feature #46989 (Fix Under Review): pybind/mgr/nfs: Test mounting of exports created with nfs expo...
- 10:37 AM Feature #46989 (Resolved): pybind/mgr/nfs: Test mounting of exports created with nfs export command
- 09:51 AM Bug #41069: nautilus: test_subvolume_group_create_with_desired_mode fails with "AssertionError: '...
- The code looks ok both on master and nautilus branch. I ran 1000 iterations on master but didn't see the failure. I w...
- 08:44 AM Bug #46988 (Fix Under Review): mds: 'forward loop' when forward_all_requests_to_auth is set
- 08:38 AM Bug #46988 (Resolved): mds: 'forward loop' when forward_all_requests_to_auth is set
- 05:12 AM Bug #46868 (Fix Under Review): client: switch to use ceph_mutex_is_locked_by_me always
- 05:11 AM Tasks #46890 (Fix Under Review): client: add request lock support
08/16/2020
- 04:43 AM Bug #46976 (Fix Under Review): After restarting an mds, its standby-replay mds remained in the "re...
- 04:42 AM Bug #46984 (Fix Under Review): mds: recover files after normal session close
- 04:30 AM Bug #46984 (Resolved): mds: recover files after normal session close
- The client does not flush its cap release before sending the session close request.
- 03:59 AM Bug #42365 (Fix Under Review): client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
08/14/2020
- 06:45 PM Bug #44294: mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- This issue and #44295 were both fixed by the same PR, https://github.com/ceph/ceph/pull/33538, which was backported t...
- 02:00 PM Bug #46976: After restarting an mds, its standby-replay mds remained in the "resolve" state
- MDSRank::calc_recovery_set() should be called by MDSRank::resolve_start
- 09:24 AM Bug #46976 (Resolved): After restarting an mds, its standby-replay mds remained in the "resolve" s...
- In a multimds and standby-replay enabled Ceph cluster, after reducing a filesystem's mds num and restarting an active mds, its ...
- 12:51 PM Backport #46957 (In Progress): octopus: pybind/mgr/nfs: add interface for adding user defined con...
08/13/2020
- 11:31 PM Backport #46860 (Resolved): nautilus: mds: do not raise "client failing to respond to cap release...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36513
m...
- 11:31 PM Backport #46858 (Resolved): nautilus: qa: add debugging for volumes plugin use of libcephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36512
m...
- 11:30 PM Backport #46856 (Resolved): nautilus: client: static dirent for readdir is not thread-safe
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36511
m...
- 08:53 PM Bug #41228 (Resolved): mon: deleting a CephFS and its pools causes MONs to crash
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:39 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- also /ceph/teuthology-archive/yuriw-2020-08-05_20:28:22-fs-wip-yuri2-testing-2020-08-05-1459-octopus-distro-basic-smi...
- 05:37 PM Bug #41228 (New): mon: deleting a CephFS and its pools causes MONs to crash
- This is back but in Octopus. The fix for #40011 doesn't fix this, apparently....
- 08:51 PM Bug #43517 (Resolved): qa: random subvolumegroup collision
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:49 PM Backport #46960 (Resolved): nautilus: cephfs-journal-tool: incorrect read_offset after finding mi...
- https://github.com/ceph/ceph/pull/37479
- 08:49 PM Backport #46959 (Resolved): octopus: cephfs-journal-tool: incorrect read_offset after finding mis...
- https://github.com/ceph/ceph/pull/37854
- 08:49 PM Bug #45662 (Resolved): pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:49 PM Backport #46957 (Resolved): octopus: pybind/mgr/nfs: add interface for adding user defined config...
- https://github.com/ceph/ceph/pull/36635
- 08:48 PM Bug #45910 (Resolved): pybind/mgr/volumes: volume deletion not always removes the associated osd ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:48 PM Bug #46277 (Resolved): pybind/mgr/volumes: get_pool_names may indicate volume does not exist if m...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:47 PM Bug #46565 (Resolved): mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:46 PM Backport #46948 (Resolved): nautilus: qa: Fs cleanup fails with a traceback
- https://github.com/ceph/ceph/pull/36714
- 08:46 PM Backport #46947 (Resolved): octopus: qa: Fs cleanup fails with a traceback
- https://github.com/ceph/ceph/pull/36713
- 08:46 PM Backport #46943 (Resolved): nautilus: mds: segv in MDCache::wait_for_uncommitted_fragments
- https://github.com/ceph/ceph/pull/36968
- 08:46 PM Backport #46942 (Resolved): octopus: mds: segv in MDCache::wait_for_uncommitted_fragments
- https://github.com/ceph/ceph/pull/37355
- 08:45 PM Backport #46941 (Resolved): nautilus: mds: memory leak during cache drop
- https://github.com/ceph/ceph/pull/36967
- 08:45 PM Backport #46940 (Resolved): octopus: mds: memory leak during cache drop
- https://github.com/ceph/ceph/pull/37354
- 08:44 PM Backport #46234 (Resolved): octopus: pybind/mgr/volumes: volume deletion not always removes the a...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36327
m...
- 06:38 PM Backport #46234: octopus: pybind/mgr/volumes: volume deletion not always removes the associated o...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36327
merged
- 08:43 PM Backport #46477 (Resolved): octopus: pybind/mgr/volumes: volume deletion should check mon_allow_p...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36327
m...
- 06:38 PM Backport #46477: octopus: pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36327
merged
- 08:43 PM Backport #46465 (Resolved): octopus: pybind/mgr/volumes: get_pool_names may indicate volume does ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36327
m...
- 06:38 PM Backport #46465: octopus: pybind/mgr/volumes: get_pool_names may indicate volume does not exist i...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36327
merged
- 08:43 PM Backport #46642 (Resolved): octopus: qa: random subvolumegroup collision
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36327
m...
- 06:38 PM Backport #46642: octopus: qa: random subvolumegroup collision
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36327
merged
- 08:43 PM Backport #46712 (Resolved): octopus: mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36299
m...
- 06:38 PM Backport #46712: octopus: mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36299
merged
- 08:43 PM Bug #46572 (Resolved): mgr/nfs: help for "nfs export create" and "nfs export delete" says "<attac...
- 08:43 PM Backport #46632 (Resolved): octopus: mgr/nfs: help for "nfs export create" and "nfs export delete...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36220
m...
- 06:37 PM Backport #46632: octopus: mgr/nfs: help for "nfs export create" and "nfs export delete" says "<at...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36220
merged
- 03:36 PM Bug #46926 (Fix Under Review): mds: fix the decode version
- 03:24 PM Bug #46926 (Resolved): mds: fix the decode version
- https://github.com/ceph/ceph/commit/3fac3b1236c4918e9640e38fe7f5f59efc0a23b9
the decode changes are reverted, but ...
08/12/2020
- 06:10 PM Bug #46906 (Fix Under Review): mds: fix file recovery crash after replaying delayed requests
- 04:37 AM Bug #46906 (Resolved): mds: fix file recovery crash after replaying delayed requests
- When the client-replay stage or active stage had just started, the MDS replayed delayed requests first, then tried to recover f...
- 09:01 AM Bug #46882: client: mount abort hangs: [volumes INFO mgr_util] aborting connection from cephfs 'c...
- In case:...
- 08:56 AM Bug #46882 (Fix Under Review): client: mount abort hangs: [volumes INFO mgr_util] aborting connec...
- 03:45 AM Bug #46905 (Fix Under Review): client: cluster [WRN] evicting unresponsive client smithi122:0 (34...
- Let's just defer cancelling the event timer.
- 03:12 AM Bug #46905 (In Progress): client: cluster [WRN] evicting unresponsive client smithi122:0 (34373),...
- 03:11 AM Bug #46905 (Resolved): client: cluster [WRN] evicting unresponsive client smithi122:0 (34373), af...
From https://pulpito.ceph.com/pdonnell-2020-08-08_02:21:01-multimds-wip-pdonnell-testing-20200808.001303-distro-b...
08/11/2020
- 10:56 PM Feature #45747 (Pending Backport): pybind/mgr/nfs: add interface for adding user defined configur...
- 08:17 PM Fix #46645 (Fix Under Review): librados|libcephfs: use latest MonMap when creating from CephContext
- 01:45 PM Bug #46902: mds: CInode::maybe_export_pin is broken
- Zheng Yan wrote:
> void CInode::maybe_export_pin(bool update)
> {
> if (!g_conf()->mds_bal_export_pin)
> re...
- 01:39 PM Bug #46902 (Rejected): mds: CInode::maybe_export_pin is broken
- void CInode::maybe_export_pin(bool update)
{
if (!g_conf()->mds_bal_export_pin)
return;
  if (!is_dir() || ...
- 08:06 AM Bug #46893 (New): client: check the quota limit for the full path in the rename operation
- In the rename operation, the first top directory with quota set on the path where the target directory is located is ...
- 06:03 AM Bug #46882: client: mount abort hangs: [volumes INFO mgr_util] aborting connection from cephfs 'c...
- The above was from /a/pdonnell-2020-08-08_02:21:01-multimds-wip-pdonnell-testing-20200808.001303-distro-basic-smith...
- 02:35 AM Bug #46882: client: mount abort hangs: [volumes INFO mgr_util] aborting connection from cephfs 'c...
- Additional info:
From the logs we can see that the inode ref was increased by:...
- 02:15 AM Bug #46882: client: mount abort hangs: [volumes INFO mgr_util] aborting connection from cephfs 'c...
- Checked it; it seems the write ops hadn't finished and kept going while the _umount() just stopped the tick(), then t...
- 02:10 AM Bug #46882 (In Progress): client: mount abort hangs: [volumes INFO mgr_util] aborting connection ...
- 04:24 AM Feature #46892 (Fix Under Review): pybind/mgr/volumes: Make number of cloner threads configurable
- 04:23 AM Feature #46892 (Resolved): pybind/mgr/volumes: Make number of cloner threads configurable
- The number of cloner threads is set to 4 and can't be configured.
This is a bottleneck if the system resource is capa...
- 02:35 AM Bug #46891 (Fix Under Review): mds: kcephfs parse dirfrag's ndist is always 0
- 02:12 AM Bug #46891 (Resolved): mds: kcephfs parse dirfrag's ndist is always 0
- ...
- 02:06 AM Tasks #46890 (Closed): client: add request lock support
- For each request it will be protected by its own lock. And the lock order with client_lock will be:...
08/10/2020
- 08:41 PM Tasks #46649 (Resolved): client: make the 'mounted', 'unmounting' and 'initialized' members a sin...
- 08:38 PM Bug #46887 (Can't reproduce): kceph: testing branch: hang in workunit by 1/2 clients during tree ...
- ...
- 08:12 PM Bug #46883: kclient: ghost kernel mount
- I think the problem here is that teuthology is using umount -l. That just detaches the mount from the tree, but defer...
- 07:36 PM Bug #46883 (Resolved): kclient: ghost kernel mount
- Relevant snippets of teuthology log:...
- 07:53 PM Fix #46885 (New): pybind/mgr/mds_autoscaler: add test for MDS scaling with cephadm
- There needs to be teuthology tests that validate its behavior.
- 07:52 PM Documentation #46884 (Resolved): pybind/mgr/mds_autoscaler: add documentation
- Explain how to enable/disable the module. What it does.
- 07:47 PM Feature #40929 (Resolved): pybind/mgr/mds_autoscaler: create mgr plugin to deploy and configure M...
- 06:09 PM Bug #46882 (Resolved): client: mount abort hangs: [volumes INFO mgr_util] aborting connection fro...
- ...
- 02:11 PM Fix #46645 (In Progress): librados|libcephfs: use latest MonMap when creating from CephContext
- 05:40 AM Bug #46831 (Resolved): nautilus: mds: SIGSEGV in MDCache::finish_uncommitted_slave
08/08/2020
08/07/2020
- 11:45 PM Bug #46868 (Resolved): client: switch to use ceph_mutex_is_locked_by_me always
- There is one case: if the client_lock is held by another thread, the check here will also pass.
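A small illustration of the difference (hypothetical DebugMutex; not Ceph's ceph_mutex implementation): an "is locked" style check passes whenever any thread holds the mutex, while an owner-aware check only passes for the thread that actually took it.

    // Hypothetical sketch, not the real ceph_mutex code.
    #include <cassert>
    #include <mutex>
    #include <thread>

    class DebugMutex {
      std::mutex m;
      std::thread::id owner{};  // simplified owner tracking for the sketch
    public:
      void lock()   { m.lock(); owner = std::this_thread::get_id(); }
      void unlock() { owner = std::thread::id(); m.unlock(); }
      bool is_locked() const       { return owner != std::thread::id(); }            // true for any holder
      bool is_locked_by_me() const { return owner == std::this_thread::get_id(); }   // true only for the caller
    };

    void must_hold_lock(DebugMutex &l) {
      // assert(l.is_locked()) could pass while a *different* thread holds the
      // lock; the owner-aware check below catches that case.
      assert(l.is_locked_by_me());
    }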
- 11:25 PM Backport #46860: nautilus: mds: do not raise "client failing to respond to cap release" when clie...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36513
merged
- 05:20 AM Backport #46860 (In Progress): nautilus: mds: do not raise "client failing to respond to cap rele...
- 05:14 AM Backport #46860 (Resolved): nautilus: mds: do not raise "client failing to respond to cap release...
- https://github.com/ceph/ceph/pull/36513
- 11:23 PM Backport #46858: nautilus: qa: add debugging for volumes plugin use of libcephfs
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36512
merged
- 05:07 AM Backport #46858 (In Progress): nautilus: qa: add debugging for volumes plugin use of libcephfs
- 05:03 AM Backport #46858 (Resolved): nautilus: qa: add debugging for volumes plugin use of libcephfs
- https://github.com/ceph/ceph/pull/36512
- 11:22 PM Backport #46856: nautilus: client: static dirent for readdir is not thread-safe
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36511
merged
- 04:55 AM Backport #46856 (In Progress): nautilus: client: static dirent for readdir is not thread-safe
- 04:53 AM Backport #46856 (Resolved): nautilus: client: static dirent for readdir is not thread-safe
- https://github.com/ceph/ceph/pull/36511
- 11:20 PM Bug #46831: nautilus: mds: SIGSEGV in MDCache::finish_uncommitted_slave
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/36462
merged
- 11:12 PM Bug #46434: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- https://pulpito.ceph.com/yuriw-2020-08-07_15:05:02-multimds-wip-yuri4-testing-2020-08-07-1350-nautilus-distro-basic-s...
- 06:08 PM Feature #46866 (Resolved): kceph: add metric for number of pinned capabilities
- Specifically, we want to know how many files are open on the client.
- 06:08 PM Feature #46865 (Resolved): client: add metric for number of pinned capabilities
- Specifically, we want to know how many files are open on the client.
- 08:46 AM Bug #43039 (Fix Under Review): client: shutdown race fails with status 141
- 06:39 AM Bug #43039: client: shutdown race fails with status 141
- The root cause should be the test process reached the open files limit, and the fd returned is -1, or something, when...
- 05:33 AM Bug #43039: client: shutdown race fails with status 141
- Hi Jeff, Patrick
From my test locally, this issue is very similar to https://tracker.ceph.com/issues/45829, which ...
- 07:13 AM Bug #45333: LARGE_OMAP_OBJECTS in pool metadata
- Lately I am getting more of these warnings. If needed I can raise the warning threshold, but it seems it is not exact...
- 05:14 AM Backport #46859 (Resolved): octopus: mds: do not raise "client failing to respond to cap release"...
- https://github.com/ceph/ceph/pull/37353
- 05:14 AM Bug #46830 (Pending Backport): mds: do not raise "client failing to respond to cap release" when ...
- 05:03 AM Backport #46857 (Resolved): octopus: qa: add debugging for volumes plugin use of libcephfs
- https://github.com/ceph/ceph/pull/37352
- 05:02 AM Fix #46851 (Pending Backport): qa: add debugging for volumes plugin use of libcephfs
- 04:52 AM Backport #46855 (Resolved): octopus: client: static dirent for readdir is not thread-safe
- https://github.com/ceph/ceph/pull/37351
- 04:52 AM Bug #46832 (Pending Backport): client: static dirent for readdir is not thread-safe
- 02:55 AM Bug #46853: ceph_test_libcephfs: LibCephFS.TestUtime gets core dumped randomly
- Sometimes I am also getting:...
- 02:52 AM Bug #46853: ceph_test_libcephfs: LibCephFS.TestUtime gets core dumped randomly
- Patrick Donnelly wrote:
> It's interesting you were able to reproduce that locally. This one has been plaguing us fo...
- 01:57 AM Bug #46853: ceph_test_libcephfs: LibCephFS.TestUtime gets core dumped randomly
- It's interesting you were able to reproduce that locally. This one has been plaguing us for a while, Xiubo. I've seen ...
- 01:51 AM Bug #46853 (Duplicate): ceph_test_libcephfs: LibCephFS.TestUtime gets core dumped randomly
- 01:21 AM Bug #46853 (Duplicate): ceph_test_libcephfs: LibCephFS.TestUtime gets core dumped randomly
- With the upstream, when running the ./bin/ceph_test_libcephfs test, I am randomly getting:...
08/06/2020
- 09:11 PM Fix #46851 (Fix Under Review): qa: add debugging for volumes plugin use of libcephfs
- 09:08 PM Fix #46851 (Resolved): qa: add debugging for volumes plugin use of libcephfs
- To aid debugging #46832.
- 09:06 PM Bug #46832 (Fix Under Review): client: static dirent for readdir is not thread-safe
- 08:54 PM Bug #46832 (In Progress): client: static dirent for readdir is not thread-safe
- Another instance with debugging: https://pulpito.ceph.com/pdonnell-2020-08-06_15:29:50-fs-wip-pdonnell-testing-202008...
- 06:42 PM Bug #46844 (Won't Fix): ceph-fuse: writing data can exceed quota when mount with ceph-fuse
- See: https://docs.ceph.com/docs/master/cephfs/quota/#limitations
- 07:51 AM Bug #46844 (Won't Fix): ceph-fuse: writing data can exceed quota when mount with ceph-fuse
- setfattr -n ceph.quota.max_bytes -v 1073741824 /opt/cephfs/test1/
use ceph-fuse to mount the dir /cephfs/test1/ to /m...
- 06:17 PM Bug #46830 (Fix Under Review): mds: do not raise "client failing to respond to cap release" when ...
- 04:06 PM Bug #46830 (In Progress): mds: do not raise "client failing to respond to cap release" when clien...
- 06:03 PM Bug #46823: nautilus: kceph w/ testing branch: mdsc_handle_session corrupt message mds0 len 67
- So I tried reproducing this locally and couldn't get the error to pop. I suspect that the MDS I have is not sending a...
- 01:51 PM Bug #44318 (Duplicate): nautilus: mgr/volumes: exception when logging message (in logging::log())
- https://tracker.ceph.com/issues/46832
So, like this was seen a while back in nautilus. Seems to happen once in a w...
08/05/2020
- 06:23 PM Feature #42451 (Fix Under Review): mds: add root_squash
- 06:04 PM Bug #46832: client: static dirent for readdir is not thread-safe
- Debugging branch: https://github.com/ceph/ceph/pull/36483
- 08:36 AM Bug #46832: client: static dirent for readdir is not thread-safe
- So, this is probably a unicode+py2 thingy. One way to fix this is to log raw strings by prefixing the log message wit...
- 04:38 AM Bug #46832: client: static dirent for readdir is not thread-safe
- Real backtrace:...
- 02:33 PM Bug #46675 (Duplicate): nautilus: fs/upgrade test: Crash: 'wait_until_healthy' reached maximum tr...
- Sorry I missed this tracker ticket when searching. I'll mark this as duplicate since the other already has the fix li...
- 02:27 PM Bug #46831: nautilus: mds: SIGSEGV in MDCache::finish_uncommitted_slave
- Bug is only in nautilus.
- 11:56 AM Bug #46831: nautilus: mds: SIGSEGV in MDCache::finish_uncommitted_slave
- This issue was already reported at https://tracker.ceph.com/issues/46675
- 06:51 AM Bug #46831 (Fix Under Review): nautilus: mds: SIGSEGV in MDCache::finish_uncommitted_slave
- https://github.com/ceph/ceph/pull/36462
- 01:36 PM Feature #46074 (Fix Under Review): mds: provide alternatives to increase the total cephfs subvolu...
- 01:00 PM Bug #46163 (Fix Under Review): mgr/volumes: Clone operation uses source subvolume root directory ...
08/04/2020
- 10:19 PM Bug #46831 (New): nautilus: mds: SIGSEGV in MDCache::finish_uncommitted_slave
- Looks like this occurs shortly after the upgrade from Luminous:...
- 08:01 PM Bug #46831 (Resolved): nautilus: mds: SIGSEGV in MDCache::finish_uncommitted_slave
- ...
- 10:07 PM Bug #46832 (Resolved): client: static dirent for readdir is not thread-safe
- ...
- 07:54 PM Backport #46635 (Resolved): nautilus: mds: null pointer dereference in MDCache::finish_rollback
- 05:38 PM Bug #46830 (Resolved): mds: do not raise "client failing to respond to cap release" when client w...
- If a client has more than mds_min_caps_per_client caps pinned due to open files and the MDS is trying to recall caps ...
- 04:38 PM Bug #46823 (In Progress): nautilus: kceph w/ testing branch: mdsc_handle_session corrupt message ...
- Patch posted upstream and marked for stable:
https://marc.info/?l=ceph-devel&m=159655872206314&w=2
- 02:59 PM Bug #46823: nautilus: kceph w/ testing branch: mdsc_handle_session corrupt message mds0 len 67
- I missed some of the fields. op and seq come from struct ceph_mds_session_head:...
- 11:24 AM Bug #46823: nautilus: kceph w/ testing branch: mdsc_handle_session corrupt message mds0 len 67
- To be clear:...
- 11:18 AM Bug #46823: nautilus: kceph w/ testing branch: mdsc_handle_session corrupt message mds0 len 67
- handle_session only really deals with the "front" part of the message, so the corruption is likely there....
- 02:10 AM Bug #46823 (Resolved): nautilus: kceph w/ testing branch: mdsc_handle_session corrupt message mds...
- ...
- 12:30 PM Backport #46821 (In Progress): nautilus: pybind/mgr/volumes: Add the ability to keep snapshots of...
- 10:41 AM Bug #46426 (New): mds: 8MMDSPing is not an MMDSOp type
08/03/2020
- 06:34 PM Backport #46635 (In Progress): nautilus: mds: null pointer dereference in MDCache::finish_rollback
- 06:21 PM Backport #46821 (Resolved): nautilus: pybind/mgr/volumes: Add the ability to keep snapshots of su...
- https://github.com/ceph/ceph/pull/36448
- 06:21 PM Backport #46820 (Resolved): octopus: pybind/mgr/volumes: Add the ability to keep snapshots of sub...
- https://github.com/ceph/ceph/pull/36803
- 05:13 PM Feature #42451 (In Progress): mds: add root_squash
- 02:35 PM Bug #46817 (Duplicate): ceph fs status prints stack trace
- Duplicate of https://tracker.ceph.com/issues/45633
- 01:32 PM Bug #46817 (Duplicate): ceph fs status prints stack trace
- We had a cluster in an intermediate failed state (mixed versions nautilus and octopus with multiple active MDS). ceph...
- 01:42 PM Feature #46746 (In Progress): mgr/nfs: Add interface to accept yaml file for creating clusters
- 01:42 PM Bug #46747 (In Progress): mds: make rstats in CInode::old_inodes stable
- 01:41 PM Bug #46809 (In Progress): mds: purge orphan objects created by lost async file creation
- 01:02 AM Bug #46809 (In Progress): mds: purge orphan objects created by lost async file creation
08/01/2020
- 11:22 AM Backport #46796 (In Progress): nautilus: mds: Subvolume snapshot directory does not save attribut...
- 11:20 AM Backport #46795 (In Progress): octopus: mds: Subvolume snapshot directory does not save attribute...
07/31/2020
- 09:19 PM Tasks #46768 (Resolved): client: clean up the unnecessary client_lock for _conf->client_trace
- 09:19 PM Bug #46766 (Pending Backport): mds: memory leak during cache drop
- 09:18 PM Bug #46765 (Pending Backport): mds: segv in MDCache::wait_for_uncommitted_fragments
- 09:17 PM Tasks #46682 (Resolved): client: add timer_lock support
- 09:15 PM Bug #46597 (Pending Backport): qa: Fs cleanup fails with a traceback
- 09:14 PM Bug #46282 (Resolved): qa: multiclient connection interruptions by stopping one client
- 09:14 PM Bug #45806 (Resolved): qa/task/vstart_runner.py: setting the network namespace "ceph-ns--tmp-tmpq...
- 09:13 PM Bug #45817 (Resolved): qa: Command failed with status 2: ['sudo', 'bash', '-c', 'ip addr add 192....
- 09:11 PM Feature #45729 (Pending Backport): pybind/mgr/volumes: Add the ability to keep snapshots of subvo...
- 09:10 PM Bug #45575 (Pending Backport): cephfs-journal-tool: incorrect read_offset after finding missing o...
- 09:09 PM Feature #26996 (Resolved): cephfs: get capability cache hits by clients to provide introspection ...
- 05:23 PM Cleanup #46802 (In Progress): mds: do not use asserts for RADOS failures
- https://github.com/ceph/ceph/blob/ec472b7b56eb5ed6ec52aa0bc4c2d18578f1e88c/src/mds/Server.cc#L371
and many others....
- 10:32 AM Backport #46796 (Resolved): nautilus: mds: Subvolume snapshot directory does not save attribute "...
- https://github.com/ceph/ceph/pull/36404
- 10:32 AM Backport #46795 (Resolved): octopus: mds: Subvolume snapshot directory does not save attribute "c...
- https://github.com/ceph/ceph/pull/36403
- 10:32 AM Backport #46793 (Rejected): nautilus: pybind/mgr/volumes: subvolume operations throw exception if...
- 10:31 AM Backport #46792 (Rejected): octopus: pybind/mgr/volumes: subvolume operations throw exception if ...
- 10:31 AM Backport #46790 (Rejected): nautilus: mds slave request 'no_available_op_found'
- 10:31 AM Backport #46789 (Rejected): octopus: mds slave request 'no_available_op_found'
- 10:31 AM Backport #46787 (Resolved): nautilus: client: in _open() the open ref maybe decreased twice, but ...
- https://github.com/ceph/ceph/pull/36966
- 10:31 AM Backport #46786 (Resolved): octopus: client: in _open() the open ref maybe decreased twice, but o...
- https://github.com/ceph/ceph/pull/37249
- 10:30 AM Backport #46784 (Resolved): nautilus: mds/CInode: Optimize only pinned by subtrees check
- https://github.com/ceph/ceph/pull/36965
- 10:30 AM Backport #46783 (Resolved): octopus: mds/CInode: Optimize only pinned by subtrees check
- https://github.com/ceph/ceph/pull/37248
- 06:41 AM Backport #46641 (Resolved): nautilus: qa: random subvolumegroup collision
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36314
m...
07/30/2020
- 06:52 PM Bug #44294 (Resolved): mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- 05:43 PM Bug #44294 (Pending Backport): mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- This should have been flagged for backport.
- 05:51 PM Backport #46778 (Duplicate): nautilus: mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- 05:44 PM Backport #46778 (Duplicate): nautilus: mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- 09:21 AM Bug #46769 (Fix Under Review): qa: Refactor cephfs creation/removal code.
- 08:14 AM Bug #46769 (In Progress): qa: Refactor cephfs creation/removal code.
- 08:14 AM Bug #46769 (Fix Under Review): qa: Refactor cephfs creation/removal code.
- The 'CephFSTestCase' class creates the filesystem based on
'REQUIRE_FILESYSTEM' flag in 'setUp' but the correspondin...
- 08:15 AM Tasks #46768 (Fix Under Review): client: clean up the unnecessary client_lock for _conf->client_t...
- 08:09 AM Tasks #46768 (Resolved): client: clean up the unnecessary client_lock for _conf->client_trace
- There is no need to keep the "cct->_conf->client_trace" access under the
client_lock; it is for the "ceph-syn" and it almos...
- 03:13 AM Fix #46727 (Pending Backport): mds/CInode: Optimize only pinned by subtrees check
- 03:06 AM Bug #46278 (Pending Backport): mds: Subvolume snapshot directory does not save attribute "ceph.qu...
- 03:05 AM Bug #45575 (Fix Under Review): cephfs-journal-tool: incorrect read_offset after finding missing o...
- 03:04 AM Bug #46733 (Closed): Error:EEXIST returned while unprotecting a snap which is not protected
- This issue no longer occurs as the protect/unprotect is deprecated with https://tracker.ceph.com/issues/45371
The pr...
- 03:00 AM Bug #46420 (Resolved): cephfs-shell: Return proper error code instead of 1
- 02:59 AM Bug #46496 (Pending Backport): pybind/mgr/volumes: subvolume operations throw exception if volume...
- 02:57 AM Bug #46583 (Pending Backport): mds slave request 'no_available_op_found'
- 02:55 AM Bug #46664 (Pending Backport): client: in _open() the open ref maybe decreased twice, but only in...
- 02:46 AM Bug #46766 (Fix Under Review): mds: memory leak during cache drop
- 02:44 AM Bug #46766 (Resolved): mds: memory leak during cache drop
- The MDSGatherBuilder used to recall state is not freed.
- 02:39 AM Bug #46765 (Fix Under Review): mds: segv in MDCache::wait_for_uncommitted_fragments
- 02:34 AM Bug #46765 (Resolved): mds: segv in MDCache::wait_for_uncommitted_fragments
- ...
- 02:05 AM Bug #46607 (Closed): nautilus: pybind/mgr/volumes: TypeError: bad operand type for unary -: 'str'
- I believe this is fixed with https://tracker.ceph.com/issues/46464. Therefore I am closing this, please re-open this ...
07/29/2020
- 06:27 PM Bug #46675: nautilus: fs/upgrade test: Crash: 'wait_until_healthy' reached maximum tries (150) af...
- This failure wasn't seen in v14.2.10 release testing,
https://tracker.ceph.com/issues/46039#note-2
https://pulpito....
- 06:10 PM Feature #46074: mds: provide alternatives to increase the total cephfs subvolume snapshot counts ...
- Zheng Yan wrote:
> I only see per-dir limit in the mds code. where is the 400 snapshot per-file-system limit from?
...
- 02:02 PM Feature #46074: mds: provide alternatives to increase the total cephfs subvolume snapshot counts ...
- I only see per-dir limit in the mds code. where is the 400 snapshot per-file-system limit from?
- 02:48 PM Fix #46645: librados|libcephfs: use latest MonMap when creating from CephContext
- Capturing a mail conversation on direction of fix:
> Shyamsundar
Patrick
> MonClient::handle_monmap handles th...
- 01:59 PM Feature #45747 (Fix Under Review): pybind/mgr/nfs: add interface for adding user defined configur...
- 10:07 AM Bug #41069 (New): nautilus: test_subvolume_group_create_with_desired_mode fails with "AssertionEr...
- I see this in a recent nautilus run during 14.2.11 pre-release testing,
http://pulpito.front.sepia.ceph.com/rraja-20...
- 09:28 AM Bug #46747 (In Progress): mds: make rstats in CInode::old_inodes stable
- when modifying dir, MDCache::project_rstat_frag_to_inode may wrongly update rstats in old_inodes of the dir.
- 08:07 AM Feature #46746 (New): mgr/nfs: Add interface to accept yaml file for creating clusters
- nfs cluster create -i <yaml_file>
This will deploy nfs ganesha clusters according to the cluster specification in ya...
- 04:32 AM Bug #43943 (Resolved): qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:31 AM Bug #45530 (Resolved): qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: ['cd', '|...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:30 AM Bug #46025 (Resolved): client: release the client_lock before copying data in read
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:30 AM Bug #46042 (Resolved): mds: EMetablob replay too long will cause mds restart
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:30 AM Fix #46070 (Resolved): client: fix snap directory atime
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:30 AM Bug #46084 (Resolved): client: supplying ceph_fsetxattr with no value unsets xattr
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:18 AM Backport #46466 (Resolved): nautilus: pybind/mgr/volumes: get_pool_names may indicate volume does...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36167
m...
- 04:18 AM Backport #46478 (Resolved): nautilus: pybind/mgr/volumes: volume deletion should check mon_allow_...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36167
m...
- 04:18 AM Backport #46235 (Resolved): nautilus: pybind/mgr/volumes: volume deletion not always removes the ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36167
m...
- 04:17 AM Backport #46470 (Resolved): nautilus: client: release the client_lock before copying data in read
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36294
m...
- 04:17 AM Backport #46388 (Resolved): nautilus: pybind/mgr/volumes: cleanup stale connection hang
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36215
m...
- 04:17 AM Backport #46464 (Resolved): nautilus: mgr/volumes: fs subvolume clones stuck in progress when lib...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36180
m...
- 04:16 AM Backport #46523 (Resolved): nautilus: mds: fix hang issue when accessing a file under a lost pare...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36179
m...
- 04:16 AM Backport #46521 (Resolved): nautilus: mds: deleting a large number of files in a directory causes...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36178
m...
- 04:16 AM Backport #46517 (Resolved): nautilus: client: directory inode can not call release_callback
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36177
m...
- 04:16 AM Backport #46474 (Resolved): nautilus: mds: make threshold for MDS_TRIM warning configurable
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36175
m...
- 04:15 AM Backport #46409 (Resolved): nautilus: client: supplying ceph_fsetxattr with no value unsets xattr
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36173
m...
- 04:15 AM Backport #46310 (Resolved): nautilus: qa/tasks/cephfs/test_snapshots.py: Command failed with stat...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36172
m...
- 04:14 AM Backport #46200 (Resolved): nautilus: qa: "[WRN] evicting unresponsive client smithi131:z (6314),...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36171
m...
- 04:14 AM Backport #46189 (Resolved): nautilus: mds: EMetablob replay too long will cause mds restart
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36170
m...
- 12:31 AM Tasks #46688: client: add inode lock support
- The Inode::inode_lock is responsible for protecting:
1, all the members in the Inode class
2, if the Inode'...
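A minimal sketch of the idea (hypothetical Inode fields; not the actual libcephfs client code): each in-memory inode carries its own mutex guarding its members, so simple member updates do not need the global client_lock.

    // Hypothetical sketch, not the real Client/Inode code.
    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <string>

    struct Inode {
      std::mutex inode_lock;   // protects the members below
      int ref = 0;
      uint64_t size = 0;
      std::map<std::string, std::string> xattrs;

      void set_size(uint64_t s) {
        std::lock_guard<std::mutex> l(inode_lock);
        size = s;              // member update guarded by the per-inode lock
      }
    };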
07/28/2020
- 08:26 PM Backport #46187 (Resolved): nautilus: client: fix snap directory atime
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36169
m...
- 06:10 PM Bug #46733 (Triaged): Error:EEXIST returned while unprotecting a snap which is not protected
- 10:25 AM Bug #46733 (Closed): Error:EEXIST returned while unprotecting a snap which is not protected
- Double unprotect, or unprotect on a snap which is not protected, gives the EEXIST error code.
[root@node /]# ceph fs subvo... - 03:52 PM Bug #41581 (In Progress): pybind/mgr: Fix subvolume options
- 01:04 PM Bug #41581: pybind/mgr: Fix subvolume options
- * In Progress
- 02:32 PM Backport #46234 (In Progress): octopus: pybind/mgr/volumes: volume deletion not always removes th...
- 02:32 PM Backport #46477 (In Progress): octopus: pybind/mgr/volumes: volume deletion should check mon_allo...
- 02:32 PM Backport #46465 (In Progress): octopus: pybind/mgr/volumes: get_pool_names may indicate volume do...
- 02:31 PM Backport #46642 (In Progress): octopus: qa: random subvolumegroup collision
- 08:22 AM Backport #46641 (In Progress): nautilus: qa: random subvolumegroup collision
07/27/2020
- 09:31 PM Bug #46438 (In Progress): mds: add vxattr for querying inherited layout
- 06:41 PM Bug #46608 (Duplicate): qa: thrashosds: log [ERR] : 4.0 has 3 objects unfound and apparently lost
- 01:40 PM Bug #46608 (Need More Info): qa: thrashosds: log [ERR] : 4.0 has 3 objects unfound and apparently...
- 05:50 PM Fix #46727 (Fix Under Review): mds/CInode: Optimize only pinned by subtrees check
- 05:49 PM Fix #46727 (Pending Backport): mds/CInode: Optimize only pinned by subtrees check
- 03:37 PM Fix #46727 (Resolved): mds/CInode: Optimize only pinned by subtrees check
- Per Patrick's request:
https://github.com/ceph/ceph/pull/36288
- 03:35 PM Backport #46470: nautilus: client: release the client_lock before copying data in read
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36294
merged
- 03:34 PM Backport #46388: nautilus: pybind/mgr/volumes: cleanup stale connection hang
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36215
merged
- 03:34 PM Backport #46464: nautilus: mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36180
merged
- 03:33 PM Backport #46523: nautilus: mds: fix hang issue when accessing a file under a lost parent directory
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36179
merged
- 03:32 PM Backport #46521: nautilus: mds: deleting a large number of files in a directory causes the file s...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36178
merged
- 03:31 PM Backport #46517: nautilus: client: directory inode can not call release_callback
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36177
merged
- 03:30 PM Backport #46474: nautilus: mds: make threshold for MDS_TRIM warning configurable
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36175
merged
- 03:29 PM Backport #46409: nautilus: client: supplying ceph_fsetxattr with no value unsets xattr
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36173
merged
- 03:29 PM Backport #46310: nautilus: qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: ['cd'...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36172
merged
- 03:28 PM Backport #46200: nautilus: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304....
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36171
merged
- 03:28 PM Backport #46189: nautilus: mds: EMetablob replay too long will cause mds restart
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36170
merged
- 03:27 PM Backport #46187: nautilus: client: fix snap directory atime
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36169
merged
- 02:27 PM Backport #46712 (In Progress): octopus: mgr/nfs: Ensure pseudoroot path is absolute and is not ju...
- 11:31 AM Backport #46712 (Resolved): octopus: mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- https://github.com/ceph/ceph/pull/36299
- 01:39 PM Bug #46675 (Need More Info): nautilus: fs/upgrade test: Crash: 'wait_until_healthy' reached maxim...
- 11:36 AM Bug #42723 (Resolved): pybind/mgr/volumes: add upgrade testing
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:34 AM Bug #44579 (Resolved): qa: commit 9f6c764f10f break qa code in several places
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:33 AM Bug #45935 (Resolved): mds: cap revoking requests didn't success when the client doing reconnecti...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:31 AM Documentation #46571 (Resolved): mgr/nfs: Update about nfs ganesha cluster deployment using cepha...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:31 AM Bug #46579 (Resolved): mgr/nfs: Remove NParts and Cache_Size from MDCACHE block
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
07/26/2020
- 08:13 PM Backport #46585: octopus: mgr/nfs: Update about nfs ganesha cluster deployment using cephadm in v...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36224
m...
- 08:13 PM Backport #46631: octopus: mgr/nfs: Remove NParts and Cache_Size from MDCACHE block
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36224
m...
- 08:02 PM Backport #46191: nautilus: mds: cap revoking requests didn't success when the client doing reconn...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35841
m...
- 08:01 PM Backport #46012 (Resolved): nautilus: qa: commit 9f6c764f10f break qa code in several places
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35601
m...
- 08:01 PM Backport #45854 (Resolved): nautilus: cephfs-journal-tool: NetHandler create_socket couldn't crea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35401
m...
- 08:01 PM Backport #44487: nautilus: pybind/mgr/volumes: add upgrade testing
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34461
m...
07/25/2020
- 07:51 AM Backport #46191 (Resolved): nautilus: mds: cap revoking requests didn't success when the client d...
07/24/2020
- 07:06 PM Backport #46191: nautilus: mds: cap revoking requests didn't success when the client doing reconn...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35841
merged
- 06:24 PM Backport #46585 (Resolved): octopus: mgr/nfs: Update about nfs ganesha cluster deployment using c...
- 06:24 PM Backport #46631 (Resolved): octopus: mgr/nfs: Remove NParts and Cache_Size from MDCACHE block
- 12:13 AM Bug #46565 (Pending Backport): mgr/nfs: Ensure pseudoroot path is absolute and is not just /
07/23/2020
- 04:58 PM Fix #46696 (Resolved): mds: pre-fragment distributed ephemeral pin directories to distribute the ...
- As a workaround for #46648, pre-micro-fragment a directory that is distributed so that a single MDS does not track al...
- 09:07 AM Tasks #46688 (In Progress): client: add inode lock support
- 09:07 AM Tasks #46688 (Fix Under Review): client: add inode lock support
- We can add one private lock for each inode; it will give better concurrency and could improve performance.
The lock seq...
- 07:07 AM Backport #44487 (Resolved): nautilus: pybind/mgr/volumes: add upgrade testing
- 06:11 AM Tasks #46682 (Fix Under Review): client: add timer_lock support
- 05:31 AM Tasks #46682 (Resolved): client: add timer_lock support
- This will help part of the Client::tick() code get rid of the big client_lock, and at the same time make the Client::...
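A rough sketch of that direction (hypothetical Ticker class; not the actual Client::tick()/SafeTimer code): the periodic tick thread waits on its own timer_lock, so the tick no longer has to take the big client_lock just to run.

    // Hypothetical sketch, not the real Client/SafeTimer code.
    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    class Ticker {
      std::mutex timer_lock;             // guards only the timer state below
      std::condition_variable cond;
      bool stopping = false;
      std::thread thr;

      void tick_loop() {
        std::unique_lock<std::mutex> l(timer_lock);
        while (!stopping) {
          cond.wait_for(l, std::chrono::seconds(1));
          if (stopping)
            break;
          // Periodic work that only needs timer state runs here; anything that
          // needs client state would take the client lock separately.
        }
      }
    public:
      void start() { thr = std::thread([this] { tick_loop(); }); }
      void stop() {
        {
          std::lock_guard<std::mutex> l(timer_lock);
          stopping = true;
        }
        cond.notify_all();
        if (thr.joinable())
          thr.join();
      }
      ~Ticker() { stop(); }
    };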
- 02:21 AM Bug #46278 (Fix Under Review): mds: Subvolume snapshot directory does not save attribute "ceph.qu...
- 02:13 AM Feature #46680 (New): pybind/mgr/mds_autoscaler: deploy larger or smaller (RAM) MDS in response t...
- As a follow-up to #40929, it would be great to start deployment of MDS with smaller ones (like 4GB RAM). This would b...
- 01:53 AM Cleanup #46618 (Resolved): client: clean up the fuse client code