Activity
From 12/15/2020 to 01/13/2021
01/13/2021
- 06:45 PM Bug #48863: cephfs-shell should allow changing all mode bits
- The issue is that you can't currently set S_ISUID, S_ISGID or S_ISVTX. We should allow that within cephfs-shell.
I...
- 06:44 PM Bug #48863 (Resolved): cephfs-shell should allow changing all mode bits
- Currently, cephfs-shell says:...
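For illustration, a minimal sketch of what accepting the special bits could look like; parse_mode() is a hypothetical helper, not cephfs-shell's actual code:

    import stat

    FULL_MODE_MASK = 0o7777  # rwx for user/group/other plus setuid/setgid/sticky

    def parse_mode(mode_str):
        # Hypothetical helper: accept the special bits instead of
        # rejecting anything above 0o777.
        mode = int(mode_str, 8)
        if mode & ~FULL_MODE_MASK:
            raise ValueError("invalid mode: %s" % mode_str)
        return mode

    assert parse_mode("4755") & stat.S_ISUID  # setuid
    assert parse_mode("2755") & stat.S_ISGID  # setgid
    assert parse_mode("1777") & stat.S_ISVTX  # sticky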
- 04:24 PM Bug #48808 (Resolved): mon/MDSMonitor: `fs rm` is not idempotent
- 04:23 PM Bug #48834 (Resolved): qa: MDS_SLOW_METADATA_IO with osd thrasher
- 03:45 PM Backport #48859 (Resolved): nautilus: pybind/mgr/volumes: inherited snapshots should be filtered ...
- https://github.com/ceph/ceph/pull/39292
- 03:45 PM Backport #48858 (Resolved): octopus: pybind/mgr/volumes: inherited snapshots should be filtered o...
- https://github.com/ceph/ceph/pull/39390
- 03:44 PM Bug #48501 (Pending Backport): pybind/mgr/volumes: inherited snapshots should be filtered out of ...
- 01:02 PM Backport #47095: octopus: mds: provide alternatives to increase the total cephfs subvolume snapsh...
- Andras Sali wrote:
> Really looking forward to getting this fix with 15.2.9 - is there a planned release date?
@A...
- 03:42 AM Bug #48778: Setting quota triggered ops storms on meta pool
- Today I was able to shut down all clients simultaneously, and I did. The write operations were still happening, eve...
01/12/2021
- 04:59 PM Documentation #48838 (Resolved): document ms_mode options in mount.ceph manpage
- 04:19 PM Backport #47095: octopus: mds: provide alternatives to increase the total cephfs subvolume snapsh...
- Really looking forward to getting this fix with 15.2.9 - is there a planned release date?
- 03:29 PM Bug #48770 (Resolved): qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClusterResize)"
- 01:46 PM Bug #47294 (Fix Under Review): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Fix at: https://github.com/ceph/ceph/pull/38858
Thanks for spotting that.
- 12:05 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- That is probably the underlying cause. I'll push a patch fixing that tomorrow, then run a bunch of tests and see if th...
- 08:40 AM Bug #48778: Setting quota triggered ops storms on meta pool
- Small question: why do we see client_request(mds.0:61937457 setxattr #0x10006a64656 ceph.quota caller_uid=0, caller_g...
- 05:54 AM Bug #48778: Setting quota triggered ops storms on meta pool
- Thank you for your reply, Patrick.
I already suspected the requests might come from the clients, but my initial de...
01/11/2021
- 11:48 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- By the way, something odd I noticed today was:
https://github.com/ceph/ceph/blob/d20916964984242e513a645bd275fad89...
- 11:42 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Adam Emerson wrote:
> > This doesn't look quite the same? Before we were having the opera...
- 07:28 PM Bug #48839 (Resolved): qa: Error: Unable to find a match: cephfs-top
- My fault: bad scheduling.
- 06:58 PM Bug #48839 (Resolved): qa: Error: Unable to find a match: cephfs-top
- ...
- 06:40 PM Documentation #48838 (In Progress): document ms_mode options in mount.ceph manpage
- 06:37 PM Documentation #48838 (Resolved): document ms_mode options in mount.ceph manpage
- We recently merged a patch to support the ms_mode= option in mount.ceph. Document the new option in the manpage.
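For the manpage, a plausible usage example might be @mount -t ceph 192.168.0.1:3300:/ /mnt/cephfs -o name=admin,ms_mode=prefer-crc@; the value set {legacy, crc, secure, prefer-crc, prefer-secure} comes from the kernel's msgr2 support, though the exact syntax shown here is only a sketch.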
- 06:20 PM Backport #48837 (Resolved): nautilus: have mount helper pick appropriate mon sockets for ms_mode ...
- https://github.com/ceph/ceph/pull/39133
- 06:20 PM Backport #48836 (Resolved): octopus: have mount helper pick appropriate mon sockets for ms_mode v...
- https://github.com/ceph/ceph/pull/40763
- 06:18 PM Bug #48835 (New): qa: add ms_mode random choice to kclient tests
- Building on https://github.com/ceph/ceph/pull/38788 (#48765): modify kernel_client.py to conditionally set ms_mode fo...
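A hedged sketch of the proposed qa change; the ms_mode values follow mount.ceph(8), and the hook into kernel_client.py is illustrative:

    import random

    MS_MODES = ["legacy", "crc", "secure", "prefer-crc", "prefer-secure"]

    def kclient_mount_options(base_opts):
        # Append a randomly chosen ms_mode to the kclient mount options.
        return base_opts + ["ms_mode={}".format(random.choice(MS_MODES))]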
- 06:15 PM Bug #48765 (Pending Backport): have mount helper pick appropriate mon sockets for ms_mode value
- 06:13 PM Bug #48661 (Resolved): mds: reserved can be set on feature set
- 06:11 PM Bug #48834 (Fix Under Review): qa: MDS_SLOW_METADATA_IO with osd thrasher
- 06:09 PM Bug #48834 (Resolved): qa: MDS_SLOW_METADATA_IO with osd thrasher
- ...
- 06:02 PM Bug #48833 (New): snap_rm hang during osd thrashing
- ...
- 05:47 PM Bug #48832: qa: fsstress w/ valgrind causes MDS to be blocklisted
- Similar one: /ceph/teuthology-archive/pdonnell-2021-01-10_18:07:36-fs-wip-pdonnell-testing-20210110.050947-distro-bas...
- 05:46 PM Bug #48832 (New): qa: fsstress w/ valgrind causes MDS to be blocklisted
- ...
- 05:42 PM Bug #48831 (New): qa: ERROR: test_snapclient_cache
- ...
- 05:40 PM Bug #48830 (Resolved): pacific: qa: ERROR: test_idempotency
- ...
- 05:37 PM Bug #48772: qa: pjd: not ok 9, 44, 80
- Also k-stock:...
- 02:50 PM Bug #48772 (Triaged): qa: pjd: not ok 9, 44, 80
- Both tests are with the kernel's testing branch.
- 05:11 PM Bug #44100: cephfs rsync kworker high load.
- I also encountered this on Ubuntu 20.04 @Linux 5.4.0-60-generic #67-Ubuntu SMP Tue Jan 5 18:31:36 UTC 2021@
The clus...
- 04:39 PM Feature #48602 (Resolved): `cephfs-top` frontend utility
- 04:30 PM Bug #42887 (Won't Fix): tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file...
- 04:30 PM Bug #42061 (Won't Fix): volume_client: AssertionError: 237 != 8
- 04:28 PM Bug #42724 (Won't Fix): pybind/mgr/volumes: confirm backwards-compatibility of ceph_volume_client.py
- 02:58 PM Bug #48760 (Triaged): qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 02:56 PM Bug #48763 (Need More Info): mds memory leak
- Can you set `debug mds = 10` during the event so we can get an idea what the MDS is doing during this time, assuming ...
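For reference, the ceph.conf form @debug mds = 10@ maps to the debug_mds config key, so at runtime this can be set with @ceph config set mds debug_mds 10@.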
- 02:54 PM Bug #48766 (Triaged): qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVo...
- 02:47 PM Bug #48773 (Triaged): qa: scrub does not complete
- 02:45 PM Bug #48778 (Need More Info): Setting quota triggered ops storms on meta pool
- This doesn't sound like a bug we'd expect. I think you may have some clients executing these setxattr requests but I'...
- 08:53 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- Tim Serong wrote:
> How was your test cluster deployed (vstart/cstart/something else)? I'd like to see if I'm able ...
- 06:55 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- I tried reproducing this (admittedly with a downstream SUSE build), and TERM seemed to have no effect, while KILL *di...
- 08:03 AM Feature #48404 (In Progress): client: add a ceph.caps vxattr
- 08:02 AM Backport #48195: nautilus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvo...
- Patrick/Ramana,
I am planning to backport this along with [1] which is under review. The fix [1] persists the auth...
- 08:02 AM Backport #48196: octopus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvol...
- Patrick/Ramana,
I am planning to backport this along with [1] which is under review. The fix [1] persists the auth...
01/10/2021
- 05:04 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Adam Emerson wrote:
> This doesn't look quite the same? Before we were having the operation fail with a definite tim...
- 04:47 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- This doesn't look quite the same? Before, we were having the operation fail with a definite timeout, and I can't find t...
01/09/2021
- 03:10 AM Bug #48811 (Closed): qa: fs/snaps/snaptest-realm-split.sh hang
- https://github.com/ceph/ceph/pull/38732
- 03:07 AM Bug #48757 (Resolved): qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon mds.f"
01/08/2021
- 11:20 PM Backport #48814 (Resolved): nautilus: mds: spurious wakeups in cache upkeep
- https://github.com/ceph/ceph/pull/39130
- 11:20 PM Backport #48813 (Resolved): octopus: mds: spurious wakeups in cache upkeep
- https://github.com/ceph/ceph/pull/40743
- 11:16 PM Bug #48753 (Pending Backport): mds: spurious wakeups in cache upkeep
- 11:14 PM Bug #48707 (Resolved): client: unmount() doesn't dump the cache
- 11:07 PM Bug #48812 (Resolved): qa: test_scrub_pause_and_resume_with_abort failure
- ...
- 11:06 PM Bug #48365: qa: ffsb build failure on CentOS 8.2
- I haven't seen this failure recently, but I did get this:...
- 11:00 PM Bug #48811: qa: fs/snaps/snaptest-realm-split.sh hang
- A group of probably related failures:...
- 10:57 PM Bug #48811 (Closed): qa: fs/snaps/snaptest-realm-split.sh hang
- ...
- 08:12 PM Bug #48808 (Fix Under Review): mon/MDSMonitor: `fs rm` is not idempotent
- 08:04 PM Bug #48808 (Resolved): mon/MDSMonitor: `fs rm` is not idempotent
- ...
- 07:27 PM Bug #48756 (Resolved): qa: kclient does not synchronously write with O_DIRECT
- 07:26 PM Fix #48121 (Resolved): qa: merge fs/multimds suites
- 06:47 PM Bug #48702 (Resolved): qa: fwd_scrub should only scrub rank 0
- 06:39 PM Bug #48514 (Resolved): mgr/nfs: Don't prefix 'ganesha-' to cluster id
- 06:37 PM Bug #47294 (Need More Info): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Looks like this is still here, maybe more racy than before:...
- 06:26 PM Bug #48805 (Resolved): mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blog...
- ...
- 06:19 PM Feature #18513 (Resolved): MDS: scrub: forward scrub reports missing backtraces on new files as d...
- This is fixed in the current code.
- 04:38 PM Fix #48802 (Resolved): mds: define CephFS errors that replace standard errno values
- The CephFS protocol depends on errno numbers, which may vary by operating system. Copy the Linux values we use into in...
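A hedged illustration of the approach; the constant names and mapping below are illustrative, not the actual fix:

    # The Linux errno values become the canonical wire values.
    CEPHFS_EAGAIN = 11    # FreeBSD's EAGAIN is 35, so it needs translation
    CEPHFS_ENODATA = 61   # ENODATA is not defined on every platform

    FREEBSD_ERRNO_TO_WIRE = {35: CEPHFS_EAGAIN}  # illustrative, not exhaustive

    def errno_to_wire(host_errno):
        # Translate a host errno to the fixed protocol value before sending.
        return FREEBSD_ERRNO_TO_WIRE.get(host_errno, host_errno)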
01/07/2021
- 09:58 PM Feature #48791: mds: support file block size
- Technically, I think the blocksize can be anything >= 8 bytes or so. Too small or too large a block will be cumbersome to...
- 09:53 PM Feature #48791 (Rejected): mds: support file block size
- The new fscrypt feature in the kernel client that is under development needs to be able to prevent the MDS from trunc...
- 05:57 PM Bug #48203 (Resolved): qa: quota failure
- Luis Henriques wrote:
> Patrick Donnelly wrote:
> > Hey Luis, I think this is still broken; the revert didn't work:...
- 04:42 PM Bug #48203: qa: quota failure
- Patrick Donnelly wrote:
> Hey Luis, I think this is still broken; the revert didn't work: https://pulpito.ceph.com/t...
- 03:50 PM Bug #48203 (Need More Info): qa: quota failure
- Hey Luis, I think this is still broken; the revert didn't work: https://pulpito.ceph.com/teuthology-2021-01-03_03:15:...
- 12:22 PM Backport #48457 (Resolved): nautilus: client: fix crash when doing remount in none fuse case
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38467
m...
- 12:22 PM Backport #48110 (Resolved): nautilus: client: ::_read fails to advance pos at EOF checking
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37991
m...
- 12:21 PM Backport #48097 (Resolved): nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37988
m...
- 12:21 PM Backport #48095 (Resolved): nautilus: mds: fix file recovery crash after replaying delayed requests
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37986
m...
- 05:28 AM Bug #48778 (Need More Info): Setting quota triggered ops storms on meta pool
- A few days ago, I toyed with setting a quota on a handful (less than 60) directories on my CephFS filesystem. At the ...
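For context, CephFS directory quotas like these are set through the ceph.quota vxattrs; a minimal example (the path is illustrative, and os.setxattr is Linux-only):

    import os

    # Limit the directory to ~100 GB; setting the value to b"0" removes the quota.
    os.setxattr("/mnt/cephfs/mydir", "ceph.quota.max_bytes", b"100000000000")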
01/06/2021
- 11:20 PM Backport #48457: nautilus: client: fix crash when doing remount in none fuse case
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38467
merged
- 11:20 PM Backport #48110: nautilus: client: ::_read fails to advance pos at EOF checking
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37991
merged
- 11:19 PM Backport #48097: nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37988
merged
- 11:19 PM Backport #48095: nautilus: mds: fix file recovery crash after replaying delayed requests
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37986
merged
- 09:35 PM Documentation #48585 (Resolved): mds_cache_trim_decay_rate misnamed?
- 08:31 PM Bug #48773 (In Progress): qa: scrub does not complete
- ...
- 08:29 PM Bug #48772 (Need More Info): qa: pjd: not ok 9, 44, 80
- ...
- 08:25 PM Bug #48771 (New): qa: iogen: workload fails to cause balancing
- Not really a bug, but it causes a test failure and is worth investigating:...
- 07:54 PM Bug #48765 (Fix Under Review): have mount helper pick appropriate mon sockets for ms_mode value
- 02:13 PM Bug #48765 (Resolved): have mount helper pick appropriate mon sockets for ms_mode value
- Ilya recently added msgr2 support to the kclient, but the mount helper still ignores any v2 addresses when mounting. ...
- 07:12 PM Bug #48770 (Fix Under Review): qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClust...
- 06:56 PM Bug #48770 (Resolved): qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClusterResize)"
- ...
- 05:50 PM Bug #48766: qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVolumeClient)
- master failure: /ceph/teuthology-archive/teuthology-2020-12-27_03:15:03-fs-master-distro-basic-smithi/5738903/teuthol...
- 02:23 PM Bug #48766 (Duplicate): qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.Test...
- test_evict_client fails in [1] and [2].
[1] http://qa-proxy.ceph.com/teuthology/jcollin-2021-01-05_16:12:23-fs-wip...
- 03:02 PM Bug #48760: qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- failure in master:
/ceph/teuthology-archive/teuthology-2021-01-05_03:15:02-fs-master-distro-basic-smithi/5754681/t...
- 04:37 AM Bug #48760 (Can't reproduce): qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- Job 5756405 [1] fails with the error below.
[1] http://qa-proxy.ceph.com/teuthology/jcollin-2021-01-05_16:12:23-fs-wip...
- 02:51 PM Documentation #48531 (Resolved): doc/cephfs: "ceph fs new" command is, ironically, old. The new (...
- 02:20 PM Feature #44928 (Fix Under Review): mgr/volumes: evict clients based on auth ID and subvolume mounted
- 01:51 PM Bug #45344: doc: Table Of Contents doesn't work
- I spoke to Patrick (the creator and owner of the CephFS documentation) about this, and for the time being, the rst fi...
- 01:43 PM Bug #48763: mds memory leak
- ...
- 09:47 AM Bug #48763 (Need More Info): mds memory leak
- I have a possible memory leak in the 14.2.10 MDS. It suddenly uses 107 GB of RAM, up from around 70.
Once mds starts its eat...
- 12:57 PM Bug #44565 (In Progress): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || st...
01/05/2021
- 09:45 PM Bug #48756 (Fix Under Review): qa: kclient does not synchronously write with O_DIRECT
- 08:15 PM Bug #48756: qa: kclient does not synchronously write with O_DIRECT
- Trying to reproduce on master: https://pulpito.ceph.com/pdonnell-2021-01-05_20:15:07-fs:workload-master-distro-basic-...
- 08:09 PM Bug #48756 (Resolved): qa: kclient does not synchronously write with O_DIRECT
- ...
- 08:35 PM Bug #48757 (Fix Under Review): qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon md...
- 08:30 PM Bug #48757 (Resolved): qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon mds.f"
- /ceph/teuthology-archive/pdonnell-2020-12-24_22:49:03-fs:workload-wip-pdonnell-testing-20201224.195406-distro-basic-s...
- 05:07 PM Bug #48753 (Fix Under Review): mds: spurious wakeups in cache upkeep
- 05:06 PM Bug #48753 (Resolved): mds: spurious wakeups in cache upkeep
- ...
- 04:39 PM Documentation #48731 (Resolved): mgr/nfs: Add info related to rook, clarify pseudo path and dashb...
- 03:02 PM Feature #46074 (Resolved): mds: provide alternatives to increase the total cephfs subvolume snaps...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:00 PM Bug #48555 (Resolved): pybind/ceph_volume_client: allows authorize on auth_ids not created throug...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:08 PM Documentation #48531 (Fix Under Review): doc/cephfs: "ceph fs new" command is, ironically, old. T...
01/04/2021
- 02:44 PM Bug #48673 (Need More Info): High memory usage on standby replay MDS
- 02:44 PM Bug #48711 (Triaged): mds: standby-replay mds abort when replay metablob
- 11:46 AM Feature #45746 (In Progress): mgr/nfs: Add interface to update export
- 10:09 AM Feature #48736 (Fix Under Review): qa: enable debug loglevel kclient test suites
- 04:24 AM Feature #48736 (In Progress): qa: enable debug loglevel kclient test suites
- 03:43 AM Feature #48736 (Resolved): qa: enable debug loglevel kclient test suites
- This is helpful when debugging and resolving bugs.
12/31/2020
- 12:25 PM Documentation #48731 (In Progress): mgr/nfs: Add info related to rook, clarify pseudo path and da...
- 12:19 PM Documentation #48731 (Resolved): mgr/nfs: Add info related to rook, clarify pseudo path and dashb...
12/30/2020
- 01:46 AM Bug #48679 (Fix Under Review): client: items pinned in cache preventing unmount
- 01:35 AM Bug #48679: client: items pinned in cache preventing unmount
- Xiubo Li wrote:
> For example for the inode 0x10000000e51:
>
> [...]
>
> Because it has the Fb cap, so the flu...
12/28/2020
- 09:27 AM Backport #47085 (Resolved): octopus: common: validate type CephBool cause 'invalid command json'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37362
m...
- 09:26 AM Backport #47095 (Resolved): octopus: mds: provide alternatives to increase the total cephfs subvo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38553
m...
- 09:25 AM Backport #48372 (Resolved): octopus: client: dump which fs is used by client for multiple-fs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38551
m...
12/24/2020
- 11:27 AM Bug #48679: client: items pinned in cache preventing unmount
- For example, for the inode 0x10000000e51:...
- 04:35 AM Bug #47662 (Resolved): mds: try to replicate hot dir to restarted MDS
- 04:33 AM Fix #48053 (Resolved): qa: update test_readahead to work with the kernel
- 04:32 AM Bug #48701 (Resolved): pybind/cephfs: MCommand message is constructed with command separated into...
- 04:20 AM Bug #47294 (Resolved): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- 02:55 AM Bug #48711 (Closed): mds: standby-replay mds abort when replay metablob
- Ceph Version 14.2.15
OS: CentOS 7.6.1810
We created a fs that has three active mds, three standby-replay mds, three...
- 12:24 AM Feature #44192 (Fix Under Review): mds: stable multimds scrub
12/23/2020
- 08:25 AM Bug #48707 (Fix Under Review): client: unmount() doesn't dump the cache
- 08:23 AM Bug #48707 (Resolved): client: unmount() doesn't dump the cache
- delay_put_inodes() is called by tick() once per second, and when _unmount() is waiting for ...
- The dashboard can detect volume/nfs exports.
- 07:50 AM Bug #48706 (New): mgr/nfs: Does not detect exports created by dashboard
- ...
- 06:28 AM Bug #48679 (In Progress): client: items pinned in cache preventing unmount
- 05:54 AM Bug #48559 (Fix Under Review): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 02:40 AM Bug #44565: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XL...
- We have just hit this in 14.2.11, with 3 active mds.
mds log is at ceph-post-file: b1a56b74-6fbe-41bb-adcd-183695c39...
12/22/2020
- 09:14 PM Feature #48704 (New): mds: recall caps proportional to the number issued
- mds_recall_max_caps may wipe out the client cache for small clients. It may also not be large enough for very aggress...
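A hedged sketch of the proposed behavior; the parameter names are illustrative, not actual MDS config options:

    def caps_to_recall(num_caps_issued, recall_fraction=0.1, recall_min=100):
        # Recall a fraction of what the client actually holds, with a floor,
        # instead of a single fixed mds_recall_max_caps for every client.
        return max(recall_min, int(num_caps_issued * recall_fraction))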
- 06:30 PM Backport #48703 (Rejected): octopus: mgr/nfs: Add tests for readonly exports
- 06:26 PM Feature #48622 (Pending Backport): mgr/nfs: Add tests for readonly exports
- 06:01 PM Bug #48702 (Fix Under Review): qa: fwd_scrub should only scrub rank 0
- 05:42 PM Bug #48702 (Resolved): qa: fwd_scrub should only scrub rank 0
- ...
- 05:45 PM Backport #47085: octopus: common: validate type CephBool cause 'invalid command json'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37362
merged
- 05:30 PM Bug #48701 (Fix Under Review): pybind/cephfs: MCommand message is constructed with command separa...
- 05:28 PM Bug #48701 (Resolved): pybind/cephfs: MCommand message is constructed with command separated into...
- ...
- 04:47 PM Backport #48111 (Resolved): octopus: doc: document MDS recall configurations
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38202
m...
- 04:44 PM Backport #48111: octopus: doc: document MDS recall configurations
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38202
merged
- 04:47 PM Backport #47095: octopus: mds: provide alternatives to increase the total cephfs subvolume snapsh...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38553
merged
- 04:47 PM Backport #48191 (Resolved): octopus: mds: throttle workloads which acquire caps faster than the c...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38095
m...
- 04:44 PM Backport #48191: octopus: mds: throttle workloads which acquire caps faster than the client can r...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38095
merged
- 04:47 PM Backport #48109 (Resolved): octopus: client: ::_read fails to advance pos at EOF checking
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37989
m...
- 04:43 PM Backport #48109: octopus: client: ::_read fails to advance pos at EOF checking
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37989
merged
- 04:47 PM Backport #48098 (Resolved): octopus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37987
m...
- 04:43 PM Backport #48098: octopus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37987
merged
- 04:46 PM Backport #48372: octopus: client: dump which fs is used by client for multiple-fs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38551
merged
- 04:46 PM Backport #48096 (Resolved): octopus: mds: fix file recovery crash after replaying delayed requests
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37985
m...
- 04:42 PM Backport #48096: octopus: mds: fix file recovery crash after replaying delayed requests
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37985
merged
- 04:45 PM Bug #48524: octopus: run_shell() got an unexpected keyword argument 'timeout'
- https://github.com/ceph/ceph/pull/38550 merged
- 02:33 PM Feature #44928 (In Progress): mgr/volumes: evict clients based on auth ID and subvolume mounted
- 01:46 PM Bug #48700 (Closed): client: Client::rmdir() may fail to remove a snapshot
- The call to Client::may_delete() from Client::rmdir() is made here: https://github.com/ceph/ceph/blob/master/src/client/Cl...
12/21/2020
- 04:26 PM Bug #48673: High memory usage on standby replay MDS
- Thanks for the information. There were a few fixes in v15.2.8 relating to memory consumption for the MDS which may be...
- 04:17 PM Bug #48673: High memory usage on standby replay MDS
- Patrick Donnelly wrote:
> Please share `ceph versions` and `ceph fs dump`.
>
> I believe we've recently fixed som...
- 02:53 PM Bug #48673: High memory usage on standby replay MDS
- Daniel Persson wrote:
> Hi.
>
> We have recently installed a Ceph cluster with about 27M objects. The filesys...
- 04:12 PM Fix #48121 (Fix Under Review): qa: merge fs/multimds suites
- 02:52 PM Bug #48679: client: items pinned in cache preventing unmount
- Patrick, this one seems similar to the one I have fixed before; I will take it.
Thanks.
- 02:41 PM Bug #48679: client: items pinned in cache preventing unmount
- ...
- 02:46 PM Feature #48619 (In Progress): client: track (and forward to MDS) average read/write/metadata latency
- 06:38 AM Bug #47294 (Fix Under Review): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Fix in https://github.com/ceph/ceph/pull/38668
- 03:46 AM Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- The commit (03908aa04344) removed the code that was dumping the scrub detail result. Currently, only the foll...
12/19/2020
- 09:26 PM Feature #7320 (Fix Under Review): qa: thrash directory fragmentation
- 06:36 PM Fix #48683 (Resolved): mds/MDSMap: print each flag value in MDSMap::dump
- Don't require operators to do bitwise arithmetic on the "flags" field. Print each flag.
https://github.com/ceph/ce...
- 06:35 PM Feature #48682 (Resolved): MDSMonitor: add command to print fs flags
- From this list:
https://github.com/ceph/ceph/blob/master/src/include/ceph_fs.h#L275-L285
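A hedged sketch of the idea behind both tickets; the flag bits and names below are illustrative, not the actual ceph_fs.h values:

    # Map each flag bit to a human-readable name and dump the set ones.
    MDSMAP_FLAG_NAMES = {
        1 << 0: "example_flag_a",  # illustrative
        1 << 1: "example_flag_b",  # illustrative
    }

    def dump_flags(flags):
        # Emit each set flag by name instead of a raw bitmask.
        return [name for bit, name in MDSMAP_FLAG_NAMES.items() if flags & bit]

    print(dump_flags(0b11))  # ['example_flag_a', 'example_flag_b']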
12/18/2020
- 09:27 PM Bug #48517 (Resolved): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- 09:25 PM Feature #22477 (Resolved): multifs: remove multifs experimental warnings
- 09:15 PM Bug #48680 (New): mds: scrubbing stuck "scrub active (0 inodes in the stack)"
- ...
- 09:10 PM Bug #48679 (Resolved): client: items pinned in cache preventing unmount
- ...
- 09:06 PM Bug #48678 (In Progress): client: spins on tick interval
- ...
- 02:41 PM Bug #48501 (Fix Under Review): pybind/mgr/volumes: inherited snapshots should be filtered out of ...
- 08:21 AM Bug #48673 (Pending Backport): High memory usage on standby replay MDS
- Hi.
We have recently installed a Ceph cluster with about 27M objects. The filesystem seems to have 15M files.
...
- 04:33 AM Feature #44931 (Fix Under Review): mgr/volumes: get the list of auth IDs that have been granted a...
12/17/2020
- 11:39 PM Bug #21539: man: missing man page for mount.fuse.ceph
- Adding this to the packaging in https://github.com/ceph/ceph/pull/38642
- 07:29 PM Bug #48661 (Fix Under Review): mds: reserved can be set on feature set
- 07:28 PM Bug #48661 (Resolved): mds: reserved can be set on feature set
- ...
- 05:40 PM Backport #48638 (Resolved): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids not...
- 05:39 PM Backport #48638 (In Progress): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids ...
- 12:06 AM Backport #48638 (Resolved): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids not...
- 12:05 AM Backport #48638 (Resolved): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids not...
- ...
- 04:15 AM Backport #48644 (Resolved): octopus: client: ceph.dir.entries does not acquire necessary caps
- https://github.com/ceph/ceph/pull/38949
- 04:15 AM Backport #48643 (Resolved): nautilus: client: ceph.dir.entries does not acquire necessary caps
- https://github.com/ceph/ceph/pull/38950
- 04:12 AM Bug #48313 (Pending Backport): client: ceph.dir.entries does not acquire necessary caps
- 04:11 AM Feature #17856 (Resolved): qa: background cephfs forward scrub teuthology task
- 04:10 AM Backport #48642 (Resolved): octopus: Client: the directory's capacity will not be updated after w...
- https://github.com/ceph/ceph/pull/38947
- 04:10 AM Backport #48641 (Resolved): nautilus: Client: the directory's capacity will not be updated after ...
- https://github.com/ceph/ceph/pull/38948
- 04:09 AM Bug #48318 (Pending Backport): Client: the directory's capacity will not be updated after write d...
- 02:19 AM Backport #48639 (Resolved): luminous: pybind/ceph_volume_client: allows authorize on auth_ids not...
- 12:10 AM Backport #48639 (Resolved): luminous: pybind/ceph_volume_client: allows authorize on auth_ids not...
- ...
- 12:06 AM Backport #48637 (Resolved): octopus: pybind/ceph_volume_client: allows authorize on auth_ids not ...
- 12:05 AM Backport #48637 (Resolved): octopus: pybind/ceph_volume_client: allows authorize on auth_ids not ...
- ...
- 12:05 AM Bug #48555: pybind/ceph_volume_client: allows authorize on auth_ids not created through ceph_volu...
- ...
- 12:03 AM Bug #48555 (Pending Backport): pybind/ceph_volume_client: allows authorize on auth_ids not create...
- 12:03 AM Bug #48555 (Resolved): pybind/ceph_volume_client: allows authorize on auth_ids not created throug...
- Backports done manually.
12/16/2020
- 10:19 PM Backport #48634 (In Progress): nautilus: qa: tox failures
- 10:15 PM Backport #48634 (Resolved): nautilus: qa: tox failures
- https://github.com/ceph/ceph/pull/38627
- 10:18 PM Backport #48635 (In Progress): octopus: qa: tox failures
- 10:15 PM Backport #48635 (Resolved): octopus: qa: tox failures
- https://github.com/ceph/ceph/pull/38626
- 10:14 PM Bug #48633 (Pending Backport): qa: tox failures
- 08:44 PM Bug #48633 (Fix Under Review): qa: tox failures
- 08:43 PM Bug #48633 (Resolved): qa: tox failures
- ...
- 02:52 PM Backport #47158 (In Progress): octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxat...
- 01:50 PM Feature #48622 (Fix Under Review): mgr/nfs: Add tests for readonly exports
- 08:23 AM Feature #48622 (Resolved): mgr/nfs: Add tests for readonly exports
- 09:50 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- I can reproduce it with SIGTERM...
- 06:52 AM Feature #48619: client: track (and forward to MDS) average read/write/metadata latency
- Xiubo suggested that the client also sends min/max and stddev.
- 05:17 AM Feature #48619 (Pending Backport): client: track (and forward to MDS) average read/write/metadata...
- The client already tracks cumulative read/write/metadata latencies. However, average latencies are much more useful to th...
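A hedged sketch (not Ceph's code) of tracking the running average plus the suggested min/max and stddev incrementally, using Welford's algorithm:

    class LatencyTracker:
        # Tracks count, mean, min/max, and variance of one metric incrementally.
        def __init__(self):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0
            self.lat_min = float("inf")
            self.lat_max = 0.0

        def update(self, lat):
            self.n += 1
            delta = lat - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (lat - self.mean)
            self.lat_min = min(self.lat_min, lat)
            self.lat_max = max(self.lat_max, lat)

        def stddev(self):
            # Sample standard deviation; 0 until we have two samples.
            return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0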
- 06:01 AM Tasks #48620 (In Progress): mds: break the mds_lock or get rid of the mds_lock for some code
- 05:48 AM Tasks #48620 (In Progress): mds: break the mds_lock or get rid of the mds_lock for some code
- 05:54 AM Bug #48559 (In Progress): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 03:18 AM Bug #48517 (Fix Under Review): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
12/15/2020
- 01:02 PM Feature #48602 (Resolved): `cephfs-top` frontend utility
- The plumbing work for tracking (client) metrics in the MDS is already done, and the mgr/stats module provides an interface...
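For reference, a hedged example of querying that interface from the CLI is @ceph fs perf stats@, assuming the stats mgr module is enabled.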
- 12:41 PM Documentation #48585 (Fix Under Review): mds_cache_trim_decay_rate misnamed?
- No other places; just being more explicit would be helpful, I think.