Activity
From 03/15/2021 to 04/13/2021
04/13/2021
- 02:24 PM Bug #50035: cephfs-mirror: use sensible mount/shutdown timeouts
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Mostly setting this as a config option (in ceph.conf) would suff...
- 02:04 PM Bug #50035: cephfs-mirror: use sensible mount/shutdown timeouts
- Venky Shankar wrote:
> Mostly setting this as a config option (in ceph.conf) would suffice. However, cephfs-mirror c...
- 01:07 PM Bug #50035: cephfs-mirror: use sensible mount/shutdown timeouts
- Mostly setting this as a config option (in ceph.conf) would suffice. However, cephfs-mirror can connect to the remote...
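As a rough illustration only (not the actual fix being discussed): an existing client-side option such as client_mount_timeout could be overridden for the mirror daemon's client instance through the centralized config store. The entity name "client.mirror" and the value below are placeholders, and the snippet assumes the librados python binding.
<pre><code class="python">
import json
import rados

# Sketch: override the mount timeout for the mirror daemon's client instance.
# "client.mirror" and the choice of client_mount_timeout are illustrative
# assumptions only.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    cmd = json.dumps({
        "prefix": "config set",
        "who": "client.mirror",
        "name": "client_mount_timeout",
        "value": "30",
    })
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs)
finally:
    cluster.shutdown()
</code></pre>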
- 05:52 AM Bug #49939 (Fix Under Review): cephfs-mirror: be resilient to recreated snapshot during synchroni...
04/12/2021
- 09:19 PM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- David, ping? Are you able to answer the questions above?
- 07:20 PM Bug #49873 (Duplicate): ceph_lremovexattr does not return error on file in ceph pacific
- Thanks for checking that Sidharth.
- 03:15 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- That sounds sensible to me, thanks! I did attempt to build ceph a couple of weeks ago, but unfortunately I was unabl...
- 02:54 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- I tested this incorrectly last week using libcephfs due to an incorrect reproduction of the steps. Upon retesting cor...
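A quick sanity check of the expected behavior, as a sketch (the path below is a placeholder on a mounted CephFS): removing an xattr that does not exist should fail with ENODATA rather than succeed silently.
<pre><code class="python">
import errno
import os

path = "/mnt/cephfs/testfile"  # placeholder path on a CephFS mount
open(path, "w").close()
try:
    os.removexattr(path, "user.does-not-exist")
    print("BUG? removexattr succeeded for a missing xattr")
except OSError as e:
    assert e.errno == errno.ENODATA, e
    print("OK: got ENODATA as expected")
</code></pre>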
- 07:19 PM Bug #50305 (Resolved): MDS doesn't set fscrypt flag on new inodes with crypto context in xattr bu...
- The new fscrypt context handling code will set the "fscrypt" flag when the encryption.ctx xattr is set explicitly, bu...
- 06:03 PM Backport #50253 (In Progress): pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/...
- 04:46 PM Bug #50271 (Won't Fix): vmware esxi NFS client cannot create thin provisioned vmdk files
- Closing this as WONTFIX since we can't really do it without harming performance for important workloads.
- 03:48 PM Bug #50271: vmware esxi NFS client cannot create thin provisioned vmdk files
- I responded in the original email thread. The problem here is that ceph doesn't report sparse file usage correctly. W...
- 01:44 PM Bug #50271 (Triaged): vmware esxi NFS client cannot create thin provisioned vmdk files
- 02:26 PM Bug #50237: cephfs-journal-tool/cephfs-data-scan: Stuck in infinite loop with "NetHandler create_...
- thanks for the reply!
i'll collect relevant mon/mds debug logs, and the strace outputs this week
if there is anythi...
- 01:49 PM Bug #50237 (Need More Info): cephfs-journal-tool/cephfs-data-scan: Stuck in infinite loop with "N...
- 01:46 PM Bug #50258 (Triaged): pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
- 01:44 PM Bug #50279 (Triaged): qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
- 11:16 AM Bug #50298 (Fix Under Review): libcephfs: support file descriptor based *at() APIs
- 11:14 AM Bug #50298 (Resolved): libcephfs: support file descriptor based *at() APIs
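For context, the fd-relative ("*at") semantics being added to libcephfs are the same ones the standard os module exposes via dir_fd; the sketch below only illustrates those semantics against a placeholder mount point, it does not use the new libcephfs calls.
<pre><code class="python">
import os

dfd = os.open("/mnt/cephfs", os.O_RDONLY)  # placeholder mount point
try:
    # Operate relative to the open directory fd instead of an absolute path.
    fd = os.open("somefile", os.O_CREAT | os.O_WRONLY, 0o644, dir_fd=dfd)
    os.close(fd)
    print(os.stat("somefile", dir_fd=dfd).st_size)
finally:
    os.close(dfd)
</code></pre>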
04/11/2021
- 10:00 AM Backport #50252 (Need More Info): octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d...
- cherry-pick applies cleanly, but the resulting code does not compile
see https://github.com/ceph/ceph/pull/40781 f...
04/10/2021
- 09:44 PM Backport #50288 (In Progress): octopus: MDS stuck at stopping when reducing max_mds
- 03:10 AM Backport #50288 (Resolved): octopus: MDS stuck at stopping when reducing max_mds
- https://github.com/ceph/ceph/pull/40768
- 09:43 PM Backport #50290 (In Progress): nautilus: MDS stuck at stopping when reducing max_mds
- 05:12 PM Backport #50290 (Need More Info): nautilus: MDS stuck at stopping when reducing max_mds
- appears to depend on #48991 which is a large, non-trivial changeset
- 03:10 AM Backport #50290 (Resolved): nautilus: MDS stuck at stopping when reducing max_mds
- https://github.com/ceph/ceph/pull/40769
- 09:34 PM Backport #50286 (In Progress): octopus: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- 03:10 AM Backport #50286 (Resolved): octopus: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- https://github.com/ceph/ceph/pull/40783
- 09:33 PM Backport #50283 (In Progress): octopus: MDS slow request lookupino #0x100 on rank 1 block forever...
- 03:00 AM Backport #50283 (Resolved): octopus: MDS slow request lookupino #0x100 on rank 1 block forever on...
- https://github.com/ceph/ceph/pull/40782
- 09:30 PM Backport #50252 (In Progress): octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/...
- 09:30 PM Backport #50188 (Need More Info): octopus: qa: "Assertion `cb_done' failed."
- not clear how the new yaml file fits into the octopus directory structure
- 09:28 PM Backport #50188 (In Progress): octopus: qa: "Assertion `cb_done' failed."
- 09:27 PM Backport #50181 (In Progress): octopus: client: only check pool permissions for regular files
- 09:26 PM Backport #50027 (In Progress): octopus: client: items pinned in cache preventing unmount
- 09:25 PM Backport #49950 (In Progress): octopus: mgr/nfs: Update about cephadm single nfs-ganesha daemon p...
- 09:23 PM Backport #49934 (In Progress): octopus: libcephfs: test termination "what(): Too many open files"
- 09:23 PM Backport #49752 (In Progress): octopus: snap-schedule doc
- 09:20 PM Backport #49851 (In Progress): octopus: mds: race of fetching large dirfrag
- 09:16 PM Backport #49611 (In Progress): octopus: qa: racy session evicted check
- 09:14 PM Backport #49560 (In Progress): octopus: qa: file system deletion not complete because starter fs ...
- 09:13 PM Backport #49518 (In Progress): octopus: client: wake up the front pos waiter
- 09:12 PM Backport #49515 (In Progress): octopus: pybind/cephfs: DT_REG and DT_LNK values are wrong
- 09:11 PM Backport #49514 (In Progress): nautilus: client: allow looking up snapped inodes by inode number+...
- 09:10 PM Backport #49513 (In Progress): octopus: client: allow looking up snapped inodes by inode number+s...
- 09:01 PM Backport #49472 (In Progress): octopus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- 08:58 PM Backport #49413 (In Progress): octopus: mgr/nfs: Update about user config
- 08:57 PM Backport #49347 (In Progress): octopus: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_...
- 08:56 PM Backport #48878 (In Progress): octopus: mds: fix recall defaults based on feedback from productio...
- 08:55 PM Backport #48836 (In Progress): octopus: have mount helper pick appropriate mon sockets for ms_mod...
- 08:54 PM Backport #45853 (In Progress): octopus: cephfs-journal-tool: NetHandler create_socket couldn't cr...
- 04:00 PM Bug #50271: vmware esxi NFS client cannot create thin provisioned vmdk files
- I have upgraded now to Octopus 15.2.10 and the issue is still present
Attached is the packet capture from the esxi...
- 01:03 PM Backport #50284 (In Progress): nautilus: MDS slow request lookupino #0x100 on rank 1 block foreve...
- 03:00 AM Backport #50284 (Rejected): nautilus: MDS slow request lookupino #0x100 on rank 1 block forever o...
- 12:59 PM Backport #50251 (Need More Info): nautilus: mds: "cluster [WRN] Scrub error on inode 0x1000000039...
- non-trivial backport. The cherry-pick from master applies cleanly, but the resulting code does not compile -- presuma...
- 11:53 AM Backport #50251 (In Progress): nautilus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (...
- 12:44 PM Backport #50255 (In Progress): nautilus: mds: standby-replay only trims cache when it reaches the...
- 11:58 AM Backport #50255 (Need More Info): nautilus: mds: standby-replay only trims cache when it reaches ...
- non-trivial backport
- 12:15 PM Backport #48813 (In Progress): octopus: mds: spurious wakeups in cache upkeep
- 12:11 PM Backport #50256 (In Progress): octopus: mds: standby-replay only trims cache when it reaches the ...
- 11:52 AM Backport #50189 (Need More Info): nautilus: qa: "Assertion `cb_done' failed."
- this one is non-trivial to backport because the yaml structure under qa/ has changed substantially since nautilus
- 10:58 AM Backport #50184 (Need More Info): octopus: client: openned inodes counter is inconsistent
- This ticket is for tracking the octopus backport of a follow-on fix for #46865 which was backported to pacific only?
- 10:57 AM Backport #50182 (Need More Info): nautilus: client: openned inodes counter is inconsistent
- This ticket is for tracking the nautilus backport of a follow-on fix for #46865 which was backported to pacific only?
- 03:10 AM Backport #50289 (Resolved): pacific: MDS stuck at stopping when reducing max_mds
- https://github.com/ceph/ceph/pull/40856
- 03:10 AM Backport #50287 (Resolved): pacific: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- https://github.com/ceph/ceph/pull/40852
- 03:09 AM Bug #50215 (Pending Backport): qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- 03:08 AM Bug #50112 (Pending Backport): MDS stuck at stopping when reducing max_mds
- 03:05 AM Backport #50285 (Resolved): pacific: qa: test standby_replay in workloads
- https://github.com/ceph/ceph/pull/40853
- 03:03 AM Fix #50045 (Pending Backport): qa: test standby_replay in workloads
- 03:02 AM Bug #49466 (Resolved): qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/t...
- 03:00 AM Backport #50282 (Resolved): pacific: MDS slow request lookupino #0x100 on rank 1 block forever on...
- https://github.com/ceph/ceph/pull/40856
- 02:58 AM Bug #49922 (Pending Backport): MDS slow request lookupino #0x100 on rank 1 block forever on dispa...
- 02:55 AM Bug #50281 (Resolved): qa: untar_snap_rm timeout
- ...
- 02:45 AM Bug #48365 (New): qa: ffsb build failure on CentOS 8.2
- Back from the dead: /ceph/teuthology-archive/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-dis...
- 02:39 AM Bug #50279 (Need More Info): qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
- ...
04/09/2021
- 10:24 PM Bug #47689: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unresponsive c...
- looks similar: https://tracker.ceph.com/issues/50275
- 05:20 PM Bug #50271 (Won't Fix): vmware esxi NFS client cannot create thin provisioned vmdk files
- I have a 3 node Ceph octopus 15.2.7 cluster running on fully up to date Centos 7 with nfs-ganesha 3.5.
After follo... - 03:41 PM Backport #50015: pacific: qa: "AttributeError: 'NoneType' object has no attribute 'mon_manager'"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40645
merged
- 03:41 PM Backport #50179 (In Progress): nautilus: client: only check pool permissions for regular files
- 03:40 PM Backport #50128 (Need More Info): nautilus: pybind/mgr/volumes: deadlock on async job hangs finis...
- nautilus backport is non-trivial
- 03:39 PM Bug #50266: "ceph fs snapshot mirror daemon status" should not use json keys as value
- After discussing with Venky, it seems that a daemon can mirror multiple filesystems so we need another list for the fs_id, so...
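Purely illustrative (not the actual command output, and field names may differ): report the daemons and the filesystems each one mirrors as lists, instead of using their IDs as JSON keys.
<pre><code class="python">
# Illustrative shape only: a list of daemons, each carrying a list of
# mirrored filesystems, rather than ids used as JSON keys.
daemon_status = [
    {
        "daemon_id": 284167,
        "filesystems": [
            {"filesystem_id": 1, "name": "a", "directory_count": 1},
        ],
    },
]
</code></pre>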
- 01:33 PM Bug #50266 (Resolved): "ceph fs snapshot mirror daemon status" should not use json keys as value
- Currently the command outputs:...
- 10:19 AM Bug #49684 (Resolved): qa: fs:cephadm mount does not wait for mds to be created
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:15 AM Backport #50025 (Resolved): pacific: client: items pinned in cache preventing unmount
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40629
m...
- 10:15 AM Backport #50022 (Resolved): pacific: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_n...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40628
m...
- 10:15 AM Backport #50030 (Resolved): pacific: qa: fs:cephadm mount does not wait for mds to be created
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40528
m...
- 09:01 AM Backport #50026 (In Progress): nautilus: client: items pinned in cache preventing unmount
- 09:01 AM Bug #49936: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= 1024)
- @Patrick, you asked for this bugfix to be backported all the way back to nautilus, but AFAICT the code containing the...
- 09:00 AM Backport #50024 (Need More Info): octopus: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_ass...
- AFAICT the code being fixed does not exist in Octopus
- 08:58 AM Backport #50023 (Need More Info): nautilus: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_as...
- AFAICT the code being fixed does not exist in nautilus
- 08:56 AM Backport #49931 (Need More Info): octopus: MDS should return -ENODATA when asked to remove xattr ...
- AFAICT the line that introduced the bug is not present in octopus.
- 08:56 AM Bug #49833: MDS should return -ENODATA when asked to remove xattr that doesn't exist
- @Patrick, you requested that this bugfix be backported to octopus, but AFAICT the line that introduced the bug is not...
- 08:53 AM Bug #49833: MDS should return -ENODATA when asked to remove xattr that doesn't exist
- @Patrick, you requested that this bugfix be backported to nautilus, but AFAICT the line that introduced the bug is no...
- 08:54 AM Backport #49933 (Need More Info): nautilus: MDS should return -ENODATA when asked to remove xattr...
- AFAICT the line that introduced the bug is not present in nautilus.
- 07:42 AM Backport #49853 (In Progress): nautilus: mds: race of fetching large dirfrag
- 06:21 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- I have figured out one case that could reproduce it in theory:
1. I have checked the `mv` source code; before doing the `renam...
- 03:10 AM Feature #48791 (Rejected): mds: support file block size
- Discussions over email indicate we'll be going in another direction.
- 03:06 AM Feature #41566: mds: support rolling upgrades
- Jos's current work: https://github.com/ceph/ceph/pull/36821
04/08/2021
- 10:22 PM Backport #50025: pacific: client: items pinned in cache preventing unmount
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40629
merged
- 10:21 PM Backport #50022: pacific: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= 1024)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40628
merged
- 10:21 PM Backport #50030: pacific: qa: fs:cephadm mount does not wait for mds to be created
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40528
merged
- 09:17 PM Bug #50237: cephfs-journal-tool/cephfs-data-scan: Stuck in infinite loop with "NetHandler create_...
- The warning is probably unrelated. Please collect logs.
- 01:29 PM Bug #50237 (Need More Info): cephfs-journal-tool/cephfs-data-scan: Stuck in infinite loop with "N...
- Both tools are getting stuck in an infinite loop and only outputting this message.
Env:
version: 15.2.4
docker...
- 08:40 PM Backport #49613 (In Progress): nautilus: qa: racy session evicted check
- 08:37 PM Bug #50260 (New): pacific: qa: "rmdir: failed to remove '/home/ubuntu/cephtest': Directory not em...
- ...
- 08:29 PM Bug #50258 (Resolved): pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
- ...
- 08:22 PM Backport #49471 (In Progress): nautilus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- 07:54 PM Backport #49519 (Need More Info): nautilus: client: wake up the front pos waiter
- commit applies cleanly in nautilus, but does not compile. The compiler error can be seen in a comment I appended to t...
- 05:53 PM Backport #49519 (In Progress): nautilus: client: wake up the front pos waiter
- 06:40 PM Backport #50256 (Resolved): octopus: mds: standby-replay only trims cache when it reaches the end...
- https://github.com/ceph/ceph/pull/40743
- 06:40 PM Backport #50255 (Resolved): nautilus: mds: standby-replay only trims cache when it reaches the en...
- https://github.com/ceph/ceph/pull/40744
- 06:40 PM Backport #50254 (Resolved): pacific: mds: standby-replay only trims cache when it reaches the end...
- https://github.com/ceph/ceph/pull/40855
- 06:35 PM Backport #50253 (Resolved): pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/cli...
- https://github.com/ceph/ceph/pull/40825
- 06:35 PM Backport #50252 (Rejected): octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/cli...
- 06:35 PM Backport #50251 (Rejected): nautilus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/cl...
- 06:35 PM Bug #50048 (Pending Backport): mds: standby-replay only trims cache when it reaches the end of th...
- 06:34 PM Bug #48805 (Pending Backport): mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/...
- 06:30 PM Bug #50250 (New): mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/cli...
- ...
- 05:58 PM Backport #49562 (In Progress): nautilus: qa: file system deletion not complete because starter fs...
- 05:54 PM Backport #49475 (In Progress): octopus: nautilus: qa: "Assertion `cb_done' failed."
- 05:11 PM Bug #50246 (Resolved): mds: failure replaying journal (EMetaBlob)
- ...
- 05:07 PM Backport #49516 (In Progress): nautilus: pybind/cephfs: DT_REG and DT_LNK values are wrong
- 05:05 PM Backport #49514 (Need More Info): nautilus: client: allow looking up snapped inodes by inode numb...
- large, non-trivial changeset
- 04:57 PM Backport #49473 (In Progress): nautilus: nautilus: qa: "Assertion `cb_done' failed."
- 03:51 PM Backport #50086 (In Progress): pacific: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeErr...
- 03:50 PM Backport #50173 (In Progress): pacific: mgr/nfs: validation error on creating custom export
- 03:50 PM Backport #50180 (In Progress): pacific: client: only check pool permissions for regular files
- 03:49 PM Backport #50183 (In Progress): pacific: client: openned inodes counter is inconsistent
- 03:49 PM Backport #50185 (In Progress): pacific: qa: "RADOS object not found (Failed to operate read op fo...
- 03:49 PM Feature #50150: qa: begin grepping kernel logs for kclient warnings/failures to fail a test
- "BUG:"
"Oops:"
"INFO: task .+ blocked for more than .+ seconds."
"KASAN"
I'm not sure if we build with KASAN as...
- 03:48 PM Backport #50190 (In Progress): pacific: qa: "Assertion `cb_done' failed."
- 03:48 PM Backport #50225 (In Progress): pacific: mds: failed to decode message of type 29 v1: void CapInfo...
- 02:40 AM Backport #50225 (Resolved): pacific: mds: failed to decode message of type 29 v1: void CapInfoPay...
- https://github.com/ceph/ceph/pull/40682
- 03:47 PM Backport #50241 (In Progress): pacific: cephfs-mirror: update docs with `fs snapshot mirror daemo...
- 02:30 PM Backport #50241 (Resolved): pacific: cephfs-mirror: update docs with `fs snapshot mirror daemon s...
- https://github.com/ceph/ceph/pull/41475
- 03:10 PM Bug #45553: mds: rstats on snapshot are updated by changes to HEAD
- rstats propagation is working by now
but the propagation goofs up snapshot rstats
see this: https://tracker.ceph.co...
- 02:29 PM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- Since the `ls` command output was correct, the ORDERED flag should have been cleared as expected, or it should show '...
- 07:58 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- Xiaoxi Chen wrote:
> Jeff Layton wrote:
> > I think that after the mv, the directory should no longer be considered...
- 02:28 PM Documentation #50229 (Pending Backport): cephfs-mirror: update docs with `fs snapshot mirror daem...
- 09:42 AM Documentation #50229 (Resolved): cephfs-mirror: update docs with `fs snapshot mirror daemon statu...
- This was missing in the docs.
- 01:59 PM Bug #50238: mds: ceph.dir.rctime for older snaps is erroneously updated
- To fix this issue:
# the correct sequence of snap IDs must be chosen to be updated on a mksnap request (snapshot cre...
- 01:54 PM Bug #50238 (New): mds: ceph.dir.rctime for older snaps is erroneously updated
- vxattr ceph.dir.rctime shows new values for older snaps when new snap is created or when live data is updated.
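A sketch of how the reported behavior can be observed on a mounted CephFS (paths below are placeholders): the rctime recorded for an existing snapshot should stay fixed when a later snapshot is taken.
<pre><code class="python">
import os

snap_dir = "/mnt/cephfs/dir/.snap/snap1"        # placeholder paths
before = os.getxattr(snap_dir, "ceph.dir.rctime")

os.mkdir("/mnt/cephfs/dir/.snap/snap2")         # create a new snapshot
after = os.getxattr(snap_dir, "ceph.dir.rctime")

print(before, after)
assert before == after, "rctime of the old snapshot changed (the reported bug)"
</code></pre>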
- 12:11 PM Feature #50235 (Resolved): allow cephfs-shell to mount named filesystems
- Currently, cephfs-shell can only mount the "default" filesystem. There may be some way to use the config file to dire...
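A minimal sketch, assuming a python-cephfs binding whose mount() accepts a filesystem_name argument (the name "cephfs2" is a placeholder); cephfs-shell could expose something equivalent through an option or its config file.
<pre><code class="python">
import cephfs

fs = cephfs.LibCephFS(conffile="/etc/ceph/ceph.conf")
# Mount a non-default file system by name ("cephfs2" is a placeholder).
fs.mount(filesystem_name="cephfs2")
print(fs.getcwd())
fs.shutdown()
</code></pre>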
- 02:37 AM Bug #49972 (Pending Backport): mds: failed to decode message of type 29 v1: void CapInfoPayload::...
- 02:36 AM Bug #50021 (Resolved): qa: snaptest-git-ceph failure during mon thrashing
- 02:30 AM Bug #50224 (Resolved): qa: test_mirroring_init_failure_with_recovery failure
- ...
- 02:20 AM Bug #50221 (New): qa: snaptest-git-ceph failure in git diff
- ...
- 02:12 AM Bug #50220 (New): qa: dbench workload timeout
- ...
04/07/2021
- 07:46 PM Bug #50216 (Resolved): qa: "ls: cannot access 'lost+found': No such file or directory"
- ...
- 07:28 PM Bug #50215 (Fix Under Review): qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- 07:26 PM Bug #50215 (Resolved): qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- ...
- 01:43 PM Backport #50015 (In Progress): pacific: qa: "AttributeError: 'NoneType' object has no attribute '...
- 06:51 AM Bug #50178: qa: "TypeError: run() got an unexpected keyword argument 'shell'"
- Fix - https://github.com/ceph/teuthology/pull/1639
- 06:50 AM Bug #50178 (Fix Under Review): qa: "TypeError: run() got an unexpected keyword argument 'shell'"
- 04:58 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- 玮文 胡 wrote:
> Xiubo Li wrote:
> > For the mds private inode/dirs, it shouldn't be normal to handle the client caps,...
- 04:44 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- Xiubo Li wrote:
> For the mds private inode/dirs, it shouldn't be normal to handle the client caps, right ?
I thi...
- 02:27 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- Patrick Donnelly wrote:
> I wonder if this is related to
>
> https://tracker.ceph.com/issues/49922
IMO you are...
- 01:18 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- 玮文 胡 wrote:
> Xiubo Li wrote:
> > It seems the "mdsdir_in->get_num_ref()" is not zero in mds.0
>
> Yes, do you h...
- 04:13 AM Backport #50186 (In Progress): pacific: qa: daemonwatchdog fails if mounts not defined
04/06/2021
- 07:25 PM Backport #50127 (In Progress): pacific: pybind/mgr/volumes: deadlock on async job hangs finisher ...
- 07:22 PM Backport #50025 (In Progress): pacific: client: items pinned in cache preventing unmount
- 07:18 PM Backport #50022 (In Progress): pacific: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert...
- 06:10 PM Backport #50187 (In Progress): pacific: ceph-dokan improvements for additional mounts
- 05:50 PM Backport #50187 (Resolved): pacific: ceph-dokan improvements for additional mounts
- https://github.com/ceph/ceph/pull/40627
- 05:50 PM Backport #50190 (Resolved): pacific: qa: "Assertion `cb_done' failed."
- https://github.com/ceph/ceph/pull/40683
- 05:50 PM Backport #50189 (Rejected): nautilus: qa: "Assertion `cb_done' failed."
- 05:50 PM Backport #50188 (Rejected): octopus: qa: "Assertion `cb_done' failed."
- 05:46 PM Bug #49662 (Pending Backport): ceph-dokan improvements for additional mounts
- 05:46 PM Backport #50186 (Resolved): pacific: qa: daemonwatchdog fails if mounts not defined
- https://github.com/ceph/ceph/pull/40634
- 05:45 PM Backport #50185 (Resolved): pacific: qa: "RADOS object not found (Failed to operate read op for o...
- https://github.com/ceph/ceph/pull/40684
- 05:45 PM Backport #50184 (Rejected): octopus: client: openned inodes counter is inconsistent
- 05:45 PM Backport #50183 (Resolved): pacific: client: openned inodes counter is inconsistent
- https://github.com/ceph/ceph/pull/40685
- 05:45 PM Backport #50182 (Rejected): nautilus: client: openned inodes counter is inconsistent
- 05:45 PM Bug #49500 (Pending Backport): qa: "Assertion `cb_done' failed."
- 05:45 PM Backport #50181 (Resolved): octopus: client: only check pool permissions for regular files
- https://github.com/ceph/ceph/pull/40779
- 05:45 PM Backport #50180 (Resolved): pacific: client: only check pool permissions for regular files
- https://github.com/ceph/ceph/pull/40686
- 05:45 PM Backport #50179 (Resolved): nautilus: client: only check pool permissions for regular files
- https://github.com/ceph/ceph/pull/40730
- 05:44 PM Bug #50090 (Pending Backport): client: only check pool permissions for regular files
- 05:43 PM Bug #50020 (Pending Backport): qa: "RADOS object not found (Failed to operate read op for oid cep...
- 05:41 PM Bug #50057 (Pending Backport): client: openned inodes counter is inconsistent
- 05:30 PM Bug #50178 (Rejected): qa: "TypeError: run() got an unexpected keyword argument 'shell'"
- ...
- 03:25 PM Backport #50173 (Resolved): pacific: mgr/nfs: validation error on creating custom export
- https://github.com/ceph/ceph/pull/40687
- 03:24 PM Documentation #50161 (Pending Backport): mgr/nfs: validation error on creating custom export
- 10:24 AM Documentation #50161 (In Progress): mgr/nfs: validation error on creating custom export
- 10:18 AM Documentation #50161 (Resolved): mgr/nfs: validation error on creating custom export
- ...
- 12:33 PM Documentation #49406: Exceeding osd nearfull ratio causes write throttle.
- It's unfortunate that it caught you by surprise. Would you care to draft a patch to update the documentation? Where w...
- 11:59 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- Xiubo Li wrote:
> BTW, do you have more logs for the mds.0 ? I need to check the logs to get why.
Only less than ...
- 11:14 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- 玮文 胡 wrote:
> Xiubo Li wrote:
> > It seems the "mdsdir_in->get_num_ref()" is not zero in mds.0
>
> Yes, do you h...
- 11:00 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- Xiubo Li wrote:
> It seems the "mdsdir_in->get_num_ref()" is not zero in mds.0
Yes, do you have any idea why? As ...
- 10:46 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- It seems the "mdsdir_in->get_num_ref()" is not zero in mds.0:...
- 09:03 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- 玮文 胡 wrote:
> Some more progress. mds.0 seems actually sending an empty cache_expire message to mds.1, despite sayin...
- 09:22 AM Bug #50021 (Fix Under Review): qa: snaptest-git-ceph failure during mon thrashing
- 05:54 AM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Patrick Donnelly wrote:
> Xiubo Li wrote:
>
[...]
> > > >
> > > > What repo do you want to mirror?
> > >
...
- 03:21 AM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Xiubo Li wrote:
> Patrick Donnelly wrote:
> > David Galloway wrote:
> > > Xiubo Li wrote:
> > > > Patrick Donnell...
- 02:15 AM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Patrick Donnelly wrote:
> David Galloway wrote:
> > Xiubo Li wrote:
> > > Patrick Donnelly wrote:
> > > > Xiubo L...
- 01:50 AM Feature #50150 (Pending Backport): qa: begin grepping kernel logs for kclient warnings/failures t...
- Right now, TMK, we are not confirming there are no warnings/errors/lockups in the kclient before passing a test. We d...
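A sketch of the kind of check being proposed: scan a captured console/dmesg log (the filename below is a placeholder) for the patterns suggested in this tracker and fail the run if any of them match.
<pre><code class="python">
import re
import sys

PATTERNS = [
    r"BUG:",
    r"Oops:",
    r"INFO: task .+ blocked for more than .+ seconds\.",
    r"KASAN",
]
regex = re.compile("|".join(PATTERNS))

with open("kern.log", errors="replace") as f:      # placeholder log file
    hits = [line.rstrip() for line in f if regex.search(line)]

if hits:
    print("kernel problems detected:", *hits, sep="\n")
    sys.exit(1)
</code></pre>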
- 01:46 AM Cleanup #50149 (Resolved): client: always register callbacks before mount()
- Make Client::ll_register_callbacks() return an 'int' value instead
and return a negative errno if not successful.
...
04/05/2021
- 05:56 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- Deepika Upadhyay wrote:
> @Patrick I am seeing this issue on Ubuntu 18.04; doesn't seem to be related to testing pr....
- 05:02 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- @Patrick I am seeing this issue on Ubuntu 18.04; doesn't seem to be related to testing pr.
Not sure if it's related ...
- 05:12 PM Bug #50112: MDS stuck at stopping when reducing max_mds
- I wonder if this is related to
https://tracker.ceph.com/issues/49922
The code would indicate it's normal for md...
- 04:48 PM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- David Galloway wrote:
> Xiubo Li wrote:
> > Patrick Donnelly wrote:
> > > Xiubo Li wrote:
> > >
> > > [...]
> >...
- 01:35 PM Bug #50035 (Triaged): cephfs-mirror: use sensible mount/shutdown timeouts
- 11:47 AM Feature #49304 (Resolved): nfs-ganesha: plumb xattr support into FSAL_CEPH
- Patches are merged into the ganesha next branch and should make next release.
04/04/2021
- 02:06 PM Bug #50112: MDS stuck at stopping when reducing max_mds
- Some more progress. mds.0 seems actually sending an empty cache_expire message to mds.1, despite saying "successfully...
- 09:09 AM Bug #50112: MDS stuck at stopping when reducing max_mds
- I spent some more time investigating this. Hope this is helpful.
When I issue "dump cache" to the stopping mds.1, ...
04/03/2021
- 04:35 PM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- Xiubo Li wrote:
> I am doubting that if there has two tasks are doing the rename:
>
> For task1, if it just do _l...
- 04:21 PM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- Jeff Layton wrote:
> I think that after the mv, the directory should no longer be considered ORDERED. We probably _c...
- 02:25 PM Backport #50128 (Resolved): nautilus: pybind/mgr/volumes: deadlock on async job hangs finisher th...
- https://github.com/ceph/ceph/pull/41394
- 02:25 PM Backport #50127 (Resolved): pacific: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- https://github.com/ceph/ceph/pull/40630
- 02:25 PM Backport #50126 (Rejected): octopus: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- https://github.com/ceph/ceph/pull/43269
- 02:24 PM Bug #49605 (Pending Backport): pybind/mgr/volumes: deadlock on async job hangs finisher thread
04/02/2021
- 05:44 PM Bug #50112 (Triaged): MDS stuck at stopping when reducing max_mds
- 01:42 PM Bug #50112: MDS stuck at stopping when reducing max_mds
- I've figured out "7 mds.1.cache still have replicated objects" may be the reason that this MDS cannot complete its sh...
- 12:13 PM Bug #50112 (Resolved): MDS stuck at stopping when reducing max_mds
- We are trying to upgrade to v16 today. Cephadm is trying to reduce max_mds to 1 automatically. However, MDS.1 is stuc...
- 07:10 AM Feature #46865 (Resolved): client: add metric for number of pinned capabilities
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:09 AM Bug #48559 (Resolved): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:09 AM Fix #48802 (Resolved): mds: define CephFS errors that replace standard errno values
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:08 AM Bug #48912 (Resolved): ls -l in cephfs-shell tries to chase symlinks when stat'ing and errors out...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:08 AM Bug #49074 (Resolved): mds: don't start purging inodes in the middle of recovery
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:08 AM Bug #49121 (Resolved): vstart: volumes/nfs interface complaints cluster does not exists
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Bug #49391 (Resolved): qa: run fs:verify with tcmalloc
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Bug #49464 (Resolved): qa: rank_freeze prevents failover on some tests
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Bug #49507 (Resolved): qa: mds removed because trimming for too long with valgrind
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Bug #49607 (Resolved): qa: slow metadata ops during scrubbing
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Feature #49619 (Resolved): cephfs-mirror: add mirror peers via bootstrapping
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Feature #49623 (Resolved): Windows CephFS support - ceph-dokan
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:06 AM Bug #49711 (Resolved): cephfs-mirror: symbolic links do not get synchronized at times
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:06 AM Bug #49719 (Resolved): mon/MDSMonitor: standby-replay daemons should be removed when the flag is ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:06 AM Bug #49725 (Resolved): client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:06 AM Bug #49822 (Resolved): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirr...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:55 AM Bug #50108 (New): access to a file with the wrong permission when changing the parent directory's...
- Recently, we tried to manage the permission of files and directories in ceph with ACL.
Basically, we planned to set...
04/01/2021
- 06:17 PM Backport #49935 (Resolved): pacific: libcephfs: test termination "what(): Too many open files"
- 06:17 PM Backport #49932 (Resolved): pacific: MDS should return -ENODATA when asked to remove xattr that d...
- 06:17 PM Backport #49929 (Resolved): pacific: test: test_mirroring_command_idempotency (tasks.cephfs.test_...
- 06:16 PM Backport #49905 (Resolved): pacific: mgr/volumes: setuid and setgid file bits are not retained af...
- 06:16 PM Backport #49854 (Resolved): pacific: client: crashed in cct->_conf.get_val() in Client::start_tic...
- 06:16 PM Backport #49852 (Resolved): pacific: mds: race of fetching large dirfrag
- 06:16 PM Backport #49765 (Resolved): pacific: cephfs-mirror: symbolic links do not get synchronized at times
- 06:16 PM Backport #49753 (Resolved): pacific: cephfs-mirror: add mirror peers via bootstrapping
- 06:15 PM Backport #49751 (Resolved): pacific: snap-schedule doc
- 06:14 PM Backport #49713 (Resolved): pacific: mgr/nfs: Add interface to update export
- 06:14 PM Backport #49687 (Resolved): pacific: client: add metric for number of pinned capabilities
- 06:14 PM Backport #49685 (Resolved): pacific: ls -l in cephfs-shell tries to chase symlinks when stat'ing ...
- 06:14 PM Backport #49634 (Resolved): pacific: Windows CephFS support - ceph-dokan
- 06:13 PM Backport #49631 (Resolved): pacific: mds: don't start purging inodes in the middle of recovery
- 06:13 PM Backport #49630 (Resolved): pacific: qa: slow metadata ops during scrubbing
- 06:13 PM Backport #49612 (Resolved): pacific: qa: racy session evicted check
- 06:13 PM Backport #49610 (Resolved): pacific: qa: mds removed because trimming for too long with valgrind
- 06:12 PM Backport #49609 (Resolved): pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 06:12 PM Backport #49608 (Resolved): pacific: mds: define CephFS errors that replace standard errno values
- 06:12 PM Backport #49569 (Resolved): pacific: qa: rank_freeze prevents failover on some tests
- 06:12 PM Backport #49563 (Resolved): pacific: qa: run fs:verify with tcmalloc
- 06:11 PM Backport #49561 (Resolved): pacific: qa: file system deletion not complete because starter fs alr...
- 06:11 PM Backport #49520 (Resolved): pacific: client: wake up the front pos waiter
- 06:11 PM Backport #49517 (Resolved): pacific: pybind/cephfs: DT_REG and DT_LNK values are wrong
- 06:11 PM Backport #49512 (Resolved): pacific: client: allow looking up snapped inodes by inode number+snap...
- 06:10 PM Backport #49474 (Resolved): pacific: nautilus: qa: "Assertion `cb_done' failed."
- 06:10 PM Backport #49470 (Resolved): pacific: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- 06:10 PM Backport #49346 (Resolved): pacific: vstart: volumes/nfs interface complaints cluster does not ex...
- 03:45 PM Bug #49662 (Fix Under Review): ceph-dokan improvements for additional mounts
- 02:58 PM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Xiubo Li wrote:
> Patrick Donnelly wrote:
> > Xiubo Li wrote:
> >
> > [...]
> >
> > > @Patrick,
> > >
> > > ... - 12:27 PM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- More questions:
1) how large are these files (generally)?
2) at what point does the corruption start?
3) How far... - 12:05 PM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- Ok, so it is reproducible in your environment. The problem is that I'm unclear on what sort of I/O is being done here...
- 08:32 AM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- We do have a reliable repro currently yes. A file that we write to several times an hour shows clear corruption corre...
- 09:18 AM Bug #50091 (Fix Under Review): cephfs-top: exception: addwstr() returned ERR
- 04:24 AM Bug #50091 (Resolved): cephfs-top: exception: addwstr() returned ERR
- When the terminal is not wide enough, this can be reproduced 100% of the time.
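For reference, curses raises curses.error when a write does not fit inside the window, which is what a narrow terminal triggers here. A defensive sketch (not the actual cephfs-top change):
<pre><code class="python">
import curses

def safe_addstr(win, y, x, text):
    try:
        win.addstr(y, x, text)
    except curses.error:
        pass  # terminal too small; drop the overflow instead of crashing

def main(stdscr):
    rows, cols = stdscr.getmaxyx()
    # Writing more characters than fit on the last row raises curses.error.
    safe_addstr(stdscr, rows - 1, 0, "x" * (cols + 10))
    stdscr.refresh()
    stdscr.getch()

curses.wrapper(main)
</code></pre>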
- 01:32 AM Bug #50090 (Resolved): client: only check pool permissions for regular files
- There is no need to do a check_pool_perm() on anything that isn't
a regular file, as the MDS is what handles talking...
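A minimal sketch of the intended condition (the real change lives in the C++ client): only regular files need the pool-permission check, since other inode types never do direct data-pool I/O from the client.
<pre><code class="python">
import stat

def needs_pool_perm_check(st_mode: int) -> bool:
    # Pool permission checks only matter for regular files; directories,
    # symlinks, devices, etc. never write objects to the data pool directly.
    return stat.S_ISREG(st_mode)
</code></pre>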
03/31/2021
- 08:46 PM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- This is the first I've heard of this as well. You mentioned seeing this on v5.11 kernels. Have you also seen it on ea...
- 07:05 PM Bug #50083 (Triaged): CephFS file access issues using kernel driver: file overwritten with null b...
- This has never been heard of before. The most likely cause is something in your setup, e.g. a rogue process (rsync) m...
- 01:57 PM Bug #50083 (Resolved): CephFS file access issues using kernel driver: file overwritten with null ...
- Ceph cluster is running 14.2.9 (nautilus), a 3 node containerised cluster. 1 active MDS, 2 standby
Using ceph kernel...
- 06:09 PM Feature #41073 (Rejected): cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- 04:24 AM Feature #41073: cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- I think this can be closed. Right Milind?
(We dropped relying on rsync)
- 06:08 PM Feature #46432 (Resolved): cephfs-mirror: manager module interface to add/remove directory snapshots
- 04:22 AM Feature #46432 (Closed): cephfs-mirror: manager module interface to add/remove directory snapshots
- Feature available in Pacific.
- 06:08 PM Feature #44191 (Resolved): cephfs: geo-replication
- 04:22 AM Feature #44191 (Closed): cephfs: geo-replication
- Feature available in Pacific.
- 06:08 PM Feature #41074 (Resolved): pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
- 04:21 AM Feature #41074 (Closed): pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
- Patrick Donnelly wrote:
> Close this out?
Definitely.
- 03:30 PM Backport #50086 (Resolved): pacific: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError:...
- https://github.com/ceph/ceph/pull/40688
- 03:26 PM Bug #48411 (Pending Backport): tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank ...
- 02:31 PM Backport #50030 (In Progress): pacific: qa: fs:cephadm mount does not wait for mds to be created
- 02:11 PM Feature #49811 (Fix Under Review): mds: collect I/O sizes from client for cephfs-top
- 01:06 PM Bug #49939: cephfs-mirror: be resilient to recreated snapshot during synchronization
- So, I am experimenting with how MDS handles path traversals when just an inode number rather than inode number+dname ...
- 12:48 PM Cleanup #50080 (In Progress): mgr/nfs: move nfs code out of volumes plugin
- 12:45 PM Cleanup #50080 (Resolved): mgr/nfs: move nfs code out of volumes plugin
- 09:54 AM Bug #48805 (Fix Under Review): mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/...
03/30/2021
- 10:30 PM Feature #41073: cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- ping
- 10:30 PM Feature #46432: cephfs-mirror: manager module interface to add/remove directory snapshots
- Close this out?
- 10:30 PM Feature #44191: cephfs: geo-replication
- Close this out?
- 10:29 PM Feature #41074: pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
- Close this out?
- 10:28 PM Backport #49930 (Resolved): pacific: mon/MDSMonitor: standby-replay daemons should be removed whe...
- 10:27 PM Bug #49720 (Fix Under Review): mon/MDSMonitor: do not pointlessly kill standbys that are incompat...
- 08:52 PM Bug #48411 (In Progress): tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all f...
- 06:58 PM Bug #50060 (Triaged): client: access(path, X_OK) on non-executable file as root always succeeds
- 06:58 PM Bug #50060 (Resolved): client: access(path, X_OK) on non-executable file as root always succeeds
- See "[ceph-users] ceph-fuse false passed X_OK check".
Check works for non-root users.
- 04:00 PM Bug #50057 (Fix Under Review): client: openned inodes counter is inconsistent
- 03:12 PM Bug #50057 (Resolved): client: openned inodes counter is inconsistent
- ...
- 11:08 AM Bug #49466: qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJ...
- Thanks Rishabh for your analysis.
> The best fix is to add a method to teuthology.orchestra.remote.Remote. It woul...
- 03:48 AM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Patrick Donnelly wrote:
> Xiubo Li wrote:
>
> [...]
>
> > @Patrick,
> >
> > Maybe we could save a ceph repo s...
- 03:20 AM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Xiubo Li wrote:
> Okay, I was in wrong direction yesterday.
>
> I think it was the `git clone ceph...` command's ...
- 03:16 AM Bug #50048 (Fix Under Review): mds: standby-replay only trims cache when it reaches the end of th...
- 03:03 AM Bug #50048 (Resolved): mds: standby-replay only trims cache when it reaches the end of the replay...
- This could take a significant amount of time under load. Trim regularly like the active MDS.
03/29/2021
- 11:55 PM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Okay, I was in wrong direction yesterday.
I think it was the `git clone ceph...` command's problem, it took too lo...
- 01:51 PM Bug #50021 (In Progress): qa: snaptest-git-ceph failure during mon thrashing
- 01:05 PM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- The exception occurred just before the snap test at:...
- 12:58 PM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Checked all the mds/osd/mon/client/kernel/misc related logs, didn't find any error during that exception around 2021-...
- 08:19 AM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Checked the client logs in `smithi016/log/ceph-client.0.25180.log.gz`, everything works well till now, I didn't see a...
- 10:13 PM Fix #50045 (Fix Under Review): qa: test standby_replay in workloads
- 10:12 PM Fix #50045 (Resolved): qa: test standby_replay in workloads
- To improve our test coverage of this frequently enabled feature (both in cephadm and Rook).
- 01:47 PM Bug #49939 (In Progress): cephfs-mirror: be resilient to recreated snapshot during synchronization
- 01:43 PM Bug #50033 (Triaged): mgr/stats: be resilient to offline MDS rank-0
- 06:41 AM Bug #50033 (Resolved): mgr/stats: be resilient to offline MDS rank-0
- mgr/stats can repeatedly report stale perf stats when MDS rank-0 becomes offline. Even after a standby daemon transit...
- 09:15 AM Bug #50035 (Resolved): cephfs-mirror: use sensible mount/shutdown timeouts
- The mirror daemon just relies on the defaults which are pretty high:...
- 07:21 AM Bug #50020: qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
- test case fix: prio -> low
- 07:21 AM Bug #50020: qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
- The real failure was the "failed" state not reflecting in mirror status. From the daemon logs, the mirror daemon rest...
- 07:19 AM Bug #50020 (Fix Under Review): qa: "RADOS object not found (Failed to operate read op for oid cep...
03/28/2021
- 12:05 PM Backport #50030 (Resolved): pacific: qa: fs:cephadm mount does not wait for mds to be created
- https://github.com/ceph/ceph/pull/40528
- 12:04 PM Bug #49684 (Pending Backport): qa: fs:cephadm mount does not wait for mds to be created
03/27/2021
- 06:38 PM Bug #49301 (Resolved): mon/MonCap: `fs authorize` generates unparseable cap for file system name ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:37 PM Bug #49736 (Resolved): cephfs-top: missing keys in the client_metadata
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:37 PM Feature #49953 (Resolved): cephfs-top : allow configurable stats refresh interval
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:37 PM Bug #49974 (Resolved): cephfs-top: fails with exception "OPENED_FILES"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:36 PM Bug #50005 (Resolved): cephfs-top: flake8 E501 line too long error
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:39 AM Bug #50020: qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
- The index object (cephfs_mirror) is missing in the rados pool. This is created when mirroring is enabled (via mgr/mir...
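A debugging sketch for checking whether the index object exists, using the python rados binding; the pool name below is a placeholder, the object name is the one mentioned in this comment.
<pre><code class="python">
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("cephfs.a.meta")   # placeholder pool name
try:
    size, mtime = ioctx.stat("cephfs_mirror")
    print("cephfs_mirror exists, size=%d" % size)
except rados.ObjectNotFound:
    print("cephfs_mirror index object is missing")
finally:
    ioctx.close()
    cluster.shutdown()
</code></pre>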
- 12:53 AM Feature #48682 (Fix Under Review): MDSMonitor: add command to print fs flags
- 12:53 AM Fix #48683 (Fix Under Review): mds/MDSMap: print each flag value in MDSMap::dump
03/26/2021
- 10:16 PM Backport #50027 (Resolved): octopus: client: items pinned in cache preventing unmount
- https://github.com/ceph/ceph/pull/40778
- 10:15 PM Backport #50026 (Resolved): nautilus: client: items pinned in cache preventing unmount
- https://github.com/ceph/ceph/pull/40722
- 10:15 PM Backport #50025 (Resolved): pacific: client: items pinned in cache preventing unmount
- https://github.com/ceph/ceph/pull/40629
- 10:15 PM Backport #50024 (Rejected): octopus: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_n...
- 10:15 PM Backport #50023 (Rejected): nautilus: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_...
- 10:15 PM Backport #50022 (Resolved): pacific: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_n...
- https://github.com/ceph/ceph/pull/40628
- 10:13 PM Bug #48679 (Pending Backport): client: items pinned in cache preventing unmount
- 10:11 PM Bug #49936 (Pending Backport): ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= ...
- 10:08 PM Bug #50016: qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- /ceph/teuthology-archive/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/599...
- 03:25 PM Bug #50016 (New): qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- ...
- 10:07 PM Bug #48771: qa: iogen: workload fails to cause balancing
- /ceph/teuthology-archive/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/599...
- 10:05 PM Bug #50021 (Resolved): qa: snaptest-git-ceph failure during mon thrashing
- ...
- 09:57 PM Bug #50020 (Resolved): qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirr...
- ...
- 09:56 PM Bug #50019 (Closed): qa: mount failure with cephadm "probably no MDS server is up?"
- ...
- 07:29 PM Backport #49564 (Resolved): pacific: mon/MonCap: `fs authorize` generates unparseable cap for fil...
- 05:48 PM Backport #49935: pacific: libcephfs: test termination "what(): Too many open files"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40372
merged
- 05:31 PM Backport #50011 (Resolved): pacific: cephfs-top: flake8 E501 line too long error
- 12:17 PM Backport #50011 (In Progress): pacific: cephfs-top: flake8 E501 line too long error
- 12:00 PM Backport #50011 (Resolved): pacific: cephfs-top: flake8 E501 line too long error
- https://github.com/ceph/ceph/pull/40422
- 05:31 PM Backport #49994 (Resolved): pacific: cephfs-top: fails with exception "OPENED_FILES"
- 12:17 PM Backport #49994 (In Progress): pacific: cephfs-top: fails with exception "OPENED_FILES"
- 06:25 AM Backport #49994 (Resolved): pacific: cephfs-top: fails with exception "OPENED_FILES"
- https://github.com/ceph/ceph/pull/40422
- 05:24 PM Backport #49986 (Resolved): pacific: cephfs-top : allow configurable stats refresh interval
- 05:24 PM Backport #49973 (Resolved): pacific: cephfs-top: missing keys in the client_metadata
- 03:34 PM Backport #49932: pacific: MDS should return -ENODATA when asked to remove xattr that doesn't exist
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40371
merged
- 03:33 PM Backport #49685: pacific: ls -l in cephfs-shell tries to chase symlinks when stat'ing and errors ...
- Varsha Rao wrote:
> https://github.com/ceph/ceph/pull/40308
merged
- 03:33 PM Backport #49713: pacific: mgr/nfs: Add interface to update export
- Varsha Rao wrote:
> https://github.com/ceph/ceph/pull/40307
merged
- 03:32 PM Backport #49852: pacific: mds: race of fetching large dirfrag
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40252
merged
- 03:32 PM Backport #49854: pacific: client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40251
merged
- 03:31 PM Bug #49379: client: wake up the front pos waiter
- https://github.com/ceph/ceph/pull/40109 merged
- 03:30 PM Backport #49609: pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40108
merged
- 03:25 PM Backport #50015 (Resolved): pacific: qa: "AttributeError: 'NoneType' object has no attribute 'mon...
- https://github.com/ceph/ceph/pull/40645
- 03:21 PM Bug #49511 (Pending Backport): qa: "AttributeError: 'NoneType' object has no attribute 'mon_manag...
- Also seen in pacific: https://pulpito.ceph.com/yuriw-2021-03-25_21:03:23-fs-wip-yuri-testing-2021-03-25-1105-pacific-...
- 12:20 PM Documentation #49921: mgr/nfs: Update about cephadm single nfs-ganesha daemon per host limitation
- pacific backport merged in #40355
- 11:56 AM Bug #50005 (Pending Backport): cephfs-top: flake8 E501 line too long error
- 09:43 AM Bug #50005 (Fix Under Review): cephfs-top: flake8 E501 line too long error
- 09:42 AM Bug #50005 (Resolved): cephfs-top: flake8 E501 line too long error
- ...
- 11:39 AM Bug #50010 (Resolved): qa/cephfs: get_key_from_keyfile() return None when key is not found in key...
- Absence of key in a keyring file is an odd and exceptional situation. Therefore, @CephFSMount.get_key_from_keyfile()@...
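A hypothetical helper (not the actual qa code) illustrating the proposed behavior: raise on a missing key instead of returning None.
<pre><code class="python">
def get_key_from_keyring(keyring_text: str, entity: str) -> str:
    """Return the key for `entity`, raising if it is absent (illustrative only)."""
    section = None
    for raw in keyring_text.splitlines():
        line = raw.strip()
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]
        elif section == entity and "=" in line:
            name, value = line.split("=", 1)
            if name.strip() == "key":
                return value.strip()
    raise RuntimeError(f"no key found for {entity} in keyring")
</code></pre>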
- 11:03 AM Documentation #49372 (Resolved): doc: broken links multimds and kcephfs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:02 AM Documentation #49763 (Resolved): doc: Document mds cap acquisition readdir throttle
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:52 AM Documentation #50008 (Resolved): mgr/nfs: Add troubleshooting section
- 08:59 AM Bug #49466: qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJ...
- Cause of the bug: Write was attempted on @/tmp@ file with the @root@ user. Files in @/tmp@ can't be written by any us...
- 06:26 AM Bug #49466 (In Progress): qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tm...
- 08:18 AM Bug #49972 (Fix Under Review): mds: failed to decode message of type 29 v1: void CapInfoPayload::...
- 07:26 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Venky Shankar wrote:
> Xiubo Li wrote:
> > @Venkey, with your backport patch I can reproduce it locally.
>
> Whi...
- 06:20 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Venky Shankar wrote:
> Xiubo Li wrote:
> > @Venkey, with your backport patch I can reproduce it locally.
>
> Whi...
- 06:13 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Xiubo Li wrote:
> @Venkey, with your backport patch I can reproduce it locally.
Which backport? cephfs-mirror ser...
- 06:04 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- @Venkey, with your backport patch I can reproduce it locally.
- 04:58 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Patrick Donnelly wrote:
> I think this might just be because the pacific branch was missing: https://tracker.ceph.co...
- 04:37 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Locally I have built the origin/pacific ceph and with the lasted origin/testing kclient, it works well:...
- 03:06 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- ...
- 06:22 AM Bug #49974 (Pending Backport): cephfs-top: fails with exception "OPENED_FILES"
03/25/2021
- 08:37 PM Bug #49500 (Fix Under Review): qa: "Assertion `cb_done' failed."
- 06:05 PM Backport #49986 (In Progress): pacific: cephfs-top : allow configurable stats refresh interval
- 05:15 PM Backport #49986 (Resolved): pacific: cephfs-top : allow configurable stats refresh interval
- https://github.com/ceph/ceph/pull/40417
- 05:41 PM Bug #49974: cephfs-top: fails with exception "OPENED_FILES"
- PR https://github.com/ceph/ceph/pull/39972 is merged in pacific. Backport should be straightforward.
- 11:18 AM Bug #49974 (Fix Under Review): cephfs-top: fails with exception "OPENED_FILES"
- 11:12 AM Bug #49974 (Resolved): cephfs-top: fails with exception "OPENED_FILES"
- Commit 89cc2cda4aa4 introduces additional metrics but did not add those metrics to cephfs-top.
Also, include a che...
- 05:34 PM Backport #49905: pacific: mgr/volumes: setuid and setgid file bits are not retained after a subvo...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40267
merged
- 05:33 PM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Patrick Donnelly wrote:
> I think this might just be because the pacific branch was missing: https://tracker.ceph.co...
- 05:33 PM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- I think this might just be because the pacific branch was missing: https://tracker.ceph.com/issues/46865
- 01:40 PM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Patrick mentioned that this could be related to the testing kernel (as Jeff merged some of Xiubo's patches that adds ...
- 08:52 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- another instance (same branch): https://pulpito.ceph.com/vshankar-2021-03-25_05:53:38-fs-wip-cephfs-mirror-pacific-ba...
- 08:51 AM Bug #49972 (Resolved): mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- This was seen in pacific backport branch: https://pulpito.ceph.com/vshankar-2021-03-25_05:53:38-fs-wip-cephfs-mirror-...
- 05:33 PM Backport #49563: pacific: qa: run fs:verify with tcmalloc
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40091
merged
- 05:33 PM Backport #49610: pacific: qa: mds removed because trimming for too long with valgrind
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40091
merged
- 05:32 PM Backport #49634: pacific: Windows CephFS support - ceph-dokan
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40069
merged - 05:32 PM Backport #49346: pacific: vstart: volumes/nfs interface complaints cluster does not exists
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/39974
merged - 05:31 PM Backport #49687: pacific: client: add metric for number of pinned capabilities
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/39972
merged - 12:16 PM Bug #49843 (Resolved): qa: fs/snaps/snaptest-upchildrealms.sh failure
- 10:46 AM Backport #49973 (In Progress): pacific: cephfs-top: missing keys in the client_metadata
- 09:50 AM Backport #49973 (Resolved): pacific: cephfs-top: missing keys in the client_metadata
- https://github.com/ceph/ceph/pull/40402
- 09:46 AM Feature #49953 (Pending Backport): cephfs-top : allow configurable stats refresh interval
- 09:46 AM Bug #49736 (Pending Backport): cephfs-top: missing keys in the client_metadata
- 09:39 AM Bug #49736 (Fix Under Review): cephfs-top: missing keys in the client_metadata
- 03:55 AM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Jeff Layton wrote:
> Are you exporting your ceph mount via knfsd?
No, I don't have anything related to NFS deploy...
03/24/2021
- 11:24 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Jeff Layton wrote:
> Yeah, that repeated call makes it look like the client is repeatedly calling in to the MDS for ... - 10:10 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Yeah, that repeated call makes it look like the client is repeatedly calling in to the MDS for that inode number. It'...
- 09:53 PM Bug #49922 (Fix Under Review): MDS slow request lookupino #0x100 on rank 1 block forever on dispa...
- 09:01 PM Bug #49922 (In Progress): MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- 08:45 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Patrick Donnelly wrote:
> Jeff Layton wrote:
> > Maybe we could lower mds_max_caps_per_client for this test? It def... - 08:33 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Jeff Layton wrote:
> Maybe we could lower mds_max_caps_per_client for this test? It defaults to 1M now, but we could... - 06:52 PM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > So the issue is that AsyncJobs.get_job() is called with AsyncJob... - 04:01 PM Backport #49935 (In Progress): pacific: libcephfs: test termination "what(): Too many open files"
- 03:59 PM Backport #49520 (In Progress): pacific: client: wake up the front pos waiter
- 03:58 PM Backport #49609 (In Progress): pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validat...
- 03:58 PM Backport #49930 (In Progress): pacific: mon/MDSMonitor: standby-replay daemons should be removed ...
- 03:58 PM Backport #49932 (In Progress): pacific: MDS should return -ENODATA when asked to remove xattr tha...
- 03:53 PM Backport #49423 (Resolved): pacific: doc: broken links multimds and kcephfs
- 03:49 PM Backport #49877 (Resolved): pacific: doc: Document mds cap acquisition readdir throttle
- 03:47 PM Backport #49414 (Resolved): pacific: mgr/nfs: Update about user config
- 03:45 PM Backport #49951 (Resolved): pacific: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- 10:27 AM Backport #49951 (In Progress): pacific: mgr/nfs: Update about cephadm single nfs-ganesha daemon p...
- 01:50 AM Backport #49951 (Resolved): pacific: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- https://github.com/ceph/ceph/pull/40362
- 05:43 AM Feature #49953 (Fix Under Review): cephfs-top : allow configurable stats refresh interval
- 05:42 AM Feature #49953 (In Progress): cephfs-top : allow configurable stats refresh interval
- 05:39 AM Feature #49953 (Resolved): cephfs-top : allow configurable stats refresh interval
- 03:11 AM Bug #49928 (Duplicate): client: items pinned in cache preventing unmount x2
- 03:10 AM Bug #49928: client: items pinned in cache preventing unmount x2
- For the inode `0x10000001949`: since it holds the Fb cap, flushing the cap snap was delayed, but it never happened after that:
... - 12:43 AM Bug #49928 (In Progress): client: items pinned in cache preventing unmount x2
- 01:50 AM Backport #49950 (Resolved): octopus: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- https://github.com/ceph/ceph/pull/40777
- 01:47 AM Bug #49936 (Fix Under Review): ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= ...
- 01:46 AM Documentation #49921 (Pending Backport): mgr/nfs: Update about cephadm single nfs-ganesha daemon ...
03/23/2021
- 03:00 PM Backport #49564: pacific: mon/MonCap: `fs authorize` generates unparseable cap for file system na...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40086
merged - 03:00 PM Backport #49569: pacific: qa: rank_freeze prevents failover on some tests
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40082
merged - 02:56 PM Backport #49474: pacific: nautilus: qa: "Assertion `cb_done' failed."
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40093
merged - 02:55 PM Backport #49512: pacific: client: allow looking up snapped inodes by inode number+snapid tuple
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40092
merged - 02:55 PM Backport #49751: pacific: snap-schedule doc
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40090
merged - 02:53 PM Backport #49561: pacific: qa: file system deletion not complete because starter fs already destroyed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40089
merged - 02:53 PM Backport #49470: pacific: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40087
merged - 02:51 PM Backport #49517: pacific: pybind/cephfs: DT_REG and DT_LNK values are wrong
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40085
merged - 02:50 PM Backport #49608: pacific: mds: define CephFS errors that replace standard errno values
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40083
merged - 02:49 PM Backport #49612: pacific: qa: racy session evicted check
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40081
merged - 02:48 PM Backport #49630: pacific: qa: slow metadata ops during scrubbing
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40080
merged - 02:47 PM Backport #49631: pacific: mds: don't start purging inodes in the middle of recovery
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40079
merged - 01:24 PM Feature #49942 (Resolved): cephfs-mirror: enable running in HA
- cephfs-mirror and mgr/mirroring have the machinery to run/support HA but we do not have any test coverage for such a s...
- 09:13 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- I suspect that two tasks are doing the rename concurrently:
For task1, if it just does _lookup(_INPROGRESS) and ... - 03:29 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- @Xiaoxi
Is this reproducible for you? If so, how often? Locally I was trying in a loop by renaming two files for...
- The mirror daemon works with snapshot paths. It does rely on snap-id to infer deleted and renamed snapshots, but onc...
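As a hedged illustration of the mechanism described above (not the daemon's actual code; the function and data shapes are assumptions), a synchronizer that remembers the snap-id it last synced for each snapshot name can distinguish deleted, new, and recreated snapshots:

    # Illustrative sketch: classify snapshot changes by comparing name -> snap-id maps.
    # A snapshot deleted and recreated under the same name gets a new snap-id, so it
    # shows up as "recreated" rather than "unchanged".
    def classify_snapshots(synced, current):
        """synced/current: dict mapping snapshot name -> snap-id."""
        deleted = [name for name in synced if name not in current]
        new = [name for name in current if name not in synced]
        recreated = [name for name in current
                     if name in synced and current[name] != synced[name]]
        return deleted, new, recreated

    # Example: "snap1" was deleted and recreated while a sync was in flight.
    print(classify_snapshots({"snap1": 10, "snap2": 11}, {"snap1": 15, "snap3": 16}))
    # -> (['snap2'], ['snap3'], ['snap1'])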
- 07:04 AM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Patrick Donnelly wrote:
> So the issue is that AsyncJobs.get_job() is called with AsyncJobs.lock locked. Then gettin... - 05:43 AM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Logs from mds.0. Also repeating at the same frequency....
- 04:21 AM Backport #49929 (In Progress): pacific: test: test_mirroring_command_idempotency (tasks.cephfs.te...
- 03:05 AM Backport #49929 (Resolved): pacific: test: test_mirroring_command_idempotency (tasks.cephfs.test_...
- https://github.com/ceph/ceph/pull/40206
- 03:19 AM Bug #49936 (Pending Backport): ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= ...
- ...
- 03:10 AM Backport #49935 (Resolved): pacific: libcephfs: test termination "what(): Too many open files"
- https://github.com/ceph/ceph/pull/40372
- 03:10 AM Backport #49934 (Resolved): octopus: libcephfs: test termination "what(): Too many open files"
- https://github.com/ceph/ceph/pull/40776
- 03:10 AM Backport #49933 (Rejected): nautilus: MDS should return -ENODATA when asked to remove xattr that ...
- 03:10 AM Backport #49932 (Resolved): pacific: MDS should return -ENODATA when asked to remove xattr that d...
- https://github.com/ceph/ceph/pull/40371
- 03:10 AM Backport #49931 (Rejected): octopus: MDS should return -ENODATA when asked to remove xattr that d...
- 03:06 AM Bug #49559 (Pending Backport): libcephfs: test termination "what(): Too many open files"
- 03:05 AM Bug #49621 (Resolved): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestData...
- 03:05 AM Backport #49930 (Resolved): pacific: mon/MDSMonitor: standby-replay daemons should be removed whe...
- https://github.com/ceph/ceph/pull/40325
- 03:05 AM Bug #49833 (Pending Backport): MDS should return -ENODATA when asked to remove xattr that doesn't...
- 03:04 AM Bug #49822 (Pending Backport): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin....
- 03:03 AM Bug #49719 (Pending Backport): mon/MDSMonitor: standby-replay daemons should be removed when the ...
- 02:53 AM Bug #49928 (Duplicate): client: items pinned in cache preventing unmount x2
- ...
03/22/2021
- 05:04 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Jeff Layton wrote:
> Was there anything useful in the logs from mds 1 about the op and what state it's in?
I set ... - 03:31 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- I'm unfamiliar with the MDS code, so here are some notes as I peruse it:
Ok, so the TrackedOp entries get put on the list wh... - 12:05 PM Bug #49922 (Resolved): MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- We have two MDSs deployed by cephadm.
Several hours ago, we got a health warning:... - 02:02 PM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- I think that after the mv, the directory should no longer be considered ORDERED. We probably _can_ consider it comple...
- 01:41 PM Bug #49912 (Triaged): client: dir->dentries inconsistent, both newname and oldname points to same...
- 01:51 PM Bug #49845 (Resolved): qa: failed umount in test_volumes
- The client kernel in this test had a bad patch in it that has since been fixed. See:
https://tracker.ceph.com/... - 12:44 PM Backport #49685 (In Progress): pacific: ls -l in cephfs-shell tries to chase symlinks when stat'i...
- https://github.com/ceph/ceph/pull/40308
- 12:36 PM Backport #49713 (In Progress): pacific: mgr/nfs: Add interface to update export
- https://github.com/ceph/ceph/pull/40307
- 12:23 PM Backport #49414 (In Progress): pacific: mgr/nfs: Update about user config
- 11:58 AM Documentation #49921 (In Progress): mgr/nfs: Update about cephadm single nfs-ganesha daemon per h...
- 11:38 AM Documentation #49921 (Resolved): mgr/nfs: Update about cephadm single nfs-ganesha daemon per host...
- 03:45 AM Feature #49811: mds: collect I/O sizes from client for cephfs-top
- @Patrick, @Jeff
Comparing with iotop/iostat:
We may also need to collect the average IO READ/WRITE speed per-seco... - 03:23 AM Feature #49811 (In Progress): mds: collect I/O sizes from client for cephfs-top
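As a rough sketch of the per-second averages being suggested (hypothetical names; this is not the cephfs-top implementation), one option is to keep the previous cumulative byte counters per client and divide the delta by the sampling interval:

    import time

    # Illustrative sketch: derive average read/write speed from cumulative byte counters.
    class SpeedTracker:
        def __init__(self):
            self.last_read = self.last_written = 0
            self.last_ts = time.monotonic()

        def sample(self, read_bytes, written_bytes):
            """read_bytes/written_bytes are cumulative counters reported for a client."""
            now = time.monotonic()
            interval = max(now - self.last_ts, 1e-6)
            read_speed = (read_bytes - self.last_read) / interval        # bytes/sec
            write_speed = (written_bytes - self.last_written) / interval
            self.last_read, self.last_written, self.last_ts = read_bytes, written_bytes, now
            return read_speed, write_speed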
- 02:33 AM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- So the issue is that AsyncJobs.get_job() is called with AsyncJobs.lock locked. Then getting the next job involves ope...
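Reduced to a minimal Python sketch (the names mirror the report, but the body is illustrative rather than the mgr/volumes code), the deadlock shape is: the job-picking path holds the shared lock while blocking on work that only the finisher thread can complete, and the finisher needs the same lock:

    import threading

    # Illustrative sketch of the reported deadlock shape (not the actual code).
    class AsyncJobs:
        def __init__(self):
            self.lock = threading.Lock()
            self.done = threading.Event()

        def get_job(self):
            with self.lock:                  # lock held ...
                self.done.wait(timeout=2)    # ... while blocking on work the finisher must signal

        def finisher_callback(self):
            with self.lock:                  # finisher blocks here, so done is never set -> hang
                self.done.set()

The timeout is only there so the sketch terminates; without it, neither thread makes progress.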
- 02:30 AM Feature #46866: kceph: add metric for number of pinned capabilities
- Pushing the kclient patchwork.
03/21/2021
- 05:50 PM Bug #49605 (In Progress): pybind/mgr/volumes: deadlock on async job hangs finisher thread
- ...
- 04:51 PM Bug #49912 (Resolved): client: dir->dentries inconsistent, both newname and oldname points to sam...
- We have an application that uses the FS as a lock --- an empty file named .dw_gem2_cmn_sd_{INPROGRESS/COMPLETE}, applic...
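A hedged sketch of the application pattern described in the report (the marker file names are an assumed expansion of the names in the report; everything else is illustrative): an empty marker file is renamed from an INPROGRESS name to a COMPLETE name, and other processes infer the job state from which name is present, so a directory listing that shows both names pointing at the same inode breaks the protocol:

    import os

    INPROGRESS = ".dw_gem2_cmn_sd_INPROGRESS"   # assumed expansion of {INPROGRESS/COMPLETE}
    COMPLETE = ".dw_gem2_cmn_sd_COMPLETE"

    def start_job(workdir):
        # Create the empty marker in the INPROGRESS state.
        open(os.path.join(workdir, INPROGRESS), "w").close()

    def finish_job(workdir):
        # rename() should leave exactly one of the two names present; the reported
        # bug is that both names could end up resolving to the same inode.
        os.rename(os.path.join(workdir, INPROGRESS), os.path.join(workdir, COMPLETE))

    def job_state(workdir):
        names = set(os.listdir(workdir))
        if COMPLETE in names:
            return "complete"
        return "in progress" if INPROGRESS in names else "unknown"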
03/20/2021
- 04:19 AM Backport #49903 (In Progress): nautilus: mgr/volumes: setuid and setgid file bits are not retaine...
- 03:15 AM Backport #49903 (Resolved): nautilus: mgr/volumes: setuid and setgid file bits are not retained a...
- https://github.com/ceph/ceph/pull/40270
- 04:01 AM Backport #49904 (In Progress): octopus: mgr/volumes: setuid and setgid file bits are not retained...
- 03:15 AM Backport #49904 (Resolved): octopus: mgr/volumes: setuid and setgid file bits are not retained af...
- https://github.com/ceph/ceph/pull/40268
- 03:29 AM Backport #49905 (In Progress): pacific: mgr/volumes: setuid and setgid file bits are not retained...
- 03:15 AM Backport #49905 (Resolved): pacific: mgr/volumes: setuid and setgid file bits are not retained af...
- https://github.com/ceph/ceph/pull/40267
- 03:12 AM Bug #49882 (Pending Backport): mgr/volumes: setuid and setgid file bits are not retained after a ...
03/19/2021
- 09:46 PM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Debugging PR: https://github.com/ceph/ceph/pull/40264
- 09:20 PM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- /ceph/teuthology-archive/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/596...
- 06:13 PM Backport #49852 (In Progress): pacific: mds: race of fetching large dirfrag
- 06:12 PM Backport #49854 (In Progress): pacific: client: crashed in cct->_conf.get_val() in Client::start_...
- 06:12 PM Backport #49877 (In Progress): pacific: doc: Document mds cap acquisition readdir throttle
- 02:46 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Maybe we could lower mds_max_caps_per_client for this test? It defaults to 1M now, but we could take that down to 500...
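For reference, the limit can be lowered for a test run with a runtime config override; a minimal sketch (the value 500000 is only an example of "taking it down from 1M", not an agreed number):

    import subprocess

    # Sketch: lower mds_max_caps_per_client at runtime so cap recall kicks in sooner.
    subprocess.run(
        ["ceph", "config", "set", "mds", "mds_max_caps_per_client", "500000"],
        check=True,
    )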
- 02:39 PM Bug #49500: qa: "Assertion `cb_done' failed."
- I'm not sure that setting is enough to explain this. AFAICT, that setting is only consulted in notify_health(), so I ...
- 12:57 PM Backport #49753 (In Progress): pacific: cephfs-mirror: add mirror peers via bootstrapping
- 12:57 PM Backport #49765 (In Progress): pacific: cephfs-mirror: symbolic links do not get synchronized at ...
- 10:07 AM Feature #48943 (Resolved): cephfs-mirror: display cephfs mirror instances in `ceph status` command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:06 AM Bug #49419 (Resolved): cephfs-mirror: dangling pointer in PeerReplayer
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:12 AM Bug #49882 (Fix Under Review): mgr/volumes: setuid and setgid file bits are not retained after a ...
03/18/2021
- 03:07 PM Bug #49882 (In Progress): mgr/volumes: setuid and setgid file bits are not retained after a subvo...
- 02:23 PM Bug #49882 (Resolved): mgr/volumes: setuid and setgid file bits are not retained after a subvolum...
- setuid and setgid file bits are not retained after a subvolume snapshot restore
Reproducer on vstart cluster:
#... - 01:53 PM Backport #49686: pacific: cephfs-mirror: display cephfs mirror instances in `ceph status` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39973
m... - 01:53 PM Backport #49432: pacific: cephfs-mirror: dangling pointer in PeerReplayer
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39810
m... - 01:26 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- Jeff Layton wrote:
> John, I fixed a similar sounding bug in the MDS yesterday:
>
> https://tracker.ceph.com/... - 01:01 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- John, I fixed a similar sounding bug in the MDS yesterday:
https://tracker.ceph.com/issues/49833
Are you ab... - 09:29 AM Bug #49736: cephfs-top: missing keys in the client_metadata
- https://github.com/ceph/ceph/pull/40210
- 04:50 AM Bug #44100: cephfs rsync kworker high load.
- We have also experienced a similar issue, where kernel mount performance degraded severely while doing rsync (running...
- 02:45 AM Backport #49877 (Resolved): pacific: doc: Document mds cap acquisition readdir throttle
- https://github.com/ceph/ceph/pull/40250
- 02:41 AM Documentation #49763 (Pending Backport): doc: Document mds cap acquisition readdir throttle
03/17/2021
- 09:47 PM Feature #48791 (Need More Info): mds: support file block size
- 09:45 PM Feature #41073: cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- Milind, what's the status of this ticket?
- 07:03 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- I'll also note that I did find the following issue
https://tracker.ceph.com/issues/49833
But forgot to reference ... - 07:00 PM Bug #49873 (Triaged): ceph_lremovexattr does not return error on file in ceph pacific
- John Mulligan wrote:
> To try and clarify:
>
> The xattr is set on the link. There should be no xattr of that nam... - 06:52 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- To try and clarify:
The xattr is set on the link. There should be no xattr of that name on the file the link point... - 06:46 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- John Mulligan wrote:
> While running our go-ceph CI against pacific for the first time our CI failed in the xattr te... - 06:24 PM Bug #49873 (Duplicate): ceph_lremovexattr does not return error on file in ceph pacific
- While running our go-ceph CI against pacific for the first time our CI failed in the xattr tests.
It expected a call... - 04:02 PM Bug #49859 (Triaged): Snapshot schedules are not deleted after enabling/disabling snap module
- 10:15 AM Bug #49859 (Triaged): Snapshot schedules are not deleted after enabling/disabling snap module
- Assuming the following:...
- 03:41 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- Xiubo Li wrote:
> It seems the tests will fire many event works, which will open many fds, the last issue about this... - 01:48 PM Backport #49686 (Resolved): pacific: cephfs-mirror: display cephfs mirror instances in `ceph stat...
- 01:46 PM Backport #49432 (Resolved): pacific: cephfs-mirror: dangling pointer in PeerReplayer
- 10:02 AM Bug #49621: qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
- There is another error log ahead of the above call trace:...
- 09:59 AM Bug #49621 (Fix Under Review): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan....
- 04:28 AM Bug #49621 (In Progress): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestD...
- 04:28 AM Feature #49811: mds: collect I/O sizes from client for cephfs-top
- Sure, will work on it. Thanks.
- 03:30 AM Backport #49854 (Resolved): pacific: client: crashed in cct->_conf.get_val() in Client::start_tic...
- https://github.com/ceph/ceph/pull/40251
- 03:25 AM Backport #49853 (Resolved): nautilus: mds: race of fetching large dirfrag
- https://github.com/ceph/ceph/pull/40720
- 03:25 AM Backport #49852 (Resolved): pacific: mds: race of fetching large dirfrag
- https://github.com/ceph/ceph/pull/40252
- 03:25 AM Bug #49725 (Pending Backport): client: crashed in cct->_conf.get_val() in Client::start_tick_thre...
- 03:25 AM Backport #49851 (Resolved): octopus: mds: race of fetching large dirfrag
- https://github.com/ceph/ceph/pull/40774
- 03:23 AM Bug #49617 (Pending Backport): mds: race of fetching large dirfrag
03/16/2021
- 08:42 PM Bug #49843 (Fix Under Review): qa: fs/snaps/snaptest-upchildrealms.sh failure
- Bad error handling in this patch:
https://lore.kernel.org/ceph-devel/20210315180717.266155-3-jlayton@kernel.or... - 08:12 PM Bug #49843: qa: fs/snaps/snaptest-upchildrealms.sh failure
- This may be fallout from the recent snapdir handling fixes. I'll take a look.
- 07:53 PM Bug #49843 (Resolved): qa: fs/snaps/snaptest-upchildrealms.sh failure
- ...
- 08:01 PM Bug #49845 (Resolved): qa: failed umount in test_volumes
- ...
- 07:21 PM Bug #49837 (Fix Under Review): mgr/pybind/snap_schedule: do not fail when no fs snapshots are ava...
- 05:16 PM Bug #49837 (Resolved): mgr/pybind/snap_schedule: do not fail when no fs snapshots are available
- When the json output is requested, we should not return an error but just an empty dict:...
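A minimal sketch of the requested behavior (hypothetical handler, not the snap_schedule module's actual code): with no schedules present, the JSON listing serializes an empty dict instead of propagating an error:

    import json

    # Hypothetical sketch: list handler returning an empty JSON dict when nothing is scheduled.
    def list_schedules_json(schedules=None):
        """schedules: dict of path -> schedule info, possibly None or empty."""
        return 0, json.dumps(schedules or {}), ""   # mgr-module style (retval, stdout, stderr)

    print(list_schedules_json())   # -> (0, '{}', '')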
- 05:35 PM Bug #49833 (Fix Under Review): MDS should return -ENODATA when asked to remove xattr that doesn't...
- 04:36 PM Bug #49833: MDS should return -ENODATA when asked to remove xattr that doesn't exist
- I'll take this one since I have a patch (and testcase).
- 04:22 PM Bug #49833 (Triaged): MDS should return -ENODATA when asked to remove xattr that doesn't exist
- 04:16 PM Bug #49833 (Resolved): MDS should return -ENODATA when asked to remove xattr that doesn't exist
- This patch adds a small gtest that shows that the handling of removexattr is wrong:...
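The expected semantics can also be checked from a mounted client with the standard library alone; a hedged sketch (the mount path is hypothetical): removing an xattr that was never set should fail with ENODATA rather than succeed.

    import errno
    import os

    # Sketch: on a CephFS (or any Linux) mount, removing a nonexistent xattr should raise
    # OSError(ENODATA); the reported bug was the MDS returning success instead.
    path = "/mnt/cephfs/testfile"   # hypothetical mount point and file
    open(path, "w").close()
    try:
        os.removexattr(path, "user.does_not_exist")
    except OSError as e:
        assert e.errno == errno.ENODATA   # expected behavior after the fix
    else:
        raise AssertionError("removexattr of a missing xattr unexpectedly succeeded")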
- 04:50 PM Bug #49834 (Won't Fix - EOL): octopus: qa: test_statfs_on_deleted_fs failure
- https://pulpito.ceph.com/yuriw-2021-03-13_22:13:22-fs-wip-yuriw-octopus-15.2.10-distro-basic-smithi/5962994/
Test ... - 04:34 PM Bug #49826: Multiple nfs-ganesha instances and strays objects in CephFS
- The strays behavior makes some sense, since we don't really do anything client-side to notify the application when th...
- 04:27 PM Bug #49826: Multiple nfs-ganesha instances and strays objects in CephFS
- Aleksandr Rudenko wrote:
> Usual stray objects are purged after 10-20 secs. But not in this case. In this case stray... - 07:15 AM Bug #49826 (New): Multiple nfs-ganesha instances and strays objects in CephFS
- Hi!
We have one CephFS and two standalone ganesha instances on different hosts which export the same directory.
W... - 12:15 PM Bug #49736: cephfs-top: missing keys in the client_metadata
- Venky Shankar wrote:
> MDSRank::dump_sessions() has this filter:
>
> [...]
>
> ... which might be the reason tha... - 05:34 AM Bug #49822 (Fix Under Review): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin....
- 04:09 AM Bug #49822 (Resolved): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirr...
- With https://github.com/ceph/ceph/pull/39845/commits/a04010e9490aa726d219c41139c27417dac836e2 peer_add monitor interf...
- 02:46 AM Bug #49719 (Fix Under Review): mon/MDSMonitor: standby-replay daemons should be removed when the ...
03/15/2021
- 06:25 PM Feature #49811 (Resolved): mds: collect I/O sizes from client for cephfs-top
- An average is a start but a histogram would be better for this kind of data.
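A minimal sketch of the histogram idea (illustrative only, not the eventual cephfs-top/MDS design): bucket each I/O size into power-of-two bins so the distribution, not just the mean, is visible:

    # Illustrative sketch: power-of-two histogram of I/O sizes.
    def add_to_histogram(hist, size):
        """hist: dict mapping bucket upper bound (bytes) -> count."""
        bucket = 1
        while bucket < size:
            bucket <<= 1
        hist[bucket] = hist.get(bucket, 0) + 1

    hist = {}
    for sz in (4096, 5000, 131072, 200, 4096):
        add_to_histogram(hist, sz)
    print(hist)   # -> {4096: 2, 8192: 1, 131072: 1, 256: 1}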
- 05:44 AM Backport #49520: pacific: client: wake up the front pos waiter
- Patrick Donnelly wrote:
> Xiubo, please do this backport.
The backport PR: https://github.com/ceph/ceph/pull/40109 - 05:38 AM Backport #49609: pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Patrick Donnelly wrote:
> Xiubo, please do this backport.
The backport PR: https://github.com/ceph/ceph/pull/40108