Activity

From 05/09/2023 to 06/07/2023

06/07/2023

12:23 PM Bug #61444 (Fix Under Review): mds: session ls command appears twice in command listing
Neeraj Pratap Singh
10:46 AM Bug #61397 (Fix Under Review): cephfs-top: enhance --dump code to include the missing fields
Jos Collin
08:36 AM Bug #61610: quincy: CommandFailedError for qa/workunits/suites/fsstress.sh
quincy:
http://pulpito.front.sepia.ceph.com/yuriw-2023-05-31_21:56:15-fs-wip-yuri6-testing-2023-05-31-0933-quincy-di...
Milind Changire
08:02 AM Bug #61610 (New): quincy: CommandFailedError for qa/workunits/suites/fsstress.sh
... Milind Changire
08:29 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
quincy:
http://pulpito.front.sepia.ceph.com/yuriw-2023-05-31_21:56:15-fs-wip-yuri6-testing-2023-05-31-0933-quincy-di...
Milind Changire
07:44 AM Bug #61609 (New): quincy: CommandFailedError for qa/workunits/libcephfs/test.sh
... Milind Changire

06/06/2023

02:02 PM Feature #61599 (Fix Under Review): mon/MDSMonitor: optionally forbid to use standby for another f...
Mykola Golub
01:03 PM Feature #61599 (Pending Backport): mon/MDSMonitor: optionally forbid to use standby for another f...
Currently, if a standby for the current fs is not available, the mon will look for a standby for another fs. Although it makes ... Mykola Golub
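A rough sketch of how the proposed knob might be used once merged (the flag name below is an assumption based on the feature description, not a confirmed interface):
  $ ceph fs set cephfs_a refuse_standby_for_another_fs true   # assumed flag name: keep this fs's standbys from being consumed by other filesystems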
05:19 AM Bug #61584 (Fix Under Review): mds: the parent dir's mtime will be overwritten by update cap reques...
Xiubo Li
02:31 AM Feature #61595 (Pending Backport): Consider setting "bulk" autoscale pool flag when automatically...
Currently, when a CephFS volume is created with the command @ceph fs volume create x@, the command automatically creates the ... Voja Molani
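A minimal CLI sketch of the idea (assuming the volume module's usual cephfs.<volname>.data pool naming; verify the actual pool names with @ceph fs volume info x@ or @ceph osd pool ls@):
  $ ceph fs volume create x
  $ ceph osd pool set cephfs.x.data bulk true   # hint the pg autoscaler that the data pool is expected to grow large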

06/05/2023

12:34 PM Bug #61584: mds: the parent dir's mtime will be overwritten by update cap request when making dirs
Well, even after fixing the issue in https://tracker.ceph.com/issues/61584#note-2, the problem still exists.
I checked the ...
Xiubo Li
06:06 AM Bug #61584: mds: the parent dir's mtime will be overwritten by update cap request when making dirs
I reproduced it by creating *dirk4444/dirk5555* and found the root cause:... Xiubo Li
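A rough reproduction sketch on a kclient mount (the mount path is illustrative):
  $ cd /mnt/kcephfs
  $ stat -c %y .                  # record the parent dir's mtime
  $ mkdir -p dirk4444/dirk5555
  $ stat -c %y .                  # compare: the mtime from the mkdir should not later be overwritten by an update cap request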
05:51 AM Bug #61584: mds: the parent dir's mtime will be overwritten by update cap request when making dirs
Tested with the latest ceph code; I could only reproduce this with kclient, while libcephfs worked well. Xiubo Li
05:51 AM Bug #61584 (Fix Under Review): mds: the parent dir's mtime will be overwritten by update cap reques...
The *ceph-user* mail list link: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/WZWGNLT2XEPV2BYBNL6MW... Xiubo Li
06:44 AM Bug #61394: qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726...
https://pulpito.ceph.com/yuriw-2023-06-02_15:02:25-fs-reef-release-distro-default-smithi/7294368 Kotresh Hiremath Ravishankar
06:44 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
https://pulpito.ceph.com/yuriw-2023-06-02_15:02:25-fs-reef-release-distro-default-smithi/7294367 Kotresh Hiremath Ravishankar
06:43 AM Bug #48773: qa: scrub does not complete
https://pulpito.ceph.com/yuriw-2023-06-02_15:02:25-fs-reef-release-distro-default-smithi/7294366 Kotresh Hiremath Ravishankar
06:42 AM Bug #61574: qa: build failure for mdtest project
https://pulpito.ceph.com/yuriw-2023-06-02_15:02:25-fs-reef-release-distro-default-smithi/7294361
https://pulpito.cep...
Kotresh Hiremath Ravishankar
06:42 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
https://pulpito.ceph.com/yuriw-2023-06-02_15:02:25-fs-reef-release-distro-default-smithi/7294354
https://pulpito.cep...
Kotresh Hiremath Ravishankar
06:42 AM Bug #61399: qa: build failure for ior
https://pulpito.ceph.com/yuriw-2023-06-02_15:02:25-fs-reef-release-distro-default-smithi/7294351 Kotresh Hiremath Ravishankar
06:41 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
https://pulpito.ceph.com/yuriw-2023-06-02_15:02:25-fs-reef-release-distro-default-smithi/7294348
https://pulpito.cep...
Kotresh Hiremath Ravishankar
06:40 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
https://pulpito.ceph.com/yuriw-2023-06-02_15:02:25-fs-reef-release-distro-default-smithi/7294346 Kotresh Hiremath Ravishankar

06/02/2023

06:47 PM Bug #61363: ceph mds keeps crashing with ceph_assert(auth_pins == 0)
So the assert is that auth_pins != 0. Presumably because auth_pins > 0 and we still have one outstanding? (Or, possib... Greg Farnum
11:32 AM Bug #61186: mgr/nfs: hitting incomplete command returns same suggestion twice
Thanks to John Mulligan for debugging the mgr code and finding the root cause.
NFS commands are decorated using CLICom...
Dhairya Parmar
11:14 AM Bug #61397 (In Progress): cephfs-top: enhance --dump code to include the missing fields
Jos Collin
10:07 AM Bug #61575 (New): qa: cfuse_workunit_suites_fsstress fails because mds_thrash.py asserts
Description: fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon... Kotresh Hiremath Ravishankar
09:32 AM Bug #61394 (Fix Under Review): qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4...
Xiubo Li
09:09 AM Bug #61394: qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726...
Patrick Donnelly wrote:
> Client log here:
>
> /teuthology/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-0...
Xiubo Li
09:05 AM Bug #61394: qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726...
another instance in reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7...
Kotresh Hiremath Ravishankar
08:57 AM Bug #61394 (In Progress): qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298),...
Xiubo Li
08:57 AM Bug #61394: qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726...
Patrick,
From reading the *src/test/fs/test_trim_caps.cc* and the test case, the expected result is that the two m...
Xiubo Li
07:35 AM Bug #61394: qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726...
Checked the logs; the client just stopped suddenly for some reason, and I haven't found any suspicious logs yet. Xiubo Li
09:20 AM Bug #61574 (Pending Backport): qa: build failure for mdtest project
... Kotresh Hiremath Ravishankar
09:05 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7288953
Kotresh Hiremath Ravishankar
09:02 AM Bug #59344: qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7288914
Kotresh Hiremath Ravishankar
09:01 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7288905
Kotresh Hiremath Ravishankar
09:01 AM Bug #59346: qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not rai...
reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7288903
Kotresh Hiremath Ravishankar
09:00 AM Bug #61399: qa: build failure for ior
reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7288890
Kotresh Hiremath Ravishankar
08:59 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7288884
https://pulp...
Kotresh Hiremath Ravishankar
08:58 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7288880
Kotresh Hiremath Ravishankar
08:58 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7288877
https://pulp...
Kotresh Hiremath Ravishankar
08:57 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
reef:
https://pulpito.ceph.com/yuriw-2023-05-28_14:46:14-fs-reef-release-distro-default-smithi/7288861
https://pulp...
Kotresh Hiremath Ravishankar

06/01/2023

01:07 PM Backport #58573 (Resolved): pacific: mds: fragment directory snapshots
Patrick Donnelly
11:38 AM Bug #61552 (Fix Under Review): ceph-fuse: generic/193 failed on non-root user chmod a+r on root o...
Xiubo Li
04:40 AM Bug #61552 (In Progress): ceph-fuse: generic/193 failed on non-root user chmod a+r on root owned ...
Xiubo Li
04:40 AM Bug #61552 (Fix Under Review): ceph-fuse: generic/193 failed on non-root user chmod a+r on root o...
... Xiubo Li
03:14 AM Bug #61551 (Fix Under Review): ceph-fuse: generic/192 failed with "delta1 is NOT in range 5 .. 7"
Sent a patch to *xfstests-dev* to skip the test cases when *atime* is required.
The patchwork link: ht...
Xiubo Li
02:50 AM Bug #61551: ceph-fuse: generic/192 failed with "delta1 is NOT in range 5 .. 7"
This is similar to https://tracker.ceph.com/issues/53844, which is for kclient. We should disable the tests when th... Xiubo Li
02:48 AM Bug #61551 (Fix Under Review): ceph-fuse: generic/192 failed with "delta1 is NOT in range 5 .. 7"
... Xiubo Li
02:40 AM Backport #58347 (Resolved): quincy: mds: fragment directory snapshots
Patrick Donnelly

05/31/2023

12:08 PM Bug #61407: mds: abort on CInode::verify_dirfrags
Suggest you grep for this assert in older teuthology runs of the `fs` suite to locate any missed past occurrences. That m... Patrick Donnelly
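A sketch of such a search over archived teuthology logs (the archive layout and run name are illustrative):
  $ zgrep -l 'verify_dirfrags' /a/<run-name>/*/remote/*/log/ceph-mds.*.log.gz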
11:37 AM Bug #61501: ceph-fuse: generic/126 failed due to file couldn't be executed without the 'r' mode
The execlp(), etc. system calls will ignore the 'r' permission in the VFS,
and allow a file without the 'r' permission to still ...
Xiubo Li
11:37 AM Bug #61501: ceph-fuse: generic/126 failed due to file couldn't be executed without the 'r' mode
With this fix it worked as expected:... Xiubo Li
10:44 AM Bug #61501 (Fix Under Review): ceph-fuse: generic/126 failed due to file couldn't be executed wit...
Xiubo Li
10:56 AM Bug #61523 (Fix Under Review): client: do not send metrics until the MDS rank is ready
Xiubo Li
10:49 AM Bug #61523 (Resolved): client: do not send metrics until the MDS rank is ready
In some cases, when there are a lot of clients and these clients also have a lot of known requests that need to be replayed, the... Xiubo Li

05/30/2023

12:59 PM Bug #61394: qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726...
Client log here:
/teuthology/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default...
Patrick Donnelly
12:41 PM Bug #61459 (Fix Under Review): mds: session in the importing state cannot be cleared if an export...
Milind Changire
09:09 AM Bug #61501: ceph-fuse: generic/126 failed due to file couldn't be executed without the 'r' mode
For *kclient* the *VFS* will do the permission check, and in the case of a file without the 'r' permission but with 'x... Xiubo Li
09:06 AM Bug #61501 (Fix Under Review): ceph-fuse: generic/126 failed due to file couldn't be executed wit...
... Xiubo Li
09:07 AM Bug #61243 (In Progress): qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
Xiubo Li
07:33 AM Bug #61496 (Fix Under Review): ceph-fuse: generic/020 failed of "No space left on device"
Sent a patch to *xfstests-dev*; the patchwork link: https://patchwork.kernel.org/project/ceph-devel/patch/202305... Xiubo Li
01:31 AM Bug #61496: ceph-fuse: generic/020 failed of "No space left on device"
... Xiubo Li
01:20 AM Bug #61496 (In Progress): ceph-fuse: generic/020 failed of "No space left on device"
This is an *xfstests-dev* bug. Xiubo Li
01:19 AM Bug #61496 (Fix Under Review): ceph-fuse: generic/020 failed of "No space left on device"
... Xiubo Li
12:38 AM Bug #50719: xattr returning from the dead (sic!)
Tobias Hachmer wrote:
> Hi Xiubo,
>
> thanks for taking care of this issue and the debug info.
>
> We have her...
Xiubo Li

05/29/2023

08:43 AM Cleanup #61482: mgr/nfs: remove deprecated `nfs export delete` and `nfs cluster delete` interfaces
Dhairya Parmar wrote:
> [0] deprecated both the interfaces, and introduced `nfs export rm` and `nfs cluster rm` inte...
Dhairya Parmar
08:42 AM Cleanup #61482 (New): mgr/nfs: remove deprecated `nfs export delete` and `nfs cluster delete` int...
[0] deprecated both the interfaces, and introduced `nfs export rm` and `nfs cluster rm` interfaces. It would be good ... Dhairya Parmar
08:09 AM Bug #61184 (Closed): mgr/nfs: setting config using external file gets overridden
Identical to https://tracker.ceph.com/issues/61183 and thus the same rationale: `ceph nfs cluster` and `ceph nfs expo... Dhairya Parmar
08:07 AM Bug #61183 (Closed): mgr/nfs: Applying for first time, NFS-Ganesha Config Added Successfully but ...
`ceph nfs cluster` and `ceph nfs export` are two different interfaces, adding custom export block to cluster config u... Dhairya Parmar
07:53 AM Bug #59463 (Closed): mgr/nfs: Setting NFS export config using -i option is not working
`ceph nfs cluster config set <cluster_id> -i <config_file>` isn't actually meant for creating exports, rather it is u... Dhairya Parmar
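For reference, a sketch of the interfaces that are intended for exports (exact syntax varies by release and the names below are illustrative; see the mgr/nfs docs):
  $ ceph nfs export create cephfs --cluster-id mycluster --pseudo-path /cephfs --fsname myfs
  $ ceph nfs export apply mycluster -i export.json   # create/update exports from a JSON or ganesha export block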

05/26/2023

09:21 AM Bug #58949 (Rejected): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
Hi Rishabh,
You are right. There was already a ticket (https://tracker.ceph.com/issues/59346) by Xiubo. I will clo...
Kotresh Hiremath Ravishankar
09:11 AM Bug #58949: qa: test_cephfs.test_disk_quota_exceeeded_error is racy
Hi Kotresh. This ticket was closed because the failure described in this ticket was not due to a bug but due to some ... Rishabh Dave
09:20 AM Bug #59346: qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not rai...
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/7270166 Kotresh Hiremath Ravishankar
06:24 AM Bug #61459 (Fix Under Review): mds: session in the importing state cannot be cleared if an export...
The related sessions in the importer are in the importing state ('Session::is_importing' returns true) when the state o... Zhansong Gao
01:27 AM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
Greg Farnum wrote:
> > If the rstat is enabled for the .snap snapdir it should report the total size for all the sna...
Xiubo Li

05/25/2023

05:55 PM Bug #61444 (Resolved): mds: session ls command appears twice in command listing
https://github.com/ceph/ceph/blob/3bc1ea5519c0ec98894c5cc16c4404ab7bd186bc/src/mds/MDSDaemon.cc#L379
and
https:...
Patrick Donnelly
12:38 PM Backport #61426 (Resolved): pacific: mon/MDSMonitor: daemon booting may get failed if mon handles...
https://github.com/ceph/ceph/pull/52244 Backport Bot
12:38 PM Backport #61425 (Resolved): quincy: mon/MDSMonitor: daemon booting may get failed if mon handles ...
https://github.com/ceph/ceph/pull/52243 Backport Bot
12:38 PM Backport #61424 (Resolved): reef: mon/MDSMonitor: daemon booting may get failed if mon handles up...
https://github.com/ceph/ceph/pull/52242 Backport Bot
12:26 PM Bug #59318 (Pending Backport): mon/MDSMonitor: daemon booting may get failed if mon handles up:bo...
Patrick Donnelly
11:37 AM Documentation #61185 (Closed): mgr/nfs: ceph nfs cluster config reset CLUSTER_NAME -i PATH_TO_CON...
Exists only in redhat docs, ceph docs look good https://docs.ceph.com/en/latest/mgr/nfs/ Dhairya Parmar
11:24 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
The kernel_untar_build.sh test passes with the latest code (HEAD 17f4abe9c9c) in the main branch.
Xiubo pointed out that "a rever...
Milind Changire

05/24/2023

09:08 PM Backport #61415 (Resolved): quincy: mon/MDSMonitor: do not trigger propose on error from prepare_...
https://github.com/ceph/ceph/pull/52239 Backport Bot
09:08 PM Backport #61414 (Resolved): pacific: mon/MDSMonitor: do not trigger propose on error from prepare...
https://github.com/ceph/ceph/pull/52240 Backport Bot
09:08 PM Backport #61413 (Resolved): reef: mon/MDSMonitor: do not trigger propose on error from prepare_up...
https://github.com/ceph/ceph/pull/52238 Backport Bot
09:08 PM Backport #61412 (Resolved): quincy: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
https://github.com/ceph/ceph/pull/52234 Backport Bot
09:08 PM Backport #61411 (Resolved): pacific: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
https://github.com/ceph/ceph/pull/52233 Backport Bot
09:08 PM Backport #61410 (Resolved): reef: mon/MDSMonitor: may lookup non-existent fs in current MDSMap
https://github.com/ceph/ceph/pull/52232 Backport Bot
09:04 PM Bug #58971 (Pending Backport): mon/MDSMonitor: do not trigger propose on error from prepare_update
Patrick Donnelly
09:01 PM Bug #59691 (Pending Backport): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
Patrick Donnelly
08:56 PM Bug #61409 (Pending Backport): qa: _test_stale_caps does not wait for file flush before stat
... Patrick Donnelly
07:24 PM Bug #61407 (New): mds: abort on CInode::verify_dirfrags
... Patrick Donnelly
02:29 PM Backport #59416: quincy: pybind/mgr/volumes: investigate moving calls which may block on libcephf...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51044
merged
Yuri Weinstein
02:11 PM Bug #61399 (Pending Backport): qa: build failure for ior
... Patrick Donnelly
01:17 PM Bug #61397 (Pending Backport): cephfs-top: enhance --dump code to include the missing fields
At the moment, the --dump option doesn't display the fields below:
date,client count,filters
Include:
{"date": "Tue ...
Jos Collin
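As a usage sketch, --dump emits the computed metrics as JSON to stdout instead of the curses UI (the filesystem name below is illustrative):
  $ cephfs-top --dump
  $ cephfs-top --dumpfs a        # dump metrics for a single filesystem, where supported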
11:00 AM Bug #58726: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
quincy
https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-defaul...
Kotresh Hiremath Ravishankar
10:58 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
quincy:
https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-defau...
Kotresh Hiremath Ravishankar
10:57 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
quincy
https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-defaul...
Kotresh Hiremath Ravishankar
10:57 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
quincy:
https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-defau...
Kotresh Hiremath Ravishankar
10:55 AM Bug #61394 (Resolved): qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), af...
Description: fs/bugs/client_trim_caps/{begin/{0-install 1-ceph 2-logrotate} centos_latest clusters/small-cluster conf... Kotresh Hiremath Ravishankar

05/23/2023

06:24 PM Backport #59222 (Resolved): reef: mds: catch damage to CDentry's first member before persisting
Patrick Donnelly
05:33 PM Fix #61378 (Resolved): mds: turn off MDS balancer by default
Operators should deliberately decide to turn on the balancer. Most folks with multiple active MDS are using pinning. ... Patrick Donnelly
05:29 PM Documentation #61377 (New): doc: add suggested use-cases for random ephemeral pinning
Patrick Donnelly
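For context, random ephemeral pinning is enabled via an xattr on a directory of a mounted filesystem; the value is the probability that descendant directories get ephemerally pinned (the path is illustrative):
  $ setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/some/dir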
05:18 PM Documentation #61375 (New): doc: cephfs-data-scan should discuss multiple data support
In particular:
1) Indicate to users when and why they may have multiple data pools.
2) How to check if a file s...
Patrick Donnelly
04:07 PM Bug #50719: xattr returning from the dead (sic!)
Hi Xiubo,
thanks for taking care of this issue and the debug info.
We have here only a production system. Can y...
Tobias Hachmer
02:53 PM Backport #59481 (Resolved): reef: cephfs-top, qa: test the current python version is supported
Jos Collin
02:52 PM Backport #59481: reef: cephfs-top, qa: test the current python version is supported
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51142
merged
Yuri Weinstein
02:53 PM Backport #59411: reef: snap-schedule: handle non-existent path gracefully during snapshot creation
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51248
merged
Yuri Weinstein
12:15 PM Bug #61363: ceph mds keeps crashing with ceph_assert(auth_pins == 0)
Greg,
please log your thoughts on this issue
Milind Changire
12:12 PM Bug #61363 (New): ceph mds keeps crashing with ceph_assert(auth_pins == 0)
... Milind Changire
12:07 PM Bug #59163: mds: stuck in up:rejoin when it cannot "open" missing directory inode
Sorry for the delay - back to working on this. Venky Shankar
11:01 AM Bug #59683 (Fix Under Review): Error: Unable to find a match: userspace-rcu-devel libedit-devel d...
Xiubo Li
11:00 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
With *--enablerepo=powertools* we can successfully install these 3 packages:... Xiubo Li
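For reference, the working install on CentOS 8 Stream looks roughly like this (repo id assumed to be the usual lowercase powertools):
  $ sudo dnf --enablerepo=powertools install -y userspace-rcu-devel libedit-devel device-mapper-devel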
08:50 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
07:29 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
Checked the *CentOS 8 Stream* mirror; the *userspace-rcu-devel libedit-devel device-mapper-devel* packages were ... Xiubo Li
06:56 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
09:02 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:58 AM Backport #61202 (In Progress): pacific: MDS imported_inodes metric is not updated.
https://github.com/ceph/ceph/pull/51699 Dhairya Parmar
08:57 AM Bug #59350 (Fix Under Review): qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScr...
Dhairya Parmar
08:54 AM Backport #61204 (In Progress): reef: MDS imported_inodes metric is not updated.
Venky Shankar wrote:
> Dhairya, please take this one.
https://github.com/ceph/ceph/pull/51698
Dhairya Parmar
08:53 AM Backport #61203 (In Progress): quincy: MDS imported_inodes metric is not updated.
Dhairya Parmar
08:53 AM Backport #61203: quincy: MDS imported_inodes metric is not updated.
Venky Shankar wrote:
> Dhairya, please take this one.
https://github.com/ceph/ceph/pull/51697
Dhairya Parmar
04:52 AM Backport #61203: quincy: MDS imported_inodes metric is not updated.
Dhairya, please take this one. Venky Shankar
08:52 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:53 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
reef - https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-sm... Kotresh Hiremath Ravishankar
08:52 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:51 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-sm...
Kotresh Hiremath Ravishankar
08:51 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:57 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:50 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:51 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:50 AM Backport #59620: quincy: client: fix dump mds twice
Venky Shankar wrote:
> Dhairya, please take this one.
Patch exists in quincy https://github.com/ceph/ceph/commits...
Dhairya Parmar
04:53 AM Backport #59620: quincy: client: fix dump mds twice
Dhairya, please take this one. Venky Shankar
08:50 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:51 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:43 AM Backport #61158: reef: client: fix dump mds twice
Venky Shankar wrote:
> Dhairya, please take this one.
This seems to already exist in reef https://github.com/ceph...
Dhairya Parmar
06:54 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:54 AM Fix #59667: qa: ignore cluster warning encountered in test_refuse_client_session_on_reconnect
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:52 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:19 AM Bug #61357 (New): cephfs-data-scan: parallelize cleanup step
https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/#recovery-from-missing-metadata-objects
In the do...
Venky Shankar
06:12 AM Backport #59722 (In Progress): quincy: qa: run scrub post disaster recovery procedure
Venky Shankar
05:17 AM Backport #59726 (In Progress): quincy: mds: allow entries to be removed from lost+found directory
Venky Shankar
04:58 AM Backport #61233 (In Progress): quincy: mds: a few simple operations crash mds
Venky Shankar
04:49 AM Backport #59725 (In Progress): pacific: mds: allow entries to be removed from lost+found directory
Venky Shankar
01:12 AM Bug #56532 (Resolved): client stalls during vstart_runner test
Xiubo Li
01:12 AM Backport #58603 (Resolved): pacific: client stalls during vstart_runner test
Xiubo Li
01:11 AM Backport #58881 (Resolved): pacific: mds: Jenkins fails with skipping unrecognized type MClientRe...
Xiubo Li
01:08 AM Bug #57044 (Resolved): mds: add some debug logs for "crash during construction of internal request"
Xiubo Li
01:00 AM Bug #55409 (Resolved): client: incorrect operator precedence in Client.cc
Xiubo Li
12:51 AM Backport #61346 (In Progress): pacific: mds: fsstress.sh hangs with multimds (deadlock between un...
Xiubo Li
12:48 AM Backport #61348 (In Progress): quincy: mds: fsstress.sh hangs with multimds (deadlock between unl...
Xiubo Li
12:46 AM Backport #61347 (In Progress): reef: mds: fsstress.sh hangs with multimds (deadlock between unlin...
Xiubo Li

05/22/2023

07:45 PM Backport #61348 (Resolved): quincy: mds: fsstress.sh hangs with multimds (deadlock between unlink...
https://github.com/ceph/ceph/pull/51685 Backport Bot
07:45 PM Backport #61347 (Resolved): reef: mds: fsstress.sh hangs with multimds (deadlock between unlink a...
https://github.com/ceph/ceph/pull/51684 Backport Bot
07:45 PM Backport #61346 (Resolved): pacific: mds: fsstress.sh hangs with multimds (deadlock between unlin...
https://github.com/ceph/ceph/pull/51686 Backport Bot
07:44 PM Bug #58340 (Pending Backport): mds: fsstress.sh hangs with multimds (deadlock between unlink and ...
Patrick Donnelly
07:43 PM Bug #59657 (Resolved): qa: test with postgres failed (deadlock between link and migrate straydn(r...
Backported via https://tracker.ceph.com/issues/58340 Patrick Donnelly
03:06 PM Bug #57244: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 p...
PR https://github.com/ceph/ceph/pull/47752 has been reverted: https://github.com/ceph/ceph/pull/51661 Venky Shankar
03:05 PM Bug #57244 (Fix Under Review): [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ...
Venky Shankar
12:54 PM Bug #61243: qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
Please disable the tests that need to be disabled. Milind Changire
10:40 AM Feature #61334 (Fix Under Review): cephfs-mirror: use snapdiff api for efficient tree traversal
With https://github.com/ceph/ceph/pull/43546 merged, cephfs-mirror can make use of the snapdiff API (via readdir_snapdif... Venky Shankar
08:51 AM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
The tests worked as expected:... Xiubo Li
08:44 AM Bug #61151 (Fix Under Review): libcephfs: incorrectly showing the size for snapshots when stating...
Xiubo Li
04:37 AM Bug #59551 (Fix Under Review): mgr/stats: exception ValueError :invalid literal for int() with ba...
Jos Collin
02:30 AM Backport #58994 (Resolved): pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
Xiubo Li
02:29 AM Backport #59268 (Resolved): pacific: libcephfs: clear the suid/sgid for fallocate
Xiubo Li
01:10 AM Bug #61148: dbench test results in call trace in dmesg
Hit this again in Patrick's `fsstress.sh`-related tests:
https://pulpito.ceph.com/pdonnell-2023-05-19_20:26:49-fs-w...
Xiubo Li

05/20/2023

11:01 AM Backport #59721 (In Progress): pacific: qa: run scrub post disaster recovery procedure
Venky Shankar
10:53 AM Backport #61235 (In Progress): pacific: mds: a few simple operations crash mds
Venky Shankar
10:48 AM Backport #59558: quincy: qa: RuntimeError: more than one file system available
Patrick, https://github.com/ceph/ceph/pull/50922#issuecomment-1521943127 is waiting on this. Venky Shankar
10:43 AM Backport #61158: reef: client: fix dump mds twice
Dhairya, please take this one. Venky Shankar
10:42 AM Backport #61204: reef: MDS imported_inodes metric is not updated.
Dhairya, please take this one. Venky Shankar
10:37 AM Backport #61234 (In Progress): reef: mds: a few simple operations crash mds
Venky Shankar
10:34 AM Backport #59724 (In Progress): reef: mds: allow entries to be removed from lost+found directory
Venky Shankar
10:29 AM Backport #59723 (In Progress): reef: qa: run scrub post disaster recovery procedure
Venky Shankar
09:41 AM Backport #59559 (Resolved): reef: qa: RuntimeError: more than one file system available
Venky Shankar

05/19/2023

09:27 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
/a/yuriw-2023-05-17_19:39:18-rados-wip-yuri5-testing-2023-05-09-1324-pacific-distro-default-smithi/7276765 Laura Flores
07:25 AM Bug #61279: qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
This was addressed with https://tracker.ceph.com/issues/52606 but hit again. Kotresh Hiremath Ravishankar
07:24 AM Bug #61279 (New): qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
Description: fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd... Kotresh Hiremath Ravishankar
06:39 AM Bug #61265 (New): qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
Description: fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{c... Kotresh Hiremath Ravishankar
05:54 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
Milind, PTAL at the discussion in https://tracker.ceph.com/issues/59342 if it helps. I closed that as duplicate of this. Kotresh Hiremath Ravishankar
05:52 AM Bug #59342 (Duplicate): qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
This is a duplicate of https://tracker.ceph.com/issues/57655, hence closing this. Kotresh Hiremath Ravishankar

05/18/2023

02:33 PM Backport #58994: pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50988
merged
Yuri Weinstein
02:33 PM Backport #59268: pacific: libcephfs: clear the suid/sgid for fallocate
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50988
merged
Yuri Weinstein
02:32 PM Backport #59386: pacific: [RHEL stock] pjd test failures(a bug that need to wait the unlink to fi...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50986
merged
Yuri Weinstein
02:31 PM Backport #59023: pacific: mds: warning `clients failing to advance oldest client/flush tid` seen ...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50811
merged
Yuri Weinstein
02:31 PM Backport #59246: pacific: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_cluste...
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50809
merged
Yuri Weinstein
02:31 PM Backport #59249: pacific: qa: intermittent nfs test failures at nfs cluster creation
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50809
merged
Yuri Weinstein
02:31 PM Backport #59252: pacific: mgr/nfs: disallow non-existent paths when creating export
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50809
merged
Yuri Weinstein
02:30 PM Backport #57721: pacific: qa: data-scan/journal-tool do not output debugging in upstream testing
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50773
merged
Yuri Weinstein
02:29 PM Backport #57825: pacific: qa: mirror tests should cleanup fs during unwind
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50765
merged
Yuri Weinstein
12:30 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
Venky Shankar wrote:
> Rishabh Dave wrote:
> > http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-w...
Milind Changire
12:16 PM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
Should we move to cloning from github.com instead of git.ceph.com for this specific test?
git.ceph.com network conn...
Milind Changire
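A sketch of what the switch could look like (assuming the workunit clones the ceph repo; both URLs are shown only for illustration):
  # instead of cloning from https://git.ceph.com/ceph.git
  $ git clone --depth 1 https://github.com/ceph/ceph.git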
05:02 AM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
Xiubo Li wrote:
> This should be the same issue with https://tracker.ceph.com/issues/59343.
Which means this can ...
Venky Shankar
11:43 AM Bug #61243: qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
The cluster log shows the fs degraded/offline and then cleared before the test ends. There are a bunch of OSD failure messages after the test ended.
...
Kotresh Hiremath Ravishankar
11:07 AM Bug #61243 (Duplicate): qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
Job: fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro... Kotresh Hiremath Ravishankar
10:45 AM Bug #58949 (Need More Info): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
Kotresh Hiremath Ravishankar
10:45 AM Bug #58949: qa: test_cephfs.test_disk_quota_exceeeded_error is racy
This is seen again in https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-... Kotresh Hiremath Ravishankar
10:27 AM Bug #59342: qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/7270060 Kotresh Hiremath Ravishankar
07:29 AM Backport #61235 (Resolved): pacific: mds: a few simple operations crash mds
https://github.com/ceph/ceph/pull/51609 Backport Bot
07:29 AM Backport #61234 (Resolved): reef: mds: a few simple operations crash mds
https://github.com/ceph/ceph/pull/51608 Backport Bot
07:29 AM Backport #61233 (Resolved): quincy: mds: a few simple operations crash mds
https://github.com/ceph/ceph/pull/51688 Backport Bot
07:23 AM Bug #58411 (Pending Backport): mds: a few simple operations crash mds
Venky Shankar

05/17/2023

03:35 PM Backport #52854: pacific: qa: test_simple failure
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50756
merged
Yuri Weinstein
03:33 PM Backport #59415: reef: pybind/mgr/volumes: investigate moving calls which may block on libcephfs ...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51042
merged
Yuri Weinstein
01:42 PM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
> If the rstat is enabled for the .snap snapdir it should report the total size for all the snapshots.
I'm not sur...
Greg Farnum
06:48 AM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
There is an option *mds_snap_rstat* in the *MDS* daemons; once it's enabled it enables nested rstat for snapshots, ... Xiubo Li
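For reference, the option can be flipped cluster-wide through the config subsystem, e.g.:
  $ ceph config set mds mds_snap_rstat true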
01:11 PM Bug #61217 (Fix Under Review): ceph: corrupt snap message from mds1
Xiubo Li
12:57 PM Bug #61217: ceph: corrupt snap message from mds1
Introduced by :... Xiubo Li
12:55 PM Bug #61217 (Resolved): ceph: corrupt snap message from mds1
From the ceph-users mailing list: https://www.spinics.net/lists/ceph-users/msg77106.html
The kclient received a corrupted ...
Xiubo Li
01:00 PM Bug #61148: dbench test results in call trace in dmesg
Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Venky Shankar wrote:
> > > > Xiubo Li wrote:
> ...
Venky Shankar
10:38 AM Bug #61148: dbench test results in call trace in dmesg
Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo Li wrote:
> > > > Okay, this morning ...
Xiubo Li
07:39 AM Bug #61148: dbench test results in call trace in dmesg
Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Okay, this morning I saw you have merged the PR h...
Venky Shankar
12:51 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Nothing has changed on storage01 since my first crash report. The crash happens on Leap 15.4 with Quincy. I just want... Eugen Block
12:42 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Eugen Block wrote:
> Both VMs use the same kernel version (they are not running 15.1 anymore, both have been upgrade...
Jos Collin
12:29 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Both VMs use the same kernel version (they are not running 15.1 anymore, both have been upgraded to 15.4 on the way t... Eugen Block
12:12 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Eugen Block wrote:
> This is kind of strange, when I initially wanted to test cephfs-top I chose a different virtual...
Jos Collin
09:23 AM Fix #58023 (Fix Under Review): mds: do not evict clients if OSDs are laggy
Dhairya Parmar
09:18 AM Bug #59394: ACLs not fully supported.
Brian,
Have you read up "these docs":https://docs.ceph.com/en/latest/cephfs/client-config-ref/#confval-client_acl_ty...
Milind Changire
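For reference, a minimal sketch of enabling POSIX ACL support per those docs (client name, secret file and mount point are illustrative):
  # ceph-fuse: in ceph.conf, let ceph-fuse do its own permission checks so ACLs are honored
  [client]
      client acl type = posix_acl
      fuse default permissions = false
  # kernel client: mount with the acl option
  $ sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=foo,secretfile=/etc/ceph/foo.secret,acl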
07:46 AM Bug #55446: mgr-nfs-upgrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
pacific:... Kotresh Hiremath Ravishankar
07:44 AM Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
pacific:
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
Kotresh Hiremath Ravishankar
07:43 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
pacific:
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
Kotresh Hiremath Ravishankar
07:43 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
pacific
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
Kotresh Hiremath Ravishankar
06:27 AM Backport #61204 (Resolved): reef: MDS imported_inodes metric is not updated.
https://github.com/ceph/ceph/pull/51698 Backport Bot
06:27 AM Bug #61201: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific
PRs in the test batch:
# pacific: qa: mirror tests should cleanup fs during unwind by batrick · Pull Request #5076...
Kotresh Hiremath Ravishankar
06:24 AM Bug #61201 (Resolved): qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in...
Description: fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/... Kotresh Hiremath Ravishankar
06:27 AM Backport #61203 (In Progress): quincy: MDS imported_inodes metric is not updated.
Backport Bot
06:27 AM Backport #61202 (Resolved): pacific: MDS imported_inodes metric is not updated.
Backport Bot
06:18 AM Bug #59107 (Pending Backport): MDS imported_inodes metric is not updated.
Venky Shankar
01:31 AM Bug #50719 (Need More Info): xattr returning from the dead (sic!)
Xiubo Li
01:11 AM Bug #50719: xattr returning from the dead (sic!)
Tobias Hachmer wrote:
> Hi Jeff,
>
> we're also affected by this issue and can confirm the behaviour Thomas descr...
Xiubo Li
01:12 AM Backport #58826 (Resolved): pacific: mds: make num_fwd and num_retry to __u32
Xiubo Li
01:03 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
Kotresh Hiremath Ravishankar wrote:
> This is seen in reef QA run - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:...
Xiubo Li
12:08 AM Backport #59406 (Resolved): reef: cephfs-top: navigate to home screen when no fs
Jos Collin

05/16/2023

07:11 PM Backport #59245: reef: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50808
merged
Yuri Weinstein
07:10 PM Backport #59248: reef: qa: intermittent nfs test failures at nfs cluster creation
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50808
merged
Yuri Weinstein
07:10 PM Backport #59251: reef: mgr/nfs: disallow non-existent paths when creating export
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50808
merged
Yuri Weinstein
07:09 PM Backport #59227: reef: cephfs-data-scan: does not scan_links for lost+found
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50782
merged
Yuri Weinstein
06:36 PM Backport #59430: reef: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.Test...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51047
merged
Yuri Weinstein
06:35 PM Backport #59406: reef: cephfs-top: navigate to home screen when no fs
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51003
merged
Yuri Weinstein
06:33 PM Backport #59596: reef: cephfs-top: fix help text for delay
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50998
merged
Yuri Weinstein
06:33 PM Backport #59397: reef: cephfs-top: cephfs-top -d <seconds> not working as expected
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50998
merged
Yuri Weinstein
06:31 PM Backport #59020: reef: cephfs-data-scan: multiple data pools are not supported
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50524
merged
Yuri Weinstein
04:27 PM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
11:19 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
reef:
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s...
Kotresh Hiremath Ravishankar
04:27 PM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
11:27 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s... Kotresh Hiremath Ravishankar
04:26 PM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
04:25 PM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
This is seen in reef QA run - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247... Kotresh Hiremath Ravishankar
04:23 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
11:15 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
reef: https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-sm... Kotresh Hiremath Ravishankar
04:21 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
another instance in reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-re... Kotresh Hiremath Ravishankar
10:32 AM Bug #61182 (Resolved): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the...
Description: fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} clus... Kotresh Hiremath Ravishankar
04:21 PM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
04:19 PM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
reef- https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-smi... Kotresh Hiremath Ravishankar
04:06 PM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
04:07 PM Bug #54460: snaptest-multiple-capsnaps.sh test failure
reef
https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
11:22 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s... Kotresh Hiremath Ravishankar
03:48 PM Bug #50719: xattr returning from the dead (sic!)
Hi Jeff,
we're also affected by this issue and can confirm the behaviour Thomas described.
Versions here:
Ce...
Tobias Hachmer
02:38 PM Backport #61187 (In Progress): reef: qa: ignore cluster warning encountered in test_refuse_client...
https://github.com/ceph/ceph/pull/51515 Dhairya Parmar
12:53 PM Backport #61187 (Resolved): reef: qa: ignore cluster warning encountered in test_refuse_client_se...
Backport Bot
12:49 PM Fix #59667 (Pending Backport): qa: ignore cluster warning encountered in test_refuse_client_sessi...
Venky Shankar
11:16 AM Fix #59667: qa: ignore cluster warning encountered in test_refuse_client_session_on_reconnect
This needs backport for reef:
Issue is hit in reef
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10...
Kotresh Hiremath Ravishankar
11:30 AM Bug #61186 (Fix Under Review): mgr/nfs: hitting incomplete command returns same suggestion twice
for example... Dhairya Parmar
11:26 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
seen in reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-d... Kotresh Hiremath Ravishankar
11:24 AM Bug #38704: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN) in cluster log"
Similar issue seen again in reef on different test (tasks/cephfs_misc_tests) - https://pulpito.ceph.com/yuriw-2023-05... Kotresh Hiremath Ravishankar
11:24 AM Documentation #61185 (Closed): mgr/nfs: ceph nfs cluster config reset CLUSTER_NAME -i PATH_TO_CON...
While going through Red Hat doc https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/file_system... Dhairya Parmar
11:20 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s... Kotresh Hiremath Ravishankar
11:14 AM Bug #47292: cephfs-shell: test_df_for_valid_file failure
seen in yuri's reef run
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-ree...
Kotresh Hiremath Ravishankar
11:13 AM Bug #61184 (Closed): mgr/nfs: setting config using external file gets overridden
Followed similar steps as mentioned in https://tracker.ceph.com/issues/59463... Dhairya Parmar
10:50 AM Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
Dhairya Parmar wrote:
> I was going through the config file content and found `pseudo_path` to be a bit doubtful, it...
Dhairya Parmar
10:48 AM Bug #61183 (Closed): mgr/nfs: Applying for first time, NFS-Ganesha Config Added Successfully but ...
While setting the NFS cluster config for the first time using the -i option, it does say "NFS-Ganesha Config Added Successfull... Dhairya Parmar
07:30 AM Bug #59688: mds: idempotence issue in client request
Mer Xuanyi wrote:
> Venky Shankar wrote:
> > Hi Mer Xuanyi,
> >
> > Thanks for the detailed report.
> >
> > S...
Venky Shankar
06:36 AM Bug #59688: mds: idempotence issue in client request
Venky Shankar wrote:
> Hi Mer Xuanyi,
>
> Thanks for the detailed report.
>
> So, this requires flipping the c...
Mer Xuanyi
04:08 AM Bug #59688 (Triaged): mds: idempotence issue in client request
Hi Mer Xuanyi,
Thanks for the detailed report.
So, this requires flipping the config tunables from their defaul...
Venky Shankar
07:28 AM Backport #59202 (In Progress): pacific: qa: add testing in fs:workload for different kinds of sub...
Milind Changire
05:53 AM Backport #59202 (New): pacific: qa: add testing in fs:workload for different kinds of subvolumes
Redoing the backport. Milind Changire
06:10 AM Backport #59706 (In Progress): pacific: Mds crash and fails with assert on prepare_new_inode
Xiubo Li
05:53 AM Backport #59707 (In Progress): quincy: Mds crash and fails with assert on prepare_new_inode
Xiubo Li
05:48 AM Backport #59708 (In Progress): reef: Mds crash and fails with assert on prepare_new_inode
Xiubo Li
05:39 AM Backport #59007 (Resolved): pacific: mds stuck in 'up:replay' and crashed.
Xiubo Li
02:26 AM Bug #58411 (Fix Under Review): mds: a few simple operations crash mds
Venky Shankar
02:21 AM Bug #61148 (Fix Under Review): dbench test results in call trace in dmesg
I created one PR to fix it.
Fixed the race in https://github.com/ceph/ceph/pull/47752/commits/9fbde6da076f2d7c8bfd...
Xiubo Li
02:13 AM Bug #61148: dbench test results in call trace in dmesg
Venky Shankar wrote:
> Xiubo Li wrote:
> > Okay, this morning I saw you have merged the PR https://github.com/ceph/...
Xiubo Li
01:23 AM Bug #61148: dbench test results in call trace in dmesg
Xiubo Li wrote:
> Okay, this morning I saw you have merged the PR https://github.com/ceph/ceph/pull/47752.
>
> Th...
Venky Shankar
01:12 AM Bug #61148: dbench test results in call trace in dmesg
Xiubo Li wrote:
> Okay, this morning I saw you have merged the PR https://github.com/ceph/ceph/pull/47752.
I was ...
Venky Shankar
12:51 AM Bug #61148: dbench test results in call trace in dmesg
Okay, this morning I saw you have merged the PR https://github.com/ceph/ceph/pull/47752.
This issue should be intr...
Xiubo Li

05/15/2023

04:52 PM Backport #61167 (Resolved): quincy: [WRN] : client.408214273 isn't responding to mclientcaps(revo...
https://github.com/ceph/ceph/pull/52851
Backported https://tracker.ceph.com/issues/62197 together with this tracker.
Backport Bot
04:52 PM Backport #61166 (Resolved): pacific: [WRN] : client.408214273 isn't responding to mclientcaps(rev...
https://github.com/ceph/ceph/pull/52852
Backported https://tracker.ceph.com/issues/62199 together with this tracker.
Backport Bot
04:52 PM Backport #61165 (Resolved): reef: [WRN] : client.408214273 isn't responding to mclientcaps(revoke...
https://github.com/ceph/ceph/pull/52850
Backported https://tracker.ceph.com/issues/62198 together with this tracker.
Backport Bot
04:51 PM Bug #57244 (Pending Backport): [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ...
Venky Shankar
04:02 PM Backport #59595 (Resolved): pacific: cephfs-top: fix help text for delay
Jos Collin
03:35 PM Backport #59595: pacific: cephfs-top: fix help text for delay
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50715
merged
Yuri Weinstein
04:01 PM Backport #59398 (Resolved): pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
Jos Collin
03:35 PM Backport #59398: pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50715
merged
Yuri Weinstein
04:00 PM Backport #58807 (Resolved): pacific: cephfs-top: add an option to dump the computed values to stdout
Jos Collin
03:35 PM Backport #58807: pacific: cephfs-top: add an option to dump the computed values to stdout
Jos Collin wrote:
> Backport PR: https://github.com/ceph/ceph/pull/50715.
merged
Yuri Weinstein
03:37 PM Backport #58881: pacific: mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50733
merged
Yuri Weinstein
03:37 PM Backport #58826: pacific: mds: make num_fwd and num_retry to __u32
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50733
merged
Yuri Weinstein
03:36 PM Backport #59007: pacific: mds stuck in 'up:replay' and crashed.
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50725
merged
Yuri Weinstein
03:34 PM Backport #58866: pacific: cephfs-top: Sort menu doesn't show 'No filesystem available' screen whe...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50596
merged
Yuri Weinstein
03:34 PM Backport #59720 (In Progress): pacific: client: read wild pointer when reconnect to mds
Venky Shankar
03:34 PM Backport #59019: pacific: cephfs-data-scan: multiple data pools are not supported
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50523
merged
Yuri Weinstein
03:32 PM Backport #59718 (In Progress): quincy: client: read wild pointer when reconnect to mds
Venky Shankar
03:24 PM Backport #59719 (In Progress): reef: client: read wild pointer when reconnect to mds
Venky Shankar
03:18 PM Backport #61158 (Resolved): reef: client: fix dump mds twice
Backport Bot
11:00 AM Bug #55446: mgr-nfs-upgrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
10:41 AM Bug #50719: xattr returning from the dead (sic!)
Hi Jeff!
Just wanted to let you know that this issue is still relevant and severe with more recent versions of bot...
Thomas Hukkelberg
08:27 AM Bug #61151 (Fix Under Review): libcephfs: incorrectly showing the size for snapshots when stating...
If the *rstat* is enabled for the *.snap* snapdir it should report the total size for all the snapshots. And at the s... Xiubo Li
07:42 AM Bug #61148: dbench test results in call trace in dmesg
More detail call trace:... Xiubo Li
06:05 AM Bug #61148: dbench test results in call trace in dmesg
Another instance, but this time another workunit: https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshank... Venky Shankar
05:13 AM Bug #61148 (Rejected): dbench test results in call trace in dmesg
https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshankar-testing-20230509.090020-1-testing-default-smith... Venky Shankar
06:57 AM Fix #59667 (Resolved): qa: ignore cluster warning encountered in test_refuse_client_session_on_re...
Venky Shankar
04:50 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshankar-testing-20230509.090020-1-testing-default-smi... Venky Shankar
02:57 AM Bug #61009 (Fix Under Review): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0ea734a1639fc4740189dcd0...
Telemetry Bot
02:57 AM Bug #61008 (New): crash: void interval_set<T, C>::insert(T, T, T*, T*) [with T = inodeno_t; C = s...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8e3f5c0126b1f4f50f08ff5b...
Telemetry Bot
02:57 AM Bug #61004 (New): crash: MDSRank::is_stale_message(boost::intrusive_ptr<Message const> const&) const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=af82cc6e82ac3651d4918c4a...
Telemetry Bot
02:56 AM Bug #60986 (New): crash: void MDCache::rejoin_send_rejoins(): assert(auth >= 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71d482317bedfc17674af4f5...
Telemetry Bot
02:56 AM Bug #60980 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e2a2ae4253fafecb8b3ca014...
Telemetry Bot
02:55 AM Bug #60949 (New): crash: cephfs-journal-tool(

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0d4dcd3a80040e9cf7f44ee7...
Telemetry Bot
02:55 AM Bug #60945 (New): crash: virtual void C_Client_Remount::finish(int): abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=594102a81a05cdba00f19a82...
Telemetry Bot
02:49 AM Bug #60685 (New): crash: elist<T>::~elist() [with T = CInode*]: assert(_head.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c4679655cf1f90278c3b054b...
Telemetry Bot
02:49 AM Bug #60679 (New): crash: C_GatherBuilderBase<ContextType, GatherType>::~C_GatherBuilderBase() [wi...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5b6b5acff6ff7a2c7ee0565f...
Telemetry Bot
02:48 AM Bug #60669 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2cf0c9e2e9b09fc177e74e6f...
Telemetry Bot
02:48 AM Bug #60668 (New): crash: void Migrator::export_try_cancel(CDir*, bool): assert(it != export_state...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ff4594c6abd556deefbc7327...
Telemetry Bot
02:48 AM Bug #60665 (New): crash: void MDCache::open_snaprealms(): assert(rejoin_done)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0da14d340ca86eb98a0a866c...
Telemetry Bot
02:48 AM Bug #60664 (New): crash: elist<T>::~elist() [with T = CDentry*]: assert(_head.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=770f229d0a641695e3d43ffb...
Telemetry Bot
02:48 AM Bug #60660 (New): crash: std::_Rb_tree_rebalance_for_erase(std::_Rb_tree_node_base*, std::_Rb_tre...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6bb8c01a74a9dcf2f2500ef7...
Telemetry Bot
02:48 AM Bug #60640 (New): crash: void Journaler::_write_head(Context*): assert(last_written.write_pos >= ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9e5c5d1ecf602782154f9b19...
Telemetry Bot
02:48 AM Bug #60636 (New): crash: elist<T>::~elist() [with T = CDir*]: assert(_head.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5fc520e822550d2d86ab92a1...
Telemetry Bot
02:47 AM Bug #60630 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e4d4bb371a344df64ed9d22c...
Telemetry Bot
02:47 AM Bug #60629 (New): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=978159d0d2675e074cadedff...
Telemetry Bot
02:47 AM Bug #60628 (New): crash: MDCache::purge_inodes(const interval_set<inodeno_t>&, LogSegment*)::<lam...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=de50fd8802ec5732812d300d...
Telemetry Bot
02:47 AM Bug #60627 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9f4072660844962480cb518...
Telemetry Bot
02:47 AM Bug #60625 (Resolved): crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> const&, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c2092a196eb69c3c08a39646...
Telemetry Bot
02:47 AM Bug #60622 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a081e9516cda7c2c8dfa596c...
Telemetry Bot
02:47 AM Bug #60618 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b0d320aceb93370daf216e36...
Telemetry Bot
02:47 AM Bug #60607 (New): crash: virtual void MDSCacheObject::bad_put(int): assert(ref_map[by] > 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7ad58204e57e59e10794c28d...
Telemetry Bot
02:47 AM Bug #60606 (New): crash: ceph::buffer::list::iterator_impl<true>::copy(unsigned int, char*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e841c48356848e8657d3cb5e...
Telemetry Bot
02:47 AM Bug #60600 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=183a087b3f731571c6337b66...
Telemetry Bot
02:46 AM Bug #60598 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1ebda7a102930caf786ba16e...
Telemetry Bot
02:41 AM Bug #60372 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=42c202ed03daf7e9cbd008db...
Telemetry Bot
02:40 AM Bug #60343 (New): crash: void MDCache::handle_cache_rejoin_ack(ceph::cref_t<MMDSCacheRejoin>&): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=374c1bb49952f4c442a939bb...
Telemetry Bot
02:40 AM Bug #60319 (New): crash: std::_Rb_tree<dirfrag_t, dirfrag_t, std::_Identity<dirfrag_t>, std::less...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=20a0ec6572610c3fb12e9f12...
Telemetry Bot
02:39 AM Bug #60303 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f88cc08b61611695f0a919ea...
Telemetry Bot
02:38 AM Bug #60241 (New): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1e491c6d2002d8aa5fa17dac...
Telemetry Bot
02:35 AM Bug #60126 (New): crash: bool MDCache::shutdown_pass(): assert(!migrator->is_importing())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a36f9e1d71483b4075ca8625...
Telemetry Bot
02:34 AM Bug #60109 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=abb374dbfa3649257e6590ea...
Telemetry Bot
02:34 AM Bug #60092 (New): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): asser...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=19e45b6ba87e5ddb0df16715...
Telemetry Bot
02:32 AM Bug #60014 (New): crash: void MDCache::remove_replay_cap_reconnect(inodeno_t, client_t): assert(c...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cc7ca4bafc15d4883c77e861...
Telemetry Bot
02:29 AM Bug #59865 (New): crash: CInode::get_dirfrags() const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=546c0379bf1bc4705b166c60...
Telemetry Bot
02:28 AM Bug #59833 (Pending Backport): crash: void MDLog::trim(int): assert(segments.size() >= pre_segmen...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=21725944feb692959f706d0f...
Telemetry Bot
02:27 AM Bug #59819 (New): crash: virtual CDentry::~CDentry(): assert(batch_ops.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5520f521a1ed7653b6505f60...
Telemetry Bot
02:25 AM Bug #59802 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=78028e7ef848aec87e854972...
Telemetry Bot
02:25 AM Bug #59799 (New): crash: ProtocolV2::handle_auth_request(ceph::buffer::list&)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=34dce467397e70daae79ec8e...
Telemetry Bot
02:24 AM Bug #59785 (Closed): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == L...

*New crash events were reported via Telemetry with newer versions (['17.2.1', '17.2.5']) than encountered in Tracke...
Telemetry Bot
02:23 AM Bug #59768 (Duplicate): crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): asse...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1a84e31a4bc3ae6dc69d901c...
Telemetry Bot
02:23 AM Bug #59767 (New): crash: MDSDaemon::dump_status(ceph::Formatter*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=15752cc3020e5047d0724344...
Telemetry Bot
02:23 AM Bug #59766 (New): crash: virtual void ESession::replay(MDSRank*): assert(g_conf()->mds_wipe_sessi...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=36e909287876c9be42c068ca...
Telemetry Bot
02:22 AM Bug #59761 (New): crash: void MDLog::_replay_thread(): assert(journaler->is_readable() || mds->is...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71bbbf63c8f73aa37e3aa82e...
Telemetry Bot
02:22 AM Bug #59751 (New): crash: MDSDaemon::respawn()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8bfff60bfa4ffd456fa49fb8...
Telemetry Bot
02:14 AM Bug #59749 (New): crash: virtual CInode::~CInode(): assert(batch_ops.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=479afe0191403e023f1878c1...
Telemetry Bot
02:14 AM Bug #58934 (Duplicate): snaptest-git-ceph.sh failure with ceph-fuse
This should be the same issue with https://tracker.ceph.com/issues/59343. Xiubo Li
02:13 AM Bug #59741 (New): crash: void MDCache::remove_inode(CInode*): assert(o->get_num_ref() == 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f60159ef05cf6bfbf51c6688...
Telemetry Bot

05/12/2023

09:24 AM Bug #59736 (New): qa: add one test case for "kclient: ln: failed to create hard link 'file name':...
We need to add one test case for https://tracker.ceph.com/issues/59515.... Xiubo Li

05/11/2023

08:18 PM Bug #56774: crash: Client::_get_vino(Inode*)
Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.... Yaarit Hatuka
08:15 PM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.... Yaarit Hatuka
05:10 PM Bug #59716 (Fix Under Review): tools/cephfs/first-damage: unicode decode errors break iteration
Patrick Donnelly
12:55 PM Feature #59601: Provide way to abort kernel mount after lazy umount
Niklas Hambuechen wrote:
> Venky Shankar wrote:
> > Have you tried force unmounting the mount (umount -f)?
>
> A...
Venky Shankar
08:53 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
Seen in pacific run
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-...
Kotresh Hiremath Ravishankar
08:48 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
08:05 AM Bug #48773: qa: scrub does not complete
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
07:46 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
05:58 AM Backport #59726 (Resolved): quincy: mds: allow entries to be removed from lost+found directory
https://github.com/ceph/ceph/pull/51689 Backport Bot
05:58 AM Backport #59725 (Resolved): pacific: mds: allow entries to be removed from lost+found directory
https://github.com/ceph/ceph/pull/51687 Backport Bot
05:58 AM Backport #59724 (Resolved): reef: mds: allow entries to be removed from lost+found directory
https://github.com/ceph/ceph/pull/51607 Backport Bot
05:51 AM Bug #59569 (Pending Backport): mds: allow entries to be removed from lost+found directory
Venky Shankar
05:50 AM Backport #59723 (Resolved): reef: qa: run scrub post disaster recovery procedure
https://github.com/ceph/ceph/pull/51606 Backport Bot
05:50 AM Backport #59722 (In Progress): quincy: qa: run scrub post disaster recovery procedure
https://github.com/ceph/ceph/pull/51690 Backport Bot
05:50 AM Backport #59721 (Resolved): pacific: qa: run scrub post disaster recovery procedure
https://github.com/ceph/ceph/pull/51610 Backport Bot
05:50 AM Bug #59527 (Pending Backport): qa: run scrub post disaster recovery procedure
Venky Shankar
03:59 AM Backport #59720 (Resolved): pacific: client: read wild pointer when reconnect to mds
https://github.com/ceph/ceph/pull/51487 Backport Bot
03:59 AM Backport #59719 (Resolved): reef: client: read wild pointer when reconnect to mds
https://github.com/ceph/ceph/pull/51484 Backport Bot
03:59 AM Backport #59718 (Resolved): quincy: client: read wild pointer when reconnect to mds
https://github.com/ceph/ceph/pull/51486 Backport Bot
03:56 AM Bug #59514 (Pending Backport): client: read wild pointer when reconnect to mds
Venky Shankar

05/10/2023

04:54 PM Bug #59716 (Pending Backport): tools/cephfs/first-damage: unicode decode errors break iteration
... Patrick Donnelly
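As a rough illustration of the failure mode named in the title (not the actual tools/cephfs/first-damage code), a minimal Python sketch, assuming dentry keys arrive as raw bytes: a strict decode aborts the whole loop on the first non-UTF-8 name, while a tolerant decode lets the scan continue.

# Hypothetical byte keys; the second one is not valid UTF-8.
keys = [b"file1_head", b"\xff\xfe_head", b"file2_head"]

def scan(keys):
    for k in keys:
        try:
            name = k.decode("utf-8")
        except UnicodeDecodeError:
            # Lossy fallback so one undecodable name does not break iteration.
            name = k.decode("utf-8", errors="replace")
        print("processing", name)

scan(keys)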
11:35 AM Feature #59714 (Pending Backport): mgr/volumes: Support to reject CephFS clones if cloner threads...
1. CephFS clone creation has a limit of 4 parallel clones at a time and the rest
of the clone create requests are queue...
Neeraj Pratap Singh
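A sketch of the requested behaviour (not the actual mgr/volumes implementation), assuming a configurable reject-when-busy flag and the default of 4 cloner threads: when every thread is busy, a new clone request is rejected with EAGAIN instead of being queued silently. The class and option names below are illustrative assumptions.

import errno

MAX_CONCURRENT_CLONES = 4  # default number of cloner threads

class CloneAdmission:
    def __init__(self, max_clones=MAX_CONCURRENT_CLONES, reject_when_busy=True):
        self.max_clones = max_clones
        self.reject_when_busy = reject_when_busy
        self.in_progress = 0
        self.queued = []

    def request_clone(self, name):
        if self.in_progress < self.max_clones:
            self.in_progress += 1
            return 0, f"clone '{name}' started"
        if self.reject_when_busy:
            # Reject instead of queueing when every cloner thread is busy.
            return -errno.EAGAIN, "all cloner threads are busy, retry later"
        self.queued.append(name)
        return 0, f"clone '{name}' queued"

adm = CloneAdmission()
for i in range(6):
    print(adm.request_clone(f"clone{i}"))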
08:43 AM Backport #59708 (Resolved): reef: Mds crash and fails with assert on prepare_new_inode
https://github.com/ceph/ceph/pull/51506 Backport Bot
08:43 AM Backport #59707 (Resolved): quincy: Mds crash and fails with assert on prepare_new_inode
https://github.com/ceph/ceph/pull/51507 Backport Bot
08:43 AM Backport #59706 (Resolved): pacific: Mds crash and fails with assert on prepare_new_inode
https://github.com/ceph/ceph/pull/51508 Backport Bot
08:36 AM Bug #52280 (Pending Backport): Mds crash and fails with assert on prepare_new_inode
Venky Shankar
04:49 AM Bug #59705 (Fix Under Review): client: only wait for write MDS OPs when unmounting
Xiubo Li
04:46 AM Bug #59705 (Resolved): client: only wait for write MDS OPs when unmounting
We do not care about the read MDS OPs, and it is safe to just drop
them when unmounting.
Xiubo Li
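A minimal Python sketch of the idea (the actual change is in the C++ client): on unmount, wait only for the in-flight MDS operations that mutate metadata and drop the read-only ones. The op names and the write set below are illustrative assumptions.

WRITE_OPS = {"create", "unlink", "rename", "mkdir", "rmdir", "setattr", "setxattr"}

def ops_to_wait_for(pending_ops):
    """Return only the pending MDS ops that must complete before unmount."""
    return [op for op in pending_ops if op["type"] in WRITE_OPS]

pending = [
    {"tid": 1, "type": "getattr"},   # read op: safe to drop on unmount
    {"tid": 2, "type": "create"},    # write op: must be waited for
    {"tid": 3, "type": "readdir"},   # read op: safe to drop
]
print(ops_to_wait_for(pending))  # only the "create" request remains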

05/09/2023

07:50 PM Bug #59394: ACLs not fully supported.
Milind Changire wrote:
> Brian,
> The command you are using is correct.
> However, the config key is incorrect.
>...
Brian Woods
07:44 PM Bug #59394: ACLs not fully supported.
Milind Changire wrote:
> Brian,
> The command you are using is correct.
> However, the config key is incorrect.
>...
Brian Woods
01:30 PM Bug #59691 (Fix Under Review): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
Patrick Donnelly
01:22 PM Bug #59691 (Resolved): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
... Patrick Donnelly
01:06 PM Feature #59601: Provide way to abort kernel mount after lazy umount
Venky Shankar wrote:
> Have you tried force unmounting the mount (umount -f)?
After *umount --lazy*, the mount po...
Niklas Hambuechen
12:48 PM Bug #59688 (Triaged): mds: idempotence issue in client request
Found that the MDS may process the same client request twice after the session with the client is rebuilt because of a network issue.
...
Mer Xuanyi
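As an illustration of the idempotence concern (not the MDS code itself), a minimal Python sketch, assuming requests carry a per-session transaction id: replies are cached per tid, so a request replayed after a session rebuild is answered from the cache instead of being applied twice.

class Session:
    def __init__(self):
        self.completed = {}  # tid -> cached reply

def handle_request(session, tid, apply_fn):
    if tid in session.completed:
        # Replayed/retransmitted request: return the cached reply, do not re-apply.
        return session.completed[tid]
    reply = apply_fn()
    session.completed[tid] = reply
    return reply

s = Session()
print(handle_request(s, 42, lambda: "mkdir done"))  # applied once
print(handle_request(s, 42, lambda: "mkdir done"))  # served from cache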
06:41 AM Bug #59683 (Resolved): Error: Unable to find a match: userspace-rcu-devel libedit-devel device-ma...
- https://pulpito.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-testing-default-smith... Venky Shankar
05:52 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
Another instance: https://pulpito.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-testi... Venky Shankar
05:23 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Venky Shankar wrote:
> Xiubo Li wrote:
> > This is a failure with *libcephfs* and have the client side logs:
> >
...
Xiubo Li
04:19 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Xiubo Li wrote:
> This is a failure with *libcephfs* and have the client side logs:
>
> vshankar-2023-04-06_04:14...
Venky Shankar
01:54 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
This also needs to be fixed in kclient. Xiubo Li
03:52 AM Bug #59682: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the unit file o...
Thanks for letting us know, Zac. Venky Shankar
02:28 AM Bug #59682 (Resolved): CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the ...
The Debian package "cephfs-mirror" in the Ceph repository doesn't install the unit file or the man page.
This was ...
Zac Dover
 
