Project

General

Profile

Activity

From 04/24/2023 to 05/23/2023

05/23/2023

06:24 PM Backport #59222 (Resolved): reef: mds: catch damage to CDentry's first member before persisting
Patrick Donnelly
05:33 PM Fix #61378 (Fix Under Review): mds: turn off MDS balancer by default
Operators should deliberately decide to turn on the balancer. Most folks with multiple active MDS are using pinning. ... Patrick Donnelly
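The pinning referred to above can be sketched as follows. This is a hedged illustration using the extended attributes documented for CephFS subtree pinning; the mount point and directory paths are hypothetical:

```shell
# Pin a directory (and its subtree) to MDS rank 0 instead of relying
# on the automatic balancer (assumes CephFS mounted at /mnt/cephfs):
setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projects

# Random ephemeral pinning: each immediate child directory is pinned
# to a random rank with probability 0.1 when first loaded:
setfattr -n ceph.dir.pin.random -v 0.1 /mnt/cephfs/home
```

With the balancer off by default, distribution across active MDS ranks would come entirely from explicit or ephemeral pins like these.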
05:29 PM Documentation #61377 (New): doc: add suggested use-cases for random ephemeral pinning
Patrick Donnelly
05:18 PM Documentation #61375 (New): doc: cephfs-data-scan should discuss multiple data support
In particular:
1) Indicate to users when and why they may have multiple data pools.
2) How to check if a file s...
Patrick Donnelly
04:07 PM Bug #50719: xattr returning from the dead (sic!)
Hi Xiubo,
thanks for taking care of this issue and the debug info.
We only have a production system here. Can y...
Tobias Hachmer
02:53 PM Backport #59481 (Resolved): reef: cephfs-top, qa: test the current python version is supported
Jos Collin
02:52 PM Backport #59481: reef: cephfs-top, qa: test the current python version is supported
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51142
merged
Yuri Weinstein
02:53 PM Backport #59411: reef: snap-schedule: handle non-existent path gracefully during snapshot creation
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51248
merged
Yuri Weinstein
12:15 PM Bug #61363: ceph mds keeps crashing with ceph_assert(auth_pins == 0)
Greg,
please log your thoughts on this issue
Milind Changire
12:12 PM Bug #61363 (New): ceph mds keeps crashing with ceph_assert(auth_pins == 0)
... Milind Changire
12:07 PM Bug #59163: mds: stuck in up:rejoin when it cannot "open" missing directory inode
Sorry for the delay - back to working on this. Venky Shankar
11:01 AM Bug #59683 (Fix Under Review): Error: Unable to find a match: userspace-rcu-devel libedit-devel d...
Xiubo Li
11:00 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
With *--enablerepo=powertools* we can successfully install these 3 packages:... Xiubo Li
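The fix described above amounts to enabling the PowerTools repository for the install. A minimal sketch, assuming a CentOS 8 Stream host with root access:

```shell
# The three -devel packages live in the PowerTools (CRB) repository,
# which is disabled by default on CentOS 8 Stream; enable it just for
# this transaction so dnf can resolve them:
dnf install -y --enablerepo=powertools \
    userspace-rcu-devel libedit-devel device-mapper-devel
```

On some spins the repository id is capitalized (`PowerTools`) or named `crb`, so the repo id may need adjusting per distro.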
08:50 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
07:29 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
Checked from the *centos 8 stream* mirror, the *userspace-rcu-devel libedit-devel device-mapper-devel* packages were ... Xiubo Li
06:56 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
09:02 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:58 AM Backport #61202 (In Progress): pacific: MDS imported_inodes metric is not updated.
https://github.com/ceph/ceph/pull/51699 Dhairya Parmar
08:57 AM Bug #59350 (Fix Under Review): qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScr...
Dhairya Parmar
08:54 AM Backport #61204 (In Progress): reef: MDS imported_inodes metric is not updated.
Venky Shankar wrote:
> Dhairya, please take this one.
https://github.com/ceph/ceph/pull/51698
Dhairya Parmar
08:53 AM Backport #61203 (In Progress): quincy: MDS imported_inodes metric is not updated.
Dhairya Parmar
08:53 AM Backport #61203: quincy: MDS imported_inodes metric is not updated.
Venky Shankar wrote:
> Dhairya, please take this one.
https://github.com/ceph/ceph/pull/51697
Dhairya Parmar
04:52 AM Backport #61203: quincy: MDS imported_inodes metric is not updated.
Dhairya, please take this one. Venky Shankar
08:52 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:53 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
reef - https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-sm... Kotresh Hiremath Ravishankar
08:52 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:51 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-sm...
Kotresh Hiremath Ravishankar
08:51 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:57 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:50 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:51 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:50 AM Backport #59620: quincy: client: fix dump mds twice
Venky Shankar wrote:
> Dhairya, please take this one.
Patch exists in quincy https://github.com/ceph/ceph/commits...
Dhairya Parmar
04:53 AM Backport #59620: quincy: client: fix dump mds twice
Dhairya, please take this one. Venky Shankar
08:50 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:51 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
08:43 AM Backport #61158: reef: client: fix dump mds twice
Venky Shankar wrote:
> Dhairya, please take this one.
This seems to already exist in reef https://github.com/ceph...
Dhairya Parmar
06:54 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:54 AM Fix #59667: qa: ignore cluster warning encountered in test_refuse_client_session_on_reconnect
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:52 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
06:19 AM Bug #61357 (New): cephfs-data-scan: parallelize cleanup step
https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/#recovery-from-missing-metadata-objects
In the do...
Venky Shankar
06:12 AM Backport #59722 (In Progress): quincy: qa: run scrub post disaster recovery procedure
Venky Shankar
05:17 AM Backport #59726 (In Progress): quincy: mds: allow entries to be removed from lost+found directory
Venky Shankar
04:58 AM Backport #61233 (In Progress): quincy: mds: a few simple operations crash mds
Venky Shankar
04:49 AM Backport #59725 (In Progress): pacific: mds: allow entries to be removed from lost+found directory
Venky Shankar
01:12 AM Bug #56532 (Resolved): client stalls during vstart_runner test
Xiubo Li
01:12 AM Backport #58603 (Resolved): pacific: client stalls during vstart_runner test
Xiubo Li
01:11 AM Backport #58881 (Resolved): pacific: mds: Jenkins fails with skipping unrecognized type MClientRe...
Xiubo Li
01:08 AM Bug #57044 (Resolved): mds: add some debug logs for "crash during construction of internal request"
Xiubo Li
01:00 AM Bug #55409 (Resolved): client: incorrect operator precedence in Client.cc
Xiubo Li
12:51 AM Backport #61346 (In Progress): pacific: mds: fsstress.sh hangs with multimds (deadlock between un...
Xiubo Li
12:48 AM Backport #61348 (In Progress): quincy: mds: fsstress.sh hangs with multimds (deadlock between unl...
Xiubo Li
12:46 AM Backport #61347 (In Progress): reef: mds: fsstress.sh hangs with multimds (deadlock between unlin...
Xiubo Li

05/22/2023

07:45 PM Backport #61348 (Resolved): quincy: mds: fsstress.sh hangs with multimds (deadlock between unlink...
https://github.com/ceph/ceph/pull/51685 Backport Bot
07:45 PM Backport #61347 (Resolved): reef: mds: fsstress.sh hangs with multimds (deadlock between unlink a...
https://github.com/ceph/ceph/pull/51684 Backport Bot
07:45 PM Backport #61346 (Resolved): pacific: mds: fsstress.sh hangs with multimds (deadlock between unlin...
https://github.com/ceph/ceph/pull/51686 Backport Bot
07:44 PM Bug #58340 (Pending Backport): mds: fsstress.sh hangs with multimds (deadlock between unlink and ...
Patrick Donnelly
07:43 PM Bug #59657 (Resolved): qa: test with postgres failed (deadlock between link and migrate straydn(r...
Backported via https://tracker.ceph.com/issues/58340 Patrick Donnelly
03:06 PM Bug #57244: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 p...
PR https://github.com/ceph/ceph/pull/47752 has been reverted: https://github.com/ceph/ceph/pull/51661 Venky Shankar
03:05 PM Bug #57244 (Fix Under Review): [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ...
Venky Shankar
12:54 PM Bug #61243: qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
Please disable the tests that need to be disabled. Milind Changire
10:40 AM Feature #61334 (Fix Under Review): cephfs-mirror: use snapdiff api for efficient tree traversal
With https://github.com/ceph/ceph/pull/43546 merged, cephfs-mirror can make use of the snapdiff api (via readdir_snapdif... Venky Shankar
08:51 AM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
The tests worked as expected:... Xiubo Li
08:44 AM Bug #61151 (Fix Under Review): libcephfs: incorrectly showing the size for snapshots when stating...
Xiubo Li
04:37 AM Bug #59551 (Fix Under Review): mgr/stats: exception ValueError :invalid literal for int() with ba...
Jos Collin
02:30 AM Backport #58994 (Resolved): pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
Xiubo Li
02:29 AM Backport #59268 (Resolved): pacific: libcephfs: clear the suid/sgid for fallocate
Xiubo Li
01:10 AM Bug #61148: dbench test results in call trace in dmesg
Hit this again in Patrick's `fsstress`-related tests:
https://pulpito.ceph.com/pdonnell-2023-05-19_20:26:49-fs-w...
Xiubo Li

05/20/2023

11:01 AM Backport #59721 (In Progress): pacific: qa: run scrub post disaster recovery procedure
Venky Shankar
10:53 AM Backport #61235 (In Progress): pacific: mds: a few simple operations crash mds
Venky Shankar
10:48 AM Backport #59558: quincy: qa: RuntimeError: more than one file system available
Patrick, https://github.com/ceph/ceph/pull/50922#issuecomment-1521943127 is waiting on this. Venky Shankar
10:43 AM Backport #61158: reef: client: fix dump mds twice
Dhairya, please take this one. Venky Shankar
10:42 AM Backport #61204: reef: MDS imported_inodes metric is not updated.
Dhairya, please take this one. Venky Shankar
10:37 AM Backport #61234 (In Progress): reef: mds: a few simple operations crash mds
Venky Shankar
10:34 AM Backport #59724 (In Progress): reef: mds: allow entries to be removed from lost+found directory
Venky Shankar
10:29 AM Backport #59723 (In Progress): reef: qa: run scrub post disaster recovery procedure
Venky Shankar
09:41 AM Backport #59559 (Resolved): reef: qa: RuntimeError: more than one file system available
Venky Shankar

05/19/2023

09:27 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
/a/yuriw-2023-05-17_19:39:18-rados-wip-yuri5-testing-2023-05-09-1324-pacific-distro-default-smithi/7276765 Laura Flores
07:25 AM Bug #61279: qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
This was addressed with https://tracker.ceph.com/issues/52606 but hit again. Kotresh Hiremath Ravishankar
07:24 AM Bug #61279 (New): qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
Description: fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd... Kotresh Hiremath Ravishankar
06:39 AM Bug #61265 (New): qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
Description: fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{c... Kotresh Hiremath Ravishankar
05:54 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
Milind, PTAL at the discussion in https://tracker.ceph.com/issues/59342 if it helps. I closed that as duplicate of this. Kotresh Hiremath Ravishankar
05:52 AM Bug #59342 (Duplicate): qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
This is a duplicate of https://tracker.ceph.com/issues/57655, hence closing this. Kotresh Hiremath Ravishankar

05/18/2023

02:33 PM Backport #58994: pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50988
merged
Yuri Weinstein
02:33 PM Backport #59268: pacific: libcephfs: clear the suid/sgid for fallocate
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50988
merged
Yuri Weinstein
02:32 PM Backport #59386: pacific: [RHEL stock] pjd test failures(a bug that need to wait the unlink to fi...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50986
merged
Yuri Weinstein
02:31 PM Backport #59023: pacific: mds: warning `clients failing to advance oldest client/flush tid` seen ...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50811
merged
Yuri Weinstein
02:31 PM Backport #59246: pacific: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_cluste...
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50809
merged
Yuri Weinstein
02:31 PM Backport #59249: pacific: qa: intermittent nfs test failures at nfs cluster creation
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50809
merged
Yuri Weinstein
02:31 PM Backport #59252: pacific: mgr/nfs: disallow non-existent paths when creating export
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50809
merged
Yuri Weinstein
02:30 PM Backport #57721: pacific: qa: data-scan/journal-tool do not output debugging in upstream testing
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50773
merged
Yuri Weinstein
02:29 PM Backport #57825: pacific: qa: mirror tests should cleanup fs during unwind
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50765
merged
Yuri Weinstein
12:30 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
Venky Shankar wrote:
> Rishabh Dave wrote:
> > http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-w...
Milind Changire
12:16 PM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
Should we move to cloning from github.com instead of git.ceph.com for this specific test?
git.ceph.com network conn...
Milind Changire
05:02 AM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
Xiubo Li wrote:
> This should be the same issue with https://tracker.ceph.com/issues/59343.
Which means this can ...
Venky Shankar
11:43 AM Bug #61243: qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
Cluster log shows the fs degraded/offline and then cleared before the test ends. A bunch of OSD failure messages after the test ended.
...
Kotresh Hiremath Ravishankar
11:07 AM Bug #61243 (Duplicate): qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
Job: fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro... Kotresh Hiremath Ravishankar
10:45 AM Bug #58949 (Need More Info): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
Kotresh Hiremath Ravishankar
10:45 AM Bug #58949: qa: test_cephfs.test_disk_quota_exceeeded_error is racy
This is seen again in https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-... Kotresh Hiremath Ravishankar
10:27 AM Bug #59342: qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/7270060 Kotresh Hiremath Ravishankar
07:29 AM Backport #61235 (Resolved): pacific: mds: a few simple operations crash mds
https://github.com/ceph/ceph/pull/51609 Backport Bot
07:29 AM Backport #61234 (Resolved): reef: mds: a few simple operations crash mds
https://github.com/ceph/ceph/pull/51608 Backport Bot
07:29 AM Backport #61233 (Resolved): quincy: mds: a few simple operations crash mds
https://github.com/ceph/ceph/pull/51688 Backport Bot
07:23 AM Bug #58411 (Pending Backport): mds: a few simple operations crash mds
Venky Shankar

05/17/2023

03:35 PM Backport #52854: pacific: qa: test_simple failure
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50756
merged
Yuri Weinstein
03:33 PM Backport #59415: reef: pybind/mgr/volumes: investigate moving calls which may block on libcephfs ...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51042
merged
Yuri Weinstein
01:42 PM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
> If the rstat is enabled for the .snap snapdir it should report the total size for all the snapshots.
I'm not sur...
Greg Farnum
06:48 AM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
There is one option *mds_snap_rstat* in *MDS* daemons, after it's enabled it will enable nested rstat for snapshots, ... Xiubo Li
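The *mds_snap_rstat* toggle mentioned above can be sketched as a config change; this is an illustrative fragment assuming a running cluster with admin credentials (the daemon name `mds.a` is hypothetical):

```shell
# mds_snap_rstat is off by default; enabling it makes the MDS maintain
# nested rstats for snapshots so snapdir stats can report sizes:
ceph config set mds mds_snap_rstat true

# Confirm the option is now set for a given MDS daemon:
ceph config get mds.a mds_snap_rstat
```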
01:11 PM Bug #61217 (Fix Under Review): ceph: corrupt snap message from mds1
Xiubo Li
12:57 PM Bug #61217: ceph: corrupt snap message from mds1
Introduced by :... Xiubo Li
12:55 PM Bug #61217 (Resolved): ceph: corrupt snap message from mds1
From ceph user mail list: https://www.spinics.net/lists/ceph-users/msg77106.html
The kclient received a corrupted ...
Xiubo Li
01:00 PM Bug #61148: dbench test results in call trace in dmesg
Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Venky Shankar wrote:
> > > > Xiubo Li wrote:
> ...
Venky Shankar
10:38 AM Bug #61148: dbench test results in call trace in dmesg
Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo Li wrote:
> > > > Okay, this morning ...
Xiubo Li
07:39 AM Bug #61148: dbench test results in call trace in dmesg
Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Okay, this morning I saw you have merged the PR h...
Venky Shankar
12:51 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Nothing has changed on storage01 since my first crash report. The crash happens on Leap 15.4 with Quincy. I just want... Eugen Block
12:42 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Eugen Block wrote:
> Both VMs use the same kernel version (they are not running 15.1 anymore, both have been upgrade...
Jos Collin
12:29 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Both VMs use the same kernel version (they are not running 15.1 anymore, both have been upgraded to 15.4 on the way t... Eugen Block
12:12 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Eugen Block wrote:
> This is kind of strange, when I initially wanted to test cephfs-top I chose a different virtual...
Jos Collin
09:23 AM Fix #58023 (Fix Under Review): mds: do not evict clients if OSDs are laggy
Dhairya Parmar
09:18 AM Bug #59394: ACLs not fully supported.
Brian,
Have you read up "these docs":https://docs.ceph.com/en/latest/cephfs/client-config-ref/#confval-client_acl_ty...
Milind Changire
07:46 AM Bug #55446: mgr-nfs-upgrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
pacific:... Kotresh Hiremath Ravishankar
07:44 AM Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
pacific:
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
Kotresh Hiremath Ravishankar
07:43 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
pacific:
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
Kotresh Hiremath Ravishankar
07:43 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
pacific
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
Kotresh Hiremath Ravishankar
06:27 AM Backport #61204 (Resolved): reef: MDS imported_inodes metric is not updated.
https://github.com/ceph/ceph/pull/51698 Backport Bot
06:27 AM Bug #61201: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific
PRs in the test batch:
# pacific: qa: mirror tests should cleanup fs during unwind by batrick · Pull Request #5076...
Kotresh Hiremath Ravishankar
06:24 AM Bug #61201 (Resolved): qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in...
Description: fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/... Kotresh Hiremath Ravishankar
06:27 AM Backport #61203 (In Progress): quincy: MDS imported_inodes metric is not updated.
Backport Bot
06:27 AM Backport #61202 (Resolved): pacific: MDS imported_inodes metric is not updated.
Backport Bot
06:18 AM Bug #59107 (Pending Backport): MDS imported_inodes metric is not updated.
Venky Shankar
01:31 AM Bug #50719 (Need More Info): xattr returning from the dead (sic!)
Xiubo Li
01:11 AM Bug #50719: xattr returning from the dead (sic!)
Tobias Hachmer wrote:
> Hi Jeff,
>
> we're also affected by this issue and can confirm the behaviour Thomas descr...
Xiubo Li
01:12 AM Backport #58826 (Resolved): pacific: mds: make num_fwd and num_retry to __u32
Xiubo Li
01:03 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
Kotresh Hiremath Ravishankar wrote:
> This is seen in reef QA run - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:...
Xiubo Li
12:08 AM Backport #59406 (Resolved): reef: cephfs-top: navigate to home screen when no fs
Jos Collin

05/16/2023

07:11 PM Backport #59245: reef: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50808
merged
Yuri Weinstein
07:10 PM Backport #59248: reef: qa: intermittent nfs test failures at nfs cluster creation
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50808
merged
Yuri Weinstein
07:10 PM Backport #59251: reef: mgr/nfs: disallow non-existent paths when creating export
Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50808
merged
Yuri Weinstein
07:09 PM Backport #59227: reef: cephfs-data-scan: does not scan_links for lost+found
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50782
merged
Yuri Weinstein
06:36 PM Backport #59430: reef: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.Test...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51047
merged
Yuri Weinstein
06:35 PM Backport #59406: reef: cephfs-top: navigate to home screen when no fs
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51003
merged
Yuri Weinstein
06:33 PM Backport #59596: reef: cephfs-top: fix help text for delay
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50998
merged
Yuri Weinstein
06:33 PM Backport #59397: reef: cephfs-top: cephfs-top -d <seconds> not working as expected
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50998
merged
Yuri Weinstein
06:31 PM Backport #59020: reef: cephfs-data-scan: multiple data pools are not supported
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50524
merged
Yuri Weinstein
04:27 PM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
11:19 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
reef:
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s...
Kotresh Hiremath Ravishankar
04:27 PM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
11:27 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s... Kotresh Hiremath Ravishankar
04:26 PM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
04:25 PM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
This is seen in reef QA run - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247... Kotresh Hiremath Ravishankar
04:23 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
11:15 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
reef: https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-sm... Kotresh Hiremath Ravishankar
04:21 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
another instance in reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-re... Kotresh Hiremath Ravishankar
10:32 AM Bug #61182 (Pending Backport): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon a...
Description: fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} clus... Kotresh Hiremath Ravishankar
04:21 PM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
04:19 PM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
reef- https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-smi... Kotresh Hiremath Ravishankar
04:06 PM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm... Kotresh Hiremath Ravishankar
04:07 PM Bug #54460: snaptest-multiple-capsnaps.sh test failure
reef
https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-smi...
Kotresh Hiremath Ravishankar
11:22 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s... Kotresh Hiremath Ravishankar
03:48 PM Bug #50719: xattr returning from the dead (sic!)
Hi Jeff,
we're also affected by this issue and can confirm the behaviour Thomas described.
Versions here:
Ce...
Tobias Hachmer
02:38 PM Backport #61187 (In Progress): reef: qa: ignore cluster warning encountered in test_refuse_client...
https://github.com/ceph/ceph/pull/51515 Dhairya Parmar
12:53 PM Backport #61187 (Resolved): reef: qa: ignore cluster warning encountered in test_refuse_client_se...
Backport Bot
12:49 PM Fix #59667 (Pending Backport): qa: ignore cluster warning encountered in test_refuse_client_sessi...
Venky Shankar
11:16 AM Fix #59667: qa: ignore cluster warning encountered in test_refuse_client_session_on_reconnect
This needs a backport for reef:
Issue is hit in reef
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10...
Kotresh Hiremath Ravishankar
11:30 AM Bug #61186 (Fix Under Review): mgr/nfs: hitting incomplete command returns same suggestion twice
for example... Dhairya Parmar
11:26 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
seen in reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-d... Kotresh Hiremath Ravishankar
11:24 AM Bug #38704: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN) in cluster log"
Similar issue seen again in reef on different test (tasks/cephfs_misc_tests) - https://pulpito.ceph.com/yuriw-2023-05... Kotresh Hiremath Ravishankar
11:24 AM Documentation #61185 (Closed): mgr/nfs: ceph nfs cluster config reset CLUSTER_NAME -i PATH_TO_CON...
While going through Red Hat doc https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/file_system... Dhairya Parmar
11:20 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s... Kotresh Hiremath Ravishankar
11:14 AM Bug #47292: cephfs-shell: test_df_for_valid_file failure
seen in yuri's reef run
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-ree...
Kotresh Hiremath Ravishankar
11:13 AM Bug #61184 (Closed): mgr/nfs: setting config using external file gets overridden
Followed similar steps as mentioned in https://tracker.ceph.com/issues/59463... Dhairya Parmar
10:50 AM Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
Dhairya Parmar wrote:
> I was going through the config file content and found `pseudo_path` to be a bit doubtful, it...
Dhairya Parmar
10:48 AM Bug #61183 (Closed): mgr/nfs: Applying for first time, NFS-Ganesha Config Added Successfully but ...
While setting NFS cluster config for the first time using -i option it does say "NFS-Ganesha Config Added Successfull... Dhairya Parmar
07:30 AM Bug #59688: mds: idempotence issue in client request
Mer Xuanyi wrote:
> Venky Shankar wrote:
> > Hi Mer Xuanyi,
> >
> > Thanks for the detailed report.
> >
> > S...
Venky Shankar
06:36 AM Bug #59688: mds: idempotence issue in client request
Venky Shankar wrote:
> Hi Mer Xuanyi,
>
> Thanks for the detailed report.
>
> So, this requires flipping the c...
Mer Xuanyi
04:08 AM Bug #59688 (Triaged): mds: idempotence issue in client request
Hi Mer Xuanyi,
Thanks for the detailed report.
So, this requires flipping the config tunables from their defaul...
Venky Shankar
07:28 AM Backport #59202 (In Progress): pacific: qa: add testing in fs:workload for different kinds of sub...
Milind Changire
05:53 AM Backport #59202 (New): pacific: qa: add testing in fs:workload for different kinds of subvolumes
redoing backport Milind Changire
06:10 AM Backport #59706 (In Progress): pacific: Mds crash and fails with assert on prepare_new_inode
Xiubo Li
05:53 AM Backport #59707 (In Progress): quincy: Mds crash and fails with assert on prepare_new_inode
Xiubo Li
05:48 AM Backport #59708 (In Progress): reef: Mds crash and fails with assert on prepare_new_inode
Xiubo Li
05:39 AM Backport #59007 (Resolved): pacific: mds stuck in 'up:replay' and crashed.
Xiubo Li
02:26 AM Bug #58411 (Fix Under Review): mds: a few simple operations crash mds
Venky Shankar
02:21 AM Bug #61148 (Fix Under Review): dbench test results in call trace in dmesg
I created one PR to fix it.
Fixed the race in https://github.com/ceph/ceph/pull/47752/commits/9fbde6da076f2d7c8bfd...
Xiubo Li
02:13 AM Bug #61148: dbench test results in call trace in dmesg
Venky Shankar wrote:
> Xiubo Li wrote:
> > Okay, this morning I saw you have merged the PR https://github.com/ceph/...
Xiubo Li
01:23 AM Bug #61148: dbench test results in call trace in dmesg
Xiubo Li wrote:
> Okay, this morning I saw you have merged the PR https://github.com/ceph/ceph/pull/47752.
>
> Th...
Venky Shankar
01:12 AM Bug #61148: dbench test results in call trace in dmesg
Xiubo Li wrote:
> Okay, this morning I saw you have merged the PR https://github.com/ceph/ceph/pull/47752.
I was ...
Venky Shankar
12:51 AM Bug #61148: dbench test results in call trace in dmesg
Okay, this morning I saw you have merged the PR https://github.com/ceph/ceph/pull/47752.
This issue should be intr...
Xiubo Li

05/15/2023

04:52 PM Backport #61167 (Resolved): quincy: [WRN] : client.408214273 isn't responding to mclientcaps(revo...
https://github.com/ceph/ceph/pull/52851
Backported https://tracker.ceph.com/issues/62197 together with this tracker.
Backport Bot
04:52 PM Backport #61166 (Resolved): pacific: [WRN] : client.408214273 isn't responding to mclientcaps(rev...
https://github.com/ceph/ceph/pull/52852
Backported https://tracker.ceph.com/issues/62199 together with this tracker.
Backport Bot
04:52 PM Backport #61165 (Resolved): reef: [WRN] : client.408214273 isn't responding to mclientcaps(revoke...
https://github.com/ceph/ceph/pull/52850
Backported https://tracker.ceph.com/issues/62198 together with this tracker.
Backport Bot
04:51 PM Bug #57244 (Pending Backport): [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ...
Venky Shankar
04:02 PM Backport #59595 (Resolved): pacific: cephfs-top: fix help text for delay
Jos Collin
03:35 PM Backport #59595: pacific: cephfs-top: fix help text for delay
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50715
merged
Yuri Weinstein
04:01 PM Backport #59398 (Resolved): pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
Jos Collin
03:35 PM Backport #59398: pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50715
merged
Yuri Weinstein
04:00 PM Backport #58807 (Resolved): pacific: cephfs-top: add an option to dump the computed values to stdout
Jos Collin
03:35 PM Backport #58807: pacific: cephfs-top: add an option to dump the computed values to stdout
Jos Collin wrote:
> Backport PR: https://github.com/ceph/ceph/pull/50715.
merged
Yuri Weinstein
03:37 PM Backport #58881: pacific: mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50733
merged
Yuri Weinstein
03:37 PM Backport #58826: pacific: mds: make num_fwd and num_retry to __u32
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50733
merged
Yuri Weinstein
03:36 PM Backport #59007: pacific: mds stuck in 'up:replay' and crashed.
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50725
merged
Yuri Weinstein
03:34 PM Backport #58866: pacific: cephfs-top: Sort menu doesn't show 'No filesystem available' screen whe...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50596
merged
Yuri Weinstein
03:34 PM Backport #59720 (In Progress): pacific: client: read wild pointer when reconnect to mds
Venky Shankar
03:34 PM Backport #59019: pacific: cephfs-data-scan: multiple data pools are not supported
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50523
merged
Yuri Weinstein
03:32 PM Backport #59718 (In Progress): quincy: client: read wild pointer when reconnect to mds
Venky Shankar
03:24 PM Backport #59719 (In Progress): reef: client: read wild pointer when reconnect to mds
Venky Shankar
03:18 PM Backport #61158 (Resolved): reef: client: fix dump mds twice
Backport Bot
11:00 AM Bug #55446: mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
10:41 AM Bug #50719: xattr returning from the dead (sic!)
Hi Jeff!
Just wanted to let you know that this issue is still relevant and severe with more recent versions of bot...
Thomas Hukkelberg
08:27 AM Bug #61151 (Fix Under Review): libcephfs: incorrectly showing the size for snapshots when stating...
If *rstat* is enabled for the *.snap* snapdir, it should report the total size of all the snapshots. And at the s... Xiubo Li
07:42 AM Bug #61148: dbench test results in call trace in dmesg
More detail call trace:... Xiubo Li
06:05 AM Bug #61148: dbench test results in call trace in dmesg
Another instance, but this time another workunit: https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshank... Venky Shankar
05:13 AM Bug #61148 (Rejected): dbench test results in call trace in dmesg
https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshankar-testing-20230509.090020-1-testing-default-smith... Venky Shankar
06:57 AM Fix #59667 (Resolved): qa: ignore cluster warning encountered in test_refuse_client_session_on_re...
Venky Shankar
04:50 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshankar-testing-20230509.090020-1-testing-default-smi... Venky Shankar
02:57 AM Bug #61009 (Fix Under Review): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0ea734a1639fc4740189dcd0...
Telemetry Bot
02:57 AM Bug #61008 (New): crash: void interval_set<T, C>::insert(T, T, T*, T*) [with T = inodeno_t; C = s...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8e3f5c0126b1f4f50f08ff5b...
Telemetry Bot
02:57 AM Bug #61004 (New): crash: MDSRank::is_stale_message(boost::intrusive_ptr<Message const> const&) const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=af82cc6e82ac3651d4918c4a...
Telemetry Bot
02:56 AM Bug #60986 (New): crash: void MDCache::rejoin_send_rejoins(): assert(auth >= 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71d482317bedfc17674af4f5...
Telemetry Bot
02:56 AM Bug #60980 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e2a2ae4253fafecb8b3ca014...
Telemetry Bot
02:55 AM Bug #60949 (New): crash: cephfs-journal-tool(

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0d4dcd3a80040e9cf7f44ee7...
Telemetry Bot
02:55 AM Bug #60945 (New): crash: virtual void C_Client_Remount::finish(int): abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=594102a81a05cdba00f19a82...
Telemetry Bot
02:49 AM Bug #60685 (New): crash: elist<T>::~elist() [with T = CInode*]: assert(_head.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c4679655cf1f90278c3b054b...
Telemetry Bot
02:49 AM Bug #60679 (New): crash: C_GatherBuilderBase<ContextType, GatherType>::~C_GatherBuilderBase() [wi...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5b6b5acff6ff7a2c7ee0565f...
Telemetry Bot
02:48 AM Bug #60669 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2cf0c9e2e9b09fc177e74e6f...
Telemetry Bot
02:48 AM Bug #60668 (New): crash: void Migrator::export_try_cancel(CDir*, bool): assert(it != export_state...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ff4594c6abd556deefbc7327...
Telemetry Bot
02:48 AM Bug #60665 (New): crash: void MDCache::open_snaprealms(): assert(rejoin_done)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0da14d340ca86eb98a0a866c...
Telemetry Bot
02:48 AM Bug #60664 (New): crash: elist<T>::~elist() [with T = CDentry*]: assert(_head.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=770f229d0a641695e3d43ffb...
Telemetry Bot
02:48 AM Bug #60660 (New): crash: std::_Rb_tree_rebalance_for_erase(std::_Rb_tree_node_base*, std::_Rb_tre...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6bb8c01a74a9dcf2f2500ef7...
Telemetry Bot
02:48 AM Bug #60640 (New): crash: void Journaler::_write_head(Context*): assert(last_written.write_pos >= ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9e5c5d1ecf602782154f9b19...
Telemetry Bot
02:48 AM Bug #60636 (New): crash: elist<T>::~elist() [with T = CDir*]: assert(_head.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5fc520e822550d2d86ab92a1...
Telemetry Bot
02:47 AM Bug #60630 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e4d4bb371a344df64ed9d22c...
Telemetry Bot
02:47 AM Bug #60629 (New): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=978159d0d2675e074cadedff...
Telemetry Bot
02:47 AM Bug #60628 (New): crash: MDCache::purge_inodes(const interval_set<inodeno_t>&, LogSegment*)::<lam...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=de50fd8802ec5732812d300d...
Telemetry Bot
02:47 AM Bug #60627 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9f4072660844962480cb518...
Telemetry Bot
02:47 AM Bug #60625 (Resolved): crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> const&, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c2092a196eb69c3c08a39646...
Telemetry Bot
02:47 AM Bug #60622 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a081e9516cda7c2c8dfa596c...
Telemetry Bot
02:47 AM Bug #60618 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b0d320aceb93370daf216e36...
Telemetry Bot
02:47 AM Bug #60607 (New): crash: virtual void MDSCacheObject::bad_put(int): assert(ref_map[by] > 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7ad58204e57e59e10794c28d...
Telemetry Bot
02:47 AM Bug #60606 (New): crash: ceph::buffer::list::iterator_impl<true>::copy(unsigned int, char*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e841c48356848e8657d3cb5e...
Telemetry Bot
02:47 AM Bug #60600 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=183a087b3f731571c6337b66...
Telemetry Bot
02:46 AM Bug #60598 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1ebda7a102930caf786ba16e...
Telemetry Bot
02:41 AM Bug #60372 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=42c202ed03daf7e9cbd008db...
Telemetry Bot
02:40 AM Bug #60343 (New): crash: void MDCache::handle_cache_rejoin_ack(ceph::cref_t<MMDSCacheRejoin>&): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=374c1bb49952f4c442a939bb...
Telemetry Bot
02:40 AM Bug #60319 (New): crash: std::_Rb_tree<dirfrag_t, dirfrag_t, std::_Identity<dirfrag_t>, std::less...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=20a0ec6572610c3fb12e9f12...
Telemetry Bot
02:39 AM Bug #60303 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f88cc08b61611695f0a919ea...
Telemetry Bot
02:38 AM Bug #60241 (New): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1e491c6d2002d8aa5fa17dac...
Telemetry Bot
02:35 AM Bug #60126 (New): crash: bool MDCache::shutdown_pass(): assert(!migrator->is_importing())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a36f9e1d71483b4075ca8625...
Telemetry Bot
02:34 AM Bug #60109 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=abb374dbfa3649257e6590ea...
Telemetry Bot
02:34 AM Bug #60092 (New): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): asser...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=19e45b6ba87e5ddb0df16715...
Telemetry Bot
02:32 AM Bug #60014 (New): crash: void MDCache::remove_replay_cap_reconnect(inodeno_t, client_t): assert(c...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cc7ca4bafc15d4883c77e861...
Telemetry Bot
02:29 AM Bug #59865 (New): crash: CInode::get_dirfrags() const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=546c0379bf1bc4705b166c60...
Telemetry Bot
02:28 AM Bug #59833 (Pending Backport): crash: void MDLog::trim(int): assert(segments.size() >= pre_segmen...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=21725944feb692959f706d0f...
Telemetry Bot
02:27 AM Bug #59819 (New): crash: virtual CDentry::~CDentry(): assert(batch_ops.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5520f521a1ed7653b6505f60...
Telemetry Bot
02:25 AM Bug #59802 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=78028e7ef848aec87e854972...
Telemetry Bot
02:25 AM Bug #59799 (New): crash: ProtocolV2::handle_auth_request(ceph::buffer::list&)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=34dce467397e70daae79ec8e...
Telemetry Bot
02:24 AM Bug #59785 (Closed): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == L...

*New crash events were reported via Telemetry with newer versions (['17.2.1', '17.2.5']) than encountered in Tracke...
Telemetry Bot
02:23 AM Bug #59768 (Duplicate): crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): asse...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1a84e31a4bc3ae6dc69d901c...
Telemetry Bot
02:23 AM Bug #59767 (New): crash: MDSDaemon::dump_status(ceph::Formatter*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=15752cc3020e5047d0724344...
Telemetry Bot
02:23 AM Bug #59766 (New): crash: virtual void ESession::replay(MDSRank*): assert(g_conf()->mds_wipe_sessi...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=36e909287876c9be42c068ca...
Telemetry Bot
02:22 AM Bug #59761 (New): crash: void MDLog::_replay_thread(): assert(journaler->is_readable() || mds->is...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71bbbf63c8f73aa37e3aa82e...
Telemetry Bot
02:22 AM Bug #59751 (New): crash: MDSDaemon::respawn()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8bfff60bfa4ffd456fa49fb8...
Telemetry Bot
02:14 AM Bug #59749 (New): crash: virtual CInode::~CInode(): assert(batch_ops.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=479afe0191403e023f1878c1...
Telemetry Bot
02:14 AM Bug #58934 (Duplicate): snaptest-git-ceph.sh failure with ceph-fuse
This should be the same issue with https://tracker.ceph.com/issues/59343. Xiubo Li
02:13 AM Bug #59741 (New): crash: void MDCache::remove_inode(CInode*): assert(o->get_num_ref() == 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f60159ef05cf6bfbf51c6688...
Telemetry Bot

05/12/2023

09:24 AM Bug #59736 (New): qa: add one test case for "kclient: ln: failed to create hard link 'file name':...
We need to add one test case for https://tracker.ceph.com/issues/59515.... Xiubo Li

05/11/2023

08:18 PM Bug #56774: crash: Client::_get_vino(Inode*)
Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.... Yaarit Hatuka
08:15 PM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.... Yaarit Hatuka
05:10 PM Bug #59716 (Fix Under Review): tools/cephfs/first-damage: unicode decode errors break iteration
Patrick Donnelly
12:55 PM Feature #59601: Provide way to abort kernel mount after lazy umount
Niklas Hambuechen wrote:
> Venky Shankar wrote:
> > Have you tried force unmounting the mount (umount -f)?
>
> A...
Venky Shankar
08:53 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
Seen in pacific run
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-...
Kotresh Hiremath Ravishankar
08:48 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
08:05 AM Bug #48773: qa: scrub does not complete
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
07:46 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
05:58 AM Backport #59726 (Resolved): quincy: mds: allow entries to be removed from lost+found directory
https://github.com/ceph/ceph/pull/51689 Backport Bot
05:58 AM Backport #59725 (Resolved): pacific: mds: allow entries to be removed from lost+found directory
https://github.com/ceph/ceph/pull/51687 Backport Bot
05:58 AM Backport #59724 (Resolved): reef: mds: allow entries to be removed from lost+found directory
https://github.com/ceph/ceph/pull/51607 Backport Bot
05:51 AM Bug #59569 (Pending Backport): mds: allow entries to be removed from lost+found directory
Venky Shankar
05:50 AM Backport #59723 (Resolved): reef: qa: run scrub post disaster recovery procedure
https://github.com/ceph/ceph/pull/51606 Backport Bot
05:50 AM Backport #59722 (In Progress): quincy: qa: run scrub post disaster recovery procedure
https://github.com/ceph/ceph/pull/51690 Backport Bot
05:50 AM Backport #59721 (Resolved): pacific: qa: run scrub post disaster recovery procedure
https://github.com/ceph/ceph/pull/51610 Backport Bot
05:50 AM Bug #59527 (Pending Backport): qa: run scrub post disaster recovery procedure
Venky Shankar
03:59 AM Backport #59720 (Resolved): pacific: client: read wild pointer when reconnect to mds
https://github.com/ceph/ceph/pull/51487 Backport Bot
03:59 AM Backport #59719 (Resolved): reef: client: read wild pointer when reconnect to mds
https://github.com/ceph/ceph/pull/51484 Backport Bot
03:59 AM Backport #59718 (Resolved): quincy: client: read wild pointer when reconnect to mds
https://github.com/ceph/ceph/pull/51486 Backport Bot
03:56 AM Bug #59514 (Pending Backport): client: read wild pointer when reconnect to mds
Venky Shankar

05/10/2023

04:54 PM Bug #59716 (Pending Backport): tools/cephfs/first-damage: unicode decode errors break iteration
... Patrick Donnelly
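The failure mode named in the title can be sketched in a few lines of plain Python: a strict UTF-8 decode of a damaged dentry name raises UnicodeDecodeError and aborts the scan loop, while decoding with errors='replace' substitutes U+FFFD and lets iteration continue. This is a minimal illustration of the symptom, not the actual first-damage.py fix:

```python
# A damaged (non-UTF-8) name aborts a strict decode mid-iteration...
damaged = b"good\xff\xfename"
try:
    damaged.decode("utf-8")
    raise AssertionError("expected UnicodeDecodeError")
except UnicodeDecodeError:
    pass

# ...while errors='replace' maps each bad byte to U+FFFD and keeps going.
recovered = damaged.decode("utf-8", errors="replace")
assert recovered == "good\ufffd\ufffdname"
```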
11:35 AM Feature #59714 (Fix Under Review): mgr/volumes: Support to reject CephFS clones if cloner threads...
1. CephFS clone creation has a limit of 4 parallel clones at a time, and the rest
of the clone create requests are queue...
Neeraj Pratap Singh
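For context on the limit being discussed: the parallel-clone cap is governed by the existing mgr/volumes cloner-thread option, which operators can already inspect and raise. A hedged sketch of the relevant commands (shown for illustration; the feature above concerns rejecting clones when these threads are unavailable):

```shell
# Check the current cloner thread count (defaults to 4).
ceph config get mgr mgr/volumes/max_concurrent_clones

# Raise the number of parallel clones; further requests still queue.
ceph config set mgr mgr/volumes/max_concurrent_clones 8
```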
08:43 AM Backport #59708 (Resolved): reef: Mds crash and fails with assert on prepare_new_inode
https://github.com/ceph/ceph/pull/51506 Backport Bot
08:43 AM Backport #59707 (Resolved): quincy: Mds crash and fails with assert on prepare_new_inode
https://github.com/ceph/ceph/pull/51507 Backport Bot
08:43 AM Backport #59706 (Resolved): pacific: Mds crash and fails with assert on prepare_new_inode
https://github.com/ceph/ceph/pull/51508 Backport Bot
08:36 AM Bug #52280 (Pending Backport): Mds crash and fails with assert on prepare_new_inode
Venky Shankar
04:49 AM Bug #59705 (Fix Under Review): client: only wait for write MDS OPs when unmounting
Xiubo Li
04:46 AM Bug #59705 (Resolved): client: only wait for write MDS OPs when unmounting
We do not care about the read MDS OPs, and it's safe to just drop
them when unmounting.
Xiubo Li

05/09/2023

07:50 PM Bug #59394: ACLs not fully supported.
Milind Changire wrote:
> Brian,
> The command you are using is correct.
> However, the config key is incorrect.
>...
Brian Woods
07:44 PM Bug #59394: ACLs not fully supported.
Milind Changire wrote:
> Brian,
> The command you are using is correct.
> However, the config key is incorrect.
>...
Brian Woods
01:30 PM Bug #59691 (Fix Under Review): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
Patrick Donnelly
01:22 PM Bug #59691 (Resolved): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
... Patrick Donnelly
01:06 PM Feature #59601: Provide way to abort kernel mount after lazy umount
Venky Shankar wrote:
> Have you tried force unmounting the mount (umount -f)?
After *umount --lazy*, the mount po...
Niklas Hambuechen
12:48 PM Bug #59688 (Triaged): mds: idempotence issue in client request
Found that the MDS may process the same client request twice after the session with the client is rebuilt because of a network issue.
...
Mer Xuanyi
06:41 AM Bug #59683 (Resolved): Error: Unable to find a match: userspace-rcu-devel libedit-devel device-ma...
- https://pulpito.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-testing-default-smith... Venky Shankar
05:52 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
Another instance: https://pulpito.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-testi... Venky Shankar
05:23 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Venky Shankar wrote:
> Xiubo Li wrote:
> > This is a failure with *libcephfs* and have the client side logs:
> >
...
Xiubo Li
04:19 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Xiubo Li wrote:
> This is a failure with *libcephfs* and have the client side logs:
>
> vshankar-2023-04-06_04:14...
Venky Shankar
01:54 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
This also needs to be fixed in kclient. Xiubo Li
03:52 AM Bug #59682: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the unit file o...
Thanks for letting us know, Zac. Venky Shankar
02:28 AM Bug #59682 (Resolved): CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the ...
The Debian package "cephfs-mirror" in the Ceph repository doesn't install the unit file or the man page.
This was ...
Zac Dover

05/08/2023

06:33 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
Lots more in https://pulpito.ceph.com/yuriw-2023-05-06_14:41:44-rados-pacific-release-distro-default-smithi/ Laura Flores
02:59 PM Feature #59601: Provide way to abort kernel mount after lazy umount
Niklas Hambuechen wrote:
> In some situations, e.g. when changing monitor IPs during an emergency network reconfigur...
Venky Shankar
09:16 AM Bug #59163: mds: stuck in up:rejoin when it cannot "open" missing directory inode
The MDS got marked as down:damaged as it could not decode the CDir fnode:... Venky Shankar
09:09 AM Fix #59667 (In Progress): qa: ignore cluster warning encountered in test_refuse_client_session_on...
Dhairya Parmar
08:48 AM Fix #59667 (Pending Backport): qa: ignore cluster warning encountered in test_refuse_client_sessi...
Seen in http://pulpito.front.sepia.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-test... Dhairya Parmar
06:34 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
This is a failure with *libcephfs* and we have the client-side logs:
vshankar-2023-04-06_04:14:11-fs-wip-vshankar-tes...
Xiubo Li
06:13 AM Bug #59343 (Fix Under Review): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Xiubo Li
06:13 AM Bug #14557: snaps: failed snaptest-multiple-capsnaps.sh

Xiubo Li
03:16 AM Bug #14557: snaps: failed snaptest-multiple-capsnaps.sh
Xiubo Li wrote:
> vshankar-2023-04-06_04:14:11-fs-wip-vshankar-testing-20230330.105356-testing-default-smithi/723370...
Xiubo Li
03:03 AM Bug #14557: snaps: failed snaptest-multiple-capsnaps.sh
vshankar-2023-04-06_04:14:11-fs-wip-vshankar-testing-20230330.105356-testing-default-smithi/7233705/teuthology.log
...
Xiubo Li
03:06 AM Bug #59626 (Resolved): pacific: FSMissing: File system xxxx does not exist in the map
Venky Shankar

05/05/2023

06:24 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
https://github.com/ceph/ceph/pull/51344 merged Yuri Weinstein
03:38 PM Backport #59560: pacific: qa: RuntimeError: more than one file system available
Rishabh Dave wrote:
> https://github.com/ceph/ceph/pull/51232
merged
Yuri Weinstein
11:44 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
This is kind of strange; when I initially wanted to test cephfs-top, I chose a different virtual ceph cluster which al... Eugen Block
07:39 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Jos Collin wrote:
> Eugen Block wrote:
> > Not sure if required but I wanted to add some more information, while ru...
Eugen Block
06:04 AM Bug #59551 (Need More Info): mgr/stats: exception ValueError :invalid literal for int() with base...
xinyu wang wrote:
> 'ceph fs perf stats' command miss some metadata for cephfs client, such as kernel_version.
>
...
Jos Collin
05:25 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Eugen Block wrote:
> Not sure if required but I wanted to add some more information, while running cephfs-top the mg...
Jos Collin
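The ValueError in the title is reproducible in plain Python: int() with base 16 accepts a prefixed value like '0x1f' but rejects the bare prefix '0x'. A defensive parse along these lines would avoid the crash (parse_hex is a hypothetical helper for illustration, not the mgr/stats code):

```python
def parse_hex(s: str) -> int:
    """Parse a hex string, tolerating a bare or empty '0x' prefix."""
    try:
        return int(s, 16)
    except ValueError:
        # int('0x', 16) -> ValueError: invalid literal for int() with base 16: '0x'
        return 0

assert parse_hex("0x1f") == 31  # the '0x' prefix is accepted with base 16
assert parse_hex("ff") == 255
assert parse_hex("0x") == 0     # bare prefix no longer raises
```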
09:36 AM Bug #54017: Problem with ceph fs snapshot mirror and read-only folders
This seems the same issue: https://pulpito.ceph.com/vshankar-2023-05-03_16:19:11-fs-wip-vshankar-testing-20230503.142... Xiubo Li
09:17 AM Bug #59657 (Fix Under Review): qa: test with postgres failed (deadlock between link and migrate s...
Xiubo Li
08:40 AM Bug #59657: qa: test with postgres failed (deadlock between link and migrate straydn(rename))
From the logs evicting the unresponding client or closing the sessions could unblock the deadlock issue:... Xiubo Li
07:48 AM Bug #59657 (Resolved): qa: test with postgres failed (deadlock between link and migrate straydn(r...
https://pulpito.ceph.com/vshankar-2023-05-03_16:19:11-fs-wip-vshankar-testing-20230503.142424-testing-default-smithi/... Xiubo Li
08:49 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
Please evict the corresponding client to unblock this deadlock. Mostly this should work; if not, then please restart t... Xiubo Li
02:49 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
Another failure the same with this: https://pulpito.ceph.com/vshankar-2023-05-03_16:19:11-fs-wip-vshankar-testing-202... Xiubo Li

05/04/2023

06:06 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
/a/yuriw-2023-04-25_18:56:08-rados-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/7252409 Laura Flores
11:57 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Not sure if required but I wanted to add some more information, while running cephfs-top the mgr module crashes all t... Eugen Block
10:57 AM Bug #59626 (Fix Under Review): pacific: FSMissing: File system xxxx does not exist in the map
Rishabh Dave
01:01 AM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
Rishabh Dave wrote:
> @setupfs()@ is not being called quincy onwards which is why we don't see this bug after paci...
Venky Shankar
01:00 AM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
... and also doesn't explain why are we seeing this failure now. The last pacific run https://tracker.ceph.com/projec... Venky Shankar
10:50 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
This issue occurred in Pacific run - /ceph/teuthology-archive/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-... Rishabh Dave

05/03/2023

06:30 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
Apparently, this exception is expected because @backup_fs@ is being deleted little before the traceback is printed an... Rishabh Dave
01:34 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
This commit removed the createfs boolean:... Venky Shankar
01:31 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
The pacific code is a bit different from the other branches; the interesting bit is:
pacific:...
Venky Shankar
01:27 PM Bug #59626 (Resolved): pacific: FSMissing: File system xxxx does not exist in the map
Two separate jobs fail due to this issue
* TestMirroring.test_cephfs_mirror_cancel_sync: https://pulpito.ceph.com/yu...
Venky Shankar
10:09 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
Seen in https://pulpito.ceph.com/yuriw-2023-04-26_21:59:47-fs-wip-yuri2-testing-2023-04-26-1247-pacific-distro-defaul... Kotresh Hiremath Ravishankar
04:55 AM Backport #59596 (In Progress): reef: cephfs-top: fix help text for delay
Neeraj Pratap Singh
02:25 AM Backport #59620 (Resolved): quincy: client: fix dump mds twice
Backport Bot

05/02/2023

02:36 PM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
Rishabh, can you post a backport for this? We are hitting this in Yuri's run: https://pulpito.ceph.com/yuriw-2023-04-... Venky Shankar
02:36 PM Feature #59601 (New): Provide way to abort kernel mount after lazy umount
In some situations, e.g. when changing monitor IPs during an emergency network reconfiguration, CephFS kernel mounts ... Niklas Hambuechen
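For context, the entry above concerns mounts left behind by a lazy unmount; a minimal sketch of that situation (the mount point path is illustrative):

```shell
# Lazy unmount detaches the mount point immediately, but the kernel keeps
# the CephFS superblock alive until all references are dropped:
umount -l /mnt/cephfs
# The feature request is for a way to force-abort such a lingering kernel
# mount when the cluster is unreachable (e.g. after monitor IPs changed).
```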
01:30 PM Feature #59388: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid and/or...
The PR makes all 31 types of MDS caps parse successfully. Rishabh Dave
01:30 PM Feature #59388 (Fix Under Review): mds/MDSAuthCaps: "fsname", path, root_squash can't be in same ...
Rishabh Dave
01:13 PM Feature #59388: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid and/or...
We can have 5 elements in one MDS Cap -
1. fs name (string)
2. fs path (string)
3. root_squash (bool)
4. uid (i...
Rishabh Dave
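A hypothetical cap string combining the five elements listed above (the exact accepted grammar is what the PR under review implements, so treat this as a sketch; client name, fs name, path, and ids are illustrative):

```shell
# Hypothetical MDS cap using fs name, path, root_squash, uid and gids together
ceph auth caps client.foo \
    mds 'allow rw fsname=cephfs path=/volumes root_squash uid=1000 gids=1000,1001' \
    mon 'allow r' \
    osd 'allow rw tag cephfs data=cephfs'
```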
12:57 PM Bug #59552 (Triaged): mon: block osd pool mksnap for fs pools
Venky Shankar
10:59 AM Backport #59595 (In Progress): pacific: cephfs-top: fix help text for delay
Jos Collin
06:49 AM Backport #59595 (Resolved): pacific: cephfs-top: fix help text for delay
https://github.com/ceph/ceph/pull/50715 Backport Bot
10:22 AM Bug #58228: mgr/nfs: disallow non-existent paths when creating export
Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya, please pick up additional commits from https://github.com...
Dhairya Parmar
09:38 AM Bug #58228: mgr/nfs: disallow non-existent paths when creating export
Venky Shankar wrote:
> Dhairya, please pick up additional commits from https://github.com/ceph/ceph/pull/51005.
A...
Dhairya Parmar
06:01 AM Bug #58228: mgr/nfs: disallow non-existent paths when creating export
Dhairya, please pick up additional commits from https://github.com/ceph/ceph/pull/51005. Venky Shankar
09:53 AM Backport #59594 (In Progress): quincy: cephfs-top: fix help text for delay
Jos Collin
06:49 AM Backport #59594 (Resolved): quincy: cephfs-top: fix help text for delay
https://github.com/ceph/ceph/pull/50717 Backport Bot
06:49 AM Backport #59596 (Resolved): reef: cephfs-top: fix help text for delay
https://github.com/ceph/ceph/pull/50998 Backport Bot
06:48 AM Bug #59553 (Pending Backport): cephfs-top: fix help text for delay
Venky Shankar

04/28/2023

10:10 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
/a/yuriw-2023-04-26_20:20:05-rados-pacific-release-distro-default-smithi/7255292... Laura Flores
07:13 AM Bug #59582 (Pending Backport): snap-schedule: allow retention spec to specify max number of snaps...
Along with daily, weekly, monthly and yearly snaps, users also need a way to specify the max number of snaps they nee... Milind Changire
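As a sketch of the interface involved: the snap-schedule module already accepts time-based retention specs, and the tracker above asks for an additional count-based spec (shown here hypothetically as `n`; the path is illustrative):

```shell
# Existing time-based retention: keep 24 hourly and 4 weekly snapshots
ceph fs snap-schedule retention add /some/path 24h4w
# Hypothetical count-based spec proposed by this tracker:
# keep at most 10 snapshots regardless of their age
ceph fs snap-schedule retention add /some/path n 10
```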

04/27/2023

12:21 PM Bug #51271 (Resolved): mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
Konstantin Shalygin
12:20 PM Bug #51357 (Resolved): osd: sent kickoff request to MDS and then stuck for 15 minutes until MDS c...
Konstantin Shalygin
12:20 PM Bug #50389 (Resolved): mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or ...
Konstantin Shalygin
12:20 PM Backport #50849 (Rejected): octopus: mds: "cluster [ERR] Error recovering journal 0x203: (2) No...
Octopus is EOL Konstantin Shalygin
12:20 PM Backport #51482 (Rejected): octopus: osd: sent kickoff request to MDS and then stuck for 15 minut...
Octopus is EOL Konstantin Shalygin
12:20 PM Backport #51545 (Rejected): octopus: mgr/volumes: use a dedicated libcephfs handle for subvolume ...
Octopus is EOL Konstantin Shalygin
12:19 PM Bug #51857 (Resolved): client: make sure only to update dir dist from auth mds
Konstantin Shalygin
12:19 PM Backport #51976 (Rejected): octopus: client: make sure only to update dir dist from auth mds
Octopus is EOL Konstantin Shalygin
12:18 PM Backport #53304 (Rejected): octopus: Improve API documentation for struct ceph_client_callback_args
Octopus is EOL Konstantin Shalygin
09:52 AM Bug #59569 (In Progress): mds: allow entries to be removed from lost+found directory
Venky Shankar
09:38 AM Bug #59569 (Resolved): mds: allow entries to be removed from lost+found directory
Post file system recovery, files which have missing backtraces are recovered into the lost+found directory. Users could cho... Venky Shankar
09:07 AM Backport #50252 (Rejected): octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/cli...
Octopus is EOL Konstantin Shalygin
08:59 AM Backport #59411 (In Progress): reef: snap-schedule: handle non-existent path gracefully during sn...
Milind Changire
08:58 AM Bug #51600 (Resolved): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate ...
Konstantin Shalygin
08:57 AM Backport #51831 (Rejected): octopus: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and c...
Octopus is EOL Konstantin Shalygin
08:56 AM Backport #52442 (In Progress): pacific: client: fix dump mds twice
Konstantin Shalygin
08:55 AM Backport #52443 (Resolved): octopus: client: fix dump mds twice
Konstantin Shalygin
08:55 AM Backport #59017 (In Progress): pacific: snap-schedule: handle non-existent path gracefully during...
Milind Changire
08:53 AM Bug #51870 (Resolved): pybind/mgr/volumes: first subvolume permissions set perms on /volumes and ...
Konstantin Shalygin
08:53 AM Backport #52629 (Resolved): octopus: pybind/mgr/volumes: first subvolume permissions set perms on...
Konstantin Shalygin
08:53 AM Bug #56727 (Resolved): mgr/volumes: Subvolume creation failed on FIPs enabled system
Konstantin Shalygin
08:53 AM Backport #56979 (Resolved): quincy: mgr/volumes: Subvolume creation failed on FIPs enabled system
Konstantin Shalygin
08:52 AM Backport #56980 (Rejected): octopus: mgr/volumes: Subvolume creation failed on FIPs enabled system
Octopus is EOL Konstantin Shalygin
08:51 AM Backport #53995 (Rejected): octopus: qa: begin grepping kernel logs for kclient warnings/failures...
Octopus is EOL Konstantin Shalygin
06:49 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> Xiubo Li wrote:
> > This https://github.com/ceph/ceph/pull/47457 PR just merged 3 weeks ago ...
Xiubo Li
06:40 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> This https://github.com/ceph/ceph/pull/47457 PR just merged 3 weeks ago and changed the *MOSDOp* h...
Venky Shankar
05:17 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
This https://github.com/ceph/ceph/pull/47457 PR just merged 3 weeks ago and changed the *MOSDOp* head version to *9* ... Xiubo Li
05:13 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Sorry, the comments conflicted and it got assigned back when committing. Xiubo Li
05:13 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> Xiubo Li wrote:
> > https://pulpito.ceph.com/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing...
Xiubo Li
05:09 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Taking this from Xiubo (spoke verbally over call). Venky Shankar
04:43 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> https://pulpito.ceph.com/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing-20230412.053558-test...
Xiubo Li
04:14 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
This is a normal case: the client sent two requests with *e95* then the osd handled it and at the same time sent back... Xiubo Li
03:24 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> This one is using the *pacific* cluster and also has the same issue : https://pulpito.ceph.com/vsh...
Xiubo Li
02:47 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
This one is using the *pacific* cluster and also has the same issue : https://pulpito.ceph.com/vshankar-2023-04-20_04... Xiubo Li

04/26/2023

04:29 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
The log I copied is not actually showing the rmdir latency since I filtered incorrectly, sorry about that. This one i... Florian Pritz
04:24 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
Has there been any progress on this in the last year? We believe we are hitting the same issue on a production cluste... Florian Pritz
02:26 PM Backport #59560 (In Progress): pacific: qa: RuntimeError: more than one file system available
Rishabh Dave
02:20 PM Backport #59560 (Resolved): pacific: qa: RuntimeError: more than one file system available
https://github.com/ceph/ceph/pull/51232 Rishabh Dave
02:24 PM Backport #59559 (In Progress): reef: qa: RuntimeError: more than one file system available
Rishabh Dave
02:20 PM Backport #59559 (Resolved): reef: qa: RuntimeError: more than one file system available
https://github.com/ceph/ceph/pull/51231 Rishabh Dave
02:20 PM Backport #59558 (In Progress): quincy: qa: RuntimeError: more than one file system available
https://github.com/ceph/ceph/pull/52241 Rishabh Dave
02:20 PM Bug #59425 (Pending Backport): qa: RuntimeError: more than one file system available
Rishabh Dave
12:48 PM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> BTW - all failures are for ceph-fuse. kclient seems fine.
Yeah, because the upgrade client...
Xiubo Li
12:26 PM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
BTW - all failures are for ceph-fuse. kclient seems fine. Venky Shankar
08:41 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> Xiubo Li wrote:
>
> >
> > This is an upgrade client test case from nautilus, and the clie...
Xiubo Li
06:04 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
>
> This is an upgrade client test case from nautilus, and the client sent two osd requests with *...
Venky Shankar
04:44 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> >
> > [...]
> >
> > > The osd requests are just ...
Venky Shankar
06:03 AM Bug #59394: ACLs not fully supported.
Brian,
The command you are using is correct.
However, the config key is incorrect.
Set debug_mds to 20 for all mds...
Milind Changire
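For reference, raising the MDS debug level as suggested in the entry above can be done like this (a minimal sketch; the daemon name `mds.a` is illustrative):

```shell
# Set debug_mds to 20 for all MDS daemons via the central config store
ceph config set mds debug_mds 20
# Or on a single running daemon (daemon name is illustrative)
ceph tell mds.a config set debug_mds 20
# Revert once the logs have been collected
ceph config rm mds debug_mds
```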
05:56 AM Bug #59553 (Fix Under Review): cephfs-top: fix help text for delay
Jos Collin
05:55 AM Bug #59553 (Resolved): cephfs-top: fix help text for delay
... Jos Collin
05:19 AM Bug #59552 (Resolved): mon: block osd pool mksnap for fs pools
Disabling of mon-managed snaps for fs pools has been taken care of for the 'rados mksnap' path;
unfortunately, the 'ceph ...
Milind Changire
03:18 AM Bug #59551 (Resolved): mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
The 'ceph fs perf stats' command misses some metadata for cephfs clients, such as kernel_version. ... xinyu wang

04/25/2023

12:52 PM Bug #59530 (Triaged): mgr-nfs-upgrade: mds.foofs has 0/2
Venky Shankar
12:51 PM Bug #59534 (Triaged): qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Inp...
Venky Shankar
05:45 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> Xiubo, I saw updates in this tracker late and assigned the tracker to you in a rush. Do we kn...
Xiubo Li
05:44 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> Xiubo Li wrote:
>
> [...]
>
> > The osd requests are just dropped by *osd.3* because of...
Xiubo Li
05:12 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
[...]
> The osd requests are just dropped by *osd.3* because of the osdmap versions were mismat...
Venky Shankar
04:44 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo, I saw updates in this tracker late and assigned the tracker to you in a rush. Do we know what is causing the o... Venky Shankar
04:36 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Another one: https://pulpito.ceph.com/vshankar-2023-04-21_04:40:06-fs-wip-vshankar-testing-20230420.132447-testing-de... Xiubo Li
02:21 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Another: http://qa-proxy.ceph.com/teuthology/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing-20230412.053558-tes... Xiubo Li
02:12 AM Bug #59534 (Triaged): qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Inp...
https://pulpito.ceph.com/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing-20230412.053558-testing-default-smithi/... Xiubo Li
06:41 AM Bug #59527 (In Progress): qa: run scrub post disaster recovery procedure
Venky Shankar
06:11 AM Feature #58057: cephfs-top: enhance fstop tests to cover testing displayed data
Jos had a query regarding this - the idea is to dump (JSON) data which would otherwise be displayed via ncurses so as ... Venky Shankar
05:40 AM Feature #58057 (In Progress): cephfs-top: enhance fstop tests to cover testing displayed data
Jos Collin
04:33 AM Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
Dhairya Parmar wrote:
> I was going through the config file content and found `pseudo_path` to be a bit doubtful, it...
Venky Shankar
03:40 AM Bug #59514 (Triaged): client: read wild pointer when reconnect to mds
Venky Shankar

04/24/2023

10:35 PM Bug #59530 (Triaged): mgr-nfs-upgrade: mds.foofs has 0/2
/a/yuriw-2023-04-06_15:37:58-rados-wip-yuri3-testing-2023-04-04-0833-pacific-distro-default-smithi/7234302... Laura Flores
03:49 PM Bug #59394: ACLs not fully supported.
Milind Changire wrote:
> Brian,
> Could you share the MDS debug logs for this specific operation.
> It'll help us ...
Brian Woods
09:32 AM Bug #59394: ACLs not fully supported.
Brian,
Could you share the MDS debug logs for this specific operation.
It'll help us identify the failure point.
...
Milind Changire
12:26 PM Bug #59527 (Pending Backport): qa: run scrub post disaster recovery procedure
test_data_scan does test a variety of data/metadata recovery steps; however, many tests do not run scrub, which is rec... Venky Shankar
03:47 AM Bug #59514 (Pending Backport): client: read wild pointer when reconnect to mds
We use `shallow_copy` (24279ef8) for `MetaRequest::set_caller_perms` in `Client::make_request`, but indeed the lifetim... Mer Xuanyi
 
