Activity
From 04/24/2023 to 05/23/2023
05/23/2023
- 06:24 PM Backport #59222 (Resolved): reef: mds: catch damage to CDentry's first member before persisting
- 05:33 PM Fix #61378 (Fix Under Review): mds: turn off MDS balancer by default
- Operators should deliberately decide to turn on the balancer. Most folks with multiple active MDS are using pinning. ...
- 05:29 PM Documentation #61377 (New): doc: add suggested use-cases for random ephemeral pinning
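A rough sketch of the pinning interfaces referred to above; the directory paths and values are only illustrative, but ceph.dir.pin and ceph.dir.pin.random are the documented CephFS xattrs:
<pre>
# Pin a subtree to MDS rank 1 (explicit export pin)
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects

# Randomly ephemerally pin roughly 1% of the immediate subdirectories
setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/home

# Remove an explicit pin again
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects
</pre>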
- 05:18 PM Documentation #61375 (New): doc: cephfs-data-scan should discuss multiple data support
- In particular:
1) Indicate to users when and why they may have multiple data pools.
2) How to check if a file s...
- 04:07 PM Bug #50719: xattr returning from the dead (sic!)
- Hi Xiubo,
thanks for taking care of this issue and the debug info.
We have here only a production system. Can y...
- 02:53 PM Backport #59481 (Resolved): reef: cephfs-top, qa: test the current python version is supported
- 02:52 PM Backport #59481: reef: cephfs-top, qa: test the current python version is supported
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51142
merged
- 02:53 PM Backport #59411: reef: snap-schedule: handle non-existent path gracefully during snapshot creation
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51248
merged
- 12:15 PM Bug #61363: ceph mds keeps crashing with ceph_assert(auth_pins == 0)
- Greg,
please log your thoughts on this issue
- 12:12 PM Bug #61363 (New): ceph mds keeps crashing with ceph_assert(auth_pins == 0)
- ...
- 12:07 PM Bug #59163: mds: stuck in up:rejoin when it cannot "open" missing directory inode
- Sorry for the delay - back to working on this.
- 11:01 AM Bug #59683 (Fix Under Review): Error: Unable to find a match: userspace-rcu-devel libedit-devel d...
- 11:00 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- With *--enablerepo=powertools* we can successfully install these 3 packages:...
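For reference, a minimal sketch of that workaround on a CentOS 8 Stream node (package names are the ones from the error message):
<pre>
# Enable the PowerTools repository so the -devel packages can be resolved
sudo dnf --enablerepo=powertools install -y \
    userspace-rcu-devel libedit-devel device-mapper-devel
</pre>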
- 08:50 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
- 07:29 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- Checked from the *centos 8 stream* mirror, the *userspace-rcu-devel libedit-devel device-mapper-devel* packages were ...
- 06:56 AM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
- 09:02 AM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
- 08:58 AM Backport #61202 (In Progress): pacific: MDS imported_inodes metric is not updated.
- https://github.com/ceph/ceph/pull/51699
- 08:57 AM Bug #59350 (Fix Under Review): qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScr...
- 08:54 AM Backport #61204 (In Progress): reef: MDS imported_inodes metric is not updated.
- Venky Shankar wrote:
> Dhairya, please take this one.
https://github.com/ceph/ceph/pull/51698
- 08:53 AM Backport #61203 (In Progress): quincy: MDS imported_inodes metric is not updated.
- 08:53 AM Backport #61203: quincy: MDS imported_inodes metric is not updated.
- Venky Shankar wrote:
> Dhairya, please take this one.
https://github.com/ceph/ceph/pull/51697
- 04:52 AM Backport #61203: quincy: MDS imported_inodes metric is not updated.
- Dhairya, please take this one.
- 08:52 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
- 06:53 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- reef - https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-sm...
- 08:52 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
- reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
- 08:51 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-sm...
- 08:51 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
- 06:57 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
- 08:50 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
- 06:51 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
- 08:50 AM Backport #59620: quincy: client: fix dump mds twice
- Venky Shankar wrote:
> Dhairya, please take this one.
Patch exists in quincy https://github.com/ceph/ceph/commits...
- 04:53 AM Backport #59620: quincy: client: fix dump mds twice
- Dhairya, please take this one.
- 08:50 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- reef
https://pulpito.ceph.com/yuriw-2023-05-22_14:44:12-fs-wip-yuri3-testing-2023-05-21-0740-reef-distro-default-smi...
- 06:51 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
- 08:43 AM Backport #61158: reef: client: fix dump mds twice
- Venky Shankar wrote:
> Dhairya, please take this one.
This seems to already exist in reef https://github.com/ceph...
- 06:54 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
- 06:54 AM Fix #59667: qa: ignore cluster warning encountered in test_refuse_client_session_on_reconnect
- reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
- 06:52 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- reef
https://pulpito.ceph.com/yuriw-2023-05-10_18:53:39-fs-wip-yuri3-testing-2023-05-10-0851-reef-distro-default-smi...
- 06:19 AM Bug #61357 (New): cephfs-data-scan: parallelize cleanup step
- https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/#recovery-from-missing-metadata-objects
In the do...
- 06:12 AM Backport #59722 (In Progress): quincy: qa: run scrub post disaster recovery procedure
- 05:17 AM Backport #59726 (In Progress): quincy: mds: allow entries to be removed from lost+found directory
- 04:58 AM Backport #61233 (In Progress): quincy: mds: a few simple operations crash mds
- 04:49 AM Backport #59725 (In Progress): pacific: mds: allow entries to be removed from lost+found directory
- 01:12 AM Bug #56532 (Resolved): client stalls during vstart_runner test
- 01:12 AM Backport #58603 (Resolved): pacific: client stalls during vstart_runner test
- 01:11 AM Backport #58881 (Resolved): pacific: mds: Jenkins fails with skipping unrecognized type MClientRe...
- 01:08 AM Bug #57044 (Resolved): mds: add some debug logs for "crash during construction of internal request"
- 01:00 AM Bug #55409 (Resolved): client: incorrect operator precedence in Client.cc
- 12:51 AM Backport #61346 (In Progress): pacific: mds: fsstress.sh hangs with multimds (deadlock between un...
- 12:48 AM Backport #61348 (In Progress): quincy: mds: fsstress.sh hangs with multimds (deadlock between unl...
- 12:46 AM Backport #61347 (In Progress): reef: mds: fsstress.sh hangs with multimds (deadlock between unlin...
05/22/2023
- 07:45 PM Backport #61348 (Resolved): quincy: mds: fsstress.sh hangs with multimds (deadlock between unlink...
- https://github.com/ceph/ceph/pull/51685
- 07:45 PM Backport #61347 (Resolved): reef: mds: fsstress.sh hangs with multimds (deadlock between unlink a...
- https://github.com/ceph/ceph/pull/51684
- 07:45 PM Backport #61346 (Resolved): pacific: mds: fsstress.sh hangs with multimds (deadlock between unlin...
- https://github.com/ceph/ceph/pull/51686
- 07:44 PM Bug #58340 (Pending Backport): mds: fsstress.sh hangs with multimds (deadlock between unlink and ...
- 07:43 PM Bug #59657 (Resolved): qa: test with postgres failed (deadlock between link and migrate straydn(r...
- Backported via https://tracker.ceph.com/issues/58340
- 03:06 PM Bug #57244: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 p...
- PR https://github.com/ceph/ceph/pull/47752 has been reverted: https://github.com/ceph/ceph/pull/51661
- 03:05 PM Bug #57244 (Fix Under Review): [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ...
- 12:54 PM Bug #61243: qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
- Please disable the tests that need to be disabled.
- 10:40 AM Feature #61334 (Fix Under Review): cephfs-mirror: use snapdiff api for efficient tree traversal
- With https://github.com/ceph/ceph/pull/43546 merged, cephfs-mirror can make use of the snapdiff api (via readdir_snapdif...
- 08:51 AM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
- The tests worked as expected:...
- 08:44 AM Bug #61151 (Fix Under Review): libcephfs: incorrectly showing the size for snapshots when stating...
- 04:37 AM Bug #59551 (Fix Under Review): mgr/stats: exception ValueError :invalid literal for int() with ba...
- 02:30 AM Backport #58994 (Resolved): pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- 02:29 AM Backport #59268 (Resolved): pacific: libcephfs: clear the suid/sgid for fallocate
- 01:10 AM Bug #61148: dbench test results in call trace in dmesg
- Hit this again in Patrick's `fsstressh` relevant tests:
https://pulpito.ceph.com/pdonnell-2023-05-19_20:26:49-fs-w...
05/20/2023
- 11:01 AM Backport #59721 (In Progress): pacific: qa: run scrub post disaster recovery procedure
- 10:53 AM Backport #61235 (In Progress): pacific: mds: a few simple operations crash mds
- 10:48 AM Backport #59558: quincy: qa: RuntimeError: more than one file system available
- Patrick, https://github.com/ceph/ceph/pull/50922#issuecomment-1521943127 is waiting on this.
- 10:43 AM Backport #61158: reef: client: fix dump mds twice
- Dhairya, please take this one.
- 10:42 AM Backport #61204: reef: MDS imported_inodes metric is not updated.
- Dhairya, please take this one.
- 10:37 AM Backport #61234 (In Progress): reef: mds: a few simple operations crash mds
- 10:34 AM Backport #59724 (In Progress): reef: mds: allow entries to be removed from lost+found directory
- 10:29 AM Backport #59723 (In Progress): reef: qa: run scrub post disaster recovery procedure
- 09:41 AM Backport #59559 (Resolved): reef: qa: RuntimeError: more than one file system available
05/19/2023
- 09:27 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
- /a/yuriw-2023-05-17_19:39:18-rados-wip-yuri5-testing-2023-05-09-1324-pacific-distro-default-smithi/7276765
- 07:25 AM Bug #61279: qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
- This was addressed with https://tracker.ceph.com/issues/52606 but hit again.
- 07:24 AM Bug #61279 (New): qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
- Description: fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd...
- 06:39 AM Bug #61265 (New): qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
- Description: fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{c...
- 05:54 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- Milind, PTAL at the discussion in https://tracker.ceph.com/issues/59342 if it helps. I closed that as duplicate of this.
- 05:52 AM Bug #59342 (Duplicate): qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
- This is a duplicate of https://tracker.ceph.com/issues/57655, hence closing this.
05/18/2023
- 02:33 PM Backport #58994: pacific: client: fix CEPH_CAP_FILE_WR caps reference leakage in _write()
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50988
merged
- 02:33 PM Backport #59268: pacific: libcephfs: clear the suid/sgid for fallocate
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50988
merged
- 02:32 PM Backport #59386: pacific: [RHEL stock] pjd test failures(a bug that need to wait the unlink to fi...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50986
merged
- 02:31 PM Backport #59023: pacific: mds: warning `clients failing to advance oldest client/flush tid` seen ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50811
merged
- 02:31 PM Backport #59246: pacific: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_cluste...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50809
merged
- 02:31 PM Backport #59249: pacific: qa: intermittent nfs test failures at nfs cluster creation
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50809
merged
- 02:31 PM Backport #59252: pacific: mgr/nfs: disallow non-existent paths when creating export
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50809
merged
- 02:30 PM Backport #57721: pacific: qa: data-scan/journal-tool do not output debugging in upstream testing
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50773
merged
- 02:29 PM Backport #57825: pacific: qa: mirror tests should cleanup fs during unwind
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50765
merged
- 12:30 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Venky Shankar wrote:
> Rishabh Dave wrote:
> > http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-w...
- 12:16 PM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
- should we move to cloning from github.com instead of git.ceph.com for this specific test ?
git.ceph.com network conn...
- 05:02 AM Bug #58934: snaptest-git-ceph.sh failure with ceph-fuse
- Xiubo Li wrote:
> This should be the same issue with https://tracker.ceph.com/issues/59343.
Which means this can ...
- 11:43 AM Bug #61243: qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
- Cluster log shows fs degraded/offline and then cleared before test ends. Bunch of osd failure msgs after test ended.
...
- 11:07 AM Bug #61243 (Duplicate): qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
- Job: fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro...
- 10:45 AM Bug #58949 (Need More Info): qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- 10:45 AM Bug #58949: qa: test_cephfs.test_disk_quota_exceeeded_error is racy
- This is seen again in https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-...
- 10:27 AM Bug #59342: qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
- https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/7270060
- 07:29 AM Backport #61235 (Resolved): pacific: mds: a few simple operations crash mds
- https://github.com/ceph/ceph/pull/51609
- 07:29 AM Backport #61234 (Resolved): reef: mds: a few simple operations crash mds
- https://github.com/ceph/ceph/pull/51608
- 07:29 AM Backport #61233 (Resolved): quincy: mds: a few simple operations crash mds
- https://github.com/ceph/ceph/pull/51688
- 07:23 AM Bug #58411 (Pending Backport): mds: a few simple operations crash mds
05/17/2023
- 03:35 PM Backport #52854: pacific: qa: test_simple failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50756
merged
- 03:33 PM Backport #59415: reef: pybind/mgr/volumes: investigate moving calls which may block on libcephfs ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51042
merged
- 01:42 PM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
- > If the rstat is enabled for the .snap snapdir it should report the total size for all the snapshots.
I'm not sur...
- 06:48 AM Bug #61151: libcephfs: incorrectly showing the size for snapshots when stating them
- There is one option *mds_snap_rstat* in *MDS* daemons, after it's enabled it will enable nested rstat for snapshots, ...
- 01:11 PM Bug #61217 (Fix Under Review): ceph: corrupt snap message from mds1
- 12:57 PM Bug #61217: ceph: corrupt snap message from mds1
- Introduced by :...
- 12:55 PM Bug #61217 (Resolved): ceph: corrupt snap message from mds1
- From ceph user mail list: https://www.spinics.net/lists/ceph-users/msg77106.html
The kclient received a corrupted ...
- 01:00 PM Bug #61148: dbench test results in call trace in dmesg
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Venky Shankar wrote:
> > > > Xiubo Li wrote:
> ...
- 10:38 AM Bug #61148: dbench test results in call trace in dmesg
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo Li wrote:
> > > > Okay, this morning ...
- 07:39 AM Bug #61148: dbench test results in call trace in dmesg
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Okay, this morning I saw you have merged the PR h...
- 12:51 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Nothing has changed on storage01 since my first crash report. The crash happens on Leap 15.4 with Quincy. I just want...
- 12:42 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Eugen Block wrote:
> Both VMs use the same kernel version (they are not running 15.1 anymore, both have been upgrade...
- 12:29 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Both VMs use the same kernel version (they are not running 15.1 anymore, both have been upgraded to 15.4 on the way t...
- 12:12 PM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Eugen Block wrote:
> This is kind of strange, when I initially wanted to test cephfs-top I chose a different virtual...
- 09:23 AM Fix #58023 (Fix Under Review): mds: do not evict clients if OSDs are laggy
- 09:18 AM Bug #59394: ACLs not fully supported.
- Brian,
Have you read up "these docs":https://docs.ceph.com/en/latest/cephfs/client-config-ref/#confval-client_acl_ty... - 07:46 AM Bug #55446: mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
- pacific:...
- 07:44 AM Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
- pacific:
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
- 07:43 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- pacific:
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
- 07:43 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
- pacific
https://pulpito.ceph.com/yuriw-2023-05-15_21:56:33-fs-wip-yuri2-testing-2023-05-15-0810-pacific_2-distro-de...
- 06:27 AM Backport #61204 (Resolved): reef: MDS imported_inodes metric is not updated.
- https://github.com/ceph/ceph/pull/51698
- 06:27 AM Bug #61201: qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in pacific
- PRs in the test batch:
# pacific: qa: mirror tests should cleanup fs during unwind by batrick · Pull Request #5076...
- 06:24 AM Bug #61201 (Resolved): qa: test_rebuild_moved_file (tasks/data-scan) fails because mds crashes in...
- Description: fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/...
- 06:27 AM Backport #61203 (In Progress): quincy: MDS imported_inodes metric is not updated.
- 06:27 AM Backport #61202 (Resolved): pacific: MDS imported_inodes metric is not updated.
- 06:18 AM Bug #59107 (Pending Backport): MDS imported_inodes metric is not updated.
- 01:31 AM Bug #50719 (Need More Info): xattr returning from the dead (sic!)
- 01:11 AM Bug #50719: xattr returning from the dead (sic!)
- Tobias Hachmer wrote:
> Hi Jeff,
>
> we're also affected by this issue and can confirm the behaviour Thomas descr...
- 01:12 AM Backport #58826 (Resolved): pacific: mds: make num_fwd and num_retry to __u32
- 01:03 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Kotresh Hiremath Ravishankar wrote:
> This is seen in reef QA run - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:...
- 12:08 AM Backport #59406 (Resolved): reef: cephfs-top: navigate to home screen when no fs
05/16/2023
- 07:11 PM Backport #59245: reef: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50808
merged
- 07:10 PM Backport #59248: reef: qa: intermittent nfs test failures at nfs cluster creation
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50808
merged
- 07:10 PM Backport #59251: reef: mgr/nfs: disallow non-existent paths when creating export
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/50808
merged
- 07:09 PM Backport #59227: reef: cephfs-data-scan: does not scan_links for lost+found
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50782
merged
- 06:36 PM Backport #59430: reef: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.Test...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51047
merged
- 06:35 PM Backport #59406: reef: cephfs-top: navigate to home screen when no fs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51003
merged
- 06:33 PM Backport #59596: reef: cephfs-top: fix help text for delay
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50998
merged
- 06:33 PM Backport #59397: reef: cephfs-top: cephfs-top -d <seconds> not working as expected
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50998
merged
- 06:31 PM Backport #59020: reef: cephfs-data-scan: multiple data pools are not supported
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50524
merged
- 04:27 PM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm...
- 11:19 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- reef:
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s...
- 04:27 PM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
- reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm...
- 11:27 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
- reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s...
- 04:26 PM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm...
- 04:25 PM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- This is seen in reef QA run - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247...
- 04:23 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm...
- 11:15 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- reef: https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-sm...
- 04:21 PM Bug #61182: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finis...
- another instance in reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-re...
- 10:32 AM Bug #61182 (Pending Backport): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon a...
- Description: fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} clus...
- 04:21 PM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm...
- 04:19 PM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- reef- https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-smi...
- 04:06 PM Bug #59683: Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel ...
- reef - https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-sm...
- 04:07 PM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- reef
https://pulpito.ceph.com/yuriw-2023-05-15_15:22:39-fs-wip-yuri6-testing-2023-04-26-1247-reef-distro-default-smi... - 11:22 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s...
- 03:48 PM Bug #50719: xattr returning from the dead (sic!)
- Hi Jeff,
we're also affected by this issue and can confirm the behaviour Thomas described.
Versions here:
Ce...
- 02:38 PM Backport #61187 (In Progress): reef: qa: ignore cluster warning encountered in test_refuse_client...
- https://github.com/ceph/ceph/pull/51515
- 12:53 PM Backport #61187 (Resolved): reef: qa: ignore cluster warning encountered in test_refuse_client_se...
- 12:49 PM Fix #59667 (Pending Backport): qa: ignore cluster warning encountered in test_refuse_client_sessi...
- 11:16 AM Fix #59667: qa: ignore cluster warning encountered in test_refuse_client_session_on_reconnect
- This needs backport for reef:
Issue is hit in reef
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10...
- 11:30 AM Bug #61186 (Fix Under Review): mgr/nfs: hitting incomplete command returns same suggestion twice
- for example...
- 11:26 AM Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
- seen in reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-d...
- 11:24 AM Bug #38704: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN) in cluster log"
- Similar issue seen again in reef on different test (tasks/cephfs_misc_tests) - https://pulpito.ceph.com/yuriw-2023-05...
- 11:24 AM Documentation #61185 (Closed): mgr/nfs: ceph nfs cluster config reset CLUSTER_NAME -i PATH_TO_CON...
- While going through Red Hat doc https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/file_system...
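For context, a short sketch of the two commands involved, assuming the upstream mgr/nfs CLI; the cluster id and file path are placeholders:
<pre>
# Apply a user-defined NFS-Ganesha configuration from a file
ceph nfs cluster config set mycluster -i /tmp/ganesha.conf

# Remove the user-defined configuration again
ceph nfs cluster config reset mycluster
</pre>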
- 11:20 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- reef - https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-reef-distro-default-s...
- 11:14 AM Bug #47292: cephfs-shell: test_df_for_valid_file failure
- seen in yuri's reef run
https://pulpito.ceph.com/yuriw-2023-05-09_19:37:41-fs-wip-yuri10-testing-2023-05-08-0849-ree...
- 11:13 AM Bug #61184 (Closed): mgr/nfs: setting config using external file gets overidden
- Followed similar steps as mentioned in https://tracker.ceph.com/issues/59463...
- 10:50 AM Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
- Dhairya Parmar wrote:
> I was going through the config file content and found `pseudo_path` to be a bit doubtful, it...
- 10:48 AM Bug #61183 (Closed): mgr/nfs: Applying for first time, NFS-Ganesha Config Added Successfully but ...
- While setting NFS cluster config for the first time using -i option it does say "NFS-Ganesha Config Added Successfull...
- 07:30 AM Bug #59688: mds: idempotence issue in client request
- Mer Xuanyi wrote:
> Venky Shankar wrote:
> > Hi Mer Xuanyi,
> >
> > Thanks for the detailed report.
> >
> > S...
- 06:36 AM Bug #59688: mds: idempotence issue in client request
- Venky Shankar wrote:
> Hi Mer Xuanyi,
>
> Thanks for the detailed report.
>
> So, this requires flipping the c...
- 04:08 AM Bug #59688 (Triaged): mds: idempotence issue in client request
- Hi Mer Xuanyi,
Thanks for the detailed report.
So, this requires flipping the config tunables from their defaul...
- 07:28 AM Backport #59202 (In Progress): pacific: qa: add testing in fs:workload for different kinds of sub...
- 05:53 AM Backport #59202 (New): pacific: qa: add testing in fs:workload for different kinds of subvolumes
- redoing backport
- 06:10 AM Backport #59706 (In Progress): pacific: Mds crash and fails with assert on prepare_new_inode
- 05:53 AM Backport #59707 (In Progress): quincy: Mds crash and fails with assert on prepare_new_inode
- 05:48 AM Backport #59708 (In Progress): reef: Mds crash and fails with assert on prepare_new_inode
- 05:39 AM Backport #59007 (Resolved): pacific: mds stuck in 'up:replay' and crashed.
- 02:26 AM Bug #58411 (Fix Under Review): mds: a few simple operations crash mds
- 02:21 AM Bug #61148 (Fix Under Review): dbench test results in call trace in dmesg
- I created one PR to fix it.
Fixed the racy of https://github.com/ceph/ceph/pull/47752/commits/9fbde6da076f2d7c8bfd...
- 02:13 AM Bug #61148: dbench test results in call trace in dmesg
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Okay, this morning I saw you have merged the PR https://github.com/ceph/...
- 01:23 AM Bug #61148: dbench test results in call trace in dmesg
- Xiubo Li wrote:
> Okay, this morning I saw you have merged the PR https://github.com/ceph/ceph/pull/47752.
>
> Th...
- 01:12 AM Bug #61148: dbench test results in call trace in dmesg
- Xiubo Li wrote:
> Okay, this morning I saw you have merged the PR https://github.com/ceph/ceph/pull/47752.
I was ...
- 12:51 AM Bug #61148: dbench test results in call trace in dmesg
- Okay, this morning I saw you have merged the PR https://github.com/ceph/ceph/pull/47752.
This issue should be intr...
05/15/2023
- 04:52 PM Backport #61167 (Resolved): quincy: [WRN] : client.408214273 isn't responding to mclientcaps(revo...
- https://github.com/ceph/ceph/pull/52851
Backported https://tracker.ceph.com/issues/62197 together with this tracker.
- 04:52 PM Backport #61166 (Resolved): pacific: [WRN] : client.408214273 isn't responding to mclientcaps(rev...
- https://github.com/ceph/ceph/pull/52852
Backported https://tracker.ceph.com/issues/62199 together with this tracker.
- 04:52 PM Backport #61165 (Resolved): reef: [WRN] : client.408214273 isn't responding to mclientcaps(revoke...
- https://github.com/ceph/ceph/pull/52850
Backported https://tracker.ceph.com/issues/62198 together with this tracker.
- 04:51 PM Bug #57244 (Pending Backport): [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ...
- 04:02 PM Backport #59595 (Resolved): pacific: cephfs-top: fix help text for delay
- 03:35 PM Backport #59595: pacific: cephfs-top: fix help text for delay
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50715
merged
- 04:01 PM Backport #59398 (Resolved): pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
- 03:35 PM Backport #59398: pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50715
merged
- 04:00 PM Backport #58807 (Resolved): pacific: cephfs-top: add an option to dump the computed values to stdout
- 03:35 PM Backport #58807: pacific: cephfs-top: add an option to dump the computed values to stdout
- Jos Collin wrote:
> Backport PR: https://github.com/ceph/ceph/pull/50715.
merged
- 03:37 PM Backport #58881: pacific: mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50733
merged
- 03:37 PM Backport #58826: pacific: mds: make num_fwd and num_retry to __u32
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50733
merged
- 03:36 PM Backport #59007: pacific: mds stuck in 'up:replay' and crashed.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50725
merged
- 03:34 PM Backport #58866: pacific: cephfs-top: Sort menu doesn't show 'No filesystem available' screen whe...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50596
merged
- 03:34 PM Backport #59720 (In Progress): pacific: client: read wild pointer when reconnect to mds
- 03:34 PM Backport #59019: pacific: cephfs-data-scan: multiple data pools are not supported
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50523
merged
- 03:32 PM Backport #59718 (In Progress): quincy: client: read wild pointer when reconnect to mds
- 03:24 PM Backport #59719 (In Progress): reef: client: read wild pointer when reconnect to mds
- 03:18 PM Backport #61158 (Resolved): reef: client: fix dump mds twice
- 11:00 AM Bug #55446: mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
- https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi...
- 10:41 AM Bug #50719: xattr returning from the dead (sic!)
- Hi Jeff!
Just wanted to let you know that this issue is still relevant and severe with more recent versions of bot...
- 08:27 AM Bug #61151 (Fix Under Review): libcephfs: incorrectly showing the size for snapshots when stating...
- If the *rstat* is enabled for the *.snap* snapdir it should report the total size for all the snapshots. And at the s...
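A small sketch of how to check this, assuming the mds_snap_rstat option mentioned in the comments and a mount at /mnt/cephfs (path and snapshot name are placeholders):
<pre>
# Enable nested rstat accounting for snapshots (disabled by default)
ceph config set mds mds_snap_rstat true

# Then compare the sizes reported for the snapdir and an individual snapshot
stat /mnt/cephfs/dir/.snap
stat /mnt/cephfs/dir/.snap/mysnap
</pre>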
- 07:42 AM Bug #61148: dbench test results in call trace in dmesg
- More detail call trace:...
- 06:05 AM Bug #61148: dbench test results in call trace in dmesg
- Another instance, but this time another workunit: https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshank...
- 05:13 AM Bug #61148 (Rejected): dbench test results in call trace in dmesg
- https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshankar-testing-20230509.090020-1-testing-default-smith...
- 06:57 AM Fix #59667 (Resolved): qa: ignore cluster warning encountered in test_refuse_client_session_on_re...
- 04:50 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- - https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshankar-testing-20230509.090020-1-testing-default-smi...
- 02:57 AM Bug #61009 (Fix Under Review): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0ea734a1639fc4740189dcd0...- 02:57 AM Bug #61008 (New): crash: void interval_set<T, C>::insert(T, T, T*, T*) [with T = inodeno_t; C = s...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8e3f5c0126b1f4f50f08ff5b...- 02:57 AM Bug #61004 (New): crash: MDSRank::is_stale_message(boost::intrusive_ptr<Message const> const&) const
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=af82cc6e82ac3651d4918c4a...- 02:56 AM Bug #60986 (New): crash: void MDCache::rejoin_send_rejoins(): assert(auth >= 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71d482317bedfc17674af4f5...- 02:56 AM Bug #60980 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e2a2ae4253fafecb8b3ca014...- 02:55 AM Bug #60949 (New): crash: cephfs-journal-tool(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0d4dcd3a80040e9cf7f44ee7...- 02:55 AM Bug #60945 (New): crash: virtual void C_Client_Remount::finish(int): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=594102a81a05cdba00f19a82...- 02:49 AM Bug #60685 (New): crash: elist<T>::~elist() [with T = CInode*]: assert(_head.empty())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c4679655cf1f90278c3b054b...- 02:49 AM Bug #60679 (New): crash: C_GatherBuilderBase<ContextType, GatherType>::~C_GatherBuilderBase() [wi...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5b6b5acff6ff7a2c7ee0565f...- 02:48 AM Bug #60669 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2cf0c9e2e9b09fc177e74e6f...- 02:48 AM Bug #60668 (New): crash: void Migrator::export_try_cancel(CDir*, bool): assert(it != export_state...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ff4594c6abd556deefbc7327...- 02:48 AM Bug #60665 (New): crash: void MDCache::open_snaprealms(): assert(rejoin_done)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0da14d340ca86eb98a0a866c...- 02:48 AM Bug #60664 (New): crash: elist<T>::~elist() [with T = CDentry*]: assert(_head.empty())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=770f229d0a641695e3d43ffb...- 02:48 AM Bug #60660 (New): crash: std::_Rb_tree_rebalance_for_erase(std::_Rb_tree_node_base*, std::_Rb_tre...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6bb8c01a74a9dcf2f2500ef7...- 02:48 AM Bug #60640 (New): crash: void Journaler::_write_head(Context*): assert(last_written.write_pos >= ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9e5c5d1ecf602782154f9b19...- 02:48 AM Bug #60636 (New): crash: elist<T>::~elist() [with T = CDir*]: assert(_head.empty())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5fc520e822550d2d86ab92a1...- 02:47 AM Bug #60630 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e4d4bb371a344df64ed9d22c...- 02:47 AM Bug #60629 (New): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=978159d0d2675e074cadedff...- 02:47 AM Bug #60628 (New): crash: MDCache::purge_inodes(const interval_set<inodeno_t>&, LogSegment*)::<lam...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=de50fd8802ec5732812d300d...- 02:47 AM Bug #60627 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9f4072660844962480cb518...- 02:47 AM Bug #60625 (Resolved): crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> const&, ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c2092a196eb69c3c08a39646...- 02:47 AM Bug #60622 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a081e9516cda7c2c8dfa596c...- 02:47 AM Bug #60618 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b0d320aceb93370daf216e36...- 02:47 AM Bug #60607 (New): crash: virtual void MDSCacheObject::bad_put(int): assert(ref_map[by] > 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7ad58204e57e59e10794c28d...- 02:47 AM Bug #60606 (New): crash: ceph::buffer::list::iterator_impl<true>::copy(unsigned int, char*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e841c48356848e8657d3cb5e...- 02:47 AM Bug #60600 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=183a087b3f731571c6337b66...- 02:46 AM Bug #60598 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1ebda7a102930caf786ba16e...- 02:41 AM Bug #60372 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=42c202ed03daf7e9cbd008db...- 02:40 AM Bug #60343 (New): crash: void MDCache::handle_cache_rejoin_ack(ceph::cref_t<MMDSCacheRejoin>&): a...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=374c1bb49952f4c442a939bb...- 02:40 AM Bug #60319 (New): crash: std::_Rb_tree<dirfrag_t, dirfrag_t, std::_Identity<dirfrag_t>, std::less...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=20a0ec6572610c3fb12e9f12...- 02:39 AM Bug #60303 (New): crash: __pthread_mutex_lock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f88cc08b61611695f0a919ea...- 02:38 AM Bug #60241 (New): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1e491c6d2002d8aa5fa17dac...- 02:35 AM Bug #60126 (New): crash: bool MDCache::shutdown_pass(): assert(!migrator->is_importing())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a36f9e1d71483b4075ca8625...- 02:34 AM Bug #60109 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=abb374dbfa3649257e6590ea...- 02:34 AM Bug #60092 (New): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): asser...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=19e45b6ba87e5ddb0df16715...- 02:32 AM Bug #60014 (New): crash: void MDCache::remove_replay_cap_reconnect(inodeno_t, client_t): assert(c...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cc7ca4bafc15d4883c77e861...- 02:29 AM Bug #59865 (New): crash: CInode::get_dirfrags() const
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=546c0379bf1bc4705b166c60...- 02:28 AM Bug #59833 (Pending Backport): crash: void MDLog::trim(int): assert(segments.size() >= pre_segmen...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=21725944feb692959f706d0f...- 02:27 AM Bug #59819 (New): crash: virtual CDentry::~CDentry(): assert(batch_ops.empty())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5520f521a1ed7653b6505f60...- 02:25 AM Bug #59802 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=78028e7ef848aec87e854972...- 02:25 AM Bug #59799 (New): crash: ProtocolV2::handle_auth_request(ceph::buffer::list&)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=34dce467397e70daae79ec8e...- 02:24 AM Bug #59785 (Closed): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == L...
*New crash events were reported via Telemetry with newer versions (['17.2.1', '17.2.5']) than encountered in Tracke...- 02:23 AM Bug #59768 (Duplicate): crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): asse...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1a84e31a4bc3ae6dc69d901c...- 02:23 AM Bug #59767 (New): crash: MDSDaemon::dump_status(ceph::Formatter*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=15752cc3020e5047d0724344...- 02:23 AM Bug #59766 (New): crash: virtual void ESession::replay(MDSRank*): assert(g_conf()->mds_wipe_sessi...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=36e909287876c9be42c068ca...- 02:22 AM Bug #59761 (New): crash: void MDLog::_replay_thread(): assert(journaler->is_readable() || mds->is...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71bbbf63c8f73aa37e3aa82e...- 02:22 AM Bug #59751 (New): crash: MDSDaemon::respawn()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8bfff60bfa4ffd456fa49fb8...- 02:14 AM Bug #59749 (New): crash: virtual CInode::~CInode(): assert(batch_ops.empty())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=479afe0191403e023f1878c1...- 02:14 AM Bug #58934 (Duplicate): snaptest-git-ceph.sh failure with ceph-fuse
- This should be the same issue with https://tracker.ceph.com/issues/59343.
- 02:13 AM Bug #59741 (New): crash: void MDCache::remove_inode(CInode*): assert(o->get_num_ref() == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f60159ef05cf6bfbf51c6688...
05/12/2023
- 09:24 AM Bug #59736 (New): qa: add one test case for "kclient: ln: failed to create hard link 'file name':...
- We need to add one test case for https://tracker.ceph.com/issues/59515....
05/11/2023
- 08:18 PM Bug #56774: crash: Client::_get_vino(Inode*)
- Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field....
- 08:15 PM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
- Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field....
- 05:10 PM Bug #59716 (Fix Under Review): tools/cephfs/first-damage: unicode decode errors break iteration
- 12:55 PM Feature #59601: Provide way to abort kernel mount after lazy umount
- Niklas Hambuechen wrote:
> Venky Shankar wrote:
> > Have you tried force unmounting the mount (umount -f)?
>
> A...
- 08:53 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
- Seen in pacific run
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-...
- 08:48 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi...
- 08:05 AM Bug #48773: qa: scrub does not complete
- https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi...
- 07:46 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi...
- 05:58 AM Backport #59726 (Resolved): quincy: mds: allow entries to be removed from lost+found directory
- https://github.com/ceph/ceph/pull/51689
- 05:58 AM Backport #59725 (Resolved): pacific: mds: allow entries to be removed from lost+found directory
- https://github.com/ceph/ceph/pull/51687
- 05:58 AM Backport #59724 (Resolved): reef: mds: allow entries to be removed from lost+found directory
- https://github.com/ceph/ceph/pull/51607
- 05:51 AM Bug #59569 (Pending Backport): mds: allow entries to be removed from lost+found directory
- 05:50 AM Backport #59723 (Resolved): reef: qa: run scrub post disaster recovery procedure
- https://github.com/ceph/ceph/pull/51606
- 05:50 AM Backport #59722 (In Progress): quincy: qa: run scrub post disaster recovery procedure
- https://github.com/ceph/ceph/pull/51690
- 05:50 AM Backport #59721 (Resolved): pacific: qa: run scrub post disaster recovery procedure
- https://github.com/ceph/ceph/pull/51610
- 05:50 AM Bug #59527 (Pending Backport): qa: run scrub post disaster recovery procedure
- 03:59 AM Backport #59720 (Resolved): pacific: client: read wild pointer when reconnect to mds
- https://github.com/ceph/ceph/pull/51487
- 03:59 AM Backport #59719 (Resolved): reef: client: read wild pointer when reconnect to mds
- https://github.com/ceph/ceph/pull/51484
- 03:59 AM Backport #59718 (Resolved): quincy: client: read wild pointer when reconnect to mds
- https://github.com/ceph/ceph/pull/51486
- 03:56 AM Bug #59514 (Pending Backport): client: read wild pointer when reconnect to mds
05/10/2023
- 04:54 PM Bug #59716 (Pending Backport): tools/cephfs/first-damage: unicode decode errors break iteration
- ...
- 11:35 AM Feature #59714 (Fix Under Review): mgr/volumes: Support to reject CephFS clones if cloner threads...
- 1. CephFS clone creation have a limit of 4 parallel clones at a time and rest
of the clone create requests are queue... - 08:43 AM Backport #59708 (Resolved): reef: Mds crash and fails with assert on prepare_new_inode
- https://github.com/ceph/ceph/pull/51506
- 08:43 AM Backport #59707 (Resolved): quincy: Mds crash and fails with assert on prepare_new_inode
- https://github.com/ceph/ceph/pull/51507
- 08:43 AM Backport #59706 (Resolved): pacific: Mds crash and fails with assert on prepare_new_inode
- https://github.com/ceph/ceph/pull/51508
- 08:36 AM Bug #52280 (Pending Backport): Mds crash and fails with assert on prepare_new_inode
- 04:49 AM Bug #59705 (Fix Under Review): client: only wait for write MDS OPs when unmounting
- 04:46 AM Bug #59705 (Resolved): client: only wait for write MDS OPs when unmounting
- We do not care about the read MDS OPs and it's safe by just dropping
them when unmounting.
05/09/2023
- 07:50 PM Bug #59394: ACLs not fully supported.
- Milind Changire wrote:
> Brian,
> The command you are using is correct.
> However, the config key is incorrect.
>...
- 07:44 PM Bug #59394: ACLs not fully supported.
- Milind Changire wrote:
> Brian,
> The command you are using is correct.
> However, the config key is incorrect.
>...
- 01:30 PM Bug #59691 (Fix Under Review): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- 01:22 PM Bug #59691 (Resolved): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
- ...
- 01:06 PM Feature #59601: Provide way to abort kernel mount after lazy umount
- Venky Shankar wrote:
> Have you tried force unmounting the mount (umount -f)?
After *umount --lazy*, the mount po...
- 12:48 PM Bug #59688 (Triaged): mds: idempotence issue in client request
- Found the mds may process a same client request twice after session with client rebuild because the network issue.
...
- 06:41 AM Bug #59683 (Resolved): Error: Unable to find a match: userspace-rcu-devel libedit-devel device-ma...
- - https://pulpito.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-testing-default-smith...
- 05:52 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
- Another instance: https://pulpito.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-testi...
- 05:23 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- Venky Shankar wrote:
> Xiubo Li wrote:
> > This is a failure with *libcephfs* and have the client side logs:
> >
...
- 04:19 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- Xiubo Li wrote:
> This is a failure with *libcephfs* and have the client side logs:
>
> vshankar-2023-04-06_04:14...
- 01:54 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- This also needs to be fixed in kclient.
- 03:52 AM Bug #59682: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the unit file o...
- Thanks for letting us know, Zac.
- 02:28 AM Bug #59682 (Resolved): CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the ...
- The Debian package "cephfs-mirror" in the Ceph repository doesn't install the unit file or the man page.
This was ...
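As a stop-gap sketch until the packaging is fixed (the cephx user name is a placeholder), the daemon can be started by hand rather than through the missing unit file:
<pre>
# Run the mirror daemon in the foreground with a dedicated cephx user
cephfs-mirror --id mirror.site-a --cluster ceph -f
</pre>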
05/08/2023
- 06:33 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
- Lots more in https://pulpito.ceph.com/yuriw-2023-05-06_14:41:44-rados-pacific-release-distro-default-smithi/
- 02:59 PM Feature #59601: Provide way to abort kernel mount after lazy umount
- Niklas Hambuechen wrote:
> In some situations, e.g. when changing monitor IPs during an emergency network reconfigur...
- 09:16 AM Bug #59163: mds: stuck in up:rejoin when it cannot "open" missing directory inode
- The MDS got marked as down:damaged as it could not decode the CDir fnode:...
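For anyone following along, a hedged sketch of inspecting the recorded damage; the filesystem name and rank are placeholders, and "ceph mds repaired" should only be run once the underlying objects really have been repaired:
<pre>
# List the damage entries recorded by rank 0 of filesystem 'cephfs'
ceph tell mds.cephfs:0 damage ls

# Clear the rank's damaged state after repair
ceph mds repaired cephfs:0
</pre>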
- 09:09 AM Fix #59667 (In Progress): qa: ignore cluster warning encountered in test_refuse_client_session_on...
- 08:48 AM Fix #59667 (Pending Backport): qa: ignore cluster warning encountered in test_refuse_client_sessi...
- Seen in http://pulpito.front.sepia.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-test...
- 06:34 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- This is a failure with *libcephfs* and have the client side logs:
vshankar-2023-04-06_04:14:11-fs-wip-vshankar-tes...
- 06:13 AM Bug #59343 (Fix Under Review): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- 06:13 AM Bug #14557: snaps: failed snaptest-multiple-capsnaps.sh
- 03:16 AM Bug #14557: snaps: failed snaptest-multiple-capsnaps.sh
- Xiubo Li wrote:
> vshankar-2023-04-06_04:14:11-fs-wip-vshankar-testing-20230330.105356-testing-default-smithi/723370...
- 03:03 AM Bug #14557: snaps: failed snaptest-multiple-capsnaps.sh
- vshankar-2023-04-06_04:14:11-fs-wip-vshankar-testing-20230330.105356-testing-default-smithi/7233705/teuthology.log
...
- 03:06 AM Bug #59626 (Resolved): pacific: FSMissing: File system xxxx does not exist in the map
05/05/2023
- 06:24 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
- https://github.com/ceph/ceph/pull/51344 merged
- 03:38 PM Backport #59560: pacific: qa: RuntimeError: more than one file system available
- Rishabh Dave wrote:
> https://github.com/ceph/ceph/pull/51232
merged
- 11:44 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- This is kind of strange, when I initially wanted to test cephfs-top I chose a different virtual ceph cluster which al...
- 07:39 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Jos Collin wrote:
> Eugen Block wrote:
> > Not sure if required but I wanted to add some more information, while running cephfs-top the mg...
- 06:04 AM Bug #59551 (Need More Info): mgr/stats: exception ValueError :invalid literal for int() with base...
- xinyu wang wrote:
> 'ceph fs perf stats' command miss some metadata for cephfs client, such as kernel_version.
>
...
- 05:25 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Eugen Block wrote:
> Not sure if required but I wanted to add some more information, while running cephfs-top the mg...
- 09:36 AM Bug #54017: Problem with ceph fs snapshot mirror and read-only folders
- This seems the same issue: https://pulpito.ceph.com/vshankar-2023-05-03_16:19:11-fs-wip-vshankar-testing-20230503.142...
- 09:17 AM Bug #59657 (Fix Under Review): qa: test with postgres failed (deadlock between link and migrate s...
- 08:40 AM Bug #59657: qa: test with postgres failed (deadlock between link and migrate straydn(rename))
- From the logs evicting the unresponding client or closing the sessions could unblock the deadlock issue:...
- 07:48 AM Bug #59657 (Resolved): qa: test with postgres failed (deadlock between link and migrate straydn(r...
- https://pulpito.ceph.com/vshankar-2023-05-03_16:19:11-fs-wip-vshankar-testing-20230503.142424-testing-default-smithi/...
- 08:49 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
- Please evict the corresponding client to unblock this deadlock. Mostly this should work, if not then please restart t...
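A short sketch of that eviction step; the filesystem name, rank and client id are placeholders:
<pre>
# Find the stuck client session on the active MDS
ceph tell mds.cephfs:0 client ls

# Evict it by id to break the deadlock
ceph tell mds.cephfs:0 client evict id=12345
</pre>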
- 02:49 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- Another failure, the same as this one: https://pulpito.ceph.com/vshankar-2023-05-03_16:19:11-fs-wip-vshankar-testing-202...
05/04/2023
- 06:06 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
- /a/yuriw-2023-04-25_18:56:08-rados-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/7252409
- 11:57 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- Not sure if required, but I wanted to add some more information: while running cephfs-top the mgr module crashes all t...
- 10:57 AM Bug #59626 (Fix Under Review): pacific: FSMissing: File system xxxx does not exist in the map
- 01:01 AM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
- Rishabh Dave wrote:
> @setupfs()@ is not being called quincy onwards which is why we don't see this bug after paci... - 01:00 AM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
- ... and also doesn't explain why we are seeing this failure now. The last pacific run https://tracker.ceph.com/projec...
- 10:50 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- This issue occurred in a Pacific run - /ceph/teuthology-archive/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-...
05/03/2023
- 06:30 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
- Apparently, this exception is expected because @backup_fs@ is being deleted a little before the traceback is printed an...
- 01:34 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
- This commit removed the createfs boolean:...
- 01:31 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
- The pacific code is a bit different from the other branches; the interesting bit is:
pacific:... - 01:27 PM Bug #59626 (Resolved): pacific: FSMissing: File system xxxx does not exist in the map
- Two separate jobs fail due to this issue:
* TestMirroring.test_cephfs_mirror_cancel_sync: https://pulpito.ceph.com/yu... - 10:09 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- Seen in https://pulpito.ceph.com/yuriw-2023-04-26_21:59:47-fs-wip-yuri2-testing-2023-04-26-1247-pacific-distro-defaul...
- 04:55 AM Backport #59596 (In Progress): reef: cephfs-top: fix help text for delay
- 02:25 AM Backport #59620 (Resolved): quincy: client: fix dump mds twice
05/02/2023
- 02:36 PM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- Rishabh, can you post a backport for this? We are hitting this in Yuri's run: https://pulpito.ceph.com/yuriw-2023-04-...
- 02:36 PM Feature #59601 (New): Provide way to abort kernel mount after lazy umount
- In some situations, e.g. when changing monitor IPs during an emergency network reconfiguration, CephFS kernel mounts ...
- 01:30 PM Feature #59388: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid and/or...
- The PR makes all 31 types of MDS caps parse successfully.
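For reference, a small Python sketch of where the count of 31 comes from, assuming it is simply the number of non-empty combinations of the five cap elements listed in the 01:13 PM comment below:
<pre><code class="python">
from itertools import combinations

# The five optional elements an MDS cap can carry (per the list in the comment below).
elements = ["fsname", "path", "root_squash", "uid", "gids"]

# Every non-empty combination of those elements: 2**5 - 1 = 31.
count = sum(len(list(combinations(elements, r))) for r in range(1, len(elements) + 1))
print(count)  # 31
</code></pre>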
- 01:30 PM Feature #59388 (Fix Under Review): mds/MDSAuthCaps: "fsname", path, root_squash can't be in same ...
- 01:13 PM Feature #59388: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid and/or...
- We can have 5 elements in one MDS Cap -
1. fs name (string)
2. fs path (string)
3. root_squash (bool)
4. uid (i... - 12:57 PM Bug #59552 (Triaged): mon: block osd pool mksnap for fs pools
- 10:59 AM Backport #59595 (In Progress): pacific: cephfs-top: fix help text for delay
- 06:49 AM Backport #59595 (Resolved): pacific: cephfs-top: fix help text for delay
- https://github.com/ceph/ceph/pull/50715
- 10:22 AM Bug #58228: mgr/nfs: disallow non-existent paths when creating export
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya, please pick up additional commits from https://github.com... - 09:38 AM Bug #58228: mgr/nfs: disallow non-existent paths when creating export
- Venky Shankar wrote:
> Dhairya, please pick up additional commits from https://github.com/ceph/ceph/pull/51005.
A... - 06:01 AM Bug #58228: mgr/nfs: disallow non-existent paths when creating export
- Dhairya, please pick up additional commits from https://github.com/ceph/ceph/pull/51005.
- 09:53 AM Backport #59594 (In Progress): quincy: cephfs-top: fix help text for delay
- 06:49 AM Backport #59594 (Resolved): quincy: cephfs-top: fix help text for delay
- https://github.com/ceph/ceph/pull/50717
- 06:49 AM Backport #59596 (Resolved): reef: cephfs-top: fix help text for delay
- https://github.com/ceph/ceph/pull/50998
- 06:48 AM Bug #59553 (Pending Backport): cephfs-top: fix help text for delay
04/28/2023
- 10:10 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
- /a/yuriw-2023-04-26_20:20:05-rados-pacific-release-distro-default-smithi/7255292...
- 07:13 AM Bug #59582 (Pending Backport): snap-schedule: allow retention spec to specify max number of snaps...
- Along with daily, weekly, monthly, and yearly snaps, users also need a way to specify the max number of snaps they nee...
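A rough Python sketch of the requested behaviour, using a hypothetical pruning helper rather than the actual snap_schedule mgr module: keep only the newest N snapshots and prune the rest.
<pre><code class="python">
# Hypothetical helper, not the snap_schedule mgr module: given snapshot names
# sorted oldest -> newest, return the ones to prune so at most max_snaps remain.
def snapshots_to_prune(snapshots, max_snaps):
    if max_snaps <= 0:
        return list(snapshots)
    excess = len(snapshots) - max_snaps
    return list(snapshots[:excess]) if excess > 0 else []

# Keep at most 3 of 5 snapshots: the 2 oldest are pruned.
print(snapshots_to_prune(["s1", "s2", "s3", "s4", "s5"], 3))  # ['s1', 's2']
</code></pre>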
04/27/2023
- 12:21 PM Bug #51271 (Resolved): mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
- 12:20 PM Bug #51357 (Resolved): osd: sent kickoff request to MDS and then stuck for 15 minutes until MDS c...
- 12:20 PM Bug #50389 (Resolved): mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or ...
- 12:20 PM Backport #50849 (Rejected): octopus: mds: "cluster [ERR] Error recovering journal 0x203: (2) No...
- Octopus is EOL
- 12:20 PM Backport #51482 (Rejected): octopus: osd: sent kickoff request to MDS and then stuck for 15 minut...
- Octopus is EOL
- 12:20 PM Backport #51545 (Rejected): octopus: mgr/volumes: use a dedicated libcephfs handle for subvolume ...
- Octopus is EOL
- 12:19 PM Bug #51857 (Resolved): client: make sure only to update dir dist from auth mds
- 12:19 PM Backport #51976 (Rejected): octopus: client: make sure only to update dir dist from auth mds
- Octopus is EOL
- 12:18 PM Backport #53304 (Rejected): octopus: Improve API documentation for struct ceph_client_callback_args
- Octopus is EOL
- 09:52 AM Bug #59569 (In Progress): mds: allow entries to be removed from lost+found directory
- 09:38 AM Bug #59569 (Resolved): mds: allow entries to be removed from lost+found directory
- Post file system recovery, files with missing backtraces are recovered into the lost+found directory. Users could cho...
- 09:07 AM Backport #50252 (Rejected): octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/cli...
- Octopus is EOL
- 08:59 AM Backport #59411 (In Progress): reef: snap-schedule: handle non-existent path gracefully during sn...
- 08:58 AM Bug #51600 (Resolved): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate ...
- 08:57 AM Backport #51831 (Rejected): octopus: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and c...
- Octopus is EOL
- 08:56 AM Backport #52442 (In Progress): pacific: client: fix dump mds twice
- 08:55 AM Backport #52443 (Resolved): octopus: client: fix dump mds twice
- 08:55 AM Backport #59017 (In Progress): pacific: snap-schedule: handle non-existent path gracefully during...
- 08:53 AM Bug #51870 (Resolved): pybind/mgr/volumes: first subvolume permissions set perms on /volumes and ...
- 08:53 AM Backport #52629 (Resolved): octopus: pybind/mgr/volumes: first subvolume permissions set perms on...
- 08:53 AM Bug #56727 (Resolved): mgr/volumes: Subvolume creation failed on FIPs enabled system
- 08:53 AM Backport #56979 (Resolved): quincy: mgr/volumes: Subvolume creation failed on FIPs enabled system
- 08:52 AM Backport #56980 (Rejected): octopus: mgr/volumes: Subvolume creation failed on FIPs enabled system
- Octopus is EOL
- 08:51 AM Backport #53995 (Rejected): octopus: qa: begin grepping kernel logs for kclient warnings/failures...
- Octopus is EOL
- 06:49 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Venky Shankar wrote:
> Xiubo Li wrote:
> > This https://github.com/ceph/ceph/pull/47457 PR just merged 3 weeks ago ... - 06:40 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Xiubo Li wrote:
> This https://github.com/ceph/ceph/pull/47457 PR just merged 3 weeks ago and changed the *MOSDOp* h... - 05:17 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- This https://github.com/ceph/ceph/pull/47457 PR just merged 3 weeks ago and changed the *MOSDOp* head version to *9* ...
- 05:13 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Sorry, the comments conflicted and this got assigned back when I committed mine.
- 05:13 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Xiubo Li wrote:
> Xiubo Li wrote:
> > https://pulpito.ceph.com/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing... - 05:09 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Taking this from Xiubo (discussed verbally over a call).
- 04:43 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Xiubo Li wrote:
> https://pulpito.ceph.com/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing-20230412.053558-test... - 04:14 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- This is a normal case: the client sent two requests with *e95*, then the osd handled them and at the same time sent back...
- 03:24 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Xiubo Li wrote:
> This one is using the *pacific* cluster and also has the same issue : https://pulpito.ceph.com/vsh... - 02:47 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- This one is using the *pacific* cluster and also has the same issue: https://pulpito.ceph.com/vshankar-2023-04-20_04...
04/26/2023
- 04:29 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- The log I copied does not actually show the rmdir latency since I filtered incorrectly, sorry about that. This one i...
- 04:24 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Has there been any progress on this in the last year? We believe we are hitting the same issue on a production cluste...
- 02:26 PM Backport #59560 (In Progress): pacific: qa: RuntimeError: more than one file system available
- 02:20 PM Backport #59560 (Resolved): pacific: qa: RuntimeError: more than one file system available
- https://github.com/ceph/ceph/pull/51232
- 02:24 PM Backport #59559 (In Progress): reef: qa: RuntimeError: more than one file system available
- 02:20 PM Backport #59559 (Resolved): reef: qa: RuntimeError: more than one file system available
- https://github.com/ceph/ceph/pull/51231
- 02:20 PM Backport #59558 (In Progress): quincy: qa: RuntimeError: more than one file system available
- https://github.com/ceph/ceph/pull/52241
- 02:20 PM Bug #59425 (Pending Backport): qa: RuntimeError: more than one file system available
- 12:48 PM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Venky Shankar wrote:
> BTW - all failures are for ceph-fuse. kclient seems fine.
Yeah, because the upgrade client... - 12:26 PM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- BTW - all failures are for ceph-fuse. kclient seems fine.
- 08:41 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Venky Shankar wrote:
> Xiubo Li wrote:
>
> >
> > This is a upgrade client test case from nautilus, and the clie... - 06:04 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Xiubo Li wrote:
>
> This is a upgrade client test case from nautilus, and the client sent two osd request with *... - 04:44 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> >
> > [...]
> >
> > > The osd requests are just ... - 06:03 AM Bug #59394: ACLs not fully supported.
- Brian,
The command you are using is correct.
However, the config key is incorrect.
Set debug_mds to 20 for all mds... - 05:56 AM Bug #59553 (Fix Under Review): cephfs-top: fix help text for delay
- 05:55 AM Bug #59553 (Resolved): cephfs-top: fix help text for delay
- ...
- 05:19 AM Bug #59552 (Resolved): mon: block osd pool mksnap for fs pools
- Disabling of mon-managed snaps for fs pools has been taken care of for the 'rados mksnap' path;
unfortunately, the 'ceph ... - 03:18 AM Bug #59551 (Resolved): mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
- The 'ceph fs perf stats' command misses some metadata for the cephfs client, such as kernel_version. ...
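For illustration only (not the actual mgr/stats code): @int()@ with base 16 rejects a bare @0x@ string, which matches the error in the title; a defensive parse could fall back instead of crashing the module.
<pre><code class="python">
# Illustration of the failure mode only, not the actual mgr/stats code.
def parse_hex_id(value, default=0):
    """Parse a hex id such as '0x1234'; a bare '0x' raises ValueError in int(),
    so fall back to a default instead of letting the module crash."""
    try:
        return int(value, 16)
    except ValueError:
        return default

print(int("0x1f", 16))     # 31
print(parse_hex_id("0x"))  # 0 -- int("0x", 16) raises ValueError: invalid literal for int() with base 16: '0x'
</code></pre>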
04/25/2023
- 12:52 PM Bug #59530 (Triaged): mgr-nfs-upgrade: mds.foofs has 0/2
- 12:51 PM Bug #59534 (Triaged): qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Inp...
- 05:45 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Venky Shankar wrote:
> Xiubo, I saw updates in this tracker late and assigned the tracker to you in a rush. Do we kn... - 05:44 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Venky Shankar wrote:
> Xiubo Li wrote:
>
> [...]
>
> > The osd requests are just dropped by *osd.3* because of... - 05:12 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Xiubo Li wrote:
[...]
> The osd requests are just dropped by *osd.3* because of the osdmap versions were mismat... - 04:44 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Xiubo, I saw updates in this tracker late and assigned the tracker to you in a rush. Do we know what is causing the o...
- 04:36 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Another one: https://pulpito.ceph.com/vshankar-2023-04-21_04:40:06-fs-wip-vshankar-testing-20230420.132447-testing-de...
- 02:21 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
- Another: http://qa-proxy.ceph.com/teuthology/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing-20230412.053558-tes...
- 02:12 AM Bug #59534 (Triaged): qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Inp...
- https://pulpito.ceph.com/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing-20230412.053558-testing-default-smithi/...
- 06:41 AM Bug #59527 (In Progress): qa: run scrub post disaster recovery procedure
- 06:11 AM Feature #58057: cephfs-top: enhance fstop tests to cover testing displayed data
- Jos had a query regarding this - the idea is to dump (JSON) data which would otherwise be displayed via ncurses so as ...
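A rough sketch of that idea, using a hypothetical dump helper rather than the actual cephfs-top code: write the metrics that would be rendered with ncurses to a JSON file so a test can assert on them.
<pre><code class="python">
import json

# Hypothetical helper, not the actual cephfs-top code: dump the per-client
# metrics that the ncurses UI would display so a qa test can assert on them.
def dump_displayed_metrics(metrics, path):
    with open(path, "w") as f:
        json.dump(metrics, f, indent=2, sort_keys=True)

dump_displayed_metrics({"client.4123": {"chit": 98.5, "rlatavg": 1.2}},
                       "/tmp/cephfs-top-dump.json")
</code></pre>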
- 05:40 AM Feature #58057 (In Progress): cephfs-top: enhance fstop tests to cover testing displayed data
- 04:33 AM Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
- Dhairya Parmar wrote:
> I was going through the config file content and found `pseudo_path` to be a bit doubtful, it... - 03:40 AM Bug #59514 (Triaged): client: read wild pointer when reconnect to mds
04/24/2023
- 10:35 PM Bug #59530 (Triaged): mgr-nfs-upgrade: mds.foofs has 0/2
- /a/yuriw-2023-04-06_15:37:58-rados-wip-yuri3-testing-2023-04-04-0833-pacific-distro-default-smithi/7234302...
- 03:49 PM Bug #59394: ACLs not fully supported.
- Milind Changire wrote:
> Brian,
> Could you share the MDS debug logs for this specific operation.
> It'll help us ... - 09:32 AM Bug #59394: ACLs not fully supported.
- Brian,
Could you share the MDS debug logs for this specific operation.
It'll help us identify the failure point.
... - 12:26 PM Bug #59527 (Pending Backport): qa: run scrub post disaster recovery procedure
- test_data_scan does test a variety of data/metadata recovery steps; however, many tests do not run scrub, which is rec...
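A rough qa-style sketch of running scrub after recovery; the @fs.rank_tell()@ calls and the status field checked below are assumptions, not the actual tasks.cephfs API.
<pre><code class="python">
import time

# Hypothetical qa-style helper; the fs.rank_tell() calls and the status field
# checked here are assumptions, not the actual tasks.cephfs API.
def run_scrub_and_wait(fs, timeout=300):
    fs.rank_tell(["scrub", "start", "/", "recursive,repair"])
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = fs.rank_tell(["scrub", "status"])
        if status.get("status", "").startswith("no active"):
            return
        time.sleep(5)
    raise RuntimeError("scrub did not finish within %d seconds" % timeout)
</code></pre>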
- 03:47 AM Bug #59514 (Pending Backport): client: read wild pointer when reconnect to mds
- We use `shallow_copy` (24279ef8) for `MetaRequest::set_caller_perms` in `Client::make_request`, but indeed the lifetim...