Activity
From 07/21/2022 to 08/19/2022
08/19/2022
- 11:00 PM Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS ...
- I wonder if the real difference here is not the cloned subvolume, but whether the mount point had files in it prior t...
- 10:36 PM Bug #57210 (Resolved): NFS client unable to see newly created files when listing directory conten...
- Tried the following in a vstart cluster on ceph-main that launches ganesha v3.5 containers...
- 05:59 PM Bug #57206 (Rejected): ceph_test_libcephfs_reclaim crashes during test
- /a/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/6978421
Core is at...
- 05:51 PM Bug #57205 (Pending Backport): Test failure: test_subvolume_group_ls_filter_internal_directories ...
- /a/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/6978395...
- 05:50 PM Bug #57204 (Duplicate): MDLog.h: 99: FAILED ceph_assert(!segments.empty())
- /a/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/6978343
MDS crashe...
- 04:41 PM Backport #55756: quincy: mds: flush mdlog if locked and still has wanted caps not satisfied
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46494
merged
- 02:58 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/46899
merged
- 02:53 PM Backport #55736: quincy: client: do not release the global snaprealm until unmounting
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46495
merged
- 01:28 PM Backport #57201 (Resolved): pacific: snap_schedule: replace .snap with the client configured snap...
- https://github.com/ceph/ceph/pull/47726
- 01:28 PM Backport #57200 (Resolved): quincy: snap_schedule: replace .snap with the client configured snap ...
- https://github.com/ceph/ceph/pull/47734
- 01:14 PM Bug #54283 (Resolved): qa/cephfs: is_mounted() depends on a mutable variable
- 01:14 PM Bug #55234 (Pending Backport): snap_schedule: replace .snap with the client configured snap dir name
- 01:10 PM Backport #57194 (Resolved): pacific: ceph pacific fails to perform fs/mirror test
- https://github.com/ceph/ceph/pull/48269
- 01:09 PM Backport #57193 (Resolved): quincy: ceph pacific fails to perform fs/mirror test
- https://github.com/ceph/ceph/pull/48268
- 01:03 PM Bug #55134 (Pending Backport): ceph pacific fails to perform fs/mirror test
08/18/2022
- 04:24 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- This topic was discussed during the User + Dev meeting on Aug. 8th, 2022. One revelation that came of the meeting (as...
- 01:12 PM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- Seen during weekly QA run - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-20...
- 09:01 AM Bug #53724 (Fix Under Review): mds: stray directories are not purged when all past parents are clear
08/17/2022
- 01:09 PM Feature #16745: mon: prevent allocating snapids allocated for CephFS
- Greg, do you mean to disable taking snaps for a pool if it's in use with CephFS?
- 08:46 AM Backport #57156 (In Progress): quincy: cephfs-top: wrong/infinitely changing wsp values
- 05:47 AM Backport #57156 (Resolved): quincy: cephfs-top: wrong/infinitely changing wsp values
- https://github.com/ceph/ceph/pull/47648
- 08:42 AM Backport #57155 (In Progress): pacific: cephfs-top: wrong/infinitely changing wsp values
- 05:47 AM Backport #57155 (Resolved): pacific: cephfs-top: wrong/infinitely changing wsp values
- https://github.com/ceph/ceph/pull/47647
- 07:08 AM Backport #57158 (Resolved): quincy: doc: update snap-schedule notes regarding 'start' time
- https://github.com/ceph/ceph/pull/53577
- 07:08 AM Backport #57157 (Resolved): pacific: doc: update snap-schedule notes regarding 'start' time
- https://github.com/ceph/ceph/pull/53576
- 07:06 AM Documentation #56730 (Pending Backport): doc: update snap-schedule notes regarding 'start' time
- 05:44 AM Bug #56537 (Pending Backport): cephfs-top: wrong/infinitely changing wsp values
08/16/2022
- 08:53 PM Bug #57154: kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps
- This issue was first described in https://tracker.ceph.com/issues/56067#note-15
- 08:47 PM Bug #57154 (Pending Backport): kernel/fuse client using ceph ID with uid restricted MDS caps cann...
- A kclient sends cap updates as caller_uid:caller_gid 0:0. A fuse client sends cap updates as caller_uid:caller_gid -1...
- 08:51 PM Bug #56067: Cephfs data loss with root_squash enabled
- Created a separate tracker ticket for the cap updates being dropped for clients using ceph IDs with uid restricted MD...
- 02:41 PM Backport #57042: quincy: pybind/mgr/volumes: interface to check the presence of subvolumegroups/s...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47474
merged
- 02:35 PM Backport #56112: pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya, please do the backport.
>
> https://github.com/ceph/ce...
- 02:31 PM Bug #56666: mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
- https://github.com/ceph/ceph/pull/47281 merged
- 02:07 PM Backport #56590: quincy: qa: iogen workunit: "The following counters failed to be set on mds daem...
- Ramana, please post the backport.
- 02:06 PM Backport #56541: quincy: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Sn...
- Milind, please take this one.
- 02:06 PM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- Venky Shankar wrote:
> Milind, please take this one.
Sorry - I meant to update the backport tracker.
- 02:04 PM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- Milind, please take this one.
- 02:05 PM Backport #56542: pacific: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = S...
- Milind, please take this one.
- 01:46 PM Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
- This is seen recently in https://pulpito.ceph.com/yuriw-2022-08-11_16:57:01-fs-wip-yuri3-testing-2022-08-11-0809-paci...
- 04:20 AM Bug #56249 (Fix Under Review): crash: int Client::_do_remount(bool): abort
08/15/2022
- 03:21 PM Backport #56978: pacific: mgr/volumes: Subvolume creation failed on FIPs enabled system
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47369
merged
- 12:27 PM Bug #50834 (Resolved): MDS heartbeat timed out between during executing MDCache::start_files_to_r...
- 12:27 PM Backport #50914 (Resolved): octopus: MDS heartbeat timed out between during executing MDCache::st...
- 12:26 PM Bug #52123 (Resolved): mds sends cap updates with btime zeroed out
- 12:26 PM Backport #52635 (Resolved): pacific: mds sends cap updates with btime zeroed out
- 12:26 PM Backport #52634 (Resolved): octopus: mds sends cap updates with btime zeroed out
- 12:26 PM Bug #48422 (Resolved): mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(mds->get_n...
- 12:26 PM Backport #51933 (Resolved): octopus: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.co...
- 12:25 PM Bug #48231 (Resolved): qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- 12:25 PM Backport #51201 (Resolved): octopus: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- 12:23 PM Bug #41072 (Resolved): scheduled cephfs snapshots (via ceph manager)
- 12:23 PM Backport #47200 (Rejected): octopus: scheduled cephfs snapshots (via ceph manager)
- 12:22 PM Bug #53952 (Resolved): mds: mds_oft_prefetch_dirfrags default to false
- 12:21 PM Backport #54194 (Resolved): pacific: mds: mds_oft_prefetch_dirfrags default to false
- 12:21 PM Backport #54196 (Resolved): quincy: mds: mds_oft_prefetch_dirfrags default to false
- 12:21 PM Backport #54195 (Resolved): octopus: mds: mds_oft_prefetch_dirfrags default to false
- 12:21 PM Bug #53805 (Resolved): mds: seg fault in expire_recursive
- 12:21 PM Backport #54407 (Resolved): quincy: mds: seg fault in expire_recursive
- 12:20 PM Backport #54220 (Resolved): pacific: mds: seg fault in expire_recursive
- 12:20 PM Backport #54219 (Resolved): octopus: mds: seg fault in expire_recursive
- 09:35 AM Bug #56249 (In Progress): crash: int Client::_do_remount(bool): abort
- 09:35 AM Bug #56249: crash: int Client::_do_remount(bool): abort
- I went through the kernel code and couldn't find anything in our case that could cause the failure.
And from https://tracke...
- 07:05 AM Bug #56249: crash: int Client::_do_remount(bool): abort
- Xiubo Li wrote:
> This only exists in **v17.1.0** and the logic has been changed after [1][2][3] below. When tryi...
- 06:16 AM Bug #56249: crash: int Client::_do_remount(bool): abort
- This only exists in **v17.1.0** and the logic has been changed after [1][2][3] below. When trying remount to inval...
- 06:04 AM Bug #56249: crash: int Client::_do_remount(bool): abort
- Venky,
Please check this one: https://tracker.ceph.com/issues/56532. It should be the same bug as this one.
- 07:18 AM Bug #57126 (Fix Under Review): client: abort the client daemons when we couldn't invalidate the d...
- 07:12 AM Bug #57126 (Resolved): client: abort the client daemons when we couldn't invalidate the dentry ca...
- It was introduced by https://tracker.ceph.com/issues/54049.
From the options:...
- 05:54 AM Bug #54653: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map.count(stag))
- I found one case that could cause this, such as in xfstests-dev's open_by_handle.c, which will use the name_to_handle_...
- 05:53 AM Bug #56380: crash: Client::_get_vino(Inode*)
- I found one case that could cause this, such as in xfstests-dev's open_by_handle.c, which will use the name_to_handle_...
- 05:52 AM Bug #56774: crash: Client::_get_vino(Inode*)
- I found one case that could cause this, such as in xfstests-dev's open_by_handle.c, which will use the name_to_handle_...
- 05:18 AM Bug #56774 (Duplicate): crash: Client::_get_vino(Inode*)
- 05:52 AM Bug #56263: crash: Client::_get_vino(Inode*)
- I found one case that could cause this, such as in xfstests-dev's open_by_handle.c, which will use the name_to_handle_...
08/12/2022
- 05:22 PM Documentation #57062: Document access patterns that have good/pathological performance on CephFS
- I think that a good place for this info to be added would be https://docs.ceph.com/en/quincy/cephfs/app-best-practice...
- 12:03 PM Documentation #57115 (New): Explanation for cache pressure
- Following up on the "thread":https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/DOUQNI5YQ74YB3FS5ZOQI2MS...
- 09:54 AM Bug #56996: Transient data read corruption from other machine
- Confirmed with Venky, when the **CInode::filelock** is in **LOCK_MIX** state we won't guarantee the data consistency ...
- 09:31 AM Feature #40633 (Resolved): mds: dump recent log events for extraordinary events
- 09:23 AM Backport #57113 (Resolved): pacific: Intermittent ParsingError failure in mgr/volumes module dur...
- https://github.com/ceph/ceph/pull/47112
- 09:23 AM Backport #57112 (In Progress): quincy: Intermittent ParsingError failure in mgr/volumes module d...
- https://github.com/ceph/ceph/pull/47747
- 09:12 AM Bug #55583 (Pending Backport): Intermittent ParsingError failure in mgr/volumes module during "c...
- 09:11 AM Backport #57111 (Resolved): quincy: mds: handle deferred client request core when mds reboot
- https://github.com/ceph/ceph/pull/53363
- 09:11 AM Backport #57110 (Resolved): pacific: mds: handle deferred client request core when mds reboot
- https://github.com/ceph/ceph/pull/53362
- 09:10 AM Bug #56116 (Pending Backport): mds: handle deferred client request core when mds reboot
- 05:54 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- Milind Changire wrote:
> client-type: fuse
>
> * Iteratively running shell scripts under *qa/workunits/fs/snaps/*...
08/11/2022
- 04:08 PM Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
- Greg Farnum wrote:
> Patrick Donnelly wrote:
> > Venky Shankar wrote:
> > > Patrick,
> > >
> > > Do you mean a ...
- 03:12 PM Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick,
> >
> > Do you mean a standby-replay MDS should tole...
- 01:37 PM Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
- Venky Shankar wrote:
> Patrick,
>
> Do you mean a standby-replay MDS should tolerate missing journal objects?
...
- 01:40 PM Backport #51337 (Rejected): nautilus: mds: avoid journaling overhead for setxattr("ceph.dir.subvo...
- 06:56 AM Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
- Note that the test successfully passed on the re-run
https://pulpito.ceph.com/yuriw-2022-08-10_20:34:29-fs-wip-yuri6...
- 03:44 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Unfortunately I must report that I'm still hitting this issue even with Ceph 16.2.7 and...
08/10/2022
- 05:30 PM Feature #56140 (Fix Under Review): cephfs: tooling to identify inode (metadata) corruption
- 05:20 PM Feature #57091 (Resolved): mds: modify scrub to catch dentry corruption
- Such as "first" snapshot being an invalid value.
- 05:01 PM Feature #57090 (Resolved): MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
- During some recovery situations, it would be useful to have MDS up but prevent clients from establishing sessions. Us...
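A hypothetical illustration of how such an MDSMap flag might be toggled from the CLI once implemented; the flag name refuse_client_session and the filesystem name cephfs are assumptions, not confirmed by this entry:
    # Hypothetical: keep the MDS up but refuse new client sessions during recovery (flag name assumed).
    ceph fs set cephfs refuse_client_session true
    # ... perform recovery/repair work ...
    ceph fs set cephfs refuse_client_session false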
- 03:18 PM Backport #56979: quincy: mgr/volumes: Subvolume creation failed on FIPs enabled system
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47368
merged
Reviewed-by: Ramana Raja <rraja@redhat.com>
- 02:34 PM Bug #55216 (Resolved): cephfs-shell: creates directories in local file system even if file not found
- PR along with backport PRs merged. Marking as resolved.
- 02:30 PM Backport #55627 (Resolved): pacific: cephfs-shell: creates directories in local file system even ...
- merged
- 01:59 PM Feature #55715 (Resolved): pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
- 11:13 AM Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- We will wait for this to happen in recent versions.
- 11:11 AM Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- Lowering the priority as this is seen only in nautilus and not seen in supported versions.
- 10:48 AM Bug #56644: qa: test_rapid_creation fails with "No space left on device"
- https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-default-smithi...
- 10:35 AM Bug #57087 (Pending Backport): qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDat...
- Seen in https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-defaul...
- 10:01 AM Bug #51276 (Resolved): mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for no-o...
- 10:00 AM Backport #51337 (Resolved): nautilus: mds: avoid journaling overhead for setxattr("ceph.dir.subvo...
- Nautilus is EOL
- 09:49 AM Bug #51267: CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps...
- Seen in https://pulpito.ceph.com/yuriw-2022-08-04_20:54:08-fs-wip-yuri6-testing-2022-08-04-0617-pacific-distro-defaul...
- 08:18 AM Bug #57083 (Fix Under Review): ceph-fuse: monclient(hunting): handle_auth_bad_method server allow...
- 07:56 AM Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but ...
- The **nautilus** is using **python2**, while the **pacific** qa suite is using **python3** and the qa test suite seem...
- 07:37 AM Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but ...
- From **remote/smithi029/log/ceph-mon.a.log.gz**: ...
- 07:26 AM Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but ...
- The root cause is that in **nautilus** the **qa/workunits/fs/upgrade/volume_client** script is using **python2** to r...
- 07:21 AM Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but ...
- From **remote/smithi029/log/ceph-mon.a.log.gz**: ...
- 07:10 AM Bug #57083 (Resolved): ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_metho...
- From https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-s...
- 07:54 AM Bug #53360 (Duplicate): pacific: client: "handle_auth_bad_method server allowed_methods [2] but i...
- Missed this existing tracker. Will track this in https://tracker.ceph.com/issues/57083 tracker. Have found root cause...
- 07:37 AM Bug #57084 (Resolved): Permissions of the .snap directory do not inherit ACLs
- When using CephFS with POSIX ACLs I noticed that the .snap directory does not inherit the ACLs from its parent but on...
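A minimal shell sketch of how the reported behavior might be observed, assuming a CephFS mount at /mnt/cephfs with snapshots enabled; the directory path and ACL entries are illustrative:
    # Apply an ACL (plus a default ACL) to a directory, then compare it with its .snap directory.
    setfacl -m u:alice:rwx -m d:u:alice:rwx /mnt/cephfs/dir
    getfacl /mnt/cephfs/dir          # shows the user:alice entries
    getfacl /mnt/cephfs/dir/.snap    # reportedly shows only the base owner/group/other bits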
- 07:26 AM Backport #53861: pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- Created a new tracker to fix it https://tracker.ceph.com/issues/57083.
- 06:50 AM Backport #53861: pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- Xiubo Li wrote:
> Kotresh Hiremath Ravishankar wrote:
> > Xiubo,
> >
> > Looks like this is seen again in this p...
- 07:23 AM Bug #55572: qa/cephfs: omit_sudo doesn't have effect when passed to run_shell()
- I think this needs to be backported. Nikhil mentioned that the PR https://github.com/ceph/ceph/pull/47112 in pacific ...
- 07:09 AM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- Seen in recent quincy run
https://pulpito.ceph.com/yuriw-2022-08-04_11:54:20-fs-wip-yuri8-testing-2022-08-0...
- 07:08 AM Bug #57071 (Fix Under Review): mds: consider mds_cap_revoke_eviction_timeout for get_late_revokin...
08/09/2022
- 04:13 PM Backport #56527: pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_replay())
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47111
merged
- 04:12 PM Backport #56152: pacific: mgr/snap_schedule: schedule updates are not persisted across mgr restart
- https://github.com/ceph/ceph/pull/46797 merged
- 12:55 PM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Thanks for the quick answer. Then, I guess, the patch to the ceph-fs clients will handle thi...
- 12:54 PM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Thanks for the quick answer. Then, I guess, the patch to the ceph-fs clients will handle thi...
- 12:47 PM Bug #56529: ceph-fs crashes on getfattr
- Thanks for the quick answer. Then, I guess, the patch to the ceph-fs clients will handle this once it is approved? I ...
- 12:40 PM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Hi all,
>
> this story continues, this time with a _valid_ vxattr name. I just observed e...
- 12:33 PM Bug #56529: ceph-fs crashes on getfattr
- Hi all,
this story continues, this time with a _valid_ vxattr name. I just observed exactly the same problem now w...
- 11:40 AM Bug #57072 (Pending Backport): Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- `ceph fs status` returns an AssertionError
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/m... - 10:24 AM Backport #53861: pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- Kotresh Hiremath Ravishankar wrote:
> Xiubo,
>
> Looks like this is seen again in this pacific run ?
>
> https...
- 10:13 AM Backport #53861: pacific: qa: tasks.cephfs.fuse_mount:mount command failed
- Xiubo,
Looks like this is seen again in this pacific run ?
https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-f...
- 10:24 AM Bug #56644: qa: test_rapid_creation fails with "No space left on device"
- Seen in recent pacific run https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pa...
- 09:27 AM Bug #57071 (Fix Under Review): mds: consider mds_cap_revoke_eviction_timeout for get_late_revokin...
- Even though mds_cap_revoke_eviction_timeout is set to zero, ceph-mon reports some clients failing to respond to capab...
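For context, a small sketch of how the option and the resulting warning can be inspected; 0 is the default and means eviction of clients that are late releasing caps is disabled:
    # Check the configured cap-revoke eviction timeout (0 disables eviction of late clients).
    ceph config get mds mds_cap_revoke_eviction_timeout
    # The warning about clients failing to respond to capability release shows up here.
    ceph health detail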
- 09:01 AM Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
- Patrick,
Do you mean a standby-replay MDS should tolerate missing journal objects? How can it end up in such a sit...
- 08:58 AM Bug #56808: crash: LogSegment* MDLog::get_current_segment(): assert(!segments.empty())
- Looks similar to https://tracker.ceph.com/issues/51589 which was fixed a while ago.
Kotresh, please RCA this.
- 08:16 AM Backport #57058 (In Progress): pacific: mgr/volumes: Handle internal metadata directories under '...
- 08:06 AM Backport #57057 (In Progress): quincy: mgr/volumes: Handle internal metadata directories under '/...
- 07:07 AM Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
- This maybe duplicated to https://tracker.ceph.com/issues/55332.
- 06:55 AM Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
- Seen in this run too.
https://pulpito.ceph.com/yuriw-2022-08-02_21:20:37-fs-wip-yuri7-testing-2022-07-27-0808-quin... - 06:51 AM Bug #56592: mds: crash when mounting a client during the scrub repair is going on
- More info:
I was just simulating the cu case we hit by just removing one object of the directory from the metadata...
- 06:47 AM Bug #56592: mds: crash when mounting a client during the scrub repair is going on
- Venky Shankar wrote:
> Xiubo,
>
> Were you trying to mount /mydir when it was getting repaired?
No, I was just...
- 06:23 AM Bug #56592: mds: crash when mounting a client during the scrub repair is going on
- Xiubo,
Were you trying to mount /mydir when it was getting repaired?
- 06:47 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- Seen in this quincy run https://pulpito.ceph.com/yuriw-2022-08-02_21:20:37-fs-wip-yuri7-testing-2022-07-27-0808-quinc...
- 06:30 AM Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
- Dhairya,
Please take a look at this. I think there is some sort of race that is causing this crash while iterating...
- 06:25 AM Bug #57014: cephfs-top: add an option to dump the computed values to stdout
- Jos, please take this one.
- 05:54 AM Bug #56996 (In Progress): Transient data read corruption from other machine
- 04:48 AM Bug #56996: Transient data read corruption from other machine
- Witold Baryluk wrote:
> Ok. I still do not understand why this can happen:
>
> writer: write("a"); write("b"); wr... - 04:59 AM Bug #57065 (Closed): qa: test_query_client_ip_filter fails with latest 'perf stats' structure cha...
- test_query_client_ip_filter fails with the below error in tests [1] and [2]. This happens when PR [3] is tested.
...
- 04:41 AM Bug #57064 (Need More Info): qa: test_add_ancestor_and_child_directory failure
- Seen in recent quincy run https://pulpito.ceph.com/yuriw-2022-08-04_11:54:20-fs-wip-yuri8-testing-2022-08-03-1028-qui...
- 02:47 AM Bug #56067: Cephfs data loss with root_squash enabled
- Patrick Donnelly wrote:
> Please open a PR for discussion.
https://github.com/ceph/ceph/pull/47506. Please take ...
- 02:45 AM Bug #56067 (Fix Under Review): Cephfs data loss with root_squash enabled
08/08/2022
- 08:56 PM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- /a/yuriw-2022-08-04_11:58:29-rados-wip-yuri3-testing-2022-08-03-0828-pacific-distro-default-smithi/6958108
- 07:03 PM Documentation #57062 (New): Document access patterns that have good/pathological performance on C...
- I have a CephFS 16.2.7 with 200 M small files (between 1 KB and 100 KB; there are a few larger ones up to 200 MB) and ...
- 03:28 PM Bug #56048: ceph.mirror.info is not removed from target FS when mirroring is disabled
- Hi Venky,
I tried it again, now with 17.2.1, and I could reproduce the issue. The mgr debug log is below.
As fa...
- 01:08 PM Bug #56048: ceph.mirror.info is not removed from target FS when mirroring is disabled
- Andreas Teuchert wrote:
> When disabling mirroring on a FS with "ceph fs snapshot mirror disable <source-fs>" the "c...
- 02:32 PM Bug #56996: Transient data read corruption from other machine
- Ok. I still do not understand why this can happen:
writer: write("a"); write("b"); write("c");
reader (other cl...
- 06:33 AM Bug #56996: Transient data read corruption from other machine
- Witold Baryluk wrote:
> What about when there is one writer and one reader?
This will depend on whether they are ...
- 01:18 PM Feature #56643: scrub: add one subcommand or option to add the missing objects back
- Venky Shankar wrote:
> Xiubo Li wrote:
> > When we are scrub repairing the metadata and some objects may get lost ...
- 01:02 PM Feature #56643: scrub: add one subcommand or option to add the missing objects back
- Xiubo Li wrote:
> When we are scrub repairing the metadata and some objects may get lost for some reason. After...
- 01:01 PM Bug #56249: crash: int Client::_do_remount(bool): abort
- Xiubo Li wrote:
> Should be fixed by https://tracker.ceph.com/issues/54049.
Looks the same. However, I'm not sure...
- 09:41 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Milind Changire wrote:
> Adding any more condition to the assertion expression and passing the assertion is not goin...
- 08:07 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Adding any more condition to the assertion expression and passing the assertion is not going to do any good.
Since M...
- 05:37 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Never mind - I see the err coming from JournalPointer. If the MDS is respawning/shutting down could that condition ad...
- 05:29 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Milind Changire wrote:
> This seems to be a race between an mds respawn and the MDLog::_recovery_thread()
> In Paci...
- 08:55 AM Backport #57058 (Resolved): pacific: mgr/volumes: Handle internal metadata directories under '/vo...
- https://github.com/ceph/ceph/pull/47512
- 08:55 AM Backport #57057 (Resolved): quincy: mgr/volumes: Handle internal metadata directories under '/vol...
- https://github.com/ceph/ceph/pull/47511
- 08:54 AM Bug #55762 (Pending Backport): mgr/volumes: Handle internal metadata directories under '/volumes'...
08/05/2022
- 09:26 PM Bug #56067: Cephfs data loss with root_squash enabled
- Greg Farnum wrote:
>
>
> But now I have another question -- does this mean that a kclient which only has access ...
- 04:21 PM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- This seems to be a race between an mds respawn and the MDLog::_recovery_thread()
In Pacific, the MDLog::_recovery_th...
- 01:15 PM Bug #57048 (Pending Backport): osdc/Journaler: better handle ENOENT during replay as up:standby-r...
- ...
- 06:37 AM Backport #57042 (In Progress): quincy: pybind/mgr/volumes: interface to check the presence of sub...
- 04:42 AM Bug #48673: High memory usage on standby replay MDS
- We seem to be running into this pretty frequently and easily with standby-replay configuration.
08/04/2022
- 11:43 PM Bug #57044 (Fix Under Review): mds: add some debug logs for "crash during construction of interna...
- 11:42 PM Bug #57044 (Resolved): mds: add some debug logs for "crash during construction of internal request"
- ...
- 07:26 PM Bug #56802 (Duplicate): crash: void MDLog::_submit_entry(LogEvent*, MDSLogContextBase*): assert(!...
- 03:33 PM Bug #55897: test_nfs: update of export's access type should not trigger NFS service restart
- /a/yuriw-2022-08-03_20:33:43-rados-wip-yuri8-testing-2022-08-03-1028-quincy-distro-default-smithi/6957515
- 02:37 PM Backport #57041 (In Progress): pacific: pybind/mgr/volumes: interface to check the presence of su...
- 01:15 PM Backport #57041 (Resolved): pacific: pybind/mgr/volumes: interface to check the presence of subvo...
- https://github.com/ceph/ceph/pull/47460
- 01:15 PM Backport #57042 (Resolved): quincy: pybind/mgr/volumes: interface to check the presence of subvol...
- https://github.com/ceph/ceph/pull/47474
- 01:10 PM Feature #55821 (Pending Backport): pybind/mgr/volumes: interface to check the presence of subvolu...
- 12:19 PM Bug #56996: Transient data read corruption from other machine
- What about when there is one writer and one reader?
- 12:36 AM Bug #56996: Transient data read corruption from other machine
- I am not very sure this is a bug.
If there are multiple clients and they are in any of:... - 10:59 AM Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into anothe...
- Kotresh, please take a look at this.
08/03/2022
- 02:46 PM Bug #56644: qa: test_rapid_creation fails with "No space left on device"
- Rishabh,
Do we know why the space issue started to show up recently?
- 02:19 PM Bug #56517 (Resolved): fuse_ll.cc: error: expected identifier before ‘{’ token 1379 | {
- 10:36 AM Bug #57014 (Resolved): cephfs-top: add an option to dump the computed values to stdout
- It would be nice if cephfs-top dumps its computed values to stdout in json format. The json should contain all the f...
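A hypothetical invocation of what such an option could look like once implemented; the --dump flag name is an assumption used purely for illustration:
    # Hypothetical: print the computed per-client metrics once as JSON instead of starting the curses UI.
    cephfs-top --dump > cephfs-top-metrics.json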
- 08:16 AM Backport #56462 (In Progress): pacific: mds: crash due to seemingly unrecoverable metadata error
- 08:15 AM Backport #56462 (Need More Info): pacific: mds: crash due to seemingly unrecoverable metadata error
- 08:12 AM Backport #56461 (In Progress): quincy: mds: crash due to seemingly unrecoverable metadata error
- 06:13 AM Bug #56506: pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_scan.TestDa...
- Milind, please RCA this.
- 12:04 AM Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into anothe...
- Downstream BZ - https://bugzilla.redhat.com/show_bug.cgi?id=2114615
08/02/2022
- 02:09 PM Bug #56626 (Closed): "ceph fs volume create" fails with error ERANGE
- Closing the bug. Changes in devstack-plugin-ceph, https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/85152...
- 02:03 PM Bug #55858: Pacific 16.2.7 MDS constantly crashing
- I've noticed a commonality when this is being triggered: Singularity is being used https://en.wikipedia.org/wiki/Sing...
- 08:15 AM Bug #56802: crash: void MDLog::_submit_entry(LogEvent*, MDSLogContextBase*): assert(!mds->is_any_...
- Maybe this is relevant information to reproduce the crash:
I have NFS Ganesha running to export CephFS and when I ...
- 06:47 AM Bug #56988: mds: memory leak suspected
- Here is a graph of the memory summary without and with the automated restart.
- 06:34 AM Bug #56988: mds: memory leak suspected
- I have automated restarting a single MDS-Server when MDS memory consumption is 80GB (roughly twice the configured mds...
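A rough sketch of what such an automated restart might look like, assuming a systemd-managed MDS; the 80 GB threshold comes from the comment, while the use of ps and the unit name are illustrative (not the reporter's actual tooling):
    #!/bin/sh
    # Illustrative workaround, not a fix: restart the local MDS if its RSS exceeds ~80 GB.
    MDS_UNIT="ceph-mds.target"                  # illustrative systemd unit name
    LIMIT_KB=$((80 * 1024 * 1024))              # 80 GB expressed in KB
    RSS_KB=$(ps -o rss= -C ceph-mds | awk '{s+=$1} END {print s+0}')
    if [ "$RSS_KB" -gt "$LIMIT_KB" ]; then
        systemctl restart "$MDS_UNIT"
    fi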
- 06:28 AM Bug #56695 (Fix Under Review): [RHEL stock] pjd test failures(a bug that need to wait the unlink ...
- 05:42 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Patrick Donnelly wrote:
> [...]
>
> /ceph/teuthology-archive/pdonnell-2022-07-22_19:42:58-fs-wip-pdonnell-testing...
- 02:50 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Xiubo Li wrote:
> Tried **4.18.0-348.20.1.el8_5.x86_64** and couldn't reproduce it.
>
> Will try the exact same ...
- 02:37 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Tried **4.18.0-348.20.1.el8_5.x86_64** and couldn't reproduce it.
Will try the exact same version of **kernel-4.1...
08/01/2022
- 04:34 PM Bug #56996 (In Progress): Transient data read corruption from other machine
- Kernel cephfs on both sides.
* ceph version 15.2.15 (2dfb18841cfecc2f7eb7eb2afd65986ca4d95985) octopus (stable)
*...
- 09:47 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Test this with the latest **testing** kclient branch, I couldn't reproduce it.
Will switch to use the distro kerne...
- 09:46 AM Bug #56695: [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- Xiubo Li wrote:
> Currently the kclient's **testing** branch has merged the fscryption name related patches, which w...
- 09:10 AM Bug #56695 (In Progress): [RHEL stock] pjd test failures(a bug that need to wait the unlink to fi...
- Currently the kclient's **testing** branch has merged the fscryption name related patches, which will limit the **MAX...
- 09:08 AM Bug #56633 (Need More Info): mds: crash during construction of internal request
- Locally I couldn't reproduce it. And by reading the code I couldn't figure out in which case the internal reques...
- 08:59 AM Bug #53573: qa: test new clients against older Ceph clusters
- Xiubo Li wrote:
> The tracker [1] has done the test for new clients with nautilus ceph simultaneously.
>
> [1] ht...
- 08:51 AM Bug #53573: qa: test new clients against older Ceph clusters
- The tracker [1] has done the test for new clients with nautilus ceph simultaneously.
[1] https://tracker.ceph.com/...
- 07:01 AM Bug #56988 (Need More Info): mds: memory leak suspected
- We are running a CephFS pacific cluster in production:
MDS version: ceph version 16.2.9 (4c3647a322c0ff5a1dd2344...
07/30/2022
- 12:39 PM Backport #56978 (In Progress): pacific: mgr/volumes: Subvolume creation failed on FIPs enabled sy...
- 11:45 AM Backport #56978 (Resolved): pacific: mgr/volumes: Subvolume creation failed on FIPs enabled system
- https://github.com/ceph/ceph/pull/47369
- 12:31 PM Backport #56979 (In Progress): quincy: mgr/volumes: Subvolume creation failed on FIPs enabled system
- 11:45 AM Backport #56979 (Resolved): quincy: mgr/volumes: Subvolume creation failed on FIPs enabled system
- https://github.com/ceph/ceph/pull/47368
- 11:45 AM Backport #56980 (Rejected): octopus: mgr/volumes: Subvolume creation failed on FIPs enabled system
- 11:41 AM Bug #56727 (Pending Backport): mgr/volumes: Subvolume creation failed on FIPs enabled system
07/29/2022
- 02:55 PM Bug #56626: "ceph fs volume create" fails with error ERANGE
- Once we confirm that removing osd_pool_default_pgp_num and osd_pool_default_pg_num in devstack-plugin-ceph works, we ...
- 07:17 AM Bug #56626: "ceph fs volume create" fails with error ERANGE
- Tested the deployment with "osd_pool_default_pg_autoscale_mode = off" in bootstrap_conf and seems to fix the issue. H...
- 01:01 PM Backport #50126 (Rejected): octopus: pybind/mgr/volumes: deadlock on async job hangs finisher thread
07/28/2022
- 09:00 PM Bug #56626 (Need More Info): "ceph fs volume create" fails with error ERANGE
- This does not seem like a bug in the mgr/volumes code. The mgr/volumes module creates FS pools using `osd pool create...
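For reference, a minimal sketch of the command form being referred to; the pool name and pg_num are illustrative, and whether a given pg_num trips the cluster's PG-per-OSD limit (producing the ERANGE seen here) depends on the cluster size:
    # Illustrative: a pool create whose pg_num exceeds the cluster's PG-per-OSD budget is rejected with ERANGE.
    ceph osd pool create cephfs.a.data 4096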
- 01:29 PM Backport #53714 (Resolved): pacific: mds: fails to reintegrate strays if destdn's directory is fu...
- 01:26 PM Bug #56633: mds: crash during construction of internal request
- Xiubo volunteered yesterday and said he's started work on this in standup today.
- 07:53 AM Bug #46140 (Resolved): mds: couldn't see the logs in log file before the daemon get aborted
- Checked the code, all the **assert()/abort()** have been fixed. Closing it.
- 07:37 AM Bug #46140 (New): mds: couldn't see the logs in log file before the daemon get aborted
- 07:37 AM Bug #46140: mds: couldn't see the logs in log file before the daemon get aborted
- I recalled it, we need to switch `assert()` to `ceph_assert()`. And the `ceph_assert()` will help dump the recent log...
- 02:20 AM Bug #56830 (Can't reproduce): crash: cephfs::mirror::PeerReplayer::pick_directory()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d6f26d40363a53f0bed9a466...
- 02:19 AM Bug #56808 (In Progress): crash: LogSegment* MDLog::get_current_segment(): assert(!segments.empty())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=70db1b6eecab75317a1e77bd...
- 02:19 AM Bug #56802 (Duplicate): crash: void MDLog::_submit_entry(LogEvent*, MDSLogContextBase*): assert(!...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=902003e195a320e2927d5e39...
- 02:16 AM Bug #56774 (Duplicate): crash: Client::_get_vino(Inode*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=90a2f49686f20a5d71a3cdc3...
- 01:41 AM Bug #56067: Cephfs data loss with root_squash enabled
- Ramana Raja wrote:
> I made the following change to the Locker code, and then checked how kclient and fuse client be...
- 01:15 AM Bug #56605 (In Progress): Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Our purpose here is to recover the snaprealms and snaptable from the data ...
07/27/2022
- 04:59 PM Bug #56727 (Fix Under Review): mgr/volumes: Subvolume creation failed on FIPs enabled system
- 11:06 AM Bug #56727 (Resolved): mgr/volumes: Subvolume creation failed on FIPs enabled system
The subvolume creation hits the following traceback on a FIPS-enabled system....
- 02:02 PM Bug #56067: Cephfs data loss with root_squash enabled
- Please open a PR for discussion.
- 12:29 PM Bug #56067: Cephfs data loss with root_squash enabled
- I made the following change to the Locker code, and then checked how kclient and fuse client behaved with root_squash...
- 03:59 AM Bug #56067: Cephfs data loss with root_squash enabled
- Patrick Donnelly wrote:
> Good work tracking that down Ramana! I don't think it's reasonable to try to require the c...
- 01:44 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Xiubo Li wrote:
> Our purpose here is to recover the snaprealms and snaptable from the data pool. It's hard to do th...
- 08:17 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- The **listsnaps** could list the snapids of the objects:...
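For reference, a minimal example of the rados invocation being referred to; the pool name is illustrative and the object name is taken from a later comment in this thread:
    # List the snapshot/clone ids RADOS tracks for a CephFS data object.
    rados -p cephfs_data listsnaps 1000098a1a5.00000000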
- 07:32 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- > We should be able to see that we're missing snapshots by listing snaps on objects?
Yeah. If a file was snapshote...
- 07:10 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Here is my test case locally https://github.com/lxbsz/ceph/tree/wip-56605-...
- 05:42 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Xiubo Li wrote:
> Here is my test case locally https://github.com/lxbsz/ceph/tree/wip-56605-draft.
>
> By using:
...
- 01:30 PM Feature #55121: cephfs-top: new options to limit and order-by
- Jos Collin wrote:
> Greg Farnum wrote:
> > Can't fs top already change the sort order? I thought that was done in N...
- 01:11 PM Documentation #56730: doc: update snap-schedule notes regarding 'start' time
- Adding chat discussion from #cephfs IRC channel :
<gauravsitlani> Hi team i have a quick question regarding : http...
- 01:06 PM Documentation #56730 (Resolved): doc: update snap-schedule notes regarding 'start' time
- Add notes to snap-schedule mgr plugin documentation about the handling of time zone for the 'start' time.
Primary ...
- 12:55 PM Bug #46140 (Closed): mds: couldn't see the logs in log file before the daemon get aborted
- After a brief discussion with @Xiubo Li, we decided to close this tracker as this issue was encountered while debuggi...
- 11:50 AM Bug #55112 (Resolved): cephfs-shell: saving files doesn't work as expected
- 11:49 AM Backport #55629 (Resolved): pacific: cephfs-shell: saving files doesn't work as expected
- 11:49 AM Bug #55242 (Resolved): cephfs-shell: put command should accept both path mandatorily and validate...
- 11:49 AM Backport #55625 (Resolved): pacific: cephfs-shell: put command should accept both path mandatoril...
- 11:36 AM Bug #40860 (Resolved): cephfs-shell: raises incorrect error when regfiles are passed to be deleted
- 11:36 AM Documentation #54551 (Resolved): docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds ca...
- 11:35 AM Backport #55238 (Resolved): pacific: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-md...
- 10:04 AM Bug #56659: mgr: crash after upgrade pacific to main
- Patrick,
Your patch to fix the libsqlite3-mod-ceph dependency and the eventual crash has worked to resolve the crash...
07/26/2022
- 08:44 PM Bug #56659 (Duplicate): mgr: crash after upgrade pacific to main
- 02:31 PM Backport #56712 (In Progress): pacific: mds: standby-replay daemon always removed in MDSMonitor::...
- 01:05 PM Backport #56712 (Resolved): pacific: mds: standby-replay daemon always removed in MDSMonitor::pre...
- https://github.com/ceph/ceph/pull/47282
- 02:30 PM Backport #56713 (In Progress): quincy: mds: standby-replay daemon always removed in MDSMonitor::p...
- 01:05 PM Backport #56713 (Resolved): quincy: mds: standby-replay daemon always removed in MDSMonitor::prep...
- https://github.com/ceph/ceph/pull/47281
- 01:03 PM Bug #56666 (Pending Backport): mds: standby-replay daemon always removed in MDSMonitor::prepare_b...
- 12:14 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Here is my test case locally https://github.com/lxbsz/ceph/tree/wip-56605-draft.
By using:...
07/25/2022
- 03:20 PM Bug #56698 (Resolved): client: FAILED ceph_assert(_size == 0)
- ...
- 03:17 PM Bug #56697 (New): qa: fs/snaps fails for fuse
- ...
- 02:46 PM Bug #56695 (Resolved): [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- ...
- 02:38 PM Bug #56694 (Fix Under Review): qa: avoid blocking forever on hung umount
- 02:34 PM Bug #56694 (Rejected): qa: avoid blocking forever on hung umount
- /ceph/teuthology-archive/pdonnell-2022-07-22_19:42:58-fs-wip-pdonnell-testing-20220721.235756-distro-default-smithi/6...
- 11:18 AM Bug #56626 (In Progress): "ceph fs volume create" fails with error ERANGE
- 11:16 AM Bug #56626: "ceph fs volume create" fails with error ERANGE
- Hi Victoria,
I am not very familiar with the osd configs but as per code if 'osd_pool_default_pg_autoscale_mode' i...
- 06:39 AM Bug #55858 (Need More Info): Pacific 16.2.7 MDS constantly crashing
- 04:59 AM Backport #56469 (In Progress): quincy: mgr/volumes: display in-progress clones for a snapshot
07/24/2022
- 06:20 PM Bug #56067: Cephfs data loss with root_squash enabled
- Patrick Donnelly wrote:
> I don't think it's reasonable to try to require the client mount to keep track of which ap...
07/23/2022
- 05:27 PM Bug #55759 (Resolved): mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- 05:27 PM Bug #55822 (Resolved): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
- 05:25 PM Backport #56103 (Resolved): quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
07/22/2022
- 06:18 PM Feature #50470 (Resolved): cephfs-top: multiple file system support
- 06:17 PM Bug #52982 (Resolved): client: Inode::hold_caps_until should be a time from a monotonic clock
- 06:17 PM Backport #55937 (Resolved): pacific: client: Inode::hold_caps_until should be a time from a monot...
- 05:31 PM Bug #55971 (Resolved): LibRadosMiscConnectFailure.ConnectFailure test failure
- 05:30 PM Backport #56005 (Resolved): pacific: LibRadosMiscConnectFailure.ConnectFailure test failure
- 05:30 PM Backport #56004 (Resolved): quincy: LibRadosMiscConnectFailure.ConnectFailure test failure
- 04:37 PM Backport #55936 (Resolved): quincy: client: Inode::hold_caps_until should be a time from a monoto...
- 12:07 PM Backport #55936: quincy: client: Inode::hold_caps_until should be a time from a monotonic clock
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46563
merged
- 04:37 PM Backport #56013 (Resolved): quincy: quota support for subvolumegroup
- 12:10 PM Backport #56013: quincy: quota support for subvolumegroup
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46667
merged
- 04:37 PM Backport #56108 (Resolved): quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapsh...
- 12:12 PM Backport #56108: quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' co...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46804
merged
- 04:36 PM Bug #56067: Cephfs data loss with root_squash enabled
- Good work tracking that down Ramana! I don't think it's reasonable to try to require the client mount to keep track o...
- 12:13 PM Backport #56103: quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46805
merged - 12:09 PM Backport #54578: quincy: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding pat...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46647
merged
- 02:58 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Matan Breizman wrote:
> > Meaning,
> > > We can see the 1000098a1a5.00000000 object is still...
- 02:52 AM Bug #56605 (Need More Info): Snapshot and xattr scanning in cephfs-data-scan
- Matan Breizman wrote:
> Meaning,
> > We can see the 1000098a1a5.00000000 object is still in the data pool: ...
> ...
- 12:33 AM Bug #56638: Restore the AT_NO_ATTR_SYNC define in libcephfs
- John Mulligan wrote:
> I'm setting the backport field now for pacific & quincy. I hope I am setting it properly. Ple...
- 12:21 AM Bug #56666 (Fix Under Review): mds: standby-replay daemon always removed in MDSMonitor::prepare_b...
07/21/2022
- 10:25 PM Bug #56067: Cephfs data loss with root_squash enabled
- Greg Farnum wrote:
> Hmm. Is the kernel client just losing track of root_squash when flushing caps? That is a differ...
- 12:49 PM Bug #56067: Cephfs data loss with root_squash enabled
- Hmm. Is the kernel client just losing track of root_squash when flushing caps? That is a different path than the more...
- 12:36 PM Bug #56067 (In Progress): Cephfs data loss with root_squash enabled
- 02:14 AM Bug #56067: Cephfs data loss with root_squash enabled
- With vstart cluster (ceph main branch), I was able to reproduce the issue with a kernel client (5.17.11-200.fc35.x86_...
- 08:19 PM Bug #56666 (Resolved): mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
- If a standby-replay daemon's beacon makes it to MDSMonitor::prepare_beacon (rarely), it's automatically removed by th...
- 02:54 PM Bug #56638: Restore the AT_NO_ATTR_SYNC define in libcephfs
- I'm setting the backport field now for pacific & quincy. I hope I am setting it properly. Please correct it if I've f...
- 12:09 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Hi Xiubo, Thank you for the detailed information!
From a RADOS standpoint everything is working as expected.
We a...
- 10:22 AM Bug #54283: qa/cephfs: is_mounted() depends on a mutable variable
- Rishabh Dave wrote:
> The PR for this ticket needed a fix for "ticket 56476":https://tracker.ceph.com/issues/56476 in ...
- 08:48 AM Bug #56659 (Duplicate): mgr: crash after upgrade pacific to main
- ...