Activity
From 09/01/2022 to 09/30/2022
09/30/2022
- 07:34 PM Documentation #57737 (Pending Backport): Clarify security implications of path-restricted cephx c...
- https://docs.ceph.com/en/latest/cephfs/client-auth/#path-restriction suggests that you can restrict clients to a subt...
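The path restriction discussed in that doc page uses the `ceph fs authorize` capability syntax; a minimal sketch of such a cap (filesystem name `cephfs_a` and `client.foo` are illustrative, not from this tracker entry):

```shell
# Illustrative only: grant client.foo read access on the root of
# filesystem cephfs_a, and read/write access restricted to /bar.
ceph fs authorize cephfs_a client.foo / r /bar rw
```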
- 08:52 AM Backport #57282 (Resolved): pacific: cephfs-top:addition of filesystem menu(improving GUI)
- 08:51 AM Backport #57395 (Resolved): pacific: crash: int Client::_do_remount(bool): abort
- 08:51 AM Backport #57393 (Resolved): pacific: client: abort the client daemons when we couldn't invalidate...
- 08:48 AM Bug #55190 (Resolved): mgr/volumes: Show clone failure reason in clone status command
- 08:48 AM Backport #55349 (Resolved): pacific: mgr/volumes: Show clone failure reason in clone status command
- 08:48 AM Bug #55313 (Resolved): Unexpected file access behavior using ceph-fuse
- 08:47 AM Backport #55926 (Resolved): quincy: Unexpected file access behavior using ceph-fuse
- 08:47 AM Backport #55927 (Resolved): pacific: Unexpected file access behavior using ceph-fuse
- 08:47 AM Bug #55240 (Resolved): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 08:46 AM Backport #55659 (Resolved): pacific: mds: stuck 2 seconds and keeps retrying to find ino from aut...
- 08:46 AM Bug #54237 (Resolved): pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding path ...
- 08:46 AM Backport #54578 (Resolved): quincy: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and...
- 08:43 AM Backport #54577 (Resolved): pacific: pybind/cephfs: Add mapping for Ernno 13:Permission Denied an...
- 08:41 AM Bug #54653 (Resolved): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_ma...
- 08:40 AM Backport #56056 (Resolved): pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): asser...
- 08:40 AM Bug #56012 (Resolved): mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_replay())
- 08:40 AM Backport #56526 (Resolved): quincy: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_...
- 08:39 AM Backport #56527 (Resolved): pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any...
- 08:38 AM Bug #54052 (Resolved): mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr restart
- 08:38 AM Backport #55055 (Resolved): quincy: mgr/snap-schedule: scheduled snapshots are not created after ...
- 08:38 AM Backport #55056 (Resolved): pacific: mgr/snap-schedule: scheduled snapshots are not created after...
- 08:38 AM Bug #54046 (Resolved): unaccessible dentries after fsstress run with namespace-restricted caps
- 08:37 AM Backport #55427 (Resolved): pacific: unaccessible dentries after fsstress run with namespace-rest...
- 08:37 AM Bug #54374 (Resolved): mgr/snap_schedule: include timezone information in scheduled snapshots
- 08:36 AM Backport #55384 (Resolved): pacific: mgr/snap_schedule: include timezone information in scheduled...
- 08:36 AM Bug #54625 (Resolved): Issue removing subvolume with retained snapshots - Possible quincy regress...
- 08:35 AM Backport #55335 (Resolved): pacific: Issue removing subvolume with retained snapshots - Possible ...
- 08:34 AM Bug #52642 (Resolved): snap scheduler: cephfs snapshot schedule status doesn't list the snapshot ...
- 08:34 AM Backport #53760 (Resolved): pacific: snap scheduler: cephfs snapshot schedule status doesn't list...
- 08:33 AM Bug #55217 (Resolved): pybind/mgr/volumes: Clone operation hangs
- 08:33 AM Backport #55353 (Resolved): quincy: pybind/mgr/volumes: Clone operation hangs
- 08:33 AM Backport #55352 (Resolved): pacific: pybind/mgr/volumes: Clone operation hangs
- 08:33 AM Bug #52606 (Resolved): qa: test_dirfrag_limit
- 08:32 AM Backport #52875 (Resolved): pacific: qa: test_dirfrag_limit
- 08:32 AM Bug #51707 (Resolved): pybind/mgr/volumes: Cloner threads stuck in loop trying to clone the stale...
- 08:32 AM Backport #52384 (Resolved): pacific: pybind/mgr/volumes: Cloner threads stuck in loop trying to c...
- 08:31 AM Bug #54081 (Resolved): mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from v1...
- 08:31 AM Backport #54160 (Resolved): quincy: mon/MDSMonitor: sanity assert when inline data turned on in M...
- 08:31 AM Backport #54161 (Resolved): pacific: mon/MDSMonitor: sanity assert when inline data turned on in ...
- 08:31 AM Bug #53911 (Resolved): client: client session state stuck in opening and hang all the time
- 08:30 AM Backport #54216 (Resolved): quincy: client: client session state stuck in opening and hang all th...
- 08:30 AM Backport #54217 (Resolved): pacific: client: client session state stuck in opening and hang all t...
- 08:30 AM Bug #51062 (Resolved): mds,client: suppport getvxattr RPC
- 08:30 AM Backport #54533 (Resolved): quincy: mds,client: suppport getvxattr RPC
- 08:29 AM Backport #54532 (Resolved): pacific: mds,client: suppport getvxattr RPC
- 08:24 AM Bug #54066 (Resolved): mgr/volumes: uid/gid of the clone is incorrect
- 08:24 AM Backport #54420 (Rejected): octopus: mgr/volumes: uid/gid of the clone is incorrect
- Octopus is EOL
- 08:24 AM Backport #54256 (Resolved): pacific: mgr/volumes: uid/gid of the clone is incorrect
- 06:37 AM Documentation #57734 (Resolved): doc: Fix disaster recovery documentation
- The note about symlink recovery needs to be fixed in the link below. Symlink recovery is fixed in quincy.
ht...
09/29/2022
- 02:32 PM Backport #57729 (Resolved): quincy: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- https://github.com/ceph/ceph/pull/49967
- 02:32 PM Backport #57728 (Resolved): pacific: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- https://github.com/ceph/ceph/pull/49966
- 02:21 PM Bug #57072 (Pending Backport): Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- 01:57 PM Bug #54121 (Resolved): mgr/volumes: File Quota attributes not getting inherited to the cloned volume
- 01:57 PM Backport #54333 (Resolved): quincy: mgr/volumes: File Quota attributes not getting inherited to t...
- 01:57 PM Backport #54331 (Rejected): octopus: mgr/volumes: File Quota attributes not getting inherited to ...
- 01:56 PM Backport #54332 (Resolved): pacific: mgr/volumes: File Quota attributes not getting inherited to ...
- 01:56 PM Bug #54099 (Resolved): mgr/volumes: A deleted subvolumegroup when listed using "ceph fs subvolume...
- 01:56 PM Backport #54336 (Resolved): quincy: mgr/volumes: A deleted subvolumegroup when listed using "ceph...
- 01:55 PM Backport #54334 (Rejected): octopus: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- 01:55 PM Backport #54335 (Resolved): pacific: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- 01:54 PM Backport #53458 (Resolved): pacific: pacific: qa: Test failure: test_deep_split (tasks.cephfs.tes...
- 01:53 PM Backport #53912 (Resolved): pacific: qa: fs:upgrade test fails mds count check
- 01:53 PM Bug #52274 (Resolved): mgr/nfs: add more log messages
- 01:52 PM Backport #52823 (Resolved): pacific: mgr/nfs: add more log messages
- 12:47 PM Backport #57723 (Resolved): pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- https://github.com/ceph/ceph/pull/48417
- 12:46 PM Backport #57722 (In Progress): quincy: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- https://github.com/ceph/ceph/pull/48325
- 12:46 PM Backport #57721 (Resolved): pacific: qa: data-scan/journal-tool do not output debugging in upstre...
- https://github.com/ceph/ceph/pull/50773
- 12:46 PM Backport #57720 (Resolved): quincy: qa: data-scan/journal-tool do not output debugging in upstrea...
- https://github.com/ceph/ceph/pull/50772
- 12:44 PM Bug #57597 (Pending Backport): qa: data-scan/journal-tool do not output debugging in upstream tes...
- 12:26 PM Bug #57446 (Pending Backport): qa: test_subvolume_snapshot_info_if_orphan_clone fails
- PR was reviewed, tested and merged.
- 12:25 PM Backport #57719 (Resolved): quincy: Test failure: test_subvolume_group_ls_filter_internal_directo...
- https://github.com/ceph/ceph/pull/48327
- 12:25 PM Backport #57718 (Resolved): pacific: Test failure: test_subvolume_group_ls_filter_internal_direct...
- https://github.com/ceph/ceph/pull/48328
- 12:25 PM Backport #57717 (Resolved): quincy: libcephfs: incorrectly showing the size for snapdirs when sta...
- https://github.com/ceph/ceph/pull/48414
- 12:25 PM Backport #57716 (Resolved): pacific: libcephfs: incorrectly showing the size for snapdirs when st...
- https://github.com/ceph/ceph/pull/48413
- 12:24 PM Bug #57205 (Pending Backport): Test failure: test_subvolume_group_ls_filter_internal_directories ...
- 12:21 PM Bug #57344 (Pending Backport): libcephfs: incorrectly showing the size for snapdirs when stating ...
- 11:57 AM Backport #57715 (Resolved): quincy: mds: scrub locates mismatch between child accounted_rstats an...
- https://github.com/ceph/ceph/pull/50774
- 11:57 AM Backport #57714 (Resolved): pacific: mds: scrub locates mismatch between child accounted_rstats a...
- https://github.com/ceph/ceph/pull/50775
- 11:57 AM Backport #57713 (Resolved): quincy: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- https://github.com/ceph/ceph/pull/50768
- 11:57 AM Backport #57712 (Resolved): pacific: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- https://github.com/ceph/ceph/pull/50757
- 11:55 AM Bug #57657 (Pending Backport): mds: scrub locates mismatch between child accounted_rstats and sel...
- 11:54 AM Bug #57657: mds: scrub locates mismatch between child accounted_rstats and self rstats
- Patrick Donnelly wrote:
> During standup I was thinking of something else. This test deliberately creates this kind ...
- 11:52 AM Bug #57677 (Pending Backport): qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 04:18 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Dhairya,
Were you able to RCA this?
09/27/2022
- 02:13 PM Backport #57393: pacific: client: abort the client daemons when we couldn't invalidate the dentry...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48109
merged
- 02:13 PM Backport #57395: pacific: crash: int Client::_do_remount(bool): abort
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48108
merged
- 02:12 PM Backport #57282: pacific: cephfs-top:addition of filesystem menu(improving GUI)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47998
merged
- 12:54 PM Bug #57682 (Triaged): client: ERROR: test_reconnect_after_blocklisted
- ...
- 12:37 PM Bug #53573: qa: test new clients against older Ceph clusters
- Dhairya has started work on this.
- 09:35 AM Bug #57611 (Duplicate): qa: failure during qa/workunits/fs/snaps/snaptest-git-ceph.sh
- Duplicate of #54462
- 09:34 AM Bug #57612 (Duplicate): qa: segmentation fault during qa/workunits/libcephfs/test.sh
- Duplicate of #57206
- 09:26 AM Backport #56713 (Resolved): quincy: mds: standby-replay daemon always removed in MDSMonitor::prep...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47281
merged.
- 09:05 AM Backport #57370 (Resolved): quincy: standby-replay mds is removed from MDSMap unexpectedly
- 07:51 AM Backport #57261 (In Progress): pacific: standby-replay mds is removed from MDSMap unexpectedly
- 07:42 AM Backport #57194 (In Progress): pacific: ceph pacific fails to perform fs/mirror test
- 07:41 AM Backport #57193 (In Progress): quincy: ceph pacific fails to perform fs/mirror test
- 06:24 AM Backport #57193: quincy: ceph pacific fails to perform fs/mirror test
- Rishabh, I'm taking this one.
- 06:25 AM Backport #56978 (Resolved): pacific: mgr/volumes: Subvolume creation failed on FIPs enabled system
- 02:21 AM Bug #57677 (Fix Under Review): qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 02:00 AM Bug #57677 (Resolved): qa: "1 MDSs behind on trimming (MDS_TRIM)"
- /ceph/teuthology-archive/pdonnell-2022-09-26_19:11:10-fs-wip-pdonnell-testing-20220923.171109-distro-default-smithi/7...
- 12:30 AM Bug #57676 (Triaged): qa: error during scrub thrashing: rank damage found: {'backtrace'}
- Backtrace scrub failures are back with damage checking in fwd_scrub.py, introduced by
https://github.com/ceph/ceph...
09/26/2022
- 07:46 PM Backport #57671 (In Progress): pacific: mds: damage table only stores one dentry per dirfrag
- 07:45 PM Backport #57670 (In Progress): quincy: mds: damage table only stores one dentry per dirfrag
- 05:29 PM Bug #57657 (Fix Under Review): mds: scrub locates mismatch between child accounted_rstats and sel...
- 05:26 PM Bug #57657: mds: scrub locates mismatch between child accounted_rstats and self rstats
- During standup I was thinking of something else. This test deliberately creates this kind of damage by manually delet...
- 12:48 PM Bug #57657 (Triaged): mds: scrub locates mismatch between child accounted_rstats and self rstats
- 05:08 PM Bug #57641: Ceph FS fscrypt clones missing fscrypt metadata
- Hi Marcel,
> Copying the ceph.fscrypt.auth xattr in the mgr/volumes async clone seems to work:
> https://github.c...
- 02:18 PM Bug #57674: fuse mount crashes the standby MDSes
- Jos said he could take more of a look at this.
- 12:45 PM Bug #57674 (Triaged): fuse mount crashes the standby MDSes
- 10:51 AM Bug #57674 (Closed): fuse mount crashes the standby MDSes
- Fuse-mounting the fs from a large number of clients crashes the standby MDSes and hangs df. Thus 2000 fuse clients cannot be achie...
- 01:07 PM Bug #57531: Mutipule zombie processes, and more and more
- ... or the daemon crashes are a different issue than the zombie processes (ceph-mds??).
- 01:06 PM Bug #57531: Mutipule zombie processes, and more and more
- Are you saying the zombie processes are ceph-osd daemons?
- 01:04 PM Bug #57610: qa: timeout during unwinding of qa/workunits/suites/fsstress.sh
- Venky Shankar wrote:
> Milind, please mark this as a duplicate. Run wiki is here - https://tracker.ceph.com/projects...
- 01:03 PM Bug #57610: qa: timeout during unwinding of qa/workunits/suites/fsstress.sh
- Milind, please mark this as a duplicate. Run wiki is here - https://tracker.ceph.com/projects/cephfs/wiki/Main (for f...
- 01:03 PM Bug #57611: qa: failure during qa/workunits/fs/snaps/snaptest-git-ceph.sh
- Milind, please mark this as a duplicate. Run wiki is here - https://tracker.ceph.com/projects/cephfs/wiki/Main (for f...
- 01:03 PM Bug #57612: qa: segmentation fault during qa/workunits/libcephfs/test.sh
- Milind, please mark this as a duplicate. Run wiki is here - https://tracker.ceph.com/projects/cephfs/wiki/Main (for f...
- 01:02 PM Documentation #57115: Explanation for cache pressure
- Eugen Block wrote:
> Venky Shankar wrote:
> > Eugen, thanks for the detailed explanation. It would be immensely hel...
- 12:57 PM Bug #57594 (Triaged): pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan....
- Jos, PTAL.
- 12:51 PM Bug #57655 (Triaged): qa: fs:mixed-clients kernel_untar_build failure
- 12:07 PM Backport #57665 (In Progress): pacific: Do not abort MDS on unknown messages
- https://github.com/ceph/ceph/pull/48253
- 11:41 AM Backport #57666 (In Progress): quincy: Do not abort MDS on unknown messages
- https://github.com/ceph/ceph/pull/48252
- 11:30 AM Bug #57359 (Fix Under Review): mds/Server: -ve values cause unexpected client eviction while hand...
- 10:12 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Milind Changire wrote:
> This doesn't crash on my local ubuntu focal vstart cluster.
> The stack trace points to a ...
- 08:06 AM Documentation #57673 (Resolved): doc: document the relevance of mds_namespace mount option
- Users get lost trying to mount the file system with the old syntax and find no mention of the 'mds_namespace' mount option in...
- 01:00 AM Bug #57210 (Fix Under Review): NFS client unable to see newly created files when listing director...
09/23/2022
- 05:19 PM Bug #57411 (Duplicate): mutiple mds crash seen while running db workloads with regular snapshots ...
- Apparently this one is known.
- 12:20 PM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- This doesn't crash on my local ubuntu focal vstart cluster.
The stack trace points to a boost::lexical_cast<>
Hyp...
- 10:41 AM Backport #57671 (Resolved): pacific: mds: damage table only stores one dentry per dirfrag
- https://github.com/ceph/ceph/pull/48262
- 10:40 AM Backport #57670 (Resolved): quincy: mds: damage table only stores one dentry per dirfrag
- https://github.com/ceph/ceph/pull/48261
- 10:30 AM Bug #57249 (Pending Backport): mds: damage table only stores one dentry per dirfrag
- 07:22 AM Backport #55749 (In Progress): quincy: snap_schedule: remove subvolume(-group) interfaces
- 07:20 AM Backport #55748 (In Progress): pacific: snap_schedule: remove subvolume(-group) interfaces
- 07:17 AM Backport #57666 (Resolved): quincy: Do not abort MDS on unknown messages
- 07:17 AM Backport #57665 (Resolved): pacific: Do not abort MDS on unknown messages
- https://github.com/ceph/ceph/pull/48253
- 07:16 AM Bug #56522 (Pending Backport): Do not abort MDS on unknown messages
- 02:05 AM Bug #57657 (Resolved): mds: scrub locates mismatch between child accounted_rstats and self rstats
- ...
- 01:03 AM Bug #57655 (Pending Backport): qa: fs:mixed-clients kernel_untar_build failure
- ...
09/22/2022
- 01:59 PM Bug #57641 (Fix Under Review): Ceph FS fscrypt clones missing fscrypt metadata
- h2. Summary
When cloning a Ceph FS volume containing fscrypt-enabled subtrees,
the clone misses fscrypt metadata....
- 12:58 PM Bug #57523: CephFS performance degredation in mountpoint
- Hi,
yes, the MDS is running with the default configuration; we only tested if two active MDS were helping but it didn't ...
- 11:57 AM Bug #57634 (Closed): mgr/volumes: small fixes in doc.
- Not Required!! Closing it.
- 10:50 AM Bug #57634 (Closed): mgr/volumes: small fixes in doc.
- 10:08 AM Bug #47693 (Rejected): qa: snap replicator tests
- 10:08 AM Bug #54064 (Resolved): pacific: qa: mon assertion failure during upgrade
- 05:32 AM Fix #57295 (Rejected): qa: remove RHEL from job matrix
- 05:30 AM Backport #57631 (Rejected): quincy: first-damage.sh does not handle dentries with spaces
- 05:30 AM Backport #57630 (Rejected): pacific: first-damage.sh does not handle dentries with spaces
- 05:25 AM Bug #57586 (Pending Backport): first-damage.sh does not handle dentries with spaces
- 05:22 AM Bug #54557: scrub repair does not clear earlier damage health status
- Neeraj, please take this one.
- 04:27 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Dhairya Parmar wrote:
> I can tell this issue exists even in Quincy. The Ceph environment I used operates on `ceph 17...
09/21/2022
- 11:16 AM Bug #57620 (Resolved): mgr/volumes: addition of human-readable flag to volume info command
- It intends to add a human-readable flag
to the volume info command so that the used
and avail size can be shown wi...
- 09:47 AM Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created an...
- Ramana, never mind. I see it's for the .snap directory rather than its parent. The thing is, attrs for .snap are initializ...
- 09:39 AM Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created an...
- Ramana, nfs-ganesha is serving stale readdir data by checking the mtime/change_attr of .snap, I presume, or of its parent...
- 08:44 AM Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created an...
- Assigning this to myself since I'm looking into related issues.
- 04:41 AM Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS ...
- Identified that the issue was in MDCache::predirty_journal_parents()
Making the following change fixed the issue. ...
09/20/2022
- 07:48 PM Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS ...
- Venky Shankar wrote:
> Ramana Raja wrote:
> > Continued to root cause... As described in https://tracker.ceph.com/i...
- 04:25 PM Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS ...
- We started running fs:workload tests w/ cephfs subvolumes (and that has caught a number of bugs). I think we do not r...
- 03:11 PM Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS ...
- Ramana Raja wrote:
> Continued to root cause... As described in https://tracker.ceph.com/issues/57210#note-7, the st...
- 12:08 AM Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS ...
- Continued to root cause... As described in https://tracker.ceph.com/issues/57210#note-7, the stx_version (change_attr...
- 03:09 PM Bug #57598 (Fix Under Review): qa: test_recovery_pool uses wrong recovery procedure
- 09:53 AM Bug #57361: cephfs: rbytes seems not work correctly
- Setting the priority to low: rbytes is disabled by default. As the directory size is the recursive size with rbytes enabled, if...
- 09:28 AM Bug #48773: qa: scrub does not complete
- Kotresh, was this RCA'd?
- 09:06 AM Feature #56489: qa: test mgr plugins with standby mgr failover
- Milind Changire wrote:
> "Tested snap_schedule and cephfs-mirroring modules":http://pulpito.ceph.com/mchangir-2022-0... - 08:59 AM Feature #56489: qa: test mgr plugins with standby mgr failover
- "Tested snap_schedule and cephfs-mirroring modules":http://pulpito.ceph.com/mchangir-2022-08-11_16:22:27-fs:mgr-failo...
- 07:23 AM Bug #57612 (Duplicate): qa: segmentation fault during qa/workunits/libcephfs/test.sh
- http://qa-proxy.ceph.com/teuthology/mchangir-2022-09-12_07:34:04-fs-wip-mchangir-mds-uninline-file-during-scrub-testi...
- 07:21 AM Bug #57611 (Duplicate): qa: failure during qa/workunits/fs/snaps/snaptest-git-ceph.sh
- http://qa-proxy.ceph.com/teuthology/mchangir-2022-09-12_07:34:04-fs-wip-mchangir-mds-uninline-file-during-scrub-testi...
- 07:07 AM Bug #56529: ceph-fs crashes on getfattr
- Hi, just a confirmation. The problem is solved in ML 5.19.10-1.el7 and probably all other stable kernel lines, includ...
- 07:02 AM Bug #57610 (New): qa: timeout during unwinding of qa/workunits/suites/fsstress.sh
- http://qa-proxy.ceph.com/teuthology/mchangir-2022-09-12_07:34:04-fs-wip-mchangir-mds-uninline-file-during-scrub-testi...
- 07:01 AM Documentation #57115: Explanation for cache pressure
- Venky Shankar wrote:
> Eugen, thanks for the detailed explanation. It would be immensely help if you could carve all... - 05:38 AM Bug #57591 (Fix Under Review): cephfs: qa enables kclient for newop test
- 05:32 AM Bug #57591 (In Progress): cephfs: qa enables kclient for newop test
- 05:34 AM Bug #57580 (Fix Under Review): Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.Test...
- 05:32 AM Bug #57580 (In Progress): Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
- 04:46 AM Bug #56537 (Resolved): cephfs-top: wrong/infinitely changing wsp values
- 04:45 AM Backport #57155 (Resolved): pacific: cephfs-top: wrong/infinitely changing wsp values
- 04:28 AM Backport #57058 (Resolved): pacific: mgr/volumes: Handle internal metadata directories under '/vo...
09/19/2022
- 04:33 PM Backport #57274 (Resolved): pacific: mgr/stats: missing clients in perf stats command output.
- 03:49 PM Backport #57274: pacific: mgr/stats: missing clients in perf stats command output.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47866
merged - 04:32 PM Backport #57439 (Resolved): pacific: client: track (and forward to MDS) average read/write/metada...
- 04:24 PM Backport #57439: pacific: client: track (and forward to MDS) average read/write/metadata latency
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47978
merged
- 04:32 PM Feature #51434 (Resolved): pybind/mgr/volumes: add basic introspection
- 04:31 PM Backport #57263 (Resolved): pacific: pybind/mgr/volumes: add basic introspection
- 03:46 PM Backport #57263: pacific: pybind/mgr/volumes: add basic introspection
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47769
merged - 03:45 PM Backport #57155: pacific: cephfs-top: wrong/infinitely changing wsp values
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47647
merged - 02:14 PM Bug #57598 (Resolved): qa: test_recovery_pool uses wrong recovery procedure
- https://pulpito.ceph.com/pdonnell-2022-09-17_00:34:11-fs:functional-wip-pdonnell-testing-20220912.004719-distro-defau...
- 02:05 PM Bug #57597 (Fix Under Review): qa: data-scan/journal-tool do not output debugging in upstream tes...
- 02:04 PM Bug #57597 (Resolved): qa: data-scan/journal-tool do not output debugging in upstream testing
- 12:32 PM Bug #57531: Mutipule zombie processes, and more and more
- Venky Shankar wrote:
> I assume you are talking about https://tracker.ceph.com/issues/57411 here? If yes, could you ...
- 09:38 AM Bug #57531 (Need More Info): Mutipule zombie processes, and more and more
- I assume you are talking about https://tracker.ceph.com/issues/57411 here? If yes, could you please provide more debu...
- 11:53 AM Documentation #57115: Explanation for cache pressure
- Eugen, thanks for the detailed explanation. It would be immensely help if you could carve all this into a PR :)
- 09:48 AM Bug #57523: CephFS performance degredation in mountpoint
- Robert Sander wrote:
> Hi,
>
> we have a cluster with 7 nodes each with 10 SSD OSDs providing CephFS to
> a Clo... - 09:24 AM Backport #57242: quincy: mgr/volumes: Clone operations are failing with Assertion Error
- > https://github.com/ceph/ceph/pull/47894
merged - 09:17 AM Backport #57241 (In Progress): pacific: mgr/volumes: Clone operations are failing with Assertion ...
- 09:11 AM Bug #57594: pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
- Seen in pacific teuthology run. Needs to be checked in main branch.
- 09:11 AM Bug #57594 (Can't reproduce): pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_da...
- http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_19:05:25-fs-wip-yuri11-testing-2022-09-16-0958-pacific-distro-de...
- 09:07 AM Backport #57555 (In Progress): pacific: qa: test_subvolume_snapshot_clone_quota_exceeded fails Co...
- 09:05 AM Backport #57554 (In Progress): quincy: qa: test_subvolume_snapshot_clone_quota_exceeded fails Com...
- 08:42 AM Bug #57591 (Pending Backport): cephfs: qa enables kclient for newop test
- The kclient has already been fixed [1], so we need to enable the kclient for the qa test.
[1] https://tracker.ceph.com/issues/...
- 02:59 AM Bug #57083 (Resolved): ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_metho...
09/18/2022
- 02:37 PM Bug #57589 (Fix Under Review): cephfs-data-scan: scan_links is not verbose enough
- 02:31 PM Bug #57589 (Resolved): cephfs-data-scan: scan_links is not verbose enough
- cephfs-data-scan is used to recover a cluster. Depending on the size of the fs to recover, it may run for hours or even days, a...
09/17/2022
- 09:37 PM Bug #56063: Snapshot retention config lost after mgr restart
- I can still reproduce this error in v16.2.9.
09/16/2022
- 10:17 PM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- I can tell this issue exists even in Quincy. The Ceph environment I used operates on `ceph 17.0.0-14883-g10f7d25d`, fi...
- 09:59 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Dhairya is working on this as a part of https://github.com/ceph/ceph/pull/47649
- 07:16 PM Bug #57586 (Fix Under Review): first-damage.sh does not handle dentries with spaces
- 07:00 PM Bug #57586 (Resolved): first-damage.sh does not handle dentries with spaces
- 05:17 PM Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS ...
- Based on inputs from Frank Filz and Jeff Layton on #ganesha IRC and https://github.com/nfs-ganesha/nfs-ganesha/issues...
- 03:00 PM Bug #57580 (Resolved): Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
- https://pulpito.ceph.com/vshankar-2022-09-16_10:20:09-fs-wip-vshankar-testing1-20220905-132828-testing-default-smithi...
- 11:52 AM Bug #57083: ceph-fuse: monclient(hunting): handle_auth_bad_method server allowed_methods [2] but ...
- https://github.com/ceph/ceph/pull/47528 merged
- 10:44 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Milind, PTAL. FWIW - https://github.com/ceph/ceph/pull/47504 fixes a similar issue for RGW.
- 10:42 AM Bug #57361 (Triaged): cephfs: rbytes seems not work correctly
- 10:42 AM Bug #57361: cephfs: rbytes seems not work correctly
- Neeraj, PTAL.
- 10:27 AM Fix #51177 (Fix Under Review): pybind/mgr/volumes: investigate moving calls which may block on li...
- 09:33 AM Bug #57280 (Fix Under Review): qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Fail...
- 05:29 AM Backport #57331 (Resolved): pacific: Test failure: test_client_metrics_and_metadata (tasks.cephfs...
- 05:27 AM Backport #57277 (Resolved): pacific: mgr/stats: 'perf stats' command shows incorrect output with ...
- 05:25 AM Backport #57279 (Resolved): pacific: mgr/stats: add fs_name as field in perf stats command output
- 01:10 AM Backport #57571 (In Progress): pacific: client: do not uninline data for read
- 01:07 AM Backport #57572 (In Progress): quincy: client: do not uninline data for read
- 12:35 AM Bug #56638 (Resolved): Restore the AT_NO_ATTR_SYNC define in libcephfs
- 12:34 AM Backport #57252 (Resolved): pacific: Restore the AT_NO_ATTR_SYNC define in libcephfs
- Merged!
- 12:33 AM Bug #24894 (Resolved): client: allow overwrites to files with size greater than the max_file_size...
- 12:33 AM Backport #55930 (Resolved): quincy: client: allow overwrites to files with size greater than the ...
- 12:32 AM Backport #55931 (Resolved): pacific: client: allow overwrites to files with size greater than the...
- 12:27 AM Backport #55931: pacific: client: allow overwrites to files with size greater than the max_file_s...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47972
merged
09/15/2022
- 10:24 PM Backport #57331: pacific: Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_mds_m...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47851
merged - 10:24 PM Backport #57277: pacific: mgr/stats: 'perf stats' command shows incorrect output with non-existin...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47851
merged - 10:23 PM Backport #57279: pacific: mgr/stats: add fs_name as field in perf stats command output
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47851
merged - 08:55 PM Feature #7320 (Fix Under Review): qa: thrash directory fragmentation
- 06:52 PM Backport #57264 (Resolved): quincy: pybind/mgr/volumes: add basic introspection
- 06:51 PM Feature #55821 (Resolved): pybind/mgr/volumes: interface to check the presence of subvolumegroups...
- 06:47 PM Backport #57200 (Resolved): quincy: snap_schedule: replace .snap with the client configured snap ...
- 04:19 PM Backport #57572 (Resolved): quincy: client: do not uninline data for read
- https://github.com/ceph/ceph/pull/48132
- 04:19 PM Backport #57571 (Resolved): pacific: client: do not uninline data for read
- https://github.com/ceph/ceph/pull/48133
- 04:15 PM Bug #56553 (Pending Backport): client: do not uninline data for read
- 10:29 AM Feature #55121 (Fix Under Review): cephfs-top: new options to limit and order-by
- 09:27 AM Backport #57555 (Resolved): pacific: qa: test_subvolume_snapshot_clone_quota_exceeded fails Comma...
- https://github.com/ceph/ceph/pull/48165
- 09:27 AM Backport #57554 (Resolved): quincy: qa: test_subvolume_snapshot_clone_quota_exceeded fails Comman...
- https://github.com/ceph/ceph/pull/48164
- 09:25 AM Bug #56632 (Pending Backport): qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFail...
- 03:01 AM Backport #57392 (In Progress): quincy: client: abort the client daemons when we couldn't invalida...
- 02:59 AM Backport #57393 (In Progress): pacific: client: abort the client daemons when we couldn't invalid...
- 02:53 AM Backport #57395 (In Progress): pacific: crash: int Client::_do_remount(bool): abort
- 02:50 AM Backport #57394 (In Progress): quincy: crash: int Client::_do_remount(bool): abort
- 02:21 AM Bug #57154 (Fix Under Review): kernel/fuse client using ceph ID with uid restricted MDS caps cann...
09/14/2022
- 07:12 PM Feature #7320 (In Progress): qa: thrash directory fragmentation
- 10:41 AM Feature #55197 (Fix Under Review): cephfs-top: make cephfs-top display scrollable like top
- 08:56 AM Bug #57084 (Fix Under Review): Permissions of the .snap directory do not inherit ACLs
- 12:51 AM Bug #57531 (Need More Info): Multiple zombie processes, and more and more
- Trying to reproduce BUG#57411 on octopus version (15.2.17, cephadm, docker 20.10)
*************************************...
09/13/2022
- 02:50 PM Bug #57523 (New): CephFS performance degradation in mountpoint
- Hi,
we have a cluster with 7 nodes each with 10 SSD OSDs providing CephFS to
a CloudStack system as primary stor...
- 12:50 PM Bug #57361: cephfs: rbytes seems not work correctly
- Venky Shankar wrote:
> Xiubo Li wrote:
> > I had two client with the **rbytes** enabled, one is kclient and another...
- 12:21 PM Bug #57361: cephfs: rbytes seems not work correctly
- Xiubo Li wrote:
> I had two client with the **rbytes** enabled, one is kclient and another is ceph-fuse client. In k...
- 12:24 PM Backport #57440 (Resolved): quincy: client: track (and forward to MDS) average read/write/metadat...
- 04:54 AM Bug #45834 (Closed): cephadm: "fs volume create cephfs" overwrites existing placement specification
- This behavior may be because the placement spec needs a valid fs to apply the spec to.
closing tracker for now
lo...
- 04:51 AM Bug #48562 (Closed): qa: scrub - object missing on disk; some files may be lost
- closing tracker for now
lowering priority to low
please reopen in case this is seen again
- 04:47 AM Bug #50250 (Closed): mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/...
- closing tracker for now
lowering priority to low
please reopen in case this is seen again
- 04:44 AM Bug #52260 (Closed): 1 MDSs are read only | pacific 16.2.5
- 04:43 AM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- closing tracker for now
lowering priority to low
please reopen in case this is seen again
09/09/2022
- 08:46 PM Backport #56541: quincy: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Sn...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48013
merged
- 08:45 PM Backport #55930: quincy: client: allow overwrites to files with size greater than the max_file_si...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47971
merged
- 01:33 PM Backport #57440: quincy: client: track (and forward to MDS) average read/write/metadata latency
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47977
merged
- 06:43 AM Feature #57481: mds: enhance scrub to fragment/merge dirfrags
- Additionally, the MDS does not fragment directory snapshots. The tracker for this - https://tracker.ceph.com/issues/5...
09/08/2022
- 08:49 PM Feature #57481 (Fix Under Review): mds: enhance scrub to fragment/merge dirfrags
- Typically, this can only be induced via a client workload. That can be expensive due to the caps generated. See MDBal...
- 06:00 PM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- We would need to judge whether it's expected. If so, we can extend the ignore list.
- 02:25 PM Backport #56469: quincy: mgr/volumes: display in-progress clones for a snapshot
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47894
merged
- 12:26 PM Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS ...
- Setup
=====
- ceph vstart cluster (ceph main branch: HEAD 91c57c3fb160db1c95d412b966d703ca08ee75ef)
- NFS-Ganesha ...
- 10:44 AM Backport #56542: pacific: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = S...
- Backport not necessary for pacific since the PR deals with rados object removal in lieu of sqlite database creation using...
- 06:41 AM Backport #56541 (In Progress): quincy: crash: File "mgr/snap_schedule/module.py", in __init__: se...
- 06:02 AM Bug #56063 (Closed): Snapshot retention config lost after mgr restart
- Closing after confirmation from Andreas.
- 05:55 AM Bug #54462 (Duplicate): Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055...
- Duplicate of https://tracker.ceph.com/issues/55332
- 05:54 AM Backport #56152 (Resolved): pacific: mgr/snap_schedule: schedule updates are not persisted across...
- 05:53 AM Backport #57158: quincy: doc: update snap-schedule notes regarding 'start' time
- Milind, please post a backport.
- 05:53 AM Backport #57157: pacific: doc: update snap-schedule notes regarding 'start' time
- Milind, please post a backport.
- 05:53 AM Backport #55385 (Resolved): quincy: mgr/snap_schedule: include timezone information in scheduled ...
- 05:52 AM Feature #16745 (Fix Under Review): mon: prevent allocating snapids allocated for CephFS
- 05:51 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- This is a duplicate of one of Xiubo's fixes, isn't it?
Please mark this a dup if that's the case.
09/07/2022
- 08:15 AM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- Unable to reproduce the issue with 3 iterations of the fs/snaps script suite as of today.
- 06:59 AM Backport #57282 (In Progress): pacific: cephfs-top:addition of filesystem menu(improving GUI)
- 01:52 AM Bug #54384 (Resolved): mds: crash due to seemingly unrecoverable metadata error
- 01:52 AM Backport #56462 (Resolved): pacific: mds: crash due to seemingly unrecoverable metadata error
- 01:51 AM Bug #55331 (Resolved): pjd failure (caused by xattr's value not consistent between auth MDS and r...
- 01:48 AM Backport #56449 (Resolved): pacific: pjd failure (caused by xattr's value not consistent between ...
09/06/2022
- 08:44 PM Backport #56462: pacific: mds: crash due to seemingly unrecoverable metadata error
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47433
merged
- 08:43 PM Backport #56449: pacific: pjd failure (caused by xattr's value not consistent between auth MDS an...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47056
merged
- 05:56 PM Feature #51716 (Resolved): Add option in `fs new` command to start rank 0 in failed state
- 05:55 PM Backport #52680 (Resolved): pacific: Add option in `fs new` command to start rank 0 in failed state
- 04:09 PM Backport #57058: pacific: mgr/volumes: Handle internal metadata directories under '/volumes' prop...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47512
merged
- 01:08 PM Bug #50387 (Duplicate): client: fs/snaps failure
- Duplicate of https://tracker.ceph.com/issues/54460
- 12:34 PM Bug #57446 (In Progress): qa: test_subvolume_snapshot_info_if_orphan_clone fails
- 12:29 PM Bug #57446 (Pending Backport): qa: test_subvolume_snapshot_info_if_orphan_clone fails
- The test test_subvolume_snapshot_info_if_orphan_clone failed in the following test run
http://pulpito.front.sepia....
- 12:26 PM Backport #56712 (Resolved): pacific: mds: standby-replay daemon always removed in MDSMonitor::pre...
- 11:57 AM Backport #57283 (In Progress): quincy: cephfs-top:addition of filesystem menu(improving GUI)
- 08:17 AM Backport #57156 (Resolved): quincy: cephfs-top: wrong/infinitely changing wsp values
- 08:11 AM Bug #53126: In the 5.4.0 kernel, the mount of ceph-fuse fails
- Hi Jiang,
The fix for this is available in quincy (17.*). Do you mind upgrading your cluster?
- 07:24 AM Bug #57084: Permissions of the .snap directory do not inherit ACLs
- Venky Shankar wrote:
> Is this the user-space or the kernel client?
It happens with kernel 5.15 and ceph-fuse 1...
- 06:18 AM Bug #57084: Permissions of the .snap directory do not inherit ACLs
- Thanks for the detailed report, Robert. This sounds like a bug.
Is this the user-space or the kernel client?
- 06:34 AM Bug #57205: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_...
- The snapshot removal failed because the snapshot had pending clones. Please see below....
- 06:33 AM Bug #57205 (In Progress): Test failure: test_subvolume_group_ls_filter_internal_directories (task...
- 06:22 AM Bug #54701 (Resolved): crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CD...
- 06:21 AM Backport #55933 (Resolved): quincy: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&...
- 06:21 AM Backport #55932 (Resolved): pacific: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>...
- 06:21 AM Bug #53857 (Resolved): qa: fs:upgrade test fails mds count check
- 06:20 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- Lowering priority -- this is an issue with the test case rather than a bug in cephfs-mirror.
- 06:19 AM Bug #52487 (Resolved): qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.TestFragment...
- 05:36 AM Bug #51589 (Resolved): mds: crash when journaling during replay
- 02:19 AM Backport #57041 (Resolved): pacific: pybind/mgr/volumes: interface to check the presence of subvo...
09/05/2022
- 02:11 PM Backport #57439 (In Progress): pacific: client: track (and forward to MDS) average read/write/met...
- 01:24 PM Backport #57439 (Resolved): pacific: client: track (and forward to MDS) average read/write/metada...
- https://github.com/ceph/ceph/pull/47978
- 01:54 PM Bug #57411: multiple mds crash seen while running db workloads with regular snapshots and journal ...
- ...
- 01:02 PM Bug #57411: multiple mds crash seen while running db workloads with regular snapshots and journal ...
- Thanks for reproducing this, Hemanth. This has been seen in the past but there were no logs to debug. Could you pleas...
- 01:49 PM Backport #57440 (In Progress): quincy: client: track (and forward to MDS) average read/write/meta...
- 01:24 PM Backport #57440 (Resolved): quincy: client: track (and forward to MDS) average read/write/metadat...
- https://github.com/ceph/ceph/pull/47977
- 01:17 PM Feature #48619 (Pending Backport): client: track (and forward to MDS) average read/write/metadata...
- 12:49 PM Bug #56261: crash: Migrator::import_notify_abort(CDir*, std::set<CDir*, std::less<CDir*>, std::al...
- Since there isn't any reference to an assertion failure in the backtrace, I had to resort to a code walk-through.
Nothing ...
- 12:33 PM Fix #51177 (In Progress): pybind/mgr/volumes: investigate moving calls which may block on libceph...
- 10:00 AM Backport #57261: pacific: standby-replay mds is removed from MDSMap unexpectedly
- Waiting for https://github.com/ceph/ceph/pull/47282 to be merged in pacific.
- 09:26 AM Fix #52591 (Resolved): mds: mds_oft_prefetch_dirfrags = false is not qa tested
- 09:14 AM Backport #55931 (In Progress): pacific: client: allow overwrites to files with size greater than ...
- 09:13 AM Backport #55930 (In Progress): quincy: client: allow overwrites to files with size greater than t...
- 08:56 AM Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Tamar,
Were you able to go through the changes for the rgw fix here: https://github.com/ceph/ceph/pull/47504 to se...
- 08:07 AM Bug #54760: crash: void CDir::try_remove_dentries_for_stray(): assert(dn->get_linkage()->is_null())
- I think https://github.com/ceph/ceph/pull/46331 would mitigate this issue, however, the unlink and openc are from dif...
- 08:04 AM Bug #54760: crash: void CDir::try_remove_dentries_for_stray(): assert(dn->get_linkage()->is_null())
- This looks like a race between an unlink and openc (open w/ O_CREAT) in the MDS -- the unlink RPC projects the old an...
- 12:35 AM Backport #57253 (Resolved): quincy: Restore the AT_NO_ATTR_SYNC define in libcephfs
09/03/2022
- 02:51 PM Backport #57253: quincy: Restore the AT_NO_ATTR_SYNC define in libcephfs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47861
merged
- 02:50 PM Backport #57264: quincy: pybind/mgr/volumes: add basic introspection
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47768
merged
09/02/2022
- 10:10 PM Backport #57370: quincy: standby-replay mds is removed from MDSMap unexpectedly
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47902
merged
- 05:37 PM Bug #57411: multiple mds crash seen while running db workloads with regular snapshots and journal ...
- mds logs are copied here - http://magna002.ceph.redhat.com/ceph-qe-logs/hemanth/ceph_tracker/57411/
- 05:34 PM Bug #57411 (Duplicate): multiple mds crash seen while running db workloads with regular snapshots ...
While the pgbench workloads are running, with snapshots being taken after each run and with regular journal flushes.
I ...
- 03:27 PM Backport #57156: quincy: cephfs-top: wrong/infinitely changing wsp values
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47648
merged
- 12:36 PM Backport #57395 (Resolved): pacific: crash: int Client::_do_remount(bool): abort
- https://github.com/ceph/ceph/pull/48108
- 12:36 PM Backport #57394 (Resolved): quincy: crash: int Client::_do_remount(bool): abort
- https://github.com/ceph/ceph/pull/48107
- 12:35 PM Backport #57393 (Resolved): pacific: client: abort the client daemons when we couldn't invalidate...
- https://github.com/ceph/ceph/pull/48109
- 12:35 PM Backport #57392 (Resolved): quincy: client: abort the client daemons when we couldn't invalidate ...
- https://github.com/ceph/ceph/pull/48110
- 12:25 PM Bug #56476 (Resolved): qa/suites: evicted client unhandled in 4-compat_client.yaml
- 12:25 PM Bug #57126 (Pending Backport): client: abort the client daemons when we couldn't invalidate the d...
- 12:25 PM Bug #56249 (Pending Backport): crash: int Client::_do_remount(bool): abort
- 06:08 AM Bug #55710 (Resolved): cephfs-shell: exit code unset when command has missing argument
- The PR was merged by Venky a few months ago - https://github.com/ceph/ceph/pull/46337#event-6657873439.
- 06:04 AM Bug #55719 (Resolved): test_cephfs_shell: temporary files cause tests to fail with vstart_runner.py
- The PR was merged a few months ago.
- 01:37 AM Backport #56448 (Resolved): quincy: pjd failure (caused by xattr's value not consistent between a...
09/01/2022
- 11:05 PM Backport #56448: quincy: pjd failure (caused by xattr's value not consistent between auth MDS and...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47057
merged
- 01:50 PM Backport #57370 (In Progress): quincy: standby-replay mds is removed from MDSMap unexpectedly
- 01:11 PM Backport #57370 (Resolved): quincy: standby-replay mds is removed from MDSMap unexpectedly
- https://github.com/ceph/ceph/pull/47902
- 01:14 PM Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created an...
- Dhairya, PTAL.
- 01:13 PM Bug #57205: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_...
- Kotresh, PTAL.
- 01:01 PM Backport #57262 (Rejected): octopus: standby-replay mds is removed from MDSMap unexpectedly
- 10:02 AM Bug #57154 (In Progress): kernel/fuse client using ceph ID with uid restricted MDS caps cannot up...
- 07:42 AM Backport #57242 (In Progress): quincy: mgr/volumes: Clone operations are failing with Assertion E...
- 06:58 AM Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into anothe...
- Discussion Summary with Patrick
1. Have a thread for each module to execute module commands. Since the finisher th...
- 06:24 AM Backport #57057 (Resolved): quincy: mgr/volumes: Handle internal metadata directories under '/vol...
- 06:21 AM Bug #56632 (Fix Under Review): qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFail...
- 02:37 AM Backport #57363 (In Progress): pacific: ffsb.sh test failure
- 02:13 AM Backport #57363 (Resolved): pacific: ffsb.sh test failure
- https://github.com/ceph/ceph/pull/47891
- 02:37 AM Backport #57239 (In Progress): pacific: ceph-fs crashes on getfattr
- 02:18 AM Backport #57362 (In Progress): quincy: ffsb.sh test failure
- 02:13 AM Backport #57362 (Resolved): quincy: ffsb.sh test failure
- https://github.com/ceph/ceph/pull/47890
- 02:17 AM Backport #57240 (In Progress): quincy: ceph-fs crashes on getfattr
- 02:06 AM Bug #54461 (Pending Backport): ffsb.sh test failure
- 01:22 AM Bug #57361 (Triaged): cephfs: rbytes seems not work correctly
- I had two clients with **rbytes** enabled, one a kclient and the other a ceph-fuse client. In the kclient, after I crea...