Activity
From 06/28/2022 to 07/27/2022
07/27/2022
- 04:59 PM Bug #56727 (Fix Under Review): mgr/volumes: Subvolume creation failed on FIPs enabled system
- 11:06 AM Bug #56727 (Resolved): mgr/volumes: Subvolume creation failed on FIPs enabled system
The subvolume creation hits the following traceback on fips enabled system....
- 02:02 PM Bug #56067: Cephfs data loss with root_squash enabled
- Please open a PR for discussion.
- 12:29 PM Bug #56067: Cephfs data loss with root_squash enabled
- I made the following change to the Locker code, and then checked how kclient and fuse client behaved with root_squash...
- 03:59 AM Bug #56067: Cephfs data loss with root_squash enabled
- Patrick Donnelly wrote:
> Good work tracking that down Ramana! I don't think it's reasonable to try to require the c...
- 01:44 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Xiubo Li wrote:
> Our purpose here is to recover the snaprealms and snaptable from the data pool. It's hard to do th...
- 08:17 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- The **listsnaps** could list the snapids of the objects:...
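As a hedged illustration of the listsnaps idea above (an editor-added sketch, not part of the original comment; pool and object names are placeholders), the rados CLI can report whether a file's head object in the data pool still carries snapshot clones:
<pre>
import subprocess

def object_has_snaps(pool: str, obj: str) -> bool:
    """Return True if 'rados listsnaps' reports snapshot clones for the object."""
    out = subprocess.run(["rados", "-p", pool, "listsnaps", obj],
                         capture_output=True, text=True, check=True).stdout
    # Assumed output shape: one row per clone with a numeric clone id plus a
    # "head" row; any numeric clone id means snapshotted data still exists.
    return any(line.split()[0].isdigit() for line in out.splitlines() if line.strip())

# e.g. object_has_snaps("cephfs_data", "1000098a1a5.00000000")
</pre>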
- 07:32 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- > We should be able to see that we're missing snapshots by listing snaps on objects?
Yeah. If a file was snapshote...
- 07:10 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Here is my test case locally https://github.com/lxbsz/ceph/tree/wip-56605-...
- 05:42 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Xiubo Li wrote:
> Here is my test case locally https://github.com/lxbsz/ceph/tree/wip-56605-draft.
>
> By using:
...
- 01:30 PM Feature #55121: cephfs-top: new options to limit and order-by
- Jos Collin wrote:
> Greg Farnum wrote:
> > Can't fs top already change the sort order? I thought that was done in N...
- 01:11 PM Documentation #56730: doc: update snap-schedule notes regarding 'start' time
- Adding chat discussion from the #cephfs IRC channel:
<gauravsitlani> Hi team i have a quick question regarding : http...
- 01:06 PM Documentation #56730 (Resolved): doc: update snap-schedule notes regarding 'start' time
- Add notes to snap-schedule mgr plugin documentation about the handling of time zone for the 'start' time.
Primary ...
- 12:55 PM Bug #46140 (Closed): mds: couldn't see the logs in log file before the daemon get aborted
- After a brief discussion with @Xiubo Li, we decided to close this tracker as this issue was encountered while debuggi...
- 11:50 AM Bug #55112 (Resolved): cephfs-shell: saving files doesn't work as expected
- 11:49 AM Backport #55629 (Resolved): pacific: cephfs-shell: saving files doesn't work as expected
- 11:49 AM Bug #55242 (Resolved): cephfs-shell: put command should accept both path mandatorily and validate...
- 11:49 AM Backport #55625 (Resolved): pacific: cephfs-shell: put command should accept both path mandatoril...
- 11:36 AM Bug #40860 (Resolved): cephfs-shell: raises incorrect error when regfiles are passed to be deleted
- 11:36 AM Documentation #54551 (Resolved): docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds ca...
- 11:35 AM Backport #55238 (Resolved): pacific: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-md...
- 10:04 AM Bug #56659: mgr: crash after upgrade pacific to main
- Patrick,
Your patch to fix the libsqlite3-mod-ceph dependency and the eventual crash has worked to resolve the crash...
07/26/2022
- 08:44 PM Bug #56659 (Duplicate): mgr: crash after upgrade pacific to main
- 02:31 PM Backport #56712 (In Progress): pacific: mds: standby-replay daemon always removed in MDSMonitor::...
- 01:05 PM Backport #56712 (Resolved): pacific: mds: standby-replay daemon always removed in MDSMonitor::pre...
- https://github.com/ceph/ceph/pull/47282
- 02:30 PM Backport #56713 (In Progress): quincy: mds: standby-replay daemon always removed in MDSMonitor::p...
- 01:05 PM Backport #56713 (Resolved): quincy: mds: standby-replay daemon always removed in MDSMonitor::prep...
- https://github.com/ceph/ceph/pull/47281
- 01:03 PM Bug #56666 (Pending Backport): mds: standby-replay daemon always removed in MDSMonitor::prepare_b...
- 12:14 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Here is my test case locally https://github.com/lxbsz/ceph/tree/wip-56605-draft.
By using:...
07/25/2022
- 03:20 PM Bug #56698 (Resolved): client: FAILED ceph_assert(_size == 0)
- ...
- 03:17 PM Bug #56697 (New): qa: fs/snaps fails for fuse
- ...
- 02:46 PM Bug #56695 (Resolved): [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
- ...
- 02:38 PM Bug #56694 (Fix Under Review): qa: avoid blocking forever on hung umount
- 02:34 PM Bug #56694 (Rejected): qa: avoid blocking forever on hung umount
- /ceph/teuthology-archive/pdonnell-2022-07-22_19:42:58-fs-wip-pdonnell-testing-20220721.235756-distro-default-smithi/6...
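A minimal sketch of the idea in this ticket (an assumption about the approach, not the actual qa change): wrap the unmount in a timeout so a hung umount fails fast instead of blocking the run forever.
<pre>
import subprocess

def try_umount(mountpoint: str, timeout: int = 300) -> bool:
    """Attempt to unmount; return False instead of hanging if umount gets stuck."""
    try:
        subprocess.run(["sudo", "umount", mountpoint], check=True, timeout=timeout)
        return True
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
        return False  # caller can escalate: lazy/forced unmount, fail the job, etc.
</pre>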
- 11:18 AM Bug #56626 (In Progress): "ceph fs volume create" fails with error ERANGE
- 11:16 AM Bug #56626: "ceph fs volume create" fails with error ERANGE
- Hi Victoria,
I am not very familiar with the osd configs but as per code if 'osd_pool_default_pg_autoscale_mode' i...
- 06:39 AM Bug #55858 (Need More Info): Pacific 16.2.7 MDS constantly crashing
- 04:59 AM Backport #56469 (In Progress): quincy: mgr/volumes: display in-progress clones for a snapshot
07/24/2022
- 06:20 PM Bug #56067: Cephfs data loss with root_squash enabled
- Patrick Donnelly wrote:
> I don't think it's reasonable to try to require the client mount to keep track of which ap...
07/23/2022
- 05:27 PM Bug #55759 (Resolved): mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- 05:27 PM Bug #55822 (Resolved): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
- 05:25 PM Backport #56103 (Resolved): quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
07/22/2022
- 06:18 PM Feature #50470 (Resolved): cephfs-top: multiple file system support
- 06:17 PM Bug #52982 (Resolved): client: Inode::hold_caps_until should be a time from a monotonic clock
- 06:17 PM Backport #55937 (Resolved): pacific: client: Inode::hold_caps_until should be a time from a monot...
- 05:31 PM Bug #55971 (Resolved): LibRadosMiscConnectFailure.ConnectFailure test failure
- 05:30 PM Backport #56005 (Resolved): pacific: LibRadosMiscConnectFailure.ConnectFailure test failure
- 05:30 PM Backport #56004 (Resolved): quincy: LibRadosMiscConnectFailure.ConnectFailure test failure
- 04:37 PM Backport #55936 (Resolved): quincy: client: Inode::hold_caps_until should be a time from a monoto...
- 12:07 PM Backport #55936: quincy: client: Inode::hold_caps_until should be a time from a monotonic clock
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46563
merged
- 04:37 PM Backport #56013 (Resolved): quincy: quota support for subvolumegroup
- 12:10 PM Backport #56013: quincy: quota support for subvolumegroup
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46667
merged
- 04:37 PM Backport #56108 (Resolved): quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapsh...
- 12:12 PM Backport #56108: quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' co...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46804
merged
- 04:36 PM Bug #56067: Cephfs data loss with root_squash enabled
- Good work tracking that down Ramana! I don't think it's reasonable to try to require the client mount to keep track o...
- 12:13 PM Backport #56103: quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46805
merged
- 12:09 PM Backport #54578: quincy: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding pat...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46647
merged
- 02:58 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Matan Breizman wrote:
> > Meaning,
> > > We can see the 1000098a1a5.00000000 object is still...
- 02:52 AM Bug #56605 (Need More Info): Snapshot and xattr scanning in cephfs-data-scan
- Matan Breizman wrote:
> Meaning,
> > We can see the 1000098a1a5.00000000 object is still in the data pool: ...
> ...
- 12:33 AM Bug #56638: Restore the AT_NO_ATTR_SYNC define in libcephfs
- John Mulligan wrote:
> I'm setting the backport field now for pacific & quincy. I hope I am setting it properly. Ple...
- 12:21 AM Bug #56666 (Fix Under Review): mds: standby-replay daemon always removed in MDSMonitor::prepare_b...
07/21/2022
- 10:25 PM Bug #56067: Cephfs data loss with root_squash enabled
- Greg Farnum wrote:
> Hmm. Is the kernel client just losing track of root_squash when flushing caps? That is a differ...
- 12:49 PM Bug #56067: Cephfs data loss with root_squash enabled
- Hmm. Is the kernel client just losing track of root_squash when flushing caps? That is a different path than the more...
- 12:36 PM Bug #56067 (In Progress): Cephfs data loss with root_squash enabled
- 02:14 AM Bug #56067: Cephfs data loss with root_squash enabled
- With vstart cluster (ceph main branch), I was able to reproduce the issue with a kernel client (5.17.11-200.fc35.x86_...
- 08:19 PM Bug #56666 (Resolved): mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
- If a standby-replay daemon's beacon makes it to MDSMonitor::prepare_beacon (rarely), it's automatically removed by th...
- 02:54 PM Bug #56638: Restore the AT_NO_ATTR_SYNC define in libcephfs
- I'm setting the backport field now for pacific & quincy. I hope I am setting it properly. Please correct it if I've f...
- 12:09 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Hi Xiubo, Thank you for the detailed information!
From a RADOS standpoint everything is working as expected.
We a...
- 10:22 AM Bug #54283: qa/cephfs: is_mounted() depends on a mutable variable
- Rishabh Dave wrote:
> The PR for this ticket needed fix for "ticket 56476":https://tracker.ceph.com/issues/56476 in ...
- 08:48 AM Bug #56659 (Duplicate): mgr: crash after upgrade pacific to main
- ...
07/20/2022
- 05:50 PM Bug #56605 (In Progress): Snapshot and xattr scanning in cephfs-data-scan
- 01:10 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Let me describe how cephfs acts for this:
**1**, For the directory and its contents, which are all metadata in...
- 01:25 PM Bug #55858: Pacific 16.2.7 MDS constantly crashing
- Hi Mike,
We would need more information on this to proceed further.
1. Output of 'ceph fs dump' ?
2. Was multi...
- 09:03 AM Bug #56063: Snapshot retention config lost after mgr restart
- After updating to 17.2.1 I'm not observing the issue anymore. Now, after failing over the mgr, the retention policy i...
- 08:04 AM Bug #56507 (Duplicate): pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.Te...
- 08:04 AM Bug #56644: qa: test_rapid_creation fails with "No space left on device"
- h3. From https://tracker.ceph.com/issues/56507 -
https://pulpito.ceph.com/yuriw-2022-07-06_13:57:53-fs-wip-yuri4-t...
- 07:01 AM Bug #56644 (Triaged): qa: test_rapid_creation fails with "No space left on device"
- http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default...
- 07:49 AM Bug #55716 (Resolved): cephfs-shell: Cmd2ArgparseError is imported without version check
- The PR was merged by Venky a couple months ago - https://github.com/ceph/ceph/pull/46337#event-6657873439
- 07:32 AM Bug #56416 (Resolved): qa/cephfs: delete path from cmd args after use
- 06:01 AM Feature #56643 (New): scrub: add one subcommand or option to add the missing objects back
- When we are scrub-repairing the metadata, some objects may get lost for various reasons. After the repair finishe...
- 01:45 AM Bug #56638 (Fix Under Review): Restore the AT_NO_ATTR_SYNC define in libcephfs
- 01:37 AM Bug #56638 (In Progress): Restore the AT_NO_ATTR_SYNC define in libcephfs
07/19/2022
- 11:43 PM Backport #55928 (In Progress): quincy: mds: FAILED ceph_assert(dir->get_projected_version() == di...
- Hit this in downstream too.
- 11:40 PM Backport #55929 (In Progress): pacific: mds: FAILED ceph_assert(dir->get_projected_version() == d...
- Patrick Donnelly wrote:
> Xiubo, please do this backport.
Done.
- 04:12 PM Backport #55929 (Need More Info): pacific: mds: FAILED ceph_assert(dir->get_projected_version() =...
- Xiubo, please do this backport.
- 06:14 PM Bug #56632: qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
- This test passed on main branch - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-19_12:12:03-fs:volumes-main-dis...
- 04:03 PM Bug #56632 (Resolved): qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
- 100% reproducible so far.
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2...
- 05:46 PM Bug #56638 (Resolved): Restore the AT_NO_ATTR_SYNC define in libcephfs
- While working on an unrelated topic but building against the current 'quincy' branch - but not a released quincy - we...
- 04:34 PM Bug #56634 (New): qa: workunit snaptest-intodir.sh fails with MDS crash
- http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default...
- 04:16 PM Bug #56633 (Need More Info): mds: crash during construction of internal request
- ...
- 02:20 PM Bug #56626 (Triaged): "ceph fs volume create" fails with error ERANGE
- 02:20 PM Bug #56626: "ceph fs volume create" fails with error ERANGE
- Kotresh, PTAL.
- 01:43 PM Bug #56626 (Closed): "ceph fs volume create" fails with error ERANGE
- Trying to create a CephFS filesystem within a cluster deployed with cephadm fails
Steps followed
1. sudo cephad...
- 05:02 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Thinking about this more: even if the *xattrs* are not lost, we still couldn't recover the snapshot from the data pool. ...
- 01:41 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Greg Farnum wrote:
> Do you have logs/shell output or can you reproduce this, demonstrating the presence of the xatt...
- 12:53 AM Bug #43216 (Resolved): MDSMonitor: removes MDS coming out of quorum election
- 12:53 AM Backport #52636 (Resolved): pacific: MDSMonitor: removes MDS coming out of quorum election
07/18/2022
- 08:38 PM Documentation #49406: Exceeding osd nearfull ratio causes write throttle.
- After wondering for a long time why my clusters get slow at some point, I finally found this as well.
It would be ...
- 03:24 PM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
- For those following along, most MDS operations involve something like "mut->ls = get_current_segment()", and the poss...
- 10:40 AM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
- >
> Yeah, IMO it should be a good habit to use the shared_ptr to avoid potential use-after-free bugs as we hit in c...
- 09:42 AM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
- Tamar Shacked wrote:
> The issue suggests spreading LogSegment* as shared_ptr while class MDLog manages those ptrs l...
- 09:26 AM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
- The issue suggests spreading LogSegment* as shared_ptr while class MDLog manages those ptrs lifetime (creates/stores/...
- 03:23 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
- Do you have logs/shell output or can you reproduce this, demonstrating the presence of the xattr before taking the sn...
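For reference, a hedged way to check for the xattr being asked about (an editor-added illustration with placeholder pool/object names): the first RADOS object of a file, "&lt;ino-hex&gt;.00000000" in the data pool, normally carries a "parent" xattr holding the encoded backtrace that cephfs-data-scan relies on.
<pre>
import subprocess

def read_backtrace_xattr(pool: str, head_object: str) -> bytes:
    """Fetch the raw 'parent' (backtrace) xattr from a file's head object."""
    return subprocess.run(["rados", "-p", pool, "getxattr", head_object, "parent"],
                          capture_output=True, check=True).stdout

# The raw bytes can then be decoded with something like:
#   ceph-dencoder type inode_backtrace_t import <file> decode dump_json
</pre>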
- 02:33 PM Bug #56605 (In Progress): Snapshot and xattr scanning in cephfs-data-scan
- We are doing the recovery step by step with an *alternate metadata pool*; for more detail please see https://docs.ceph.com/en/...
- 06:36 AM Bug #56592 (Triaged): mds: crash when mounting a client during the scrub repair is going on
- ...
- 06:30 AM Feature #55715 (Fix Under Review): pybind/mgr/cephadm/upgrade: allow upgrades without reducing ma...
- 03:46 AM Fix #55567 (Resolved): cephfs-shell: rm returns just the error code and not proper error msg
- 03:46 AM Backport #56591 (Rejected): pacific: qa: iogen workunit: "The following counters failed to be set...
- 03:45 AM Backport #56590 (New): quincy: qa: iogen workunit: "The following counters failed to be set on md...
- 03:45 AM Feature #48911 (Resolved): cephfs-shell needs "ln" command equivalent
- 03:43 AM Bug #54108 (Pending Backport): qa: iogen workunit: "The following counters failed to be set on md...
- 01:37 AM Bug #55778 (Resolved): client: choose auth MDS for getxattr with the Xs caps
- 01:37 AM Backport #56109 (Resolved): quincy: client: choose auth MDS for getxattr with the Xs caps
- 01:37 AM Bug #55824 (Resolved): ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- 01:36 AM Backport #56106 (Resolved): quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown erro...
- 01:35 AM Bug #53504 (Resolved): client: infinite loop "got ESTALE" after mds recovery
- 01:35 AM Backport #55934 (Resolved): quincy: client: infinite loop "got ESTALE" after mds recovery
- 01:35 AM Bug #55253 (Resolved): client: switch to glibc's STATX macros
- 01:35 AM Backport #55994 (Resolved): quincy: client: switch to glibc's STATX macros
- 01:34 AM Bug #53741 (Resolved): crash just after MDS become active
- 01:34 AM Backport #56015 (Resolved): quincy: crash just after MDS become active
07/15/2022
- 08:30 PM Bug #56577 (Pending Backport): mds: client request may complete without queueing next replay request
- We received a report of a cluster with a single active MDS stuck in up:clientreplay. The status was:
...
- 03:29 PM Bug #52430: mds: fast async create client mount breaks racy test
- Copying tracebacks for convenience (recently saw same test fail for different reason) -...
- 02:43 PM Backport #56106: quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46801
merged
- 02:42 PM Backport #56109: quincy: client: choose auth MDS for getxattr with the Xs caps
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46800
merged
- 02:41 PM Backport #56015: quincy: crash just after MDS become active
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46681
merged
- 02:40 PM Backport #55994: quincy: client: switch to glibc's STATX macros
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46680
merged
- 02:39 PM Backport #55926: quincy: Unexpected file access behavior using ceph-fuse
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46595
merged
- 02:39 PM Backport #55933: quincy: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46566
merged
- 02:38 PM Backport #55934: quincy: client: infinite loop "got ESTALE" after mds recovery
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46558
merged
- 10:05 AM Bug #56532 (Fix Under Review): client stalls during vstart_runner test
- 01:02 AM Bug #56532: client stalls during vstart_runner test
- From Milind's reproduction logs, there are two different error codes, which are *1* and *32*:...
- 05:49 AM Backport #56468 (In Progress): pacific: mgr/volumes: display in-progress clones for a snapshot
- 02:46 AM Backport #56527 (In Progress): pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ ...
- 02:44 AM Backport #56526 (In Progress): quincy: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ a...
07/14/2022
- 01:00 PM Bug #56537 (Fix Under Review): cephfs-top: wrong/infinitely changing wsp values
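As an aside on the wsp field tracked in this bug, a toy sketch (not the cephfs-top code, and only an assumption about how the value is derived): a speed computed as the delta of a cumulative byte counter over a time delta goes negative or absurdly large when the counter resets or the time step is zero, unless the deltas are clamped.
<pre>
def rate_mb_per_s(prev_bytes: int, cur_bytes: int, prev_t: float, cur_t: float) -> float:
    """Clamp the deltas so a counter reset or zero time step can't yield negative MB/s."""
    dt = cur_t - prev_t
    if dt <= 0:
        return 0.0
    delta = max(cur_bytes - prev_bytes, 0)
    return delta / dt / (1024 * 1024)
</pre>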
- 11:18 AM Bug #48773: qa: scrub does not complete
- Saw this in my Quincy backport reviews as well -
https://pulpito.ceph.com/yuriw-2022-07-08_17:05:01-fs-wip-yuri2-tes...
- 10:46 AM Backport #56152 (In Progress): pacific: mgr/snap_schedule: schedule updates are not persisted acr...
- 10:40 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- Rishabh, did you get to RCA this?
- 06:09 AM Bug #56522: Do not abort MDS on unknown messages
- Milind Changire wrote:
> Xiubo Li wrote:
> > Milind Changire wrote:
> > > I had started the GETVXATTR RPC implemen...
- 05:31 AM Bug #56522: Do not abort MDS on unknown messages
- Milind Changire wrote:
> Xiubo Li wrote:
> > Milind Changire wrote:
> > > I had started the GETVXATTR RPC implemen...
- 05:14 AM Bug #56522: Do not abort MDS on unknown messages
- Xiubo Li wrote:
> Milind Changire wrote:
> > I had started the GETVXATTR RPC implementation with the introduction o...
- 04:20 AM Bug #56522: Do not abort MDS on unknown messages
- Milind Changire wrote:
> I had started the GETVXATTR RPC implementation with the introduction of a feature bit for t...
- 01:29 AM Bug #56553 (Fix Under Review): client: do not uninline data for read
- 01:20 AM Bug #56553 (Resolved): client: do not uninline data for read
- We don't even ask for, and cannot be sure we have been granted, the Fw caps when reading, so we shouldn't write contents ...
07/13/2022
- 02:13 PM Bug #56529: ceph-fs crashes on getfattr
- Xiubo Li wrote:
> We are still discussing to find a best approach to fix this or similar issues ...
Since my comm...
- 10:03 AM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Dear Xiubo Li, thanks for tracking this down so fast! Would be great if you could indicate h...
- 09:22 AM Bug #56529: ceph-fs crashes on getfattr
- FWIW - we need to get this going: https://tracker.ceph.com/issues/53573.
The question is - how far back in release...
- 04:05 AM Bug #56529: ceph-fs crashes on getfattr
- Just for completeness -- commit 2f4060b8c41004d10d9a64676ccd847f6e1304dd is the (mds side) fix for this.
- 12:54 PM Bug #56522: Do not abort MDS on unknown messages
- I had started the GETVXATTR RPC implementation with the introduction of a feature bit for this very purpose. I was to...
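A small sketch of the feature-bit gating described above (only the CEPHFS_FEATURE_OP_GETVXATTR name comes from this discussion; the bit value and the transport helper below are hypothetical): the client refuses to send the new op to an MDS that does not advertise support, instead of letting the MDS abort on an unknown message.
<pre>
CEPHFS_FEATURE_OP_GETVXATTR = 1 << 18  # placeholder bit position, not the real value

def getvxattr(session_features: int, path: str, name: str):
    if not (session_features & CEPHFS_FEATURE_OP_GETVXATTR):
        # Old MDS: never send an op it won't understand.
        raise NotImplementedError("MDS does not support the getvxattr op")
    return send_getvxattr_op(path, name)  # hypothetical transport call
</pre>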
- 12:43 PM Bug #56522 (Fix Under Review): Do not abort MDS on unknown messages
- 12:23 PM Bug #56522: Do not abort MDS on unknown messages
- Stefan Kooman wrote:
> @Dhairya Parmar
>
> If the connection would be silently closed, it would be highly appreci...
- 11:01 AM Bug #56522: Do not abort MDS on unknown messages
- @Dhairya Parmar
If the connection would be silently closed, it would be highly appreciated that the MDS logs this ... - 10:26 AM Bug #56522: Do not abort MDS on unknown messages
- Greg Farnum wrote:
> Venky Shankar wrote:
>
> > We obviously do not want to abort the mds. If we drop the message...
- 10:29 AM Bug #56537: cephfs-top: wrong/infinitely changing wsp values
- Venky Shankar wrote:
> Jos Collin wrote:
> > wsp(MB/s) field in cephfs-top shows wrong values when there is an IO.
...
- 07:03 AM Bug #56537: cephfs-top: wrong/infinitely changing wsp values
- Jos Collin wrote:
> wsp(MB/s) field in cephfs-top shows wrong values when there is an IO.
>
> Steps to reproduce:...
- 06:33 AM Bug #56537 (Resolved): cephfs-top: wrong/infinitely changing wsp values
- wsp(MB/s) field in cephfs-top shows wrong and negative values changing infinitely.
Steps to reproduce:
1. Create ...
- 09:39 AM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- I don't think a backport to pacific makes sense. The relevant code is only in quincy, so pacific is not affected by t...
- 09:30 AM Bug #56269 (Pending Backport): crash: File "mgr/snap_schedule/module.py", in __init__: self.clien...
- 02:42 AM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- Andreas Teuchert wrote:
> I created a PR that should fix this bug: https://github.com/ceph/ceph/pull/47006.
Thank...
- 09:35 AM Backport #56542 (Rejected): pacific: crash: File "mgr/snap_schedule/module.py", in __init__: self...
- 09:35 AM Backport #56541 (Resolved): quincy: crash: File "mgr/snap_schedule/module.py", in __init__: self....
- https://github.com/ceph/ceph/pull/48013
- 09:28 AM Feature #56489: qa: test mgr plugins with standby mgr failover
- Milind, please have a look on priority :)
- 09:22 AM Bug #46075 (Resolved): ceph-fuse: mount -a on already mounted folder should be ignored
- 09:21 AM Backport #55040 (Rejected): pacific: ceph-fuse: mount -a on already mounted folder should be ignored
- Fix is not critical to pacific hence rejecting fix for pacific.
- 09:18 AM Backport #56469 (New): quincy: mgr/volumes: display in-progress clones for a snapshot
- 08:15 AM Backport #55539 (Resolved): pacific: cephfs-top: multiple file system support
- 07:20 AM Bug #56483 (Fix Under Review): mgr/stats: missing clients in perf stats command output.
- 07:05 AM Bug #56483: mgr/stats: missing clients in perf stats command output.
- Venky Shankar wrote:
> Neeraj, does this fix require backport to q/p or is it due to a recently pushed change?
It...
- 06:57 AM Bug #56483: mgr/stats: missing clients in perf stats command output.
- Neeraj, does this fix require backport to q/p or is it due to a recently pushed change?
- 04:50 AM Feature #55121: cephfs-top: new options to limit and order-by
- Having a `sort-by-field` option is handy for the point I mentioned in https://tracker.ceph.com/issues/55121#note-4. T...
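A toy sketch of the limit/order-by behaviour being discussed (editor-added, not cephfs-top code; the field names are just examples): given per-client metrics, show only the top N clients ordered by one chosen field.
<pre>
clients = [
    {"id": "client.4123", "wsp": 12.5, "rsp": 0.3},
    {"id": "client.4711", "wsp": 1.2, "rsp": 9.8},
    {"id": "client.5000", "wsp": 7.0, "rsp": 4.4},
]

def top_clients(metrics, sort_by="wsp", limit=2):
    """Order clients by the given field and keep only the first 'limit' rows."""
    return sorted(metrics, key=lambda m: m[sort_by], reverse=True)[:limit]

print(top_clients(clients))  # the two busiest writers first
</pre>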
- 02:23 AM Bug #55583 (Fix Under Review): Intermittent ParsingError failure in mgr/volumes module during "c...
- 02:19 AM Bug #51281 (Duplicate): qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1...
- Xiubo Li wrote:
> Venky,
>
> This should have been fixed in https://tracker.ceph.com/issues/56011.
Right. Mark...
- 02:18 AM Bug #46504 (Can't reproduce): pybind/mgr/volumes: self.assertTrue(check < timo) fails
- Haven't seen this failure again. Please reopen if required.
- 02:17 AM Feature #48619 (Resolved): client: track (and forward to MDS) average read/write/metadata latency
07/12/2022
- 11:24 PM Bug #56522: Do not abort MDS on unknown messages
- Venky Shankar wrote:
> We obviously do not want to abort the mds. If we drop the message, how do clients react? Bl...
- 01:30 PM Bug #56522: Do not abort MDS on unknown messages
- I think the MDS should close the session and blocklist the client. If a newer client is using features an older clust...
- 12:52 PM Bug #56522 (Triaged): Do not abort MDS on unknown messages
- 05:13 AM Bug #56522: Do not abort MDS on unknown messages
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Right now, in Server::dispatch(), we abort the MDS if we get a messag...
- 04:47 AM Bug #56522: Do not abort MDS on unknown messages
- Greg Farnum wrote:
> Right now, in Server::dispatch(), we abort the MDS if we get a message type we don't understand...
- 01:47 AM Bug #56522: Do not abort MDS on unknown messages
- Greg Farnum wrote:
> Right now, in Server::dispatch(), we abort the MDS if we get a message type we don't understand...
- 11:13 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
- Draft PR: https://github.com/ceph/ceph/pull/47067
- 06:12 AM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
- This is really interesting. Waiting for your PR to understand in what scenario this can happen.
- 03:36 PM Bug #56529: ceph-fs crashes on getfattr
- Dear Xiubo Li, thanks for tracking this down so fast! Would be great if you could indicate here when an updated kclie...
- 02:50 PM Bug #56529 (Fix Under Review): ceph-fs crashes on getfattr
- Added *_CEPHFS_FEATURE_OP_GETVXATTR_* feature bit support on the MDS side and fixed it in libcephfs in PR#47063. Will...
- 02:27 PM Bug #56529: ceph-fs crashes on getfattr
- It was introduced by:...
- 02:18 PM Bug #56529: ceph-fs crashes on getfattr
- ...
- 02:08 PM Bug #56529 (In Progress): ceph-fs crashes on getfattr
- 02:07 PM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Quoting Gregory Farnum in the conversation on the ceph-user list:
>
> > That obviously sh...
- 01:34 PM Bug #56529: ceph-fs crashes on getfattr
- Quoting Gregory Farnum in the conversation on the ceph-user list:
> That obviously shouldn't happen. Please file a...
- 01:22 PM Bug #56529 (Need More Info): ceph-fs crashes on getfattr
- 01:22 PM Bug #56529: ceph-fs crashes on getfattr
- Tried the Pacific and Quincy cephs with the latest upstream kernel, I couldn't reproduce this. I am sure I have also ...
- 12:57 PM Bug #56529: ceph-fs crashes on getfattr
- Will work on it.
- 10:59 AM Bug #56529 (Resolved): ceph-fs crashes on getfattr
- From https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GCZ3F3ONVA2YIR7DJNQJFG53Y4DWQABN/
We made a v...
- 03:25 PM Bug #56532 (Resolved): client stalls during vstart_runner test
- client logs show the following message:...
- 01:33 PM Fix #48027 (Resolved): qa: add cephadm tests for CephFS in QA
- This is fixed I believe. We're using cephadm for fs:workload now. Also some in fs:upgrade.
- 01:33 PM Bug #51281: qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079...
- Venky,
This should have been fixed in https://tracker.ceph.com/issues/56011.
- 01:05 PM Backport #56112 (In Progress): pacific: Test failure: test_flush (tasks.cephfs.test_readahead.Tes...
- 01:03 PM Backport #56111 (In Progress): quincy: Test failure: test_flush (tasks.cephfs.test_readahead.Test...
- 12:58 PM Backport #56469 (Need More Info): quincy: mgr/volumes: display in-progress clones for a snapshot
- 12:56 PM Bug #56435 (Triaged): octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), ...
- 12:54 PM Bug #56506 (Triaged): pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_s...
- 07:04 AM Backport #56107 (Resolved): pacific: mgr/volumes: Remove incorrect 'size' in the output of 'snaps...
- 07:03 AM Backport #56104 (Resolved): pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- 06:00 AM Backport #56527 (Resolved): pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any...
- https://github.com/ceph/ceph/pull/47111
- 06:00 AM Backport #56526 (Resolved): quincy: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_...
- https://github.com/ceph/ceph/pull/47110
- 05:57 AM Bug #56012 (Pending Backport): mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_repla...
- 04:48 AM Backport #56465 (In Progress): pacific: xfstests-dev generic/444 test failed
- 04:44 AM Backport #56464 (In Progress): quincy: xfstests-dev generic/444 test failed
- 04:38 AM Backport #56449 (In Progress): pacific: pjd failure (caused by xattr's value not consistent betwe...
- 04:38 AM Backport #56448 (In Progress): quincy: pjd failure (caused by xattr's value not consistent betwee...
07/11/2022
- 09:05 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
- I think I have a fix for this issue. I'm working on verifying it for go-ceph. If that all goes well I'll be putting toge...
- 05:18 PM Bug #56522 (Resolved): Do not abort MDS on unknown messages
- Right now, in Server::dispatch(), we abort the MDS if we get a message type we don't understand.
This is horrible:...
- 02:35 PM Bug #56269 (Fix Under Review): crash: File "mgr/snap_schedule/module.py", in __init__: self.clien...
- 01:34 PM Backport #56104: pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46806
merged
- 01:33 PM Backport #56107: pacific: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' c...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46803
merged
07/10/2022
- 05:54 AM Support #56443: OSD USED Size contains unknown data
- Hi,
It's a CephFS pool version 16.2.9 with 1 data pool and 1 metadata pool. It has 3 MDS servers and 3 MON servers. ...
07/09/2022
- 01:14 AM Bug #56518 (Rejected): client: when reconnecting new targets it will be skipped
- sorry, not a bug.
- 01:08 AM Bug #56518 (Rejected): client: when reconnecting new targets it will be skipped
- The newly created session's state is set to STATE_OPENING, not STATE_NEW. For more detail please see https://github.com/cep...
- 01:00 AM Bug #54653: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map.count(stag))
- Luis Henriques wrote:
> It looks like the fix for this bug has broken the build for the latest versions of fuse3 (fy...
- 12:52 AM Bug #56517 (Fix Under Review): fuse_ll.cc: error: expected identifier before ‘{’ token 1379 | {
- 12:47 AM Bug #56517 (Resolved): fuse_ll.cc: error: expected identifier before ‘{’ token 1379 | {
- When libfuse >= 3.0:...
07/08/2022
- 01:27 PM Backport #56056: pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap...
- Please see my last comment on https://tracker.ceph.com/issues/54653
- 01:27 PM Backport #56055: quincy: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_...
- Please see my last comment on https://tracker.ceph.com/issues/54653
- 01:25 PM Bug #54653: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map.count(stag))
- It looks like the fix for this bug has broken the build for the latest versions of fuse3 (fyi the fuse version I've o...
- 10:48 AM Bug #56507 (Duplicate): pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.Te...
- https://pulpito.ceph.com/yuriw-2022-07-06_13:57:53-fs-wip-yuri4-testing-2022-07-05-0719-pacific-distro-default-smithi...
- 10:41 AM Bug #56506 (Triaged): pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_s...
- https://pulpito.ceph.com/yuriw-2022-07-06_13:57:53-fs-wip-yuri4-testing-2022-07-05-0719-pacific-distro-default-smithi...
- 06:14 AM Bug #56483 (In Progress): mgr/stats: missing clients in perf stats command output.
07/07/2022
- 01:12 PM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- I created a PR that should fix this bug: https://github.com/ceph/ceph/pull/47006.
- 05:08 AM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- Milind, please take a look.
- 10:17 AM Feature #56489 (New): qa: test mgr plugins with standby mgr failover
- Related to https://tracker.ceph.com/issues/56269 which is seen when failing an active mgr. The standby mgr hits a tra...
- 09:50 AM Feature #55121: cephfs-top: new options to limit and order-by
- Neeraj Pratap Singh wrote:
> Jos Collin wrote:
> > Greg Farnum wrote:
> > > Can't fs top already change the sort o...
- 08:03 AM Feature #55121: cephfs-top: new options to limit and order-by
- Jos Collin wrote:
> Greg Farnum wrote:
> > Can't fs top already change the sort order? I thought that was done in N...
- 08:00 AM Feature #55121: cephfs-top: new options to limit and order-by
- Venky Shankar wrote:
> Jos Collin wrote:
> > Based on my discussion with Greg, I'm closing this ticket. Because the...
- 05:15 AM Feature #55121: cephfs-top: new options to limit and order-by
- Greg Farnum wrote:
> Can't fs top already change the sort order? I thought that was done in Neeraj's first tranche o...
- 06:57 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- Rishabh Dave wrote:
> Path directory @/home/ubuntu/cephtest/mnt.0/testdir@ is created twice. Copying following from ...
- 04:59 AM Bug #56446 (In Progress): Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.T...
- 04:59 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- Path directory @/home/ubuntu/cephtest/mnt.0/testdir@ is created twice. Copying following from https://pulpito.ceph.co...
- 05:17 AM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- Milind, please take a look.
- 02:26 AM Bug #56476 (Fix Under Review): qa/suites: evicted client unhandled in 4-compat_client.yaml
07/06/2022
- 11:08 PM Feature #55121: cephfs-top: new options to limit and order-by
- Can't fs top already change the sort order? I thought that was done in Neeraj's first tranche of improvements.
- 05:52 AM Feature #55121 (New): cephfs-top: new options to limit and order-by
- Jos Collin wrote:
> Based on my discussion with Greg, I'm closing this ticket. Because the issue that the customer r...
- 02:58 PM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- The full backtrace is:...
- 02:53 PM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- The full backtrace is:...
- 02:42 PM Support #56443: OSD USED Size contains unknown data
- Hi Greg,
You'd need to give more details. This tracker is filed under CephFS, however, it does not mention anythin...
- 12:47 PM Bug #56483 (Resolved): mgr/stats: missing clients in perf stats command output.
- perf stats doesn't get the client info w.r.t filesystems created after running the perf stats command once with exist...
- 10:32 AM Bug #54283: qa/cephfs: is_mounted() depends on a mutable variable
- The PR for this ticket needed fix for "ticket 56476":https://tracker.ceph.com/issues/56476 in order to pass QA runs.
- 09:10 AM Bug #56476 (Resolved): qa/suites: evicted client unhandled in 4-compat_client.yaml
- In "@4-compat_client.yaml@":https://github.com/ceph/ceph/blob/main/qa/suites/fs/upgrade/featureful_client/upgraded_cl...
- 06:54 AM Bug #56282 (Duplicate): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() ...
- This is a known bug and has been fixed upstream. The backport PR is still under review: https://tracker.cep...
07/05/2022
- 02:09 PM Feature #56428: add command "fs deauthorize"
- Hmm, I've had concerns about interfaces like this in the past. What happens if:
caps mds = "allow rw fsname=a, all... - 09:15 AM Backport #56469 (Resolved): quincy: mgr/volumes: display in-progress clones for a snapshot
- https://github.com/ceph/ceph/pull/47894
- 09:15 AM Backport #56468 (Resolved): pacific: mgr/volumes: display in-progress clones for a snapshot
- https://github.com/ceph/ceph/pull/47112
- 09:10 AM Bug #55041 (Pending Backport): mgr/volumes: display in-progress clones for a snapshot
- 02:50 AM Backport #56465 (Resolved): pacific: xfstests-dev generic/444 test failed
- https://github.com/ceph/ceph/pull/47059
- 02:50 AM Backport #56464 (Resolved): quincy: xfstests-dev generic/444 test failed
- https://github.com/ceph/ceph/pull/47058
- 02:49 AM Bug #56010 (Pending Backport): xfstests-dev generic/444 test failed
07/04/2022
- 09:36 PM Backport #54242 (Resolved): octopus: mds: clients can send a "new" op (file operation) and crash ...
- 09:28 PM Backport #54241 (Resolved): pacific: mds: clients can send a "new" op (file operation) and crash ...
- 09:16 PM Backport #55348 (Resolved): quincy: mgr/volumes: Show clone failure reason in clone status command
- 09:10 PM Backport #55540 (Resolved): quincy: cephfs-top: multiple file system support
- 09:03 PM Backport #55336 (Resolved): quincy: Issue removing subvolume with retained snapshots - Possible q...
- 09:02 PM Backport #55428 (Resolved): quincy: unaccessible dentries after fsstress run with namespace-restr...
- 09:00 PM Backport #55626 (Resolved): quincy: cephfs-shell: put command should accept both path mandatorily...
- 09:00 PM Backport #55628 (Resolved): quincy: cephfs-shell: creates directories in local file system even i...
- 08:59 PM Backport #55630 (Resolved): quincy: cephfs-shell: saving files doesn't work as expected
- 03:15 PM Backport #56462 (Resolved): pacific: mds: crash due to seemingly unrecoverable metadata error
- https://github.com/ceph/ceph/pull/47433
- 03:15 PM Backport #56461 (Resolved): quincy: mds: crash due to seemingly unrecoverable metadata error
- https://github.com/ceph/ceph/pull/47432
- 03:10 PM Bug #54384 (Pending Backport): mds: crash due to seemingly unrecoverable metadata error
- 12:42 PM Bug #52438 (Resolved): qa: ffsb timeout
- 12:39 PM Bug #54106 (Duplicate): kclient: hang during workunit cleanup
- This is a duplicate of https://tracker.ceph.com/issues/55857.
- 12:26 PM Bug #56282 (In Progress): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state(...
- 08:59 AM Backport #56056 (In Progress): pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): as...
- 08:48 AM Backport #56055 (In Progress): quincy: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): ass...
- 03:00 AM Backport #56449 (Resolved): pacific: pjd failure (caused by xattr's value not consistent between ...
- https://github.com/ceph/ceph/pull/47056
- 03:00 AM Backport #56448 (Resolved): quincy: pjd failure (caused by xattr's value not consistent between a...
- https://github.com/ceph/ceph/pull/47057
- 02:58 AM Bug #55331 (Pending Backport): pjd failure (caused by xattr's value not consistent between auth M...
- 02:44 AM Bug #56446 (Pending Backport): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
- Seen here: https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-defa...
07/03/2022
- 12:02 PM Support #56443 (New): OSD USED Size contains unknown data
- Hi,
We have a problem where the pool reports ~1 GB of data, which is associated with a type of SS...
07/02/2022
- 07:13 PM Feature #56442 (New): mds: build asok command to dump stray files and associated caps
- To diagnose what is delaying reintegration or deletion.
- 01:07 PM Bug #55762 (Fix Under Review): mgr/volumes: Handle internal metadata directories under '/volumes'...
- 01:06 PM Backport #56014 (Resolved): pacific: quota support for subvolumegroup
- 01:04 PM Feature #55401 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) for subvolu...
- 01:04 PM Backport #55802 (Resolved): pacific: mgr/volumes: allow users to add metadata (key-value pairs) f...
07/01/2022
- 05:42 PM Backport #51323: octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45159
merged
- 01:37 PM Backport #52634: octopus: mds sends cap updates with btime zeroed out
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45164
merged
- 01:36 PM Backport #50914: octopus: MDS heartbeat timed out between during executing MDCache::start_files_t...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45157
merged
- 01:26 PM Bug #50868: qa: "kern.log.gz already exists; not overwritten"
- /a/yuriw-2022-06-30_22:36:16-upgrade:pacific-x-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6907793/
- 07:02 AM Bug #54111 (Resolved): data pool attached to a file system can be attached to another file system
- 04:08 AM Feature #55121 (Closed): cephfs-top: new options to limit and order-by
- Based on my discussion with Greg, I'm closing this ticket. Because the issue that the customer reported in BZ[1] is p...
- 03:17 AM Bug #56435: octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), after wait...
- The clients have been unregistered at *_2022-06-24T20:00:11_*:...
- 03:13 AM Bug #56435 (Triaged): octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), ...
- /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smithi/6897870
The unre...
- 03:01 AM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
- Laura Flores wrote:
> /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smi...
06/30/2022
- 08:18 PM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
- /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smithi/6897870
- 06:55 PM Bug #50868: qa: "kern.log.gz already exists; not overwritten"
- /a/yuriw-2022-06-30_14:20:05-rados-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6907396/
- 04:57 PM Bug #56384 (Resolved): ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- 10:04 AM Feature #56428 (New): add command "fs deauthorize"
- Since entity auth keyrings can now hold auth caps for multiple Ceph FSs, it is very tedious and very error-prone to r...
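A toy illustration of what such a command would automate (an editor-added assumption based on the caps syntax quoted earlier in this feature, e.g. caps mds = "allow rw fsname=a, allow rw fsname=b"): drop only the clauses that mention one filesystem while keeping the rest.
<pre>
def drop_fs_from_caps(caps: str, fsname: str) -> str:
    """Remove the clauses that reference the given fsname from a comma-separated
    caps string; a hand-rolled stand-in for the proposed 'fs deauthorize'."""
    parts = [p.strip() for p in caps.split(",")]
    kept = [p for p in parts if f"fsname={fsname}" not in p and f"data={fsname}" not in p]
    return ", ".join(kept)

print(drop_fs_from_caps("allow rw fsname=a, allow rw fsname=b", "a"))
# -> "allow rw fsname=b"
</pre>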
06/29/2022
- 08:38 PM Backport #56112: pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Venky Shankar wrote:
> Dhairya, please do the backport.
https://github.com/ceph/ceph/pull/46901
- 02:15 PM Backport #56112: pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Dhairya, please do the backport.
- 07:44 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- https://github.com/ceph/ceph/pull/46899
- 07:41 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Venky Shankar wrote:
> Dhairya, please do the backport.
Okay, sure.
- 02:15 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Dhairya, please do the backport.
- 04:49 PM Bug #52123: mds sends cap updates with btime zeroed out
- Not sure what has to happen to unwedge this backport.
- 02:48 PM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Posting an update here based on discussion between me, Greg and Patrick:
Short term plan: Helper script to identif...
- 11:08 AM Bug #56416 (Resolved): qa/cephfs: delete path from cmd args after use
- Method conduct_neg_test_for_write_caps() in qa/tasks/cephfs/caps_helper.py appends path to command arguments but does...
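A toy sketch of the pattern being fixed (the helper name comes from the description above; run_cmd is a hypothetical stand-in): if the path appended to a shared argument list is not removed after use, later calls see a stale path.
<pre>
def conduct_neg_test_for_write_caps(cmdargs: list, path: str) -> None:
    cmdargs.append(path)
    try:
        run_cmd(cmdargs)   # hypothetical stand-in for running the ceph command
    finally:
        cmdargs.pop()      # delete the path from cmd args after use
</pre>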
- 09:22 AM Bug #56414 (Fix Under Review): mounting subvolume shows size/used bytes for entire fs, not subvolume
- 09:18 AM Bug #56414 (In Progress): mounting subvolume shows size/used bytes for entire fs, not subvolume
- Hit the same issue in libcephfs.
- 09:18 AM Bug #56414 (Resolved): mounting subvolume shows size/used bytes for entire fs, not subvolume
- When mounting a subvolume at the base dir of the subvolume, the kernel client correctly shows the size/usage of a sub...
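A small editor-added sketch of what this bug is about (assuming a subvolume mounted at a placeholder path with a quota set on its root directory): statvfs() on the mount should reflect the ceph.quota.max_bytes value, not the size of the whole filesystem.
<pre>
import os

mount = "/mnt/subvol"  # assumed mount point of the subvolume
quota = int(os.getxattr(mount, "ceph.quota.max_bytes"))  # quota on the subvolume root
st = os.statvfs(mount)
print("df-reported size:", st.f_blocks * st.f_frsize, "expected (quota):", quota)
</pre>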
- 01:02 AM Backport #56110 (Resolved): pacific: client: choose auth MDS for getxattr with the Xs caps
- 01:01 AM Backport #56105 (Resolved): pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown err...
- 01:00 AM Backport #56016 (Resolved): pacific: crash just after MDS become active
- 01:00 AM Bug #54411 (Resolved): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 f...
- 01:00 AM Backport #55449 (Resolved): pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm ...
- 12:59 AM Backport #55993 (Resolved): pacific: client: switch to glibc's STATX macros
- 12:58 AM Backport #55935 (Resolved): pacific: client: infinite loop "got ESTALE" after mds recovery
- 12:58 AM Bug #55329 (Resolved): qa: add test case for fsync crash issue
- 12:57 AM Backport #55660 (Resolved): pacific: qa: add test case for fsync crash issue
- 12:56 AM Backport #55757 (Resolved): pacific: mds: flush mdlog if locked and still has wanted caps not sat...
06/28/2022
- 04:46 PM Bug #17594 (In Progress): cephfs: permission checking not working (MDS should enforce POSIX permi...
- 04:19 PM Bug #53045 (Resolved): stat->fsid is not unique among filesystems exported by the ceph server
- 04:04 PM Bug #53765 (Resolved): mount helper mangles the new syntax device string by qualifying the name
- 04:04 PM Fix #52068: qa: add testing for "ms_mode" mount option
- This appears to be waiting for a pacific backport.
- 04:00 PM Fix #52068: qa: add testing for "ms_mode" mount option
- I think this is in now, right?
- 04:02 PM Bug #50719 (Can't reproduce): xattr returning from the dead (sic!)
- No response in several months. Closing case. Ralph, feel free to reopen if you have more info to share.
- 03:58 PM Bug #52134 (Can't reproduce): botched cephadm upgrade due to mds failures
- Haven't seen this in some time.
- 03:53 PM Bug #41192 (Won't Fix): mds: atime not being updated persistently
- I don't see us fixing this in order to get local atime semantics. Closing WONTFIX.
- 03:52 PM Bug #50826: kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
- Handing this back to Patrick for now. I haven't seen this occur myself. Is this still a problem? Should we close it out?
- 03:17 PM Backport #56105: pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46802
merged
- 03:15 PM Backport #56110: pacific: client: choose auth MDS for getxattr with the Xs caps
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46799
merged
- 03:15 PM Backport #55449: pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46798
merged
- 03:14 PM Backport #56016: pacific: crash just after MDS become active
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46682
merged
- 03:14 PM Backport #55993: pacific: client: switch to glibc's STATX macros
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46679
merged
- 03:12 PM Backport #54577: pacific: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding pa...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46646
merged
- 03:11 PM Backport #55935: pacific: client: infinite loop "got ESTALE" after mds recovery
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46557
merged
- 03:10 PM Backport #55660: pacific: qa: add test case for fsync crash issue
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46425
merged
- 03:10 PM Backport #55659: pacific: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46424
merged
- 03:08 PM Backport #55757: pacific: mds: flush mdlog if locked and still has wanted caps not satisfied
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46423
merged
- 01:45 PM Feature #55821 (Fix Under Review): pybind/mgr/volumes: interface to check the presence of subvolu...
- 01:31 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
- Hi there, sorry for delays, this was very tricky to get info on as it did not reproduce outside of our CI. So it requ...
- 12:28 PM Bug #53214 (Resolved): qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42...
- 12:00 PM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
- Venky Shankar wrote:
> Xiubo, please take a look.
Sure.
- 10:50 AM Bug #56282 (Triaged): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() ==...
- 10:50 AM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
- Xiubo, please take a look.
- 11:30 AM Bug #56384 (Fix Under Review): ceph/test.sh: check_response erasure-code didn't find erasure-code...
- 09:55 AM Bug #56380: crash: Client::_get_vino(Inode*)
- Venky Shankar wrote:
> Xiubo Li wrote:
> > This should be fixed by https://github.com/ceph/ceph/pull/45614, in http...
- 09:46 AM Bug #56380 (Duplicate): crash: Client::_get_vino(Inode*)
- Dup: https://tracker.ceph.com/issues/54653
- 09:45 AM Bug #56380: crash: Client::_get_vino(Inode*)
- Xiubo Li wrote:
> This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/p...
- 06:53 AM Bug #56380: crash: Client::_get_vino(Inode*)
- This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/pull/45614/files#dif...
- 09:54 AM Backport #56113 (Rejected): pacific: data pool attached to a file system can be attached to anoth...
- 09:53 AM Backport #56114 (Rejected): quincy: data pool attached to a file system can be attached to anothe...
- 09:48 AM Bug #56263 (Duplicate): crash: Client::_get_vino(Inode*)
- Dup: https://tracker.ceph.com/issues/54653
- 06:53 AM Bug #56263: crash: Client::_get_vino(Inode*)
- This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/pull/45614/files#dif...
- 07:02 AM Bug #56249: crash: int Client::_do_remount(bool): abort
- Should be fixed by https://tracker.ceph.com/issues/54049.
- 06:41 AM Bug #56397 (Fix Under Review): client: `df` will show incorrect disk size if the quota size is no...
- 02:27 AM Bug #56397 (In Progress): client: `df` will show incorrect disk size if the quota size is not ali...