Activity

From 06/26/2022 to 07/25/2022

07/25/2022

03:20 PM Bug #56698 (Resolved): client: FAILED ceph_assert(_size == 0)
... Patrick Donnelly
03:17 PM Bug #56697 (New): qa: fs/snaps fails for fuse
... Patrick Donnelly
02:46 PM Bug #56695 (Resolved): [RHEL stock] pjd test failures(a bug that need to wait the unlink to finish)
... Patrick Donnelly
02:38 PM Bug #56694 (Fix Under Review): qa: avoid blocking forever on hung umount
Patrick Donnelly
02:34 PM Bug #56694 (Rejected): qa: avoid blocking forever on hung umount
/ceph/teuthology-archive/pdonnell-2022-07-22_19:42:58-fs-wip-pdonnell-testing-20220721.235756-distro-default-smithi/6... Patrick Donnelly
11:18 AM Bug #56626 (In Progress): "ceph fs volume create" fails with error ERANGE
Kotresh Hiremath Ravishankar
11:16 AM Bug #56626: "ceph fs volume create" fails with error ERANGE
Hi Victoria,
I am not very familiar with the osd configs, but as per the code, if 'osd_pool_default_pg_autoscale_mode' i...
Kotresh Hiremath Ravishankar
06:39 AM Bug #55858 (Need More Info): Pacific 16.2.7 MDS constantly crashing
Kotresh Hiremath Ravishankar
04:59 AM Backport #56469 (In Progress): quincy: mgr/volumes: display in-progress clones for a snapshot
Nikhilkumar Shelke

07/24/2022

06:20 PM Bug #56067: Cephfs data loss with root_squash enabled
Patrick Donnelly wrote:
> I don't think it's reasonable to try to require the client mount to keep track of which ap...
Ramana Raja

07/23/2022

05:27 PM Bug #55759 (Resolved): mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
Nikhilkumar Shelke
05:27 PM Bug #55822 (Resolved): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
Nikhilkumar Shelke
05:25 PM Backport #56103 (Resolved): quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
Nikhilkumar Shelke

07/22/2022

06:18 PM Feature #50470 (Resolved): cephfs-top: multiple file system support
Neeraj Pratap Singh
06:17 PM Bug #52982 (Resolved): client: Inode::hold_caps_until should be a time from a monotonic clock
Neeraj Pratap Singh
06:17 PM Backport #55937 (Resolved): pacific: client: Inode::hold_caps_until should be a time from a monot...
Neeraj Pratap Singh
05:31 PM Bug #55971 (Resolved): LibRadosMiscConnectFailure.ConnectFailure test failure
Laura Flores
05:30 PM Backport #56005 (Resolved): pacific: LibRadosMiscConnectFailure.ConnectFailure test failure
Laura Flores
05:30 PM Backport #56004 (Resolved): quincy: LibRadosMiscConnectFailure.ConnectFailure test failure
Laura Flores
04:37 PM Backport #55936 (Resolved): quincy: client: Inode::hold_caps_until should be a time from a monoto...
Patrick Donnelly
12:07 PM Backport #55936: quincy: client: Inode::hold_caps_until should be a time from a monotonic clock
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46563
merged
Yuri Weinstein
04:37 PM Backport #56013 (Resolved): quincy: quota support for subvolumegroup
Patrick Donnelly
12:10 PM Backport #56013: quincy: quota support for subvolumegroup
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46667
merged
Yuri Weinstein
04:37 PM Backport #56108 (Resolved): quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapsh...
Patrick Donnelly
12:12 PM Backport #56108: quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' co...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46804
merged
Yuri Weinstein
04:36 PM Bug #56067: Cephfs data loss with root_squash enabled
Good work tracking that down Ramana! I don't think it's reasonable to try to require the client mount to keep track o... Patrick Donnelly
12:13 PM Backport #56103: quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46805
merged
Yuri Weinstein
12:09 PM Backport #54578: quincy: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding pat...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46647
merged
Yuri Weinstein
02:58 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
Greg Farnum wrote:
> Matan Breizman wrote:
> > Meaning,
> > > We can see the 1000098a1a5.00000000 object is still...
Xiubo Li
02:52 AM Bug #56605 (Need More Info): Snapshot and xattr scanning in cephfs-data-scan
Matan Breizman wrote:
> Meaning,
> > We can see the 1000098a1a5.00000000 object is still in the data pool: ...
> ...
Greg Farnum
12:33 AM Bug #56638: Restore the AT_NO_ATTR_SYNC define in libcephfs
John Mulligan wrote:
> I'm setting the backport field now for pacific & quincy. I hope I am setting it properly. Ple...
Xiubo Li
12:21 AM Bug #56666 (Fix Under Review): mds: standby-replay daemon always removed in MDSMonitor::prepare_b...
Patrick Donnelly

07/21/2022

10:25 PM Bug #56067: Cephfs data loss with root_squash enabled
Greg Farnum wrote:
> Hmm. Is the kernel client just losing track of root_squash when flushing caps? That is a differ...
Ramana Raja
12:49 PM Bug #56067: Cephfs data loss with root_squash enabled
Hmm. Is the kernel client just losing track of root_squash when flushing caps? That is a different path than the more... Greg Farnum
12:36 PM Bug #56067 (In Progress): Cephfs data loss with root_squash enabled
Ramana Raja
02:14 AM Bug #56067: Cephfs data loss with root_squash enabled
With vstart cluster (ceph main branch), I was able to reproduce the issue with a kernel client (5.17.11-200.fc35.x86_... Ramana Raja
08:19 PM Bug #56666 (Resolved): mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
If a standby-replay daemon's beacon makes it to MDSMonitor::prepare_beacon (rarely), it's automatically removed by th... Patrick Donnelly
02:54 PM Bug #56638: Restore the AT_NO_ATTR_SYNC define in libcephfs
I'm setting the backport field now for pacific & quincy. I hope I am setting it properly. Please correct it if I've f... John Mulligan
12:09 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
Hi Xiubo, Thank you for the detailed information!
From a RADOS standpoint everything is working as expected.
We a...
Matan Breizman
10:22 AM Bug #54283: qa/cephfs: is_mounted() depends on a mutable variable
Rishabh Dave wrote:
> The PR for this ticket needed fix for "ticket 56476":https://tracker.ceph.com/issues/56476 in ...
Rishabh Dave
08:48 AM Bug #56659 (Duplicate): mgr: crash after upgrade pacific to main
... Milind Changire

07/20/2022

05:50 PM Bug #56605 (In Progress): Snapshot and xattr scanning in cephfs-data-scan
Radoslaw Zarzynski
01:10 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
Let me describe how cephfs acts here:
**1**, For the directory and its contents, which are all metadata in...
Xiubo Li
01:25 PM Bug #55858: Pacific 16.2.7 MDS constantly crashing
Hi Mike,
We would need more information on this to proceed further.
1. Output of 'ceph fs dump'?
2. Was multi...
Kotresh Hiremath Ravishankar
09:03 AM Bug #56063: Snapshot retention config lost after mgr restart
After updating to 17.2.1 I'm not observing the issue anymore. Now, after failing over the mgr, the retention policy i... Andreas Teuchert
08:04 AM Bug #56507 (Duplicate): pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.Te...
Rishabh Dave
08:04 AM Bug #56644: qa: test_rapid_creation fails with "No space left on device"
h3. From https://tracker.ceph.com/issues/56507 -
https://pulpito.ceph.com/yuriw-2022-07-06_13:57:53-fs-wip-yuri4-t...
Rishabh Dave
07:01 AM Bug #56644 (Triaged): qa: test_rapid_creation fails with "No space left on device"
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default... Rishabh Dave
07:49 AM Bug #55716 (Resolved): cephfs-shell: Cmd2ArgparseError is imported without version check
The PR was merged by Venky a couple months ago - https://github.com/ceph/ceph/pull/46337#event-6657873439 Rishabh Dave
07:32 AM Bug #56416 (Resolved): qa/cephfs: delete path from cmd args after use
Rishabh Dave
06:01 AM Feature #56643 (New): scrub: add one subcommand or option to add the missing objects back
When we are scrub-repairing the metadata, some objects may get lost for various reasons. After the repair finishe... Xiubo Li
01:45 AM Bug #56638 (Fix Under Review): Restore the AT_NO_ATTR_SYNC define in libcephfs
Xiubo Li
01:37 AM Bug #56638 (In Progress): Restore the AT_NO_ATTR_SYNC define in libcephfs
Xiubo Li

07/19/2022

11:43 PM Backport #55928 (In Progress): quincy: mds: FAILED ceph_assert(dir->get_projected_version() == di...
Hit this in downstream too. Xiubo Li
11:40 PM Backport #55929 (In Progress): pacific: mds: FAILED ceph_assert(dir->get_projected_version() == d...
Patrick Donnelly wrote:
> Xiubo, please do this backport.
Done.
Xiubo Li
04:12 PM Backport #55929 (Need More Info): pacific: mds: FAILED ceph_assert(dir->get_projected_version() =...
Xiubo, please do this backport. Patrick Donnelly
06:14 PM Bug #56632: qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
This test passed on main branch - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-19_12:12:03-fs:volumes-main-dis... Rishabh Dave
04:03 PM Bug #56632 (Resolved): qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
100% reproducible so far.
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2...
Rishabh Dave
05:46 PM Bug #56638 (Resolved): Restore the AT_NO_ATTR_SYNC define in libcephfs
While working on an unrelated topic but building against the current 'quincy' branch - but not a released quincy - we... John Mulligan
04:34 PM Bug #56634 (New): qa: workunit snaptest-intodir.sh fails with MDS crash
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default... Rishabh Dave
04:16 PM Bug #56633 (Need More Info): mds: crash during construction of internal request
... Patrick Donnelly
02:20 PM Bug #56626 (Triaged): "ceph fs volume create" fails with error ERANGE
Patrick Donnelly
02:20 PM Bug #56626: "ceph fs volume create" fails with error ERANGE
Kotresh, PTAL. Patrick Donnelly
01:43 PM Bug #56626 (Closed): "ceph fs volume create" fails with error ERANGE
Trying to create a CephFS filesystem within a cluster deployed with cephadm fails.
Steps followed:
1. sudo cephad...
Victoria Martinez de la Cruz
05:02 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
Thinking about this more: even if the *xattrs* are not lost, we still couldn't recover the snapshot from the data pool. ... Xiubo Li
01:41 AM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
Greg Farnum wrote:
> Do you have logs/shell output or can you reproduce this, demonstrating the presence of the xatt...
Xiubo Li
12:53 AM Bug #43216 (Resolved): MDSMonitor: removes MDS coming out of quorum election
Patrick Donnelly
12:53 AM Backport #52636 (Resolved): pacific: MDSMonitor: removes MDS coming out of quorum election
Patrick Donnelly

07/18/2022

08:38 PM Documentation #49406: Exceeding osd nearfull ratio causes write throttle.
After wondering for a long time why my clusters get slow at some point, I finally found this as well.
It would be ...
Niklas Hambuechen
03:24 PM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
For those following along, most MDS operations involve something like "mut->ls = get_current_segment()", and the poss... Greg Farnum
10:40 AM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
>
> Yeah, IMO it should be a good habit to use the shared_ptr to avoid potential use-after-free bugs as we hit in c...
Tamar Shacked
09:42 AM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
Tamar Shacked wrote:
> The issue suggests spreading LogSegment* as shared_ptr while class MDLog manages those ptrs l...
Xiubo Li
09:26 AM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
The issue suggests spreading LogSegment* as shared_ptr while class MDLog manages those ptrs lifetime (creates/stores/... Tamar Shacked
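For readers following the Cleanup #4744 thread, the idea under discussion ("mut->ls = get_current_segment()" holding a pointer that MDLog may later free) can be sketched in generic C++. The types and names below are illustrative stand-ins, not the actual MDS code: if MDLog hands out shared_ptr<LogSegment> instead of raw pointers, a mutation's stored segment stays alive even after MDLog trims its own reference.

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-ins for the real MDLog/LogSegment classes.
struct LogSegment {
    int seq;
};

struct MDLog {
    // MDLog owns the current segment via shared_ptr rather than raw pointer.
    std::shared_ptr<LogSegment> current = std::make_shared<LogSegment>(LogSegment{1});

    // Callers (e.g. a mutation doing "mut->ls = get_current_segment()")
    // receive shared ownership, not a borrowed raw pointer.
    std::shared_ptr<LogSegment> get_current_segment() { return current; }

    // Trimming drops MDLog's reference; the segment is freed only once the
    // last outstanding shared_ptr goes away, avoiding use-after-free.
    void trim() { current.reset(); }
};
```

A mutation that captured the segment before trim() can still safely dereference it afterwards; the segment is destroyed only when that last reference is released.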
03:23 PM Bug #56605: Snapshot and xattr scanning in cephfs-data-scan
Do you have logs/shell output or can you reproduce this, demonstrating the presence of the xattr before taking the sn... Greg Farnum
02:33 PM Bug #56605 (In Progress): Snapshot and xattr scanning in cephfs-data-scan
We are doing the recovery in steps with an *alternate metadata pool*; for more detail please see https://docs.ceph.com/en/... Xiubo Li
06:36 AM Bug #56592 (Triaged): mds: crash when mounting a client during the scrub repair is going on
... Xiubo Li
06:30 AM Feature #55715 (Fix Under Review): pybind/mgr/cephadm/upgrade: allow upgrades without reducing ma...
Dhairya Parmar
03:46 AM Fix #55567 (Resolved): cephfs-shell: rm returns just the error code and not proper error msg
Rishabh Dave
03:46 AM Backport #56591 (Rejected): pacific: qa: iogen workunit: "The following counters failed to be set...
Backport Bot
03:45 AM Backport #56590 (New): quincy: qa: iogen workunit: "The following counters failed to be set on md...
Backport Bot
03:45 AM Feature #48911 (Resolved): cephfs-shell needs "ln" command equivalent
Rishabh Dave
03:43 AM Bug #54108 (Pending Backport): qa: iogen workunit: "The following counters failed to be set on md...
Rishabh Dave
01:37 AM Bug #55778 (Resolved): client: choose auth MDS for getxattr with the Xs caps
Xiubo Li
01:37 AM Backport #56109 (Resolved): quincy: client: choose auth MDS for getxattr with the Xs caps
Xiubo Li
01:37 AM Bug #55824 (Resolved): ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
Xiubo Li
01:36 AM Backport #56106 (Resolved): quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown erro...
Xiubo Li
01:35 AM Bug #53504 (Resolved): client: infinite loop "got ESTALE" after mds recovery
Xiubo Li
01:35 AM Backport #55934 (Resolved): quincy: client: infinite loop "got ESTALE" after mds recovery
Xiubo Li
01:35 AM Bug #55253 (Resolved): client: switch to glibc's STATX macros
Xiubo Li
01:35 AM Backport #55994 (Resolved): quincy: client: switch to glibc's STATX macros
Xiubo Li
01:34 AM Bug #53741 (Resolved): crash just after MDS become active
Xiubo Li
01:34 AM Backport #56015 (Resolved): quincy: crash just after MDS become active
Xiubo Li

07/15/2022

08:30 PM Bug #56577 (Pending Backport): mds: client request may complete without queueing next replay request
We received a report of a cluster with a single active MDS stuck in up:clientreplay. The status was:
...
Patrick Donnelly
03:29 PM Bug #52430: mds: fast async create client mount breaks racy test
Copying tracebacks for convenience (recently saw same test fail for different reason) -... Rishabh Dave
02:43 PM Backport #56106: quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46801
merged
Yuri Weinstein
02:42 PM Backport #56109: quincy: client: choose auth MDS for getxattr with the Xs caps
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46800
merged
Yuri Weinstein
02:41 PM Backport #56015: quincy: crash just after MDS become active
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46681
merged
Yuri Weinstein
02:40 PM Backport #55994: quincy: client: switch to glibc's STATX macros
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46680
merged
Yuri Weinstein
02:39 PM Backport #55926: quincy: Unexpected file access behavior using ceph-fuse
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46595
merged
Yuri Weinstein
02:39 PM Backport #55933: quincy: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, ...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46566
merged
Yuri Weinstein
02:38 PM Backport #55934: quincy: client: infinite loop "got ESTALE" after mds recovery
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46558
merged
Yuri Weinstein
10:05 AM Bug #56532 (Fix Under Review): client stalls during vstart_runner test
Xiubo Li
01:02 AM Bug #56532: client stalls during vstart_runner test
From Milind's reproducer logs, there are two different error codes, which are *1* and *32*:... Xiubo Li
05:49 AM Backport #56468 (In Progress): pacific: mgr/volumes: display in-progress clones for a snapshot
Nikhilkumar Shelke
02:46 AM Backport #56527 (In Progress): pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ ...
Kotresh Hiremath Ravishankar
02:44 AM Backport #56526 (In Progress): quincy: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ a...
Kotresh Hiremath Ravishankar

07/14/2022

01:00 PM Bug #56537 (Fix Under Review): cephfs-top: wrong/infinitely changing wsp values
Jos Collin
11:18 AM Bug #48773: qa: scrub does not complete
Saw this in my Quincy backport reviews as well -
https://pulpito.ceph.com/yuriw-2022-07-08_17:05:01-fs-wip-yuri2-tes...
Rishabh Dave
10:46 AM Backport #56152 (In Progress): pacific: mgr/snap_schedule: schedule updates are not persisted acr...
Venky Shankar
10:40 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
Rishabh, did you get to RCA this? Venky Shankar
06:09 AM Bug #56522: Do not abort MDS on unknown messages
Milind Changire wrote:
> Xiubo Li wrote:
> > Milind Changire wrote:
> > > I had started the GETVXATTR RPC implemen...
Venky Shankar
05:31 AM Bug #56522: Do not abort MDS on unknown messages
Milind Changire wrote:
> Xiubo Li wrote:
> > Milind Changire wrote:
> > > I had started the GETVXATTR RPC implemen...
Xiubo Li
05:14 AM Bug #56522: Do not abort MDS on unknown messages
Xiubo Li wrote:
> Milind Changire wrote:
> > I had started the GETVXATTR RPC implementation with the introduction o...
Milind Changire
04:20 AM Bug #56522: Do not abort MDS on unknown messages
Milind Changire wrote:
> I had started the GETVXATTR RPC implementation with the introduction of a feature bit for t...
Xiubo Li
01:29 AM Bug #56553 (Fix Under Review): client: do not uninline data for read
Xiubo Li
01:20 AM Bug #56553 (Resolved): client: do not uninline data for read
We don't even ask for, let alone confirm that we have been granted, the Fw caps when reading; we shouldn't write contents ... Xiubo Li

07/13/2022

02:13 PM Bug #56529: ceph-fs crashes on getfattr
Xiubo Li wrote:
> We are still discussing to find a best approach to fix this or similar issues ...
Since my comm...
Frank Schilder
10:03 AM Bug #56529: ceph-fs crashes on getfattr
Frank Schilder wrote:
> Dear Xiubo Li, thanks for tracking this down so fast! Would be great if you could indicate h...
Xiubo Li
09:22 AM Bug #56529: ceph-fs crashes on getfattr
FWIW - we need to get this going: https://tracker.ceph.com/issues/53573.
The question is - how far back in release...
Venky Shankar
04:05 AM Bug #56529: ceph-fs crashes on getfattr
Just for completeness -- commit 2f4060b8c41004d10d9a64676ccd847f6e1304dd is the (mds side) fix for this. Venky Shankar
12:54 PM Bug #56522: Do not abort MDS on unknown messages
I had started the GETVXATTR RPC implementation with the introduction of a feature bit for this very purpose. I was to... Milind Changire
12:43 PM Bug #56522 (Fix Under Review): Do not abort MDS on unknown messages
Dhairya Parmar
12:23 PM Bug #56522: Do not abort MDS on unknown messages
Stefan Kooman wrote:
> @Dhairya Parmar
>
> If the connection would be silently closed, it would be highly appreci...
Dhairya Parmar
11:01 AM Bug #56522: Do not abort MDS on unknown messages
@Dhairya Parmar
If the connection would be silently closed, it would be highly appreciated that the MDS logs this ...
Stefan Kooman
10:26 AM Bug #56522: Do not abort MDS on unknown messages
Greg Farnum wrote:
> Venky Shankar wrote:
>
> > We obviously do not want to abort the mds. If we drop the message...
Dhairya Parmar
10:29 AM Bug #56537: cephfs-top: wrong/infinitely changing wsp values
Venky Shankar wrote:
> Jos Collin wrote:
> > wsp(MB/s) field in cephfs-top shows wrong values when there is an IO.
...
Jos Collin
07:03 AM Bug #56537: cephfs-top: wrong/infinitely changing wsp values
Jos Collin wrote:
> wsp(MB/s) field in cephfs-top shows wrong values when there is an IO.
>
> Steps to reproduce:...
Venky Shankar
06:33 AM Bug #56537 (Resolved): cephfs-top: wrong/infinitely changing wsp values
wsp(MB/s) field in cephfs-top shows wrong and negative values changing infinitely.
Steps to reproduce:
1. Create ...
Jos Collin
09:39 AM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
I don't think a backport to pacific makes sense. The relevant code is only in quincy, so pacific is not affected by t... Andreas Teuchert
09:30 AM Bug #56269 (Pending Backport): crash: File "mgr/snap_schedule/module.py", in __init__: self.clien...
Venky Shankar
02:42 AM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
Andreas Teuchert wrote:
> I created a PR that should fix this bug: https://github.com/ceph/ceph/pull/47006.
Thank...
Venky Shankar
09:35 AM Backport #56542 (Rejected): pacific: crash: File "mgr/snap_schedule/module.py", in __init__: self...
Backport Bot
09:35 AM Backport #56541 (Resolved): quincy: crash: File "mgr/snap_schedule/module.py", in __init__: self....
https://github.com/ceph/ceph/pull/48013 Backport Bot
09:28 AM Feature #56489: qa: test mgr plugins with standby mgr failover
Milind, please have a look on priority :) Venky Shankar
09:22 AM Bug #46075 (Resolved): ceph-fuse: mount -a on already mounted folder should be ignored
Nikhilkumar Shelke
09:21 AM Backport #55040 (Rejected): pacific: ceph-fuse: mount -a on already mounted folder should be ignored
The fix is not critical for pacific, hence rejecting the pacific backport. Nikhilkumar Shelke
09:18 AM Backport #56469 (New): quincy: mgr/volumes: display in-progress clones for a snapshot
Venky Shankar
08:15 AM Backport #55539 (Resolved): pacific: cephfs-top: multiple file system support
Neeraj Pratap Singh
07:20 AM Bug #56483 (Fix Under Review): mgr/stats: missing clients in perf stats command output.
Neeraj Pratap Singh
07:05 AM Bug #56483: mgr/stats: missing clients in perf stats command output.
Venky Shankar wrote:
> Neeraj, does this fix require backport to q/p or is it due to a recently pushed change?
It...
Neeraj Pratap Singh
06:57 AM Bug #56483: mgr/stats: missing clients in perf stats command output.
Neeraj, does this fix require backport to q/p or is it due to a recently pushed change? Venky Shankar
04:50 AM Feature #55121: cephfs-top: new options to limit and order-by
Having a `sort-by-field` option is handy for the point I mentioned in https://tracker.ceph.com/issues/55121#note-4. T... Venky Shankar
02:23 AM Bug #55583 (Fix Under Review): Intermittent ParsingError failure in mgr/volumes module during "c...
Venky Shankar
02:19 AM Bug #51281 (Duplicate): qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1...
Xiubo Li wrote:
> Venky,
>
> This should have been fixed in https://tracker.ceph.com/issues/56011.
Right. Mark...
Venky Shankar
02:18 AM Bug #46504 (Can't reproduce): pybind/mgr/volumes: self.assertTrue(check < timo) fails
Haven't seen this failure again. Please reopen if required. Venky Shankar
02:17 AM Feature #48619 (Resolved): client: track (and forward to MDS) average read/write/metadata latency
Venky Shankar

07/12/2022

11:24 PM Bug #56522: Do not abort MDS on unknown messages
Venky Shankar wrote:
> We obviously do not want to abort the mds. If we drop the message, how do clients react? Bl...
Greg Farnum
01:30 PM Bug #56522: Do not abort MDS on unknown messages
I think the MDS should close the session and blocklist the client. If a newer client is using features an older clust... Patrick Donnelly
12:52 PM Bug #56522 (Triaged): Do not abort MDS on unknown messages
Venky Shankar
05:13 AM Bug #56522: Do not abort MDS on unknown messages
Venky Shankar wrote:
> Greg Farnum wrote:
> > Right now, in Server::dispatch(), we abort the MDS if we get a messag...
Xiubo Li
04:47 AM Bug #56522: Do not abort MDS on unknown messages
Greg Farnum wrote:
> Right now, in Server::dispatch(), we abort the MDS if we get a message type we don't understand...
Venky Shankar
01:47 AM Bug #56522: Do not abort MDS on unknown messages
Greg Farnum wrote:
> Right now, in Server::dispatch(), we abort the MDS if we get a message type we don't understand...
Xiubo Li
11:13 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
Draft PR: https://github.com/ceph/ceph/pull/47067 John Mulligan
06:12 AM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
This is really interesting. Waiting for your PR to understand in what scenario this can happen. Kotresh Hiremath Ravishankar
03:36 PM Bug #56529: ceph-fs crashes on getfattr
Dear Xiubo Li, thanks for tracking this down so fast! Would be great if you could indicate here when an updated kclie... Frank Schilder
02:50 PM Bug #56529 (Fix Under Review): ceph-fs crashes on getfattr
Added *_CEPHFS_FEATURE_OP_GETVXATTR_* feature bit support on the MDS side and fixed it in libcephfs in PR#47063. Will... Xiubo Li
02:27 PM Bug #56529: ceph-fs crashes on getfattr
It was introduced by:... Xiubo Li
02:18 PM Bug #56529: ceph-fs crashes on getfattr
... Xiubo Li
02:08 PM Bug #56529 (In Progress): ceph-fs crashes on getfattr
Xiubo Li
02:07 PM Bug #56529: ceph-fs crashes on getfattr
Frank Schilder wrote:
> Quoting Gregory Farnum in the conversation on the ceph-user list:
>
> > That obviously sh...
Xiubo Li
01:34 PM Bug #56529: ceph-fs crashes on getfattr
Quoting Gregory Farnum in the conversation on the ceph-user list:
> That obviously shouldn't happen. Please file a...
Frank Schilder
01:22 PM Bug #56529 (Need More Info): ceph-fs crashes on getfattr
Xiubo Li
01:22 PM Bug #56529: ceph-fs crashes on getfattr
I tried Pacific and Quincy ceph with the latest upstream kernel and couldn't reproduce this. I am sure I have also ... Xiubo Li
12:57 PM Bug #56529: ceph-fs crashes on getfattr
Will work on it. Xiubo Li
10:59 AM Bug #56529 (Resolved): ceph-fs crashes on getfattr
From https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GCZ3F3ONVA2YIR7DJNQJFG53Y4DWQABN/
We made a v...
Frank Schilder
03:25 PM Bug #56532 (Resolved): client stalls during vstart_runner test
client logs show following message:... Milind Changire
01:33 PM Fix #48027 (Resolved): qa: add cephadm tests for CephFS in QA
This is fixed I believe. We're using cephadm for fs:workload now. Also some in fs:upgrade. Patrick Donnelly
01:33 PM Bug #51281: qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079...
Venky,
This should have been fixed in https://tracker.ceph.com/issues/56011.
Xiubo Li
01:05 PM Backport #56112 (In Progress): pacific: Test failure: test_flush (tasks.cephfs.test_readahead.Tes...
Dhairya Parmar
01:03 PM Backport #56111 (In Progress): quincy: Test failure: test_flush (tasks.cephfs.test_readahead.Test...
Dhairya Parmar
12:58 PM Backport #56469 (Need More Info): quincy: mgr/volumes: display in-progress clones for a snapshot
Venky Shankar
12:56 PM Bug #56435 (Triaged): octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), ...
Venky Shankar
12:54 PM Bug #56506 (Triaged): pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_s...
Venky Shankar
07:04 AM Backport #56107 (Resolved): pacific: mgr/volumes: Remove incorrect 'size' in the output of 'snaps...
Nikhilkumar Shelke
07:03 AM Backport #56104 (Resolved): pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
Nikhilkumar Shelke
06:00 AM Backport #56527 (Resolved): pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any...
https://github.com/ceph/ceph/pull/47111 Backport Bot
06:00 AM Backport #56526 (Resolved): quincy: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_...
https://github.com/ceph/ceph/pull/47110 Backport Bot
05:57 AM Bug #56012 (Pending Backport): mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_repla...
Kotresh Hiremath Ravishankar
04:48 AM Backport #56465 (In Progress): pacific: xfstests-dev generic/444 test failed
Xiubo Li
04:44 AM Backport #56464 (In Progress): quincy: xfstests-dev generic/444 test failed
Xiubo Li
04:38 AM Backport #56449 (In Progress): pacific: pjd failure (caused by xattr's value not consistent betwe...
Xiubo Li
04:38 AM Backport #56448 (In Progress): quincy: pjd failure (caused by xattr's value not consistent betwee...
Xiubo Li

07/11/2022

09:05 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
I think I have a fix for this issue. I'm working on verifying it for go-ceph. If that all goes well I'll be putting toge... John Mulligan
05:18 PM Bug #56522 (Resolved): Do not abort MDS on unknown messages
Right now, in Server::dispatch(), we abort the MDS if we get a message type we don't understand.
This is horrible:...
Greg Farnum
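The alternative discussed in this thread (close, and possibly blocklist, the offending session rather than aborting the whole MDS on an unrecognized message type) can be sketched in generic C++. The enum values and function below are hypothetical illustrations, not the actual Server::dispatch() code:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical message-type tags; the real MDS dispatches Ceph Message types.
enum class MsgType : uint32_t { ClientRequest, ClientReconnect, Unknown = 0xffff };

enum class DispatchAction { Handled, CloseSession };

// Sketch: map an unrecognized message type to a session-close action that the
// caller can act on (close + blocklist the client), instead of aborting the MDS.
DispatchAction dispatch(MsgType t) {
    switch (t) {
    case MsgType::ClientRequest:
    case MsgType::ClientReconnect:
        return DispatchAction::Handled;
    default:
        // Previously the equivalent path would abort the daemon.
        return DispatchAction::CloseSession;
    }
}
```

This keeps one misbehaving (or newer-featured) client from taking down the MDS for every other client, at the cost of evicting that one session.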
02:35 PM Bug #56269 (Fix Under Review): crash: File "mgr/snap_schedule/module.py", in __init__: self.clien...
Patrick Donnelly
01:34 PM Backport #56104: pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46806
merged
Yuri Weinstein
01:33 PM Backport #56107: pacific: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' c...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46803
merged
Yuri Weinstein

07/10/2022

05:54 AM Support #56443: OSD USED Size contains unknown data
Hi,
It's a CephFS setup, version 16.2.9, with 1 data pool and 1 metadata pool. It has 3 MDS servers and 3 MON servers. ...
Greg Smith

07/09/2022

01:14 AM Bug #56518 (Rejected): client: when reconnecting new targets it will be skipped
Sorry, not a bug. Xiubo Li
01:08 AM Bug #56518 (Rejected): client: when reconnecting new targets it will be skipped
The newly created session's state is set to STATE_OPENING, not STATE_NEW. For more detail please see https://github.com/cep... Xiubo Li
01:00 AM Bug #54653: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map.count(stag))
Luis Henriques wrote:
> It looks like the fix for this bug has broken the build for the latest versions of fuse3 (fy...
Xiubo Li
12:52 AM Bug #56517 (Fix Under Review): fuse_ll.cc: error: expected identifier before ‘{’ token 1379 | {
Xiubo Li
12:47 AM Bug #56517 (Resolved): fuse_ll.cc: error: expected identifier before ‘{’ token 1379 | {
When libfuse >= 3.0:... Xiubo Li

07/08/2022

01:27 PM Backport #56056: pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap...
Please see my last comment on https://tracker.ceph.com/issues/54653 Luis Henriques
01:27 PM Backport #56055: quincy: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_...
Please see my last comment on https://tracker.ceph.com/issues/54653 Luis Henriques
01:25 PM Bug #54653: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map.count(stag))
It looks like the fix for this bug has broken the build for the latest versions of fuse3 (fyi the fuse version I've o... Luis Henriques
10:48 AM Bug #56507 (Duplicate): pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.Te...
https://pulpito.ceph.com/yuriw-2022-07-06_13:57:53-fs-wip-yuri4-testing-2022-07-05-0719-pacific-distro-default-smithi... Venky Shankar
10:41 AM Bug #56506 (Triaged): pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_s...
https://pulpito.ceph.com/yuriw-2022-07-06_13:57:53-fs-wip-yuri4-testing-2022-07-05-0719-pacific-distro-default-smithi... Venky Shankar
06:14 AM Bug #56483 (In Progress): mgr/stats: missing clients in perf stats command output.
Neeraj Pratap Singh

07/07/2022

01:12 PM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
I created a PR that should fix this bug: https://github.com/ceph/ceph/pull/47006. Andreas Teuchert
05:08 AM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
Milind, please take a look. Venky Shankar
10:17 AM Feature #56489 (New): qa: test mgr plugins with standby mgr failover
Related to https://tracker.ceph.com/issues/56269 which is seen when failing an active mgr. The standby mgr hits a tra... Venky Shankar
09:50 AM Feature #55121: cephfs-top: new options to limit and order-by
Neeraj Pratap Singh wrote:
> Jos Collin wrote:
> > Greg Farnum wrote:
> > > Can't fs top already change the sort o...
Jos Collin
08:03 AM Feature #55121: cephfs-top: new options to limit and order-by
Jos Collin wrote:
> Greg Farnum wrote:
> > Can't fs top already change the sort order? I thought that was done in N...
Neeraj Pratap Singh
08:00 AM Feature #55121: cephfs-top: new options to limit and order-by
Venky Shankar wrote:
> Jos Collin wrote:
> > Based on my discussion with Greg, I'm closing this ticket. Because the...
Neeraj Pratap Singh
05:15 AM Feature #55121: cephfs-top: new options to limit and order-by
Greg Farnum wrote:
> Can't fs top already change the sort order? I thought that was done in Neeraj's first tranche o...
Jos Collin
06:57 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
Rishabh Dave wrote:
> Path directory @/home/ubuntu/cephtest/mnt.0/testdir@ is created twice. Copying following from ...
Venky Shankar
04:59 AM Bug #56446 (In Progress): Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.T...
Rishabh Dave
04:59 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
Path directory @/home/ubuntu/cephtest/mnt.0/testdir@ is created twice. Copying following from https://pulpito.ceph.co... Rishabh Dave
05:17 AM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
Milind, please take a look. Venky Shankar
02:26 AM Bug #56476 (Fix Under Review): qa/suites: evicted client unhandled in 4-compat_client.yaml
Xiubo Li

07/06/2022

11:08 PM Feature #55121: cephfs-top: new options to limit and order-by
Can't fs top already change the sort order? I thought that was done in Neeraj's first tranche of improvements. Greg Farnum
05:52 AM Feature #55121 (New): cephfs-top: new options to limit and order-by
Jos Collin wrote:
> Based on my discussion with Greg, I'm closing this ticket. Because the issue that the customer r...
Venky Shankar
02:58 PM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
The full backtrace is:... Andreas Teuchert
02:53 PM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
The full backtrace is:... Andreas Teuchert
02:42 PM Support #56443: OSD USED Size contains unknown data
Hi Greg,
You'd need to give more details. This tracker is filed under CephFS, however, it does not mention anythin...
Venky Shankar
12:47 PM Bug #56483 (Resolved): mgr/stats: missing clients in perf stats command output.
perf stats doesn't get the client info w.r.t filesystems created after running the perf stats command once with exist... Neeraj Pratap Singh
10:32 AM Bug #54283: qa/cephfs: is_mounted() depends on a mutable variable
The PR for this ticket needed the fix for "ticket 56476":https://tracker.ceph.com/issues/56476 in order to pass QA runs. Rishabh Dave
09:10 AM Bug #56476 (Resolved): qa/suites: evicted client unhandled in 4-compat_client.yaml
In "@4-compat_client.yaml@":https://github.com/ceph/ceph/blob/main/qa/suites/fs/upgrade/featureful_client/upgraded_cl... Rishabh Dave
06:54 AM Bug #56282 (Duplicate): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() ...
This is a known bug that has been fixed upstream. The backport PR is still under review: https://tracker.cep... Xiubo Li

07/05/2022

02:09 PM Feature #56428: add command "fs deauthorize"
Hmm, I've had concerns about interfaces like this in the past. What happens if:
caps mds = "allow rw fsname=a, all...
Greg Farnum
09:15 AM Backport #56469 (Resolved): quincy: mgr/volumes: display in-progress clones for a snapshot
https://github.com/ceph/ceph/pull/47894 Backport Bot
09:15 AM Backport #56468 (Resolved): pacific: mgr/volumes: display in-progress clones for a snapshot
https://github.com/ceph/ceph/pull/47112 Backport Bot
09:10 AM Bug #55041 (Pending Backport): mgr/volumes: display in-progress clones for a snapshot
Venky Shankar
02:50 AM Backport #56465 (Resolved): pacific: xfstests-dev generic/444 test failed
https://github.com/ceph/ceph/pull/47059 Backport Bot
02:50 AM Backport #56464 (Resolved): quincy: xfstests-dev generic/444 test failed
https://github.com/ceph/ceph/pull/47058 Backport Bot
02:49 AM Bug #56010 (Pending Backport): xfstests-dev generic/444 test failed
Venky Shankar

07/04/2022

09:36 PM Backport #54242 (Resolved): octopus: mds: clients can send a "new" op (file operation) and crash ...
Ilya Dryomov
09:28 PM Backport #54241 (Resolved): pacific: mds: clients can send a "new" op (file operation) and crash ...
Ilya Dryomov
09:16 PM Backport #55348 (Resolved): quincy: mgr/volumes: Show clone failure reason in clone status command
Ilya Dryomov
09:10 PM Backport #55540 (Resolved): quincy: cephfs-top: multiple file system support
Ilya Dryomov
09:03 PM Backport #55336 (Resolved): quincy: Issue removing subvolume with retained snapshots - Possible q...
Ilya Dryomov
09:02 PM Backport #55428 (Resolved): quincy: unaccessible dentries after fsstress run with namespace-restr...
Ilya Dryomov
09:00 PM Backport #55626 (Resolved): quincy: cephfs-shell: put command should accept both path mandatorily...
Ilya Dryomov
09:00 PM Backport #55628 (Resolved): quincy: cephfs-shell: creates directories in local file system even i...
Ilya Dryomov
08:59 PM Backport #55630 (Resolved): quincy: cephfs-shell: saving files doesn't work as expected
Ilya Dryomov
03:15 PM Backport #56462 (Resolved): pacific: mds: crash due to seemingly unrecoverable metadata error
https://github.com/ceph/ceph/pull/47433 Backport Bot
03:15 PM Backport #56461 (Resolved): quincy: mds: crash due to seemingly unrecoverable metadata error
https://github.com/ceph/ceph/pull/47432 Backport Bot
03:10 PM Bug #54384 (Pending Backport): mds: crash due to seemingly unrecoverable metadata error
Venky Shankar
12:42 PM Bug #52438 (Resolved): qa: ffsb timeout
Xiubo Li
12:39 PM Bug #54106 (Duplicate): kclient: hang during workunit cleanup
This is a duplicate of https://tracker.ceph.com/issues/55857. Xiubo Li
12:26 PM Bug #56282 (In Progress): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state(...
Xiubo Li
08:59 AM Backport #56056 (In Progress): pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): as...
Xiubo Li
08:48 AM Backport #56055 (In Progress): quincy: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): ass...
Xiubo Li
03:00 AM Backport #56449 (Resolved): pacific: pjd failure (caused by xattr's value not consistent between ...
https://github.com/ceph/ceph/pull/47056 Backport Bot
03:00 AM Backport #56448 (Resolved): quincy: pjd failure (caused by xattr's value not consistent between a...
https://github.com/ceph/ceph/pull/47057 Backport Bot
02:58 AM Bug #55331 (Pending Backport): pjd failure (caused by xattr's value not consistent between auth M...
Venky Shankar
02:44 AM Bug #56446 (Pending Backport): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
Seen here: https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-defa... Venky Shankar

07/03/2022

12:02 PM Support #56443 (New): OSD USED Size contains unknown data
Hi,
We have a problem: the pool reports ~1 GB of data, associated with a type of SS...
Greg Smith

07/02/2022

07:13 PM Feature #56442 (New): mds: build asok command to dump stray files and associated caps
To diagnose what is delaying reintegration or deletion. Patrick Donnelly
01:07 PM Bug #55762 (Fix Under Review): mgr/volumes: Handle internal metadata directories under '/volumes'...
Nikhilkumar Shelke
01:06 PM Backport #56014 (Resolved): pacific: quota support for subvolumegroup
Nikhilkumar Shelke
01:04 PM Feature #55401 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) for subvolu...
Nikhilkumar Shelke
01:04 PM Backport #55802 (Resolved): pacific: mgr/volumes: allow users to add metadata (key-value pairs) f...
Nikhilkumar Shelke

07/01/2022

05:42 PM Backport #51323: octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in ...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45159
merged
Yuri Weinstein
01:37 PM Backport #52634: octopus: mds sends cap updates with btime zeroed out
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45164
merged
Yuri Weinstein
01:36 PM Backport #50914: octopus: MDS heartbeat timed out between during executing MDCache::start_files_t...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45157
merged
Yuri Weinstein
01:26 PM Bug #50868: qa: "kern.log.gz already exists; not overwritten"
/a/yuriw-2022-06-30_22:36:16-upgrade:pacific-x-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6907793/ Kamoltat (Junior) Sirivadhna
07:02 AM Bug #54111 (Resolved): data pool attached to a file system can be attached to another file system
Nikhilkumar Shelke
04:08 AM Feature #55121 (Closed): cephfs-top: new options to limit and order-by
Based on my discussion with Greg, I'm closing this ticket. Because the issue that the customer reported in BZ[1] is p... Jos Collin
03:17 AM Bug #56435: octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), after wait...
The clients have been unregistered at *_2022-06-24T20:00:11_*:... Xiubo Li
03:13 AM Bug #56435 (Triaged): octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), ...
/a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smithi/6897870
The unre...
Xiubo Li
03:01 AM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
Laura Flores wrote:
> /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smi...
Xiubo Li

06/30/2022

08:18 PM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
/a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smithi/6897870 Laura Flores
06:55 PM Bug #50868: qa: "kern.log.gz already exists; not overwritten"
/a/yuriw-2022-06-30_14:20:05-rados-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6907396/ Kamoltat (Junior) Sirivadhna
04:57 PM Bug #56384 (Resolved): ceph/test.sh: check_response erasure-code didn't find erasure-code in output
Venky Shankar
10:04 AM Feature #56428 (New): add command "fs deauthorize"
Since entity auth keyrings can now hold auth caps for multiple Ceph FSs, it is very tedious and very error-prone to r... Rishabh Dave

06/29/2022

08:38 PM Backport #56112: pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
Venky Shankar wrote:
> Dhairya, please do the backport.
https://github.com/ceph/ceph/pull/46901
Dhairya Parmar
02:15 PM Backport #56112: pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
Dhairya, please do the backport.
Venky Shankar
07:44 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
https://github.com/ceph/ceph/pull/46899
Dhairya Parmar
07:41 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
Venky Shankar wrote:
> Dhairya, please do the backport.
Okay, sure.
Dhairya Parmar
02:15 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
Dhairya, please do the backport. Venky Shankar
04:49 PM Bug #52123: mds sends cap updates with btime zeroed out
Not sure what has to happen to unwedge this backport. Jeff Layton
02:48 PM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
Posting an update here based on discussion between me, Greg and Patrick:
Short term plan: Helper script to identif...
Venky Shankar
11:08 AM Bug #56416 (Resolved): qa/cephfs: delete path from cmd args after use
Method conduct_neg_test_for_write_caps() in qa/tasks/cephfs/caps_helper.py appends path to command arguments but does... Rishabh Dave
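The pattern described in #56416 can be sketched as follows. This is an illustrative simplification, not the actual code from qa/tasks/cephfs/caps_helper.py: a helper that appends a path to a shared command-argument list must also remove it after use, or the stale path leaks into every later invocation that reuses the list.

```python
# Hypothetical sketch of the #56416 pattern; function bodies are
# illustrative, only the method name comes from the ticket.

def run(args):
    # Stand-in for executing a command; just echoes the args it received.
    return list(args)

def conduct_neg_test_buggy(cmdargs, path):
    cmdargs.append(path)          # mutates the caller's list...
    return run(cmdargs)           # ...and never removes the path again

def conduct_neg_test_fixed(cmdargs, path):
    cmdargs.append(path)
    try:
        return run(cmdargs)
    finally:
        cmdargs.remove(path)      # delete path from cmd args after use

shared = ["ceph", "fs", "authorize"]
conduct_neg_test_buggy(shared, "/dir1")
assert shared == ["ceph", "fs", "authorize", "/dir1"]   # path leaked

shared = ["ceph", "fs", "authorize"]
conduct_neg_test_fixed(shared, "/dir1")
assert shared == ["ceph", "fs", "authorize"]            # list restored
```

The `try`/`finally` variant mirrors the ticket's fix idea: the cleanup happens even if the command itself raises.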
09:22 AM Bug #56414 (Fix Under Review): mounting subvolume shows size/used bytes for entire fs, not subvolume
Xiubo Li
09:18 AM Bug #56414 (In Progress): mounting subvolume shows size/used bytes for entire fs, not subvolume
Hit the same issue in libcephfs. Xiubo Li
09:18 AM Bug #56414 (Resolved): mounting subvolume shows size/used bytes for entire fs, not subvolume
When mounting a subvolume at the base dir of the subvolume, the kernel client correctly shows the size/usage of a sub... Xiubo Li
01:02 AM Backport #56110 (Resolved): pacific: client: choose auth MDS for getxattr with the Xs caps
Xiubo Li
01:01 AM Backport #56105 (Resolved): pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown err...
Xiubo Li
01:00 AM Backport #56016 (Resolved): pacific: crash just after MDS become active
Xiubo Li
01:00 AM Bug #54411 (Resolved): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 f...
Xiubo Li
01:00 AM Backport #55449 (Resolved): pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm ...
Xiubo Li
12:59 AM Backport #55993 (Resolved): pacific: client: switch to glibc's STATX macros
Xiubo Li
12:58 AM Backport #55935 (Resolved): pacific: client: infinite loop "got ESTALE" after mds recovery
Xiubo Li
12:58 AM Bug #55329 (Resolved): qa: add test case for fsync crash issue
Xiubo Li
12:57 AM Backport #55660 (Resolved): pacific: qa: add test case for fsync crash issue
Xiubo Li
12:56 AM Backport #55757 (Resolved): pacific: mds: flush mdlog if locked and still has wanted caps not sat...
Xiubo Li

06/28/2022

04:46 PM Bug #17594 (In Progress): cephfs: permission checking not working (MDS should enforce POSIX permi...
Jeff Layton
04:19 PM Bug #53045 (Resolved): stat->fsid is not unique among filesystems exported by the ceph server
Jeff Layton
04:04 PM Bug #53765 (Resolved): mount helper mangles the new syntax device string by qualifying the name
Jeff Layton
04:04 PM Fix #52068: qa: add testing for "ms_mode" mount option
This appears to be waiting for a pacific backport. Ilya Dryomov
04:00 PM Fix #52068: qa: add testing for "ms_mode" mount option
I think this is in now, right? Jeff Layton
04:02 PM Bug #50719 (Can't reproduce): xattr returning from the dead (sic!)
No response in several months. Closing case. Ralph, feel free to reopen if you have more info to share. Jeff Layton
03:58 PM Bug #52134 (Can't reproduce): botched cephadm upgrade due to mds failures
Haven't seen this in some time. Jeff Layton
03:53 PM Bug #41192 (Won't Fix): mds: atime not being updated persistently
I don't see us fixing this in order to get local atime semantics. Closing WONTFIX. Jeff Layton
03:52 PM Bug #50826: kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
Handing this back to Patrick for now. I haven't seen this occur myself. Is this still a problem? Should we close it out? Jeff Layton
03:17 PM Backport #56105: pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46802
merged
Yuri Weinstein
03:15 PM Backport #56110: pacific: client: choose auth MDS for getxattr with the Xs caps
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46799
merged
Yuri Weinstein
03:15 PM Backport #55449: pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); ...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46798
merged
Yuri Weinstein
03:14 PM Backport #56016: pacific: crash just after MDS become active
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46682
merged
Yuri Weinstein
03:14 PM Backport #55993: pacific: client: switch to glibc's STATX macros
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46679
merged
Yuri Weinstein
03:12 PM Backport #54577: pacific: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding pa...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46646
merged
Yuri Weinstein
03:11 PM Backport #55935: pacific: client: infinite loop "got ESTALE" after mds recovery
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46557
merged
Yuri Weinstein
03:10 PM Backport #55660: pacific: qa: add test case for fsync crash issue
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46425
merged
Yuri Weinstein
03:10 PM Backport #55659: pacific: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46424
merged
Yuri Weinstein
03:08 PM Backport #55757: pacific: mds: flush mdlog if locked and still has wanted caps not satisfied
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46423
merged
Yuri Weinstein
01:45 PM Feature #55821 (Fix Under Review): pybind/mgr/volumes: interface to check the presence of subvolu...
Neeraj Pratap Singh
01:31 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
Hi there, sorry for the delay; this was very tricky to get info on, as it did not reproduce outside of our CI. So it requ... John Mulligan
12:28 PM Bug #53214 (Resolved): qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42...
Jeff Layton
12:00 PM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
Venky Shankar wrote:
> Xiubo, please take a look.
Sure.
Xiubo Li
10:50 AM Bug #56282 (Triaged): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() ==...
Venky Shankar
10:50 AM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
Xiubo, please take a look. Venky Shankar
11:30 AM Bug #56384 (Fix Under Review): ceph/test.sh: check_response erasure-code didn't find erasure-code...
Nikhilkumar Shelke
09:55 AM Bug #56380: crash: Client::_get_vino(Inode*)
Venky Shankar wrote:
> Xiubo Li wrote:
> > This should be fixed by https://github.com/ceph/ceph/pull/45614, in http...
Xiubo Li
09:46 AM Bug #56380 (Duplicate): crash: Client::_get_vino(Inode*)
Dup: https://tracker.ceph.com/issues/54653 Venky Shankar
09:45 AM Bug #56380: crash: Client::_get_vino(Inode*)
Xiubo Li wrote:
> This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/p...
Venky Shankar
06:53 AM Bug #56380: crash: Client::_get_vino(Inode*)
This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/pull/45614/files#dif... Xiubo Li
09:54 AM Backport #56113 (Rejected): pacific: data pool attached to a file system can be attached to anoth...
Nikhilkumar Shelke
09:53 AM Backport #56114 (Rejected): quincy: data pool attached to a file system can be attached to anothe...
Nikhilkumar Shelke
09:48 AM Bug #56263 (Duplicate): crash: Client::_get_vino(Inode*)
Dup: https://tracker.ceph.com/issues/54653 Venky Shankar
06:53 AM Bug #56263: crash: Client::_get_vino(Inode*)
This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/pull/45614/files#dif... Xiubo Li
07:02 AM Bug #56249: crash: int Client::_do_remount(bool): abort
Should be fixed by https://tracker.ceph.com/issues/54049. Xiubo Li
06:41 AM Bug #56397 (Fix Under Review): client: `df` will show incorrect disk size if the quota size is no...
Xiubo Li
02:27 AM Bug #56397 (In Progress): client: `df` will show incorrect disk size if the quota size is not ali...
Xiubo Li

06/27/2022

06:23 PM Bug #54108 (Fix Under Review): qa: iogen workunit: "The following counters failed to be set on md...
Ramana Raja