Activity
From 06/15/2022 to 07/14/2022
07/14/2022
- 01:00 PM Bug #56537 (Fix Under Review): cephfs-top: wrong/infinitely changing wsp values
- 11:18 AM Bug #48773: qa: scrub does not complete
- Saw this in my Quincy backport reviews as well -
https://pulpito.ceph.com/yuriw-2022-07-08_17:05:01-fs-wip-yuri2-tes...
- 10:46 AM Backport #56152 (In Progress): pacific: mgr/snap_schedule: schedule updates are not persisted acr...
- 10:40 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- Rishabh, did you get to RCA this?
- 06:09 AM Bug #56522: Do not abort MDS on unknown messages
- Milind Changire wrote:
> Xiubo Li wrote:
> > Milind Changire wrote:
> > > I had started the GETVXATTR RPC implemen...
- 05:31 AM Bug #56522: Do not abort MDS on unknown messages
- Milind Changire wrote:
> Xiubo Li wrote:
> > Milind Changire wrote:
> > > I had started the GETVXATTR RPC implemen...
- 05:14 AM Bug #56522: Do not abort MDS on unknown messages
- Xiubo Li wrote:
> Milind Changire wrote:
> > I had started the GETVXATTR RPC implementation with the introduction o...
- 04:20 AM Bug #56522: Do not abort MDS on unknown messages
- Milind Changire wrote:
> I had started the GETVXATTR RPC implementation with the introduction of a feature bit for t...
- 01:29 AM Bug #56553 (Fix Under Review): client: do not uninline data for read
- 01:20 AM Bug #56553 (Resolved): client: do not uninline data for read
- We don't even ask for the Fw caps when reading, and cannot be sure they have been granted, so we shouldn't write contents ...
07/13/2022
- 02:13 PM Bug #56529: ceph-fs crashes on getfattr
- Xiubo Li wrote:
> We are still discussing to find a best approach to fix this or similar issues ...
Since my comm...
- 10:03 AM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Dear Xiubo Li, thanks for tracking this down so fast! Would be great if you could indicate h...
- 09:22 AM Bug #56529: ceph-fs crashes on getfattr
- FWIW - we need to get this going: https://tracker.ceph.com/issues/53573.
The question is - how far back in release...
- 04:05 AM Bug #56529: ceph-fs crashes on getfattr
- Just for completeness -- commit 2f4060b8c41004d10d9a64676ccd847f6e1304dd is the (mds side) fix for this.
- 12:54 PM Bug #56522: Do not abort MDS on unknown messages
- I had started the GETVXATTR RPC implementation with the introduction of a feature bit for this very purpose. I was to...
- 12:43 PM Bug #56522 (Fix Under Review): Do not abort MDS on unknown messages
- 12:23 PM Bug #56522: Do not abort MDS on unknown messages
- Stefan Kooman wrote:
> @Dhairya Parmar
>
> If the connection would be silently closed, it would be highly appreci...
- 11:01 AM Bug #56522: Do not abort MDS on unknown messages
- @Dhairya Parmar
If the connection would be silently closed, it would be highly appreciated that the MDS logs this ...
- 10:26 AM Bug #56522: Do not abort MDS on unknown messages
- Greg Farnum wrote:
> Venky Shankar wrote:
>
> > We obviously do not want to abort the mds. If we drop the message...
- 10:29 AM Bug #56537: cephfs-top: wrong/infinitely changing wsp values
- Venky Shankar wrote:
> Jos Collin wrote:
> > wsp(MB/s) field in cephfs-top shows wrong values when there is an IO.
...
- 07:03 AM Bug #56537: cephfs-top: wrong/infinitely changing wsp values
- Jos Collin wrote:
> wsp(MB/s) field in cephfs-top shows wrong values when there is an IO.
>
> Steps to reproduce:...
- 06:33 AM Bug #56537 (Resolved): cephfs-top: wrong/infinitely changing wsp values
- wsp(MB/s) field in cephfs-top shows wrong and negative values changing infinitely.
Steps to reproduce:
1. Create ...
- 09:39 AM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- I don't think a backport to pacific makes sense. The relevant code is only in quincy, so pacific is not affected by t...
- 09:30 AM Bug #56269 (Pending Backport): crash: File "mgr/snap_schedule/module.py", in __init__: self.clien...
- 02:42 AM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- Andreas Teuchert wrote:
> I created a PR that should fix this bug: https://github.com/ceph/ceph/pull/47006.
Thank...
- 09:35 AM Backport #56542 (Rejected): pacific: crash: File "mgr/snap_schedule/module.py", in __init__: self...
- 09:35 AM Backport #56541 (Resolved): quincy: crash: File "mgr/snap_schedule/module.py", in __init__: self....
- https://github.com/ceph/ceph/pull/48013
- 09:28 AM Feature #56489: qa: test mgr plugins with standby mgr failover
- Milind, please have a look on priority :)
- 09:22 AM Bug #46075 (Resolved): ceph-fuse: mount -a on already mounted folder should be ignored
- 09:21 AM Backport #55040 (Rejected): pacific: ceph-fuse: mount -a on already mounted folder should be ignored
- The fix is not critical for pacific, hence rejecting the backport to pacific.
- 09:18 AM Backport #56469 (New): quincy: mgr/volumes: display in-progress clones for a snapshot
- 08:15 AM Backport #55539 (Resolved): pacific: cephfs-top: multiple file system support
- 07:20 AM Bug #56483 (Fix Under Review): mgr/stats: missing clients in perf stats command output.
- 07:05 AM Bug #56483: mgr/stats: missing clients in perf stats command output.
- Venky Shankar wrote:
> Neeraj, does this fix require backport to q/p or is it due to a recently pushed change?
It...
- 06:57 AM Bug #56483: mgr/stats: missing clients in perf stats command output.
- Neeraj, does this fix require backport to q/p or is it due to a recently pushed change?
- 04:50 AM Feature #55121: cephfs-top: new options to limit and order-by
- Having a `sort-by-field` option is handy for the point I mentioned in https://tracker.ceph.com/issues/55121#note-4. T...
- 02:23 AM Bug #55583 (Fix Under Review): Intermittent ParsingError failure in mgr/volumes module during "c...
- 02:19 AM Bug #51281 (Duplicate): qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1...
- Xiubo Li wrote:
> Venky,
>
> This should have been fixed in https://tracker.ceph.com/issues/56011.
Right. Mark...
- 02:18 AM Bug #46504 (Can't reproduce): pybind/mgr/volumes: self.assertTrue(check < timo) fails
- Haven't seen this failure again. Please reopen if required.
- 02:17 AM Feature #48619 (Resolved): client: track (and forward to MDS) average read/write/metadata latency
07/12/2022
- 11:24 PM Bug #56522: Do not abort MDS on unknown messages
- Venky Shankar wrote:
> We obviously do not want to abort the mds. If we drop the message, how do clients react? Bl...
- 01:30 PM Bug #56522: Do not abort MDS on unknown messages
- I think the MDS should close the session and blocklist the client. If a newer client is using features an older clust...
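A minimal sketch of the behaviour being argued for here (illustrative only -- the real dispatch path is C++ in src/mds/Server.cc, and all names below are hypothetical): on an unknown message type, log it and close/blocklist the offending session rather than aborting the MDS.
<pre><code class="python">
# Illustrative sketch only; not the MDS code. All names are hypothetical.
def dispatch(msg, session, handlers, log, blocklist):
    handler = handlers.get(msg["type"])
    if handler is None:
        # Instead of aborting the whole daemon on an unknown message type,
        # log the offender, blocklist it and close its session.
        log("unknown message type %r from client %s" % (msg["type"], session["client"]))
        blocklist(session["client"])
        session["open"] = False
        return
    handler(msg)

# toy usage
dispatch({"type": "new_op"}, {"client": "client.4242", "open": True},
         handlers={}, log=print, blocklist=lambda c: print("blocklisting", c))
</code></pre>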
- 12:52 PM Bug #56522 (Triaged): Do not abort MDS on unknown messages
- 05:13 AM Bug #56522: Do not abort MDS on unknown messages
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Right now, in Server::dispatch(), we abort the MDS if we get a messag...
- 04:47 AM Bug #56522: Do not abort MDS on unknown messages
- Greg Farnum wrote:
> Right now, in Server::dispatch(), we abort the MDS if we get a message type we don't understand...
- 01:47 AM Bug #56522: Do not abort MDS on unknown messages
- Greg Farnum wrote:
> Right now, in Server::dispatch(), we abort the MDS if we get a message type we don't understand...
- 11:13 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
- Draft PR: https://github.com/ceph/ceph/pull/47067
- 06:12 AM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
- This is really interesting. Waiting for your PR to understand in what scenario this can happen.
- 03:36 PM Bug #56529: ceph-fs crashes on getfattr
- Dear Xiubo Li, thanks for tracking this down so fast! Would be great if you could indicate here when an updated kclie...
- 02:50 PM Bug #56529 (Fix Under Review): ceph-fs crashes on getfattr
- Added *_CEPHFS_FEATURE_OP_GETVXATTR_* feature bit support on the mds side and fixed it in libcephfs in PR#47063. Will...
- 02:27 PM Bug #56529: ceph-fs crashes on getfattr
- It was introduced by:...
- 02:18 PM Bug #56529: ceph-fs crashes on getfattr
- ...
- 02:08 PM Bug #56529 (In Progress): ceph-fs crashes on getfattr
- 02:07 PM Bug #56529: ceph-fs crashes on getfattr
- Frank Schilder wrote:
> Quoting Gregory Farnum in the conversation on the ceph-user list:
>
> > That obviously sh...
- 01:34 PM Bug #56529: ceph-fs crashes on getfattr
- Quoting Gregory Farnum in the conversation on the ceph-user list:
> That obviously shouldn't happen. Please file a...
- 01:22 PM Bug #56529 (Need More Info): ceph-fs crashes on getfattr
- 01:22 PM Bug #56529: ceph-fs crashes on getfattr
- I tried Pacific and Quincy with the latest upstream kernel and couldn't reproduce this. I am sure I have also ...
- 12:57 PM Bug #56529: ceph-fs crashes on getfattr
- Will work on it.
- 10:59 AM Bug #56529 (Resolved): ceph-fs crashes on getfattr
- From https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GCZ3F3ONVA2YIR7DJNQJFG53Y4DWQABN/
We made a v...
- 03:25 PM Bug #56532 (Resolved): client stalls during vstart_runner test
- client logs show following message:...
- 01:33 PM Fix #48027 (Resolved): qa: add cephadm tests for CephFS in QA
- This is fixed I believe. We're using cephadm for fs:workload now. Also some in fs:upgrade.
- 01:33 PM Bug #51281: qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079...
- Venky,
This should have been fixed in https://tracker.ceph.com/issues/56011.
- 01:05 PM Backport #56112 (In Progress): pacific: Test failure: test_flush (tasks.cephfs.test_readahead.Tes...
- 01:03 PM Backport #56111 (In Progress): quincy: Test failure: test_flush (tasks.cephfs.test_readahead.Test...
- 12:58 PM Backport #56469 (Need More Info): quincy: mgr/volumes: display in-progress clones for a snapshot
- 12:56 PM Bug #56435 (Triaged): octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), ...
- 12:54 PM Bug #56506 (Triaged): pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_s...
- 07:04 AM Backport #56107 (Resolved): pacific: mgr/volumes: Remove incorrect 'size' in the output of 'snaps...
- 07:03 AM Backport #56104 (Resolved): pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- 06:00 AM Backport #56527 (Resolved): pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any...
- https://github.com/ceph/ceph/pull/47111
- 06:00 AM Backport #56526 (Resolved): quincy: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_...
- https://github.com/ceph/ceph/pull/47110
- 05:57 AM Bug #56012 (Pending Backport): mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_repla...
- 04:48 AM Backport #56465 (In Progress): pacific: xfstests-dev generic/444 test failed
- 04:44 AM Backport #56464 (In Progress): quincy: xfstests-dev generic/444 test failed
- 04:38 AM Backport #56449 (In Progress): pacific: pjd failure (caused by xattr's value not consistent betwe...
- 04:38 AM Backport #56448 (In Progress): quincy: pjd failure (caused by xattr's value not consistent betwee...
07/11/2022
- 09:05 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
- I think I have a fix for this issue. I'm working on verifying it for go-ceph. If that all goes well I'll be putting toge...
- 05:18 PM Bug #56522 (Resolved): Do not abort MDS on unknown messages
- Right now, in Server::dispatch(), we abort the MDS if we get a message type we don't understand.
This is horrible:...
- 02:35 PM Bug #56269 (Fix Under Review): crash: File "mgr/snap_schedule/module.py", in __init__: self.clien...
- 01:34 PM Backport #56104: pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46806
merged
- 01:33 PM Backport #56107: pacific: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' c...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46803
merged
07/10/2022
- 05:54 AM Support #56443: OSD USED Size contains unknown data
- Hi,
It's a CephFS setup on version 16.2.9 with 1 data pool and 1 metadata pool. It has 3 MDS servers and 3 MON servers. ...
07/09/2022
- 01:14 AM Bug #56518 (Rejected): client: when reconnecting new targets it will be skipped
- sorry, not a bug.
- 01:08 AM Bug #56518 (Rejected): client: when reconnecting new targets it will be skipped
- The newly created session's state is set to STATE_OPENING, not STATE_NEW. For more detail, please see https://github.com/cep...
- 01:00 AM Bug #54653: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map.count(stag))
- Luis Henriques wrote:
> It looks like the fix for this bug has broken the build for the latest versions of fuse3 (fy...
- 12:52 AM Bug #56517 (Fix Under Review): fuse_ll.cc: error: expected identifier before ‘{’ token 1379 | {
- 12:47 AM Bug #56517 (Resolved): fuse_ll.cc: error: expected identifier before ‘{’ token 1379 | {
- When libfuse >= 3.0:...
07/08/2022
- 01:27 PM Backport #56056: pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap...
- Please see my last comment on https://tracker.ceph.com/issues/54653
- 01:27 PM Backport #56055: quincy: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_...
- Please see my last comment on https://tracker.ceph.com/issues/54653
- 01:25 PM Bug #54653: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map.count(stag))
- It looks like the fix for this bug has broken the build for the latest versions of fuse3 (fyi the fuse version I've o...
- 10:48 AM Bug #56507 (Duplicate): pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.Te...
- https://pulpito.ceph.com/yuriw-2022-07-06_13:57:53-fs-wip-yuri4-testing-2022-07-05-0719-pacific-distro-default-smithi...
- 10:41 AM Bug #56506 (Triaged): pacific: Test failure: test_rebuild_backtraceless (tasks.cephfs.test_data_s...
- https://pulpito.ceph.com/yuriw-2022-07-06_13:57:53-fs-wip-yuri4-testing-2022-07-05-0719-pacific-distro-default-smithi...
- 06:14 AM Bug #56483 (In Progress): mgr/stats: missing clients in perf stats command output.
07/07/2022
- 01:12 PM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- I created a PR that should fix this bug: https://github.com/ceph/ceph/pull/47006.
- 05:08 AM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- Milind, please take a look.
- 10:17 AM Feature #56489 (New): qa: test mgr plugins with standby mgr failover
- Related to https://tracker.ceph.com/issues/56269 which is seen when failing an active mgr. The standby mgr hits a tra...
- 09:50 AM Feature #55121: cephfs-top: new options to limit and order-by
- Neeraj Pratap Singh wrote:
> Jos Collin wrote:
> > Greg Farnum wrote:
> > > Can't fs top already change the sort o...
- 08:03 AM Feature #55121: cephfs-top: new options to limit and order-by
- Jos Collin wrote:
> Greg Farnum wrote:
> > Can't fs top already change the sort order? I thought that was done in N...
- 08:00 AM Feature #55121: cephfs-top: new options to limit and order-by
- Venky Shankar wrote:
> Jos Collin wrote:
> > Based on my discussion with Greg, I'm closing this ticket. Because the...
- 05:15 AM Feature #55121: cephfs-top: new options to limit and order-by
- Greg Farnum wrote:
> Can't fs top already change the sort order? I thought that was done in Neeraj's first tranche o...
- 06:57 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- Rishabh Dave wrote:
> Path directory @/home/ubuntu/cephtest/mnt.0/testdir@ is created twice. Copying following from ...
- 04:59 AM Bug #56446 (In Progress): Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.T...
- 04:59 AM Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- Path directory @/home/ubuntu/cephtest/mnt.0/testdir@ is created twice. Copying following from https://pulpito.ceph.co...
- 05:17 AM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- 05:17 AM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- Milind, please take a look.
- 02:26 AM Bug #56476 (Fix Under Review): qa/suites: evicted client unhandled in 4-compat_client.yaml
07/06/2022
- 11:08 PM Feature #55121: cephfs-top: new options to limit and order-by
- Can't fs top already change the sort order? I thought that was done in Neeraj's first tranche of improvements.
- 05:52 AM Feature #55121 (New): cephfs-top: new options to limit and order-by
- Jos Collin wrote:
> Based on my discussion with Greg, I'm closing this ticket. Because the issue that the customer r...
- 02:58 PM Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- The full backtrace is:...
- 02:53 PM Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient...
- The full backtrace is:...
- 02:42 PM Support #56443: OSD USED Size contains unknown data
- Hi Greg,
You'd need to give more details. This tracker is filed under CephFS, however, it does not mention anythin...
- 12:47 PM Bug #56483 (Resolved): mgr/stats: missing clients in perf stats command output.
- perf stats doesn't get the client info w.r.t filesystems created after running the perf stats command once with exist...
- 10:32 AM Bug #54283: qa/cephfs: is_mounted() depends on a mutable variable
- The PR for this ticket needed the fix for "ticket 56476":https://tracker.ceph.com/issues/56476 in order to pass QA runs.
- 09:10 AM Bug #56476 (Resolved): qa/suites: evicted client unhandled in 4-compat_client.yaml
- In "@4-compat_client.yaml@":https://github.com/ceph/ceph/blob/main/qa/suites/fs/upgrade/featureful_client/upgraded_cl...
- 06:54 AM Bug #56282 (Duplicate): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() ...
- This is a known bug and has been fixed upstream. The backport PR is still under review: https://tracker.cep...
07/05/2022
- 02:09 PM Feature #56428: add command "fs deauthorize"
- Hmm, I've had concerns about interfaces like this in the past. What happens if:
caps mds = "allow rw fsname=a, all...
- 09:15 AM Backport #56469 (Resolved): quincy: mgr/volumes: display in-progress clones for a snapshot
- https://github.com/ceph/ceph/pull/47894
- 09:15 AM Backport #56468 (Resolved): pacific: mgr/volumes: display in-progress clones for a snapshot
- https://github.com/ceph/ceph/pull/47112
- 09:10 AM Bug #55041 (Pending Backport): mgr/volumes: display in-progress clones for a snapshot
- 02:50 AM Backport #56465 (Resolved): pacific: xfstests-dev generic/444 test failed
- https://github.com/ceph/ceph/pull/47059
- 02:50 AM Backport #56464 (Resolved): quincy: xfstests-dev generic/444 test failed
- https://github.com/ceph/ceph/pull/47058
- 02:49 AM Bug #56010 (Pending Backport): xfstests-dev generic/444 test failed
07/04/2022
- 09:36 PM Backport #54242 (Resolved): octopus: mds: clients can send a "new" op (file operation) and crash ...
- 09:28 PM Backport #54241 (Resolved): pacific: mds: clients can send a "new" op (file operation) and crash ...
- 09:16 PM Backport #55348 (Resolved): quincy: mgr/volumes: Show clone failure reason in clone status command
- 09:10 PM Backport #55540 (Resolved): quincy: cephfs-top: multiple file system support
- 09:03 PM Backport #55336 (Resolved): quincy: Issue removing subvolume with retained snapshots - Possible q...
- 09:02 PM Backport #55428 (Resolved): quincy: unaccessible dentries after fsstress run with namespace-restr...
- 09:00 PM Backport #55626 (Resolved): quincy: cephfs-shell: put command should accept both path mandatorily...
- 09:00 PM Backport #55628 (Resolved): quincy: cephfs-shell: creates directories in local file system even i...
- 08:59 PM Backport #55630 (Resolved): quincy: cephfs-shell: saving files doesn't work as expected
- 03:15 PM Backport #56462 (Resolved): pacific: mds: crash due to seemingly unrecoverable metadata error
- https://github.com/ceph/ceph/pull/47433
- 03:15 PM Backport #56461 (Resolved): quincy: mds: crash due to seemingly unrecoverable metadata error
- https://github.com/ceph/ceph/pull/47432
- 03:10 PM Bug #54384 (Pending Backport): mds: crash due to seemingly unrecoverable metadata error
- 12:42 PM Bug #52438 (Resolved): qa: ffsb timeout
- 12:39 PM Bug #54106 (Duplicate): kclient: hang during workunit cleanup
- This is duplicated to https://tracker.ceph.com/issues/55857.
- 12:26 PM Bug #56282 (In Progress): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state(...
- 08:59 AM Backport #56056 (In Progress): pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): as...
- 08:48 AM Backport #56055 (In Progress): quincy: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): ass...
- 03:00 AM Backport #56449 (Resolved): pacific: pjd failure (caused by xattr's value not consistent between ...
- https://github.com/ceph/ceph/pull/47056
- 03:00 AM Backport #56448 (Resolved): quincy: pjd failure (caused by xattr's value not consistent between a...
- https://github.com/ceph/ceph/pull/47057
- 02:58 AM Bug #55331 (Pending Backport): pjd failure (caused by xattr's value not consistent between auth M...
- 02:44 AM Bug #56446 (Pending Backport): Test failure: test_client_cache_size (tasks.cephfs.test_client_lim...
- Seen here: https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-defa...
07/03/2022
- 12:02 PM Support #56443 (New): OSD USED Size contains unknown data
- Hi,
We have a problem, that the POOL recognizes information in a size of ~1 GB, it is associated with a type of SS...
07/02/2022
- 07:13 PM Feature #56442 (New): mds: build asok command to dump stray files and associated caps
- To diagnose what is delaying reintegration or deletion.
- 01:07 PM Bug #55762 (Fix Under Review): mgr/volumes: Handle internal metadata directories under '/volumes'...
- 01:06 PM Backport #56014 (Resolved): pacific: quota support for subvolumegroup
- 01:04 PM Feature #55401 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) for subvolu...
- 01:04 PM Backport #55802 (Resolved): pacific: mgr/volumes: allow users to add metadata (key-value pairs) f...
07/01/2022
- 05:42 PM Backport #51323: octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45159
merged
- 01:37 PM Backport #52634: octopus: mds sends cap updates with btime zeroed out
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45164
merged
- 01:36 PM Backport #50914: octopus: MDS heartbeat timed out between during executing MDCache::start_files_t...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45157
merged
- 01:26 PM Bug #50868: qa: "kern.log.gz already exists; not overwritten"
- /a/yuriw-2022-06-30_22:36:16-upgrade:pacific-x-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6907793/
- 07:02 AM Bug #54111 (Resolved): data pool attached to a file system can be attached to another file system
- 04:08 AM Feature #55121 (Closed): cephfs-top: new options to limit and order-by
- Based on my discussion with Greg, I'm closing this ticket. Because the issue that the customer reported in BZ[1] is p...
- 03:17 AM Bug #56435: octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), after wait...
- The clients have been unregistered at *_2022-06-24T20:00:11_*:...
- 03:13 AM Bug #56435 (Triaged): octopus: cluster [WRN] evicting unresponsive client smithi115: (124139), ...
- /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smithi/6897870
The unre...
- 03:01 AM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
- Laura Flores wrote:
> /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smi...
06/30/2022
- 08:18 PM Bug #52876: pacific: cluster [WRN] evicting unresponsive client smithi121 (9126), after 303.909 s...
- /a/yuriw-2022-06-24_16:58:58-rados-wip-yuri-testing-2022-06-24-0817-octopus-distro-default-smithi/6897870
- 06:55 PM Bug #50868: qa: "kern.log.gz already exists; not overwritten"
- /a/yuriw-2022-06-30_14:20:05-rados-wip-yuri3-testing-2022-06-28-1737-distro-default-smithi/6907396/
- 04:57 PM Bug #56384 (Resolved): ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- 10:04 AM Feature #56428 (New): add command "fs deauthorize"
- Since entity auth keyrings can now hold auth caps for multiple Ceph FSs, it is very tedious and very error-prone to r...
06/29/2022
- 08:38 PM Backport #56112: pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Venky Shankar wrote:
> Dhairya, please do the backport.
https://github.com/ceph/ceph/pull/46901
- 02:15 PM Backport #56112: pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Dhairya, please do the backport.
- 07:44 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- https://github.com/ceph/ceph/pull/46899
- 07:41 PM Backport #56111: quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- Venky Shankar wrote:
> Dhairya, please do the backport.
Okay, sure.
- 04:49 PM Bug #52123: mds sends cap updates with btime zeroed out
- Dhairya, please do the backport.
- 04:49 PM Bug #52123: mds sends cap updates with btime zeroed out
- Not sure what has to happen to unwedge this backport.
- 02:48 PM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Posting an update here based on discussion between me, Greg and Patrick:
Short term plan: Helper script to identif...
- 11:08 AM Bug #56416 (Resolved): qa/cephfs: delete path from cmd args after use
- Method conduct_neg_test_for_write_caps() in qa/tasks/cephfs/caps_helper.py appends path to command arguments but does...
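A minimal sketch of the pattern being described (hypothetical code, not the actual helper in qa/tasks/cephfs/caps_helper.py): if a shared command-argument list is mutated, the appended path has to be removed again after use, otherwise later calls inherit a stale path.
<pre><code class="python">
# Hypothetical sketch of the pattern described above; not the real helper.
def conduct_neg_test_for_write_caps(cmdargs, paths, run):
    for path in paths:
        cmdargs.append(path)
        try:
            run(cmdargs)
        finally:
            cmdargs.remove(path)  # forgetting this leaves the path behind for later calls

args = ["sudo", "dd", "of="]
conduct_neg_test_for_write_caps(args, ["/mnt/a", "/mnt/b"], print)
assert args == ["sudo", "dd", "of="]  # arg list is back to its original state
</code></pre>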
- 09:22 AM Bug #56414 (Fix Under Review): mounting subvolume shows size/used bytes for entire fs, not subvolume
- 09:18 AM Bug #56414 (In Progress): mounting subvolume shows size/used bytes for entire fs, not subvolume
- Hit the same issue in libcephfs.
- 09:18 AM Bug #56414 (Resolved): mounting subvolume shows size/used bytes for entire fs, not subvolume
- When mounting a subvolume at the base dir of the subvolume, the kernel client correctly shows the size/usage of a sub...
- 01:02 AM Backport #56110 (Resolved): pacific: client: choose auth MDS for getxattr with the Xs caps
- 01:01 AM Backport #56105 (Resolved): pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown err...
- 01:00 AM Backport #56016 (Resolved): pacific: crash just after MDS become active
- 01:00 AM Bug #54411 (Resolved): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 f...
- 01:00 AM Backport #55449 (Resolved): pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm ...
- 12:59 AM Backport #55993 (Resolved): pacific: client: switch to glibc's STATX macros
- 12:58 AM Backport #55935 (Resolved): pacific: client: infinite loop "got ESTALE" after mds recovery
- 12:58 AM Bug #55329 (Resolved): qa: add test case for fsync crash issue
- 12:57 AM Backport #55660 (Resolved): pacific: qa: add test case for fsync crash issue
- 12:56 AM Backport #55757 (Resolved): pacific: mds: flush mdlog if locked and still has wanted caps not sat...
06/28/2022
- 04:46 PM Bug #17594 (In Progress): cephfs: permission checking not working (MDS should enforce POSIX permi...
- 04:19 PM Bug #53045 (Resolved): stat->fsid is not unique among filesystems exported by the ceph server
- 04:04 PM Bug #53765 (Resolved): mount helper mangles the new syntax device string by qualifying the name
- 04:04 PM Fix #52068: qa: add testing for "ms_mode" mount option
- This appears to be waiting for a pacific backport.
- 04:00 PM Fix #52068: qa: add testing for "ms_mode" mount option
- I think this is in now, right?
- 04:02 PM Bug #50719 (Can't reproduce): xattr returning from the dead (sic!)
- No response in several months. Closing case. Ralph, feel free to reopen if you have more info to share.
- 03:58 PM Bug #52134 (Can't reproduce): botched cephadm upgrade due to mds failures
- Haven't seen this in some time.
- 03:53 PM Bug #41192 (Won't Fix): mds: atime not being updated persistently
- I don't see us fixing this in order to get local atime semantics. Closing WONTFIX.
- 03:52 PM Bug #50826: kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
- Handing this back to Patrick for now. I haven't seen this occur myself. Is this still a problem? Should we close it out?
- 03:17 PM Backport #56105: pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46802
merged
- 03:15 PM Backport #56110: pacific: client: choose auth MDS for getxattr with the Xs caps
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46799
merged
- 03:15 PM Backport #55449: pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46798
merged
- 03:14 PM Backport #56016: pacific: crash just after MDS become active
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46682
merged
- 03:14 PM Backport #55993: pacific: client: switch to glibc's STATX macros
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46679
merged
- 03:12 PM Backport #54577: pacific: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding pa...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46646
merged
- 03:11 PM Backport #55935: pacific: client: infinite loop "got ESTALE" after mds recovery
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46557
merged
- 03:10 PM Backport #55660: pacific: qa: add test case for fsync crash issue
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46425
merged
- 03:10 PM Backport #55659: pacific: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46424
merged
- 03:08 PM Backport #55757: pacific: mds: flush mdlog if locked and still has wanted caps not satisfied
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46423
merged
- 01:45 PM Feature #55821 (Fix Under Review): pybind/mgr/volumes: interface to check the presence of subvolu...
- 01:31 PM Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
- Hi there, sorry for delays, this was very tricky to get info on as it did not reproduce outside of our CI. So it requ...
- 12:28 PM Bug #53214 (Resolved): qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42...
- 12:00 PM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
- Venky Shankar wrote:
> Xiubo, please take a look.
Sure.
- 10:50 AM Bug #56282 (Triaged): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() ==...
- 10:50 AM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
- Xiubo, please take a look.
- 11:30 AM Bug #56384 (Fix Under Review): ceph/test.sh: check_response erasure-code didn't find erasure-code...
- 09:55 AM Bug #56380: crash: Client::_get_vino(Inode*)
- Venky Shankar wrote:
> Xiubo Li wrote:
> > This should be fixed by https://github.com/ceph/ceph/pull/45614, in http...
- 09:46 AM Bug #56380 (Duplicate): crash: Client::_get_vino(Inode*)
- Dup: https://tracker.ceph.com/issues/54653
- 09:45 AM Bug #56380: crash: Client::_get_vino(Inode*)
- Xiubo Li wrote:
> This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/p...
- 06:53 AM Bug #56380: crash: Client::_get_vino(Inode*)
- This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/pull/45614/files#dif...
- 09:54 AM Backport #56113 (Rejected): pacific: data pool attached to a file system can be attached to anoth...
- 09:53 AM Backport #56114 (Rejected): quincy: data pool attached to a file system can be attached to anothe...
- 09:48 AM Bug #56263 (Duplicate): crash: Client::_get_vino(Inode*)
- Dup: https://tracker.ceph.com/issues/54653
- 06:53 AM Bug #56263: crash: Client::_get_vino(Inode*)
- This should be fixed by https://github.com/ceph/ceph/pull/45614, in https://github.com/ceph/ceph/pull/45614/files#dif...
- 07:02 AM Bug #56249: crash: int Client::_do_remount(bool): abort
- Should be fixed by https://tracker.ceph.com/issues/54049.
- 06:41 AM Bug #56397 (Fix Under Review): client: `df` will show incorrect disk size if the quota size is no...
- 02:27 AM Bug #56397 (In Progress): client: `df` will show incorrect disk size if the quota size is not ali...
06/27/2022
- 06:23 PM Bug #54108 (Fix Under Review): qa: iogen workunit: "The following counters failed to be set on md...
06/24/2022
- 03:17 PM Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- Regression introduced with https://github.com/ceph/ceph/pull/44900.
- 04:06 AM Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- /a/yuriw-2022-06-23_14:17:25-rados-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/6894631
- 03:59 AM Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- /a/yuriw-2022-06-23_14:17:25-rados-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/6894626
/a/yuriw-2022-06-...
- 03:38 AM Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- /a/yuriw-2022-06-23_14:17:25-rados-wip-yuri6-testing-2022-06-22-1419-distro-default-smithi/6894622
- 03:36 AM Bug #56384 (Resolved): ceph/test.sh: check_response erasure-code didn't find erasure-code in output
- ...
- 10:12 AM Bug #55976 (Fix Under Review): mgr/volumes: Clone operations are failing with Assertion Error
- 09:16 AM Backport #56152: pacific: mgr/snap_schedule: schedule updates are not persisted across mgr restart
- Venky Shankar wrote:
> Milind, This is pacific only due to the usage of libsqlite in mainline vs in-memory+rados dum...
- 09:11 AM Backport #56152: pacific: mgr/snap_schedule: schedule updates are not persisted across mgr restart
- Milind, This is pacific only due to the usage of libsqlite in mainline vs in-memory+rados dump in pacific?
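For context, a minimal sketch of the pacific-style approach mentioned here (plain sqlite3; the dump/load pair below just stands in for writing the dump to a RADOS object and reading it back on mgr restart):
<pre><code class="python">
import sqlite3

def dump_db(db):
    # serialize the in-memory database as SQL text (stand-in for the rados dump)
    return "\n".join(db.iterdump())

def load_db(sql):
    # rebuild the in-memory database from the saved dump after a restart
    db = sqlite3.connect(":memory:")
    db.executescript(sql)
    return db

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE schedules (path TEXT, schedule TEXT)")
db.execute("INSERT INTO schedules VALUES ('/volumes', '1h')")
saved = dump_db(db)        # would be persisted before/at mgr shutdown
restored = load_db(saved)  # would be re-read on mgr restart
print(restored.execute("SELECT * FROM schedules").fetchall())
</code></pre>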
- 09:01 AM Bug #56012 (Fix Under Review): mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_ any_repla...
- 03:14 AM Bug #56380 (Duplicate): crash: Client::_get_vino(Inode*)
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cf152a5bd5d340d5ee9fabea...
- 03:10 AM Bug #56288 (Triaged): crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, cep...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=aeaa2b6c5a82bba2b2f33885...
- 03:10 AM Bug #56282 (Duplicate): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() ...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9e99e620a470c067176ebf0e...
- 03:09 AM Bug #56270 (Duplicate): crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Sna...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c00cdd2659181963c4fcc1ea...
- 03:09 AM Bug #56269 (Resolved): crash: File "mgr/snap_schedule/module.py", in __init__: self.client = Snap...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d2aeadfccf541e27a05866ac...
- 03:09 AM Bug #56263 (Duplicate): crash: Client::_get_vino(Inode*)
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0b704d8eeaf9a29a1e49c16c...
- 03:09 AM Bug #56261 (Triaged): crash: Migrator::import_notify_abort(CDir*, std::set<CDir*, std::less<CDir*...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ef0defbe852e18fecfcbe993...
- 03:08 AM Bug #56249 (Resolved): crash: int Client::_do_remount(bool): abort
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e023ce46f46b39b4a3c88a31...
06/23/2022
- 12:21 PM Feature #55715 (In Progress): pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
06/22/2022
- 06:40 PM Bug #23724 (Fix Under Review): qa: broad snapshot functionality testing across clients
- Does not include Ganesha.
- 06:39 PM Feature #55470 (Fix Under Review): qa: postgresql test suite workunit
- 12:16 PM Feature #55470 (In Progress): qa: postgresql test suite workunit
- 06:37 PM Bug #56169 (Fix Under Review): mgr/stats: 'perf stats' command shows incorrect output with non-ex...
- 01:49 PM Bug #56169 (Resolved): mgr/stats: 'perf stats' command shows incorrect output with non-existing m...
- When the `ceph fs perf stats` command runs with a non-existing mds_rank filter, it shows all the clients in `client_metadat...
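A minimal sketch of the intended behaviour (hypothetical data layout, not the mgr/stats code): a rank filter should be intersected with the ranks that actually exist instead of silently falling back to all clients.
<pre><code class="python">
# Hypothetical illustration; not the mgr/stats implementation.
def filter_perf_stats(client_metadata, rank_filter, existing_ranks):
    requested = set(rank_filter) & set(existing_ranks)
    return {rank: clients for rank, clients in client_metadata.items() if rank in requested}

metadata = {0: ["client.a"], 1: ["client.b"]}
print(filter_perf_stats(metadata, rank_filter=[5], existing_ranks=[0, 1]))  # {} for a non-existing rank
</code></pre>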
- 02:54 PM Backport #56014: pacific: quota support for subvolumegroup
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46668
merged
- 02:52 PM Backport #55802: pacific: mgr/volumes: allow users to add metadata (key-value pairs) for subvolum...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46515
merged
- 11:27 AM Bug #56162 (Resolved): mgr/stats: add fs_name as field in perf stats command output
- fs_name needs to be added as a field with the change in structure of perf stats output.
- 10:11 AM Backport #56104 (In Progress): pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' cr...
- 10:10 AM Backport #56103 (In Progress): quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' cra...
- 10:03 AM Backport #56108 (In Progress): quincy: mgr/volumes: Remove incorrect 'size' in the output of 'sna...
- 10:01 AM Backport #56107 (In Progress): pacific: mgr/volumes: Remove incorrect 'size' in the output of 'sn...
- 06:44 AM Backport #56106 (In Progress): quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown e...
- 06:43 AM Backport #56105 (In Progress): pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown ...
- 05:33 AM Backport #56109 (In Progress): quincy: client: choose auth MDS for getxattr with the Xs caps
- 05:30 AM Backport #56110 (In Progress): pacific: client: choose auth MDS for getxattr with the Xs caps
- 05:22 AM Backport #55449 (In Progress): pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cepha...
- 05:03 AM Backport #55661 (Resolved): quincy: qa: add test case for fsync crash issue
- 05:02 AM Backport #55658 (Resolved): quincy: mds: stuck 2 seconds and keeps retrying to find ino from auth...
- 01:46 AM Backport #56152 (Resolved): pacific: mgr/snap_schedule: schedule updates are not persisted across...
- https://github.com/ceph/ceph/pull/46797
scrub status does not reflect the correct status after mgr restart
eg...
06/21/2022
- 12:51 PM Bug #55516 (Resolved): qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data:...
- 12:50 PM Backport #55621 (Resolved): quincy: qa: fs suite tests failing with "json.decoder.JSONDecodeError...
- 10:18 AM Backport #55621 (In Progress): quincy: qa: fs suite tests failing with "json.decoder.JSONDecodeEr...
- 12:47 PM Backport #55916 (Resolved): pacific: qa: fs suite tests failing with "json.decoder.JSONDecodeErro...
- 10:16 AM Backport #55916 (In Progress): pacific: qa: fs suite tests failing with "json.decoder.JSONDecodeE...
- 12:44 PM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Tracker https://tracker.ceph.com/issues/54546 related to metadata corruption which is seen wh...
- 12:35 PM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Venky Shankar wrote:
> Tracker https://tracker.ceph.com/issues/54546 related to metadata corruption which is seen wh...
- 10:46 AM Feature #56140: cephfs: tooling to identify inode (metadata) corruption
- Tracker https://tracker.ceph.com/issues/54546 related to metadata corruption which is seen when running databases (es...
- 10:39 AM Feature #56140 (Pending Backport): cephfs: tooling to identify inode (metadata) corruption
- 10:42 AM Bug #54345 (Resolved): mds: try to reset heartbeat when fetching or committing.
- 10:42 AM Backport #55342 (Resolved): quincy: mds: try to reset heartbeat when fetching or committing.
- 10:42 AM Backport #55343 (Resolved): pacific: mds: try to reset heartbeat when fetching or committing.
- 10:40 AM Bug #55129 (Resolved): client: get stuck forever when the forward seq exceeds 256
- 10:40 AM Backport #55346 (Resolved): pacific: client: get stuck forever when the forward seq exceeds 256
- 10:39 AM Backport #55345 (Resolved): quincy: client: get stuck forever when the forward seq exceeds 256
- 10:37 AM Bug #16739 (Resolved): Client::setxattr always sends setxattr request to MDS
- 10:36 AM Backport #55192 (Resolved): pacific: Client::setxattr always sends setxattr request to MDS
- 10:36 AM Backport #55447 (Resolved): quincy: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm d...
- 10:11 AM Bug #54971 (Resolved): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics...
- 10:11 AM Backport #55338 (Resolved): pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.te...
- 10:10 AM Bug #50033 (Resolved): mgr/stats: be resilient to offline MDS rank-0
- 10:08 AM Backport #54479 (Resolved): pacific: mgr/stats: be resilient to offline MDS rank-0
- 06:27 AM Bug #56116: mds: handle deferred client request core when mds reboot
- Venky Shankar wrote:
> Hi,
>
> Do you have a specific reproducer for this (in the form of a workload)?
>
> Che...
- 05:34 AM Bug #56116: mds: handle deferred client request core when mds reboot
- Hi,
Do you have a specific reproducer for this (in the form of a workload)?
Cheers,
Venky
- 04:13 AM Bug #56116 (Fix Under Review): mds: handle deferred client request core when mds reboot
06/20/2022
- 10:46 PM Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.ex...
- The counters mds.imported and mds.exported were not incremented during iogen workloads. I'm not sure why the exports ...
- 10:04 PM Bug #55807 (Duplicate): qa failure: workload iogen failed
- Duplicate of https://tracker.ceph.com/issues/54108
- 05:34 PM Bug #50546 (Won't Fix): nautilus: qa: 'The following counters failed to be set on mds daemons: {'...
- nautilus is EOL
- 12:37 PM Bug #56063 (Triaged): Snapshot retention config lost after mgr restart
- Milind, please take a look.
- 09:27 AM Backport #56114 (In Progress): quincy: data pool attached to a file system can be attached to ano...
- 04:36 AM Backport #56114 (Rejected): quincy: data pool attached to a file system can be attached to anothe...
- https://github.com/ceph/ceph/pull/46752
- 09:26 AM Backport #56113 (In Progress): pacific: data pool attached to a file system can be attached to an...
- 04:36 AM Backport #56113 (Rejected): pacific: data pool attached to a file system can be attached to anoth...
- https://github.com/ceph/ceph/pull/46751
- 09:25 AM Feature #56058 (Fix Under Review): mds/MDBalancer: add an arg to limit depth when dump loads for ...
- 09:14 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo, there is another class of failures for this test. See - https://p...
- 04:06 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> Xiubo, there is another class of failures for this test. See - https://pulpito.ceph.com/vshan...
- 03:52 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo, there is another class of failures for this test. See - https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-...
- 08:50 AM Backport #55797 (Resolved): quincy: mgr/volumes: allow users to add metadata (key-value pairs) fo...
- 08:49 AM Feature #54472 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- 08:48 AM Bug #54375 (Resolved): mgr/volumes: The 'mode' argument is not honored on idempotent subvolume cr...
- 08:48 AM Backport #54574 (Resolved): quincy: mgr/volumes: The 'mode' argument is not honored on idempotent...
- 08:47 AM Backport #54573 (Resolved): pacific: mgr/volumes: The 'mode' argument is not honored on idempoten...
- 08:47 AM Bug #54049 (Resolved): ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expect...
- 08:46 AM Backport #54478 (Resolved): pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then path...
- 08:45 AM Backport #54477 (Resolved): quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path ...
- 08:43 AM Backport #55039 (Resolved): quincy: ceph-fuse: mount -a on already mounted folder should be ignored
- 08:42 AM Backport #55413 (Resolved): quincy: mds: add perf counter to record slow replies
- 08:42 AM Backport #55412 (Resolved): pacific: mds: add perf counter to record slow replies
- 08:40 AM Backport #55376 (Resolved): quincy: mgr/volumes: allow users to add metadata (key-value pairs) to...
- 08:27 AM Bug #56116 (Pending Backport): mds: handle deferred client request core when mds reboot
- When the mds reboots, the client will send `mds_requests` and `client_reconnect` to the mds.
If the mds does not receive the `cli...
- 04:33 AM Bug #54111 (Pending Backport): data pool attached to a file system can be attached to another fil...
- 04:30 AM Backport #56112 (Resolved): pacific: Test failure: test_flush (tasks.cephfs.test_readahead.TestRe...
- 04:30 AM Backport #56111 (Resolved): quincy: Test failure: test_flush (tasks.cephfs.test_readahead.TestRea...
- 04:30 AM Backport #56110 (Resolved): pacific: client: choose auth MDS for getxattr with the Xs caps
- https://github.com/ceph/ceph/pull/46799
- 04:30 AM Backport #56109 (Resolved): quincy: client: choose auth MDS for getxattr with the Xs caps
- https://github.com/ceph/ceph/pull/46800
- 04:30 AM Cleanup #3998 (Resolved): mds: split up mdstypes
- 04:27 AM Bug #55538 (Pending Backport): Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- 04:25 AM Backport #56108 (Resolved): quincy: mgr/volumes: Remove incorrect 'size' in the output of 'snapsh...
- https://github.com/ceph/ceph/pull/46804
- 04:25 AM Backport #56107 (Resolved): pacific: mgr/volumes: Remove incorrect 'size' in the output of 'snaps...
- https://github.com/ceph/ceph/pull/46803
- 04:25 AM Backport #56106 (Resolved): quincy: ceph-fuse[88614]: ceph mount failed with (65536) Unknown erro...
- https://github.com/ceph/ceph/pull/46801
- 04:25 AM Backport #56105 (Resolved): pacific: ceph-fuse[88614]: ceph mount failed with (65536) Unknown err...
- https://github.com/ceph/ceph/pull/46802
- 04:25 AM Bug #55778 (Pending Backport): client: choose auth MDS for getxattr with the Xs caps
- 04:22 AM Bug #55824 (Pending Backport): ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
- 04:21 AM Bug #55822 (Pending Backport): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot in...
- 04:21 AM Bug #55822 (Resolved): mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
- 04:20 AM Backport #56104 (Resolved): pacific: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- https://github.com/ceph/ceph/pull/46806
- 04:20 AM Backport #56103 (Resolved): quincy: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- https://github.com/ceph/ceph/pull/46805
- 04:18 AM Bug #55759 (Pending Backport): mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
- 04:17 AM Bug #56065 (Resolved): qa: TestMDSMetrics.test_delayed_metrics failure
- 02:14 AM Bug #56011 (Fix Under Review): fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
06/17/2022
- 08:31 PM Backport #55927: pacific: Unexpected file access behavior using ceph-fuse
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46596
merged
- 08:31 PM Backport #55932: pacific: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*,...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46567
merged
- 08:30 PM Backport #55338: pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45293
merged
- 08:30 PM Backport #54479: pacific: mgr/stats: be resilient to offline MDS rank-0
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45293
merged
06/16/2022
- 09:22 PM Backport #55349: pacific: mgr/volumes: Show clone failure reason in clone status command
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45928
merged
- 02:31 PM Bug #56067: Cephfs data loss with root_squash enabled
- Odd. From the log:
> 2022-06-15T15:52:50.284+0000 7f26c9d35700 10 MDSAuthCap is_capable inode(path /npx/stress-tes...
- 02:02 PM Bug #56067: Cephfs data loss with root_squash enabled
- Running sync after creating the file did not report errors.
- 01:46 PM Bug #56067: Cephfs data loss with root_squash enabled
- Try running `sync` after creating the file. It should report errors.
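A minimal sketch of that check (hypothetical paths; whether the error actually surfaces is exactly what is being tested here): on CephFS, write-path errors often only show up at fsync()/close(), so a reproducer should flush explicitly and look at the result.
<pre><code class="python">
import os

path = "/mnt/cephfs/testfile"   # hypothetical mount point
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
try:
    os.write(fd, b"hello\n")
    os.fsync(fd)                # an OSError here would indicate the write was rejected
finally:
    os.close(fd)
print("size after flush:", os.stat(path).st_size)
</code></pre>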
- 11:20 AM Backport #55375 (Resolved): pacific: mgr/volumes: allow users to add metadata (key-value pairs) t...
- merged.
- 10:26 AM Backport #54480 (Resolved): quincy: mgr/stats: be resilient to offline MDS rank-0
- 10:23 AM Backport #55337 (Resolved): quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.tes...
- 10:18 AM Bug #56011: fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
- Xiubo Li wrote:
> From https://pulpito.ceph.com/vshankar-2022-06-10_05:38:08-fs-wip-vshankar-testing1-20220607-10413...
- 06:10 AM Bug #56011: fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
- From https://pulpito.ceph.com/vshankar-2022-06-10_05:38:08-fs-wip-vshankar-testing1-20220607-104134-testing-default...
- 08:58 AM Bug #55446: mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
- /a/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-default-smithi/6881176
/a/yuriw-...
- 06:33 AM Bug #55759 (Fix Under Review): mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
06/15/2022
- 04:07 PM Bug #56067 (New): Cephfs data loss with root_squash enabled
- With root_squash client capability enabled, a file is created as a non-root user on one host, appears empty when read...
- 02:28 PM Backport #55335: pacific: Issue removing subvolume with retained snapshots - Possible quincy regr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46139
merged - 02:28 PM Backport #55412: pacific: mds: add perf counter to record slow replies
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46138
merged - 02:27 PM Backport #55384: pacific: mgr/snap_schedule: include timezone information in scheduled snapshots
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45968
merged - 02:25 PM Backport #55192: pacific: Client::setxattr always sends setxattr request to MDS
- Xiubo Li wrote:
> https://github.com/ceph/ceph/pull/45792
merged - 01:58 PM Bug #56065 (Fix Under Review): qa: TestMDSMetrics.test_delayed_metrics failure
- 01:42 PM Bug #56065 (Resolved): qa: TestMDSMetrics.test_delayed_metrics failure
- TestMDSMetrics.test_delayed_metrics fails with the following message:...
- 11:47 AM Bug #56063 (Closed): Snapshot retention config lost after mgr restart
- In https://tracker.ceph.com/issues/54052 the issue that scheduled snapshots are not longer created after a mgr restar...
- 08:55 AM Feature #56058: mds/MDBalancer: add an arg to limit depth when dump loads for dirfrags
- https://github.com/ceph/ceph/pull/46685
- 08:54 AM Feature #56058 (Pending Backport): mds/MDBalancer: add an arg to limit depth when dump loads for ...
- Directory hierarchy may be deep for a large filesystem, cmd dump loads would output
a lot and take a long time. So a... - 07:05 AM Backport #56056 (Resolved): pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): asser...
- https://github.com/ceph/ceph/pull/46949
- 07:05 AM Backport #56055 (Resolved): quincy: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert...
- https://github.com/ceph/ceph/pull/46948
- 07:03 AM Bug #54653 (Pending Backport): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag...
- 02:05 AM Bug #56011 (In Progress): fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
- 02:01 AM Backport #56015 (In Progress): quincy: crash just after MDS become active
- 02:00 AM Backport #56016 (In Progress): pacific: crash just after MDS become active
- 01:53 AM Backport #55994 (In Progress): quincy: client: switch to glibc's STATX macros
- 01:53 AM Backport #55993 (In Progress): pacific: client: switch to glibc's STATX macros