Activity
From 04/13/2022 to 05/12/2022
05/12/2022
- 06:06 PM Backport #55630 (Resolved): quincy: cephfs-shell: saving files doesn't work as expected
- 06:05 PM Backport #55629 (Resolved): pacific: cephfs-shell: saving files doesn't work as expected
- 06:05 PM Backport #55628 (Resolved): quincy: cephfs-shell: creates directories in local file system even i...
- 06:05 PM Backport #55627 (Resolved): pacific: cephfs-shell: creates directories in local file system even ...
- 06:05 PM Backport #55626 (Resolved): quincy: cephfs-shell: put command should accept both path mandatorily...
- 06:05 PM Backport #55625 (Resolved): pacific: cephfs-shell: put command should accept both path mandatoril...
- 02:50 PM Support #55486: cephfs degraded during upgrade from 16.2.5 -> 16.2.6
- Venky Shankar wrote:
> Hi Jesse,
>
> Do you have the MDS logs when the file system was reported as damaged? cepha...
- 12:20 PM Backport #55621: quincy: qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra dat...
- Note to backporter - needs to be picked up with https://github.com/ceph/ceph/pull/44151 backport
- 12:12 PM Backport #55621 (Resolved): quincy: qa: fs suite tests failing with "json.decoder.JSONDecodeError...
- https://github.com/ceph/ceph/pull/46469
- 11:52 AM Bug #55620 (Pending Backport): ceph pacific fails to perform fs/multifs test
- During execution of the integration tests (IBM Z, BE), the fs/multifs suite produces a set of errors related to segfaul...
- 09:46 AM Bug #55112 (Pending Backport): cephfs-shell: saving files doesn't work as expected
- 09:46 AM Bug #55216 (Pending Backport): cephfs-shell: creates directories in local file system even if fil...
- 09:46 AM Bug #55242 (Pending Backport): cephfs-shell: put command should accept both path mandatorily and ...
- 09:45 AM Bug #53996 (Resolved): qa: update fs:upgrade tasks to upgrade from pacific instead of octopus, or...
- 09:43 AM Bug #55516 (Pending Backport): qa: fs suite tests failing with "json.decoder.JSONDecodeError: Ext...
- 09:41 AM Bug #55572 (Resolved): qa/cephfs: omit_sudo doesn't have effect when passed to run_shell()
- 03:04 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Another instance: https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-...
05/11/2022
- 12:10 PM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Jeff Layton wrote:
> > Anyway we need to make sure the unlink and create sequence in either the client side or the M...
- 10:34 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- > Anyway we need to make sure the unlink and create sequence in either the client side or the MDS side.
We do d_dr...
- 09:37 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > This happens at least once when running fs s...
- 09:13 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo Li wrote:
> Locally when I running the _*snaptest-git-ceph.sh*_ tests there had two MDS daemons crash with:
>...
- 09:06 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo Li wrote:
> Venky Shankar wrote:
> > This happens at least once when running fs suite. Latest - https://pulpi...
- 08:32 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Locally, when running the _*snaptest-git-ceph.sh*_ tests, two MDS daemons crashed with:...
- 06:08 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> This happens at least once when running fs suite. Latest - https://pulpito.ceph.com/vshankar-...
- 05:44 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- This happens at least once when running fs suite. Latest - https://pulpito.ceph.com/vshankar-2022-05-09_10:08:21-fs-w...
05/10/2022
- 05:28 PM Fix #55536 (Resolved): cephfs-shell: print proper python error message
- 02:47 PM Backport #55413: quincy: mds: add perf counter to record slow replies
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46156
merged
- 02:45 PM Backport #55540: quincy: cephfs-top: multiple file system support
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46147
merged
- 02:43 PM Backport #55376: quincy: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45994
merged
- 02:43 PM Backport #55039: quincy: ceph-fuse: mount -a on already mounted folder should be ignored
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45939
merged
- 09:38 AM Bug #55553 (Resolved): qa/vstart_runner: LocalFuseMount._run_mount_cmd() doesn't return values on...
- 07:22 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> Another instance: https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testi...
- 04:48 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- The kernel crash issue has been fixed in https://patchwork.kernel.org/project/ceph-devel/list/?series=639983.
- 04:57 AM Bug #54701 (Fix Under Review): crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CIn...
- 03:56 AM Bug #54701: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDentry*, MDR...
- Venky Shankar wrote:
> I've managed to reproduce this crash today. Will send out a fix.
OK. This was not the exac...
05/09/2022
- 07:08 PM Bug #55583 (Resolved): Intermittent ParsingError failure in mgr/volumes module during "clone can...
- This issue is a bit difficult to explain so please bear with me.
On quincy, but not on octopus or pacific, a test ...
- 02:05 PM Backport #55580 (Resolved): pacific: snap_schedule: avoid throwing traceback for bad or missing a...
- 02:05 PM Backport #55579 (In Progress): quincy: snap_schedule: avoid throwing traceback for bad or missing...
- 02:01 PM Bug #54560 (Pending Backport): snap_schedule: avoid throwing traceback for bad or missing arguments
- 01:29 PM Backport #52427 (Resolved): pacific: qa: "error reading sessionmap 'mds1_sessionmap'"
- 12:44 PM Bug #55537 (Triaged): mds: crash during fs:upgrade test
- 12:43 PM Bug #55538 (Triaged): Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- 07:27 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- There is no debug kernel for _*kernel-5.18.0_rc2_ceph_g1771083b2f18-1.x86_64.rpm*_, which can be fetched from [1], so ...
- 05:14 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- From _*remote/smithi183/syslog/kern.log.gz*_, the kernel crashed due to a _*NULL pointer dereference*_:
<pr...
05/06/2022
- 04:36 PM Bug #55572 (Fix Under Review): qa/cephfs: omit_sudo doesn't have effect when passed to run_shell()
- 04:36 PM Bug #55572: qa/cephfs: omit_sudo doesn't have effect when passed to run_shell()
- Should this too be backported?
- 04:20 PM Bug #55572 (Resolved): qa/cephfs: omit_sudo doesn't have effect when passed to run_shell()
- run_shell() of mount.py accepts omit_sudo but doesn't pass it to underlying methods and therefore has no effect. This...
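The bug pattern described in this entry can be sketched as follows. This is a hypothetical, simplified illustration (not the actual qa/tasks/cephfs/mount.py code): a wrapper accepts an omit_sudo flag but forgets to forward it to the lower layer, so callers see no effect.

```python
# Hypothetical sketch of the omit_sudo bug pattern; names are illustrative.
def _lower_run(args, omit_sudo=True):
    # The layer that actually builds the command line.
    return list(args) if omit_sudo else ["sudo"] + list(args)

def run_shell_buggy(args, omit_sudo=True):
    return _lower_run(args)  # bug: omit_sudo silently dropped

def run_shell_fixed(args, omit_sudo=True):
    return _lower_run(args, omit_sudo=omit_sudo)  # fix: flag threaded through
```

With the buggy version, passing omit_sudo=False changes nothing; the fixed version prepends sudo as requested.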
- 05:47 AM Fix #55567 (Resolved): cephfs-shell: rm returns just the error code and not proper error msg
- ...
- 04:20 AM Bug #55558: qa/cephfs: mon cap not properly tested in caps_helper.py
- Rishabh Dave wrote:
> Venky, better to backport this fix? It might prevent some bugs reaching downstream.
ACK.
- 04:19 AM Bug #55557: qa/cephfs: setting to sudo to True has no effect on _run_python()
- Rishabh Dave wrote:
> Venky, better to backport this fix? It might prevent some bugs reaching downstream.
ACK.
- 04:17 AM Backport #55428 (In Progress): quincy: unaccessible dentries after fsstress run with namespace-re...
- 04:12 AM Backport #55427 (In Progress): pacific: unaccessible dentries after fsstress run with namespace-r...
- 02:52 AM Backport #55342 (In Progress): quincy: mds: try to reset heartbeat when fetching or committing.
- 02:45 AM Backport #55343 (In Progress): pacific: mds: try to reset heartbeat when fetching or committing.
- 02:23 AM Backport #55346 (In Progress): pacific: client: get stuck forever when the forward seq exceeds 256
- 02:20 AM Backport #55345 (In Progress): quincy: client: get stuck forever when the forward seq exceeds 256
05/05/2022
- 04:03 PM Bug #55557: qa/cephfs: setting to sudo to True has no effect on _run_python()
- Venky, better to backport this fix? It might prevent some bugs reaching downstream.
- 02:35 PM Bug #55557 (Fix Under Review): qa/cephfs: setting to sudo to True has no effect on _run_python()
- 02:33 PM Bug #55557 (Pending Backport): qa/cephfs: setting to sudo to True has no effect on _run_python()
- 04:02 PM Bug #55558: qa/cephfs: mon cap not properly tested in caps_helper.py
- Venky, better to backport this fix? It might prevent some bugs reaching downstream.
- 04:01 PM Bug #55558 (Fix Under Review): qa/cephfs: mon cap not properly tested in caps_helper.py
- 02:51 PM Bug #55558 (Pending Backport): qa/cephfs: mon cap not properly tested in caps_helper.py
- Tests were written with the assumption that the MDS cap @allow rw@ enables a client to access only the default FS, but the truth is ...
- 04:01 PM Bug #55553 (Fix Under Review): qa/vstart_runner: LocalFuseMount._run_mount_cmd() doesn't return v...
- 10:41 AM Bug #55553 (Resolved): qa/vstart_runner: LocalFuseMount._run_mount_cmd() doesn't return values on...
- @FuseMount._run_mount_cmd()@ returns a tuple containing the command's return value, standard output and standard error wh...
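The return contract implied here can be sketched like this. A hedged, simplified illustration (not the real vstart_runner code): the runner should always hand back a (returncode, stdout, stderr) triple so callers can unpack it uniformly, instead of returning None in the override.

```python
# Hypothetical sketch: always return the (returncode, stdout, stderr) triple.
import subprocess

def run_mount_cmd(cmd):
    proc = subprocess.run(cmd, capture_output=True, text=True)
    # Return the triple even on failure rather than dropping it.
    return proc.returncode, proc.stdout, proc.stderr
```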
- 11:28 AM Feature #55554 (In Progress): cephfs-shell: 'rm' cmd needs -r and -f options
- Currently, in cephfs-shell, there is no option to delete a non-empty directory. If there is any dir that contains fil...
- 11:19 AM Bug #55516 (Fix Under Review): qa: fs suite tests failing with "json.decoder.JSONDecodeError: Ext...
- 10:15 AM Bug #40860 (Fix Under Review): cephfs-shell: raises incorrect error when regfiles are passed to b...
- 07:39 AM Bug #55041 (Fix Under Review): mgr/volumes: display in-progress clones for a snapshot
- 06:36 AM Backport #55413 (In Progress): quincy: mds: add perf counter to record slow replies
- 02:51 AM Bug #54411: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem i...
- A small fix https://github.com/ceph/ceph/pull/46153 followed.
- 12:40 AM Bug #55332 (In Progress): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
05/04/2022
- 03:00 PM Backport #55540 (In Progress): quincy: cephfs-top: multiple file system support
- 09:37 AM Backport #55540 (Resolved): quincy: cephfs-top: multiple file system support
- https://github.com/ceph/ceph/pull/46147
- 02:40 PM Backport #55539 (In Progress): pacific: cephfs-top: multiple file system support
- 09:37 AM Backport #55539 (Resolved): pacific: cephfs-top: multiple file system support
- https://github.com/ceph/ceph/pull/46146
- 11:57 AM Bug #55516: qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 col...
- Jos figured out that this failure is due to range-based (CIDR) blocklisting changes.
- 04:31 AM Bug #55516: qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 col...
- Another instance: https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-test...
- 11:41 AM Bug #40860 (In Progress): cephfs-shell: raises incorrect error when regfiles are passed to be del...
- 09:33 AM Bug #47849 (Resolved): qa/vstart_runner: LocalRemote.run can't take multiple commands
- 09:30 AM Feature #50470 (Pending Backport): cephfs-top: multiple file system support
- 09:30 AM Bug #48863 (Resolved): cephfs-shell should allow changing all mode bits
- 09:23 AM Bug #55538 (Resolved): Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
- - https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smit...
- 09:19 AM Bug #55537 (Triaged): mds: crash during fs:upgrade test
- - https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smit...
- 09:11 AM Bug #55313: Unexpected file access behavior using ceph-fuse
- Yes, the PR is posted and will be backported.
- 09:08 AM Backport #55336 (In Progress): quincy: Issue removing subvolume with retained snapshots - Possibl...
- 09:06 AM Backport #55335 (In Progress): pacific: Issue removing subvolume with retained snapshots - Possib...
- 08:15 AM Backport #55412 (In Progress): pacific: mds: add perf counter to record slow replies
- 07:38 AM Fix #55536 (Resolved): cephfs-shell: print proper python error message
- onecmd() is the global error catcher in cephfs-shell. Whenever a Python exception occurs in cephfs-shell, it would ...
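The onecmd() error-catcher pattern can be sketched with the standard library's cmd module. This is a minimal, hypothetical example (not the actual cephfs-shell code): override onecmd() so any exception raised by a command surfaces its Python error message instead of a bare error code.

```python
# Hypothetical sketch of a global error catcher in a cmd.Cmd-based shell.
import cmd

class DemoShell(cmd.Cmd):
    last_error = None

    def do_boom(self, arg):
        # A command that fails, to exercise the catcher.
        raise ValueError("bad argument: %s" % arg)

    def onecmd(self, line):
        try:
            return super().onecmd(line)
        except Exception as e:
            # Print the actual exception text for the user.
            self.last_error = str(e)
            print("error: %s" % e)
            return False
```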
- 06:22 AM Feature #54978 (Fix Under Review): cephfs-top:addition of filesystem menu(improving GUI)
- 04:57 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Another instance: https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-test...
05/02/2022
- 05:03 PM Bug #55516: qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 col...
- From logs for a sample test:...
- 05:02 PM Bug #55516 (Resolved): qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data:...
- Started seeing these failures recently:
- https://pulpito.ceph.com/vshankar-2022-05-02_09:11:25-fs-wip-vshankar-te...
- 01:28 PM Support #55486 (In Progress): cephfs degraded during upgrade from 16.2.5 -> 16.2.6
- Hi Jesse,
Do you have the MDS logs when the file system was reported as damaged? cephadm does set the relevant con...
- 12:52 PM Backport #55239 (In Progress): quincy: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-...
- 12:52 PM Backport #55238 (In Progress): pacific: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an...
- 12:44 PM Bug #55464 (In Progress): cephfs: mds/client error when client stale reconnect
- 12:21 PM Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MD...
- I don't think you have enough information to solve this:
It's not clear which test actually failed. pjdfstests con...
- 11:57 AM Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MD...
- Assigning to Xiubo for further investigation about which commit fixed this issue.
- 06:59 AM Bug #55041: mgr/volumes: display in-progress clones for a snapshot
- from irc chat:...
- 06:37 AM Bug #53601 (Resolved): vstart_runner: Running test_data_scan test locally fails with tracebacks
- 06:29 AM Feature #55401 (Fix Under Review): mgr/volumes: allow users to add metadata (key-value pairs) for...
- 04:55 AM Backport #55413: quincy: mds: add perf counter to record slow replies
- Nikhil, please take this.
- 04:55 AM Backport #55412: pacific: mds: add perf counter to record slow replies
- Nikhil, please take this.
- 04:53 AM Backport #55385: quincy: mgr/snap_schedule: include timezone information in scheduled snapshots
- Milind, please take this.
04/30/2022
- 08:18 AM Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MD...
- Since the teuthology run used a kclient, I tried running 100 iterations of pjd.sh on the latest testing kernel 5.18.0...
04/29/2022
- 04:32 PM Support #55486: cephfs degraded during upgrade from 16.2.5 -> 16.2.6
- I've managed to fix this, and am posting here to save anyone else from wasting as much time as I did.
After some...
- 03:48 PM Backport #55348: quincy: mgr/volumes: Show clone failure reason in clone status command
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45927
merged
- 03:47 PM Backport #55337: quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metri...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45291
merged
- 03:46 PM Backport #54480: quincy: mgr/stats: be resilient to offline MDS rank-0
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45291
merged
- 08:44 AM Bug #55313: Unexpected file access behavior using ceph-fuse
- Thanks, I can confirm that this works and as you mentioned does slow down file access. In our case, which is an rsnyc...
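For reference, the workaround discussed in this thread can be made persistent with a ceph.conf fragment; a sketch, assuming the [client] section is the appropriate scope for the deployment:

```ini
# Workaround from the thread above: let FUSE do the permission checks
# instead of the ceph-fuse client; note the reported cost to file access speed.
[client]
    fuse_default_permissions = true
```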
04/28/2022
- 07:30 PM Support #55486 (In Progress): cephfs degraded during upgrade from 16.2.5 -> 16.2.6
- Hello everyone. I've tried upgrading my ceph cluster by a point release following instructions here: https://docs.cep...
- 05:29 PM Bug #54546: mds: crash due to corrupt inode and omap entry
- Saw this in another cluster. The corruption is seen in the EMetaBlob journal event. The inode+dentry fetch from the j...
- 12:44 PM Bug #55313: Unexpected file access behavior using ceph-fuse
- Matthias Aebi wrote:
> Ok, thank you. I'll certainly give this a try. Besides some cost in performance, does this ha...
- 12:35 PM Bug #55313: Unexpected file access behavior using ceph-fuse
- Ok, thank you. I'll certainly give this a try. Besides some cost in performance, does this have any impact on who mig...
- 12:26 PM Bug #55313: Unexpected file access behavior using ceph-fuse
- Hi Matthias,
A quick workaround would be to set "fuse_default_permissions=true", but it might cost you performance.
- 12:24 PM Bug #55313 (Fix Under Review): Unexpected file access behavior using ceph-fuse
- 05:01 AM Bug #55170 (Fix Under Review): mds: crash during rejoin (CDir::fetch_keys)
04/27/2022
- 05:11 PM Bug #54701: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDentry*, MDR...
- I've managed to reproduce this crash today. Will send out a fix.
- 02:26 PM Feature #55470 (Resolved): qa: postgresql test suite workunit
- Run postgresql database test suite as a workunit for cephfs.
- 06:58 AM Bug #55464 (In Progress): cephfs: mds/client error when client stale reconnect
- Options:
mds_session_blocklist_on_evict: false
mds_session_blocklist_on_timeout: false
client_reconnect_stal...
- 05:42 AM Feature #55463 (Duplicate): cephfs-top: allow users to chose sorting order
- Right now, the client list is sorted based on client connection order. Allow users to choose a sort field. This would...
04/26/2022
- 05:01 PM Bug #54236 (Resolved): qa/cephfs: change default timeout from 900 secs to 300
- 12:40 PM Feature #48911 (Fix Under Review): cephfs-shell needs "ln" command equivalent
- 09:45 AM Backport #55449 (Resolved): pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm ...
- https://github.com/ceph/ceph/pull/46798
- 06:19 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> Xiubo, please take a look.
Sure.
- 05:46 AM Bug #55332 (Triaged): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo, please take a look.
- 05:54 AM Bug #55313: Unexpected file access behavior using ceph-fuse
- Thanks for the report, Matthias. This seems straightforward to reproduce.
Kotresh, please take a look.
- 05:49 AM Bug #55316: qa: add client asok support to get the options
- Neeraj, guessing this is probably required for writing tests to be run by vstart_runner for https://github.com/ceph/ce...
- 05:47 AM Bug #55331 (Triaged): pjd failure (caused by xattr's value not consistent between auth MDS and re...
- Milind, please take a look.
- 04:30 AM Backport #55447 (Resolved): quincy: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm d...
- https://github.com/ceph/ceph/pull/46476
- 04:25 AM Bug #54411 (Pending Backport): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon...
- 02:36 AM Bug #55446 (New): mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' c...
- mgr-nfs-upgrade example: /a/yuriw-2022-04-23_16:12:08-rados-wip-55324-pacific-backport-distro-default-smithi/6803121
...
04/25/2022
- 02:20 PM Backport #55428 (Resolved): quincy: unaccessible dentries after fsstress run with namespace-restr...
- https://github.com/ceph/ceph/pull/46184
- 02:20 PM Backport #55427 (Resolved): pacific: unaccessible dentries after fsstress run with namespace-rest...
- https://github.com/ceph/ceph/pull/46183
- 02:19 PM Bug #54046 (Pending Backport): unaccessible dentries after fsstress run with namespace-restricted...
04/22/2022
- 05:30 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Hmm, the stuff in split_at looks like we can and should just swap the logic -- instead of iterating over all inodes i...
- 02:12 PM Feature #55414 (New): mds:asok interface to cleanup permanently damaged inodes
- There exists a nagging bug in the MDS due to a corrupt on-disk inode causing an assert in the MDS when removing the ...
- 12:00 PM Backport #55413 (Resolved): quincy: mds: add perf counter to record slow replies
- https://github.com/ceph/ceph/pull/46156
- 12:00 PM Backport #55412 (Resolved): pacific: mds: add perf counter to record slow replies
- https://github.com/ceph/ceph/pull/46138
- 11:57 AM Feature #55126 (Pending Backport): mds: add perf counter to record slow replies
- 11:57 AM Feature #55126 (Resolved): mds: add perf counter to record slow replies
- 08:57 AM Bug #55409 (Resolved): client: incorrect operator precedence in Client.cc
- Here's the code I am referring to in following explanation - https://github.com/ceph/ceph/commit/ad61e1dd1a56cd27be17...
- 03:07 AM Backport #55376 (In Progress): quincy: mgr/volumes: allow users to add metadata (key-value pairs)...
04/21/2022
- 01:22 PM Bug #55394 (Pending Backport): qa/cephfs: don't exclamation mark on test_cephfs_shell.py
- 08:40 AM Feature #55401 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) for subvolu...
- This is similar to subvolume metadata get/set/list/remove. Updating an existing key should be supported.
The snapsho...
- 05:45 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- NOTE: in this test the _*inline data*_ is enabled:...
04/20/2022
- 11:30 AM Bug #55394 (Pending Backport): qa/cephfs: don't exclamation mark on test_cephfs_shell.py
- Exclamation mark is a special character for bash as well as
cephfs-shell. For bash, it substitutes current command w...
- 04:14 AM Backport #55384 (In Progress): pacific: mgr/snap_schedule: include timezone information in schedu...
04/19/2022
- 05:30 PM Backport #55385 (Resolved): quincy: mgr/snap_schedule: include timezone information in scheduled ...
- https://github.com/ceph/ceph/pull/47734
- 05:30 PM Backport #55384 (Resolved): pacific: mgr/snap_schedule: include timezone information in scheduled...
- https://github.com/ceph/ceph/pull/45968
- 05:27 PM Bug #54374 (Pending Backport): mgr/snap_schedule: include timezone information in scheduled snaps...
- 06:30 AM Bug #54374 (Fix Under Review): mgr/snap_schedule: include timezone information in scheduled snaps...
- 03:16 PM Backport #55375 (In Progress): pacific: mgr/volumes: allow users to add metadata (key-value pairs...
- 11:25 AM Backport #55375 (Resolved): pacific: mgr/volumes: allow users to add metadata (key-value pairs) t...
- https://github.com/ceph/ceph/pull/45961
- 11:46 AM Bug #55240 (Fix Under Review): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 11:39 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- I have created a new tracker #55377 to fix the kernel issue in https://tracker.ceph.com/issues/55240#note-4.
And thi...
- 09:16 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- Another issue in this failure:
In _*mds.1*_, after it finds the inode for _*#0x1/client.0/tmp/fsstress/ltp-full-2009...
- 06:18 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- The file _*#0x1/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress*_ was created in _*mds....
- 05:52 AM Bug #55240 (In Progress): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 11:25 AM Backport #55376 (Resolved): quincy: mgr/volumes: allow users to add metadata (key-value pairs) to...
- https://github.com/ceph/ceph/pull/45994
- 11:22 AM Feature #54472 (Pending Backport): mgr/volumes: allow users to add metadata (key-value pairs) to ...
- 09:19 AM Bug #55196 (In Progress): mgr/stats: perf stats command doesn't have filter option for fs names.
- 09:19 AM Bug #55234 (Fix Under Review): snap_schedule: replace .snap with the client configured snap dir name
- 09:12 AM Feature #51434 (Fix Under Review): pybind/mgr/volumes: add basic introspection
- 09:01 AM Backport #55338 (In Progress): pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs...
- 08:56 AM Backport #55337 (In Progress): quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs....
- 04:04 AM Backport #55039 (In Progress): quincy: ceph-fuse: mount -a on already mounted folder should be ig...
04/18/2022
- 04:00 PM Backport #55056: pacific: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr r...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45906
merged
- 04:00 PM Backport #53760: pacific: snap scheduler: cephfs snapshot schedule status doesn't list the snapsh...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45906
merged
- 09:50 AM Bug #55354 (Resolved): cephfs: xfstests-dev can't be run against fuse mounted cephfs
- This will require 2 steps -
1. Modify xfstests-dev repo to add the ability to mount CephFS using FUSE.
2. Modify qa... - 08:53 AM Backport #55353 (In Progress): quincy: pybind/mgr/volumes: Clone operation hangs
- 08:50 AM Backport #55353 (Resolved): quincy: pybind/mgr/volumes: Clone operation hangs
- https://github.com/ceph/ceph/pull/45927
- 08:52 AM Backport #55352 (In Progress): pacific: pybind/mgr/volumes: Clone operation hangs
- 08:50 AM Backport #55352 (Resolved): pacific: pybind/mgr/volumes: Clone operation hangs
- https://github.com/ceph/ceph/pull/45928
- 08:51 AM Backport #55349 (In Progress): pacific: mgr/volumes: Show clone failure reason in clone status co...
- 04:15 AM Backport #55349 (Resolved): pacific: mgr/volumes: Show clone failure reason in clone status command
- https://github.com/ceph/ceph/pull/45928
- 08:48 AM Backport #55348 (In Progress): quincy: mgr/volumes: Show clone failure reason in clone status com...
- 04:15 AM Backport #55348 (Resolved): quincy: mgr/volumes: Show clone failure reason in clone status command
- https://github.com/ceph/ceph/pull/45927
- 08:45 AM Bug #55217 (Pending Backport): pybind/mgr/volumes: Clone operation hangs
- 05:52 AM Backport #55040 (In Progress): pacific: ceph-fuse: mount -a on already mounted folder should be i...
- 04:14 AM Bug #55190 (Pending Backport): mgr/volumes: Show clone failure reason in clone status command
04/17/2022
- 09:55 AM Backport #55346 (Resolved): pacific: client: get stuck forever when the forward seq exceeds 256
- https://github.com/ceph/ceph/pull/46179
- 09:55 AM Backport #55345 (Resolved): quincy: client: get stuck forever when the forward seq exceeds 256
- https://github.com/ceph/ceph/pull/46178
- 09:53 AM Bug #55129 (Pending Backport): client: get stuck forever when the forward seq exceeds 256
04/16/2022
- 03:25 PM Backport #55343 (Resolved): pacific: mds: try to reset heartbeat when fetching or committing.
- https://github.com/ceph/ceph/pull/46180
- 03:25 PM Backport #55342 (Resolved): quincy: mds: try to reset heartbeat when fetching or committing.
- https://github.com/ceph/ceph/pull/46181
- 03:20 PM Bug #54345 (Pending Backport): mds: try to reset heartbeat when fetching or committing.
- 03:20 PM Bug #54345 (Resolved): mds: try to reset heartbeat when fetching or committing.
04/14/2022
- 12:10 PM Backport #55338 (Resolved): pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.te...
- https://github.com/ceph/ceph/pull/45293
- 12:10 PM Backport #55337 (Resolved): quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.tes...
- https://github.com/ceph/ceph/pull/45291
- 12:07 PM Bug #54971 (Pending Backport): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds...
- 12:05 PM Backport #55336 (Resolved): quincy: Issue removing subvolume with retained snapshots - Possible q...
- https://github.com/ceph/ceph/pull/46140
- 12:05 PM Backport #55335 (Resolved): pacific: Issue removing subvolume with retained snapshots - Possible ...
- https://github.com/ceph/ceph/pull/46139
- 12:02 PM Bug #54625 (Pending Backport): Issue removing subvolume with retained snapshots - Possible quincy...
- 09:35 AM Bug #55332 (Resolved): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-testing-default-smithi...
- 08:58 AM Bug #55331 (Resolved): pjd failure (caused by xattr's value not consistent between auth MDS and r...
- This run: https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-testing-defa...
- 06:02 AM Bug #50821: qa: untar_snap_rm failure during mds thrashing
- Similar failure here: https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-...
- 05:39 AM Bug #55329 (Fix Under Review): qa: add test case for fsync crash issue
- 05:35 AM Bug #55329: qa: add test case for fsync crash issue
- This can be reproduced very easily by using the following kernel patch:...
- 05:30 AM Bug #55329 (Resolved): qa: add test case for fsync crash issue
- This is the test case for https://tracker.ceph.com/issues/55327.
04/13/2022
- 02:04 PM Backport #55264: quincy: mount.ceph: mount helper incorrectly passes `ms_mode' mount option to ol...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45780
merged
- 12:14 PM Bug #55316 (New): qa: add client asok support to get the options
- Currently vstart_runner.py only supports mon/mds/osd:...
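A hedged sketch of what the requested client asok support could look like (hypothetical helper, not the vstart_runner API): query a daemon's admin socket for a config option via the `ceph daemon` CLI. The `run` parameter is injectable so the parsing logic can be exercised without a live cluster.

```python
# Hypothetical helper: fetch one config option over a daemon's admin socket.
import json
import subprocess

def asok_config_get(daemon, option, run=subprocess.check_output):
    # `ceph daemon <name> config get <option>` prints JSON like
    # {"<option>": "<value>"}; parse it and return the value.
    out = run(["ceph", "daemon", daemon, "config", "get", option])
    return json.loads(out)[option]
```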
- 09:29 AM Bug #55313 (Resolved): Unexpected file access behavior using ceph-fuse
- Since upgrading from Nautilus (14.2.21) to Pacific (16.2.7) ceph-fuse shows a rather unexpected and unusual behavior ...
- 08:09 AM Backport #53760 (In Progress): pacific: snap scheduler: cephfs snapshot schedule status doesn't l...
- 07:28 AM Backport #53760 (New): pacific: snap scheduler: cephfs snapshot schedule status doesn't list the ...
- * re-doing bad backport
- 02:23 AM Feature #55283: qa: add fsync/sync stuck waiting for unsafe request test
- Normally, before fixing this, we could reproduce it very easily, and mostly the duration is larger, around 4 secon...
- 02:11 AM Feature #55283: qa: add fsync/sync stuck waiting for unsafe request test
- Added support for two test cases: one for file sync and another for filesystem sync.
- 02:09 AM Feature #55283 (Fix Under Review): qa: add fsync/sync stuck waiting for unsafe request test