Activity
From 02/19/2020 to 03/19/2020
03/19/2020
- 09:02 PM Bug #44680: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Yeah definitely the fault of https://github.com/ceph/ceph/pull/33538, which was trying to prevent us from asserting o...
- 08:56 PM Bug #44680: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- > [13:55:18] <@sage> it was triggered by the upgrade... i'm guessing when the old container was stopped and got blac...
- 08:49 PM Bug #44680: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Do we have any logs or more detail about what happened?
The only thing this flags in my head is https://github.com...
- 07:12 PM Bug #44680 (Resolved): mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- ...
- 01:01 PM Bug #44677 (Resolved): stale scrub status entry from a failed mds shows up in `ceph status`
- This happens intermittently. When an active mds (mds.b) is terminated, mds.c transitions to active, but task status s...
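A minimal sketch of the kind of cleanup this bug calls for (illustrative names only, not the actual ceph-mgr code): the stale scrub status would disappear once task-status entries owned by daemons no longer in the MDS map are pruned.

```python
# Hypothetical sketch (names are illustrative, not the real mgr/MDS code):
# prune task-status entries whose owning daemon is no longer active, so a
# failed mds.b's "scrub" line stops showing up in `ceph status`.
def prune_task_status(task_status, active_daemons):
    """Keep only status entries whose owning daemon is still active."""
    return {daemon: status for daemon, status in task_status.items()
            if daemon in active_daemons}

status = {"mds.b": "scrub: idle", "mds.c": "scrub: active"}
print(prune_task_status(status, {"mds.c"}))  # {'mds.c': 'scrub: active'}
```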
- 12:53 PM Bug #43748 (Fix Under Review): client: improve wanted handling so we don't request unused caps (a...
- 10:10 AM Backport #44668 (In Progress): nautilus: mgr/dashboard: backend API test failure "test_access_per...
- 03:16 AM Backport #44291 (Resolved): nautilus: mds: SIGSEGV in Migrator::export_sessions_flushed
- 03:16 AM Bug #43909 (Pending Backport): mds: SIGSEGV in Migrator::export_sessions_flushed
- Whoops wrong ticket.
- 03:15 AM Bug #43909 (Resolved): mds: SIGSEGV in Migrator::export_sessions_flushed
- 02:04 AM Bug #6770: ceph fscache: write file more than a page size to orignal file cause cachfiles bug on EOF
- Tested the latest ceph and kclient; it works well:...
03/18/2020
- 10:24 PM Backport #42440: mimic: mds: create a configurable snapshot limit
- Milind Changire wrote:
> do I need to post a PR against the mimic branch for this item ?
Yes, as far as I can tel...
- 05:50 PM Backport #44670 (In Progress): mgr/volumes: support canceling in-progress/pending clone operations.
- 05:48 PM Backport #44670 (Resolved): mgr/volumes: support canceling in-progress/pending clone operations.
- https://github.com/ceph/ceph/pull/34036
- 01:52 PM Bug #44208 (Pending Backport): mgr/volumes: support canceling in-progress/pending clone operations.
- 01:13 PM Bug #44638 (Fix Under Review): test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestSc...
- 11:23 AM Backport #44668 (Resolved): nautilus: mgr/dashboard: backend API test failure "test_access_permis...
- https://github.com/ceph/ceph/pull/34055
https://github.com/ceph/ceph/pull/34817
- 11:23 AM Bug #42228 (Pending Backport): mgr/dashboard: backend API test failure "test_access_permissions"
- 10:40 AM Bug #44525 (Fix Under Review): LibCephFS::RecalledGetattr test failed
- 09:45 AM Bug #44525: LibCephFS::RecalledGetattr test failed
- The cap grant may be delayed, and if the inode locally didn't have 'Fscr', set_deleg() will return -E...
- 07:38 AM Bug #43965 (Resolved): mgr/volumes: synchronize ownership (for symlinks) and inode timestamps for...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:37 AM Bug #44438 (Resolved): qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.te...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:36 AM Backport #44521 (Resolved): nautilus: qa: ERROR: test_subvolume_snapshot_clone_different_groups (...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33877
m...
- 07:36 AM Backport #44484 (Resolved): nautilus: mgr/volumes: synchronize ownership (for symlinks) and inode...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33877
m...
- 07:34 AM Backport #42441: nautilus: mds: create a configurable snapshot limit
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33295
m...
- 07:31 AM Bug #43901: qa: fsx: fatal error: libaio.h: No such file or directory
- Whoops https://github.com/ceph/ceph/pull/33959, in test now.
03/17/2020
- 04:09 PM Feature #36413 (Resolved): make cephfs-data-scan reconstruct snaptable
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:08 PM Bug #38597 (Resolved): fs: "log [WRN] : failed to reconnect caps for missing inodes"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:07 PM Bug #40474 (Resolved): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:07 PM Bug #41434 (Resolved): mds: infinite loop in Locker::file_update_finish()
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:07 PM Bug #44657 (Resolved): cephfs-shell: Fix flake8 errors (F841, E302, E502, E128, E305 and E222)
- ...
- 04:06 PM Bug #41880 (Resolved): mds:split the dir if the op makes it oversized, because some ops maybe in ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:05 PM Backport #44655 (Resolved): nautilus: qa: SyntaxError: invalid token
- https://github.com/ceph/ceph/pull/34470
- 04:03 PM Bug #44645 (Resolved): cephfs-shell: Fix flake8 errors (E302, E502, E128, F821, W605, E128 and E122)
- ...
- 03:07 PM Backport #42441 (Resolved): nautilus: mds: create a configurable snapshot limit
- 12:16 PM Bug #43515 (Pending Backport): qa: SyntaxError: invalid token
- 10:36 AM Bug #44638 (Resolved): test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestScrubContr...
- Greg saw this in nautilus during test: http://pulpito.front.sepia.ceph.com/gregf-2020-03-13_20:56:54-fs-wip-greg-test...
- 07:51 AM Bug #44525: LibCephFS::RecalledGetattr test failed
- Hi Victor Zhang,
I have tried to reproduce it by checking out v12.2.12, but couldn't succeed. BTW, is there any ch...
03/16/2020
- 10:38 PM Backport #42159 (Resolved): mimic: osdc: objecter ops output does not have useful time information
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31384
m...
- 10:38 PM Backport #42143 (Resolved): mimic: mds:split the dir if the op makes it oversized, because some o...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31379
m...
- 10:38 PM Backport #42156 (Resolved): mimic: mds: infinite loop in Locker::file_update_finish()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31284
m...
- 10:38 PM Backport #41114 (Resolved): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31283
m...
- 10:37 PM Backport #38643 (Resolved): mimic: fs: "log [WRN] : failed to reconnect caps for missing inodes"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31282
m...
- 10:37 PM Backport #37906 (Resolved): mimic: make cephfs-data-scan reconstruct snaptable
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31281
m...
- 02:01 PM Bug #44565: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XL...
- Sorry this commit comes from github.com/SUSE/ceph
This build corresponds to commit daf0990c19c89267ea10c40b9c7...
- 12:57 PM Bug #44565: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XL...
- If you have a coredump, please use gdb to print the value of 'state'. Besides, ceph was compiled from 8881d33957b54b101eae9c...
- 01:58 PM Bug #44525 (In Progress): LibCephFS::RecalledGetattr test failed
- As discussed with Yan, I will take it.
- 11:05 AM Bug #43901: qa: fsx: fatal error: libaio.h: No such file or directory
- https://github.com/ceph/ceph/pull/43901 does not exist
- 10:15 AM Bug #42835: qa: test_scrub_abort fails during check_task_status("idle")
- Nathan Cutler wrote:
> > tracker #42738 seems to be incorrectly marked as a blocker for this tracker.
>
> Wasn't ...
- 05:46 AM Bug #44127 (Fix Under Review): cephfs-shell: read config options from cephf.conf and from ceph co...
cephfs-shell will have its own conf for its options. See: https://github.com/ceph/ceph/pull/33286#issuecomment-59050...
- 05:43 AM Bug #44579 (Fix Under Review): qa: commit 9f6c764f10f break qa code in several places
03/15/2020
- 09:05 PM Bug #44100: cephfs rsync kworker high load.
Not really solved with the 5.5 kernel-ml on a bare metal server:
PID USER PR NI VIRT RES SHR ...
03/14/2020
- 03:13 PM Bug #42835: qa: test_scrub_abort fails during check_task_status("idle")
- > tracker #42738 seems to be incorrectly marked as a blocker for this tracker.
Wasn't it the other way around? Thi...
03/13/2020
- 06:56 PM Bug #43901 (Fix Under Review): qa: fsx: fatal error: libaio.h: No such file or directory
- 09:17 AM Bug #44172 (In Progress): cephfs-journal-tool: cannot set --dry_run arg
- 03:24 AM Bug #44071 (Fix Under Review): kclient: reconfigure superblock parameters does not work
- https://patchwork.kernel.org/project/ceph-devel/list/?series=241303
- 01:31 AM Bug #44389: client: fuse mount will print call trace with incorrect options
- The objecter_finisher is already started in Client::Client(), but in the failure path when initializing and starting ...
03/12/2020
- 05:09 PM Bug #44172: cephfs-journal-tool: cannot set --dry_run arg
- Now you can specify --dry_run after "event" on the command-line.
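This ordering is typical of sub-command parsers: a flag registered on the sub-command is only accepted after it on the command line. A minimal illustration (a toy argparse parser, not the actual cephfs-journal-tool code):

```python
import argparse

# Toy parser, not the real cephfs-journal-tool: --dry_run is registered
# on the "event" subparser, so it is only accepted after the subcommand.
parser = argparse.ArgumentParser(prog="journal-tool")
sub = parser.add_subparsers(dest="cmd")
event = sub.add_parser("event")
event.add_argument("--dry_run", action="store_true")

args = parser.parse_args(["event", "--dry_run"])
print(args.cmd, args.dry_run)  # event True
```

Passing --dry_run before the subcommand would be rejected by this parser, which matches the behavior described above.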
- 01:50 PM Bug #44579: qa: commit 9f6c764f10f break qa code in several places
- 2020-03-12 13:49:45,377.377 INFO:__main__:======================================================================
202...
- 01:49 PM Bug #44579 (Resolved): qa: commit 9f6c764f10f break qa code in several places
- vstart_runner.py crashes while running tests; many of these happen because vstart_runner.LocalRemote.run doesn't take...
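The shape of the fix is an adapter whose run() accepts the keyword arguments callers pass to teuthology's remote.run(). A heavily simplified, hypothetical sketch (the real LocalRemote has a different, richer signature):

```python
import subprocess

class LocalRemote:
    # Hypothetical simplification of the vstart_runner fix: accept the
    # keyword arguments teuthology-style callers pass (here just
    # check_status) instead of crashing on unexpected keywords.
    def run(self, args, check_status=True, **kwargs):
        proc = subprocess.run(args, capture_output=True, text=True)
        if check_status and proc.returncode != 0:
            raise RuntimeError(f"command failed: {args}")
        return proc

remote = LocalRemote()
proc = remote.run(["python3", "-c", "import sys; sys.exit(3)"],
                  check_status=False)
print(proc.returncode)  # 3
```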
- 12:31 PM Bug #44389 (Fix Under Review): client: fuse mount will print call trace with incorrect options
- https://github.com/ceph/ceph/pull/33915
03/11/2020
- 06:04 PM Bug #44528 (Resolved): remove sprious whitespace from test_snapshot.py
- 03:27 PM Bug #44565 (Resolved): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state...
- Seeing frequent MDS daemons crashes in a multi-active setup. The crashes often coincide with inode migrations, but no...
- 01:41 PM Bug #44381 (Closed): kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- It's a bug in the v3 patches. The patches in the testing branch are v5, which should have fixed the bug.
- 10:01 AM Backport #42440: mimic: mds: create a configurable snapshot limit
- do I need to post a PR against the mimic branch for this item ?
- 09:17 AM Backport #44521 (In Progress): nautilus: qa: ERROR: test_subvolume_snapshot_clone_different_group...
- 07:39 AM Backport #44521: nautilus: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephf...
- working on this
- 09:16 AM Backport #44484 (In Progress): nautilus: mgr/volumes: synchronize ownership (for symlinks) and in...
- 07:33 AM Backport #44484: nautilus: mgr/volumes: synchronize ownership (for symlinks) and inode timestamps...
- I'm working on this
- 09:08 AM Bug #44437 (Fix Under Review): qa:test_config_session_timeout failed with incorrect options
- 09:06 AM Feature #44044 (Fix Under Review): qa: add network namespaces to kernel/ceph-fuse mounts for part...
- 09:04 AM Bug #44555 (Fix Under Review): qa: tasks.cephfs.test_auto_repair.TestMDSAutoRepair failed
- 06:04 AM Bug #44555 (In Progress): qa: tasks.cephfs.test_auto_repair.TestMDSAutoRepair failed
- The fixing PR: https://github.com/ceph/ceph/pull/33873
- 05:58 AM Bug #44555 (Resolved): qa: tasks.cephfs.test_auto_repair.TestMDSAutoRepair failed
- 2020-03-11 01:10:31,068.068 INFO:__main__:test_backtrace_repair (tasks.cephfs.test_auto_repair.TestMDSAutoRepair) ......
03/10/2020
- 05:44 PM Bug #44393: pybind/mgr/volumes: add `mypy` support
- ...
- 02:08 PM Bug #44546 (Need More Info): cleanup: Can't lookup inode 1
I am getting these mounting a cephfs filesystem when using the 5.5 kernel (did not see them using the 3.10)
[4...
- 10:52 AM Bug #44528 (In Progress): remove sprious whitespace from test_snapshot.py
- posted to octopus branch
- 08:06 AM Bug #44528 (Fix Under Review): remove sprious whitespace from test_snapshot.py
03/09/2020
- 10:35 PM Bug #44100: cephfs rsync kworker high load.
I did not have time to install the 5.5 kernel yet on the rsync server. Today I noticed on a CentOS7 vm, with just 1...
- 06:18 PM Bug #43817 (Fix Under Review): mds: update cephfs octopus feature bit
- 05:40 PM Bug #44528 (Resolved): remove sprious whitespace from test_snapshot.py
- remove spurious whitespace
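A minimal check in the spirit of what this cleanup removes (comparable to flake8's W291/W293 trailing-whitespace warnings; the helper name is my own):

```python
import re

# Report 1-based line numbers that carry trailing spaces or tabs --
# the kind of spurious whitespace the cleanup above strips out.
def trailing_ws_lines(text):
    return [i + 1 for i, line in enumerate(text.splitlines())
            if re.search(r"[ \t]+$", line)]

sample = "def f():\n    pass \n\t\n"
print(trailing_ws_lines(sample))  # [2, 3]
```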
- 03:02 PM Bug #44525 (Resolved): LibCephFS::RecalledGetattr test failed
- Error reason:
When doing Client::_open, the MDS didn't return Fs, which causes an error on this code:
ASSERT_EQ(ceph_ll_deleg...
- 02:21 PM Bug #44438: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.test_volumes....
- The fix for this bug is already in octopus:...
- 02:12 PM Feature #38153: client: proactively release caps it is not using
- Zheng, is there anything left on what this ticket was supposed to cover?
- 01:29 PM Backport #44521 (Resolved): nautilus: qa: ERROR: test_subvolume_snapshot_clone_different_groups (...
- https://github.com/ceph/ceph/pull/33877
- 01:29 PM Backport #42713 (In Progress): nautilus: mgr: daemon state for mds not available
- 01:29 PM Backport #44520 (In Progress): nautilus: qa: test_scrub_abort fails during check_task_status("idle")
- 01:20 PM Backport #44520 (Resolved): nautilus: qa: test_scrub_abort fails during check_task_status("idle")
- https://github.com/ceph/ceph/pull/30704
- 01:07 PM Bug #42835: qa: test_scrub_abort fails during check_task_status("idle")
- tracker #42738 seems to be incorrectly marked as a blocker for this tracker.
- 02:42 AM Bug #44497 (Duplicate): qa/tasks/: ValueError: No JSON object could be decoded
- Same issue with https://tracker.ceph.com/issues/44437.
03/08/2020
- 05:38 PM Bug #44438 (Pending Backport): qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.c...
- 01:39 AM Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
- Have added the qa/ test case by transferring the bash code to Python.
In some cases could just s/mount_X.kill()/moun...
- 12:35 AM Documentation #44503: Document CephFS's behaviour on O_APPEND
- Also perhaps relevant:
* #7333
- 12:33 AM Documentation #44503 (New): Document CephFS's behaviour on O_APPEND
- I have noticed that on my CephFS (13.2.2) file system mounted via fuse, if multiple writers `O_APPEND` to a file simu...
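For reference, the POSIX expectation the report contrasts with: with O_APPEND, every write is repositioned to EOF, so multiple descriptors appending to one file don't clobber each other. On a local filesystem this holds; the observation above is that CephFS (13.2.2 via fuse) with multiple writers did not behave this way.

```python
import os
import tempfile

# Two O_APPEND descriptors on the same local file: each write lands at
# EOF, so no append is lost. 200 writes of 10 bytes -> 2000-byte file.
fd, path = tempfile.mkstemp()
os.close(fd)
fd1 = os.open(path, os.O_WRONLY | os.O_APPEND)
fd2 = os.open(path, os.O_WRONLY | os.O_APPEND)
for _ in range(100):
    os.write(fd1, b"a" * 10)
    os.write(fd2, b"b" * 10)
os.close(fd1)
os.close(fd2)
size = os.path.getsize(path)
print(size)  # 2000 -- no append was lost
os.unlink(path)
```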
03/07/2020
03/06/2020
- 07:30 PM Bug #44383: qa: MDS_CLIENT_LATE_RELEASE during MDS thrashing
- I saw this in my test run too at http://pulpito.ceph.com/jlayton-2020-03-06_16:21:14-kcephfs-master-distro-basic-smit...
- 03:54 PM Bug #44438: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.test_volumes....
- Venky Shankar wrote:
> test case bug: group names are not passed to _verify_clone_attrs(): https://github.com/ceph/c...
- 03:50 PM Bug #44438 (Fix Under Review): qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.c...
- 02:12 PM Bug #44438: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.test_volumes....
- test case bug: group names are not passed to _verify_clone_attrs(): https://github.com/ceph/ceph/blob/master/qa/tasks...
- 02:44 PM Backport #42159: mimic: osdc: objecter ops output does not have useful time information
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31384
merged
- 02:44 PM Backport #42143: mimic: mds:split the dir if the op makes it oversized, because some ops maybe in...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31379
merged
- 02:43 PM Backport #42156: mimic: mds: infinite loop in Locker::file_update_finish()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31284
merged
- 02:43 PM Backport #41114: mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/31283
merged
- 02:42 PM Backport #38643: mimic: fs: "log [WRN] : failed to reconnect caps for missing inodes"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31282
merged
- 02:41 PM Backport #37906: mimic: make cephfs-data-scan reconstruct snaptable
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31281
merged
- 12:40 PM Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- @Ramana - looks like the follow-on fixes have been merged, so this backport could proceed?
- 12:36 PM Bug #42835 (Resolved): qa: test_scrub_abort fails during check_task_status("idle")
- Since this is a follow-on fix for #42299, let's handle the backporting there.
- 12:32 PM Bug #36094 (Resolved): mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:29 PM Bug #41868 (Resolved): mds: mds returns -5 error when the deleted file does not exist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:29 PM Bug #41871 (Resolved): client: return error when someone passes bad whence value to llseek
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:22 PM Backport #42148 (Resolved): mimic: mds: mds returns -5 error when the deleted file does not exist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31381
m...
- 12:22 PM Backport #43347 (Resolved): mimic: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32757
m...
- 12:20 PM Backport #42146 (Resolved): mimic: client: return error when someone passes bad whence value to l...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31380
m...
- 12:20 PM Backport #40494 (Resolved): mimic: test_volume_client: declare only one default for python version
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30110
m...
- 10:35 AM Backport #44488 (Rejected): mimic: qa: malformed job
- 10:34 AM Backport #44487 (Resolved): nautilus: pybind/mgr/volumes: add upgrade testing
- https://github.com/ceph/ceph/pull/34461
- 10:33 AM Backport #44484 (Resolved): nautilus: mgr/volumes: synchronize ownership (for symlinks) and inode...
- https://github.com/ceph/ceph/pull/33877
- 10:32 AM Backport #44483 (Resolved): nautilus: mds: assertion failure due to blacklist
- https://github.com/ceph/ceph/pull/34435
- 10:32 AM Backport #44480 (Resolved): nautilus: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- https://github.com/ceph/ceph/pull/34343
- 10:32 AM Backport #44479 (Rejected): mimic: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 10:32 AM Backport #44478 (Resolved): nautilus: mds: assert(p != active_requests.end())
- https://github.com/ceph/ceph/pull/34338
- 10:32 AM Backport #44477 (Rejected): mimic: mds: assert(p != active_requests.end())
- 10:31 AM Backport #44476 (Resolved): luminous: mds: assert(p != active_requests.end())
- https://github.com/ceph/ceph/pull/34937
- 10:31 AM Backport #44473 (Resolved): nautilus: pybind/mgr/volumes: add `mypy` support
- https://github.com/ceph/ceph/pull/34036
- 04:53 AM Bug #44393 (Pending Backport): pybind/mgr/volumes: add `mypy` support
- 02:53 AM Bug #44456 (Fix Under Review): qa/vstart_runner.py: AttributeError: 'LocalRemote' object has no a...
- 01:09 AM Bug #44456 (In Progress): qa/vstart_runner.py: AttributeError: 'LocalRemote' object has no attrib...
- 01:08 AM Bug #44456 (Resolved): qa/vstart_runner.py: AttributeError: 'LocalRemote' object has no attribute...
- ...
- 02:08 AM Bug #44437: qa:test_config_session_timeout failed with incorrect options
- More detail logs:...
- 12:09 AM Bug #44448 (Fix Under Review): mds: 'if there is lock cache on dir' check is buggy
03/05/2020
- 10:55 PM Feature #44455 (In Progress): cephfs: add recursive unlink RPC
- This is a fairly common operation [1] and there's no particular reason we can't support it. The PurgeQueue (I think) ...
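What the proposed RPC would replace on the client side: today removing a tree means walking it bottom-up and issuing one unlink/rmdir per entry, a round trip apiece. A sketch of that client-side loop (plain local-filesystem code, not CephFS API):

```python
import os
import tempfile

# Bottom-up tree removal, one syscall per entry -- the per-entry round
# trips a server-side recursive unlink (handing the subtree to the
# PurgeQueue, per the note above) would collapse into one operation.
def recursive_unlink(root):
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            os.unlink(os.path.join(dirpath, name))
        for name in dirnames:
            os.rmdir(os.path.join(dirpath, name))
    os.rmdir(root)

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
open(os.path.join(root, "a", "b", "f"), "w").close()
recursive_unlink(root)
print(os.path.exists(root))  # False
```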
- 07:51 PM Backport #42148: mimic: mds: mds returns -5 error when the deleted file does not exist
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31381
merged
- 07:50 PM Backport #43348: nautilus: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- https://github.com/ceph/ceph/pull/32757 merged
- 07:49 PM Backport #42146: mimic: client: return error when someone passes bad whence value to llseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31380
merged
- 07:48 PM Backport #40494: mimic: test_volume_client: declare only one default for python version
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30110
merged
- 04:15 PM Bug #44448 (Resolved): mds: 'if there is lock cache on dir' check is buggy
- 03:50 PM Backport #44291 (In Progress): nautilus: mds: SIGSEGV in Migrator::export_sessions_flushed
- 02:21 PM Documentation #44441: document new "wsync" and "nowsync" kcephfs mount options in mount.ceph manpage
- For this, we need to wait until the feature is merged in mainline kernels (probably in v5.7).
- 02:08 PM Documentation #44441 (Resolved): document new "wsync" and "nowsync" kcephfs mount options in moun...
- We're adding new options to control whether asynchronous dirops are enabled. Document them in the mount.ceph manpage.
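A possible mount invocation once the feature lands. Hedged: only the option names come from this ticket; the exact syntax depends on the final kernel patches (expected around v5.7), and the host/path below are placeholders.

```
# Hypothetical usage; option names from this ticket, final syntax
# subject to the merged kernel patches:
mount -t ceph mon-host:/ /mnt/cephfs -o name=admin,nowsync   # enable async dirops
mount -t ceph mon-host:/ /mnt/cephfs -o name=admin,wsync     # keep synchronous dirops
```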
- 02:02 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- The relevant bits are now in both the userland ceph tree (for octopus) and the kernel "testing" branch (should make v...
- 11:42 AM Bug #44438 (Resolved): qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.te...
- 2020-03-04T17:53:38.493 INFO:tasks.cephfs_test_runner:===============================================================...
- 10:40 AM Bug #44437 (In Progress): qa:test_config_session_timeout failed with incorrect options
- The fixing PR: https://github.com/ceph/ceph/pull/33740
- 10:34 AM Bug #44437 (Resolved): qa:test_config_session_timeout failed with incorrect options
- 2020-03-05 04:10:04,311.311 INFO:__main__:Running ['./bin/ceph', 'daemon', 'mds.a', 'client', 'config', '9430', '120'...
- 04:42 AM Bug #44431 (Won't Fix - EOL): ubuntu - mimic - per-minute scheduled job delay into next minute le...
- job url: http://pulpito.ceph.com/?branch=wip-yuri2-testing-2020-02-20-1957-mimic
run id: 4788442
description:
fs...
- 04:03 AM Bug #44316 (Pending Backport): mds: assert(p != active_requests.end())
- 04:02 AM Bug #44132 (Pending Backport): mds: assertion failure due to blacklist
03/04/2020
- 05:54 PM Bug #41031 (Pending Backport): qa: malformed job
- .... and mimic!
- 04:58 PM Bug #44408 (Fix Under Review): qa: after the cephfs qa test case quit the mountpoints still exist
- 06:23 AM Bug #44408 (In Progress): qa: after the cephfs qa test case quit the mountpoints still exist
- 06:22 AM Bug #44408 (Resolved): qa: after the cephfs qa test case quit the mountpoints still exist
- It should umount all the temporary mountpoints.
- 03:26 PM Bug #44382: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- Actually, it's the later assertion:...
- 03:17 PM Bug #44382: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- This test seems to be based on some very subtle assumptions:...
- 03:08 PM Bug #44382: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- Don't see it in http://pulpito.ceph.com/pdonnell-2020-02-16_17:35:17-kcephfs-wip-pdonnell-testing-20200215.033325-dis...
- 03:03 PM Bug #44382: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- When was the last successful run before this? It'd be nice to know what kernel it was using to help ID the cause.
- 01:16 PM Bug #44416 (Fix Under Review): mds: SimpleLock pointer is passed to Locker::wrlock_start
- 01:00 PM Bug #44416: mds: SimpleLock pointer is passed to Locker::wrlock_start
- should pass MutationImpl::LockOp to Locker::wrlock_start
- 12:55 PM Bug #44416 (Resolved): mds: SimpleLock pointer is passed to Locker::wrlock_start
- 12:27 PM Bug #44415: cephfs.pyx: passing empty string is fine but passing None is not to arg conffile in L...
- According to Kefu, this was probably done purposefully by the original API author so that the client has 3 options: first, get c...
- 12:22 PM Bug #44415 (Resolved): cephfs.pyx: passing empty string is fine but passing None is not to arg co...
- ...
03/03/2020
- 06:37 PM Documentation #44310: doc: add blog post for recover_session in kclient
- We decided not to do this for now and to instead write up a blog post to evangelize this feature (and maybe some othe...
- 05:14 PM Bug #44393 (Fix Under Review): pybind/mgr/volumes: add `mypy` support
- 02:25 PM Bug #44393 (Resolved): pybind/mgr/volumes: add `mypy` support
- Adds mypy checks to the mgr/volumes modules.
- 04:52 PM Bug #44389 (In Progress): client: fuse mount will print call trace with incorrect options
- 01:57 AM Bug #44389 (Resolved): client: fuse mount will print call trace with incorrect options
- ...
- 04:41 PM Bug #44132 (Fix Under Review): mds: assertion failure due to blacklist
- 04:18 PM Bug #42723 (Pending Backport): pybind/mgr/volumes: add upgrade testing
- 10:44 AM Bug #43965 (Pending Backport): mgr/volumes: synchronize ownership (for symlinks) and inode timest...
- 01:02 AM Bug #43750 (Resolved): mds: add perf counters for openfiletable
- 01:01 AM Feature #44214 (Resolved): mount.ceph: add "fs" alias for "mds_namespace"
- 12:59 AM Feature #44212 (Resolved): client: add alias client_fs for client_mds_namespace
- 12:56 AM Bug #44295 (Pending Backport): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 12:55 AM Bug #44295 (Resolved): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 12:27 AM Bug #44381: kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- I suspect this is related to the merging of:
[PATCH v3 0/6] ceph: don't request caps for idle open files
I'...
03/02/2020
- 09:17 PM Bug #44386 (Can't reproduce): qa: blogbench cleanup hang/stall
- ...
- 08:07 PM Bug #44384 (Can't reproduce): qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.Test...
- ...
- 08:02 PM Bug #44383: qa: MDS_CLIENT_LATE_RELEASE during MDS thrashing
- First seen in http://pulpito.ceph.com/pdonnell-2020-02-16_17:35:17-kcephfs-wip-pdonnell-testing-20200215.033325-distr...
- 08:00 PM Bug #44383 (New): qa: MDS_CLIENT_LATE_RELEASE during MDS thrashing
- ...
- 07:52 PM Bug #44382 (Resolved): qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- ...
- 07:36 PM Bug #44381: kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- Another workunit failed same way: /ceph/teuthology-archive/pdonnell-2020-02-29_02:56:38-kcephfs-wip-pdonnell-testing-...
- 07:33 PM Bug #44381: kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- Note: this appears to only happen with the testing kernel. Must be a regression!
- 07:28 PM Bug #44381 (Closed): kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- ...
- 06:47 PM Bug #44380 (Resolved): mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_c...
- ...
- 06:13 PM Bug #43901: qa: fsx: fatal error: libaio.h: No such file or directory
- /ceph/teuthology-archive/pdonnell-2020-02-29_02:51:43-fs-wip-pdonnell-testing-20200229.001503-distro-basic-smithi/481...
- 03:36 PM Backport #44315: nautilus: nautilus: pybind/mgr/volumes: incomplete async unlink
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33569
m...
- 02:45 PM Feature #44279: client: provide asok commands to getattr an inode with desired caps
- We may also just want to bring the SynClient or do something directly with libcephfs to grab exact sequences instead ...
02/28/2020
- 11:13 PM Bug #42723 (Fix Under Review): pybind/mgr/volumes: add upgrade testing
- 04:56 PM Bug #44316 (Fix Under Review): mds: assert(p != active_requests.end())
- 12:41 PM Documentation #44310 (In Progress): doc: add blog post for recover_session in kclient
- 12:09 PM Bug #44293 (Resolved): nautilus: pybind/mgr/volumes: incomplete async unlink
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:47 AM Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
- For now both kernel and ceph-fuse are working with https://github.com/ceph/ceph/pull/33576....
- 08:40 AM Bug #44339: mimic: cluster [WRN] Health check failed: 1 clients failing to respond to capability ...
- assigned to Patrick for triage
02/27/2020
- 09:26 PM Feature #44214 (Fix Under Review): mount.ceph: add "fs" alias for "mds_namespace"
- 09:20 PM Documentation #44310: doc: add blog post for recover_session in kclient
- Jeff, can you add a note to doc/release/octopus.rst about this feature?
- 08:19 PM Backport #44315 (Resolved): nautilus: nautilus: pybind/mgr/volumes: incomplete async unlink
- 04:50 AM Backport #44315 (In Progress): nautilus: nautilus: pybind/mgr/volumes: incomplete async unlink
- 04:38 AM Backport #44315 (Resolved): nautilus: nautilus: pybind/mgr/volumes: incomplete async unlink
- http://github.com/ceph/ceph/pull/33569
- 08:11 PM Bug #43902 (Triaged): qa: mon_thrash: timeout "ceph quorum_status"
- 06:26 PM Backport #44282: nautilus: mgr/volumes: deadlock when trying to purge large number of trash entries
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33526
m...
- 02:56 PM Bug #44339 (Won't Fix - EOL): mimic: cluster [WRN] Health check failed: 1 clients failing to resp...
- Teuthology Job: mimic:multimds:4788256
URL: http://pulpito.ceph.com/?branch=wip-yuri2-testing-2020-02-20-1957-mimic
... - 01:29 PM Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
- This is for ceph-fuse: https://github.com/ceph/ceph/pull/33576
This will use a separate network namespace to iso...
- 01:01 PM Backport #44337 (Resolved): nautilus: mds: purge queue corruption from wrong backport
- https://github.com/ceph/ceph/pull/34307
- 12:56 PM Backport #44330 (Resolved): nautilus: qa: multimds suite using centos7
- https://github.com/ceph/ceph/pull/35184
- 12:56 PM Backport #44329 (Rejected): mimic: client: bad error handling in Client::_lseek
- 12:56 PM Backport #44328 (Resolved): nautilus: client: bad error handling in Client::_lseek
- https://github.com/ceph/ceph/pull/34308
- 11:05 AM Bug #44208 (Fix Under Review): mgr/volumes: support canceling in-progress/pending clone operations.
- 09:48 AM Bug #44318 (Duplicate): nautilus: mgr/volumes: exception when logging message (in logging::log())
- Patrick reported this in: /ceph/teuthology-archive/pdonnell-2020-02-25_16:29:58-fs-nautilus-distro-basic-smithi/4801...
- 06:13 AM Bug #44316 (Resolved): mds: assert(p != active_requests.end())
- Luminous
crash info:
10454 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x11 ...
- 04:37 AM Bug #44293 (Pending Backport): nautilus: pybind/mgr/volumes: incomplete async unlink
- 04:24 AM Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
- Patrick,
This issue is present in master too: http://pulpito.ceph.com/vshankar-2020-02-26_17:46:30-fs-wip-vshankar...
- 02:53 AM Feature #44212 (In Progress): client: add alias client_fs for client_mds_namespace
02/26/2020
- 05:48 PM Bug #44293 (Fix Under Review): nautilus: pybind/mgr/volumes: incomplete async unlink
- PR 33547 fixes the exception handling but not the underlying cause (the py2 exception). I'm marking this ticket as fi...
- 04:16 AM Bug #44293: nautilus: pybind/mgr/volumes: incomplete async unlink
- exception in purge threads:...
- 05:39 PM Bug #43748 (In Progress): client: improve wanted handling so we don't request unused caps (active...
- 05:39 PM Bug #43748 (In Progress): client: improve wanted handling so we don't request unused caps (active...
- 03:21 PM Bug #39543 (Resolved): cephfs-shell: df command does not always produce correct output
- 03:13 PM Documentation #44310 (Resolved): doc: add blog post for recover_session in kclient
- 03:11 PM Bug #43644 (Rejected): mds: Empty directory check is done on the importer side (at import finish)...
- 03:09 PM Bug #38742 (Resolved): cephfs-shell: entering unrecognized command does not print newline after m...
- 02:26 PM Tasks #42085 (Resolved): qa: create tests for new recover_session=clean option
- 11:42 AM Bug #44288: MDSMap encoder "ev" (extended version) is not checked for validity when decoding
- For later reference: https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L555
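The check the ticket asks for can be illustrated generically. The sketch below is in Python for brevity (the real MDSMap/OSDMap encoders are C++), and both the header layout and the `SUPPORTED_EV` value are invented for illustration, not taken from Ceph:

```python
import struct

SUPPORTED_EV = 17  # hypothetical ceiling; the real value lives in the C++ encoder


def decode_header(buf: bytes):
    # Read a (version, extended version, payload length) header and refuse
    # to decode anything newer than this build understands, mirroring the
    # kind of guard the OSDMap decoder linked above already performs.
    v, ev, length = struct.unpack_from("<BBI", buf, 0)
    if ev > SUPPORTED_EV:
        raise ValueError(f"unsupported extended version {ev} > {SUPPORTED_EV}")
    return v, ev, length


print(decode_header(struct.pack("<BBI", 1, 17, 64)))  # (1, 17, 64)
```

An `ev` above the supported ceiling fails loudly instead of silently mis-decoding, which is the behavior the ticket wants for rolling upgrades.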
02/25/2020
- 10:47 PM Bug #44117 (Resolved): vstart_runner.py: align LocalRemote.run with teuthology's run
- 09:56 PM Bug #44288 (Triaged): MDSMap encoder "ev" (extended version) is not checked for validity when dec...
- 03:18 PM Bug #44288 (Won't Fix): MDSMap encoder "ev" (extended version) is not checked for validity when d...
- This is going to be a necessity as we try to enable rolling upgrades! We encode an extended version and use it to de...
- 07:31 PM Bug #44257 (Resolved): vstart.sh: failed by waiting for mgr dashboard module to start
- 04:35 AM Bug #44257: vstart.sh: failed by waiting for mgr dashboard module to start
- I had the same issue on Fedora 30 with the yaml module not installed.
- 04:07 AM Bug #44257: vstart.sh: failed by waiting for mgr dashboard module to start
- Yeah, both install-deps.sh and do_cmake.sh ran without any errors, but I had to manually install all of them again to make...
- 07:08 PM Bug #44021 (Pending Backport): client: bad error handling in Client::_lseek
- 07:06 PM Bug #43964 (Resolved): qa: Test failure: test_acls
- 07:05 PM Bug #42835 (Pending Backport): qa: test_scrub_abort fails during check_task_status("idle")
- 07:04 PM Bug #42835 (Resolved): qa: test_scrub_abort fails during check_task_status("idle")
- 07:03 PM Bug #36635 (Pending Backport): mds: purge queue corruption from wrong backport
- 06:56 PM Bug #44295: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- in QA: /ceph/teuthology-archive/pdonnell-2020-02-25_15:06:35-fs-wip-pdonnell-testing-20200224.202837-distro-basic-smi...
- 06:44 PM Bug #44295 (Fix Under Review): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 06:12 PM Bug #44295 (In Progress): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 06:11 PM Bug #44295 (Resolved): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- From the LRC testing Octopus:...
- 06:10 PM Bug #44294 (Resolved): mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- From the LRC testing Octopus:...
- 05:59 PM Bug #44293: nautilus: pybind/mgr/volumes: incomplete async unlink
- See also: /ceph/teuthology-archive/pdonnell-2020-02-25_16:29:58-fs-nautilus-distro-basic-smithi/4801912/teuthology.log
- 05:57 PM Bug #44293 (Resolved): nautilus: pybind/mgr/volumes: incomplete async unlink
- ...
- 05:50 PM Backport #43143: nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- c64beb68ef4eba9f717d1d68594360c81f1d0e3a will be in v14.2.8.
- 05:14 PM Bug #43796 (Resolved): qa: test_version_splitting
- 05:07 PM Backport #41106: nautilus: mds: add command that modify session metadata
- b3d662aad44afaebb123540bd2b5ed93199910ec will be in v14.2.8
- 04:22 PM Bug #43968 (Pending Backport): qa: multimds suite using centos7
- Failures also in nautilus: http://pulpito.front.sepia.ceph.com/yuriw-2020-02-19_16:45:17-multimds-nautilus-distro-bas...
- 03:47 PM Backport #44291 (Resolved): nautilus: mds: SIGSEGV in Migrator::export_sessions_flushed
- https://github.com/ceph/ceph/pull/33751
- 03:46 PM Backport #44290 (Rejected): mimic: mds: SIGSEGV in Migrator::export_sessions_flushed
- https://github.com/ceph/ceph/pull/34351
- 03:46 PM Bug #44207 (Resolved): mgr/volumes: deadlock when trying to purge large number of trash entries
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:18 AM Bug #44207 (Pending Backport): mgr/volumes: deadlock when trying to purge large number of trash e...
- 02:29 PM Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
- It's the connection cleanup thread that's waiting on cephfs shutdown() call after initiating a session disconnect:
...
- 03:59 AM Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
- Copying from #44281:...
- 03:52 AM Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
- Patrick Donnelly wrote:
> [...]
>
> From: /ceph/teuthology-archive/vshankar-2020-02-24_12:33:54-fs-wip-vshankar-t... - 04:29 AM Backport #44282 (Resolved): nautilus: mgr/volumes: deadlock when trying to purge large number of ...
- 04:21 AM Backport #44282 (In Progress): nautilus: mgr/volumes: deadlock when trying to purge large number ...
- 03:46 AM Backport #44282 (Resolved): nautilus: mgr/volumes: deadlock when trying to purge large number of ...
- https://github.com/ceph/ceph/pull/33526
- 04:00 AM Bug #44281 (Duplicate): pybind/mgr/volumes: cleanup stale connection hang
- Forgot I opened an issue already...
- 03:45 AM Bug #44281 (Duplicate): pybind/mgr/volumes: cleanup stale connection hang
- ...
02/24/2020
- 11:51 PM Backport #42631: nautilus: client: FAILED assert(cap == in->auth_cap)
- ddcd3660d0e05325361b4f150ba74aed99620277 will be in v14.2.8.
- 11:49 PM Backport #43138: nautilus: mds: reports unrecognized message for mgrclient messages
- 36ef173b90b5612141c2932913e489f375419e55 will be in v14.2.8
- 11:40 PM Backport #43219: nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks....
- 92549965d0d1d42a4056b55019082d6d594d48cc will be in v14.2.8.
- 11:39 PM Backport #43085: nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- 92549965d0d1d42a4056b55019082d6d594d48cc will be in v14.2.8.
- 11:39 PM Backport #42886: nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
- 92549965d0d1d42a4056b55019082d6d594d48cc will be in v14.2.8.
- 11:33 PM Backport #42790: nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- 001fc7f2b2fb0aea99627255a6895143f5f5898d will be in v14.2.8.
- 11:33 PM Backport #42615: nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- 001fc7f2b2fb0aea99627255a6895143f5f5898d will be in v14.2.8.
- 11:27 PM Backport #42650: nautilus: mds: no assert on frozen dir when scrub path
- git tag --contains b229aa81ad1d282467244249f41c73c7f1c73e67
This will be in the upcoming v14.2.8 release.
- 09:37 PM Feature #44279 (Fix Under Review): client: provide asok commands to getattr an inode with desired...
- 09:37 PM Feature #44279 (Fix Under Review): client: provide asok commands to getattr an inode with desired...
- The idea is to avoid using UNIX commands or libcephfs calls to implicitly acquire the caps we want. Instead, write a comm...
- 07:52 PM Feature #44277 (Resolved): pybind/mgr/volumes: add command to return metadata regarding a subvolume
- In Ceph CSI, there are cases where an existing subvolume needs to be inspected for a match to an incoming request. Th...
- 07:14 PM Bug #43909 (Pending Backport): mds: SIGSEGV in Migrator::export_sessions_flushed
- 07:10 PM Bug #44021 (Fix Under Review): client: bad error handling in Client::_lseek
- 06:58 PM Bug #44276 (In Progress): pybind/mgr/volumes: cleanup stale connection hang
- 06:58 PM Bug #44276 (Resolved): pybind/mgr/volumes: cleanup stale connection hang
- ...
- 04:08 PM Feature #44274 (New): mds: disconnect file data from inode number
- Currently CephFS uses the inode number to construct the object names for the file data. This has generally worked wel...
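As a concrete illustration of the scheme the ticket wants to revisit: a file's data is striped into RADOS objects whose names are derived from the inode number, roughly `<inode in hex>.<object index as 8 hex digits>`. A minimal sketch (the helper name is mine, not Ceph code):

```python
def data_object_name(ino: int, object_size: int, offset: int) -> str:
    # CephFS names a file's data objects "<inode hex>.<object index,
    # zero-padded to 8 hex digits>", so the object names are permanently
    # coupled to the inode number the file was created with.
    index = offset // object_size
    return f"{ino:x}.{index:08x}"


# inode 0x10000000000 with the default 4 MiB object size:
print(data_object_name(0x10000000000, 4 * 1024 * 1024, 0))
# 10000000000.00000000
print(data_object_name(0x10000000000, 4 * 1024 * 1024, 9 * 1024 * 1024))
# 10000000000.00000002
```

Decoupling the two, as the feature proposes, would mean an inode could be renumbered (or data relocated) without renaming every backing object.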
- 02:45 PM Bug #44257: vstart.sh: failed by waiting for mgr dashboard module to start
- Xiubo, I think you're missing some dependencies. Have you run `./install-deps.sh`?
- 02:48 AM Bug #44257 (Resolved): vstart.sh: failed by waiting for mgr dashboard module to start
- ...
- 07:06 AM Feature #44212: client: add alias client_fs for client_mds_namespace
- https://github.com/ceph/ceph/pull/33506
- 05:29 AM Feature #44214: mount.ceph: add "fs" alias for "mds_namespace"
- For now this will be fixed in the ceph mount code, which will add the new "fs=<fs_name>" option and translate it to "mds_namespace=...
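The translation described above can be sketched as follows. This is an illustrative Python model of the behavior only (the actual mount.ceph change is C, and the helper name is invented):

```python
def translate_mount_opts(opts: str) -> str:
    """Rewrite the new 'fs=' alias to the legacy 'mds_namespace='
    option before handing the option string to the kernel."""
    out = []
    for opt in opts.split(","):
        if opt.startswith("fs="):
            opt = "mds_namespace=" + opt[len("fs="):]
        out.append(opt)
    return ",".join(out)


print(translate_mount_opts("name=admin,fs=cephfs2"))
# name=admin,mds_namespace=cephfs2
```

Keeping the rewrite in userspace means older kernels that only understand `mds_namespace=` keep working unchanged.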
02/23/2020
- 06:12 PM Feature #44211 (Fix Under Review): mount.ceph: stop printing warning message about mds_namespace
- 01:27 AM Feature #44211 (In Progress): mount.ceph: stop printing warning message about mds_namespace
- > $ mount -t ceph -o mds_namespace=foo ...
> mount.ceph: unrecognized mount option "mds_namespace", passing to kerne...
- 03:58 AM Feature #44214: mount.ceph: add "fs" alias for "mds_namespace"
- doc: https://github.com/ceph/ceph/pull/33491
- 02:16 AM Feature #44214 (In Progress): mount.ceph: add "fs" alias for "mds_namespace"
- 02:16 AM Feature #44214: mount.ceph: add "fs" alias for "mds_namespace"
- The patchwork for adding the 'fs' mount options: https://patchwork.kernel.org/patch/11398589/
02/21/2020
- 11:16 PM Bug #44244 (Resolved): pybind/mgr/volumes: "handle_command module 'volumes' command handler threw...
- 11:03 PM Bug #44244 (Fix Under Review): pybind/mgr/volumes: "handle_command module 'volumes' command handl...
- 10:49 PM Bug #44244 (Resolved): pybind/mgr/volumes: "handle_command module 'volumes' command handler threw...
- ...
- 06:06 PM Backport #43141: nautilus: tools/cephfs: linkages injected by cephfs-data-scan have first == head
- This is not yet in a tagged release:
$ git tag --contains d9516150d95617c2ded9bbef0f3e9b7d76e3dcee
It will be i...
- 02:31 AM Bug #36635 (Fix Under Review): mds: purge queue corruption from wrong backport
- 01:46 AM Bug #36635 (In Progress): mds: purge queue corruption from wrong backport
02/20/2020
- 02:18 PM Bug #43248 (Fix Under Review): cephfs-shell: do not drop into shell after running command-line co...
- 11:37 AM Bug #43964: qa: Test failure: test_acls
- See - https://tracker.ceph.com/issues/43486#note-2
- 11:37 AM Bug #43964: qa: Test failure: test_acls
- btrfs-progs-devel wasn't available on "CentOS 8 too":https://tracker.ceph.com/issues/43486 after which "we added a fi...
- 10:01 AM Bug #43964 (Fix Under Review): qa: Test failure: test_acls
- 08:09 AM Bug #44117: vstart_runner.py: align LocalRemote.run with teuthology's run
- I was able to reproduce the bug; see /ceph/teuthology-archive/rishabh-2020-02-20_07:10:45-fs-wip-rishabh-dummy-test-d...
02/19/2020
- 06:59 PM Bug #44176 (Resolved): qa: "Error EINVAL: 'Module' object has no attribute 'remove_mds'"
- 05:40 PM Fix #44171 (In Progress): pybind/cephfs: audit for unimplemented bindings for libcephfs
- 02:23 PM Fix #44171: pybind/cephfs: audit for unimplemented bindings for libcephfs
- The initial PR is sent. It doesn't cover all the missing bindings yet.
- 05:01 PM Feature #44214 (Resolved): mount.ceph: add "fs" alias for "mds_namespace"
- I feel "mds_namespace" is not an intuitive name. Let's keep it for backwards compatibility but introduce a cleaner na...
- 05:00 PM Feature #44212 (Resolved): client: add alias client_fs for client_mds_namespace
- I feel "mds_namespace" is not an intuitive name. Let's keep client_mds_namespace for backwards compatibility but intr...
- 04:55 PM Feature #44211 (Resolved): mount.ceph: stop printing warning message about mds_namespace
- We see:...
- 02:31 PM Bug #44207 (Fix Under Review): mgr/volumes: deadlock when trying to purge large number of trash e...
- 12:34 PM Bug #44207 (In Progress): mgr/volumes: deadlock when trying to purge large number of trash entries
- 12:31 PM Bug #44207 (Resolved): mgr/volumes: deadlock when trying to purge large number of trash entries
- There's a subtle deadlock when a purge task (via the generic async job machinery) tries to fetch the next job to execu...
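This class of deadlock (a worker re-acquiring a non-reentrant lock it already holds while fetching the next job) can be reproduced generically. The sketch below is an illustration of the pattern only, not the actual mgr/volumes code; a timeout is used so the example terminates instead of hanging:

```python
import threading

queue_lock = threading.Lock()  # non-reentrant, like a plain mutex


def fetch_next_job():
    # The job-fetch helper also takes the queue lock.
    if not queue_lock.acquire(timeout=1):
        return None  # with a blocking acquire this would wait forever
    try:
        return "job-1"
    finally:
        queue_lock.release()


def purge_worker():
    with queue_lock:             # worker already holds the lock...
        return fetch_next_job()  # ...and tries to take it again: self-deadlock


print(purge_worker())  # None: the inner acquire can never succeed
```

The usual fixes are to fetch the next job outside the critical section or to use a reentrant lock (`threading.RLock` in this model).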
- 12:37 PM Bug #44208 (Resolved): mgr/volumes: support canceling in-progress/pending clone operations.
- This is useful when a user wants to interrupt a long-running clone operation.
$ ceph fs clone cancel <volume> <clo...
- 12:20 AM Bug #44100: cephfs rsync kworker high load.
- none none wrote:
> Zheng Yan wrote:
> > could you check if this still happens with the upstream 5.5 kernel
>
> From w...