Activity
From 02/11/2020 to 03/11/2020
03/11/2020
- 06:04 PM Bug #44528 (Resolved): remove spurious whitespace from test_snapshot.py
- 03:27 PM Bug #44565 (Resolved): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state...
- Seeing frequent MDS daemon crashes in a multi-active setup. The crashes often coincide with inode migrations, but no...
- 01:41 PM Bug #44381 (Closed): kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- It's a bug in the v3 patches. The patches in the testing branch are v5, which should have fixed the bug.
- 10:01 AM Backport #42440: mimic: mds: create a configurable snapshot limit
- Do I need to post a PR against the mimic branch for this item?
- 09:17 AM Backport #44521 (In Progress): nautilus: qa: ERROR: test_subvolume_snapshot_clone_different_group...
- 07:39 AM Backport #44521: nautilus: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephf...
- working on this
- 09:16 AM Backport #44484 (In Progress): nautilus: mgr/volumes: synchronize ownership (for symlinks) and in...
- 07:33 AM Backport #44484: nautilus: mgr/volumes: synchronize ownership (for symlinks) and inode timestamps...
- I'm working on this
- 09:08 AM Bug #44437 (Fix Under Review): qa:test_config_session_timeout failed with incorrect options
- 09:06 AM Feature #44044 (Fix Under Review): qa: add network namespaces to kernel/ceph-fuse mounts for part...
- 09:04 AM Bug #44555 (Fix Under Review): qa: tasks.cephfs.test_auto_repair.TestMDSAutoRepair failed
- 06:04 AM Bug #44555 (In Progress): qa: tasks.cephfs.test_auto_repair.TestMDSAutoRepair failed
- The fixing PR: https://github.com/ceph/ceph/pull/33873
- 05:58 AM Bug #44555 (Resolved): qa: tasks.cephfs.test_auto_repair.TestMDSAutoRepair failed
- 2020-03-11 01:10:31,068.068 INFO:__main__:test_backtrace_repair (tasks.cephfs.test_auto_repair.TestMDSAutoRepair) ......
03/10/2020
- 05:44 PM Bug #44393: pybind/mgr/volumes: add `mypy` support
- ...
- 02:08 PM Bug #44546 (Need More Info): cleanup: Can't lookup inode 1
I am getting these when mounting a cephfs filesystem using the 5.5 kernel (I did not see them with the 3.10)
[4...
- 10:52 AM Bug #44528 (In Progress): remove spurious whitespace from test_snapshot.py
- posted to octopus branch
- 08:06 AM Bug #44528 (Fix Under Review): remove spurious whitespace from test_snapshot.py
03/09/2020
- 10:35 PM Bug #44100: cephfs rsync kworker high load.
I did not have time to install the 5.5 kernel yet on the rsync server. Today I noticed on a CentOS7 vm, with just 1...
- 06:18 PM Bug #43817 (Fix Under Review): mds: update cephfs octopus feature bit
- 05:40 PM Bug #44528 (Resolved): remove spurious whitespace from test_snapshot.py
- remove spurious whitespace
- 03:02 PM Bug #44525 (Resolved): LibCephFS::RecalledGetattr test failed
- Error reason:
When doing Client::_open, the MDS didn't return Fs, which causes an error in this code
ASSERT_EQ(ceph_ll_deleg...
- 02:21 PM Bug #44438: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.test_volumes....
- The fix for this bug is already in octopus:...
- 02:12 PM Feature #38153: client: proactively release caps it is not using
- Zheng, is there anything left on what this ticket was supposed to cover?
- 01:29 PM Backport #44521 (Resolved): nautilus: qa: ERROR: test_subvolume_snapshot_clone_different_groups (...
- https://github.com/ceph/ceph/pull/33877
- 01:29 PM Backport #42713 (In Progress): nautilus: mgr: daemon state for mds not available
- 01:29 PM Backport #44520 (In Progress): nautilus: qa: test_scrub_abort fails during check_task_status("idle")
- 01:20 PM Backport #44520 (Resolved): nautilus: qa: test_scrub_abort fails during check_task_status("idle")
- https://github.com/ceph/ceph/pull/30704
- 01:07 PM Bug #42835: qa: test_scrub_abort fails during check_task_status("idle")
- tracker #42738 seems to be incorrectly marked as a blocker for this tracker.
- 02:42 AM Bug #44497 (Duplicate): qa/tasks/: ValueError: No JSON object could be decoded
- Same issue as https://tracker.ceph.com/issues/44437.
03/08/2020
- 05:38 PM Bug #44438 (Pending Backport): qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.c...
- 01:39 AM Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
- I have added the qa/ test case by transferring the bash code to python.
In some cases we could just s/mount_X.kill()/moun...
- 12:35 AM Documentation #44503: Document CephFS's behaviour on O_APPEND
- Also perhaps relevant:
* #7333
- 12:33 AM Documentation #44503 (New): Document CephFS's behaviour on O_APPEND
- I have noticed that on my CephFS (13.2.2) file system mounted via fuse, if multiple writers `O_APPEND` to a file simu...
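For context on what there is to document: on a local POSIX filesystem, `O_APPEND` makes each `write()` land atomically at end-of-file, so appenders never clobber each other. A minimal single-node sketch of that baseline behaviour (whether CephFS with multiple clients upholds it is exactly the open question in this ticket):

```python
import os
import tempfile

# On a local POSIX filesystem, O_APPEND seeks to EOF atomically for each
# write(), so concurrent appenders do not overwrite each other. The open
# question is whether CephFS (especially multi-client) gives the same
# guarantee -- that is what the ticket asks to document.
fd, path = tempfile.mkstemp()
os.close(fd)
w1 = os.open(path, os.O_WRONLY | os.O_APPEND)
w2 = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(w1, b"writer1\n")
os.write(w2, b"writer2\n")  # lands after writer1's data, not over it
os.close(w1)
os.close(w2)
with open(path, "rb") as f:
    content = f.read()
os.unlink(path)
print(content)  # b'writer1\nwriter2\n' on a local filesystem
```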
03/07/2020
03/06/2020
- 07:30 PM Bug #44383: qa: MDS_CLIENT_LATE_RELEASE during MDS thrashing
- I saw this in my test run too at http://pulpito.ceph.com/jlayton-2020-03-06_16:21:14-kcephfs-master-distro-basic-smit...
- 03:54 PM Bug #44438: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.test_volumes....
- Venky Shankar wrote:
> test case bug: group names are not passed to _verify_clone_attrs(): https://github.com/ceph/c...
- 03:50 PM Bug #44438 (Fix Under Review): qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.c...
- 02:12 PM Bug #44438: qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.test_volumes....
- test case bug: group names are not passed to _verify_clone_attrs(): https://github.com/ceph/ceph/blob/master/qa/tasks...
- 02:44 PM Backport #42159: mimic: osdc: objecter ops output does not have useful time information
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31384
merged
- 02:44 PM Backport #42143: mimic: mds:split the dir if the op makes it oversized, because some ops maybe in...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31379
merged
- 02:43 PM Backport #42156: mimic: mds: infinite loop in Locker::file_update_finish()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31284
merged
- 02:43 PM Backport #41114: mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/31283
merged
- 02:42 PM Backport #38643: mimic: fs: "log [WRN] : failed to reconnect caps for missing inodes"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31282
merged
- 02:41 PM Backport #37906: mimic: make cephfs-data-scan reconstruct snaptable
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31281
merged
- 12:40 PM Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- @Ramana - looks like the follow-on fixes have been merged, so this backport could proceed?
- 12:36 PM Bug #42835 (Resolved): qa: test_scrub_abort fails during check_task_status("idle")
- Since this is a follow-on fix for #42299, let's handle the backporting there.
- 12:32 PM Bug #36094 (Resolved): mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:29 PM Bug #41868 (Resolved): mds: mds returns -5 error when the deleted file does not exist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:29 PM Bug #41871 (Resolved): client: return error when someone passes bad whence value to llseek
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:22 PM Backport #42148 (Resolved): mimic: mds: mds returns -5 error when the deleted file does not exist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31381
m...
- 12:22 PM Backport #43347 (Resolved): mimic: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32757
m...
- 12:20 PM Backport #42146 (Resolved): mimic: client: return error when someone passes bad whence value to l...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31380
m...
- 12:20 PM Backport #40494 (Resolved): mimic: test_volume_client: declare only one default for python version
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30110
m... - 10:35 AM Backport #44488 (Rejected): mimic: qa: malformed job
- 10:34 AM Backport #44487 (Resolved): nautilus: pybind/mgr/volumes: add upgrade testing
- https://github.com/ceph/ceph/pull/34461
- 10:33 AM Backport #44484 (Resolved): nautilus: mgr/volumes: synchronize ownership (for symlinks) and inode...
- https://github.com/ceph/ceph/pull/33877
- 10:32 AM Backport #44483 (Resolved): nautilus: mds: assertion failure due to blacklist
- https://github.com/ceph/ceph/pull/34435
- 10:32 AM Backport #44480 (Resolved): nautilus: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- https://github.com/ceph/ceph/pull/34343
- 10:32 AM Backport #44479 (Rejected): mimic: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 10:32 AM Backport #44478 (Resolved): nautilus: mds: assert(p != active_requests.end())
- https://github.com/ceph/ceph/pull/34338
- 10:32 AM Backport #44477 (Rejected): mimic: mds: assert(p != active_requests.end())
- 10:31 AM Backport #44476 (Resolved): luminous: mds: assert(p != active_requests.end())
- https://github.com/ceph/ceph/pull/34937
- 10:31 AM Backport #44473 (Resolved): nautilus: pybind/mgr/volumes: add `mypy` support
- https://github.com/ceph/ceph/pull/34036
- 04:53 AM Bug #44393 (Pending Backport): pybind/mgr/volumes: add `mypy` support
- 02:53 AM Bug #44456 (Fix Under Review): qa/vstart_runner.py: AttributeError: 'LocalRemote' object has no a...
- 01:09 AM Bug #44456 (In Progress): qa/vstart_runner.py: AttributeError: 'LocalRemote' object has no attrib...
- 01:08 AM Bug #44456 (Resolved): qa/vstart_runner.py: AttributeError: 'LocalRemote' object has no attribute...
- ...
- 02:08 AM Bug #44437: qa:test_config_session_timeout failed with incorrect options
- More detail logs:...
- 12:09 AM Bug #44448 (Fix Under Review): mds: 'if there is lock cache on dir' check is buggy
03/05/2020
- 10:55 PM Feature #44455 (In Progress): cephfs: add recursive unlink RPC
- This is a fairly common operation [1] and there's no particular reason we can't support it. The PurgeQueue (I think) ...
- 07:51 PM Backport #42148: mimic: mds: mds returns -5 error when the deleted file does not exist
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31381
merged
- 07:50 PM Backport #43348: nautilus: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- https://github.com/ceph/ceph/pull/32757 merged
- 07:49 PM Backport #42146: mimic: client: return error when someone passes bad whence value to llseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31380
merged
- 07:48 PM Backport #40494: mimic: test_volume_client: declare only one default for python version
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30110
merged
- 04:15 PM Bug #44448 (Resolved): mds: 'if there is lock cache on dir' check is buggy
- 03:50 PM Backport #44291 (In Progress): nautilus: mds: SIGSEGV in Migrator::export_sessions_flushed
- 02:21 PM Documentation #44441: document new "wsync" and "nowsync" kcephfs mount options in mount.ceph manpage
- For this, we need to wait until the feature is merged in mainline kernels (probably in v5.7).
- 02:08 PM Documentation #44441 (Resolved): document new "wsync" and "nowsync" kcephfs mount options in moun...
- We're adding new options to control whether asynchronous dirops are enabled. Document them in the mount.ceph manpage.
- 02:02 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- The relevant bits are now in both the userland ceph tree (for octopus) and the kernel "testing" branch (should make v...
- 11:42 AM Bug #44438 (Resolved): qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.te...
- 2020-03-04T17:53:38.493 INFO:tasks.cephfs_test_runner:===============================================================...
- 10:40 AM Bug #44437 (In Progress): qa:test_config_session_timeout failed with incorrect options
- The fixing PR: https://github.com/ceph/ceph/pull/33740
- 10:34 AM Bug #44437 (Resolved): qa:test_config_session_timeout failed with incorrect options
- 2020-03-05 04:10:04,311.311 INFO:__main__:Running ['./bin/ceph', 'daemon', 'mds.a', 'client', 'config', '9430', '120'...
- 04:42 AM Bug #44431 (Won't Fix - EOL): ubuntu - mimic - per-minute scheduled job delay into next minute le...
- job url: http://pulpito.ceph.com/?branch=wip-yuri2-testing-2020-02-20-1957-mimic
run id: 4788442
description:
fs...
- 04:03 AM Bug #44316 (Pending Backport): mds: assert(p != active_requests.end())
- 04:02 AM Bug #44132 (Pending Backport): mds: assertion failure due to blacklist
03/04/2020
- 05:54 PM Bug #41031 (Pending Backport): qa: malformed job
- .... and mimic!
- 04:58 PM Bug #44408 (Fix Under Review): qa: after the cephfs qa test case quit the mountpoints still exist
- 06:23 AM Bug #44408 (In Progress): qa: after the cephfs qa test case quit the mountpoints still exist
- 06:22 AM Bug #44408 (Resolved): qa: after the cephfs qa test case quit the mountpoints still exist
- It should unmount all the temporary mountpoints.
- 03:26 PM Bug #44382: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- Actually, it's the later assertion:...
- 03:17 PM Bug #44382: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- This test seems to be based on some very subtle assumptions:...
- 03:08 PM Bug #44382: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- Don't see it in http://pulpito.ceph.com/pdonnell-2020-02-16_17:35:17-kcephfs-wip-pdonnell-testing-20200215.033325-dis...
- 03:03 PM Bug #44382: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- When was the last successful run before this? It'd be nice to know what kernel it was using to help ID the cause.
- 01:16 PM Bug #44416 (Fix Under Review): mds: SimpleLock pointer is passed to Locker::wrlock_start
- 01:00 PM Bug #44416: mds: SimpleLock pointer is passed to Locker::wrlock_start
- should pass MutationImpl::LockOp to Locker::wrlock_start
- 12:55 PM Bug #44416 (Resolved): mds: SimpleLock pointer is passed to Locker::wrlock_start
- 12:27 PM Bug #44415: cephfs.pyx: passing empty string is fine but passing None is not to arg conffile in L...
- According to Kefu, this was probably done purposefully by the original API author so that the client has 3 options: first, get c...
- 12:22 PM Bug #44415 (Resolved): cephfs.pyx: passing empty string is fine but passing None is not to arg co...
- ...
03/03/2020
- 06:37 PM Documentation #44310: doc: add blog post for recover_session in kclient
- We decided not to do this for now and to instead write up a blog post to evangelize this feature (and maybe some othe...
- 05:14 PM Bug #44393 (Fix Under Review): pybind/mgr/volumes: add `mypy` support
- 02:25 PM Bug #44393 (Resolved): pybind/mgr/volumes: add `mypy` support
- Adds mypy checks to the mgr/volumes modules.
- 04:52 PM Bug #44389 (In Progress): client: fuse mount will print call trace with incorrect options
- 01:57 AM Bug #44389 (Resolved): client: fuse mount will print call trace with incorrect options
- ...
- 04:41 PM Bug #44132 (Fix Under Review): mds: assertion failure due to blacklist
- 04:18 PM Bug #42723 (Pending Backport): pybind/mgr/volumes: add upgrade testing
- 10:44 AM Bug #43965 (Pending Backport): mgr/volumes: synchronize ownership (for symlinks) and inode timest...
- 01:02 AM Bug #43750 (Resolved): mds: add perf counters for openfiletable
- 01:01 AM Feature #44214 (Resolved): mount.ceph: add "fs" alias for "mds_namespace"
- 12:59 AM Feature #44212 (Resolved): client: add alias client_fs for client_mds_namespace
- 12:56 AM Bug #44295 (Pending Backport): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 12:55 AM Bug #44295 (Resolved): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 12:27 AM Bug #44381: kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- I suspect this is related to the merging of:
[PATCH v3 0/6] ceph: don't request caps for idle open files
I'...
03/02/2020
- 09:17 PM Bug #44386 (Can't reproduce): qa: blogbench cleanup hang/stall
- ...
- 08:07 PM Bug #44384 (Can't reproduce): qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.Test...
- ...
- 08:02 PM Bug #44383: qa: MDS_CLIENT_LATE_RELEASE during MDS thrashing
- First seen in http://pulpito.ceph.com/pdonnell-2020-02-16_17:35:17-kcephfs-wip-pdonnell-testing-20200215.033325-distr...
- 08:00 PM Bug #44383 (New): qa: MDS_CLIENT_LATE_RELEASE during MDS thrashing
- ...
- 07:52 PM Bug #44382 (Resolved): qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- ...
- 07:36 PM Bug #44381: kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- Another workunit failed same way: /ceph/teuthology-archive/pdonnell-2020-02-29_02:56:38-kcephfs-wip-pdonnell-testing-...
- 07:33 PM Bug #44381: kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- Note: this appears to only happen with the testing kernel. Must be a regression!
- 07:28 PM Bug #44381 (Closed): kclient: crash/hang during qa/workunits/fs/snaps/snaptest-capwb.sh
- ...
- 06:47 PM Bug #44380 (Resolved): mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_c...
- ...
- 06:13 PM Bug #43901: qa: fsx: fatal error: libaio.h: No such file or directory
- /ceph/teuthology-archive/pdonnell-2020-02-29_02:51:43-fs-wip-pdonnell-testing-20200229.001503-distro-basic-smithi/481...
- 03:36 PM Backport #44315: nautilus: nautilus: pybind/mgr/volumes: incomplete async unlink
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33569
m...
- 02:45 PM Feature #44279: client: provide asok commands to getattr an inode with desired caps
- We may also just want to bring the SynClient or do something directly with libcephfs to grab exact sequences instead ...
02/28/2020
- 11:13 PM Bug #42723 (Fix Under Review): pybind/mgr/volumes: add upgrade testing
- 04:56 PM Bug #44316 (Fix Under Review): mds: assert(p != active_requests.end())
- 12:41 PM Documentation #44310 (In Progress): doc: add blog post for recover_session in kclient
- 12:09 PM Bug #44293 (Resolved): nautilus: pybind/mgr/volumes: incomplete async unlink
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:47 AM Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
- For now both kernel and fuse are working the https://github.com/ceph/ceph/pull/33576....
- 08:40 AM Bug #44339: mimic: cluster [WRN] Health check failed: 1 clients failing to respond to capability ...
- assigned to Patrick for triage
02/27/2020
- 09:26 PM Feature #44214 (Fix Under Review): mount.ceph: add "fs" alias for "mds_namespace"
- 09:20 PM Documentation #44310: doc: add blog post for recover_session in kclient
- Jeff, can you add a note to doc/release/octopus.rst about this feature?
- 08:19 PM Backport #44315 (Resolved): nautilus: nautilus: pybind/mgr/volumes: incomplete async unlink
- 04:50 AM Backport #44315 (In Progress): nautilus: nautilus: pybind/mgr/volumes: incomplete async unlink
- 04:38 AM Backport #44315 (Resolved): nautilus: nautilus: pybind/mgr/volumes: incomplete async unlink
- http://github.com/ceph/ceph/pull/33569
- 08:11 PM Bug #43902 (Triaged): qa: mon_thrash: timeout "ceph quorum_status"
- 06:26 PM Backport #44282: nautilus: mgr/volumes: deadlock when trying to purge large number of trash entries
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33526
m...
- 02:56 PM Bug #44339 (Won't Fix - EOL): mimic: cluster [WRN] Health check failed: 1 clients failing to resp...
- Teuthology Job: mimic:multimds:4788256
URL: http://pulpito.ceph.com/?branch=wip-yuri2-testing-2020-02-20-1957-mimic
...
- 01:29 PM Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
- This is for ceph-fuse: https://github.com/ceph/ceph/pull/33576
This will use a separate network namespace to iso...
- 01:01 PM Backport #44337 (Resolved): nautilus: mds: purge queue corruption from wrong backport
- https://github.com/ceph/ceph/pull/34307
- 12:56 PM Backport #44330 (Resolved): nautilus: qa: multimds suite using centos7
- https://github.com/ceph/ceph/pull/35184
- 12:56 PM Backport #44329 (Rejected): mimic: client: bad error handling in Client::_lseek
- 12:56 PM Backport #44328 (Resolved): nautilus: client: bad error handling in Client::_lseek
- https://github.com/ceph/ceph/pull/34308
- 11:05 AM Bug #44208 (Fix Under Review): mgr/volumes: support canceling in-progress/pending clone operations.
- 09:48 AM Bug #44318 (Duplicate): nautilus: mgr/volumes: exception when logging message (in logging::log())
- Patrick reported this in: /ceph/teuthology-archive/pdonnell-2020-02-25_16:29:58-fs-nautilus-distro-basic-smithi/4801...
- 06:13 AM Bug #44316 (Resolved): mds: assert(p != active_requests.end())
- Luminous
crash info:
10454 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x11 ...
- 04:37 AM Bug #44293 (Pending Backport): nautilus: pybind/mgr/volumes: incomplete async unlink
- 04:24 AM Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
- Patrick,
This issue is present in master too: http://pulpito.ceph.com/vshankar-2020-02-26_17:46:30-fs-wip-vshankar...
- 02:53 AM Feature #44212 (In Progress): client: add alias client_fs for client_mds_namespace
02/26/2020
- 05:48 PM Bug #44293 (Fix Under Review): nautilus: pybind/mgr/volumes: incomplete async unlink
- PR 33547 fixes the exception handling but not the underlying cause (the py2 exception). I'm marking this ticket as fi...
- 04:16 AM Bug #44293: nautilus: pybind/mgr/volumes: incomplete async unlink
- exception in purge threads:...
- 05:39 PM Bug #43748 (In Progress): client: improve wanted handling so we don't request unused caps (active...
- 03:21 PM Bug #39543 (Resolved): cephfs-shell: df command does not always produce correct output
- 03:13 PM Documentation #44310 (Resolved): doc: add blog post for recover_session in kclient
- 03:11 PM Bug #43644 (Rejected): mds: Empty directory check is done on the importer side (at import finish)...
- 03:09 PM Bug #38742 (Resolved): cephfs-shell: entering unrecognized command does not print newline after m...
- 02:26 PM Tasks #42085 (Resolved): qa: create tests for new recover_session=clean option
- 11:42 AM Bug #44288: MDSMap encoder "ev" (extended version) is not checked for validity when decoding
- For later reference: https://github.com/ceph/ceph/blob/master/src/osd/OSDMap.cc#L555
02/25/2020
- 10:47 PM Bug #44117 (Resolved): vstart_runner.py: align LocalRemote.run with teuthology's run
- 09:56 PM Bug #44288 (Triaged): MDSMap encoder "ev" (extended version) is not checked for validity when dec...
- 03:18 PM Bug #44288 (Won't Fix): MDSMap encoder "ev" (extended version) is not checked for validity when d...
- This is going to be a necessity as we try and enable rolling upgrades! We encode an extended version and use it to de...
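The check being asked for here is the usual versioned-decode guard: read the version fields first and bail out on an "ev" newer than the decoder understands, instead of silently misparsing fields a newer encoder added. A generic Python sketch of the idea (this is not the actual MDSMap C++ decoder; the header layout and `MAX_SUPPORTED_EV` are made up for illustration):

```python
import struct

MAX_SUPPORTED_EV = 17  # hypothetical: newest extended version this build understands

def decode_header(buf):
    """Decode a (struct_v, ev, length) header and validate ev up front,
    refusing to decode payloads from a newer, unknown encoder."""
    struct_v, ev, length = struct.unpack_from("<BBI", buf, 0)
    if ev > MAX_SUPPORTED_EV:
        raise ValueError(f"ev {ev} > max supported {MAX_SUPPORTED_EV}; refusing to decode")
    return struct_v, ev, length

print(decode_header(struct.pack("<BBI", 7, 17, 42)))  # (7, 17, 42)
```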
- 07:31 PM Bug #44257 (Resolved): vstart.sh: failed by waiting for mgr dashboard module to start
- 04:35 AM Bug #44257: vstart.sh: failed by waiting for mgr dashboard module to start
- I had the same issue on fedora 30 with yaml module not installed.
- 04:07 AM Bug #44257: vstart.sh: failed by waiting for mgr dashboard module to start
- Yeah, install-deps.sh and do_cmake.sh both ran without any error. But I had to manually install all of them again to make...
- 07:08 PM Bug #44021 (Pending Backport): client: bad error handling in Client::_lseek
- 07:06 PM Bug #43964 (Resolved): qa: Test failure: test_acls
- 07:05 PM Bug #42835 (Pending Backport): qa: test_scrub_abort fails during check_task_status("idle")
- 07:04 PM Bug #42835 (Resolved): qa: test_scrub_abort fails during check_task_status("idle")
- 07:03 PM Bug #36635 (Pending Backport): mds: purge queue corruption from wrong backport
- 06:56 PM Bug #44295: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- in QA: /ceph/teuthology-archive/pdonnell-2020-02-25_15:06:35-fs-wip-pdonnell-testing-20200224.202837-distro-basic-smi...
- 06:44 PM Bug #44295 (Fix Under Review): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 06:12 PM Bug #44295 (In Progress): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- 06:11 PM Bug #44295 (Resolved): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- From the LRC testing Octopus:...
- 06:10 PM Bug #44294 (Resolved): mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- From the LRC testing Octopus:...
- 05:59 PM Bug #44293: nautilus: pybind/mgr/volumes: incomplete async unlink
- See also: /ceph/teuthology-archive/pdonnell-2020-02-25_16:29:58-fs-nautilus-distro-basic-smithi/4801912/teuthology.log
- 05:57 PM Bug #44293 (Resolved): nautilus: pybind/mgr/volumes: incomplete async unlink
- ...
- 05:50 PM Backport #43143: nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- c64beb68ef4eba9f717d1d68594360c81f1d0e3a will be in v14.2.8.
- 05:14 PM Bug #43796 (Resolved): qa: test_version_splitting
- 05:07 PM Backport #41106: nautilus: mds: add command that modify session metadata
- b3d662aad44afaebb123540bd2b5ed93199910ec will be in v14.2.8
- 04:22 PM Bug #43968 (Pending Backport): qa: multimds suite using centos7
- Failures also in nautilus: http://pulpito.front.sepia.ceph.com/yuriw-2020-02-19_16:45:17-multimds-nautilus-distro-bas...
- 03:47 PM Backport #44291 (Resolved): nautilus: mds: SIGSEGV in Migrator::export_sessions_flushed
- https://github.com/ceph/ceph/pull/33751
- 03:46 PM Backport #44290 (Rejected): mimic: mds: SIGSEGV in Migrator::export_sessions_flushed
- https://github.com/ceph/ceph/pull/34351
- 03:46 PM Bug #44207 (Resolved): mgr/volumes: deadlock when trying to purge large number of trash entries
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:18 AM Bug #44207 (Pending Backport): mgr/volumes: deadlock when trying to purge large number of trash e...
- 02:29 PM Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
- It's the connection cleanup thread that's waiting on cephfs shutdown() call after initiating a session disconnect:
...
- 03:59 AM Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
- Copying from #44281:...
- 03:52 AM Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
- Patrick Donnelly wrote:
> [...]
>
> From: /ceph/teuthology-archive/vshankar-2020-02-24_12:33:54-fs-wip-vshankar-t...
- 04:29 AM Backport #44282 (Resolved): nautilus: mgr/volumes: deadlock when trying to purge large number of ...
- 04:21 AM Backport #44282 (In Progress): nautilus: mgr/volumes: deadlock when trying to purge large number ...
- 03:46 AM Backport #44282 (Resolved): nautilus: mgr/volumes: deadlock when trying to purge large number of ...
- https://github.com/ceph/ceph/pull/33526
- 04:00 AM Bug #44281 (Duplicate): pybind/mgr/volumes: cleanup stale connection hang
- Forgot I opened an issue already...
- 03:45 AM Bug #44281 (Duplicate): pybind/mgr/volumes: cleanup stale connection hang
- ...
02/24/2020
- 11:51 PM Backport #42631: nautilus: client: FAILED assert(cap == in->auth_cap)
- ddcd3660d0e05325361b4f150ba74aed99620277 will be in v14.2.8.
- 11:49 PM Backport #43138: nautilus: mds: reports unrecognized message for mgrclient messages
- 36ef173b90b5612141c2932913e489f375419e55 will be in v14.2.8
- 11:40 PM Backport #43219: nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks....
- 92549965d0d1d42a4056b55019082d6d594d48cc will be in v14.2.8.
- 11:39 PM Backport #43085: nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- 92549965d0d1d42a4056b55019082d6d594d48cc will be in v14.2.8.
- 11:39 PM Backport #42886: nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
- 92549965d0d1d42a4056b55019082d6d594d48cc will be in v14.2.8.
- 11:33 PM Backport #42790: nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- 001fc7f2b2fb0aea99627255a6895143f5f5898d will be in v14.2.8.
- 11:33 PM Backport #42615: nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- 001fc7f2b2fb0aea99627255a6895143f5f5898d will be in v14.2.8.
- 11:27 PM Backport #42650: nautilus: mds: no assert on frozen dir when scrub path
- git tag --contains b229aa81ad1d282467244249f41c73c7f1c73e67
This will be in the upcoming v14.2.8 release.
- 09:37 PM Feature #44279 (Fix Under Review): client: provide asok commands to getattr an inode with desired...
- Idea is to avoid using UNIX commands or libcephfs calls to implicitly acquire the caps we want. Instead, write a comm...
- 07:52 PM Feature #44277 (Resolved): pybind/mgr/volumes: add command to return metadata regarding a subvolume
- In Ceph CSI, there are cases where an existing subvolume needs to be inspected for a match to an incoming request. Th...
- 07:14 PM Bug #43909 (Pending Backport): mds: SIGSEGV in Migrator::export_sessions_flushed
- 07:10 PM Bug #44021 (Fix Under Review): client: bad error handling in Client::_lseek
- 06:58 PM Bug #44276 (In Progress): pybind/mgr/volumes: cleanup stale connection hang
- 06:58 PM Bug #44276 (Resolved): pybind/mgr/volumes: cleanup stale connection hang
- ...
- 04:08 PM Feature #44274 (New): mds: disconnect file data from inode number
- Currently CephFS uses the inode number to construct the object names for the file data. This has generally worked wel...
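For reference, the naming scheme being discussed: a file's data objects are conventionally named from the inode number in hex plus a zero-padded hex object index. A tiny sketch (the helper name is ours, not a CephFS API):

```python
def data_object_name(ino, index):
    """Data objects for a file are conventionally named
    '<inode number in hex>.<object index as 8 hex digits>'."""
    return f"{ino:x}.{index:08x}"

# e.g. the third data chunk of inode 0x10000000000:
print(data_object_name(0x10000000000, 2))  # 10000000000.00000002
```

This coupling of object names to the inode number is exactly what the feature proposes to break.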
- 02:45 PM Bug #44257: vstart.sh: failed by waiting for mgr dashboard module to start
- Xiubo, I think you're missing some dependencies. Have you run `./install-deps.sh`?
- 02:48 AM Bug #44257 (Resolved): vstart.sh: failed by waiting for mgr dashboard module to start
- ...
- 07:06 AM Feature #44212: client: add alias client_fs for client_mds_namespace
- https://github.com/ceph/ceph/pull/33506
- 05:29 AM Feature #44214: mount.ceph: add "fs" alias for "mds_namespace"
- The current plan is to fix this in the ceph mount code, adding the new "fs=<fs_name>" option and translating it to "mds_namespace=...
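The translation itself is simple string rewriting; a toy Python sketch of the idea (the real change lives in the C mount.ceph code, and `translate_mount_opts` is a hypothetical name):

```python
def translate_mount_opts(opts):
    """Rewrite the friendlier 'fs=<name>' alias into the legacy
    'mds_namespace=<name>' option (illustrative only)."""
    out = []
    for opt in opts.split(","):
        if opt.startswith("fs="):
            opt = "mds_namespace=" + opt[len("fs="):]
        out.append(opt)
    return ",".join(out)

print(translate_mount_opts("name=admin,fs=cephfs"))  # name=admin,mds_namespace=cephfs
```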
02/23/2020
- 06:12 PM Feature #44211 (Fix Under Review): mount.ceph: stop printing warning message about mds_namespace
- 01:27 AM Feature #44211 (In Progress): mount.ceph: stop printing warning message about mds_namespace
- > $ mount -t ceph -o mds_namespace=foo ...
> mount.ceph: unrecognized mount option "mds_namespace", passing to kerne...
- 03:58 AM Feature #44214: mount.ceph: add "fs" alias for "mds_namespace"
- doc: https://github.com/ceph/ceph/pull/33491
- 02:16 AM Feature #44214 (In Progress): mount.ceph: add "fs" alias for "mds_namespace"
- 02:16 AM Feature #44214: mount.ceph: add "fs" alias for "mds_namespace"
- The patchwork for adding the 'fs' mount options: https://patchwork.kernel.org/patch/11398589/
02/21/2020
- 11:16 PM Bug #44244 (Resolved): pybind/mgr/volumes: "handle_command module 'volumes' command handler threw...
- 11:03 PM Bug #44244 (Fix Under Review): pybind/mgr/volumes: "handle_command module 'volumes' command handl...
- 10:49 PM Bug #44244 (Resolved): pybind/mgr/volumes: "handle_command module 'volumes' command handler threw...
- ...
- 06:06 PM Backport #43141: nautilus: tools/cephfs: linkages injected by cephfs-data-scan have first == head
- This is not yet in a tagged release:
$ git tag --contains d9516150d95617c2ded9bbef0f3e9b7d76e3dcee
It will be i...
- 02:31 AM Bug #36635 (Fix Under Review): mds: purge queue corruption from wrong backport
- 01:46 AM Bug #36635 (In Progress): mds: purge queue corruption from wrong backport
02/20/2020
- 02:18 PM Bug #43248 (Fix Under Review): cephfs-shell: do not drop into shell after running command-line co...
- 11:37 AM Bug #43964: qa: Test failure: test_acls
- See - https://tracker.ceph.com/issues/43486#note-2
- 11:37 AM Bug #43964: qa: Test failure: test_acls
- btrfs-progs-devel wasn't available on "CentOS 8 too":https://tracker.ceph.com/issues/43486 after which "we added a fi...
- 10:01 AM Bug #43964 (Fix Under Review): qa: Test failure: test_acls
- 08:09 AM Bug #44117: vstart_runner.py: align LocalRemote.run with teuthology's run
- I was able to reproduce the bug; see /ceph/teuthology-archive/rishabh-2020-02-20_07:10:45-fs-wip-rishabh-dummy-test-d...
02/19/2020
- 06:59 PM Bug #44176 (Resolved): qa: "Error EINVAL: 'Module' object has no attribute 'remove_mds'"
- 05:40 PM Fix #44171 (In Progress): pybind/cephfs: audit for unimplemented bindings for libcephfs
- 02:23 PM Fix #44171: pybind/cephfs: audit for unimplemented bindings for libcephfs
- The initial PR is sent. It doesn't cover all the missing bindings yet.
- 05:01 PM Feature #44214 (Resolved): mount.ceph: add "fs" alias for "mds_namespace"
- I feel "mds_namespace" is not an intuitive name. Let's keep it for backwards compatibility but introduce a cleaner na...
- 05:00 PM Feature #44212 (Resolved): client: add alias client_fs for client_mds_namespace
- I feel "mds_namespace" is not an intuitive name. Let's keep client_mds_namespace for backwards compatibility but intr...
- 04:55 PM Feature #44211 (Resolved): mount.ceph: stop printing warning message about mds_namespace
- We see:...
- 02:31 PM Bug #44207 (Fix Under Review): mgr/volumes: deadlock when trying to purge large number of trash e...
- 12:34 PM Bug #44207 (In Progress): mgr/volumes: deadlock when trying to purge large number of trash entries
- 12:31 PM Bug #44207 (Resolved): mgr/volumes: deadlock when trying to purge large number of trash entries
- There's a subtle deadlock when purge tasks (via the generic async job machinery) try to fetch the next job to execu...
- 12:37 PM Bug #44208 (Resolved): mgr/volumes: support canceling in-progress/pending clone operations.
- This is useful when a user wants to interrupt a long-running clone operation.
$ ceph fs clone cancel <volume> <clo...
- 12:20 AM Bug #44100: cephfs rsync kworker high load.
- none none wrote:
> Zheng Yan wrote:
> > could you check if this still happen with upstream 5.5 kernel
>
> From w...
02/18/2020
- 11:18 PM Bug #43750 (Fix Under Review): mds: add perf counters for openfiletable
- 11:06 PM Bug #44133 (Rejected): Using VIM in a file system is very slow
- Please seek help on the ceph-users mailing list.
- 11:06 PM Bug #44172 (Triaged): cephfs-journal-tool: cannot set --dry_run arg
- 11:04 PM Bug #43964: qa: Test failure: test_acls
- From master: /ceph/teuthology-archive/teuthology-2020-01-28_03:15:03-fs-master-distro-basic-smithi/4712989/teuthology...
- 10:54 PM Feature #44191: cephfs: geo-replication
- Found the ticket #41074. Maybe we can close this.
- 10:39 PM Feature #44191 (Resolved): cephfs: geo-replication
- This is a skeleton ticket for geo-replication of subvolumes.
- 10:49 PM Feature #44193 (Resolved): pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clusters in ...
- 10:40 PM Feature #44192 (Resolved): mds: stable multimds scrub
- Remove warnings/guards for multimds scrub when blockers complete.
- 10:37 PM Feature #44190 (New): qa: thrash file systems during workload tests
- Verify creation/deletion of file systems does not interfere with on-going workloads. This should be a background task...
- 10:03 PM Feature #38951: client: implement asynchronous unlink/create
- Moving this to Zheng since he's working on the libcephfs part of this.
- 10:02 PM Feature #38951 (In Progress): client: implement asynchronous unlink/create
- 09:40 PM Feature #24725: mds: propagate rstats from the leaf dirs up to the specified directory
- Zheng has a follow-on PR: https://github.com/ceph/ceph/pull/32126
- 12:32 PM Bug #43039: client: shutdown race fails with status 141
- I took a look at the logs but there is nothing conclusive there. Again, I suspect that this is a problem down in the ...
- 12:12 PM Bug #44176 (In Progress): qa: "Error EINVAL: 'Module' object has no attribute 'remove_mds'"
- 08:38 AM Feature #44044: qa: add network namespaces to kernel/ceph-fuse mounts for partition testing
- From Jeff's idea and his comments on the first version, add the "halt" mount option, which will try to close all th...
- 12:39 AM Bug #42602 (Resolved): client: missing const SEEK_DATA and SEEK_HOLE on ALPINE LINUX
- 12:37 AM Cleanup #40578 (Resolved): mds: reorganize class members in headers to follow coding guidelines
- 12:37 AM Cleanup #43426 (Resolved): mds: reorg mdstypes header
- 12:16 AM Bug #44097: nautilus: "cluster [WRN] Health check failed: 1 clients failing to respond to capabil...
- Maybe related, also ffsb: /ceph/teuthology-archive/pdonnell-2020-02-16_17:35:17-kcephfs-wip-pdonnell-testing-20200215...
02/17/2020
- 11:54 PM Bug #44176 (Resolved): qa: "Error EINVAL: 'Module' object has no attribute 'remove_mds'"
- ...
- 11:50 PM Bug #43039 (New): client: shutdown race fails with status 141
- /ceph/teuthology-archive/pdonnell-2020-02-15_16:51:06-fs-wip-pdonnell-testing-20200215.033325-distro-basic-smithi/476...
- 04:45 PM Bug #44172 (Resolved): cephfs-journal-tool: cannot set --dry_run arg
- cephfs-journal-tool seems to support a --dry_run argument but I'm not able to pass it to the tool.
I believe I tri...
- 03:20 PM Fix #44171 (Need More Info): pybind/cephfs: audit for unimplemented bindings for libcephfs
- Recently we've added some missing bindings:...
- 02:58 PM Bug #44127: cephfs-shell: read config options from ceph.conf and from ceph config command
- Following are the cephfs-shell options the description talks about -...
- 02:55 PM Bug #44114: test_cephfs_shell.TestDU test fail unexpectedly
- Nope, none of my recent PRs modify anything around TestDU. Besides, after splitting this PR I couldn't reproduce this...
- 02:49 PM Bug #44114 (Need More Info): test_cephfs_shell.TestDU test fail unexpectedly
- Is this not related to your PR? I have not seen this in my testing.
- 02:47 PM Bug #44132 (Triaged): mds: assertion failure due to blacklist
02/14/2020
- 08:47 AM Bug #44139 (New): mds: check all on-disk metadata is versioned
- 06:41 AM Bug #44133: Using VIM in a file system is very slow
- The process in the ssh terminal gets stuck when I use vim to edit a Python or txt file and save it to cephfs (mounted in kernel m...
- 05:06 AM Bug #44133 (Rejected): Using VIM in a file system is very slow
- Using VIM in a file system is very slow
- 06:18 AM Backport #42441 (In Progress): nautilus: mds: create a configurable snapshot limit
- 04:54 AM Backport #42160 (In Progress): luminous: osdc: objecter ops output does not have useful time info...
- 04:49 AM Backport #42123 (In Progress): luminous: client: no method to handle SEEK_HOLE and SEEK_DATA in l...
- 04:36 AM Backport #41857 (In Progress): luminous: client: removing dir reports "not empty" issue due to cl...
- 02:51 AM Bug #43909 (Fix Under Review): mds: SIGSEGV in Migrator::export_sessions_flushed
- 12:08 AM Bug #43392 (Resolved): MDSMonitor: support automatic failover to standbys with stronger affinity
02/13/2020
- 11:17 PM Bug #44132 (Resolved): mds: assertion failure due to blacklist
- ...
- 10:35 PM Bug #42467: mds: daemon crashes while updating blacklist
- ceph-post-file: 44655e58-39e0-4fff-a2fc-2645b131c594 for the crash listed above (13.2.8)
- 05:30 PM Bug #42467 (New): mds: daemon crashes while updating blacklist
- A new report of this coming from ceph-users:
"[ceph-users] Ceph MDS ASSERT In function 'MDRequestRef'"
Differen...
- 07:19 PM Bug #44127 (Resolved): cephfs-shell: read config options from ceph.conf and from ceph config com...
- cephfs-shell by default has following options -...
- 06:39 PM Bug #44100: cephfs rsync kworker high load.
- Zheng Yan wrote:
> could you check if this still happen with upstream 5.5 kernel
From where should I get it? elre...
- 02:15 AM Bug #44100: cephfs rsync kworker high load.
- could you check if this still happen with upstream 5.5 kernel
- 02:00 PM Bug #43943 (Fix Under Review): qa: "[WRN] evicting unresponsive client smithi131:z (6314), after ...
- 12:27 PM Bug #44117 (Resolved): vstart_runner.py: align LocalRemote.run with teuthology's run
- teuthology's run uses keyword arguments, but vstart_runner.py's run does not. This gets the test passing with ...
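The mismatch can be sketched as a minimal local runner that accepts keyword arguments the way teuthology's remote run() does. The parameter names shown are assumptions for illustration, not teuthology's full interface.

```python
import subprocess

def run(*, args, check_status=True, timeout=None):
    """Minimal local analogue of a keyword-argument run(): callers must
    pass args=... as a keyword, so tests written against teuthology's
    interface also work with a local runner. Sketch only."""
    proc = subprocess.run(args, capture_output=True, text=True, timeout=timeout)
    if check_status and proc.returncode != 0:
        raise RuntimeError("command failed: %r" % (args,))
    return proc

# run(["echo", "hi"]) raises TypeError; run(args=["echo", "hi"]) works.
```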
- 12:09 PM Bug #42986 (Resolved): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc.Tes...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:08 PM Bug #43649 (Resolved): mount.ceph fails with ERANGE if name= option is longer than 37 characters
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:00 PM Backport #43780 (Resolved): nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.cephf...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32919
m...
- 11:59 AM Backport #43790 (Resolved): nautilus: RuntimeError: Files in flight high water is unexpectedly lo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33115
m...
- 11:58 AM Backport #43784 (Resolved): nautilus: fs: OpenFileTable object shards have too many k/v pairs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32921
m...
- 11:58 AM Backport #43777 (Resolved): nautilus: qa: test_full racy check: AssertionError: 29 not greater th...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32918
m...
- 11:58 AM Backport #43733 (Resolved): nautilus: qa: ffsb suite causes SLOW_OPS warnings
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32917
m...
- 11:57 AM Backport #43348 (Resolved): nautilus: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32756
m...
- 11:51 AM Backport #43729 (Resolved): nautilus: client: chdir does not raise error if a file is passed
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32916
m...
- 11:51 AM Backport #43770 (Resolved): nautilus: mount.ceph fails with ERANGE if name= option is longer than...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32807
m...
- 11:51 AM Backport #43503 (Resolved): nautilus: mount.ceph: give a hint message when no mds is up or cluste...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32910
m...
- 11:51 AM Backport #42951: nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volumes.Te...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:51 AM Backport #43271: nautilus: qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:51 AM Backport #43338: nautilus: qa/tasks: add remaining tests for fs volume
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:51 AM Backport #43629: nautilus: mgr/volumes: provision subvolumes with config metadata storage in cephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:50 AM Backport #43724: nautilus: mgr/volumes: subvolumes with snapshots can be deleted
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 11:50 AM Backport #44020: pybind/mgr/volumes: restore from snapshot
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33122
m...
- 10:24 AM Bug #44114 (Need More Info): test_cephfs_shell.TestDU test fail unexpectedly
- I suspect it's because one of the test machines was pretty laggy. This has been reported multiple times before -
h...
- 09:18 AM Bug #44113 (Fix Under Review): cephfs-shell: set proper return value for the tool
- Actually, the code for this was already present in the PR; I just separated it into its own commit. Marking "Fix Under Review".
- 09:15 AM Bug #44113 (Resolved): cephfs-shell: set proper return value for the tool
- Currently, cephfs-shell tool returns zero all the time, whether the shell is in interactive or non-interactive mode -...
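The fix amounts to propagating failures into the process exit status. A minimal sketch of the idea (illustrative Python, not the actual cephfs-shell code; the helper names are hypothetical):

```python
import sys

def run_commands(cmds):
    """Run each callable and count failures instead of swallowing them.
    Illustrative stand-in for a shell command loop."""
    failures = 0
    for cmd in cmds:
        try:
            cmd()
        except Exception:
            failures += 1
    return failures

def main(cmds):
    # Return 1 if any command failed so callers and scripts can detect
    # errors, instead of unconditionally returning 0.
    return 0 if run_commands(cmds) == 0 else 1

if __name__ == "__main__":
    sys.exit(main([lambda: None]))
```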
02/12/2020
- 07:38 PM Backport #43780: nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32919
merged
- 06:44 PM Backport #43790: nautilus: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/33115
merged
- 06:44 PM Backport #43784: nautilus: fs: OpenFileTable object shards have too many k/v pairs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32921
merged
- 06:43 PM Backport #43777: nautilus: qa: test_full racy check: AssertionError: 29 not greater than or equal...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32918
merged
- 06:41 PM Backport #43733: nautilus: qa: ffsb suite causes SLOW_OPS warnings
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32917
merged
- 06:39 PM Backport #43729: nautilus: client: chdir does not raise error if a file is passed
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32916
merged
- 06:37 PM Bug #44100: cephfs rsync kworker high load.
- Tried to work around this by not using multiple cephfs mounts. First process finished quickly, 2nd had kworker high l...
- 06:17 PM Bug #44100: cephfs rsync kworker high load.
- If I unmount the cephfs and mount it again, the problem seems to be gone.
- 06:08 PM Bug #44100: cephfs rsync kworker high load.
- none none wrote:
> PS. this kworker is only appearing with the 2nd rsync process. I already have a rsync session run...
- 03:54 PM Bug #44100: cephfs rsync kworker high load.
- Maybe it is related to renewing capabilities? Or is it normal that the mds is asked to renew 132k caps so often?
{...
- 03:30 PM Bug #44100: cephfs rsync kworker high load.
- PS. this kworker is only appearing with the 2nd rsync process. I already have a rsync session running copying from di...
- 02:31 PM Bug #44100: cephfs rsync kworker high load.
- Did another test rsyncing 2 files, 16 minutes???
### concurr. link backup test2 ###
### start:15:06:13 ###
...
- 02:05 PM Bug #44100 (Resolved): cephfs rsync kworker high load.
- I have an rsync backup running which has grown to 10 hours. When I test with one rsync instance, it looks like it processes...
- 06:36 PM Backport #43770: nautilus: mount.ceph fails with ERANGE if name= option is longer than 37 characters
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32807
merged
- 06:36 PM Backport #43503: nautilus: mount.ceph: give a hint message when no mds is up or cluster is laggy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32910
merged
- 04:42 PM Bug #44101 (New): nautilus: qa: df pool accounting incomplete
- ...
- 04:30 PM Feature #24880 (Resolved): pybind/mgr/volumes: restore from snapshot
- 04:29 PM Bug #43645 (Resolved): mgr/volumes: subvolumes with snapshots can be deleted
- 04:29 PM Backport #43724 (Resolved): nautilus: mgr/volumes: subvolumes with snapshots can be deleted
- Merged https://github.com/ceph/ceph/pull/33122/
- 04:27 PM Feature #43349 (Resolved): mgr/volumes: provision subvolumes with config metadata storage in cephfs
- 04:27 PM Backport #43629 (Resolved): nautilus: mgr/volumes: provision subvolumes with config metadata stor...
- Merged https://github.com/ceph/ceph/pull/33122/
- 04:26 PM Bug #42872 (Resolved): qa/tasks: add remaining tests for fs volume
- 04:25 PM Backport #43338 (Resolved): nautilus: qa/tasks: add remaining tests for fs volume
- Merged https://github.com/ceph/ceph/pull/33122/
- 04:24 PM Bug #41694 (Resolved): qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- 04:24 PM Backport #43271 (Resolved): nautilus: qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- Merged https://github.com/ceph/ceph/pull/33122
- 04:23 PM Bug #42646 (Resolved): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volumes.TestVo...
- 04:23 PM Backport #42951 (Resolved): nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test...
- Merged https://github.com/ceph/ceph/pull/33122
- 04:21 PM Backport #44020 (Resolved): pybind/mgr/volumes: restore from snapshot
- Merged https://github.com/ceph/ceph/pull/33122/
- 03:29 PM Bug #43567: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_directory
- Here's the root cause behind the error - https://github.com/ceph/ceph/pull/32612#discussion_r366312713
- 02:13 PM Bug #43905 (Closed): qa: test_rebuild_inotable infinite loop
- It's a bug in the test branch.
- 02:12 PM Bug #43908 (Resolved): mds: FAILED ceph_assert(!p.is_remote_wrlock())
- 01:42 PM Bug #43598 (In Progress): mds: PurgeQueue does not handle objecter errors
- 12:37 PM Backport #43137: nautilus: pybind/mgr/volumes: idle connection drop is not working
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33116
m...
- 10:18 AM Backport #43137 (Resolved): nautilus: pybind/mgr/volumes: idle connection drop is not working
- https://github.com/ceph/ceph/pull/33116 merged
- 12:11 PM Bug #44097: nautilus: "cluster [WRN] Health check failed: 1 clients failing to respond to capabil...
- ...
- 12:07 PM Bug #44097 (Can't reproduce): nautilus: "cluster [WRN] Health check failed: 1 clients failing to ...
- 10:19 AM Bug #43113 (Resolved): pybind/mgr/volumes: idle connection drop is not working
- 07:58 AM Backport #42441: nautilus: mds: create a configurable snapshot limit
- Yet to backport.
02/11/2020
- 09:56 PM Bug #40784 (Resolved): mds: metadata changes may be lost when MDS is restarted
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:56 PM Bug #41329 (Resolved): mds: reject sessionless messages
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:55 PM Bug #42088 (Resolved): 'ceph -s' does not show standbys if there are no filesystems
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:52 PM Bug #43484 (Resolved): mds: note features client has when rejecting client due to feature incompat
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:52 PM Bug #43514 (Resolved): qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:49 PM Backport #43568 (Resolved): nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32912
m...
- 04:42 PM Backport #43568: nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32912
merged
- 09:49 PM Backport #43509 (Resolved): nautilus: 'ceph -s' does not show standbys if there are no filesystems
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32912
m...
- 04:41 PM Backport #43509: nautilus: 'ceph -s' does not show standbys if there are no filesystems
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32912
merged
- 09:48 PM Backport #43628 (Resolved): nautilus: client: disallow changing fuse_default_permissions option a...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32915
m...
- 04:03 PM Backport #43628: nautilus: client: disallow changing fuse_default_permissions option at runtime
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32915
merged
- 09:48 PM Backport #43624 (Resolved): nautilus: mds: note features client has when rejecting client due to ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32914
m...
- 04:02 PM Backport #43624: nautilus: mds: note features client has when rejecting client due to feature inc...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32914
merged
- 09:47 PM Backport #43573 (Resolved): nautilus: cephfs-journal-tool: will crash without any extra argument
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32913
m...
- 04:01 PM Backport #43573: nautilus: cephfs-journal-tool: will crash without any extra argument
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32913
merged
- 09:47 PM Backport #43343 (Resolved): nautilus: mds: client does not response to cap revoke After session s...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32909
m...
- 04:00 PM Backport #43343: nautilus: mds: client does not response to cap revoke After session stale->resum...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32909
merged
- 09:46 PM Backport #43558 (Resolved): nautilus: mds: reject forward scrubs when cluster has multiple active...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32602
m...
- 03:59 PM Backport #43558: nautilus: mds: reject forward scrubs when cluster has multiple active MDS (more ...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/32602
merged
- 09:46 PM Backport #43506 (Resolved): nautilus: MDSMonitor: warn if a new file system is being created with...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32600
m...
- 03:59 PM Backport #43506: nautilus: MDSMonitor: warn if a new file system is being created with an EC defa...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32600
merged
- 09:45 PM Backport #43345 (Resolved): nautilus: mds: metadata changes may be lost when MDS is restarted
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30843
m...
- 03:58 PM Backport #43345: nautilus: mds: metadata changes may be lost when MDS is restarted
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30843
merged
- 09:45 PM Backport #41853 (Resolved): nautilus: mds: reject sessionless messages
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30843
m...
- 03:58 PM Backport #41853: nautilus: mds: reject sessionless messages
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30843
merged
- 06:46 AM Bug #44071 (In Progress): kclient: reconfigure superblock parameters does not work
- 06:46 AM Bug #44071 (Fix Under Review): kclient: reconfigure superblock parameters does not work
- The '-o remount,ceph_optX' does not work.
- 03:27 AM Bug #43392 (Fix Under Review): MDSMonitor: support automatic failover to standbys with stronger a...
- 02:10 AM Bug #43964 (New): qa: Test failure: test_acls
- http://pulpito.front.sepia.ceph.com/jcollin-2020-02-05_00:06:17-fs-inter-mds-testing5-distro-basic-smithi/