Activity
From 06/13/2021 to 07/12/2021
07/12/2021
- 08:08 PM Feature #45746 (Resolved): mgr/nfs: Add interface to update export
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:07 PM Feature #48622 (Resolved): mgr/nfs: Add tests for readonly exports
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Bug #49922 (Resolved): MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Documentation #50008 (Resolved): mgr/nfs: Add troubleshooting section
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Documentation #50161 (Resolved): mgr/nfs: validation error on creating custom export
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:05 PM Bug #50559 (Resolved): session dump includes completed_requests twice, once as an integer and onc...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:05 PM Bug #50807 (Resolved): mds: MDSLog::journaler pointer maybe crash with use-after-free
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:04 PM Bug #51492 (Resolved): pacific: pybind/ceph_volume_client: stat on empty string
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:59 PM Backport #51494 (Resolved): octopus: pacific: pybind/ceph_volume_client: stat on empty string
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42161
m...
- 03:42 PM Backport #51494: octopus: pacific: pybind/ceph_volume_client: stat on empty string
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42161
merged
- 07:59 PM Backport #51336 (Resolved): octopus: mds: avoid journaling overhead for setxattr("ceph.dir.subvol...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41996
m...
- 03:42 PM Backport #51336: octopus: mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for n...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41996
merged
- 07:58 PM Backport #50874 (Resolved): octopus: mds: MDSLog::journaler pointer maybe crash with use-after-free
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41626
m...
- 03:40 PM Backport #50874: octopus: mds: MDSLog::journaler pointer maybe crash with use-after-free
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41626
merged
- 07:58 PM Backport #50635 (Resolved): octopus: session dump includes completed_requests twice, once as an i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41625
m...
- 03:40 PM Backport #50635: octopus: session dump includes completed_requests twice, once as an integer and ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41625
merged
- 07:58 PM Backport #50283 (Resolved): octopus: MDS slow request lookupino #0x100 on rank 1 block forever on...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40782
m...
- 03:36 PM Backport #50283: octopus: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40782
merged
- 07:52 PM Backport #51493: nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42162
m...
- 03:28 PM Backport #50596 (Rejected): octopus: mgr/nfs: Add troubleshooting section
- Let's focus on Pacific.
- 03:28 PM Backport #50354 (Rejected): octopus: mgr/nfs: validation error on creating custom export
- Let's focus on Pacific.
- 03:28 PM Backport #48703 (Rejected): octopus: mgr/nfs: Add tests for readonly exports
- Let's focus on Pacific.
- 03:28 PM Backport #49712 (Rejected): octopus: mgr/nfs: Add interface to update export
- Let's focus on Pacific.
- 01:42 PM Bug #51589 (Triaged): mds: crash when journaling during replay
- 10:31 AM Bug #51630 (Fix Under Review): mgr/snap_schedule: don't throw traceback on non-existent fs
- Instead return error message and log the traceback...
- 07:40 AM Documentation #51428 (In Progress): mgr/nfs: move nfs doc from cephfs to mgr
07/09/2021
- 05:45 PM Bug #51600 (Fix Under Review): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_h...
- 04:43 AM Bug #51600 (Resolved): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate ...
- META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate are not updated.
ceph daemon mds.$(hostnam...
- 03:14 PM Feature #51615 (New): mgr/nfs: add interface to update nfs cluster
- 03:08 PM Cleanup #51614 (Resolved): mgr/nfs: remove dashboard test remnant from unit tests
- 03:03 PM Feature #51613 (New): mgr/nfs: add qa tests for rgw
07/08/2021
- 09:47 PM Bug #49536 (Resolved): client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:47 PM Bug #49939 (Resolved): cephfs-mirror: be resilient to recreated snapshot during synchronization
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #50112 (Resolved): MDS stuck at stopping when reducing max_mds
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #50216 (Resolved): qa: "ls: cannot access 'lost+found': No such file or directory"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:45 PM Bug #50530 (Resolved): pacific: client: abort after MDS blocklist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:45 PM Bug #51069 (Resolved): mds: mkdir on ephemerally pinned directory sometimes blocked on journal flush
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51077 (Resolved): MDSMonitor: crash when attempting to mount cephfs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51146 (Resolved): qa: scrub code does not join scrubopts with comma
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51182 (Resolved): pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51184 (Resolved): qa: fs:bugs does not specify distro
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Documentation #51187 (Resolved): doc: pacific updates
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51250 (Resolved): qa: fs:upgrade uses teuthology default distro
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:43 PM Bug #51318 (Resolved): cephfs-mirror: do not terminate on SIGHUP
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:36 PM Backport #51232 (Resolved): pacific: qa: scrub code does not join scrubopts with comma
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42065
m...
- 09:36 PM Backport #51251 (Resolved): pacific: qa: fs:upgrade uses teuthology default distro
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42067
m...
- 09:35 PM Backport #50913 (Resolved): pacific: MDS heartbeat timed out between during executing MDCache::st...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42061
m...
- 09:35 PM Backport #51286 (Resolved): pacific: MDSMonitor: crash when attempting to mount cephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42068
m...
- 09:35 PM Backport #51413 (Resolved): pacific: cephfs-mirror: do not terminate on SIGHUP
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42097
m...
- 09:34 PM Backport #51414 (Resolved): pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42072
m...
- 09:34 PM Backport #51412 (Resolved): pacific: mds: mkdir on ephemerally pinned directory sometimes blocked...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42071
m...
- 09:34 PM Backport #51324 (Resolved): pacific: pacific: client: abort after MDS blocklist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42070
m...
- 09:34 PM Backport #51322 (Resolved): pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42069
m...
- 09:34 PM Backport #51235 (Resolved): pacific: doc: pacific updates
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42066
m...
- 09:34 PM Backport #51231 (Resolved): pacific: pybind/mgr/snap_schedule: Invalid command: Unexpected argume...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42064
m...
- 09:33 PM Backport #51230 (Resolved): pacific: qa: fs:bugs does not specify distro
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42063
m...
- 09:32 PM Backport #51203 (Resolved): pacific: mds: CephFS kclient gets stuck when getattr() on a certain file
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42062
m...
- 09:32 PM Backport #50875 (Resolved): pacific: mds: MDSLog::journaler pointer maybe crash with use-after-free
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42060
m...
- 09:32 PM Backport #50848 (Resolved): pacific: mds: "cluster [ERR] Error recovering journal 0x203: (2) No...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42059
m...
- 09:31 PM Backport #50846 (Resolved): pacific: mds: journal recovery thread is possibly asserting with mds_...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42058
m...
- 09:31 PM Backport #50636 (Resolved): pacific: session dump includes completed_requests twice, once as an i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42057
m...
- 09:31 PM Backport #50630 (Resolved): pacific: mds: Error ENOSYS: mds.a started profiler
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42056
m...
- 09:30 PM Backport #50445 (Resolved): pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41052
m...
- 09:30 PM Backport #50624 (Resolved): pacific: qa: "ls: cannot access 'lost+found': No such file or directory"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40856
m...
- 09:30 PM Backport #50289 (Resolved): pacific: MDS stuck at stopping when reducing max_mds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40856
m...
- 09:30 PM Backport #50282 (Resolved): pacific: MDS slow request lookupino #0x100 on rank 1 block forever on...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40856
m...
- 07:44 AM Bug #51589 (Resolved): mds: crash when journaling during replay
- MDS version: ceph version 14.2.20 (36274af6eb7f2a5055f2d53ad448f2694e9046a0) nautilus (stable)
Using 200 clients, ...
07/07/2021
- 09:12 PM Bug #44257: vstart.sh: failed by waiting for mgr dashboard module to start
- I just ran into this same issue, where "waiting for mgr dashboard module to start" runs on a loop. Checking mgr.log.x...
- 05:40 PM Backport #51481 (Resolved): pacific: osd: sent kickoff request to MDS and then stuck for 15 minut...
- 05:30 PM Backport #51547 (In Progress): pacific: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not as...
07/06/2021
- 08:10 PM Backport #51547 (Resolved): pacific: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assum...
- https://github.com/ceph/ceph/pull/42226
- 08:08 PM Bug #51476 (Pending Backport): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a ce...
- 06:55 PM Backport #51545 (Rejected): octopus: mgr/volumes: use a dedicated libcephfs handle for subvolume ...
- 06:55 PM Backport #51544 (Resolved): pacific: mgr/volumes: use a dedicated libcephfs handle for subvolume ...
- https://github.com/ceph/ceph/pull/42914
- 06:54 PM Bug #51271 (Pending Backport): mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
- 06:52 PM Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~...
- /ceph/teuthology-archive/pdonnell-2021-07-04_02:32:34-fs-wip-pdonnell-testing-20210703.052904-distro-basic-smithi/625...
- 06:05 PM Cleanup #51543 (Fix Under Review): mds: improve debugging for mksnap denial
- 06:04 PM Cleanup #51543 (Resolved): mds: improve debugging for mksnap denial
- 03:17 PM Feature #51340 (Fix Under Review): mon/MDSMonitor: allow creating a file system with a specific f...
- 10:50 AM Feature #50150 (Fix Under Review): qa: begin grepping kernel logs for kclient warnings/failures t...
07/05/2021
- 02:23 AM Feature #51518 (Fix Under Review): client: flush the mdlog in unsafe requests' relevant and auth ...
- 01:35 AM Feature #51518 (Resolved): client: flush the mdlog in unsafe requests' relevant and auth MDSes only
- Do not flush the mdlog on all the MDSes, which may make no sense for a specific inode.
07/02/2021
- 11:19 PM Backport #51499 (In Progress): pacific: qa: test_ls_H_prints_human_readable_file_size failure
- 08:20 PM Backport #51499 (Resolved): pacific: qa: test_ls_H_prints_human_readable_file_size failure
- https://github.com/ceph/ceph/pull/42166
- 11:16 PM Backport #51500 (In Progress): pacific: qa: FileNotFoundError: [Errno 2] No such file or director...
- 08:30 PM Backport #51500 (Resolved): pacific: qa: FileNotFoundError: [Errno 2] No such file or directory: ...
- https://github.com/ceph/ceph/pull/42165
- 08:29 PM Bug #51183 (Pending Backport): qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/...
- 08:16 PM Bug #51417 (Pending Backport): qa: test_ls_H_prints_human_readable_file_size failure
- 08:08 PM Backport #51493 (Resolved): nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- 07:46 PM Backport #51493: nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42162
merged
- 04:43 PM Backport #51493 (In Progress): nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- 04:20 PM Backport #51493 (Resolved): nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- https://github.com/ceph/ceph/pull/42162
- 04:48 PM Bug #51495 (In Progress): client: handle empty path strings
- Standard indicates we should return ENOENT.
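The standard behavior the entry above refers to can be demonstrated from userspace. A minimal sketch (Python is used only for illustration; the actual fix lands in the Ceph client code): POSIX path resolution of an empty pathname must fail with ENOENT.

```python
import errno
import os

# POSIX: an empty pathname does not resolve, so stat("") fails with ENOENT.
try:
    os.stat("")
except OSError as e:
    print(e.errno == errno.ENOENT)  # True
```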
- 04:42 PM Backport #51494 (In Progress): octopus: pacific: pybind/ceph_volume_client: stat on empty string
- 04:20 PM Backport #51494 (Resolved): octopus: pacific: pybind/ceph_volume_client: stat on empty string
- https://github.com/ceph/ceph/pull/42161
- 04:18 PM Bug #51492 (Pending Backport): pacific: pybind/ceph_volume_client: stat on empty string
- 04:16 PM Bug #51492 (Resolved): pacific: pybind/ceph_volume_client: stat on empty string
- When volume_prefix begins with "/", the library will try to stat the empty string resulting in a log like:...
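A minimal sketch of the failure mode described above (illustrative only, not the actual ceph_volume_client code): when a volume_prefix begins with "/", naive component splitting yields an empty leading component, so a loop that stats each component in turn starts by stat()-ing the empty string.

```python
# Hypothetical prefix walk, for illustration only.
volume_prefix = "/volumes"  # the leading "/" is the problem case

components = volume_prefix.split("/")
print(components)  # ['', 'volumes'] -- the first component is the empty string

# Code that stats each component would therefore call stat("") first,
# which fails with ENOENT (see Bug #51495).
```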
07/01/2021
- 11:28 PM Bug #51476 (Fix Under Review): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a ce...
- 03:28 PM Bug #51476 (Resolved): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a cephfs-mir...
- When the replication is unidirectional we cannot assume a daemon is running and the "fs snapshot mirror daemon status...
- 09:15 PM Backport #51482 (Rejected): octopus: osd: sent kickoff request to MDS and then stuck for 15 minut...
- 09:15 PM Backport #51481 (Resolved): pacific: osd: sent kickoff request to MDS and then stuck for 15 minut...
- https://github.com/ceph/ceph/pull/42072
- 09:12 PM Bug #51357 (Pending Backport): osd: sent kickoff request to MDS and then stuck for 15 minutes unt...
- The code change is in cephfs.
- 04:02 PM Backport #51232: pacific: qa: scrub code does not join scrubopts with comma
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42065
merged
- 04:01 PM Backport #51251: pacific: qa: fs:upgrade uses teuthology default distro
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42067
merged
- 12:13 PM Bug #50954: mgr/pybind/snap_schedule: commands only support positional arguments?
- Sebastian Wagner wrote:
> Can you use proper positional arguments here?
>
> [...]
>
> I for one don't think h...
06/30/2021
- 11:58 PM Backport #50913: pacific: MDS heartbeat timed out between during executing MDCache::start_files_t...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42061
merged
- 11:57 PM Backport #51286: pacific: MDSMonitor: crash when attempting to mount cephfs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42068
merged
- 10:46 PM Documentation #51459 (Resolved): doc: document what kinds of damage forward scrub can repair
- 07:38 PM Backport #51413: pacific: cephfs-mirror: do not terminate on SIGHUP
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42097
merged
- 07:36 PM Backport #51414: pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42072
merged
- 07:35 PM Backport #51412: pacific: mds: mkdir on ephemerally pinned directory sometimes blocked on journal...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42071
merged
- 07:34 PM Backport #51324: pacific: pacific: client: abort after MDS blocklist
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42070
merged
- 07:34 PM Backport #51322: pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42069
merged
- 07:32 PM Backport #51235: pacific: doc: pacific updates
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42066
merged
- 07:30 PM Backport #51231: pacific: pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42064
merged
- 07:29 PM Backport #51230: pacific: qa: fs:bugs does not specify distro
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42063
merged
- 06:44 PM Backport #51203: pacific: mds: CephFS kclient gets stuck when getattr() on a certain file
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42062
merged
- 06:43 PM Backport #50875: pacific: mds: MDSLog::journaler pointer maybe crash with use-after-free
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42060
merged
- 06:42 PM Backport #50848: pacific: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42059
merged
- 06:42 PM Backport #50846: pacific: mds: journal recovery thread is possibly asserting with mds_lock not lo...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42058
merged
- 06:41 PM Backport #50636: pacific: session dump includes completed_requests twice, once as an integer and ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42057
merged
- 06:41 PM Backport #50630: pacific: mds: Error ENOSYS: mds.a started profiler
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42056
merged
- 06:31 PM Backport #50445: pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41052
merged
- 06:30 PM Backport #50624: pacific: qa: "ls: cannot access 'lost+found': No such file or directory"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40856
merged
- 06:30 PM Backport #50289: pacific: MDS stuck at stopping when reducing max_mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40856
merged
- 06:30 PM Backport #50282: pacific: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40856
merged
- 06:16 PM Bug #51417: qa: test_ls_H_prints_human_readable_file_size failure
- Oh, this may be the protected_regular enforcement that's in more recent kernels. See:
https://www.kernel.org/d...
- 06:10 PM Bug #51440 (Duplicate): fallocate fails with EACCES
- 06:07 PM Bug #51440: fallocate fails with EACCES
- I'm not even sure that this is ceph related. The command was trying to open a file in /tmp to do an fallocate (which ...
- 10:42 AM Bug #51440 (Duplicate): fallocate fails with EACCES
- https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/62...
- 11:46 AM Bug #51062 (Fix Under Review): mds,client: suppport getvxattr RPC
- 10:57 AM Bug #51266: test cleanup failure
- Another occurrence:
https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific...
- 07:24 AM Feature #40986: cephfs qos: implement cephfs qos base on tokenbucket algorighm
- I'm also interested in the status of QoS for CephFS. Is there any available and mature CephFS QOS mechanism?
06/29/2021
- 10:42 PM Feature #51340 (In Progress): mon/MDSMonitor: allow creating a file system with a specific fscid
- 09:40 PM Feature #51434 (Resolved): pybind/mgr/volumes: add basic introspection
- Something like:...
- 02:58 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- This is clearly a race of some sort. Either it is finding the directory and the mds_sessions file (and possibly the d...
- 02:25 AM Bug #51183 (Fix Under Review): qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/...
- 01:44 PM Bug #50033 (Fix Under Review): mgr/stats: be resilient to offline MDS rank-0
- 01:00 PM Documentation #51428 (Pending Backport): mgr/nfs: move nfs doc from cephfs to mgr
- 12:59 PM Backport #51413 (In Progress): pacific: cephfs-mirror: do not terminate on SIGHUP
- 12:54 PM Backport #50994 (Resolved): pacific: cephfs-mirror: be resilient to recreated snapshot during syn...
- So, this is already in pacific -- I missed updating the tracker.
- 12:52 PM Backport #50991 (In Progress): pacific: mgr/nfs: skipping conf file or passing empty file throws ...
- https://github.com/ceph/ceph/pull/42096
- 12:52 PM Backport #51174 (In Progress): pacific: mgr/nfs: add nfs-ganesha config hierarchy
- https://github.com/ceph/ceph/pull/42096
- 08:12 AM Fix #49341 (Resolved): qa: add async dirops testing
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:10 AM Bug #50447 (Resolved): cephfs-mirror: disallow adding a active peered file system back to its source
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:10 AM Bug #50532 (Resolved): mgr/volumes: hang when removing subvolume when pools are full
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 AM Bug #50867 (Resolved): qa: fs:mirror: reduced data availability
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 AM Bug #51204 (Resolved): cephfs-mirror: false warning of "keyring not found" seen in cephfs-mirror ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:41 AM Backport #51421 (Rejected): pacific: mgr/nfs: Add support for RGW export
- 07:37 AM Bug #47172 (Pending Backport): mgr/nfs: Add support for RGW export
- 07:21 AM Bug #51170 (Resolved): pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
- 07:19 AM Backport #50627: pacific: client: access(path, X_OK) on non-executable file as root always succeeds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41294
m...
- 06:13 AM Backport #51285 (In Progress): pacific: mds: unknown metric type is always -1
- 05:25 AM Backport #51285: pacific: mds: unknown metric type is always -1
- Patrick Donnelly wrote:
> Xiubo, this has non-trivial conflicts. Can you take this one please?
Sure, will finish ...
- 05:26 AM Backport #51200 (In Progress): pacific: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- 05:20 AM Backport #51411 (In Progress): pacific: pybind/mgr/volumes: purge queue seems to block operating ...
- 04:58 AM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- ...
06/28/2021
- 11:23 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- No lazy umount involved: /ceph/teuthology-archive/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.22542...
- 10:25 PM Bug #51417 (Fix Under Review): qa: test_ls_H_prints_human_readable_file_size failure
- 10:01 PM Bug #51417 (Resolved): qa: test_ls_H_prints_human_readable_file_size failure
- Related to #51169.
See https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.2254...
- 08:08 PM Feature #51416: kclient: add debugging for mds failover events
- On IRC Patrick said:...
- 08:02 PM Feature #51416 (Fix Under Review): kclient: add debugging for mds failover events
- 07:54 PM Backport #51414 (In Progress): pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- 07:00 PM Backport #51414 (Resolved): pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- https://github.com/ceph/ceph/pull/42072
- 07:53 PM Backport #51412 (In Progress): pacific: mds: mkdir on ephemerally pinned directory sometimes bloc...
- 06:55 PM Backport #51412 (Resolved): pacific: mds: mkdir on ephemerally pinned directory sometimes blocked...
- https://github.com/ceph/ceph/pull/42071
- 07:51 PM Backport #51324 (In Progress): pacific: pacific: client: abort after MDS blocklist
- 07:49 PM Backport #51322 (In Progress): pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionEr...
- 07:48 PM Backport #51286 (In Progress): pacific: MDSMonitor: crash when attempting to mount cephfs
- 07:47 PM Backport #51285 (Need More Info): pacific: mds: unknown metric type is always -1
- Xiubo, this has non-trivial conflicts. Can you take this one please?
- 07:46 PM Backport #51251 (In Progress): pacific: qa: fs:upgrade uses teuthology default distro
- 07:45 PM Backport #51235 (In Progress): pacific: doc: pacific updates
- 07:43 PM Backport #51232 (In Progress): pacific: qa: scrub code does not join scrubopts with comma
- 07:42 PM Backport #51231 (In Progress): pacific: pybind/mgr/snap_schedule: Invalid command: Unexpected arg...
- 07:41 PM Backport #51230 (In Progress): pacific: qa: fs:bugs does not specify distro
- 07:39 PM Backport #51203 (In Progress): pacific: mds: CephFS kclient gets stuck when getattr() on a certai...
- 07:38 PM Backport #51411 (Need More Info): pacific: pybind/mgr/volumes: purge queue seems to block operati...
- Kotresh, please take this one.
- 06:55 PM Backport #51411 (Resolved): pacific: pybind/mgr/volumes: purge queue seems to block operating on ...
- https://github.com/ceph/ceph/pull/42083
- 07:37 PM Backport #51200 (Need More Info): pacific: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- Kotresh, please take this one.
- 07:37 PM Backport #51198 (Need More Info): pacific: msg: active_connections regression
- Not sure this backport is necessary.
- 07:35 PM Backport #51413 (Need More Info): pacific: cephfs-mirror: do not terminate on SIGHUP
- Venky, please do this one.
- 07:00 PM Backport #51413 (Resolved): pacific: cephfs-mirror: do not terminate on SIGHUP
- https://github.com/ceph/ceph/pull/42097
- 07:35 PM Backport #50994 (Need More Info): pacific: cephfs-mirror: be resilient to recreated snapshot duri...
- Venky, please do this one.
- 07:35 PM Backport #51174 (Need More Info): pacific: mgr/nfs: add nfs-ganesha config hierarchy
- Varsha, please do this one.
- 07:34 PM Backport #50991 (Need More Info): pacific: mgr/nfs: skipping conf file or passing empty file thro...
- Varsha, please do this one.
- 07:34 PM Backport #50913 (In Progress): pacific: MDS heartbeat timed out between during executing MDCache:...
- 07:32 PM Backport #50875 (In Progress): pacific: mds: MDSLog::journaler pointer maybe crash with use-after...
- 07:30 PM Backport #50848 (In Progress): pacific: mds: "cluster [ERR] Error recovering journal 0x203: (2)...
- 07:28 PM Backport #50846 (In Progress): pacific: mds: journal recovery thread is possibly asserting with m...
- 07:27 PM Backport #50636 (In Progress): pacific: session dump includes completed_requests twice, once as a...
- 07:26 PM Backport #50630 (In Progress): pacific: mds: Error ENOSYS: mds.a started profiler
- 07:24 PM Backport #51284 (Resolved): pacific: cephfs-mirror: false warning of "keyring not found" seen in ...
- 04:27 PM Backport #51284: pacific: cephfs-mirror: false warning of "keyring not found" seen in cephfs-mirr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41947
merged
- 07:23 PM Backport #51283 (Resolved): pacific: cephfs-mirror: disallow adding a active peered file system b...
- 04:27 PM Backport #51283: pacific: cephfs-mirror: disallow adding a active peered file system back to its ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41947
merged
- 07:23 PM Backport #51186 (Resolved): pacific: qa: add async dirops testing
- 07:23 PM Backport #51086 (Resolved): pacific: qa: fs:mirror: reduced data availability
- 04:27 PM Backport #51086: pacific: qa: fs:mirror: reduced data availability
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41947
merged
- 07:23 PM Backport #51084 (Resolved): pacific: mgr/volumes: hang when removing subvolume when pools are full
- 07:22 PM Backport #50899 (Resolved): pacific: mds: monclient: wait_auth_rotating timed out after 30
- 04:32 PM Backport #50899: pacific: mds: monclient: wait_auth_rotating timed out after 30
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41450
merged
- 07:22 PM Backport #50627 (Resolved): pacific: client: access(path, X_OK) on non-executable file as root al...
- 07:21 PM Backport #50624 (In Progress): pacific: qa: "ls: cannot access 'lost+found': No such file or dire...
- 07:14 PM Bug #51410: kclient: fails to finish reconnect during MDS thrashing (testing branch)
- It comes from this swath of code, that gets called when the caps have been renewed:...
- 06:32 PM Bug #51410: kclient: fails to finish reconnect during MDS thrashing (testing branch)
- ...
- 06:29 PM Bug #51410: kclient: fails to finish reconnect during MDS thrashing (testing branch)
- Some interesting messages in the kernel log: /ceph/teuthology-archive/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-te...
- 06:28 PM Bug #51410 (New): kclient: fails to finish reconnect during MDS thrashing (testing branch)
- ...
- 07:05 PM Backport #51335 (Resolved): pacific: mds: avoid journaling overhead for setxattr("ceph.dir.subvol...
- 04:31 PM Backport #51335: pacific: mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for n...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41995
merged
- 07:00 PM Backport #51415 (Resolved): octopus: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- https://github.com/ceph/ceph/pull/43785
- 06:57 PM Bug #51280 (Pending Backport): mds: "FAILED ceph_assert(r == 0 || r == -2)"
- 06:55 PM Bug #51318 (Pending Backport): cephfs-mirror: do not terminate on SIGHUP
- 06:53 PM Bug #51256 (Pending Backport): pybind/mgr/volumes: purge queue seems to block operating on cephfs...
- This is no longer urgent so I'm changing this to just Pacific to reduce the risk associated with this change.
- 06:52 PM Bug #51069 (Pending Backport): mds: mkdir on ephemerally pinned directory sometimes blocked on jo...
- 01:43 PM Bug #51278 (Triaged): mds: "FAILED ceph_assert(!segments.empty())"
- 01:42 PM Bug #51281 (Triaged): qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad...
- 09:08 AM Cleanup #51407 (Resolved): mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix various fla...
- ...
- 09:07 AM Cleanup #51406 (Fix Under Review): mgr/volumes/fs/operations/versions/op_sm.py: fix various flake...
- ...
- 09:05 AM Cleanup #51405 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_v2.py: fix variou...
- ...
- 09:05 AM Cleanup #51404 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_v1.py: fix variou...
- ...
- 09:03 AM Cleanup #51403 (Resolved): mgr/volumes/fs/operations/versions/auth_metadata.py: fix various flake...
- ...
- 09:02 AM Cleanup #51402 (Resolved): mgr/volumes/fs/operations/versions/subvolume_base.py: fix various flak...
- ...
- 09:00 AM Cleanup #51401 (Fix Under Review): mgr/volumes/fs/operations/versions/metadata_manager.py: fix va...
- ...
- 08:59 AM Cleanup #51400 (Fix Under Review): mgr/volumes/fs/operations/trash.py: fix various flake8 issues
- ...
- 08:58 AM Cleanup #51399 (Fix Under Review): mgr/volumes/fs/operations/template.py: fix various flake8 issues
- ...
- 08:57 AM Cleanup #51398 (Resolved): mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
- ...
- 08:56 AM Cleanup #51397 (Fix Under Review): mgr/volumes/fs/operations/volume.py: fix various flake8 issues
- ...
- 08:55 AM Cleanup #51396 (Resolved): mgr/volumes/fs/operations/clone_index.py: fix various flake8 issues
- ...
- 08:53 AM Cleanup #51395 (Fix Under Review): mgr/volumes/fs/operations/lock.py: fix various flake8 issues
- ...
- 08:52 AM Cleanup #51394 (Fix Under Review): mgr/volumes/fs/operations/pin_util.py: fix various flake8 issues
- ...
- 08:50 AM Cleanup #51393 (Resolved): mgr/volumes/fs/operations/group.py: add extra blank line
- ...
- 08:49 AM Cleanup #51392 (Resolved): mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
- ...
- 08:48 AM Cleanup #51391 (Resolved): mgr/volumes/fs/operations/resolver.py: add extra blank line
- ...
- 08:47 AM Cleanup #51390 (Resolved): mgr/volumes/fs/operations/access.py: fix various flake8 issues
- ...
- 08:46 AM Cleanup #51389 (Fix Under Review): mgr/volumes/fs/operations/rankevicter.py: fix various flake8 i...
- ...
- 08:44 AM Cleanup #51388 (Fix Under Review): mgr/volumes/fs/operations/index.py: add extra blank line
- ...
- 08:43 AM Cleanup #51387 (Resolved): mgr/volumes/fs/purge_queue.py: add extra blank line
- ...
- 08:40 AM Cleanup #51386 (Fix Under Review): mgr/volumes/fs/volume.py: fix various flake8 issues
- ...
- 08:38 AM Cleanup #51385 (Fix Under Review): mgr/volumes/fs/fs_util.py: add extra blank line
- ...
- 08:36 AM Cleanup #51384 (Resolved): mgr/volumes/fs/vol_spec.py: fix various flake8 issues
- ...
- 08:34 AM Cleanup #51383 (Fix Under Review): mgr/volumes/fs/exception.py: fix various flake8 issues
- ...
- 08:33 AM Cleanup #51382 (Fix Under Review): mgr/volumes/fs/async_cloner.py: fix various flake8 issues
- ...
- 08:30 AM Cleanup #51381 (Resolved): mgr/volumes/fs/async_job.py: fix various flake8 issues
- ...
- 08:29 AM Cleanup #51380 (Resolved): mgr/volumes/module.py: fix various flake8 issues
- ...
- 08:27 AM Cleanup #51379: mgr/volumes: add flake8 test
- Before fixing these issues, make sure you have flake8 installed and that you have read the PEP 8 style guide[1] and the flake8 guide[...
- 08:25 AM Cleanup #51379 (New): mgr/volumes: add flake8 test
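For context on the flake8 cleanups above: the "add extra blank line" items correspond to flake8's E302/E305 class of checks, where PEP 8 expects two blank lines around top-level definitions. The snippet below is a toy illustration of that rule, not flake8's actual implementation; `count_e302_violations` is a made-up helper for this sketch.

```python
# Toy sketch of the E302-style rule behind the "add extra blank line"
# cleanups: PEP 8 expects two blank lines before a top-level def/class.

def count_e302_violations(source: str) -> int:
    """Count top-level def/class statements preceded by fewer than 2 blank lines."""
    violations = 0
    blanks = 0  # blank lines seen immediately before the current line
    for i, line in enumerate(source.splitlines()):
        if not line.strip():
            blanks += 1
            continue
        # A def/class at the very top of the module needs no preceding blanks.
        if i > 0 and line.startswith(("def ", "class ")) and blanks < 2:
            violations += 1
        blanks = 0
    return violations


bad = "x = 1\ndef f():\n    pass\n"       # no blank line before def -> flagged
good = "x = 1\n\n\ndef f():\n    pass\n"  # two blank lines -> clean
print(count_e302_violations(bad), count_e302_violations(good))  # 1 0
```

Running the real tool (`flake8 src/pybind/mgr/volumes/`) reports these as `E302 expected 2 blank lines, got N`.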
06/25/2021
- 10:27 AM Bug #51365 (In Progress): mgr/nfs: show both ipv4 and ipv6 address in cluster info command
- https://github.com/ceph/ceph/blob/74df5af8e2d36c6143f214cab0fca1693d39f86e/src/pybind/mgr/nfs/cluster.py#L18-L28
Wit...
- 01:15 AM Bug #51357 (Resolved): osd: sent kickoff request to MDS and then stuck for 15 minutes until MDS c...
- ...
- 01:09 AM Bug #51280 (Fix Under Review): mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Currently in the MDS I just added one improvement fix, which will respawn the MDS daemon instead of crashing it. But for th...
06/24/2021
- 01:59 PM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Xiubo Li wrote:
> Patrick Donnelly wrote:
> > Xiubo Li wrote:
> > > From the mds.e.log we can see that the "100000...
- 05:59 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > From the mds.e.log we can see that the "10000003280.00000000:head" re...
- 04:35 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Xiubo Li wrote:
> From the mds.e.log we can see that the "10000003280.00000000:head" request was stuck and timedout ...
- 03:21 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- From the mds.e.log we can see that the "10000003280.00000000:head" request was stuck and timedout just after 15m, whi...
- 04:50 AM Tasks #51341 (In Progress): Steps to recover file system(s) after recovering the Ceph monitor store
- In certain rare cases, all the Ceph Monitors might end up with corrupted Monitor stores. The Monitor stores can be re...
- 03:52 AM Feature #51340 (Resolved): mon/MDSMonitor: allow creating a file system with a specific fscid
- In the scenario where the monitor databases are lost and must be rebuilt, the file system will need to be recreated. (Assum...
06/23/2021
- 08:59 PM Backport #51337 (In Progress): nautilus: mds: avoid journaling overhead for setxattr("ceph.dir.su...
- 08:30 PM Backport #51337 (Rejected): nautilus: mds: avoid journaling overhead for setxattr("ceph.dir.subvo...
- https://github.com/ceph/ceph/pull/41997
- 08:57 PM Backport #51336 (In Progress): octopus: mds: avoid journaling overhead for setxattr("ceph.dir.sub...
- 08:30 PM Backport #51336 (Resolved): octopus: mds: avoid journaling overhead for setxattr("ceph.dir.subvol...
- https://github.com/ceph/ceph/pull/41996
- 08:55 PM Backport #51335 (In Progress): pacific: mds: avoid journaling overhead for setxattr("ceph.dir.sub...
- 08:30 PM Backport #51335 (Resolved): pacific: mds: avoid journaling overhead for setxattr("ceph.dir.subvol...
- https://github.com/ceph/ceph/pull/41995
- 08:25 PM Bug #51276 (Pending Backport): mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") ...
- 07:40 PM Feature #51333 (In Progress): qa: use cephadm to provision cephfs for fs:workloads
- 05:18 PM Feature #51333 (Resolved): qa: use cephadm to provision cephfs for fs:workloads
- To increase our test coverage!
- 04:32 PM Feature #51332 (Fix Under Review): qa: increase metadata replication to exercise lock/witness cod...
- 04:25 PM Feature #51332 (Fix Under Review): qa: increase metadata replication to exercise lock/witness cod...
- 12:20 PM Bug #51318 (Fix Under Review): cephfs-mirror: do not terminate on SIGHUP
- 09:14 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- In the osd.7 side, the request was blocked and the osd was added to backoff:...
- 04:43 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Xiubo Li wrote:
> It seems the same "23C_IO_MDC_TruncateFinish" called twice:
>
> [...]
Sorry, it is not.
C...
- 02:28 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- It seems the same "23C_IO_MDC_TruncateFinish" called twice:...
- 02:40 AM Backport #51324 (Resolved): pacific: pacific: client: abort after MDS blocklist
- https://github.com/ceph/ceph/pull/42070
- 02:40 AM Backport #51323 (Resolved): octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- https://github.com/ceph/ceph/pull/45159
- 02:40 AM Backport #51322 (Resolved): pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- https://github.com/ceph/ceph/pull/42069
- 02:38 AM Bug #50808 (Pending Backport): qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Item...
- 02:37 AM Bug #50530 (Pending Backport): pacific: client: abort after MDS blocklist
- 02:34 AM Bug #47276 (Resolved): MDSMonitor: add command to rename file systems
- 02:34 AM Bug #50852 (Resolved): mds: remove fs_name stored in MDSRank
- 02:31 AM Bug #50495: libcephfs: shutdown race fails with status 141
- /ceph/teuthology-archive/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/618...
06/22/2021
- 12:17 PM Bug #51318 (Resolved): cephfs-mirror: do not terminate on SIGHUP
- So, utilities such as logrotate would send SIGHUP to the daemon which would terminate it. This is being seen in some ...
- 06:17 AM Bug #51271 (Fix Under Review): mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
06/21/2021
- 04:53 AM Backport #51283 (In Progress): pacific: cephfs-mirror: disallow adding a active peered file syste...
- 04:53 AM Backport #51284 (In Progress): pacific: cephfs-mirror: false warning of "keyring not found" seen ...
- 04:53 AM Backport #51086 (In Progress): pacific: qa: fs:mirror: reduced data availability
- 02:43 AM Bug #51295 (Rejected): When fsname = k8s cephfs is specified, an error is displayed:"HEALTH_ERR 1...
- When fsname = k8s cephfs is specified, an error is displayed:
# ceph health detail
HEALTH_ERR 1 auth entities have ...
06/19/2021
- 01:03 PM Bug #51092 (Resolved): mds: Timed out waiting for MDS daemons to become healthy
- 02:55 AM Backport #51286 (Resolved): pacific: MDSMonitor: crash when attempting to mount cephfs
- https://github.com/ceph/ceph/pull/42068
- 02:55 AM Backport #51285 (Resolved): pacific: mds: unknown metric type is always -1
- https://github.com/ceph/ceph/pull/42088
- 02:55 AM Backport #51284 (Resolved): pacific: cephfs-mirror: false warning of "keyring not found" seen in ...
- https://github.com/ceph/ceph/pull/41947
- 02:54 AM Bug #51250 (Pending Backport): qa: fs:upgrade uses teuthology default distro
- 02:53 AM Bug #51077: MDSMonitor: crash when attempting to mount cephfs
- Thanks for the detailed notes! It was very helpful tracking the bug down.
- 02:53 AM Bug #51077 (Pending Backport): MDSMonitor: crash when attempting to mount cephfs
- 02:51 AM Bug #51204 (Pending Backport): cephfs-mirror: false warning of "keyring not found" seen in cephfs...
- 02:50 AM Bug #51113 (Pending Backport): mds: unknown metric type is always -1
- 02:50 AM Backport #51283 (Resolved): pacific: cephfs-mirror: disallow adding a active peered file system b...
- https://github.com/ceph/ceph/pull/41947
- 02:49 AM Bug #50447 (Pending Backport): cephfs-mirror: disallow adding a active peered file system back to...
- 02:27 AM Bug #51228: qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
- Similar failure in different test: /ceph/teuthology-archive/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-2021...
- 12:26 AM Bug #51281 (Duplicate): qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1...
- ...
- 12:18 AM Bug #51280 (Resolved): mds: "FAILED ceph_assert(r == 0 || r == -2)"
- ...
- 12:08 AM Bug #51278 (Triaged): mds: "FAILED ceph_assert(!segments.empty())"
- ...
- 12:04 AM Bug #43216 (Triaged): MDSMonitor: removes MDS coming out of quorum election
- /ceph/teuthology-archive/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/617...
06/18/2021
- 07:39 PM Bug #51276 (Fix Under Review): mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") ...
- 07:36 PM Bug #51276 (Resolved): mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for no-o...
- In preparation for acquiring the xlock on the directory inode, the MDS must journal a few events before continuing on...
- 07:12 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- I submitted a PR to add log messages in this scenario. Once we merge that, perhaps we can clarify what happened.
- 01:28 PM Bug #51271: mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
- I'm testing the changes. Will push a PR once ready.
- 04:36 AM Bug #51271 (Resolved): mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
- This improves the cache efficiency of the volumes plugin by keeping purge queue threads from using this handle.
06/17/2021
- 08:00 PM Bug #51256 (Fix Under Review): pybind/mgr/volumes: purge queue seems to block operating on cephfs...
- 12:51 PM Bug #51256: pybind/mgr/volumes: purge queue seems to block operating on cephfs connection require...
- Partially-fixes: https://github.com/ceph/ceph/pull/41917
- 07:41 AM Bug #51256: pybind/mgr/volumes: purge queue seems to block operating on cephfs connection require...
- Purge threads (or any async job in mgr/volumes) operate in two steps:
1. Perform a file system call to fetch an ent...
- 04:44 AM Bug #51256 (In Progress): pybind/mgr/volumes: purge queue seems to block operating on cephfs conn...
- 07:55 PM Backport #51186: pacific: qa: add async dirops testing
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41823
merged
- 07:52 PM Bug #51170: pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
- https://github.com/ceph/ceph/pull/41811 merged
- 07:52 PM Backport #51084: pacific: mgr/volumes: hang when removing subvolume when pools are full
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41691
merged
- 07:51 PM Backport #50627: pacific: client: access(path, X_OK) on non-executable file as root always succeeds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41294
merged
- 04:38 PM Feature #49340 (In Progress): libcephfssqlite: library for sqlite interface to CephFS
- 04:01 PM Bug #51267: CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps...
- Most recent occurrence is here:
https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021-0...
- 03:58 PM Bug #51267 (New): CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-ca...
- https://sentry.ceph.com/organizations/ceph/issues/7357/...
- 03:53 PM Bug #50279: qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
- This occurred again on a pacific run:
https://pulpito.ceph.com/yuriw-2021-06-14_16:21:07-fs-wip-yuri2-testing-2021...
- 03:22 PM Bug #51266 (New): test cleanup failure
- This test's cleanup failed due to being unable to remove the mountpoint directory after the test:
https://pulp...
- 02:52 PM Feature #51265 (Fix Under Review): mgr/nfs: add interface to create exports from json file
- 02:50 PM Feature #50449 (Fix Under Review): mgr/nfs: Add unit tests for conf parser and others
- 02:49 PM Bug #47172 (Fix Under Review): mgr/nfs: Add support for RGW export
- 02:49 PM Cleanup #50816 (Fix Under Review): mgr/nfs: add nfs to mypy
- 02:44 PM Bug #51264 (New): TestVolumeClient failure
- Failed TestVolumeClient test:...
- 02:38 PM Bug #51263 (New): pjdfstest rename test 10.t failed with EACCES
- pjdfstest rename test failed with -EACCES during teuthology testing for pacific backport:
https://pulpito.ceph.com...
- 02:07 PM Bug #51262 (Duplicate): test_full.py test has incorrect assumption
- 02:02 PM Bug #51262 (Duplicate): test_full.py test has incorrect assumption
- Snippet from test_full.py:...
- 02:06 PM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- Snippet from test_full.py:...
- 11:23 AM Bug #51092 (Fix Under Review): mds: Timed out waiting for MDS daemons to become healthy
06/16/2021
- 11:48 PM Bug #51256 (Resolved): pybind/mgr/volumes: purge queue seems to block operating on cephfs connect...
- ...
- 07:32 PM Bug #51250 (Fix Under Review): qa: fs:upgrade uses teuthology default distro
- 07:17 PM Bug #51250 (Pending Backport): qa: fs:upgrade uses teuthology default distro
- 07:16 PM Bug #51250 (Resolved): qa: fs:upgrade uses teuthology default distro
- https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/617...
- 07:20 PM Backport #51251 (Resolved): pacific: qa: fs:upgrade uses teuthology default distro
- https://github.com/ceph/ceph/pull/42067
- 06:58 PM Bug #51077 (Fix Under Review): MDSMonitor: crash when attempting to mount cephfs
- 02:40 PM Bug #51077 (In Progress): MDSMonitor: crash when attempting to mount cephfs
- 03:31 PM Bug #50178 (Rejected): qa: "TypeError: run() got an unexpected keyword argument 'shell'"
- Caused by testing #38481
- 05:06 AM Bug #50530 (Fix Under Review): pacific: client: abort after MDS blocklist
- 01:55 AM Bug #51092: mds: Timed out waiting for MDS daemons to become healthy
- From osd.4 logs we can see that the available size is 0xc037e7b = 192.2MB, but the total is 0xbebc200 = 190.7MB:
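(As an aside, the two hex byte counts quoted in this comment convert as stated; a quick Python check, with the avail/total labels taken from the sentence above:)

```python
# Sanity-check the hex-to-MB conversions quoted from the osd.4 logs.
MiB = 1024 * 1024

avail = 0xc037e7b   # "available size" per the comment
total = 0xbebc200   # "total" per the comment

print(f"avail = {avail / MiB:.1f} MB")  # 192.2
print(f"total = {total / MiB:.1f} MB")  # 190.7

# The anomaly being reported: "available" exceeds "total".
assert avail > total
```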
<...
- 12:19 AM Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
- We also had a crash that resembles this quite closely - I'm attaching the MDS logs. Are there any leads regarding po...
06/15/2021
- 10:00 PM Backport #51235 (Resolved): pacific: doc: pacific updates
- https://github.com/ceph/ceph/pull/42066
- 09:58 PM Documentation #51187 (Pending Backport): doc: pacific updates
- 06:08 PM Bug #51077: MDSMonitor: crash when attempting to mount cephfs
- I'm sorry, I must have copy-pasted the same command twice.
The first command of course was:
ceph-fuse -n client.bare...
- 05:40 PM Bug #51077: MDSMonitor: crash when attempting to mount cephfs
- Stanislav Datskevych wrote:
> An update:
>
> I seem to have found the reason of the issue:
>
> I had already h...
- 12:29 PM Bug #51077: MDSMonitor: crash when attempting to mount cephfs
- An update:
I seem to have found the reason of the issue:
I had already had one CephFS which was working fine.
...
- 05:35 PM Backport #51232 (Resolved): pacific: qa: scrub code does not join scrubopts with comma
- https://github.com/ceph/ceph/pull/42065
- 05:35 PM Backport #51231 (Resolved): pacific: pybind/mgr/snap_schedule: Invalid command: Unexpected argume...
- https://github.com/ceph/ceph/pull/42064
- 05:35 PM Backport #51230 (Resolved): pacific: qa: fs:bugs does not specify distro
- https://github.com/ceph/ceph/pull/42063
- 05:34 PM Bug #51182 (Pending Backport): pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs...
- 05:33 PM Bug #51184 (Pending Backport): qa: fs:bugs does not specify distro
- 05:32 PM Bug #51146 (Pending Backport): qa: scrub code does not join scrubopts with comma
- 05:28 PM Bug #51229 (New): qa: test_multi_snap_schedule list difference failure
- ...
- 05:15 PM Bug #51228 (New): qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
- ...
- 05:01 PM Bug #51178: MDS became read-only while using rsync to copy files
- Well, the filesystem has recovered since; scrub made us find out that one of the MDS daemons didn't have the permission to w...
- 01:20 AM Bug #51178: MDS became read-only while using rsync to copy files
- Sorry about that last comment, the pool wasn't empty, in fact, everything was in another namespace for security and i...
- 03:31 PM Feature #48991 (Resolved): client: allow looking up snapped inodes by inode number+snapid tuple
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:27 PM Backport #50623 (Resolved): octopus: qa: "ls: cannot access 'lost+found': No such file or directory"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40768
m...
- 02:46 PM Backport #50623: octopus: qa: "ls: cannot access 'lost+found': No such file or directory"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40768
merged
- 03:27 PM Backport #50288 (Resolved): octopus: MDS stuck at stopping when reducing max_mds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40768
m...
- 02:46 PM Backport #50288: octopus: MDS stuck at stopping when reducing max_mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40768
merged
- 03:27 PM Backport #49513 (Resolved): octopus: client: allow looking up snapped inodes by inode number+snap...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40768
m...
- 02:46 PM Backport #49513: octopus: client: allow looking up snapped inodes by inode number+snapid tuple
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40768
merged
- 10:50 AM Bug #43039 (Resolved): client: shutdown race fails with status 141
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:48 AM Bug #49379 (Resolved): client: wake up the front pos waiter
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:47 AM Bug #49837 (Resolved): mgr/pybind/snap_schedule: do not fail when no fs snapshots are available
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:46 AM Bug #50035 (Resolved): cephfs-mirror: use sensible mount/shutdown timeouts
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:46 AM Cleanup #50080 (Resolved): mgr/nfs: move nfs code out of volumes plugin
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:46 AM Bug #50091 (Resolved): cephfs-top: exception: addwstr() returned ERR
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:46 AM Bug #50224 (Resolved): qa: test_mirroring_init_failure_with_recovery failure
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:46 AM Documentation #50229 (Resolved): cephfs-mirror: update docs with `fs snapshot mirror daemon statu...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:45 AM Bug #50246 (Resolved): mds: failure replaying journal (EMetaBlob)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:45 AM Bug #50266 (Resolved): "ceph fs snapshot mirror daemon status" should not use json keys as value
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:45 AM Bug #50298 (Resolved): libcephfs: support file descriptor based *at() APIs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:45 AM Bug #50442 (Resolved): cephfs-mirror: ignore snapshots on parent directories when synchronizing s...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:44 AM Bug #50523 (Resolved): Mirroring path "remove" don't not seem to work
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:44 AM Bug #50561 (Resolved): cephfs-mirror: incrementally transfer snapshots whenever possible
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:44 AM Feature #50581 (Resolved): cephfs-mirror: allow mirror daemon to connect to local/primary cluster...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:43 AM Bug #50783 (Resolved): mgr/nfs: cli is broken as cluster id and binding arguments are optional
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:43 AM Bug #50819 (Resolved): mon,doc: deprecate min_compat_client
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:43 AM Bug #50822 (Resolved): qa: testing kernel patch for client metrics causes mds abort
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:43 AM Bug #50976 (Resolved): mds: scrub error on inode 0x1
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:42 AM Bug #51060 (Resolved): qa: test_ephemeral_pin_distribution failure
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:42 AM Bug #51067 (Resolved): mds: segfault printing unknown metric
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:36 AM Backport #47020: nautilus: client: shutdown race fails with status 141
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41593
m...
- 10:34 AM Backport #50625: nautilus: qa: "ls: cannot access 'lost+found': No such file or directory"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40769
m...
- 10:33 AM Backport #50290: nautilus: MDS stuck at stopping when reducing max_mds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40769
m...
- 10:33 AM Backport #49514: nautilus: client: allow looking up snapped inodes by inode number+snapid tuple
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40769
m...
- 10:33 AM Backport #50628: nautilus: client: access(path, X_OK) on non-executable file as root always succeeds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41297
m...
- 10:33 AM Backport #49519: nautilus: client: wake up the front pos waiter
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40865
m...
- 10:33 AM Backport #50634: nautilus: mds: failure replaying journal (EMetaBlob)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41144
m...
- 10:19 AM Backport #50872: pacific: qa: testing kernel patch for client metrics causes mds abort
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41596
m...
- 10:19 AM Backport #51085 (Resolved): pacific: mds: scrub error on inode 0x1
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41685
m...
- 10:19 AM Backport #51070 (Resolved): pacific: qa: test_ephemeral_pin_distribution failure
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41659
m...
- 10:19 AM Backport #50538: pacific: mgr/pybind/snap_schedule: do not fail when no fs snapshots are available
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41044
m...
- 09:12 AM Backport #50541 (Resolved): pacific: libcephfs: support file descriptor based *at() APIs
- 09:12 AM Backport #50629 (Resolved): pacific: cephfs-mirror: ignore snapshots on parent directories when s...
- 09:12 AM Backport #50241 (Resolved): pacific: cephfs-mirror: update docs with `fs snapshot mirror daemon s...
- 09:11 AM Backport #50993 (Resolved): pacific: cephfs-mirror: incrementally transfer snapshots whenever pos...
- 09:11 AM Backport #50876 (Resolved): pacific: cephfs-mirror: allow mirror daemon to connect to local/prima...
- 09:10 AM Backport #50917 (Resolved): pacific: Mirroring path "remove" don't not seem to work
- 09:09 AM Backport #50537 (Resolved): pacific: "ceph fs snapshot mirror daemon status" should not use json ...
- 09:09 AM Backport #50877 (Resolved): pacific: qa: test_mirroring_init_failure_with_recovery failure
- 09:09 AM Backport #50871 (Resolved): pacific: cephfs-mirror: use sensible mount/shutdown timeouts
- https://github.com/ceph/ceph/pull/41475
- 09:02 AM Backport #50873 (Resolved): pacific: mon,doc: deprecate min_compat_client
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41468
m...
- 09:01 AM Backport #50392: pacific: cephfs-top: exception: addwstr() returned ERR
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41053
m...
- 09:01 AM Backport #50843: pacific: mgr/nfs: cli is broken as cluster id and binding arguments are optional
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41389
m...
- 09:01 AM Backport #50597: pacific: mgr/nfs: Add troubleshooting section
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41389
m...
- 09:01 AM Backport #50488: pacific: mgr/nfs: move nfs code out of volumes plugin
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41389
m...
- 09:00 AM Backport #50186: pacific: qa: daemonwatchdog fails if mounts not defined
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40634
m...
- 04:39 AM Bug #45434 (Fix Under Review): qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- 04:39 AM Bug #49912 (Fix Under Review): client: dir->dentries inconsistent, both newname and oldname point...
- 04:38 AM Bug #51069 (Fix Under Review): mds: mkdir on ephemerally pinned directory sometimes blocked on jo...
- 02:45 AM Bug #51069: mds: mkdir on ephemerally pinned directory sometimes blocked on journal flush
- 5 seconds is also the interval of "mds_tick_interval" in mds daemon, which will call the scatter_tick() and finally w...
06/14/2021
- 09:05 PM Bug #51178: MDS became read-only while using rsync to copy files
- The PG auto-scaler is disabled and we missed the fact that it was created with only 8 PGs; could it affect the ab...
- 06:04 PM Bug #51178: MDS became read-only while using rsync to copy files
- To restart the CephFS, I had to find another broken inode in the log.
So I found that this entry crashes the MDS.
...
- 04:43 PM Bug #51178: MDS became read-only while using rsync to copy files
- More information, our Ceph FUSE clients are running ceph 15 if that matters.
- 04:41 PM Bug #51178: MDS became read-only while using rsync to copy files
- We now have a pool in which we can't list any objects. From the log, I understand the MDS is trying to get inode 20000000f24 ...
- 03:59 PM Bug #51178: MDS became read-only while using rsync to copy files
- Here are the hex dumps of the "bad backtrace on directory" inodes....
- 03:55 PM Bug #51178: MDS became read-only while using rsync to copy files
- The filesystem now seems corrupted to the point where no MDS can start it, not even read-only.
-1017> 2021-06-14 11:50:...
- 03:43 PM Bug #51178: MDS became read-only while using rsync to copy files
- The second MDS tried to take the active role and crashed with:
2021-06-14 11:41:27.795 7f21d26d9700 1 mds.0.340 rej...
- 03:34 PM Bug #51178: MDS became read-only while using rsync to copy files
- It just re-mounted read-only. Here is the new log; there is no crash this time, but the log is bigger.
- 02:02 PM Bug #51178: MDS became read-only while using rsync to copy files
- Yes, the log file for the day is attached as a zstd-compressed file; I re-attached it as plain text since it is quite...
- 01:53 PM Bug #51178 (Need More Info): MDS became read-only while using rsync to copy files
- Do you have any logs from the MDS?
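For reference, MDS debug verbosity can be raised before reproducing the failure; a sketch using the centralized config store (log volume grows quickly at these levels):

```shell
# Raise MDS debug levels cluster-wide before reproducing the failure.
ceph config set mds debug_mds 20
ceph config set mds debug_ms 1

# Remove the overrides afterwards to return to the defaults.
ceph config rm mds debug_mds
ceph config rm mds debug_ms
```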
- 08:14 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- This sounds unlikely to be a ceph bug. Its usage of debugfs is pretty straightforward. I'd be more inclined to think ...
- 05:25 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/616...
- 05:10 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- Kernel log from that period:...
- 02:05 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- Weird:...
- 01:47 PM Bug #51183 (Triaged): qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/de...
- 06:30 PM Bug #51191: Cannot Mount CephFS No Timeout, mount error 5 = Input/output error
- Patrick Donnelly wrote:
> You could be hitting: https://github.com/rook/rook/issues/8085
I appreciate you pointin...
- 05:46 PM Bug #51191: Cannot Mount CephFS No Timeout, mount error 5 = Input/output error
- You could be hitting: https://github.com/rook/rook/issues/8085
- 03:20 PM Bug #51191: Cannot Mount CephFS No Timeout, mount error 5 = Input/output error
- Patrick Donnelly wrote:
> Looks like this is probably a networking issue of some kind. Are you using host or pod net...
- 01:45 PM Bug #51191 (Need More Info): Cannot Mount CephFS No Timeout, mount error 5 = Input/output error
- Looks like this is probably a networking issue of some kind. Are you using host or pod networking in rook? Also, any ...
- 02:52 PM Bug #51182 (Fix Under Review): pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs...
- 02:49 PM Bug #51182 (In Progress): pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
- 01:48 PM Bug #51182 (Triaged): pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
- 01:42 PM Bug #51197 (Triaged): qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Doc...
- 02:47 AM Bug #51197 (Triaged): qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Doc...
- ...
- 08:31 AM Bug #50237: cephfs-journal-tool/cephfs-data-scan: Stuck in infinite loop with "NetHandler create_...
- It seems that the cephfs tools don't support msgr v2;
enabling msgr v1 fixed the issue.
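The workaround above can be sketched as pointing the tools at an explicit msgr v1 monitor address; the IP, file paths, and filesystem name below are placeholders:

```shell
# Minimal config override giving the legacy tools a v1 monitor endpoint
# (10.0.0.1 is a placeholder monitor IP; 6789 is the v1 default port).
cat > /tmp/ceph-v1.conf <<'EOF'
[global]
mon_host = v1:10.0.0.1:6789
EOF

# The cephfs tools accept the usual -c option to select a config file;
# "cephfs" is a placeholder filesystem name.
cephfs-journal-tool -c /tmp/ceph-v1.conf --rank cephfs:0 journal inspect
```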
- 07:27 AM Bug #51204 (Fix Under Review): cephfs-mirror: false warning of "keyring not found" seen in cephfs...
- 07:09 AM Bug #51204 (In Progress): cephfs-mirror: false warning of "keyring not found" seen in cephfs-mirr...
- 07:08 AM Bug #51204 (Resolved): cephfs-mirror: false warning of "keyring not found" seen in cephfs-mirror ...
- ...
- 03:00 AM Backport #51203 (Resolved): pacific: mds: CephFS kclient gets stuck when getattr() on a certain file
- https://github.com/ceph/ceph/pull/42062
- 03:00 AM Backport #51202 (Resolved): octopus: mds: CephFS kclient gets stuck when getattr() on a certain file
- https://github.com/ceph/ceph/pull/45158
- 02:56 AM Backport #51201 (Resolved): octopus: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- https://github.com/ceph/ceph/pull/44800
- 02:56 AM Backport #51200 (Resolved): pacific: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- https://github.com/ceph/ceph/pull/42086
- 02:56 AM Feature #48404 (Resolved): client: add a ceph.caps vxattr
- 02:55 AM Backport #51199 (Resolved): octopus: msg: active_connections regression
- https://github.com/ceph/ceph/pull/43310
- 02:55 AM Backport #51198 (Resolved): pacific: msg: active_connections regression
- https://github.com/ceph/ceph/pull/42936
- 02:55 AM Bug #50840 (Pending Backport): mds: CephFS kclient gets stuck when getattr() on a certain file
- 02:53 AM Bug #50622 (Pending Backport): msg: active_connections regression
- 02:52 AM Bug #48231 (Pending Backport): qa: test_subvolume_clone_in_progress_snapshot_rm is racy
06/13/2021
- 03:20 AM Bug #51191: Cannot Mount CephFS No Timeout, mount error 5 = Input/output error
- An updated attempt to mount using ceph-fuse...
- 02:58 AM Bug #51191: Cannot Mount CephFS No Timeout, mount error 5 = Input/output error
- More details on the issue can be found here: https://github.com/rook/rook/issues/7994
- 02:57 AM Bug #51191 (Need More Info): Cannot Mount CephFS No Timeout, mount error 5 = Input/output error
- On most hosts, mounting the CephFS via the kernel client or ceph-fuse will not succeed. On one host, a Raspberry Pi 4, it di...
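When a kernel mount fails with error 5, the kernel log usually carries the real reason; a hedged troubleshooting sketch, where the monitor address, mount point, and keyring path are placeholders:

```shell
# Try the kernel mount, then inspect dmesg for the underlying error.
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
dmesg | tail -n 20

# Confirm the host can reach the monitors at all.
ceph -s
```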