Activity
From 06/22/2021 to 07/21/2021
07/21/2021
- 09:26 PM Backport #50898: octopus: mds: monclient: wait_auth_rotating timed out after 30
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41449
merged
- 09:22 PM Feature #51716: Add option in `fs new` command to start rank 0 in failed state
- Patrick Donnelly wrote:
> Another thing I thought of after our discussion today, Ramana: I think the --recover flag ...
- 07:39 PM Feature #51716: Add option in `fs new` command to start rank 0 in failed state
- Another thing I thought of after our discussion today, Ramana: I think the --recover flag should do:
- Set rank0 t...
- 08:25 PM Backport #51790 (Rejected): pacific: mgr/nfs: move nfs doc from cephfs to mgr
- 08:24 PM Documentation #51428 (Pending Backport): mgr/nfs: move nfs doc from cephfs to mgr
- 08:19 PM Bug #51789 (New): mgr/nfs: allow deployment of multiple nfs-ganesha daemons on single host
- ...
- 06:55 PM Feature #51416: kclient: add debugging for mds failover events
- I can see where to add such a message, but I'm not that familiar with all of the different MDS states. Which ones, sp...
- 06:37 PM Feature #51787 (Resolved): mgr/nfs: deploy nfs-ganesha daemons on non-default port
- `ceph orch apply nfs`[1] supports deploying nfs-ganesha daemons on a non-default port. Add a port argument to `nfs clust...
- 03:56 AM Bug #51757 (New): crash: /lib/x86_64-linux-gnu/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=005f7c5e895e1fbe65e7b621...
- 03:56 AM Bug #51756 (Need More Info): crash: std::_Rb_tree_insert_and_rebalance(bool, std::_Rb_tree_node_b...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0b18f26403253ce222ac9009...
07/20/2021
- 03:47 PM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- Neha Ojha wrote:
> /a/sage-2021-06-29_21:27:07-rados-wip-sage3-testing-2021-06-28-1912-distro-basic-smithi/6244042
...
07/19/2021
- 10:39 PM Tasks #51341: Steps to recover file system(s) after recovering the Ceph monitor store
- Testing out steps to recover a multiple active MDS file system after recovering the monitor store using OSDs:
- Stop a...
- 03:04 PM Backport #51499: pacific: qa: test_ls_H_prints_human_readable_file_size failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42166
merged
- 05:49 AM Bug #51722 (Fix Under Review): mds: slow performance on parallel rm operations for multiple kclients
- This is from bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1974882.
- 05:46 AM Bug #51722 (Resolved): mds: slow performance on parallel rm operations for multiple kclients
- There is another case that could cause the unlinkat to be delayed for a long time sometimes, such as for the "remova...
07/16/2021
- 10:07 PM Feature #51716 (Resolved): Add option in `fs new` command to start rank 0 in failed state
- Source: https://github.com/ceph/ceph/pull/42295#discussion_r670827459
Currently, to recover a file system after re...
- 05:10 PM Bug #51706 (Duplicate): pacific: qa: osd deep-scrub stat mismatch
- 10:16 AM Bug #51706 (Duplicate): pacific: qa: osd deep-scrub stat mismatch
- Found in [1], which failed due to this error.
[1] http://qa-proxy.ceph.com/teuthology/yuriw-2021-07-13_17:37:5...
- 04:58 PM Bug #51191: Cannot Mount CephFS No Timeout, mount error 5 = Input/output error
- I have upgraded the Ceph cluster to v16.2.5 and upgraded Rook to v1.6.7. The issue still remains....
- 10:46 AM Bug #51707 (Resolved): pybind/mgr/volumes: Cloner threads stuck in loop trying to clone the stale...
- Clones are created on a subvolume. While the clones are not complete, they are
removed with force option resulting i...
- 08:36 AM Cleanup #51385 (Fix Under Review): mgr/volumes/fs/fs_util.py: add extra blank line
- 06:33 AM Bug #51705 (Resolved): qa: tasks.cephfs.fuse_mount:mount command failed
- fuse_mount:mount command failed in:
http://qa-proxy.ceph.com/teuthology/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testi...
- 05:58 AM Bug #51704 (Fix Under Review): pacific: qa: Test failure: test_mount_all_caps_absent (tasks.cephf...
- test_mount_all_caps_absent fails in:
http://qa-proxy.ceph.com/teuthology/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-test...
07/15/2021
- 07:09 PM Feature #51416: kclient: add debugging for mds failover events
- Jeff Layton wrote:
> We already have this dout() message when we get a new map:
>
> [...]
>
> By mds f...
- 12:24 PM Feature #51416: kclient: add debugging for mds failover events
- We already have this dout() message when we get a new map:...
- 07:05 PM Cleanup #51393 (Fix Under Review): mgr/volumes/fs/operations/group.py: add extra blank line
- 04:51 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- I have downgraded the mon.
Yes, after creating and deleting the fs the upgrade ran through and all is fine.
- 01:07 AM Bug #51673 (Fix Under Review): MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- 02:49 PM Backport #51547: pacific: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a cephfs-...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42226
merged
- 12:55 PM Documentation #51683 (Resolved): mgr/nfs: add note about creating exports for nfs using vstart to...
- Add this page to developer guide index
https://github.com/ceph/ceph/blob/master/doc/dev/vstart-ganesha.rst
07/14/2021
- 07:27 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- Daniel Keller wrote:
> it was installed in 2015 with 0.80 Firefly or 0.87 Giant I'm not sure
>
> and then upgrade...
- 05:14 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=67b285ce3000d0cd47449cbc18...
- 04:30 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- it was installed in 2015 with 0.80 Firefly or 0.87 Giant I'm not sure
and then upgraded to 0.94 Hammer > 10 Jewel ...
- 04:08 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- Daniel Keller wrote:
> btw in the cluster no CephFS is used and there are no mds running either
Thanks for the re...
- 02:52 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- btw in the cluster no CephFS is used and there are no mds running either
- 02:28 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- The crash is in FSMap::decode().
- 12:32 PM Bug #51673 (Resolved): MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- I tried to update my Ceph from 15.2.13 to 16.2.4 on my Proxmox 7.0 servers.
After restarting, the first monitor cras...
- 02:23 PM Backport #51500: pacific: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kerne...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42165
merged
- 02:22 PM Backport #51285: pacific: mds: unknown metric type is always -1
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42088
merged
- 02:21 PM Backport #51200: pacific: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42086
merged
- 09:39 AM Bug #51666 (Fix Under Review): cephfs-mirror: removing a mirrored directory path causes other syn...
- 09:31 AM Bug #51666 (Resolved): cephfs-mirror: removing a mirrored directory path causes other sync failur...
07/13/2021
- 02:50 PM Cleanup #51651 (New): mgr/volumes: replace mon_command with check_mon_command
- 05:19 AM Tasks #51341 (In Progress): Steps to recover file system(s) after recovering the Ceph monitor store
- Steps to recover single active MDS file system https://github.com/ceph/ceph/pull/42295
07/12/2021
- 08:08 PM Feature #45746 (Resolved): mgr/nfs: Add interface to update export
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:07 PM Feature #48622 (Resolved): mgr/nfs: Add tests for readonly exports
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Bug #49922 (Resolved): MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Documentation #50008 (Resolved): mgr/nfs: Add troubleshooting section
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Documentation #50161 (Resolved): mgr/nfs: validation error on creating custom export
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:05 PM Bug #50559 (Resolved): session dump includes completed_requests twice, once as an integer and onc...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:05 PM Bug #50807 (Resolved): mds: MDSLog::journaler pointer maybe crash with use-after-free
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:04 PM Bug #51492 (Resolved): pacific: pybind/ceph_volume_client: stat on empty string
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:59 PM Backport #51494 (Resolved): octopus: pacific: pybind/ceph_volume_client: stat on empty string
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42161
m...
- 03:42 PM Backport #51494: octopus: pacific: pybind/ceph_volume_client: stat on empty string
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42161
merged
- 07:59 PM Backport #51336 (Resolved): octopus: mds: avoid journaling overhead for setxattr("ceph.dir.subvol...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41996
m...
- 03:42 PM Backport #51336: octopus: mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for n...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41996
merged
- 07:58 PM Backport #50874 (Resolved): octopus: mds: MDSLog::journaler pointer maybe crash with use-after-free
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41626
m...
- 03:40 PM Backport #50874: octopus: mds: MDSLog::journaler pointer maybe crash with use-after-free
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41626
merged
- 07:58 PM Backport #50635 (Resolved): octopus: session dump includes completed_requests twice, once as an i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41625
m...
- 03:40 PM Backport #50635: octopus: session dump includes completed_requests twice, once as an integer and ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41625
merged
- 07:58 PM Backport #50283 (Resolved): octopus: MDS slow request lookupino #0x100 on rank 1 block forever on...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40782
m...
- 03:36 PM Backport #50283: octopus: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40782
merged
- 07:52 PM Backport #51493: nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42162
m...
- 03:28 PM Backport #50596 (Rejected): octopus: mgr/nfs: Add troubleshooting section
- Let's focus on Pacific.
- 03:28 PM Backport #50354 (Rejected): octopus: mgr/nfs: validation error on creating custom export
- Let's focus on Pacific.
- 03:28 PM Backport #48703 (Rejected): octopus: mgr/nfs: Add tests for readonly exports
- Let's focus on Pacific.
- 03:28 PM Backport #49712 (Rejected): octopus: mgr/nfs: Add interface to update export
- Let's focus on Pacific.
- 01:42 PM Bug #51589 (Triaged): mds: crash when journaling during replay
- 10:31 AM Bug #51630 (Fix Under Review): mgr/snap_schedule: don't throw traceback on non-existent fs
- Instead, return an error message and log the traceback...
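The fix described in this entry, returning an error instead of raising, could look roughly like the following. This is a minimal illustrative sketch only; the function name `resolve_fs`, the `known_fs` mapping, and the `(retcode, stdout, stderr)` tuple shape are hypothetical stand-ins, not the actual snap_schedule code.

```python
import logging
import traceback

log = logging.getLogger(__name__)

def resolve_fs(known_fs, fs_name):
    """Return (retcode, stdout, stderr); never let the exception escape.

    Instead of propagating a traceback to the CLI user, log the traceback
    and hand back an error tuple in the usual mgr-module style.
    """
    try:
        return 0, known_fs[fs_name], ""
    except KeyError:
        # Log the full traceback for debugging, but give the caller a
        # clean error message rather than an exception.
        log.error("non-existent fs '%s':\n%s", fs_name, traceback.format_exc())
        return -2, "", f"fs '{fs_name}' does not exist"
```

A lookup on a missing name then yields `(-2, "", "fs 'x' does not exist")` while the traceback goes only to the log.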
- 07:40 AM Documentation #51428 (In Progress): mgr/nfs: move nfs doc from cephfs to mgr
07/09/2021
- 05:45 PM Bug #51600 (Fix Under Review): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_h...
- 04:43 AM Bug #51600 (Resolved): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate ...
- META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate are not updated.
ceph daemon mds.$(hostnam... - 03:14 PM Feature #51615 (New): mgr/nfs: add interface to update nfs cluster
- 03:08 PM Cleanup #51614 (Resolved): mgr/nfs: remove dashboard test remnant from unit tests
- 03:03 PM Feature #51613 (New): mgr/nfs: add qa tests for rgw
07/08/2021
- 09:47 PM Bug #49536 (Resolved): client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:47 PM Bug #49939 (Resolved): cephfs-mirror: be resilient to recreated snapshot during synchronization
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #50112 (Resolved): MDS stuck at stopping when reducing max_mds
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #50216 (Resolved): qa: "ls: cannot access 'lost+found': No such file or directory"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:45 PM Bug #50530 (Resolved): pacific: client: abort after MDS blocklist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:45 PM Bug #51069 (Resolved): mds: mkdir on ephemerally pinned directory sometimes blocked on journal flush
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51077 (Resolved): MDSMonitor: crash when attempting to mount cephfs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51146 (Resolved): qa: scrub code does not join scrubopts with comma
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51182 (Resolved): pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51184 (Resolved): qa: fs:bugs does not specify distro
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Documentation #51187 (Resolved): doc: pacific updates
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51250 (Resolved): qa: fs:upgrade uses teuthology default distro
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:43 PM Bug #51318 (Resolved): cephfs-mirror: do not terminate on SIGHUP
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:36 PM Backport #51232 (Resolved): pacific: qa: scrub code does not join scrubopts with comma
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42065
m...
- 09:36 PM Backport #51251 (Resolved): pacific: qa: fs:upgrade uses teuthology default distro
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42067
m...
- 09:35 PM Backport #50913 (Resolved): pacific: MDS heartbeat timed out between during executing MDCache::st...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42061
m...
- 09:35 PM Backport #51286 (Resolved): pacific: MDSMonitor: crash when attempting to mount cephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42068
m...
- 09:35 PM Backport #51413 (Resolved): pacific: cephfs-mirror: do not terminate on SIGHUP
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42097
m...
- 09:34 PM Backport #51414 (Resolved): pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42072
m...
- 09:34 PM Backport #51412 (Resolved): pacific: mds: mkdir on ephemerally pinned directory sometimes blocked...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42071
m...
- 09:34 PM Backport #51324 (Resolved): pacific: pacific: client: abort after MDS blocklist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42070
m...
- 09:34 PM Backport #51322 (Resolved): pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42069
m...
- 09:34 PM Backport #51235 (Resolved): pacific: doc: pacific updates
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42066
m...
- 09:34 PM Backport #51231 (Resolved): pacific: pybind/mgr/snap_schedule: Invalid command: Unexpected argume...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42064
m...
- 09:33 PM Backport #51230 (Resolved): pacific: qa: fs:bugs does not specify distro
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42063
m...
- 09:32 PM Backport #51203 (Resolved): pacific: mds: CephFS kclient gets stuck when getattr() on a certain file
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42062
m...
- 09:32 PM Backport #50875 (Resolved): pacific: mds: MDSLog::journaler pointer maybe crash with use-after-free
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42060
m...
- 09:32 PM Backport #50848 (Resolved): pacific: mds: "cluster [ERR] Error recovering journal 0x203: (2) No...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42059
m...
- 09:31 PM Backport #50846 (Resolved): pacific: mds: journal recovery thread is possibly asserting with mds_...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42058
m...
- 09:31 PM Backport #50636 (Resolved): pacific: session dump includes completed_requests twice, once as an i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42057
m...
- 09:31 PM Backport #50630 (Resolved): pacific: mds: Error ENOSYS: mds.a started profiler
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42056
m...
- 09:30 PM Backport #50445 (Resolved): pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41052
m...
- 09:30 PM Backport #50624 (Resolved): pacific: qa: "ls: cannot access 'lost+found': No such file or directory"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40856
m...
- 09:30 PM Backport #50289 (Resolved): pacific: MDS stuck at stopping when reducing max_mds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40856
m...
- 09:30 PM Backport #50282 (Resolved): pacific: MDS slow request lookupino #0x100 on rank 1 block forever on...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40856
m...
- 09:21 PM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- /a/sage-2021-06-29_21:27:07-rados-wip-sage3-testing-2021-06-28-1912-distro-basic-smithi/6244042
/a/rfriedma-2021-07-...
- 07:44 AM Bug #51589 (Resolved): mds: crash when journaling during replay
- MDS version: ceph version 14.2.20 (36274af6eb7f2a5055f2d53ad448f2694e9046a0) nautilus (stable)
Using 200 clients, ...
07/07/2021
- 09:12 PM Bug #44257: vstart.sh: failed by waiting for mgr dashboard module to start
- I just ran into this same issue, where "waiting for mgr dashboard module to start" runs in a loop. Checking mgr.log.x...
- 05:40 PM Backport #51481 (Resolved): pacific: osd: sent kickoff request to MDS and then stuck for 15 minut...
- 05:30 PM Backport #51547 (In Progress): pacific: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not as...
07/06/2021
- 08:10 PM Backport #51547 (Resolved): pacific: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assum...
- https://github.com/ceph/ceph/pull/42226
- 08:08 PM Bug #51476 (Pending Backport): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a ce...
- 06:55 PM Backport #51545 (Rejected): octopus: mgr/volumes: use a dedicated libcephfs handle for subvolume ...
- 06:55 PM Backport #51544 (Resolved): pacific: mgr/volumes: use a dedicated libcephfs handle for subvolume ...
- https://github.com/ceph/ceph/pull/42914
- 06:54 PM Bug #51271 (Pending Backport): mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
- 06:52 PM Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~...
- /ceph/teuthology-archive/pdonnell-2021-07-04_02:32:34-fs-wip-pdonnell-testing-20210703.052904-distro-basic-smithi/625...
- 06:05 PM Cleanup #51543 (Fix Under Review): mds: improve debugging for mksnap denial
- 06:04 PM Cleanup #51543 (Resolved): mds: improve debugging for mksnap denial
- 03:17 PM Feature #51340 (Fix Under Review): mon/MDSMonitor: allow creating a file system with a specific f...
- 10:50 AM Feature #50150 (Fix Under Review): qa: begin grepping kernel logs for kclient warnings/failures t...
07/05/2021
- 02:23 AM Feature #51518 (Fix Under Review): client: flush the mdlog in unsafe requests' relevant and auth ...
- 01:35 AM Feature #51518 (Resolved): client: flush the mdlog in unsafe requests' relevant and auth MDSes only
- Do not flush the mdlog in all the MDSes, which may make no sense for a specific inode.
07/02/2021
- 11:19 PM Backport #51499 (In Progress): pacific: qa: test_ls_H_prints_human_readable_file_size failure
- 08:20 PM Backport #51499 (Resolved): pacific: qa: test_ls_H_prints_human_readable_file_size failure
- https://github.com/ceph/ceph/pull/42166
- 11:16 PM Backport #51500 (In Progress): pacific: qa: FileNotFoundError: [Errno 2] No such file or director...
- 08:30 PM Backport #51500 (Resolved): pacific: qa: FileNotFoundError: [Errno 2] No such file or directory: ...
- https://github.com/ceph/ceph/pull/42165
- 08:29 PM Bug #51183 (Pending Backport): qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/...
- 08:16 PM Bug #51417 (Pending Backport): qa: test_ls_H_prints_human_readable_file_size failure
- 08:08 PM Backport #51493 (Resolved): nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- 07:46 PM Backport #51493: nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42162
merged
- 04:43 PM Backport #51493 (In Progress): nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- 04:20 PM Backport #51493 (Resolved): nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- https://github.com/ceph/ceph/pull/42162
- 04:48 PM Bug #51495 (In Progress): client: handle empty path strings
- The standard indicates we should return ENOENT.
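The expected POSIX behavior is easy to confirm from Python; a minimal sketch of the semantics the client should mirror, not the client code itself:

```python
import errno
import os

# POSIX path resolution treats an empty pathname as an error: stat("")
# must fail with ENOENT, which is what the client should return too.
try:
    os.stat("")
except OSError as e:
    # Surfaces as FileNotFoundError on CPython/Linux
    assert e.errno == errno.ENOENT
```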
- 04:42 PM Backport #51494 (In Progress): octopus: pacific: pybind/ceph_volume_client: stat on empty string
- 04:20 PM Backport #51494 (Resolved): octopus: pacific: pybind/ceph_volume_client: stat on empty string
- https://github.com/ceph/ceph/pull/42161
- 04:18 PM Bug #51492 (Pending Backport): pacific: pybind/ceph_volume_client: stat on empty string
- 04:16 PM Bug #51492 (Resolved): pacific: pybind/ceph_volume_client: stat on empty string
- When volume_prefix begins with "/", the library will try to stat the empty string resulting in a log like:...
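One plausible way such a bug arises (an illustrative sketch only, not the actual ceph_volume_client code): splitting a prefix that begins with "/" yields an empty leading component, so code that stats each path component in turn ends up calling stat on the empty string.

```python
# Hypothetical illustration: a volume_prefix with a leading "/" produces
# an empty first component when split on "/".
volume_prefix = "/volumes/group"
components = volume_prefix.split("/")
print(components)     # ['', 'volumes', 'group']
print(repr(components[0]))  # '' -- stat() on this empty component is the bug
```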
07/01/2021
- 11:28 PM Bug #51476 (Fix Under Review): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a ce...
- 03:28 PM Bug #51476 (Resolved): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a cephfs-mir...
- When the replication is unidirectional we cannot assume a daemon is running and the "fs snapshot mirror daemon status...
- 09:15 PM Backport #51482 (Rejected): octopus: osd: sent kickoff request to MDS and then stuck for 15 minut...
- 09:15 PM Backport #51481 (Resolved): pacific: osd: sent kickoff request to MDS and then stuck for 15 minut...
- https://github.com/ceph/ceph/pull/42072
- 09:12 PM Bug #51357 (Pending Backport): osd: sent kickoff request to MDS and then stuck for 15 minutes unt...
- The code change is in cephfs.
- 04:02 PM Backport #51232: pacific: qa: scrub code does not join scrubopts with comma
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42065
merged
- 04:01 PM Backport #51251: pacific: qa: fs:upgrade uses teuthology default distro
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42067
merged
- 12:13 PM Bug #50954: mgr/pybind/snap_schedule: commands only support positional arguments?
- Sebastian Wagner wrote:
> Can you use proper positional arguments here?
>
> [...]
>
> I for one don't think h...
06/30/2021
- 11:58 PM Backport #50913: pacific: MDS heartbeat timed out between during executing MDCache::start_files_t...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42061
merged
- 11:57 PM Backport #51286: pacific: MDSMonitor: crash when attempting to mount cephfs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42068
merged
- 11:48 PM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- ...
- 10:46 PM Documentation #51459 (Resolved): doc: document what kinds of damage forward scrub can repair
- 07:38 PM Backport #51413: pacific: cephfs-mirror: do not terminate on SIGHUP
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42097
merged
- 07:36 PM Backport #51414: pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42072
merged
- 07:35 PM Backport #51412: pacific: mds: mkdir on ephemerally pinned directory sometimes blocked on journal...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42071
merged
- 07:34 PM Backport #51324: pacific: pacific: client: abort after MDS blocklist
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42070
merged
- 07:34 PM Backport #51322: pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42069
merged
- 07:32 PM Backport #51235: pacific: doc: pacific updates
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42066
merged
- 07:30 PM Backport #51231: pacific: pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42064
merged
- 07:29 PM Backport #51230: pacific: qa: fs:bugs does not specify distro
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42063
merged
- 06:44 PM Backport #51203: pacific: mds: CephFS kclient gets stuck when getattr() on a certain file
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42062
merged
- 06:43 PM Backport #50875: pacific: mds: MDSLog::journaler pointer maybe crash with use-after-free
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42060
merged
- 06:42 PM Backport #50848: pacific: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42059
merged
- 06:42 PM Backport #50846: pacific: mds: journal recovery thread is possibly asserting with mds_lock not lo...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42058
merged
- 06:41 PM Backport #50636: pacific: session dump includes completed_requests twice, once as an integer and ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42057
merged
- 06:41 PM Backport #50630: pacific: mds: Error ENOSYS: mds.a started profiler
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42056
merged
- 06:31 PM Backport #50445: pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41052
merged
- 06:30 PM Backport #50624: pacific: qa: "ls: cannot access 'lost+found': No such file or directory"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40856
merged
- 06:30 PM Backport #50289: pacific: MDS stuck at stopping when reducing max_mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40856
merged
- 06:30 PM Backport #50282: pacific: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40856
merged
- 06:16 PM Bug #51417: qa: test_ls_H_prints_human_readable_file_size failure
- Oh, this may be the protected_regular enforcement that's in more recent kernels. See:
https://www.kernel.org/d...
- 06:10 PM Bug #51440 (Duplicate): fallocate fails with EACCES
- 06:07 PM Bug #51440: fallocate fails with EACCES
- I'm not even sure that this is ceph related. The command was trying to open a file in /tmp to do an fallocate (which ...
- 10:42 AM Bug #51440 (Duplicate): fallocate fails with EACCES
- https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/62...
- 11:46 AM Bug #51062 (Fix Under Review): mds,client: suppport getvxattr RPC
- 10:57 AM Bug #51266: test cleanup failure
- Another occurrence:
https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific...
- 07:24 AM Feature #40986: cephfs qos: implement cephfs qos base on tokenbucket algorighm
- I'm also interested in the status of QoS for CephFS. Is there any available and mature CephFS QoS mechanism?
06/29/2021
- 10:42 PM Feature #51340 (In Progress): mon/MDSMonitor: allow creating a file system with a specific fscid
- 09:40 PM Feature #51434 (Resolved): pybind/mgr/volumes: add basic introspection
- Something like:...
- 02:58 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- This is clearly a race of some sort. Either it is finding the directory and the mds_sessions file (and possibly the d...
- 02:25 AM Bug #51183 (Fix Under Review): qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/...
- 01:44 PM Bug #50033 (Fix Under Review): mgr/stats: be resilient to offline MDS rank-0
- 01:00 PM Documentation #51428 (Resolved): mgr/nfs: move nfs doc from cephfs to mgr
- 12:59 PM Backport #51413 (In Progress): pacific: cephfs-mirror: do not terminate on SIGHUP
- 12:54 PM Backport #50994 (Resolved): pacific: cephfs-mirror: be resilient to recreated snapshot during syn...
- So, this is already in pacific -- I missed updating the tracker.
- 12:52 PM Backport #50991 (In Progress): pacific: mgr/nfs: skipping conf file or passing empty file throws ...
- https://github.com/ceph/ceph/pull/42096
- 12:52 PM Backport #51174 (In Progress): pacific: mgr/nfs: add nfs-ganesha config hierarchy
- https://github.com/ceph/ceph/pull/42096
- 08:12 AM Fix #49341 (Resolved): qa: add async dirops testing
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:10 AM Bug #50447 (Resolved): cephfs-mirror: disallow adding an active peered file system back to its source
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:10 AM Bug #50532 (Resolved): mgr/volumes: hang when removing subvolume when pools are full
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 AM Bug #50867 (Resolved): qa: fs:mirror: reduced data availability
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 AM Bug #51204 (Resolved): cephfs-mirror: false warning of "keyring not found" seen in cephfs-mirror ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:41 AM Backport #51421 (Rejected): pacific: mgr/nfs: Add support for RGW export
- 07:37 AM Bug #47172 (Pending Backport): mgr/nfs: Add support for RGW export
- 07:21 AM Bug #51170 (Resolved): pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
- 07:19 AM Backport #50627: pacific: client: access(path, X_OK) on non-executable file as root always succeeds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41294
m... - 06:13 AM Backport #51285 (In Progress): pacific: mds: unknown metric type is always -1
- 05:25 AM Backport #51285: pacific: mds: unknown metric type is always -1
- Patrick Donnelly wrote:
> Xiubo, this has non-trivial conflicts. Can you take this one please?
Sure, will finish ... - 05:26 AM Backport #51200 (In Progress): pacific: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- 05:20 AM Backport #51411 (In Progress): pacific: pybind/mgr/volumes: purge queue seems to block operating ...
- 04:58 AM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- ...
06/28/2021
- 11:23 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- No lazy umount involved: /ceph/teuthology-archive/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.22542...
- 10:25 PM Bug #51417 (Fix Under Review): qa: test_ls_H_prints_human_readable_file_size failure
- 10:01 PM Bug #51417 (Resolved): qa: test_ls_H_prints_human_readable_file_size failure
- Related to #51169.
See https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.2254... - 08:08 PM Feature #51416: kclient: add debugging for mds failover events
- On IRC Patrick said:...
- 08:02 PM Feature #51416 (Fix Under Review): kclient: add debugging for mds failover events
- 07:54 PM Backport #51414 (In Progress): pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- 07:00 PM Backport #51414 (Resolved): pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- https://github.com/ceph/ceph/pull/42072
- 07:53 PM Backport #51412 (In Progress): pacific: mds: mkdir on ephemerally pinned directory sometimes bloc...
- 06:55 PM Backport #51412 (Resolved): pacific: mds: mkdir on ephemerally pinned directory sometimes blocked...
- https://github.com/ceph/ceph/pull/42071
- 07:51 PM Backport #51324 (In Progress): pacific: pacific: client: abort after MDS blocklist
- 07:49 PM Backport #51322 (In Progress): pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionEr...
- 07:48 PM Backport #51286 (In Progress): pacific: MDSMonitor: crash when attempting to mount cephfs
- 07:47 PM Backport #51285 (Need More Info): pacific: mds: unknown metric type is always -1
- Xiubo, this has non-trivial conflicts. Can you take this one please?
- 07:46 PM Backport #51251 (In Progress): pacific: qa: fs:upgrade uses teuthology default distro
- 07:45 PM Backport #51235 (In Progress): pacific: doc: pacific updates
- 07:43 PM Backport #51232 (In Progress): pacific: qa: scrub code does not join scrubopts with comma
- 07:42 PM Backport #51231 (In Progress): pacific: pybind/mgr/snap_schedule: Invalid command: Unexpected arg...
- 07:41 PM Backport #51230 (In Progress): pacific: qa: fs:bugs does not specify distro
- 07:39 PM Backport #51203 (In Progress): pacific: mds: CephFS kclient gets stuck when getattr() on a certai...
- 07:38 PM Backport #51411 (Need More Info): pacific: pybind/mgr/volumes: purge queue seems to block operati...
- Kotresh, please take this one.
- 06:55 PM Backport #51411 (Resolved): pacific: pybind/mgr/volumes: purge queue seems to block operating on ...
- https://github.com/ceph/ceph/pull/42083
- 07:37 PM Backport #51200 (Need More Info): pacific: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- Kotresh, please take this one.
- 07:37 PM Backport #51198 (Need More Info): pacific: msg: active_connections regression
- Not sure this backport is necessary.
- 07:35 PM Backport #51413 (Need More Info): pacific: cephfs-mirror: do not terminate on SIGHUP
- Venky, please do this one.
- 07:00 PM Backport #51413 (Resolved): pacific: cephfs-mirror: do not terminate on SIGHUP
- https://github.com/ceph/ceph/pull/42097
- 07:35 PM Backport #50994 (Need More Info): pacific: cephfs-mirror: be resilient to recreated snapshot duri...
- Venky, please do this one.
- 07:35 PM Backport #51174 (Need More Info): pacific: mgr/nfs: add nfs-ganesha config hierarchy
- Varsha, please do this one.
- 07:34 PM Backport #50991 (Need More Info): pacific: mgr/nfs: skipping conf file or passing empty file thro...
- Varsha, please do this one.
- 07:34 PM Backport #50913 (In Progress): pacific: MDS heartbeat timed out between during executing MDCache:...
- 07:32 PM Backport #50875 (In Progress): pacific: mds: MDSLog::journaler pointer maybe crash with use-after...
- 07:30 PM Backport #50848 (In Progress): pacific: mds: "cluster [ERR] Error recovering journal 0x203: (2)...
- 07:28 PM Backport #50846 (In Progress): pacific: mds: journal recovery thread is possibly asserting with m...
- 07:27 PM Backport #50636 (In Progress): pacific: session dump includes completed_requests twice, once as a...
- 07:26 PM Backport #50630 (In Progress): pacific: mds: Error ENOSYS: mds.a started profiler
- 07:24 PM Backport #51284 (Resolved): pacific: cephfs-mirror: false warning of "keyring not found" seen in ...
- 04:27 PM Backport #51284: pacific: cephfs-mirror: false warning of "keyring not found" seen in cephfs-mirr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41947
merged - 07:23 PM Backport #51283 (Resolved): pacific: cephfs-mirror: disallow adding a active peered file system b...
- 04:27 PM Backport #51283: pacific: cephfs-mirror: disallow adding a active peered file system back to its ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41947
merged - 07:23 PM Backport #51186 (Resolved): pacific: qa: add async dirops testing
- 07:23 PM Backport #51086 (Resolved): pacific: qa: fs:mirror: reduced data availability
- 04:27 PM Backport #51086: pacific: qa: fs:mirror: reduced data availability
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41947
merged - 07:23 PM Backport #51084 (Resolved): pacific: mgr/volumes: hang when removing subvolume when pools are full
- 07:22 PM Backport #50899 (Resolved): pacific: mds: monclient: wait_auth_rotating timed out after 30
- 04:32 PM Backport #50899: pacific: mds: monclient: wait_auth_rotating timed out after 30
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41450
merged - 07:22 PM Backport #50627 (Resolved): pacific: client: access(path, X_OK) on non-executable file as root al...
- 07:21 PM Backport #50624 (In Progress): pacific: qa: "ls: cannot access 'lost+found': No such file or dire...
- 07:14 PM Bug #51410: kclient: fails to finish reconnect during MDS thrashing (testing branch)
- It comes from this swath of code, which gets called when the caps have been renewed:...
- 06:32 PM Bug #51410: kclient: fails to finish reconnect during MDS thrashing (testing branch)
- ...
- 06:29 PM Bug #51410: kclient: fails to finish reconnect during MDS thrashing (testing branch)
- Some interesting messages in the kernel log: /ceph/teuthology-archive/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-te...
- 06:28 PM Bug #51410 (New): kclient: fails to finish reconnect during MDS thrashing (testing branch)
- ...
- 07:05 PM Backport #51335 (Resolved): pacific: mds: avoid journaling overhead for setxattr("ceph.dir.subvol...
- 04:31 PM Backport #51335: pacific: mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for n...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41995
merged - 07:00 PM Backport #51415 (Resolved): octopus: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- https://github.com/ceph/ceph/pull/43785
- 06:57 PM Bug #51280 (Pending Backport): mds: "FAILED ceph_assert(r == 0 || r == -2)"
- 06:55 PM Bug #51318 (Pending Backport): cephfs-mirror: do not terminate on SIGHUP
- 06:53 PM Bug #51256 (Pending Backport): pybind/mgr/volumes: purge queue seems to block operating on cephfs...
- This is no longer urgent so I'm changing this to just Pacific to reduce the risk associated with this change.
- 06:52 PM Bug #51069 (Pending Backport): mds: mkdir on ephemerally pinned directory sometimes blocked on jo...
- 01:43 PM Bug #51278 (Triaged): mds: "FAILED ceph_assert(!segments.empty())"
- 01:42 PM Bug #51281 (Triaged): qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad...
- 09:08 AM Cleanup #51407 (Resolved): mgr/volumes/fs/operations/versions/subvolume_attrs.py: fix various fla...
- ...
- 09:07 AM Cleanup #51406 (Fix Under Review): mgr/volumes/fs/operations/versions/op_sm.py: fix various flake...
- ...
- 09:05 AM Cleanup #51405 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_v2.py: fix variou...
- ...
- 09:05 AM Cleanup #51404 (Fix Under Review): mgr/volumes/fs/operations/versions/subvolume_v1.py: fix variou...
- ...
- 09:03 AM Cleanup #51403 (Resolved): mgr/volumes/fs/operations/versions/auth_metadata.py: fix various flake...
- ...
- 09:02 AM Cleanup #51402 (Resolved): mgr/volumes/fs/operations/versions/subvolume_base.py: fix various flak...
- ...
- 09:00 AM Cleanup #51401 (Fix Under Review): mgr/volumes/fs/operations/versions/metadata_manager.py: fix va...
- ...
- 08:59 AM Cleanup #51400 (Fix Under Review): mgr/volumes/fs/operations/trash.py: fix various flake8 issues
- ...
- 08:58 AM Cleanup #51399 (Fix Under Review): mgr/volumes/fs/operations/template.py: fix various flake8 issues
- ...
- 08:57 AM Cleanup #51398 (Resolved): mgr/volumes/fs/operations/subvolume.py: fix various flake8 issues
- ...
- 08:56 AM Cleanup #51397 (Fix Under Review): mgr/volumes/fs/operations/volume.py: fix various flake8 issues
- ...
- 08:55 AM Cleanup #51396 (Resolved): mgr/volumes/fs/operations/clone_index.py: fix various flake8 issues
- ...
- 08:53 AM Cleanup #51395 (Fix Under Review): mgr/volumes/fs/operations/lock.py: fix various flake8 issues
- ...
- 08:52 AM Cleanup #51394 (Fix Under Review): mgr/volumes/fs/operations/pin_util.py: fix various flake8 issues
- ...
- 08:50 AM Cleanup #51393 (Resolved): mgr/volumes/fs/operations/group.py: add extra blank line
- ...
- 08:49 AM Cleanup #51392 (Resolved): mgr/volumes/fs/operations/snapshot_util.py: add extra blank line
- ...
- 08:48 AM Cleanup #51391 (Resolved): mgr/volumes/fs/operations/resolver.py: add extra blank line
- ...
- 08:47 AM Cleanup #51390 (Resolved): mgr/volumes/fs/operations/access.py: fix various flake8 issues
- ...
- 08:46 AM Cleanup #51389 (Fix Under Review): mgr/volumes/fs/operations/rankevicter.py: fix various flake8 i...
- ...
- 08:44 AM Cleanup #51388 (Fix Under Review): mgr/volumes/fs/operations/index.py: add extra blank line
- ...
- 08:43 AM Cleanup #51387 (Resolved): mgr/volumes/fs/purge_queue.py: add extra blank line
- ...
- 08:40 AM Cleanup #51386 (Fix Under Review): mgr/volumes/fs/volume.py: fix various flake8 issues
- ...
- 08:38 AM Cleanup #51385 (Fix Under Review): mgr/volumes/fs/fs_util.py: add extra blank line
- ...
- 08:36 AM Cleanup #51384 (Resolved): mgr/volumes/fs/vol_spec.py: fix various flake8 issues
- ...
- 08:34 AM Cleanup #51383 (Fix Under Review): mgr/volumes/fs/exception.py: fix various flake8 issues
- ...
- 08:33 AM Cleanup #51382 (Fix Under Review): mgr/volumes/fs/async_cloner.py: fix various flake8 issues
- ...
- 08:30 AM Cleanup #51381 (Resolved): mgr/volumes/fs/async_job.py: fix various flake8 issues
- ...
- 08:29 AM Cleanup #51380 (Resolved): mgr/volumes/module.py: fix various flake8 issues
- ...
- 08:27 AM Cleanup #51379: mgr/volumes: add flake8 test
- Before fixing these issues, make sure you have flake8 installed and have read the PEP 8 style guide[1] and the flake8 guide[...
- 08:25 AM Cleanup #51379 (New): mgr/volumes: add flake8 test
06/25/2021
- 10:27 AM Bug #51365 (In Progress): mgr/nfs: show both ipv4 and ipv6 address in cluster info command
- https://github.com/ceph/ceph/blob/74df5af8e2d36c6143f214cab0fca1693d39f86e/src/pybind/mgr/nfs/cluster.py#L18-L28
Wit... - 01:15 AM Bug #51357 (Resolved): osd: sent kickoff request to MDS and then stuck for 15 minutes until MDS c...
- ...
- 01:09 AM Bug #51280 (Fix Under Review): mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Currently in the MDS I have just added one improvement, which will respawn the MDS daemon instead of crashing it. But for th...
06/24/2021
- 01:59 PM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Xiubo Li wrote:
> Patrick Donnelly wrote:
> > Xiubo Li wrote:
> > > From the mds.e.log we can see that the "100000... - 05:59 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > From the mds.e.log we can see that the "10000003280.00000000:head" re... - 04:35 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Xiubo Li wrote:
> From the mds.e.log we can see that the "10000003280.00000000:head" request was stuck and timed out ... - 03:21 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- From the mds.e.log we can see that the "10000003280.00000000:head" request was stuck and timed out just after 15m, whi...
- 04:50 AM Tasks #51341 (In Progress): Steps to recover file system(s) after recovering the Ceph monitor store
- In certain rare cases, all the Ceph Monitors might end up with corrupted Monitor stores. The Monitor stores can be re...
- 03:52 AM Feature #51340 (Resolved): mon/MDSMonitor: allow creating a file system with a specific fscid
- In the scenario where the monitor databases are lost and must be rebuilt, the file system will need to be recreated. (Assum...
06/23/2021
- 08:59 PM Backport #51337 (In Progress): nautilus: mds: avoid journaling overhead for setxattr("ceph.dir.su...
- 08:30 PM Backport #51337 (Rejected): nautilus: mds: avoid journaling overhead for setxattr("ceph.dir.subvo...
- https://github.com/ceph/ceph/pull/41997
- 08:57 PM Backport #51336 (In Progress): octopus: mds: avoid journaling overhead for setxattr("ceph.dir.sub...
- 08:30 PM Backport #51336 (Resolved): octopus: mds: avoid journaling overhead for setxattr("ceph.dir.subvol...
- https://github.com/ceph/ceph/pull/41996
- 08:55 PM Backport #51335 (In Progress): pacific: mds: avoid journaling overhead for setxattr("ceph.dir.sub...
- 08:30 PM Backport #51335 (Resolved): pacific: mds: avoid journaling overhead for setxattr("ceph.dir.subvol...
- https://github.com/ceph/ceph/pull/41995
- 08:25 PM Bug #51276 (Pending Backport): mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") ...
- 07:40 PM Feature #51333 (In Progress): qa: use cephadm to provision cephfs for fs:workloads
- 05:18 PM Feature #51333 (Resolved): qa: use cephadm to provision cephfs for fs:workloads
- To increase our test coverage!
- 04:32 PM Feature #51332 (Fix Under Review): qa: increase metadata replication to exercise lock/witness cod...
- 04:25 PM Feature #51332 (Fix Under Review): qa: increase metadata replication to exercise lock/witness cod...
- 12:20 PM Bug #51318 (Fix Under Review): cephfs-mirror: do not terminate on SIGHUP
- 09:14 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- In the osd.7 side, the request was blocked and the osd was added to backoff:...
- 04:43 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Xiubo Li wrote:
> It seems the same "23C_IO_MDC_TruncateFinish" was called twice:
>
> [...]
Sorry, it is not.
C... - 02:28 AM Bug #51280: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- It seems the same "23C_IO_MDC_TruncateFinish" was called twice:...
- 02:40 AM Backport #51324 (Resolved): pacific: pacific: client: abort after MDS blocklist
- https://github.com/ceph/ceph/pull/42070
- 02:40 AM Backport #51323 (Resolved): octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- https://github.com/ceph/ceph/pull/45159
- 02:40 AM Backport #51322 (Resolved): pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- https://github.com/ceph/ceph/pull/42069
- 02:38 AM Bug #50808 (Pending Backport): qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Item...
- 02:37 AM Bug #50530 (Pending Backport): pacific: client: abort after MDS blocklist
- 02:34 AM Bug #47276 (Resolved): MDSMonitor: add command to rename file systems
- 02:34 AM Bug #50852 (Resolved): mds: remove fs_name stored in MDSRank
- 02:31 AM Bug #50495: libcephfs: shutdown race fails with status 141
- /ceph/teuthology-archive/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/618...
06/22/2021
- 12:17 PM Bug #51318 (Resolved): cephfs-mirror: do not terminate on SIGHUP
- So, utilities such as logrotate would send SIGHUP to the daemon, which would terminate it. This is being seen in some ...
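The failure mode described above follows from SIGHUP's default disposition, which is to terminate the process; a daemon survives logrotate's HUP by installing its own handler. A minimal Python sketch of that concept (cephfs-mirror itself is C++; `on_sighup` and `reload_requested` are illustrative names, not anything from the cephfs-mirror source):

```python
import os
import signal
import time

# With no handler installed, SIGHUP terminates the process -- which is why a
# logrotate postrotate hook can kill a daemon. Installing a handler turns the
# signal into a benign "reload" notification instead.
reload_requested = False

def on_sighup(signum, frame):
    global reload_requested
    reload_requested = True   # a real daemon would reopen its log files here

signal.signal(signal.SIGHUP, on_sighup)

os.kill(os.getpid(), signal.SIGHUP)   # simulate logrotate sending HUP to us
time.sleep(0.05)                      # let the interpreter run the Python-level handler
```

After this runs the process is still alive and `reload_requested` is set, whereas without the `signal.signal()` call the self-sent HUP would have terminated it.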
- 06:17 AM Bug #51271 (Fix Under Review): mgr/volumes: use a dedicated libcephfs handle for subvolume API calls