Activity
From 06/29/2021 to 07/28/2021
07/28/2021
- 07:54 PM Bug #51923 (Triaged): crash: Client::resolve_mds(std::__cxx11::basic_string<char, std::char_trait...
- 05:00 PM Bug #51923 (Duplicate): crash: Client::resolve_mds(std::__cxx11::basic_string<char, std::char_tra...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d870d8b3d46e44c2bd507fd8...
- 06:50 PM Backport #51939 (In Progress): octopus: MDSMonitor: monitor crash after upgrade from ceph 15.2.13...
- 05:50 PM Backport #51939 (Resolved): octopus: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to...
- https://github.com/ceph/ceph/pull/42537
- 06:30 PM Backport #51940 (In Progress): pacific: MDSMonitor: monitor crash after upgrade from ceph 15.2.13...
- 05:50 PM Backport #51940 (Resolved): pacific: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to...
- https://github.com/ceph/ceph/pull/42536
- 05:47 PM Bug #51673 (Pending Backport): MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- 05:41 PM Backport #51938 (Rejected): octopus: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)...
- 05:41 PM Backport #51937 (Resolved): pacific: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)...
- https://github.com/ceph/ceph/pull/42923
- 05:40 PM Backport #51936 (Rejected): octopus: mds: improve debugging for mksnap denial
- 05:40 PM Backport #51935 (Resolved): pacific: mds: improve debugging for mksnap denial
- https://github.com/ceph/ceph/pull/42935
- 05:36 PM Cleanup #51543 (Pending Backport): mds: improve debugging for mksnap denial
- 05:36 PM Backport #51933 (Resolved): octopus: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.co...
- https://github.com/ceph/ceph/pull/45161
- 05:36 PM Bug #50984 (Resolved): qa: test_full multiple the mon_osd_full_ratio twice
- Backport tracked by #45434
- 05:36 PM Backport #51932 (Resolved): pacific: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.co...
- https://github.com/ceph/ceph/pull/42938
- 05:35 PM Bug #45434 (Pending Backport): qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- 05:33 PM Bug #48422 (Pending Backport): mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(md...
- 05:25 PM Bug #51914 (Rejected): crash: int Client::_do_remount(bool): abort
- 05:00 PM Bug #51914 (Rejected): crash: int Client::_do_remount(bool): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6bdaa8326b0988c129febd5f...
- 04:27 PM Bug #51267: CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps...
- /ceph/teuthology-archive/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/629...
- 04:18 PM Bug #51905 (Fix Under Review): qa: "error reading sessionmap 'mds1_sessionmap'"
- 04:16 PM Bug #51905 (Resolved): qa: "error reading sessionmap 'mds1_sessionmap'"
- ...
- 03:24 PM Backport #51819: pacific: cephfs-mirror: removing a mirrored directory path causes other sync fai...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42458
merged
- 11:34 AM Fix #51873 (Fix Under Review): mds: update hit_dir for dir distinguishes META_POP_IRD and METE_PO...
- mds: update hit_dir so that it distinguishes META_POP_IRD and META_POP_READDIR in the pop
- 09:22 AM Fix #51857 (Fix Under Review): client: make sure only to update dir dist from auth mds
- 07:04 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Milind Changire wrote:
> Dan,
> 1. how many active mds were there in the cluster ?
One
> 2. was there any dir...
- 06:40 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Dan,
1. how many active mds were there in the cluster ?
2. was there any dir pinning active ?
3. could you list an...
- 04:40 AM Bug #51589 (In Progress): mds: crash when journaling during replay
07/27/2021
- 11:54 PM Bug #51870: pybind/mgr/volumes: first subvolume permissions set perms on /volumes and /volumes/group
- Ramana Raja wrote:
> Nice catch! The current test, test_subvolume_create_with_desired_mode_in_group doesn't check wh...
- 10:19 PM Bug #51870: pybind/mgr/volumes: first subvolume permissions set perms on /volumes and /volumes/group
- Nice catch! The current test, test_subvolume_create_with_desired_mode_in_group doesn't check whether the mode of subv...
- 07:22 PM Bug #51870 (Pending Backport): pybind/mgr/volumes: first subvolume permissions set perms on /volu...
- Because we use the "mkdirs" method for making parents to a subvolume, the perms for the subvolume specified by --mode...
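The mode leak described in this entry can be modeled with a short, self-contained Python sketch. This is not the actual mgr/volumes code; `naive_mkdirs` is an invented stand-in for a mkdirs helper that reuses the subvolume's --mode for every missing ancestor, which is the behavior the report describes:

```python
import os
import stat
import tempfile

def naive_mkdirs(path, mode):
    """Create `path` and any missing ancestors, applying `mode` to every
    directory created -- the behavior the bug report describes."""
    missing = []
    while not os.path.isdir(path):
        missing.append(path)
        path = os.path.dirname(path)
    for p in reversed(missing):
        os.mkdir(p)
        os.chmod(p, mode)  # chmod, so the result is not masked by the umask

root = tempfile.mkdtemp()
naive_mkdirs(os.path.join(root, "volumes", "group", "subvol"), 0o777)

# The subvolume's --mode has leaked onto both ancestors:
for d in ("volumes", os.path.join("volumes", "group")):
    mode = stat.S_IMODE(os.stat(os.path.join(root, d)).st_mode)
    print(d, oct(mode))  # both print 0o777
```

The obvious fix direction, presumably, is to create the intermediate directories with a fixed default mode and apply --mode only to the subvolume directory itself.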
- 07:15 PM Feature #51416 (Fix Under Review): kclient: add debugging for mds failover events
- 06:13 PM Feature #51416: kclient: add debugging for mds failover events
- No, we don't enable douts anywhere in the krbd suite.
- 05:37 PM Feature #51416: kclient: add debugging for mds failover events
- Oh, actually, we can use the format directive, so something like this would probably also work:...
- 05:24 PM Feature #51416: kclient: add debugging for mds failover events
- Basically, we'd just need to do something like this in a script before the test runs:...
- 04:17 PM Feature #51416: kclient: add debugging for mds failover events
- Jeff Layton wrote:
> I'm not convinced it's beneficial to spam the kernel's ring buffer with these messages. Ceph is...
- 12:47 PM Feature #1276 (In Progress): client: expose mds partition via virtual xattrs
- 11:36 AM Feature #1276: client: expose mds partition via virtual xattrs
- Patch posted:
https://lore.kernel.org/ceph-devel/20210727113509.7714-1-jlayton@kernel.org/T/#u
- 12:27 PM Bug #51866: mds daemon damaged after outage
- The PGs are not active until 09:30:14, so that's evidence for my theory.
This looks like a bootstrapping issue arisi...
- 11:24 AM Bug #51866: mds daemon damaged after outage
- MON / OSD logs attached from between 09:25:00 and 09:31:00
I'll try the MDS delay and get back to you.
- 10:12 AM Bug #51866: mds daemon damaged after outage
- ...
- 09:49 AM Bug #51866 (Can't reproduce): mds daemon damaged after outage
- Seen on a containerised test cluster with 3 x MON, 4 x OSD, 2 x MDS.
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2...
- 04:53 AM Bug #50390 (Resolved): mds: monclient: wait_auth_rotating timed out after 30
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:51 AM Bug #51113 (Resolved): mds: unknown metric type is always -1
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:50 AM Bug #51183 (Resolved): qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:50 AM Bug #51417 (Resolved): qa: test_ls_H_prints_human_readable_file_size failure
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:49 AM Bug #51476 (Resolved): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a cephfs-mir...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:46 AM Backport #50898 (Resolved): octopus: mds: monclient: wait_auth_rotating timed out after 30
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41449
m...
- 04:36 AM Backport #51499 (Resolved): pacific: qa: test_ls_H_prints_human_readable_file_size failure
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42166
m...
- 04:35 AM Backport #51547 (Resolved): pacific: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assum...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42226
m...
- 04:33 AM Backport #51500 (Resolved): pacific: qa: FileNotFoundError: [Errno 2] No such file or directory: ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42165
m...
- 04:32 AM Backport #51285: pacific: mds: unknown metric type is always -1
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42088
m...
- 04:32 AM Backport #51200 (Resolved): pacific: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42086
m...
- 04:15 AM Fix #51857 (Pending Backport): client: make sure only to update dir dist from auth mds
- In mds/Server::set_trace_dist, if (dir->is_auth() && !forward_all_requests_to_auth) dir->get_dist_spec(ds.dist, whoa...
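The rule behind this fix can be illustrated with a toy Python model (illustrative only; `Reply`, `DirState`, and `update_dir_dist` are invented names, not the actual Client.cc/Server.cc code): the client should accept a directory's distribution spec only when the reply came from the directory's auth MDS, since a replica's view may be stale.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reply:
    """Stripped-down stand-in for an MDS trace reply (hypothetical shape)."""
    from_mds: int          # rank that sent the reply
    auth_mds: int          # authoritative rank for the directory
    dist: Optional[list]   # distribution spec, or None if the sender omitted it

@dataclass
class DirState:
    cached_dist: Optional[list] = None

def update_dir_dist(dir_state: DirState, reply: Reply) -> None:
    # Core of the fix: ignore dist info unless it came from the auth MDS.
    if reply.dist is not None and reply.from_mds == reply.auth_mds:
        dir_state.cached_dist = reply.dist

d = DirState()
update_dir_dist(d, Reply(from_mds=1, auth_mds=0, dist=[1]))  # replica: ignored
print(d.cached_dist)   # None
update_dir_dist(d, Reply(from_mds=0, auth_mds=0, dist=[0]))  # auth: accepted
print(d.cached_dist)   # [0]
```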
- 02:32 AM Cleanup #51393 (Resolved): mgr/volumes/fs/operations/group.py: add extra blank line
07/26/2021
- 01:47 PM Bug #51707 (Fix Under Review): pybind/mgr/volumes: Cloner threads stuck in loop trying to clone t...
- 01:43 PM Bug #51756 (Triaged): crash: std::_Rb_tree_insert_and_rebalance(bool, std::_Rb_tree_node_base*, s...
- 01:42 PM Bug #51757 (Triaged): crash: /lib/x86_64-linux-gnu/libpthread.so.0(
- 01:39 PM Bug #51824 (Triaged): pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- 11:14 AM Feature #51416: kclient: add debugging for mds failover events
- I'm not convinced it's beneficial to spam the kernel's ring buffer with these messages. Ceph is already a bit too cha...
- 06:29 AM Backport #51285 (Resolved): pacific: mds: unknown metric type is always -1
07/23/2021
- 06:20 PM Backport #51834 (Resolved): pacific: mon/MDSMonitor: allow creating a file system with a specific...
- https://github.com/ceph/ceph/pull/42900
- 06:20 PM Backport #51833 (Resolved): pacific: client: flush the mdlog in unsafe requests' relevant and aut...
- https://github.com/ceph/ceph/pull/42925
- 06:20 PM Backport #51832 (Resolved): pacific: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and c...
- https://github.com/ceph/ceph/pull/42939
- 06:20 PM Backport #51831 (In Progress): octopus: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, an...
- https://github.com/ceph/ceph/pull/45160
- 06:18 PM Bug #51600 (Pending Backport): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_h...
- 06:17 PM Feature #51518 (Pending Backport): client: flush the mdlog in unsafe requests' relevant and auth ...
- 06:16 PM Feature #51340 (Pending Backport): mon/MDSMonitor: allow creating a file system with a specific f...
- 09:56 AM Bug #51795 (Fix Under Review): mgr/nfs:update pool name to '.nfs' in vstart.sh
- 08:42 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- After a few minutes num_strays is now 19966371 and unchanging.
- 08:27 AM Bug #51824 (Pending Backport): pacific scrub ~mds_dir causes stray related ceph_assert, abort and...
- We are testing the scalability of pacific snapshots, snap trimming, and num_stray accounting and ran into some crashe...
- 05:16 AM Backport #51819 (In Progress): pacific: cephfs-mirror: removing a mirrored directory path causes ...
- 05:10 AM Backport #51819 (Resolved): pacific: cephfs-mirror: removing a mirrored directory path causes oth...
- https://github.com/ceph/ceph/pull/42458
- 05:06 AM Bug #51666 (Pending Backport): cephfs-mirror: removing a mirrored directory path causes other syn...
07/22/2021
- 09:52 PM Feature #51416: kclient: add debugging for mds failover events
- Jeff Layton wrote:
> I can see where to add such a message, but I'm not that familiar with all of the different MDS ...
- 02:47 PM Bug #51805 (Fix Under Review): pybind/mgr/volumes: The cancelled clone still goes ahead and compl...
- 02:44 PM Bug #51805 (Resolved): pybind/mgr/volumes: The cancelled clone still goes ahead and complete the ...
- When a clone is created and then cancelled for some reason, the clone cancel cmd
succeeds but the clone completes in the ...
- 12:58 PM Documentation #51683: mgr/nfs: add note about creating exports for nfs using vstart to developer ...
- Add a note about rgw export creation as well
- 12:56 PM Bug #51800 (Resolved): mgr/nfs: create rgw export with vstart
- 11:14 AM Bug #51795 (Pending Backport): mgr/nfs:update pool name to '.nfs' in vstart.sh
- As we have changed nfs-ganesha's default pool name....
07/21/2021
- 09:26 PM Backport #50898: octopus: mds: monclient: wait_auth_rotating timed out after 30
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41449
merged
- 09:22 PM Feature #51716: Add option in `fs new` command to start rank 0 in failed state
- Patrick Donnelly wrote:
> Another thing I thought of after our discussion today, Ramana: I think the --recover flag ...
- 07:39 PM Feature #51716: Add option in `fs new` command to start rank 0 in failed state
- Another thing I thought of after our discussion today, Ramana: I think the --recover flag should do:
- Set rank0 t...
- 08:25 PM Backport #51790 (New): pacific: mgr/nfs: move nfs doc from cephfs to mgr
- 08:24 PM Documentation #51428 (Pending Backport): mgr/nfs: move nfs doc from cephfs to mgr
- 08:19 PM Bug #51789 (New): mgr/nfs: allow deployment of multiple nfs-ganesha daemons on single host
- ...
- 06:55 PM Feature #51416: kclient: add debugging for mds failover events
- I can see where to add such a message, but I'm not that familiar with all of the different MDS states. Which ones, sp...
- 06:37 PM Feature #51787 (Resolved): mgr/nfs: deploy nfs-ganesha daemons on non-default port
- 06:37 PM Feature #51787 (Resolved): mgr/nfs: deploy nfs-ganesha daemons on non-default port
- `ceph orch apply nfs`[1] supports deploying nfs-ganesha daemons on a non-default port. Add a port argument to `nfs clust...
- 03:56 AM Bug #51757 (New): crash: /lib/x86_64-linux-gnu/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=005f7c5e895e1fbe65e7b621...
- 03:56 AM Bug #51756 (Triaged): crash: std::_Rb_tree_insert_and_rebalance(bool, std::_Rb_tree_node_base*, s...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0b18f26403253ce222ac9009...
07/19/2021
- 10:39 PM Tasks #51341: Steps to recover file system(s) after recovering the Ceph monitor store
- Testing out steps to recover a multiple active MDS file system after recovering monitor store using OSDs:
- Stop a...
- 03:04 PM Backport #51499: pacific: qa: test_ls_H_prints_human_readable_file_size failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42166
merged
- 05:49 AM Bug #51722 (Fix Under Review): mds: slow performance on parallel rm operations for multiple kclients
- This is from bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1974882.
- 05:46 AM Bug #51722 (Resolved): mds: slow performance on parallel rm operations for multiple kclients
- There is another case that could cause the unlinkat to be delayed for a long time sometimes, such as for the "remova...
07/16/2021
- 10:07 PM Feature #51716 (Resolved): Add option in `fs new` command to start rank 0 in failed state
- Source: https://github.com/ceph/ceph/pull/42295#discussion_r670827459
Currently, to recover a file system after re...
- 05:10 PM Bug #51706 (Duplicate): pacific: qa: osd deep-scrub stat mismatch
- 10:16 AM Bug #51706 (Duplicate): pacific: qa: osd deep-scrub stat mismatch
- Found in [1], which was failed due to this error.
[1] http://qa-proxy.ceph.com/teuthology/yuriw-2021-07-13_17:37:5...
- 04:58 PM Bug #51191: Cannot Mount CephFS No Timeout, mount error 5 = Input/output error
- I have upgraded the Ceph cluster to v16.2.5 and upgraded Rook to v1.6.7. The issue still remains....
- 10:46 AM Bug #51707 (Resolved): pybind/mgr/volumes: Cloner threads stuck in loop trying to clone the stale...
- Clones are created on a subvolume. While the clones are not complete, they are
removed with the force option resulting i...
- 08:36 AM Cleanup #51385 (Fix Under Review): mgr/volumes/fs/fs_util.py: add extra blank line
- 06:33 AM Bug #51705 (Resolved): qa: tasks.cephfs.fuse_mount:mount command failed
- fuse_mount:mount command failed in:
http://qa-proxy.ceph.com/teuthology/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-testi...
- 05:58 AM Bug #51704 (Fix Under Review): pacific: qa: Test failure: test_mount_all_caps_absent (tasks.cephf...
- test_mount_all_caps_absent fails in:
http://qa-proxy.ceph.com/teuthology/yuriw-2021-07-08_23:33:26-fs-wip-yuri2-test...
07/15/2021
- 07:09 PM Feature #51416: kclient: add debugging for mds failover events
- Jeff Layton wrote:
> We already have this dout() message when we get a new map:
>
> [...]
>
> By mds f...
- 12:24 PM Feature #51416: kclient: add debugging for mds failover events
- We already have this dout() message when we get a new map:...
- 07:05 PM Cleanup #51393 (Fix Under Review): mgr/volumes/fs/operations/group.py: add extra blank line
- 04:51 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- I have downgraded the mon.
Yes, after creating and deleting the fs the upgrade ran through and all is fine.
- 01:07 AM Bug #51673 (Fix Under Review): MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- 02:49 PM Backport #51547: pacific: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a cephfs-...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42226
merged
- 12:55 PM Documentation #51683 (Resolved): mgr/nfs: add note about creating exports for nfs using vstart to...
- Add this page to developer guide index
https://github.com/ceph/ceph/blob/master/doc/dev/vstart-ganesha.rst
07/14/2021
- 07:27 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- Daniel Keller wrote:
> it was installed in 2015 with 0.80 Firefly or 0.87 Giant I'm not sure
>
> and then upgrade...
- 05:14 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=67b285ce3000d0cd47449cbc18...
- 04:30 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- it was installed in 2015 with 0.80 Firefly or 0.87 Giant I'm not sure
and then upgraded to 0.94 Hammer > 10 Jewel ...
- 04:08 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- Daniel Keller wrote:
> btw in the cluster no CephFS is used and there are no mds running either
Thanks for the re...
- 02:52 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- btw in the cluster no CephFS is used and there are no mds running either
- 02:28 PM Bug #51673: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- The crash is in FSMap::decode().
- 12:32 PM Bug #51673 (Resolved): MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- I tried to update my ceph from 15.2.13 to 16.2.4 on my proxmox 7.0 servers.
After restarting the first monitor cras...
- 02:23 PM Backport #51500: pacific: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kerne...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42165
merged
- 02:22 PM Backport #51285: pacific: mds: unknown metric type is always -1
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42088
merged
- 02:21 PM Backport #51200: pacific: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42086
merged
- 09:39 AM Bug #51666 (Fix Under Review): cephfs-mirror: removing a mirrored directory path causes other syn...
- 09:31 AM Bug #51666 (Resolved): cephfs-mirror: removing a mirrored directory path causes other sync failur...
07/13/2021
- 02:50 PM Cleanup #51651 (New): mgr/volumes: replace mon_command with check_mon_command
- 05:19 AM Tasks #51341 (In Progress): Steps to recover file system(s) after recovering the Ceph monitor store
- Steps to recover single active MDS file system https://github.com/ceph/ceph/pull/42295
07/12/2021
- 08:08 PM Feature #45746 (Resolved): mgr/nfs: Add interface to update export
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:07 PM Feature #48622 (Resolved): mgr/nfs: Add tests for readonly exports
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Bug #49922 (Resolved): MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Documentation #50008 (Resolved): mgr/nfs: Add troubleshooting section
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:06 PM Documentation #50161 (Resolved): mgr/nfs: validation error on creating custom export
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:05 PM Bug #50559 (Resolved): session dump includes completed_requests twice, once as an integer and onc...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:05 PM Bug #50807 (Resolved): mds: MDSLog::journaler pointer maybe crash with use-after-free
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:04 PM Bug #51492 (Resolved): pacific: pybind/ceph_volume_client: stat on empty string
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:59 PM Backport #51494 (Resolved): octopus: pacific: pybind/ceph_volume_client: stat on empty string
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42161
m...
- 03:42 PM Backport #51494: octopus: pacific: pybind/ceph_volume_client: stat on empty string
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42161
merged
- 07:59 PM Backport #51336 (Resolved): octopus: mds: avoid journaling overhead for setxattr("ceph.dir.subvol...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41996
m...
- 03:42 PM Backport #51336: octopus: mds: avoid journaling overhead for setxattr("ceph.dir.subvolume") for n...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41996
merged
- 07:58 PM Backport #50874 (Resolved): octopus: mds: MDSLog::journaler pointer maybe crash with use-after-free
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41626
m...
- 03:40 PM Backport #50874: octopus: mds: MDSLog::journaler pointer maybe crash with use-after-free
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41626
merged
- 07:58 PM Backport #50635 (Resolved): octopus: session dump includes completed_requests twice, once as an i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41625
m...
- 03:40 PM Backport #50635: octopus: session dump includes completed_requests twice, once as an integer and ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41625
merged
- 07:58 PM Backport #50283 (Resolved): octopus: MDS slow request lookupino #0x100 on rank 1 block forever on...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40782
m...
- 03:36 PM Backport #50283: octopus: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40782
merged
- 07:52 PM Backport #51493: nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42162
m...
- 03:28 PM Backport #50596 (Rejected): octopus: mgr/nfs: Add troubleshooting section
- Let's focus on Pacific.
- 03:28 PM Backport #50354 (Rejected): octopus: mgr/nfs: validation error on creating custom export
- Let's focus on Pacific.
- 03:28 PM Backport #48703 (Rejected): octopus: mgr/nfs: Add tests for readonly exports
- Let's focus on Pacific.
- 03:28 PM Backport #49712 (Rejected): octopus: mgr/nfs: Add interface to update export
- Let's focus on Pacific.
- 01:42 PM Bug #51589 (Triaged): mds: crash when journaling during replay
- 10:31 AM Bug #51630 (Fix Under Review): mgr/snap_schedule: don't throw traceback on non-existent fs
- Instead return error message and log the traceback...
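The requested behavior can be sketched as follows (a hedged illustration using the ceph-mgr convention of returning an (retval, stdout, stderr) tuple to the CLI; `KNOWN_FILESYSTEMS` and `snap_schedule_list` are invented stand-ins, not the actual snap_schedule module code):

```python
import errno
import logging
import traceback

log = logging.getLogger(__name__)

KNOWN_FILESYSTEMS = {"cephfs"}  # stand-in for a real FSMap lookup

def snap_schedule_list(fs_name: str):
    """Return an mgr-style (retval, stdout, stderr) tuple instead of letting
    the exception (and its traceback) escape to the CLI user."""
    try:
        if fs_name not in KNOWN_FILESYSTEMS:
            raise ValueError(f"no such filesystem: {fs_name}")
        return 0, f"schedules for {fs_name}", ""
    except ValueError as e:
        log.debug(traceback.format_exc())   # traceback goes to the log...
        return -errno.ENOENT, "", str(e)    # ...only the message to the user

rc, out, err = snap_schedule_list("nope")
print(rc, err)   # -2 no such filesystem: nope
```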
- 07:40 AM Documentation #51428 (In Progress): mgr/nfs: move nfs doc from cephfs to mgr
07/09/2021
- 05:45 PM Bug #51600 (Fix Under Review): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_h...
- 04:43 AM Bug #51600 (Pending Backport): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_h...
- META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate are not updated.
ceph daemon mds.$(hostnam...
- 03:14 PM Feature #51615 (New): mgr/nfs: add interface to update nfs cluster
- 03:08 PM Cleanup #51614 (Resolved): mgr/nfs: remove dashboard test remnant from unit tests
- 03:03 PM Feature #51613 (New): mgr/nfs: add qa tests for rgw
07/08/2021
- 09:47 PM Bug #49536 (Resolved): client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:47 PM Bug #49939 (Resolved): cephfs-mirror: be resilient to recreated snapshot during synchronization
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #50112 (Resolved): MDS stuck at stopping when reducing max_mds
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #50216 (Resolved): qa: "ls: cannot access 'lost+found': No such file or directory"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:45 PM Bug #50530 (Resolved): pacific: client: abort after MDS blocklist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:45 PM Bug #51069 (Resolved): mds: mkdir on ephemerally pinned directory sometimes blocked on journal flush
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51077 (Resolved): MDSMonitor: crash when attempting to mount cephfs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51146 (Resolved): qa: scrub code does not join scrubopts with comma
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51182 (Resolved): pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51184 (Resolved): qa: fs:bugs does not specify distro
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Documentation #51187 (Resolved): doc: pacific updates
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:44 PM Bug #51250 (Resolved): qa: fs:upgrade uses teuthology default distro
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:43 PM Bug #51318 (Resolved): cephfs-mirror: do not terminate on SIGHUP
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:36 PM Backport #51232 (Resolved): pacific: qa: scrub code does not join scrubopts with comma
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42065
m...
- 09:36 PM Backport #51251 (Resolved): pacific: qa: fs:upgrade uses teuthology default distro
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42067
m...
- 09:35 PM Backport #50913 (Resolved): pacific: MDS heartbeat timed out between during executing MDCache::st...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42061
m...
- 09:35 PM Backport #51286 (Resolved): pacific: MDSMonitor: crash when attempting to mount cephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42068
m...
- 09:35 PM Backport #51413 (Resolved): pacific: cephfs-mirror: do not terminate on SIGHUP
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42097
m...
- 09:34 PM Backport #51414 (Resolved): pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42072
m...
- 09:34 PM Backport #51412 (Resolved): pacific: mds: mkdir on ephemerally pinned directory sometimes blocked...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42071
m...
- 09:34 PM Backport #51324 (Resolved): pacific: pacific: client: abort after MDS blocklist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42070
m...
- 09:34 PM Backport #51322 (Resolved): pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionError...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42069
m...
- 09:34 PM Backport #51235 (Resolved): pacific: doc: pacific updates
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42066
m...
- 09:34 PM Backport #51231 (Resolved): pacific: pybind/mgr/snap_schedule: Invalid command: Unexpected argume...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42064
m...
- 09:33 PM Backport #51230 (Resolved): pacific: qa: fs:bugs does not specify distro
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42063
m...
- 09:32 PM Backport #51203 (Resolved): pacific: mds: CephFS kclient gets stuck when getattr() on a certain file
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42062
m...
- 09:32 PM Backport #50875 (Resolved): pacific: mds: MDSLog::journaler pointer maybe crash with use-after-free
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42060
m...
- 09:32 PM Backport #50848 (Resolved): pacific: mds: "cluster [ERR] Error recovering journal 0x203: (2) No...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42059
m...
- 09:31 PM Backport #50846 (Resolved): pacific: mds: journal recovery thread is possibly asserting with mds_...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42058
m...
- 09:31 PM Backport #50636 (Resolved): pacific: session dump includes completed_requests twice, once as an i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42057
m...
- 09:31 PM Backport #50630 (Resolved): pacific: mds: Error ENOSYS: mds.a started profiler
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42056
m...
- 09:30 PM Backport #50445 (Resolved): pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41052
m...
- 09:30 PM Backport #50624 (Resolved): pacific: qa: "ls: cannot access 'lost+found': No such file or directory"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40856
m...
- 09:30 PM Backport #50289 (Resolved): pacific: MDS stuck at stopping when reducing max_mds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40856
m...
- 09:30 PM Backport #50282 (Resolved): pacific: MDS slow request lookupino #0x100 on rank 1 block forever on...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40856
m...
- 07:44 AM Bug #51589 (Resolved): mds: crash when journaling during replay
- MDS version: ceph version 14.2.20 (36274af6eb7f2a5055f2d53ad448f2694e9046a0) nautilus (stable)
Using 200 clients, ...
07/07/2021
- 09:12 PM Bug #44257: vstart.sh: failed by waiting for mgr dashboard module to start
- I just ran into this same issue, where "waiting for mgr dashboard module to start" runs in a loop. Checking mgr.log.x...
- 05:40 PM Backport #51481 (Resolved): pacific: osd: sent kickoff request to MDS and then stuck for 15 minut...
- 05:30 PM Backport #51547 (In Progress): pacific: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not as...
07/06/2021
- 08:10 PM Backport #51547 (Resolved): pacific: src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assum...
- https://github.com/ceph/ceph/pull/42226
- 08:08 PM Bug #51476 (Pending Backport): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a ce...
- 06:55 PM Backport #51545 (Need More Info): octopus: mgr/volumes: use a dedicated libcephfs handle for subv...
- 06:55 PM Backport #51544 (Resolved): pacific: mgr/volumes: use a dedicated libcephfs handle for subvolume ...
- https://github.com/ceph/ceph/pull/42914
- 06:54 PM Bug #51271 (Pending Backport): mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
- 06:52 PM Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~...
- /ceph/teuthology-archive/pdonnell-2021-07-04_02:32:34-fs-wip-pdonnell-testing-20210703.052904-distro-basic-smithi/625...
- 06:05 PM Cleanup #51543 (Fix Under Review): mds: improve debugging for mksnap denial
- 06:04 PM Cleanup #51543 (Resolved): mds: improve debugging for mksnap denial
- 03:17 PM Feature #51340 (Fix Under Review): mon/MDSMonitor: allow creating a file system with a specific f...
- 10:50 AM Feature #50150 (Fix Under Review): qa: begin grepping kernel logs for kclient warnings/failures t...
07/05/2021
- 02:23 AM Feature #51518 (Fix Under Review): client: flush the mdlog in unsafe requests' relevant and auth ...
- 01:35 AM Feature #51518 (Resolved): client: flush the mdlog in unsafe requests' relevant and auth MDSes only
- Do not flush the mdlog on all the MDSes, which may make no sense for a specific inode.
07/02/2021
- 11:19 PM Backport #51499 (In Progress): pacific: qa: test_ls_H_prints_human_readable_file_size failure
- 08:20 PM Backport #51499 (Resolved): pacific: qa: test_ls_H_prints_human_readable_file_size failure
- https://github.com/ceph/ceph/pull/42166
- 11:16 PM Backport #51500 (In Progress): pacific: qa: FileNotFoundError: [Errno 2] No such file or director...
- 08:30 PM Backport #51500 (Resolved): pacific: qa: FileNotFoundError: [Errno 2] No such file or directory: ...
- https://github.com/ceph/ceph/pull/42165
- 08:29 PM Bug #51183 (Pending Backport): qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/...
- 08:16 PM Bug #51417 (Pending Backport): qa: test_ls_H_prints_human_readable_file_size failure
- 08:08 PM Backport #51493 (Resolved): nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- 07:46 PM Backport #51493: nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42162
merged
- 04:43 PM Backport #51493 (In Progress): nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- 04:20 PM Backport #51493 (Resolved): nautilus: pacific: pybind/ceph_volume_client: stat on empty string
- https://github.com/ceph/ceph/pull/42162
- 04:48 PM Bug #51495 (In Progress): client: handle empty path strings
- The standard indicates we should return ENOENT.
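As a quick illustration of the mandated behavior (a sketch against the local OS, not ceph client code): POSIX pathname resolution of an empty string fails with ENOENT, which is what the client should mirror.

```python
# Sketch only: POSIX specifies that resolving an empty pathname fails
# with ENOENT; checking the local OS behavior the client should match.
import errno
import os

try:
    os.stat("")
except OSError as e:
    # Python surfaces this as FileNotFoundError (errno 2, ENOENT).
    print(e.errno == errno.ENOENT)  # True
```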
- 04:42 PM Backport #51494 (In Progress): octopus: pacific: pybind/ceph_volume_client: stat on empty string
- 04:20 PM Backport #51494 (Resolved): octopus: pacific: pybind/ceph_volume_client: stat on empty string
- https://github.com/ceph/ceph/pull/42161
- 04:18 PM Bug #51492 (Pending Backport): pacific: pybind/ceph_volume_client: stat on empty string
- 04:16 PM Bug #51492 (Resolved): pacific: pybind/ceph_volume_client: stat on empty string
- When volume_prefix begins with "/", the library will try to stat the empty string, resulting in a log like:...
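The failure mode can be sketched in plain Python (hypothetical helper names, not the actual ceph_volume_client code): splitting a prefix that begins with "/" leaves an empty leading component, and stat()ing that empty string is what produces the error.

```python
# Hypothetical sketch of the #51492 failure mode; the helper names are
# illustrative and do not come from pybind/ceph_volume_client.

def components_naive(volume_prefix):
    # A leading "/" leaves an empty first component behind,
    # so a later stat() on each ancestor hits stat("").
    return volume_prefix.split("/")

def components_fixed(volume_prefix):
    # Filtering out empty strings avoids stat("") on the ancestor walk.
    return [c for c in volume_prefix.split("/") if c]

print(components_naive("/volumes/group"))  # ['', 'volumes', 'group']
print(components_fixed("/volumes/group"))  # ['volumes', 'group']
```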
07/01/2021
- 11:28 PM Bug #51476 (Fix Under Review): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a ce...
- 03:28 PM Bug #51476 (Resolved): src/pybind/mgr/mirroring/fs/snapshot_mirror.py: do not assume a cephfs-mir...
- When the replication is unidirectional, we cannot assume a daemon is running, and the "fs snapshot mirror daemon status...
- 09:15 PM Backport #51482 (Need More Info): octopus: osd: sent kickoff request to MDS and then stuck for 15...
- 09:15 PM Backport #51481 (Resolved): pacific: osd: sent kickoff request to MDS and then stuck for 15 minut...
- https://github.com/ceph/ceph/pull/42072
- 09:12 PM Bug #51357 (Pending Backport): osd: sent kickoff request to MDS and then stuck for 15 minutes unt...
- The code change is in cephfs.
- 04:02 PM Backport #51232: pacific: qa: scrub code does not join scrubopts with comma
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42065
merged
- 04:01 PM Backport #51251: pacific: qa: fs:upgrade uses teuthology default distro
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42067
merged
- 12:13 PM Bug #50954: mgr/pybind/snap_schedule: commands only support positional arguments?
- Sebastian Wagner wrote:
> Can you use proper positional arguments here?
>
> [...]
>
> I for one don't think h...
06/30/2021
- 11:58 PM Backport #50913: pacific: MDS heartbeat timed out between during executing MDCache::start_files_t...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42061
merged
- 11:57 PM Backport #51286: pacific: MDSMonitor: crash when attempting to mount cephfs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42068
merged
- 10:46 PM Documentation #51459 (Resolved): doc: document what kinds of damage forward scrub can repair
- 07:38 PM Backport #51413: pacific: cephfs-mirror: do not terminate on SIGHUP
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42097
merged
- 07:36 PM Backport #51414: pacific: mds: "FAILED ceph_assert(r == 0 || r == -2)"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42072
merged
- 07:35 PM Backport #51412: pacific: mds: mkdir on ephemerally pinned directory sometimes blocked on journal...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42071
merged
- 07:34 PM Backport #51324: pacific: pacific: client: abort after MDS blocklist
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42070
merged
- 07:34 PM Backport #51322: pacific: qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42069
merged
- 07:32 PM Backport #51235: pacific: doc: pacific updates
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42066
merged
- 07:30 PM Backport #51231: pacific: pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42064
merged
- 07:29 PM Backport #51230: pacific: qa: fs:bugs does not specify distro
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42063
merged
- 06:44 PM Backport #51203: pacific: mds: CephFS kclient gets stuck when getattr() on a certain file
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42062
merged
- 06:43 PM Backport #50875: pacific: mds: MDSLog::journaler pointer maybe crash with use-after-free
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42060
merged
- 06:42 PM Backport #50848: pacific: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42059
merged
- 06:42 PM Backport #50846: pacific: mds: journal recovery thread is possibly asserting with mds_lock not lo...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42058
merged
- 06:41 PM Backport #50636: pacific: session dump includes completed_requests twice, once as an integer and ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42057
merged
- 06:41 PM Backport #50630: pacific: mds: Error ENOSYS: mds.a started profiler
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42056
merged
- 06:31 PM Backport #50445: pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41052
merged
- 06:30 PM Backport #50624: pacific: qa: "ls: cannot access 'lost+found': No such file or directory"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40856
merged
- 06:30 PM Backport #50289: pacific: MDS stuck at stopping when reducing max_mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40856
merged
- 06:30 PM Backport #50282: pacific: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40856
merged
- 06:16 PM Bug #51417: qa: test_ls_H_prints_human_readable_file_size failure
- Oh, this may be the protected_regular enforcement that's in more recent kernels. See:
https://www.kernel.org/d...
- 06:10 PM Bug #51440 (Duplicate): fallocate fails with EACCES
- 06:07 PM Bug #51440: fallocate fails with EACCES
- I'm not even sure that this is ceph related. The command was trying to open a file in /tmp to do an fallocate (which ...
- 10:42 AM Bug #51440 (Duplicate): fallocate fails with EACCES
- https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific-distro-basic-smithi/62...
- 11:46 AM Bug #51062 (Fix Under Review): mds,client: suppport getvxattr RPC
- 10:57 AM Bug #51266: test cleanup failure
- Another occurrence:
https://pulpito.ceph.com/yuriw-2021-06-29_00:14:14-fs-wip-yuri-testing-2021-06-28-1259-pacific...
- 07:24 AM Feature #40986: cephfs qos: implement cephfs qos base on tokenbucket algorighm
- I'm also interested in the status of QoS for CephFS. Is there any available and mature CephFS QoS mechanism?
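For reference, the token-bucket algorithm the feature title refers to can be sketched in a few lines (illustrative only, not CephFS code):

```python
# Minimal token-bucket rate limiter, sketched to illustrate the
# algorithm behind the QoS proposal; not taken from Ceph sources.
import time

class TokenBucket:
    """Admits bursts up to `capacity` tokens, then throttles to an
    average of `rate` tokens per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=10.0)
# An immediate burst of 20 requests: only the first 10 are admitted;
# later requests succeed again only as tokens are refilled over time.
print(sum(bucket.allow() for _ in range(20)))  # 10
```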
06/29/2021
- 10:42 PM Feature #51340 (In Progress): mon/MDSMonitor: allow creating a file system with a specific fscid
- 09:40 PM Feature #51434 (Resolved): pybind/mgr/volumes: add basic introspection
- Something like:...
- 02:58 PM Bug #51183: qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3...
- This is clearly a race of some sort. Either it is finding the directory and the mds_sessions file (and possibly the d...
- 02:25 AM Bug #51183 (Fix Under Review): qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/...
- 01:44 PM Bug #50033 (Fix Under Review): mgr/stats: be resilient to offline MDS rank-0
- 01:00 PM Documentation #51428 (Pending Backport): mgr/nfs: move nfs doc from cephfs to mgr
- 12:59 PM Backport #51413 (In Progress): pacific: cephfs-mirror: do not terminate on SIGHUP
- 12:54 PM Backport #50994 (Resolved): pacific: cephfs-mirror: be resilient to recreated snapshot during syn...
- So, this is already in pacific -- I missed updating the tracker.
- 12:52 PM Backport #50991 (In Progress): pacific: mgr/nfs: skipping conf file or passing empty file throws ...
- https://github.com/ceph/ceph/pull/42096
- 12:52 PM Backport #51174 (In Progress): pacific: mgr/nfs: add nfs-ganesha config hierarchy
- https://github.com/ceph/ceph/pull/42096
- 08:12 AM Fix #49341 (Resolved): qa: add async dirops testing
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:10 AM Bug #50447 (Resolved): cephfs-mirror: disallow adding a active peered file system back to its source
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:10 AM Bug #50532 (Resolved): mgr/volumes: hang when removing subvolume when pools are full
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 AM Bug #50867 (Resolved): qa: fs:mirror: reduced data availability
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 AM Bug #51204 (Resolved): cephfs-mirror: false warning of "keyring not found" seen in cephfs-mirror ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:41 AM Backport #51421 (Need More Info): pacific: mgr/nfs: Add support for RGW export
- 07:37 AM Feature #47172 (Pending Backport): mgr/nfs: Add support for RGW export
- 07:21 AM Bug #51170 (Resolved): pacific: qa: AttributeError: 'RemoteProcess' object has no attribute 'split'
- 07:19 AM Backport #50627: pacific: client: access(path, X_OK) on non-executable file as root always succeeds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/41294
m...
- 06:13 AM Backport #51285 (In Progress): pacific: mds: unknown metric type is always -1
- 05:25 AM Backport #51285: pacific: mds: unknown metric type is always -1
- Patrick Donnelly wrote:
> Xiubo, this has non-trivial conflicts. Can you take this one please?
Sure, will finish ...
- 05:26 AM Backport #51200 (In Progress): pacific: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- 05:20 AM Backport #51411 (In Progress): pacific: pybind/mgr/volumes: purge queue seems to block operating ...
- 04:58 AM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- ...