Activity
From 07/28/2021 to 08/26/2021
08/26/2021
- 06:38 PM Backport #51544: pacific: mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42914
merged
- 06:38 PM Backport #51834: pacific: mon/MDSMonitor: allow creating a file system with a specific fscid
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42900
merged
- 04:34 PM Backport #52029 (Need More Info): pacific: mgr/nfs :update pool name to '.nfs' in vstart.sh
- 04:33 PM Backport #51421 (Need More Info): pacific: mgr/nfs: Add support for RGW export
- 04:32 PM Backport #51832 (In Progress): pacific: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, an...
- 04:30 PM Backport #51932 (In Progress): pacific: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather...
- 04:29 PM Backport #51977 (In Progress): pacific: client: make sure only to update dir dist from auth mds
- 04:27 PM Backport #51198 (In Progress): pacific: msg: active_connections regression
- 04:18 PM Backport #51935 (In Progress): pacific: mds: improve debugging for mksnap denial
- 04:16 PM Backport #52036 (Resolved): pacific: mon/MDSMonitor.cc: fix join fscid not applied with pending f...
- 04:15 PM Backport #51983 (Resolved): pacific: mon/MDSMonitor: do not pointlessly kill standbys that are in...
- 04:15 PM Backport #51411 (Resolved): pacific: pybind/mgr/volumes: purge queue seems to block operating on ...
- 04:15 PM Backport #51174 (Resolved): pacific: mgr/nfs: add nfs-ganesha config hierarchy
- 04:15 PM Backport #50991 (Resolved): pacific: mgr/nfs: skipping conf file or passing empty file throws tra...
- 12:42 PM Backport #52384 (In Progress): pacific: pybind/mgr/volumes: Cloner threads stuck in loop trying t...
- 02:11 AM Bug #52386 (Fix Under Review): client: fix dump mds twice
- 02:04 AM Backport #51833 (In Progress): pacific: client: flush the mdlog in unsafe requests' relevant and ...
- 02:01 AM Backport #51937 (In Progress): pacific: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFu...
- 01:44 AM Tasks #47920 (Won't Fix): client: get rid of the client_lock for mdsmap
- This approach does not make much sense and would make the code more complex and risky; closing it.
08/25/2021
- 07:05 PM Backport #51411: pacific: pybind/mgr/volumes: purge queue seems to block operating on cephfs conn...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42083
merged
- 03:37 PM Bug #51866: mds daemon damaged after outage
- David Piper wrote:
> Do the pgs need to be `active+clean` or are there other pg states that it would be safe to star...
- 03:02 PM Bug #51866: mds daemon damaged after outage
- We've delayed MDS restarts with a script that waits for `active+clean` pgs first. Script is attached; it's borrowed ...
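A minimal sketch of such a wait loop (hedged; the attached script is the authoritative version — this assumes `jq` is available and relies on the stable `pgmap` fields of `ceph status --format json`):

    #!/bin/sh
    # Block until every PG reports active+clean before restarting the MDS.
    # Poll interval is arbitrary; tune to taste.
    while true; do
        total=$(ceph status --format json | jq '.pgmap.num_pgs')
        clean=$(ceph status --format json | jq '[.pgmap.pgs_by_state[] | select(.state_name == "active+clean") | .count] | add // 0')
        [ "$total" -gt 0 ] && [ "$clean" -eq "$total" ] && break
        sleep 5
    done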
- 02:06 PM Bug #52406: cephfs_metadata pool got full after upgrade from Nautilus to Pacific 16.2.5
- Same as https://tracker.ceph.com/issues/52260 ?
- 09:15 AM Bug #52406: cephfs_metadata pool got full after upgrade from Nautilus to Pacific 16.2.5
- OSDs with metadata utilization on SSD drives after recreating OSDs and filling CephFS with same data again:...
- 09:08 AM Bug #52406 (Need More Info): cephfs_metadata pool got full after upgrade from Nautilus to Pacific...
- Hi
I have the following setup on my Ceph cluster:
cephfs_metadata pool - using a crush rule to use only SSD devices t...
- 12:01 PM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- Is it the same as https://tracker.ceph.com/issues/52406 ?
- 09:40 AM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- We think there is a major bug here that carries significant risk.
Do you need any further logs/information? Try to replicate th...
- 04:27 AM Backport #51544 (In Progress): pacific: mgr/volumes: use a dedicated libcephfs handle for subvolu...
08/24/2021
- 09:51 PM Backport #51983: pacific: mon/MDSMonitor: do not pointlessly kill standbys that are incompatible ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42578
merged
- 09:51 PM Backport #52036: pacific: mon/MDSMonitor.cc: fix join fscid not applied with pending fsmap at boot
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42578
merged
- 09:50 PM Backport #51174: pacific: mgr/nfs: add nfs-ganesha config hierarchy
- Varsha Rao wrote:
> https://github.com/ceph/ceph/pull/42096
merged
- 09:50 PM Backport #50991: pacific: mgr/nfs: skipping conf file or passing empty file throws traceback
- Varsha Rao wrote:
> https://github.com/ceph/ceph/pull/42096
merged
- 08:32 PM Bug #52397 (Resolved): pacific: qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed
- See this in Yuri's latest pacific run,
https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-...
- 08:03 PM Bug #52396 (Duplicate): pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.O...
- See this in Yuri's latest pacific run,
https://pulpito.ceph.com/yuriw-2021-08-23_19:33:26-fs-wip-yuri4-testing-2021-...
- 12:29 PM Bug #51281 (In Progress): qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853...
- 12:27 PM Bug #36273 (New): qa: add background task for some units which drops MDS cache
- I worked on this a while back, but probably never pushed a PR. I'll move tracker state when I have the PR pushed.
- 12:23 PM Bug #51589: mds: crash when journaling during replay
- I'm unable to reproduce the crash. Do you have the MDS logs from the time of the crash? As you mentioned, you do not have th...
- 04:30 AM Backport #51834 (In Progress): pacific: mon/MDSMonitor: allow creating a file system with a speci...
- 02:17 AM Bug #52386 (Resolved): client: fix dump mds twice
- src/client: in the asok command "dump_cache", fix dumping the mds twice
- 12:05 AM Backport #52384 (Resolved): pacific: pybind/mgr/volumes: Cloner threads stuck in loop trying to c...
- https://github.com/ceph/ceph/pull/42932
- 12:04 AM Bug #51707 (Pending Backport): pybind/mgr/volumes: Cloner threads stuck in loop trying to clone t...
- 12:03 AM Bug #51805 (Resolved): pybind/mgr/volumes: The cancelled clone still goes ahead and complete the ...
- backport part of #51707
08/23/2021
- 11:14 PM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- I reproduced this issue with additional logging using the same reproducer https://tracker.ceph.com/issues/51282#note-...
- 08:06 PM Backport #52084: pacific: pybind/mgr/stats: KeyError
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42702
merged
- 08:03 PM Backport #51940: pacific: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42536
merged
- 05:17 PM Bug #52382 (In Progress): mds,client: add flag to MClientSession for reject reason
- 05:17 PM Bug #52382 (Resolved): mds,client: add flag to MClientSession for reject reason
- Ceph tracker for #47450.
08/22/2021
- 12:20 PM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- Hi,
Any update on this?
08/20/2021
- 08:53 PM Bug #51295 (Rejected): When fsname = k8s cephfs is specified, an error is displayed:"HEALTH_ERR 1...
- That auth credential is not valid for Octopus. You should upgrade to Pacific.
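For context, a hedged example of the kind of credential involved (names are placeholders): on Pacific, `ceph fs authorize` emits fsname-scoped caps (e.g. `caps mds = "allow rw fsname=k8s"`) that pre-Pacific daemons do not understand:

    ceph fs authorize k8s client.k8s / rw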
- 09:06 AM Bug #51975 (Resolved): pybind/mgr/stats: KeyError
- 12:34 AM Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
- Andras Pataki wrote:
> max_mds is 1 - we are running a single active MDS only. The clients are all ceph-fuse.
> I'...
08/19/2021
- 10:10 PM Backport #51544: pacific: mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
- Venky, please take this one.
- 07:56 PM Backport #51935 (Need More Info): pacific: mds: improve debugging for mksnap denial
- 07:56 PM Backport #51937 (Need More Info): pacific: qa: test_full_fsync (tasks.cephfs.test_full.TestCluste...
- Xiubo, please take this one.
- 07:51 PM Backport #51940 (Resolved): pacific: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to...
- 07:47 PM Backport #52084 (Resolved): pacific: pybind/mgr/stats: KeyError
- 06:00 PM Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
- max_mds is 1 - we are running a single active MDS only. The clients are all ceph-fuse.
I've tried setting the MDS l...
08/18/2021
- 09:07 AM Bug #51589: mds: crash when journaling during replay
- Normally, MDLog does not expire the latest log segment during trimming. The exception to this is when the log segmen...
08/17/2021
- 03:57 PM Backport #52084: pacific: pybind/mgr/stats: KeyError
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42702
merged
- 02:16 AM Documentation #49406 (Resolved): Exceeding osd nearfull ratio causes write throttle.
08/16/2021
- 10:38 PM Fix #52104 (Fix Under Review): qa: add testing for "copyfrom" mount option
- 09:53 PM Feature #51716 (Fix Under Review): Add option in `fs new` command to start rank 0 in failed state
- 09:30 PM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- Casey Bodley wrote:
> should the rgw suite be whitelisting these too?
I think ignorelisting these errors/warnings...
- 08:58 PM Backport #51940: pacific: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42536
merged
- 06:18 PM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- Patrick Donnelly wrote:
> Were there any cluster health warnings? Some of the surrounding MDS logs would also be helpfu...
- 01:43 PM Bug #52280 (Need More Info): Mds crash and fails with assert on prepare_new_inode
- Were there any cluster health warnings? Some of the surrounding MDS logs would also be helpful.
- 08:57 AM Bug #52280 (Resolved): Mds crash and fails with assert on prepare_new_inode
- Hi All,
We have a Nautilus 14.2.7 cluster with 3 MDSs.
Sometimes, during heavy loads from Kubernetes pods, the MDSs kee...
- 01:50 PM Bug #52134: botched cephadm upgrade due to mds failures
- If you hit this again, increase debugging on mons to debug_mon=20 and let it chew for 30s-1m so we can hopefully see ...
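For reference, a hedged sketch of that step using the standard config CLI (remember to revert, since level-20 logs grow quickly):

    ceph config set mon debug_mon 20
    # reproduce the failure / wait 30s-1m, then collect the mon log
    ceph config rm mon debug_mon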
- 05:51 AM Bug #52274 (Fix Under Review): mgr/nfs: add more log messages
- 04:33 AM Bug #52274 (Resolved): mgr/nfs: add more log messages
- 02:43 AM Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
- Andras Pataki wrote:
> [...]
> On our other large cluster we don't see such crashes. One major config difference...
08/14/2021
- 05:55 AM Bug #52260 (Duplicate): 1 MDSs are read only | pacific 16.2.5
- Hi,
We upgraded from Ceph 14.2.20 to 16.2.5 a couple of weeks ago, and suddenly the MDS metadata pool OSDs fi...
08/12/2021
- 02:01 PM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- should the rgw suite be whitelisting these too?
- 01:35 PM Bug #52094: Tried out Quincy: All MDS Standby
- Hmm, not sure about turnaround times, but neither the mailing list, a Proxmox forum post, nor this ticket has been responde...
- 10:49 AM Bug #51866: mds daemon damaged after outage
- Thanks Dan, that's a convincing diagnosis. We've diverted our attention to other projects but I'll look into implemen...
08/11/2021
- 09:15 PM Feature #51716 (In Progress): Add option in `fs new` command to start rank 0 in failed state
- 03:37 PM Bug #51365 (In Progress): mgr/nfs: show both ipv4 and ipv6 address in cluster info command
- The latest code tries to use the HostSpec.addr field here. Only if that field is empty (or a hostname instead of an ...
- 03:36 PM Bug #52134: botched cephadm upgrade due to mds failures
- ...
- 03:31 PM Bug #52134 (Can't reproduce): botched cephadm upgrade due to mds failures
- I tried to upgrade my cephadm cluster from one (development) quincy-ish build to another. It got about halfway throug...
- 01:26 PM Bug #49307 (Duplicate): nautilus: qa: "RuntimeError: expected fetching path of an pending clone t...
- 01:23 PM Bug #49307: nautilus: qa: "RuntimeError: expected fetching path of an pending clone to fail"
- Duplicate of https://tracker.ceph.com/issues/48231, but the fix was not backported to nautilus.
- 01:25 PM Bug #52123 (Fix Under Review): mds sends cap updates with btime zeroed out
- 01:20 PM Bug #51589: mds: crash when journaling during replay
- If I'm reading this correctly, it looks like a log entry can be submitted when there are no log segments available f...
- 12:57 PM Bug #52062 (Fix Under Review): cephfs-mirror: terminating a mirror daemon can cause a crash at times
- 12:13 PM Documentation #49406 (In Progress): Exceeding osd nearfull ratio causes write throttle.
- Ok, added a blurb to https://docs.ceph.com/en/latest/cephfs/troubleshooting/#kernel-mount-debugging
08/10/2021
- 08:37 PM Feature #51787 (Resolved): mgr/nfs: deploy nfs-ganesha daemons on non-default port
- 06:50 PM Bug #52123: mds sends cap updates with btime zeroed out
- This was noticed as I was replicating the problem in #46574. With this PR in place, the btime is correctly reported i...
- 02:57 PM Bug #52123 (Resolved): mds sends cap updates with btime zeroed out
- The MDS is sending cap updates that have the btime zeroed out in some cases. Ensure that it's sending the right btime...
- 04:38 PM Bug #50719: xattr returning from the dead (sic!)
- Jeff Layton wrote:
> Ralph, ping? Were you ever able to determine whether this was fixed in later kernels?
At first...
- 02:37 PM Cleanup #51614 (Resolved): mgr/nfs: remove dashboard test remnant from unit tests
- 02:36 PM Bug #51800 (Resolved): mgr/nfs: create rgw export with vstart
- 02:36 PM Documentation #51683 (Resolved): mgr/nfs: add note about creating exports for nfs using vstart to...
- 02:33 PM Documentation #24642: doc: visibility semantics to other clients
- I had in mind that hopefully CephFS could have user-facing documentation about file visibility semantics across multi...
08/09/2021
- 08:08 PM Backport #51819 (Resolved): pacific: cephfs-mirror: removing a mirrored directory path causes oth...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42458
merged
- 08:05 PM Backport #51939: octopus: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42537
merged
- 06:56 PM Bug #50719: xattr returning from the dead (sic!)
- Ralph, ping? Were you ever able to determine whether this was fixed in later kernels?
- 06:54 PM Bug #48439 (Resolved): fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) ...
- I think this was fixed upstream with commit 62575e270f661aba64778cbc5f354511cf9abb21, and that got backported to RHEL...
- 06:47 PM Documentation #24642: doc: visibility semantics to other clients
- It's not clear to me what this tracker bug is actually asking for. I get that you want some documentation about "guar...
- 06:10 PM Bug #44976 (Resolved): MDS problem slow requests, cache pressure, damaged metadata after upgradin...
- I think we ended up resolving the original issue and then the bug wandered off into the weeds and Mitchell opened a n...
- 06:07 PM Feature #42447 (Resolved): add basic client setup page
- 06:06 PM Bug #47998 (Resolved): cephfs kernel client hung
- 06:06 PM Bug #47998: cephfs kernel client hung
- Fixed in mainline in commit bca9fc14c70fc.
- 06:02 PM Bug #48125 (Resolved): qa: test_subvolume_snapshot_clone_cancel_in_progress failure
- Fixed in upstream commit 04fabb1199d1f995d6b9a1c42c046ac4bdac2d19.
- 05:45 PM Feature #1276 (Fix Under Review): client: expose mds partition via virtual xattrs
- 05:45 PM Feature #6373: kcephfs: qa: test fscache
- Patrick Donnelly wrote:
>
> What's special about smithi vs. gibba? We could teach kclient.yaml to setup a file syst...
- 05:35 PM Bug #50083 (Resolved): CephFS file access issues using kernel driver: file overwritten with null ...
- This is probably the bug that was fixed by this commit:
https://git.kernel.org/pub/scm/linux/kernel/git/torval... - 10:36 AM Bug #51870 (Fix Under Review): pybind/mgr/volumes: first subvolume permissions set perms on /volu...
- 10:34 AM Fix #52104 (Fix Under Review): qa: add testing for "copyfrom" mount option
- We currently don't have any testing coverage for the copy_file_range syscall, which requires the "copyfrom" mount opt...
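For context, a hedged example of exercising the syscall over a kernel mount with the option enabled (mon address, credentials, and paths are placeholders, and a keyring in /etc/ceph is assumed; xfs_io's copy_range command issues copy_file_range directly):

    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,copyfrom
    dd if=/dev/urandom of=/mnt/cephfs/src bs=4M count=4
    xfs_io -f -c "copy_range /mnt/cephfs/src" /mnt/cephfs/dst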
- 10:21 AM Feature #51787 (Fix Under Review): mgr/nfs: deploy nfs-ganesha daemons on non-default port
08/08/2021
- 03:36 PM Bug #52094: Tried out Quincy: All MDS Standby
- ...
08/07/2021
- 10:43 AM Bug #52094 (Duplicate): Tried out Quincy: All MDS Standby
- On Proxmox, and suffering with #51445 (https://tracker.ceph.com/issues/51445)
As any good "Knows enough to be danger... - 12:23 AM Bug #48673 (In Progress): High memory usage on standby replay MDS
- I've been able to reproduce this. Will try to track down the cause...
08/06/2021
- 08:37 PM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- Sebastian, do we have documentation somewhere about the situations in which cephadm will restart services? I think there nee...
- 06:29 PM Fix #52068: qa: add testing for "ms_mode" mount option
- There's a potential problem. I suspect that a lot of the qa/ infrastructure is building device strings using v1 mon a...
- 05:51 PM Fix #52068 (Fix Under Review): qa: add testing for "ms_mode" mount option
- 04:25 PM Fix #52068: qa: add testing for "ms_mode" mount option
- Moving to CephFS as the purpose of the ticket is the backport to pacific.
- 12:52 PM Backport #52084 (In Progress): pacific: pybind/mgr/stats: KeyError
- 12:25 PM Backport #52084 (Resolved): pacific: pybind/mgr/stats: KeyError
- https://github.com/ceph/ceph/pull/42702
- 12:22 PM Bug #51975 (Pending Backport): pybind/mgr/stats: KeyError
08/05/2021
- 06:09 PM Fix #52068 (Resolved): qa: add testing for "ms_mode" mount option
- We currently don't have any testing coverage for the "ms_mode" mount option. Add new workload variants for them.
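For context, hedged examples of the option's values on a kernel mount (addresses and credentials are placeholders; msgr2 listens on 3300 by default, legacy msgr1 on 6789):

    mount -t ceph 192.168.0.1:3300:/ /mnt/cephfs -o name=admin,ms_mode=crc      # msgr2, crc integrity only
    mount -t ceph 192.168.0.1:3300:/ /mnt/cephfs -o name=admin,ms_mode=secure   # msgr2, encrypted
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,ms_mode=legacy   # msgr1 only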
- 05:35 PM Bug #48673: High memory usage on standby replay MDS
- We are seeing the same issue on pacific 16.2.5 as well. Not a big issue but very annoying. ...
- 11:47 AM Cleanup #51614 (Fix Under Review): mgr/nfs: remove dashboard test remnant from unit tests
- 05:26 AM Bug #52062 (Resolved): cephfs-mirror: terminating a mirror daemon can cause a crash at times
- Seen in this teuthology run which thrashes the mirror daemon for active/active HA test: https://pulpito.ceph.com/vsha...
08/04/2021
- 05:56 PM Bug #46902 (Rejected): mds: CInode::maybe_export_pin is broken
- 04:30 PM Backport #50188 (Rejected): octopus: qa: "Assertion `cb_done' failed."
- Backporting to octopus is not worth the effort.
- 11:43 AM Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- A lot of these instances in: https://pulpito.ceph.com/ideepika-2021-08-03_21:18:59-rbd-wip-rbd-update-feature-distro-ba...
- 11:29 AM Documentation #51683 (In Progress): mgr/nfs: add note about creating exports for nfs using vstart...
- 11:29 AM Bug #51800 (Fix Under Review): mgr/nfs: create rgw export with vstart
- 03:04 AM Bug #51722: mds: slow performance on parallel rm operations for multiple kclients
- The new PR will switch mds_lock to a fair mutex.
08/03/2021
- 06:22 PM Backport #52036 (In Progress): pacific: mon/MDSMonitor.cc: fix join fscid not applied with pendin...
- 06:20 PM Backport #52036 (Resolved): pacific: mon/MDSMonitor.cc: fix join fscid not applied with pending f...
- https://github.com/ceph/ceph/pull/42578
- 06:16 PM Bug #49157 (Pending Backport): mon/MDSMonitor.cc: fix join fscid not applied with pending fsmap a...
- 04:50 PM Backport #52029 (Rejected): pacific: mgr/nfs :update pool name to '.nfs' in vstart.sh
- 04:48 PM Bug #51795 (Pending Backport): mgr/nfs:update pool name to '.nfs' in vstart.sh
- 01:23 PM Bug #51975 (Fix Under Review): pybind/mgr/stats: KeyError
- 02:27 AM Bug #49132 (Triaged): mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOC...
- 01:04 AM Bug #49132: mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLOCKDONE",
- ...
08/02/2021
- 07:43 PM Bug #43960 (Triaged): MDS: incorrectly issues Fc for new opens when there is an existing writer
- 01:41 PM Bug #51923 (Duplicate): crash: Client::resolve_mds(std::__cxx11::basic_string<char, std::char_tra...
- 01:41 PM Bug #51757 (Duplicate): crash: /lib/x86_64-linux-gnu/libpthread.so.0(
- 11:44 AM Bug #51989 (Fix Under Review): cephfs-mirror: cephfs-mirror daemon status for a particular FS is ...
- 10:38 AM Bug #51989 (Resolved): cephfs-mirror: cephfs-mirror daemon status for a particular FS is not showing
- `daemon status` doesn't require a file system name....
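For reference, the invocation in question (as of Pacific's snapshot mirroring module, with no file system argument needed):

    ceph fs snapshot mirror daemon status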
08/01/2021
- 08:58 PM Backport #51983 (In Progress): pacific: mon/MDSMonitor: do not pointlessly kill standbys that are...
- 03:50 AM Backport #51983 (Resolved): pacific: mon/MDSMonitor: do not pointlessly kill standbys that are in...
- https://github.com/ceph/ceph/pull/42578
- 03:46 AM Bug #49720 (Pending Backport): mon/MDSMonitor: do not pointlessly kill standbys that are incompat...
07/30/2021
- 09:05 PM Backport #51977 (Resolved): pacific: client: make sure only to update dir dist from auth mds
- https://github.com/ceph/ceph/pull/42937
- 09:05 PM Backport #51976 (Rejected): octopus: client: make sure only to update dir dist from auth mds
- 09:03 PM Bug #51857 (Pending Backport): client: make sure only to update dir dist from auth mds
- 08:56 PM Bug #51975 (Resolved): pybind/mgr/stats: KeyError
- ...
- 02:35 AM Backport #51939 (Resolved): octopus: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to...
07/29/2021
- 08:46 PM Bug #51964 (In Progress): qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- ...
- 09:23 AM Bug #51956 (Fix Under Review): mds: switch to use ceph_assert() instead of assert()
- 09:21 AM Bug #51956 (Resolved): mds: switch to use ceph_assert() instead of assert()
- If -DNDEBUG is specified when building the code, assert() will do nothing.
I hit one odd issue that when...
07/28/2021
- 07:54 PM Bug #51923 (Triaged): crash: Client::resolve_mds(std::__cxx11::basic_string<char, std::char_trait...
- 05:00 PM Bug #51923 (Duplicate): crash: Client::resolve_mds(std::__cxx11::basic_string<char, std::char_tra...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d870d8b3d46e44c2bd507fd8...
- 06:50 PM Backport #51939 (In Progress): octopus: MDSMonitor: monitor crash after upgrade from ceph 15.2.13...
- 05:50 PM Backport #51939 (Resolved): octopus: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to...
- https://github.com/ceph/ceph/pull/42537
- 06:30 PM Backport #51940 (In Progress): pacific: MDSMonitor: monitor crash after upgrade from ceph 15.2.13...
- 05:50 PM Backport #51940 (Resolved): pacific: MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to...
- https://github.com/ceph/ceph/pull/42536
- 05:47 PM Bug #51673 (Pending Backport): MDSMonitor: monitor crash after upgrade from ceph 15.2.13 to 16.2.4
- 05:41 PM Backport #51938 (Rejected): octopus: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)...
- 05:41 PM Backport #51937 (Resolved): pacific: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)...
- https://github.com/ceph/ceph/pull/42923
- 05:40 PM Backport #51936 (Rejected): octopus: mds: improve debugging for mksnap denial
- 05:40 PM Backport #51935 (Resolved): pacific: mds: improve debugging for mksnap denial
- https://github.com/ceph/ceph/pull/42935
- 05:36 PM Cleanup #51543 (Pending Backport): mds: improve debugging for mksnap denial
- 05:36 PM Backport #51933 (Resolved): octopus: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.co...
- https://github.com/ceph/ceph/pull/45161
- 05:36 PM Bug #50984 (Resolved): qa: test_full multiple the mon_osd_full_ratio twice
- Backport tracked by #45434
- 05:36 PM Backport #51932 (Resolved): pacific: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.co...
- https://github.com/ceph/ceph/pull/42938
- 05:35 PM Bug #45434 (Pending Backport): qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- 05:33 PM Bug #48422 (Pending Backport): mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(md...
- 05:25 PM Bug #51914 (Rejected): crash: int Client::_do_remount(bool): abort
- 05:00 PM Bug #51914 (Rejected): crash: int Client::_do_remount(bool): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6bdaa8326b0988c129febd5f...
- 04:27 PM Bug #51267: CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps...
- /ceph/teuthology-archive/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/629...
- 04:18 PM Bug #51905 (Fix Under Review): qa: "error reading sessionmap 'mds1_sessionmap'"
- 04:16 PM Bug #51905 (Resolved): qa: "error reading sessionmap 'mds1_sessionmap'"
- ...
- 03:24 PM Backport #51819: pacific: cephfs-mirror: removing a mirrored directory path causes other sync fai...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42458
merged
- 11:34 AM Fix #51873 (Fix Under Review): mds: update hit_dir for dir distinguishes META_POP_IRD and METE_PO...
- mds: update hit_dir for dir distinguishes META_POP_IRD and METE_POP_READDIR in the pop
- 09:22 AM Bug #51857 (Fix Under Review): client: make sure only to update dir dist from auth mds
- 07:04 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Milind Changire wrote:
> Dan,
> 1. how many active mds were there in the cluster ?
One
> 2. was there any dir...
- 06:40 AM Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Dan,
1. how many active mds were there in the cluster ?
2. was there any dir pinning active ?
3. could you list an...
- 04:40 AM Bug #51589 (In Progress): mds: crash when journaling during replay