Activity
From 11/17/2021 to 12/16/2021
12/16/2021
- 07:32 PM Bug #53641 (Fix Under Review): mds: recursive scrub does not trigger stray reintegration
- 03:44 PM Bug #53641 (Resolved): mds: recursive scrub does not trigger stray reintegration
- One might think using recursive scrub would load a dentry and trigger reintegration, but it does not. Presently the c...
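For reference, a recursive scrub is kicked off with the MDS "scrub start" command, e.g. `ceph tell mds.<fs_name>:0 scrub start / recursive`. A minimal Python sketch of driving that command follows; the filesystem name "cephfs" and rank 0 are assumptions:

    import subprocess

    def start_recursive_scrub(path: str, fs_name: str = "cephfs") -> str:
        # Wraps the CLI form `ceph tell mds.<fs_name>:0 scrub start <path> recursive`.
        out = subprocess.check_output(
            ["ceph", "tell", f"mds.{fs_name}:0", "scrub", "start", path, "recursive"]
        )
        return out.decode()  # normally JSON output that includes the scrub tag

    print(start_recursive_scrub("/"))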
- 06:34 PM Bug #53649 (New): allow teuthology to create more than one named filesystem
- Varsha had asked that I create a test for this PR:
https://github.com/ceph/ceph/pull/44279
...but it wasn't...
- 04:48 PM Bug #53645 (New): MDCache::shutdown_pass: ceph_assert(!migrator->is_importing())
- I'm running a pinning/multimds thrash test (see stressfs.sh attached) on a 3 node test cluster and occasionally seein...
- 01:42 PM Bug #53623 (Fix Under Review): mds: LogSegment will only save one ESubtreeMap event if the ESubtr...
- 02:24 AM Bug #53623 (Fix Under Review): mds: LogSegment will only save one ESubtreeMap event if the ESubtr...
- ...
- 05:35 AM Bug #53459 (Won't Fix): mds: start a new MDLog segment if new coming event possibly exceeds the e...
- Will fix it in another tracker https://tracker.ceph.com/issues/53623. Closing this one.
- 02:44 AM Bug #53542: Ceph Metadata Pool disk throughput usage increasing
- I have figured out one case that could cause this, please see the tracker https://tracker.ceph.com/issues/53623.
Ju...
- 02:20 AM Bug #40002: mds: not trim log under heavy load
More logs:...
12/15/2021
- 05:58 PM Bug #53615 (Fix Under Review): qa: upgrade test fails with "timeout expired in wait_until_healthy"
- 05:51 PM Bug #53615: qa: upgrade test fails with "timeout expired in wait_until_healthy"
- regression caused by fix for #51984
- 10:11 AM Bug #53615 (Resolved): qa: upgrade test fails with "timeout expired in wait_until_healthy"
- https://pulpito.ceph.com/vshankar-2021-12-15_07:13:38-fs-master-testing-default-smithi/6563822/
The test reached a...
- 04:05 PM Bug #53619 (Fix Under Review): mds: fails to reintegrate strays if destdn's directory is full (EN...
- 03:00 PM Bug #53619 (Resolved): mds: fails to reintegrate strays if destdn's directory is full (ENOSPC)
- This should work because no stray needs to be created and the directory's size will not increase.
- 03:00 PM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- > Do you have the logs when the last inode disappeared ?
I got the log for inode 0x20006fdf4cf being purged. log l...
- 05:02 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- 玮文 胡 wrote:
> Xiubo Li wrote:
> > Do you have the logs when the last inode disappeared ?
>
> No, I only have deb...
- 04:56 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- Xiubo Li wrote:
> Do you have the logs when the last inode disappeared ?
No, I only have debug_mds set to 1/5. I ...
- 04:48 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- 玮文 胡 wrote:
> Xiubo Li wrote:
> > BTW, could your lasted of `ceph fs status` ?
>
> I don't quite understand thi...
- 04:37 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- Xiubo Li wrote:
> BTW, could your lasted of `ceph fs status` ?
I don't quite understand this. The output doesn't c...
- 02:34 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- 玮文 胡 wrote:
> BTW, is there any way to traverse all the inodes in the stray dir, so that I can find out all such sta...
- 01:00 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- 玮文 胡 wrote:
> The inode 0x200065b309d has gone, I don't know how. But I got another inode that crashes the rank 1. I...
- 10:39 AM Bug #40002: mds: not trim log under heavy load
- Xiubo Li wrote:
> There is one case that could lead the journal logs to fill the metadata pool, such as in cas...
- 03:39 AM Bug #53611 (Triaged): mds,client: can not identify pool id if pool name is positive integer when ...
- ...
12/14/2021
- 05:29 PM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- Though almost identical, here are the logs before the crash in the new case....
- 05:25 PM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- BTW, is there any way to traverse all the inodes in the stray dir, so that I can find out all such stall caps in one ...
- 05:18 PM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- The inode 0x200065b309d has gone, I don't know how. But I got another inode that crashes the rank 1. It is very simil...
- 07:39 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- There is one thing that looks strange to me. It is rank 1 that wants to export the inode to rank 0. But when I issue ...
- 07:19 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- Xiubo Li wrote:
> 玮文 胡 wrote:
> > This dir should have been deleted about one month ago. Just found that one client... - 06:51 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- 玮文 胡 wrote:
> This dir should have been deleted about one month ago. Just found that one client is still holding a c... - 05:49 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- Xiubo Li wrote:
> Are you using the fuse client or kclient ?
Both. But I believe only kclient have ever accessed ... - 05:37 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- Are you using the fuse client or kclient ?
- 03:42 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- This dir should have been deleted about one month ago. Just found that one client is still holding a cap on it. And I...
- 02:57 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- The dir "200065b309d/" is already located in stray/; I think it's queued for purging. Will it be possible t...
- 02:29 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- If I didn't miss something important the system dirs shouldn't be migrated in theory.
- 01:58 AM Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- Attached the logs after setting debug_mds to 1/20. This may be the most interesting part:...
- 10:47 AM Bug #53601 (Fix Under Review): vstart_runner: Running test_data_scan test locally fails with trac...
- 10:35 AM Bug #53601 (Resolved): vstart_runner: Running test_data_scan test locally fails with tracebacks
- Following tracebacks are seen
1....
- 10:08 AM Bug #40002: mds: not trim log under heavy load
- There is one case that could lead the journal logs to fill the metadata pool, such as in the case of tracker #53597...
- 01:06 AM Bug #44988 (Duplicate): client: track dirty inodes in a per-session list for effective cap flushing
12/13/2021
- 06:55 PM Bug #44100: cephfs rsync kworker high load.
- We've recently started using cephfs snapshots and are running into a similar issue with the kernel client. It seems ...
- 05:25 PM Bug #53597 (Resolved): mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
- ...
- 03:37 PM Backport #53445 (In Progress): pacific: mds: opening connection to up:replay/up:creating daemon c...
- 01:38 PM Bug #53521 (Fix Under Review): mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin
- 01:38 PM Bug #53542 (Triaged): Ceph Metadata Pool disk throughput usage increasing
- 01:15 PM Fix #52824 (Closed): qa: skip internal metadata directory when scanning ceph debugfs directory
- Not relevant anymore.
- 01:12 PM Feature #49942 (Resolved): cephfs-mirror: enable running in HA
- 01:12 PM Feature #50372 (Resolved): test: Implement cephfs-mirror trasher test for HA active/active
- 08:30 AM Bug #53509: quota support for subvolumegroup
- Elaborating a bit more on the restriction mentioned by Ramana in the first comment.
If the quota is set on the sub...
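For reference, CephFS quotas are applied by writing the ceph.quota.* virtual xattrs on a directory through a client mount, which is also how a subvolumegroup-level quota would be expressed. A minimal Python sketch, assuming a local mount at /mnt/cephfs and a hypothetical group directory:

    import os

    GROUP_PATH = "/mnt/cephfs/volumes/mygroup"  # hypothetical subvolumegroup directory

    def set_byte_quota(path: str, max_bytes: int) -> None:
        # Quotas live in the ceph.quota.max_bytes virtual xattr of the directory.
        os.setxattr(path, "ceph.quota.max_bytes", str(max_bytes).encode())

    def get_byte_quota(path: str) -> int:
        # Raises OSError (ENODATA) if no quota has been set on the directory.
        return int(os.getxattr(path, "ceph.quota.max_bytes"))

    set_byte_quota(GROUP_PATH, 100 * 1024**3)  # 100 GiB on the group directory
    print(get_byte_quota(GROUP_PATH))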
12/10/2021
- 02:49 PM Feature #50235 (Fix Under Review): allow cephfs-shell to mount named filesystems
- 11:13 AM Bug #53542: Ceph Metadata Pool disk throughput usage increasing
- We are considering increasing the "activity" based thresholds to see if we get less metadata IO.
We were actually...
- 10:06 AM Bug #53542: Ceph Metadata Pool disk throughput usage increasing
- Thanks for the reply, we tried decreasing the mds_log_max_segments option, but didn't really notice a difference.
...
- 09:18 AM Bug #53542: Ceph Metadata Pool disk throughput usage increasing
- In a heavy load case, the MDLog could accumulate many journal log events and they could be submitted in batches to the metadata pool ...
- 08:55 AM Bug #53542: Ceph Metadata Pool disk throughput usage increasing
- We also did a dump of the objecter_requests and it seems there are some large objects written by the mds-es?
ceph ...
- 10:08 AM Backport #53332 (In Progress): pacific: ceph-fuse seems to need root permissions to mount (ceph-f...
- 10:01 AM Backport #53331 (In Progress): octopus: ceph-fuse seems to need root permissions to mount (ceph-f...
- 09:12 AM Backport #53444 (In Progress): octopus: qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6...
- 08:58 AM Backport #52951 (Rejected): octopus: qa: skip internal metadata directory when scanning ceph debu...
- Not required since the relevant files are under sysfs.
- 08:58 AM Backport #52950 (Rejected): pacific: qa: skip internal metadata directory when scanning ceph debu...
- Not required since the relevant files are under sysfs.
- 04:46 AM Backport #52952 (Resolved): pacific: mds: crash when journaling during replay
12/09/2021
- 09:03 PM Bug #53574 (New): qa: downgrade testing of MDS/mons in minor releases
- Verify that an older mon/MDS can be brought up and can decode on-disk structures normally.
- 08:59 PM Bug #53573 (Resolved): qa: test new clients against older Ceph clusters
- Confirm that e.g. a Quincy client can still mount/use a Pacific CephFS cluster.
- 04:34 PM Bug #53487 (Resolved): qa: mount error 22 = Invalid argument
- 09:43 AM Documentation #53558 (New): Document cephfs recursive accounting
- I cannot find any user documentation for cephfs recursive accounting.
We should add something similar to https://b...
- 04:44 AM Bug #44916 (Fix Under Review): client: syncfs flush is only fast with a single MDS
- 01:52 AM Bug #51956 (Resolved): mds: switch to use ceph_assert() instead of assert()
- 01:46 AM Bug #53504: client: infinite loop "got ESTALE" after mds recovery
- Dan van der Ster wrote:
> Xiubo Li wrote:
> > Dan van der Ster wrote:
> > > more of the client log is attached. (f...
12/08/2021
- 05:05 PM Bug #53542 (Fix Under Review): Ceph Metadata Pool disk throughput usage increasing
- Hi All,
We have been observing that if we let our MDS run for some time, the bandwidth usage of the disks in the m...
- 02:22 PM Bug #53509: quota support for subvolumegroup
- Thanks, Ramana for the follow-up and validation of the use case.
- 08:10 AM Bug #53504: client: infinite loop "got ESTALE" after mds recovery
- Xiubo Li wrote:
> Dan van der Ster wrote:
> > more of the client log is attached. (from yesterday)
> >
> > do yo... - 08:04 AM Bug #53521 (Resolved): mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin
- This timeout issue happens with v14.2.19. It may also be reproduced in the latest version.
2021-12-05 20:42:13.472...
- 06:33 AM Bug #53520 (New): mds: put both fair mutex MDLog::submit_mutex and mds_lock to test under heavy load
- The related trackers:
MDLog::submit_mutex: https://tracker.ceph.com/issues/40002
mds_lock: https://tracker.ceph.c...
- 01:58 AM Bug #53459: mds: start a new MDLog segment if new coming event possibly exceeds the expected segm...
- Yeah, by creating a number of directories and setting the distributed pin on each of them, the ESubtreeMap event can reac...
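Distributed ephemeral pinning is enabled per directory via the ceph.dir.pin.distributed vxattr, so a reproducer only needs to create many directories and pin each one. A minimal Python sketch, assuming a client mount at /mnt/cephfs (the directory names are made up):

    import os

    MOUNT = "/mnt/cephfs"  # assumed CephFS client mount point

    # Create many directories and enable distributed ephemeral pinning on each;
    # every pinned directory adds a subtree bound, inflating the ESubtreeMap event.
    for i in range(1000):
        d = os.path.join(MOUNT, f"pin_dir_{i}")
        os.makedirs(d, exist_ok=True)
        os.setxattr(d, "ceph.dir.pin.distributed", b"1")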
12/07/2021
- 10:02 PM Bug #53509: quota support for subvolumegroup
- As per Venky, we need to keep in mind the following limitation with CephFS quotas:
"Quotas must be configured carefu... - 01:31 PM Bug #53509 (Resolved): quota support for subvolumegroup
- Today, we can apply quota to individual subvolume. However when working on a multi-tenant environment, the storage ad...
- 01:25 PM Bug #53504: client: infinite loop "got ESTALE" after mds recovery
- Dan van der Ster wrote:
> more of the client log is attached. (from yesterday)
>
> do you still need mds? which d...
- 08:16 AM Bug #53504: client: infinite loop "got ESTALE" after mds recovery
- more of the client log is attached. (from yesterday)
do you still need mds? which debug level?
- 06:26 AM Bug #53504 (Fix Under Review): client: infinite loop "got ESTALE" after mds recovery
- 02:28 AM Bug #53504: client: infinite loop "got ESTALE" after mds recovery
- Dan van der Ster wrote:
> Cluster had max_mds 3 at the time of those logs. It's running 14.2.22 -- we didn't upgrade...
- 02:20 AM Bug #53504: client: infinite loop "got ESTALE" after mds recovery
- Cluster had max_mds 3 at the time of those logs. It's running 14.2.22 -- we didn't upgrade; the latest recovery was f...
- 12:37 AM Bug #53504: client: infinite loop "got ESTALE" after mds recovery
- BTW, what's the `max_mds` in your setup? And how many MDSes were up after you upgraded? It seems there is only one.
12/06/2021
- 02:19 PM Bug #53504 (Resolved): client: infinite loop "got ESTALE" after mds recovery
- After an MDS recovery we inevitably see a few clients hammering the MDSs in a loop, doing getattr on a stale fh.
On ...
- 12:18 PM Bug #53487 (Fix Under Review): qa: mount error 22 = Invalid argument
- 11:13 AM Bug #53487: qa: mount error 22 = Invalid argument
- http://pulpito.front.sepia.ceph.com/yuriw-2021-12-03_15:27:18-rados-wip-yuri11-testing-2021-12-02-1451-distro-default...
- 09:23 AM Bug #52406 (Need More Info): cephfs_metadata pool got full after upgrade from Nautilus to Pacific...
- Did you see any suspicious logs in the mds logs? Such as mdlog->trim() not getting called, etc.
There are two similar t...
- 07:59 AM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- Hi Xiubo Li,
Thanks for the information.
The problem happened to us on nautilus 14.2.7.
Should the fixes be included...
- 01:18 AM Bug #52280: Mds crash and fails with assert on prepare_new_inode
- ...
- 03:18 AM Bug #40002 (Fix Under Review): mds: not trim log under heavy load
- 01:21 AM Bug #40002: mds: not trim log under heavy load
- The implementations of the Mutex (e.g. std::mutex in C++) do not guarantee fairness; they do not guarantee that the l...
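To illustrate the fairness property under discussion (a concept sketch only, not the MDS fair-mutex code), a FIFO "ticket lock" hands the lock to waiters strictly in arrival order, which a plain mutex does not promise:

    import threading

    class TicketLock:
        # FIFO lock: each acquirer takes a ticket and waits until it is served,
        # so a hot thread cannot keep re-acquiring the lock and starve the others.
        def __init__(self):
            self._cond = threading.Condition()
            self._next_ticket = 0
            self._now_serving = 0

        def acquire(self):
            with self._cond:
                my_ticket = self._next_ticket
                self._next_ticket += 1
                while self._now_serving != my_ticket:
                    self._cond.wait()

        def release(self):
            with self._cond:
                self._now_serving += 1
                self._cond.notify_all()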
12/03/2021
- 09:02 PM Bug #53487: qa: mount error 22 = Invalid argument
- http://pulpito.front.sepia.ceph.com/yuriw-2021-12-02_20:31:36-rados-wip-yuri5-testing-2021-12-01-0841-distro-default-...
- 03:03 PM Bug #53487 (In Progress): qa: mount error 22 = Invalid argument
- 03:03 PM Bug #53487: qa: mount error 22 = Invalid argument
- I'll take a look...
- 02:15 PM Bug #53487 (Resolved): qa: mount error 22 = Invalid argument
- ...
12/02/2021
- 01:30 PM Bug #53360: pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only suppor...
- Ramana, any idea about the module traceback? I recall seeing it earlier...
- 07:04 AM Bug #53459 (Fix Under Review): mds: start a new MDLog segment if new coming event possibly exceed...
- 06:57 AM Bug #53459 (Won't Fix): mds: start a new MDLog segment if new coming event possibly exceeds the e...
- The following is one example of the mds side logs:...
- 05:20 AM Backport #53458 (Resolved): pacific: pacific: qa: Test failure: test_deep_split (tasks.cephfs.tes...
- https://github.com/ceph/ceph/pull/44642
- 05:15 AM Bug #52487 (Pending Backport): qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.Test...
12/01/2021
- 07:35 PM Bug #53214: qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.c...
- This patch doesn't appear to be applicable to Pacific or Octopus since the get_op_read_count method doesn't exist in ...
- 04:09 AM Bug #53214 (Pending Backport): qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-...
- 10:54 AM Bug #53216 (Resolved): qa: "RuntimeError: value of attributes should be either str or None. clien...
- 09:50 AM Bug #53360: pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only suppor...
- `volume_client` script threw a traceback with "ModuleNotFoundError: No module named 'ceph_volume_client'"::...
- 09:26 AM Bug #53360: pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only suppor...
- From monitor logs::...
- 06:31 AM Bug #40002 (In Progress): mds: not trim log under heavy load
- 06:29 AM Feature #10764 (In Progress): optimize memory usage of MDSCacheObject
- 06:21 AM Bug #52397 (Resolved): pacific: qa: test_acls (tasks.cephfs.test_acls.TestACLs) failed
- This has been fixed by the following commit in git://git.ceph.com/xfstests-dev.git:...
- 04:18 AM Bug #53436 (Duplicate): mds, mon: mds beacon messages get dropped? (mds never reaches up:active s...
- This is a known bug from a long time ago.
- 02:26 AM Bug #53436: mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
- From remote/smithi154/log/ceph-mds.d.log.gz, we can see that the mon.1 connection was broken:...
- 04:10 AM Backport #53446 (Rejected): octopus: mds: opening connection to up:replay/up:creating daemon caus...
- 04:10 AM Backport #53445 (Resolved): pacific: mds: opening connection to up:replay/up:creating daemon caus...
- https://github.com/ceph/ceph/pull/44296
- 04:10 AM Backport #53444 (Resolved): octopus: qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731...
- https://github.com/ceph/ceph/pull/44270
- 04:10 AM Backport #53443 (Rejected): pacific: qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731...
- 04:09 AM Bug #53082 (Resolved): ceph-fuse: segmenetation fault in Client::handle_mds_map
- 04:07 AM Bug #53194 (Pending Backport): mds: opening connection to up:replay/up:creating daemon causes mes...
- 04:06 AM Feature #52725 (Resolved): qa: mds_dir_max_entries workunit test case
- 04:05 AM Feature #47277 (Resolved): implement new mount "device" syntax for kcephfs
11/30/2021
- 01:48 PM Bug #53436: mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
- Seems to be the same issue as https://tracker.ceph.com/issues/51705.
- 01:39 PM Bug #53436 (Triaged): mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
- 12:46 PM Bug #53436 (Duplicate): mds, mon: mds beacon messages get dropped? (mds never reaches up:active s...
- Seen in this run - https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-test...
- 11:37 AM Feature #40633 (In Progress): mds: dump recent log events for extraordinary events
- 09:20 AM Bug #48711 (Closed): mds: standby-replay mds abort when replay metablob
- No updates from haitao yet, closing this.
- 01:31 AM Bug #16739 (Fix Under Review): Client::setxattr always sends setxattr request to MDS
- 01:17 AM Feature #18514 (Resolved): qa: don't use a node for each kclient
- I had pushed several patches adding network namespace unsharing support to fix this. For more detail please see https:...
11/29/2021
- 11:21 AM Bug #52625 (Resolved): qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:20 AM Bug #52949 (Resolved): RuntimeError: The following counters failed to be set on mds daemons: {'md...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:20 AM Bug #52975 (Resolved): MDSMonitor: no active MDS after cluster deployment
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:20 AM Bug #52994 (Resolved): client: do not defer releasing caps when revoking
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:19 AM Bug #53155 (Resolved): MDSMonitor: assertion during upgrade to v16.2.5+
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:17 AM Backport #53121: pacific: mds: collect I/O sizes from client for cephfs-top
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43784
m...
- 11:15 AM Backport #53217 (Resolved): pacific: test: Implement cephfs-mirror trasher test for HA active/active
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43924
m...
- 11:15 AM Backport #53164: pacific: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43815
m...
- 11:15 AM Backport #52678 (Resolved): pacific: qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnap...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43702
m...
- 11:14 AM Backport #53231 (Resolved): pacific: MDSMonitor: assertion during upgrade to v16.2.5+
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43890
m...
- 11:14 AM Backport #53006: pacific: RuntimeError: The following counters failed to be set on mds daemons: {...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43828
m...
- 05:40 AM Bug #39634 (Fix Under Review): qa: test_full_same_file timeout
- 05:34 AM Bug #39634: qa: test_full_same_file timeout
- In /ceph/teuthology-archive/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2021-11-08-1003-pacific-distro-basic-smith...
11/26/2021
- 09:50 AM Bug #48673: High memory usage on standby replay MDS
- Patrick Donnelly wrote:
> I've been able to reproduce this. Will try to track down the cause...
The same situatio...
11/25/2021
- 05:41 AM Bug #48812 (New): qa: test_scrub_pause_and_resume_with_abort failure
- This has started to show up again: https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-2021...
- 05:34 AM Bug #39634: qa: test_full_same_file timeout
- Checked all the other OSDs; none of them reached the "mon osd full ratio: 0.7". Only osd.4 did.
That means the ...
- 05:30 AM Bug #39634: qa: test_full_same_file timeout
- When the test_full test case was deleting the "large_file_b" and "large_file_a", from /ceph/teuthology-archive/yuriw-...
11/24/2021
- 11:40 AM Feature #53310: Add admin socket command to trim caps
- Patrick mentioned these config options:
- mds_session_cache_liveness_decay_rate
- mds_session_cache_livenes...
- 11:22 AM Bug #53360: pacific: client: "handle_auth_bad_method server allowed_methods [2] but i only suppor...
- ceph-fuse fails way before `install.upgrade` in run. Looks like the failure is when everything is nautilus::...
11/23/2021
- 06:18 PM Fix #52591 (Fix Under Review): mds: mds_oft_prefetch_dirfrags = false is not qa tested
- 04:08 PM Bug #52094 (Duplicate): Tried out Quincy: All MDS Standby
- 01:54 PM Feature #53310: Add admin socket command to trim caps
- Brief background - this request came up from some community members. They run a file system scanning job every day (?...
- 01:40 PM Bug #53360 (Triaged): pacific: client: "handle_auth_bad_method server allowed_methods [2] but i o...
- 01:22 PM Bug #52487 (Fix Under Review): qa: Test failure: test_deep_split (tasks.cephfs.test_fragment.Test...
- 01:20 PM Bug #53300 (Duplicate): qa: cluster [WRN] Scrub error on inode
- Duplicate of https://tracker.ceph.com/issues/50250
- 09:24 AM Bug #50946 (Duplicate): mgr/stats: exception ValueError in perf stats
- 09:19 AM Bug #50946: mgr/stats: exception ValueError in perf stats
- This issue seems duplicate of https://tracker.ceph.com/issues/48473
It will automatically get resolved once https://...
- 04:17 AM Feature #49811 (Resolved): mds: collect I/O sizes from client for cephfs-top
- 04:15 AM Backport #53121 (Resolved): pacific: mds: collect I/O sizes from client for cephfs-top
11/22/2021
- 05:37 PM Documentation #53236 (Resolved): doc: ephemeral pinning with subvolumegroups
- 05:36 PM Backport #53245 (Resolved): pacific: doc: ephemeral pinning with subvolumegroups
- 05:33 PM Bug #53360 (Duplicate): pacific: client: "handle_auth_bad_method server allowed_methods [2] but i...
- Nautilus ceph-fuse client fails to start for Pacific upgrade tests:...
- 02:58 PM Backport #53121: pacific: mds: collect I/O sizes from client for cephfs-top
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43784
merged
- 02:27 PM Bug #53293 (Resolved): qa: v16.2.4 mds crash caused by centos stream kernel
- 02:41 AM Bug #53082 (Fix Under Review): ceph-fuse: segmenetation fault in Client::handle_mds_map
- 01:24 AM Backport #53164 (Resolved): pacific: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
11/20/2021
- 05:03 PM Backport #53347 (Resolved): pacific: qa: v16.2.4 mds crash caused by centos stream kernel
- 12:47 AM Backport #53347 (In Progress): pacific: qa: v16.2.4 mds crash caused by centos stream kernel
11/19/2021
- 11:45 PM Backport #53347 (Resolved): pacific: qa: v16.2.4 mds crash caused by centos stream kernel
- https://github.com/ceph/ceph/pull/44034
- 11:44 PM Bug #53293 (Pending Backport): qa: v16.2.4 mds crash caused by centos stream kernel
- 02:17 PM Backport #53217: pacific: test: Implement cephfs-mirror trasher test for HA active/active
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43924
merged
- 02:14 PM Backport #53164: pacific: mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43815
merged
- 04:45 AM Backport #53332 (Resolved): pacific: ceph-fuse seems to need root permissions to mount (ceph-fuse...
- https://github.com/ceph/ceph/pull/44272
- 04:45 AM Backport #53331 (Resolved): octopus: ceph-fuse seems to need root permissions to mount (ceph-fuse...
- https://github.com/ceph/ceph/pull/44271
- 04:41 AM Documentation #53054 (Pending Backport): ceph-fuse seems to need root permissions to mount (ceph-...
11/18/2021
- 03:23 PM Backport #53120: pacific: client: do not defer releasing caps when revoking
- https://github.com/ceph/ceph/pull/43782 merged
- 03:21 PM Backport #52678: pacific: qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43702
merged
- 02:51 PM Bug #48473 (Fix Under Review): fs perf stats command crashes
- 02:31 PM Bug #53314 (Duplicate): qa: fs/upgrade/mds_upgrade_sequence test timeout
- 09:12 AM Bug #53314: qa: fs/upgrade/mds_upgrade_sequence test timeout
- @Xiubo, I think the PR https://github.com/ceph/ceph/pull/43784 is causing this.
- 09:09 AM Bug #53314 (Duplicate): qa: fs/upgrade/mds_upgrade_sequence test timeout
- The qa suite mds_upgrade_sequence becomes dead with job timeout because of mds crash.
-------------
ceph versi...
- 12:31 PM Bug #48773: qa: scrub does not complete
- Another Instance
http://qa-proxy.ceph.com/teuthology/yuriw-2021-11-17_19:02:43-fs-wip-yuri10-testing-2021-11-17-08...
- 08:50 AM Bug #39634: qa: test_full_same_file timeout
- Will work on it.
- 08:47 AM Bug #52396 (Duplicate): pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.O...
- This is a duplicate of https://tracker.ceph.com/issues/53218.
- 03:55 AM Cleanup #51406 (Fix Under Review): mgr/volumes/fs/operations/versions/op_sm.py: fix various flake...
11/17/2021
- 09:29 PM Feature #53310 (New): Add admin socket command to trim caps
- Add an admin socket command to cause the MDS to reclaim state from a client. This would simply involve calling reclai...
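The trim-caps command proposed here does not exist yet; for context, per-session state such as cap counts can already be inspected with the existing `session ls` command, which the new command would sit alongside. A minimal Python sketch, assuming a daemon named mds.a:

    import json
    import subprocess

    def list_sessions(mds_daemon: str = "mds.a") -> list:
        # Existing command; the proposed trim-caps command would act per session id.
        out = subprocess.check_output(["ceph", "tell", mds_daemon, "session", "ls"])
        return json.loads(out)

    for s in list_sessions():
        print(s.get("id"), s.get("num_caps"))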
- 04:30 PM Backport #53232 (Resolved): pacific: MDSMonitor: no active MDS after cluster deployment
- 03:59 PM Backport #53232: pacific: MDSMonitor: no active MDS after cluster deployment
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43891
merged
- 04:14 PM Backport #53231: pacific: MDSMonitor: assertion during upgrade to v16.2.5+
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43890
merged
- 03:49 PM Backport #53006 (Resolved): pacific: RuntimeError: The following counters failed to be set on mds...
- 03:16 PM Backport #53006: pacific: RuntimeError: The following counters failed to be set on mds daemons: {...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43828
merged
- 03:48 PM Backport #53120 (Resolved): pacific: client: do not defer releasing caps when revoking
- 01:41 PM Bug #50946: mgr/stats: exception ValueError in perf stats
- Nikhil, please take this one.
- 11:00 AM Backport #53304 (Rejected): octopus: Improve API documentation for struct ceph_client_callback_args
- 11:00 AM Backport #53303 (Rejected): pacific: Improve API documentation for struct ceph_client_callback_args
- 10:57 AM Documentation #53004 (Pending Backport): Improve API documentation for struct ceph_client_callbac...
- 10:56 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- I'll take a look at the failure soon.
- 10:35 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- Also seen in this run
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_15:19:37-fs-wip-yuri2-testing-2021-11-0...
- 08:46 AM Bug #53082 (In Progress): ceph-fuse: segmenetation fault in Client::handle_mds_map
- 07:35 AM Bug #53300 (Duplicate): qa: cluster [WRN] Scrub error on inode
- "2021-11-09T18:59:42.703093+0000 mds.l (mds.0) 19 : cluster [WRN] Scrub error on inode 0x100000012cc (/client.0/tmp/b...
- 07:18 AM Bug #39634: qa: test_full_same_file timeout
- Seen in the below pacific run
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-08_20:21:11-fs-wip-yuri5-testing-2...
- 06:42 AM Bug #51705: qa: tasks.cephfs.fuse_mount:mount command failed
- New instance
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testing-2021-11-11-1339-pa...
- 06:34 AM Backport #52875: pacific: qa: test_dirfrag_limit
- Seen in this pacific test run as well. Should go away once the fix is backported
http://pulpito.front.sepia.ceph.c...
- 06:25 AM Bug #52396: pacific: qa: ERROR: test_perf_counters (tasks.cephfs.test_openfiletable.OpenFileTable)
- Seen in this pacific run as well
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testin...
- 06:00 AM Bug #53216 (Fix Under Review): qa: "RuntimeError: value of attributes should be either str or Non...
- The parameters' order is incorrect and the 'client_config' is missing.
- 04:38 AM Backport #53218 (In Progress): pacific: qa: Test failure: test_perf_counters (tasks.cephfs.test_o...
- 01:04 AM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Andras Pataki wrote:
> Thanks Patrick! How safe do you feel this patch is? Does it need a lot of testing or is it ...