Activity
From 04/03/2022 to 05/02/2022
05/02/2022
- 05:03 PM Bug #55516: qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 col...
- From logs for a sample test:...
- 05:02 PM Bug #55516 (Resolved): qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data:...
- Started seeing these failures recently:
- https://pulpito.ceph.com/vshankar-2022-05-02_09:11:25-fs-wip-vshankar-te...
- 01:28 PM Support #55486 (In Progress): cephfs degraded during upgrade from 16.2.5 -> 16.2.6
- Hi Jesse,
Do you have the MDS logs when the file system was reported as damaged? cephadm does set the relevant con...
- 12:52 PM Backport #55239 (In Progress): quincy: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-...
- 12:52 PM Backport #55238 (In Progress): pacific: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an...
- 12:44 PM Bug #55464 (In Progress): cephfs: mds/client error when client stale reconnect
- 12:21 PM Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MD...
- I don't think you have enough information to solve this:
It's not clear which test actually failed. pjdfstests con...
- 11:57 AM Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MD...
- Assigning to Xiubo for further investigation about which commit fixed this issue.
- 06:59 AM Bug #55041: mgr/volumes: display in-progress clones for a snapshot
- from irc chat:...
- 06:37 AM Bug #53601 (Resolved): vstart_runner: Running test_data_scan test locally fails with tracebacks
- 06:29 AM Feature #55401 (Fix Under Review): mgr/volumes: allow users to add metadata (key-value pairs) for...
- 04:55 AM Backport #55413: quincy: mds: add perf counter to record slow replies
- Nikhil, please take this.
- 04:55 AM Backport #55412: pacific: mds: add perf counter to record slow replies
- Nikhil, please take this.
- 04:53 AM Backport #55385: quincy: mgr/snap_schedule: include timezone information in scheduled snapshots
- Milind, please take this.
04/30/2022
- 08:18 AM Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MD...
- Since the teuthology run used a kclient, I tried running 100 iterations of pjd.sh on the latest testing kernel 5.18.0...
04/29/2022
- 04:32 PM Support #55486: cephfs degraded during upgrade from 16.2.5 -> 16.2.6
- I've managed to fix this, and am posting here to save anyone else from wasting as much time as I did.
After some...
- 03:48 PM Backport #55348: quincy: mgr/volumes: Show clone failure reason in clone status command
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45927
merged
- 03:47 PM Backport #55337: quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metri...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45291
merged
- 03:46 PM Backport #54480: quincy: mgr/stats: be resilient to offline MDS rank-0
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45291
merged
- 08:44 AM Bug #55313: Unexpected file access behavior using ceph-fuse
- Thanks, I can confirm that this works and, as you mentioned, does slow down file access. In our case, which is an rsync...
04/28/2022
- 07:30 PM Support #55486 (In Progress): cephfs degraded during upgrade from 16.2.5 -> 16.2.6
- Hello everyone. I've tried upgrading my ceph cluster by a point release following instructions here: https://docs.cep...
- 05:29 PM Bug #54546: mds: crash due to corrupt inode and omap entry
- Saw this in another cluster. The corruption is seen in the EMetaBlob journal event. The inode+dentry fetch from the j...
- 12:44 PM Bug #55313: Unexpected file access behavior using ceph-fuse
- Matthias Aebi wrote:
> Ok, thank you. I'll certainly give this a try. Besides some cost in performance, does this ha...
- 12:35 PM Bug #55313: Unexpected file access behavior using ceph-fuse
- Ok, thank you. I'll certainly give this a try. Besides some cost in performance, does this have any impact on who mig...
- 12:26 PM Bug #55313: Unexpected file access behavior using ceph-fuse
- Hi Matthias,
Quick workaround would be to set "fuse_default_permissions=true" but it might cost you performance.
- 12:24 PM Bug #55313 (Fix Under Review): Unexpected file access behavior using ceph-fuse
- 05:01 AM Bug #55170 (Fix Under Review): mds: crash during rejoin (CDir::fetch_keys)
04/27/2022
- 05:11 PM Bug #54701: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDentry*, MDR...
- I've managed to reproduce this crash today. Will send out a fix.
- 02:26 PM Feature #55470 (Resolved): qa: postgresql test suite workunit
- Run postgresql database test suite as a workunit for cephfs.
- 06:58 AM Bug #55464 (In Progress): cephfs: mds/client error when client stale reconnect
- Options:
mds_session_blocklist_on_evict: false
mds_session_blocklist_on_timeout: false
client_reconnect_stal...
- 05:42 AM Feature #55463 (Duplicate): cephfs-top: allow users to chose sorting order
- Right now, the client list is sorted based on client connection order. Allow users to choose a sort field. This would...
04/26/2022
- 05:01 PM Bug #54236 (Resolved): qa/cephfs: change default timeout from 900 secs to 300
- 12:40 PM Feature #48911 (Fix Under Review): cephfs-shell needs "ln" command equivalent
- 09:45 AM Backport #55449 (Resolved): pacific: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm ...
- https://github.com/ceph/ceph/pull/46798
- 06:19 AM Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Venky Shankar wrote:
> Xiubo, please take a look.
Sure.
- 05:46 AM Bug #55332 (Triaged): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- Xiubo, please take a look.
- 05:54 AM Bug #55313: Unexpected file access behavior using ceph-fuse
- Thanks for the report, Matthias. This seems straightforward to reproduce.
Kotresh, please take a look.
- 05:49 AM Bug #55316: qa: add client asok support to get the options
- Neeraj, guessing this is probably required for writing tests to be run by vstart_runner for https://github.com/ceph/ce...
- 05:47 AM Bug #55331 (Triaged): pjd failure (caused by xattr's value not consistent between auth MDS and re...
- Milind, please take a look.
- 04:30 AM Backport #55447 (Resolved): quincy: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm d...
- https://github.com/ceph/ceph/pull/46476
- 04:25 AM Bug #54411 (Pending Backport): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon...
- 02:36 AM Bug #55446 (New): mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' c...
- mgr-nfs-upgrade example: /a/yuriw-2022-04-23_16:12:08-rados-wip-55324-pacific-backport-distro-default-smithi/6803121
...
04/25/2022
- 02:20 PM Backport #55428 (Resolved): quincy: unaccessible dentries after fsstress run with namespace-restr...
- https://github.com/ceph/ceph/pull/46184
- 02:20 PM Backport #55427 (Resolved): pacific: unaccessible dentries after fsstress run with namespace-rest...
- https://github.com/ceph/ceph/pull/46183
- 02:19 PM Bug #54046 (Pending Backport): unaccessible dentries after fsstress run with namespace-restricted...
04/22/2022
- 05:30 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Hmm, the stuff in split_at looks like we can and should just swap the logic -- instead of iterating over all inodes i...
- 02:12 PM Feature #55414 (New): mds:asok interface to cleanup permanently damaged inodes
- There exists a nagging bug in the MDS due to a corrupt on-disk inode causing an assert in the MDS when removing the ...
- 12:00 PM Backport #55413 (Resolved): quincy: mds: add perf counter to record slow replies
- https://github.com/ceph/ceph/pull/46156
- 12:00 PM Backport #55412 (Resolved): pacific: mds: add perf counter to record slow replies
- https://github.com/ceph/ceph/pull/46138
- 11:57 AM Feature #55126 (Pending Backport): mds: add perf counter to record slow replies
- 11:57 AM Feature #55126 (Resolved): mds: add perf counter to record slow replies
- 08:57 AM Bug #55409 (Resolved): client: incorrect operator precedence in Client.cc
- Here's the code I am referring to in following explanation - https://github.com/ceph/ceph/commit/ad61e1dd1a56cd27be17...
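The commit link above is truncated, so as a generic, hypothetical C++ sketch (not the actual Client.cc code) of the kind of precedence bug this tracker describes: `==` binds tighter than `&`, so a bitmask test written without parentheses compares the mask against zero first.

```cpp
#include <cassert>

// Hypothetical illustration only -- not the actual Client.cc code.
// In C++, `==` has higher precedence than `&`, so `flags & mask == 0`
// parses as `flags & (mask == 0)`, which is almost never what was meant.
bool buggy_is_clear(unsigned flags, unsigned mask) {
    return flags & mask == 0;     // parses as flags & (mask == 0)
}

bool fixed_is_clear(unsigned flags, unsigned mask) {
    return (flags & mask) == 0;   // explicit parentheses: intended meaning
}
```

With `flags = 0` and `mask = 0x4`, the intended answer is true (no bits set), but the buggy form evaluates `0 & (0x4 == 0)`, i.e. `0 & 0`, and returns false. Compilers flag this pattern with `-Wparentheses`, which is one way such bugs get caught.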
- 03:07 AM Backport #55376 (In Progress): quincy: mgr/volumes: allow users to add metadata (key-value pairs)...
04/21/2022
- 01:22 PM Bug #55394 (Pending Backport): qa/cephfs: don't exclamation mark on test_cephfs_shell.py
- 08:40 AM Feature #55401 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) for subvolu...
- This is similar to subvolume metadata get/set/list/remove. Updating an existing key should be supported.
The snapsho...
- 05:45 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- NOTE: in this test the _*inline data*_ is enabled:...
04/20/2022
- 11:30 AM Bug #55394 (Pending Backport): qa/cephfs: don't exclamation mark on test_cephfs_shell.py
- The exclamation mark is a special character for bash as well as
cephfs-shell. For bash, it substitutes the current command w...
- 04:14 AM Backport #55384 (In Progress): pacific: mgr/snap_schedule: include timezone information in schedu...
04/19/2022
- 05:30 PM Backport #55385 (Resolved): quincy: mgr/snap_schedule: include timezone information in scheduled ...
- https://github.com/ceph/ceph/pull/47734
- 05:30 PM Backport #55384 (Resolved): pacific: mgr/snap_schedule: include timezone information in scheduled...
- https://github.com/ceph/ceph/pull/45968
- 05:27 PM Bug #54374 (Pending Backport): mgr/snap_schedule: include timezone information in scheduled snaps...
- 06:30 AM Bug #54374 (Fix Under Review): mgr/snap_schedule: include timezone information in scheduled snaps...
- 03:16 PM Backport #55375 (In Progress): pacific: mgr/volumes: allow users to add metadata (key-value pairs...
- 11:25 AM Backport #55375 (Resolved): pacific: mgr/volumes: allow users to add metadata (key-value pairs) t...
- https://github.com/ceph/ceph/pull/45961
- 11:46 AM Bug #55240 (Fix Under Review): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 11:39 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- I have created a new tracker #55377 to fix the kernel issue in https://tracker.ceph.com/issues/55240#note-4.
And thi...
- 09:16 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- Another issue in this failure:
In _*mds.1*_, after it finds the inode for _*#0x1/client.0/tmp/fsstress/ltp-full-2009...
- 06:18 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- The file _*#0x1/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress*_ was created in _*mds....
- 05:52 AM Bug #55240 (In Progress): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 11:25 AM Backport #55376 (Resolved): quincy: mgr/volumes: allow users to add metadata (key-value pairs) to...
- https://github.com/ceph/ceph/pull/45994
- 11:22 AM Feature #54472 (Pending Backport): mgr/volumes: allow users to add metadata (key-value pairs) to ...
- 09:19 AM Bug #55196 (In Progress): mgr/stats: perf stats command doesn't have filter option for fs names.
- 09:19 AM Bug #55234 (Fix Under Review): snap_schedule: replace .snap with the client configured snap dir name
- 09:12 AM Feature #51434 (Fix Under Review): pybind/mgr/volumes: add basic introspection
- 09:01 AM Backport #55338 (In Progress): pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs...
- 08:56 AM Backport #55337 (In Progress): quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs....
- 04:04 AM Backport #55039 (In Progress): quincy: ceph-fuse: mount -a on already mounted folder should be ig...
04/18/2022
- 04:00 PM Backport #55056: pacific: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr r...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45906
merged
- 04:00 PM Backport #53760: pacific: snap scheduler: cephfs snapshot schedule status doesn't list the snapsh...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45906
merged
- 09:50 AM Bug #55354 (Resolved): cephfs: xfstests-dev can't be run against fuse mounted cephfs
- This will require 2 steps -
1. Modify xfstests-dev repo to add the ability to mount CephFS using FUSE.
2. Modify qa...
- 08:53 AM Backport #55353 (In Progress): quincy: pybind/mgr/volumes: Clone operation hangs
- 08:50 AM Backport #55353 (Resolved): quincy: pybind/mgr/volumes: Clone operation hangs
- https://github.com/ceph/ceph/pull/45927
- 08:52 AM Backport #55352 (In Progress): pacific: pybind/mgr/volumes: Clone operation hangs
- 08:50 AM Backport #55352 (Resolved): pacific: pybind/mgr/volumes: Clone operation hangs
- https://github.com/ceph/ceph/pull/45928
- 08:51 AM Backport #55349 (In Progress): pacific: mgr/volumes: Show clone failure reason in clone status co...
- 04:15 AM Backport #55349 (Resolved): pacific: mgr/volumes: Show clone failure reason in clone status command
- https://github.com/ceph/ceph/pull/45928
- 08:48 AM Backport #55348 (In Progress): quincy: mgr/volumes: Show clone failure reason in clone status com...
- 04:15 AM Backport #55348 (Resolved): quincy: mgr/volumes: Show clone failure reason in clone status command
- https://github.com/ceph/ceph/pull/45927
- 08:45 AM Bug #55217 (Pending Backport): pybind/mgr/volumes: Clone operation hangs
- 05:52 AM Backport #55040 (In Progress): pacific: ceph-fuse: mount -a on already mounted folder should be i...
- 04:14 AM Bug #55190 (Pending Backport): mgr/volumes: Show clone failure reason in clone status command
04/17/2022
- 09:55 AM Backport #55346 (Resolved): pacific: client: get stuck forever when the forward seq exceeds 256
- https://github.com/ceph/ceph/pull/46179
- 09:55 AM Backport #55345 (Resolved): quincy: client: get stuck forever when the forward seq exceeds 256
- https://github.com/ceph/ceph/pull/46178
- 09:53 AM Bug #55129 (Pending Backport): client: get stuck forever when the forward seq exceeds 256
04/16/2022
- 03:25 PM Backport #55343 (Resolved): pacific: mds: try to reset heartbeat when fetching or committing.
- https://github.com/ceph/ceph/pull/46180
- 03:25 PM Backport #55342 (Resolved): quincy: mds: try to reset heartbeat when fetching or committing.
- https://github.com/ceph/ceph/pull/46181
- 03:20 PM Bug #54345 (Pending Backport): mds: try to reset heartbeat when fetching or committing.
- 03:20 PM Bug #54345 (Resolved): mds: try to reset heartbeat when fetching or committing.
04/14/2022
- 12:10 PM Backport #55338 (Resolved): pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.te...
- https://github.com/ceph/ceph/pull/45293
- 12:10 PM Backport #55337 (Resolved): quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.tes...
- https://github.com/ceph/ceph/pull/45291
- 12:07 PM Bug #54971 (Pending Backport): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds...
- 12:05 PM Backport #55336 (Resolved): quincy: Issue removing subvolume with retained snapshots - Possible q...
- https://github.com/ceph/ceph/pull/46140
- 12:05 PM Backport #55335 (Resolved): pacific: Issue removing subvolume with retained snapshots - Possible ...
- https://github.com/ceph/ceph/pull/46139
- 12:02 PM Bug #54625 (Pending Backport): Issue removing subvolume with retained snapshots - Possible quincy...
- 09:35 AM Bug #55332 (Resolved): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-testing-default-smithi...
- 08:58 AM Bug #55331 (Resolved): pjd failure (caused by xattr's value not consistent between auth MDS and r...
- This run: https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-testing-defa...
- 06:02 AM Bug #50821: qa: untar_snap_rm failure during mds thrashing
- Similar failure here: https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-...
- 05:39 AM Bug #55329 (Fix Under Review): qa: add test case for fsync crash issue
- 05:35 AM Bug #55329: qa: add test case for fsync crash issue
- This can be reproduced very easily by using the following kernel patch:...
- 05:30 AM Bug #55329 (Resolved): qa: add test case for fsync crash issue
- This is the test case for https://tracker.ceph.com/issues/55327.
04/13/2022
- 02:04 PM Backport #55264: quincy: mount.ceph: mount helper incorrectly passes `ms_mode' mount option to ol...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45780
merged
- 12:14 PM Bug #55316 (New): qa: add client asok support to get the options
- Currently vstart_runner.py only supports mon/mds/osd:...
- 09:29 AM Bug #55313 (Resolved): Unexpected file access behavior using ceph-fuse
- Since upgrading from Nautilus (14.2.21) to Pacific (16.2.7) ceph-fuse shows a rather unexpected and unusual behavior ...
- 08:09 AM Backport #53760 (In Progress): pacific: snap scheduler: cephfs snapshot schedule status doesn't l...
- 07:28 AM Backport #53760 (New): pacific: snap scheduler: cephfs snapshot schedule status doesn't list the ...
- * re-doing bad backport
- 02:23 AM Feature #55283: qa: add fsync/sync stuck waiting for unsafe request test
- Normally, before fixing this, we could reproduce it very easily, and mostly the duration is longer, around 4 secon...
- 02:11 AM Feature #55283: qa: add fsync/sync stuck waiting for unsafe request test
- Added support for two test cases: one for file sync and another for filesystem sync.
- 02:09 AM Feature #55283 (Fix Under Review): qa: add fsync/sync stuck waiting for unsafe request test
04/12/2022
- 02:51 PM Backport #55239: quincy: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- https://github.com/ceph/ceph/pull/45879
- 02:39 PM Backport #55238: pacific: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- https://github.com/ceph/ceph/pull/45878
- 09:05 AM Feature #55283 (In Progress): qa: add fsync/sync stuck waiting for unsafe request test
- 04:43 AM Feature #55283 (Resolved): qa: add fsync/sync stuck waiting for unsafe request test
- The kclient has fixed this in:...
- 06:29 AM Bug #54701 (Triaged): crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDe...
04/11/2022
- 01:57 PM Backport #55264 (In Progress): quincy: mount.ceph: mount helper incorrectly passes `ms_mode' moun...
- 01:55 PM Backport #55264 (In Progress): quincy: mount.ceph: mount helper incorrectly passes `ms_mode' moun...
- https://github.com/ceph/ceph/pull/45780
- 01:50 PM Bug #55110 (Pending Backport): mount.ceph: mount helper incorrectly passes `ms_mode' mount option...
- 01:48 PM Bug #55216 (Fix Under Review): cephfs-shell: creates directories in local file system even if fil...
- 01:47 PM Bug #55242 (Fix Under Review): cephfs-shell: put command should accept both path mandatorily and ...
- 01:00 PM Bug #55165 (Fix Under Review): client: validate pool against pool ids as well as pool names
- 12:57 PM Bug #55234 (Triaged): snap_schedule: replace .snap with the client configured snap dir name
- 12:55 PM Bug #55236 (Triaged): qa: fs/snaps tests fails with "hit max job timeout"
- 05:22 AM Bug #55236: qa: fs/snaps tests fails with "hit max job timeout"
- Another instance: https://pulpito.ceph.com/vshankar-2022-04-09_12:55:41-fs-wip-vshankar-testing-55110-20220408-203242...
- 12:54 PM Bug #55217 (Fix Under Review): pybind/mgr/volumes: Clone operation hangs
- 12:53 PM Bug #55240 (Triaged): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 05:30 AM Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.ex...
- https://pulpito.ceph.com/vshankar-2022-04-09_12:55:41-fs-wip-vshankar-testing-55110-20220408-203242-testing-default-s...
- 05:30 AM Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.ex...
- Ramana, this is showing up a bit in master. Please take a look.
- 05:15 AM Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
- Maybe related (but no backtrace in OSDs): https://pulpito.ceph.com/vshankar-2022-04-09_12:55:41-fs-wip-vshankar-testi...
- 03:03 AM Bug #55253 (Fix Under Review): client: switch to glibc's STATX macros
- 02:50 AM Bug #55253 (Resolved): client: switch to glibc's STATX macros
- Currently glibc supports the STATX macros:...
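For context, a minimal sketch of relying on the glibc-provided definitions instead of private copies (assuming Linux with glibc >= 2.28, where the statx(2) wrapper, `struct statx`, and the STATX_* mask macros come from `<sys/stat.h>`; `size_via_statx` is a name made up for this example):

```cpp
#include <fcntl.h>     // AT_FDCWD
#include <sys/stat.h>  // statx(), struct statx, STATX_* (glibc >= 2.28)

// Returns the file size reported by statx(), or -1 on error.
long long size_via_statx(const char *path) {
    struct statx stx;
    // STATX_SIZE asks the kernel to fill in stx_size only.
    if (statx(AT_FDCWD, path, 0, STATX_SIZE, &stx) != 0)
        return -1;
    return static_cast<long long>(stx.stx_size);
}
```

With the macros available from the system headers, userspace callers no longer need to carry their own fallback STATX definitions.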
04/10/2022
- 08:02 PM Bug #53996 (In Progress): qa: update fs:upgrade tasks to upgrade from pacific instead of octopus,...
04/08/2022
- 01:03 PM Bug #55242 (Resolved): cephfs-shell: put command should accept both path mandatorily and validate...
- Currently, there are no checks to make sure local_path is valid. For instance, for a file "helloworld" at /home/dparm...
- 10:25 AM Feature #48736: qa: enable debug loglevel kclient test suits
- Please add the `dynamic_debug` option in the `.yaml` file, like:...
- 09:43 AM Bug #55240 (Resolved): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- Seen here: https://pulpito.ceph.com/vshankar-2022-04-07_05:07:33-fs-master-testing-default-smithi/6780578/
It's an ...
- 06:45 AM Backport #55239 (Resolved): quincy: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds...
- 06:45 AM Backport #55238 (Resolved): pacific: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-md...
- 06:41 AM Documentation #54551 (Pending Backport): docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-a...
- 05:39 AM Bug #55236 (Triaged): qa: fs/snaps tests fails with "hit max job timeout"
- Seen here: https://pulpito.ceph.com/vshankar-2022-04-07_15:19:12-fs-wip-vshankar-testing-55110-20220407-173953-testin...
- 05:37 AM Bug #54374: mgr/snap_schedule: include timezone information in scheduled snapshots
- 1. CephFS snapshot file names created by mgr/snap_schedule are always in UTC time zone
2. If the local time zone on ...
- 05:31 AM Bug #55235 (Duplicate): snap_schedule: ceph snapshots datestamps lack a timezone field
- duplicate of https://tracker.ceph.com/issues/54374
- 05:25 AM Bug #55235 (Duplicate): snap_schedule: ceph snapshots datestamps lack a timezone field
- add time zone suffix to the snap dir names
- 05:14 AM Bug #55234 (Resolved): snap_schedule: replace .snap with the client configured snap dir name
- snap_schedule assumes that the client snap dir is always ".snap"
the module functionality will break when the client...
04/07/2022
- 05:28 PM Bug #55190 (Fix Under Review): mgr/volumes: Show clone failure reason in clone status command
- 10:26 AM Bug #55217: pybind/mgr/volumes: Clone operation hangs
- The tests ran and mgr logs attached. The code was instrumented a bit to find the callers of the locks.
The mgr.x.l...
- 10:02 AM Bug #55217 (Resolved): pybind/mgr/volumes: Clone operation hangs
- The clone operation hangs while testing clone failure PR locally (teuthology via vstart_runner).
The hang is seen wh...
- 09:37 AM Bug #54701: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDentry*, MDR...
- Seen in a pacific cluster. Also, yet another similar backtrace from the same cluster:...
- 08:20 AM Bug #55216 (Resolved): cephfs-shell: creates directories in local file system even if file not found
- The "get" command in cephfs-shell when used to get a file that doesn't exist on ceph filesystem, would throw an error...
- 06:44 AM Feature #55215 (Resolved): mds: fragment directory snapshots
- Not sure if this has been discussed anywhere (I cannot find relevant trackers regarding this).
The MDS does not fr...
- 06:14 AM Feature #55214: mds: add asok/tell command to clear stale omap entries
- (formatting description)
It's rather easy to end up with an on-disk inode object that stays un-fragmented, with oma...
- 06:09 AM Feature #55214 (New): mds: add asok/tell command to clear stale omap entries
- It's rather easy to end up with an on-disk inode object that stays un-fragmented, with omap count exceeding `mds_bal_s...
04/06/2022
- 01:22 PM Feature #55126 (Fix Under Review): mds: add perf counter to record slow replies
- 12:20 PM Feature #55197 (Resolved): cephfs-top: make cephfs-top display scrollable like top
- Based on the discussions in the BZ [1],
* Make cephfs-top display scrollable like top does. Enable Up/Down, PgUp/...
- 10:56 AM Bug #55196 (In Progress): mgr/stats: perf stats command doesn't have filter option for fs names.
- Adding a filter option in the perf stats command for fs names (--fs_name), the way it is done for client_id and mds_rank, to ...
- 08:59 AM Backport #55191 (Duplicate): pacific: Client::setxattr always sends setxattr request to MDS
- 08:12 AM Backport #55191 (Duplicate): pacific: Client::setxattr always sends setxattr request to MDS
- duplicated to https://tracker.ceph.com/issues/55192
- 08:58 AM Bug #45434 (Resolved): qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- 08:57 AM Backport #51938 (Rejected): octopus: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)...
- No need for this in octopus; there are too many dependent patches needed and some features aren't supported yet, wi...
- 08:14 AM Backport #55192 (In Progress): pacific: Client::setxattr always sends setxattr request to MDS
- 08:14 AM Backport #55192 (Resolved): pacific: Client::setxattr always sends setxattr request to MDS
- https://github.com/ceph/ceph/pull/45792
- 07:02 AM Bug #55190 (In Progress): mgr/volumes: Show clone failure reason in clone status command
- 07:01 AM Bug #55190 (Resolved): mgr/volumes: Show clone failure reason in clone status command
- If the clone is failed for some reason, show the failure reason in the
clone status command. This would help CSI to ...
- 05:42 AM Bug #53521 (Resolved): mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin
- 05:41 AM Backport #53759 (Resolved): pacific: mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin
04/05/2022
- 12:15 PM Bug #48203: qa: quota failure
- Dear all, I went through the discussion about the failed test and would like to ask you to consider an alternative so...
- 07:56 AM Bug #48863 (Fix Under Review): cephfs-shell should allow changing all mode bits
- 06:17 AM Bug #55173: qa: missing dbench binary?
- Similar failure for iozone: http://pulpito.front.sepia.ceph.com/yuriw-2022-04-04_17:00:37-fs-wip-yuri-testing-2022-04...
- 04:43 AM Bug #55134 (Fix Under Review): ceph pacific fails to perform fs/mirror test
04/04/2022
- 01:11 PM Bug #55173 (Can't reproduce): qa: missing dbench binary?
- Seen here: https://pulpito.ceph.com/vshankar-2022-04-03_15:22:48-fs-wip-vshankar-testing-20220403-170344-testing-defa...
- 12:54 PM Bug #55112 (Triaged): cephfs-shell: saving files doesn't work as expected
- 12:50 PM Bug #55148 (Triaged): snap_schedule: remove subvolume(-group) interfaces
- 12:50 PM Bug #55148: snap_schedule: remove subvolume(-group) interfaces
- ... since there are no users right now (and probably in the near future).
- 12:45 PM Bug #55170 (Triaged): mds: crash during rejoin (CDir::fetch_keys)
- 10:09 AM Bug #55170 (Resolved): mds: crash during rejoin (CDir::fetch_keys)
- Seen here: https://pulpito.ceph.com/vshankar-2022-04-03_15:22:48-fs-wip-vshankar-testing-20220403-170344-testing-defa...