Activity
From 03/21/2022 to 04/19/2022
04/19/2022
- 05:30 PM Backport #55385 (Resolved): quincy: mgr/snap_schedule: include timezone information in scheduled ...
- https://github.com/ceph/ceph/pull/47734
- 05:30 PM Backport #55384 (Resolved): pacific: mgr/snap_schedule: include timezone information in scheduled...
- https://github.com/ceph/ceph/pull/45968
- 05:27 PM Bug #54374 (Pending Backport): mgr/snap_schedule: include timezone information in scheduled snaps...
- 06:30 AM Bug #54374 (Fix Under Review): mgr/snap_schedule: include timezone information in scheduled snaps...
- 03:16 PM Backport #55375 (In Progress): pacific: mgr/volumes: allow users to add metadata (key-value pairs...
- 11:25 AM Backport #55375 (Resolved): pacific: mgr/volumes: allow users to add metadata (key-value pairs) t...
- https://github.com/ceph/ceph/pull/45961
- 11:46 AM Bug #55240 (Fix Under Review): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 11:39 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- I have created a new tracker #55377 to fix the kernel issue in https://tracker.ceph.com/issues/55240#note-4.
And thi...
- 09:16 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- Another issue in this failure:
In _*mds.1*_ after it finds the inode for _*#0x1/client.0/tmp/fsstress/ltp-full-2009...
- 06:18 AM Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- The file _*#0x1/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel/fs/fsstress/fsstress*_ was created in _*mds....
- 05:52 AM Bug #55240 (In Progress): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 11:25 AM Backport #55376 (Resolved): quincy: mgr/volumes: allow users to add metadata (key-value pairs) to...
- https://github.com/ceph/ceph/pull/45994
- 11:22 AM Feature #54472 (Pending Backport): mgr/volumes: allow users to add metadata (key-value pairs) to ...
- 09:19 AM Bug #55196 (In Progress): mgr/stats: perf stats command doesn't have filter option for fs names.
- 09:19 AM Bug #55234 (Fix Under Review): snap_schedule: replace .snap with the client configured snap dir name
- 09:12 AM Feature #51434 (Fix Under Review): pybind/mgr/volumes: add basic introspection
- 09:01 AM Backport #55338 (In Progress): pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs...
- 08:56 AM Backport #55337 (In Progress): quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs....
- 04:04 AM Backport #55039 (In Progress): quincy: ceph-fuse: mount -a on already mounted folder should be ig...
04/18/2022
- 04:00 PM Backport #55056: pacific: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr r...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45906
merged
- 04:00 PM Backport #53760: pacific: snap scheduler: cephfs snapshot schedule status doesn't list the snapsh...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45906
merged
- 09:50 AM Bug #55354 (Resolved): cephfs: xfstests-dev can't be run against fuse mounted cephfs
- This will require 2 steps -
1. Modify xfstests-dev repo to add the ability to mount CephFS using FUSE.
2. Modify qa...
- 08:53 AM Backport #55353 (In Progress): quincy: pybind/mgr/volumes: Clone operation hangs
- 08:50 AM Backport #55353 (Resolved): quincy: pybind/mgr/volumes: Clone operation hangs
- https://github.com/ceph/ceph/pull/45927
- 08:52 AM Backport #55352 (In Progress): pacific: pybind/mgr/volumes: Clone operation hangs
- 08:50 AM Backport #55352 (Resolved): pacific: pybind/mgr/volumes: Clone operation hangs
- https://github.com/ceph/ceph/pull/45928
- 08:51 AM Backport #55349 (In Progress): pacific: mgr/volumes: Show clone failure reason in clone status co...
- 04:15 AM Backport #55349 (Resolved): pacific: mgr/volumes: Show clone failure reason in clone status command
- https://github.com/ceph/ceph/pull/45928
- 08:48 AM Backport #55348 (In Progress): quincy: mgr/volumes: Show clone failure reason in clone status com...
- 04:15 AM Backport #55348 (Resolved): quincy: mgr/volumes: Show clone failure reason in clone status command
- https://github.com/ceph/ceph/pull/45927
- 08:45 AM Bug #55217 (Pending Backport): pybind/mgr/volumes: Clone operation hangs
- 05:52 AM Backport #55040 (In Progress): pacific: ceph-fuse: mount -a on already mounted folder should be i...
- 04:14 AM Bug #55190 (Pending Backport): mgr/volumes: Show clone failure reason in clone status command
04/17/2022
- 09:55 AM Backport #55346 (Resolved): pacific: client: get stuck forever when the forward seq exceeds 256
- https://github.com/ceph/ceph/pull/46179
- 09:55 AM Backport #55345 (Resolved): quincy: client: get stuck forever when the forward seq exceeds 256
- https://github.com/ceph/ceph/pull/46178
- 09:53 AM Bug #55129 (Pending Backport): client: get stuck forever when the forward seq exceeds 256
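The "forward seq exceeds 256" bug above boils down to a counter-width mismatch: per the tracker notes, the ceph-side 'num_fwd' is a 32-bit integer while the corresponding on-wire field is much narrower, so the count wraps once it passes 255 and the client retries forever. A minimal Python sketch of that truncation (illustrative only; the 8-bit wire width is assumed from the truncated tracker notes):

```python
import struct

# The client-side counter is a 32-bit int, so 256 is representable...
num_fwd = 255 + 1

# ...but an 8-bit on-wire field keeps only the low byte, which wraps to 0,
# so the receiver never observes the count exceeding 255.
wire_byte = struct.pack("<i", num_fwd)[0]

assert num_fwd == 256
assert wire_byte == 0
```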
04/16/2022
- 03:25 PM Backport #55343 (Resolved): pacific: mds: try to reset heartbeat when fetching or committing.
- https://github.com/ceph/ceph/pull/46180
- 03:25 PM Backport #55342 (Resolved): quincy: mds: try to reset heartbeat when fetching or committing.
- https://github.com/ceph/ceph/pull/46181
- 03:20 PM Bug #54345 (Pending Backport): mds: try to reset heartbeat when fetching or committing.
- 03:20 PM Bug #54345 (Resolved): mds: try to reset heartbeat when fetching or committing.
04/14/2022
- 12:10 PM Backport #55338 (Resolved): pacific: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.te...
- https://github.com/ceph/ceph/pull/45293
- 12:10 PM Backport #55337 (Resolved): quincy: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.tes...
- https://github.com/ceph/ceph/pull/45291
- 12:07 PM Bug #54971 (Pending Backport): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds...
- 12:05 PM Backport #55336 (Resolved): quincy: Issue removing subvolume with retained snapshots - Possible q...
- https://github.com/ceph/ceph/pull/46140
- 12:05 PM Backport #55335 (Resolved): pacific: Issue removing subvolume with retained snapshots - Possible ...
- https://github.com/ceph/ceph/pull/46139
- 12:02 PM Bug #54625 (Pending Backport): Issue removing subvolume with retained snapshots - Possible quincy...
- 09:35 AM Bug #55332 (Resolved): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-testing-default-smithi...
- 08:58 AM Bug #55331 (Resolved): pjd failure (caused by xattr's value not consistent between auth MDS and r...
- This run: https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-testing-defa...
- 06:02 AM Bug #50821: qa: untar_snap_rm failure during mds thrashing
- Similar failure here: https://pulpito.ceph.com/vshankar-2022-04-11_12:24:06-fs-wip-vshankar-testing1-20220411-144044-...
- 05:39 AM Bug #55329 (Fix Under Review): qa: add test case for fsync crash issue
- 05:35 AM Bug #55329: qa: add test case for fsync crash issue
- This can be reproduced very easily by using the following kernel patch:...
- 05:30 AM Bug #55329 (Resolved): qa: add test case for fsync crash issue
- This is the test case for https://tracker.ceph.com/issues/55327.
04/13/2022
- 02:04 PM Backport #55264: quincy: mount.ceph: mount helper incorrectly passes `ms_mode' mount option to ol...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45780
merged
- 12:14 PM Bug #55316 (New): qa: add client asok support to get the options
- Currently the vstart_runner.py only supports mon/mds/osd:...
- 09:29 AM Bug #55313 (Resolved): Unexpected file access behavior using ceph-fuse
- Since upgrading from Nautilus (14.2.21) to Pacific (16.2.7) ceph-fuse shows a rather unexpected and unusual behavior ...
- 08:09 AM Backport #53760 (In Progress): pacific: snap scheduler: cephfs snapshot schedule status doesn't l...
- 07:28 AM Backport #53760 (New): pacific: snap scheduler: cephfs snapshot schedule status doesn't list the ...
- * re-doing bad backport
- 02:23 AM Feature #55283: qa: add fsync/sync stuck waiting for unsafe request test
- Normally, before fixing this, we could reproduce it very easily, and mostly the duration is larger, around 4 secon...
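As a rough illustration of what such a qa check can measure (helper name and paths are hypothetical; the real test lives in the ceph qa suite), one can time the fsync call and assert it stays below a stuck-request threshold:

```python
import os
import time

def timed_fsync(path: str, payload: bytes = b"data") -> float:
    """Write payload to path and return how long fsync() took, in seconds."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
    try:
        os.write(fd, payload)
        start = time.monotonic()
        os.fsync(fd)
        return time.monotonic() - start
    finally:
        os.close(fd)
```

A test would run this against a file on the mounted filesystem and fail if the duration approaches the multi-second stall described above.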
- 02:11 AM Feature #55283: qa: add fsync/sync stuck waiting for unsafe request test
- Added support for two test cases, one for file sync and the other for filesystem sync.
- 02:09 AM Feature #55283 (Fix Under Review): qa: add fsync/sync stuck waiting for unsafe request test
04/12/2022
- 02:51 PM Backport #55239: quincy: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- https://github.com/ceph/ceph/pull/45879
- 02:39 PM Backport #55238: pacific: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- https://github.com/ceph/ceph/pull/45878
- 09:05 AM Feature #55283 (In Progress): qa: add fsync/sync stuck waiting for unsafe request test
- 04:43 AM Feature #55283 (Resolved): qa: add fsync/sync stuck waiting for unsafe request test
- The kclient has fixed this in:...
- 06:29 AM Bug #54701 (Triaged): crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDe...
04/11/2022
- 01:57 PM Backport #55264 (In Progress): quincy: mount.ceph: mount helper incorrectly passes `ms_mode' moun...
- 01:55 PM Backport #55264 (In Progress): quincy: mount.ceph: mount helper incorrectly passes `ms_mode' moun...
- https://github.com/ceph/ceph/pull/45780
- 01:50 PM Bug #55110 (Pending Backport): mount.ceph: mount helper incorrectly passes `ms_mode' mount option...
- 01:48 PM Bug #55216 (Fix Under Review): cephfs-shell: creates directories in local file system even if fil...
- 01:47 PM Bug #55242 (Fix Under Review): cephfs-shell: put command should accept both path mandatorily and ...
- 01:00 PM Bug #55165 (Fix Under Review): client: validate pool against pool ids as well as pool names
- 12:57 PM Bug #55234 (Triaged): snap_schedule: replace .snap with the client configured snap dir name
- 12:55 PM Bug #55236 (Triaged): qa: fs/snaps tests fails with "hit max job timeout"
- 05:22 AM Bug #55236: qa: fs/snaps tests fails with "hit max job timeout"
- Another instance: https://pulpito.ceph.com/vshankar-2022-04-09_12:55:41-fs-wip-vshankar-testing-55110-20220408-203242...
- 12:54 PM Bug #55217 (Fix Under Review): pybind/mgr/volumes: Clone operation hangs
- 12:53 PM Bug #55240 (Triaged): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 05:30 AM Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.ex...
- https://pulpito.ceph.com/vshankar-2022-04-09_12:55:41-fs-wip-vshankar-testing-55110-20220408-203242-testing-default-s...
- 05:30 AM Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.ex...
- Ramana, this is showing up a bit in master. Please take a look.
- 05:15 AM Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
- Maybe related (but no backtrace in OSDs): https://pulpito.ceph.com/vshankar-2022-04-09_12:55:41-fs-wip-vshankar-testi...
- 03:03 AM Bug #55253 (Fix Under Review): client: switch to glibc's STATX macros
- 02:50 AM Bug #55253 (Resolved): client: switch to glibc's STATX macros
- Currently glibc supports the STATX macros:...
04/10/2022
- 08:02 PM Bug #53996 (In Progress): qa: update fs:upgrade tasks to upgrade from pacific instead of octopus,...
04/08/2022
- 01:03 PM Bug #55242 (Resolved): cephfs-shell: put command should accept both path mandatorily and validate...
- Currently, there are no checks to make sure local_path is valid. For instance, for a file "helloworld" at /home/dparm...
- 10:25 AM Feature #48736: qa: enable debug loglevel kclient test suits
- Please add the `dynamic_debug` option in the `.yaml` file, like:...
- 09:43 AM Bug #55240 (Resolved): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- Seen here: https://pulpito.ceph.com/vshankar-2022-04-07_05:07:33-fs-master-testing-default-smithi/6780578/
It's an ...
- 06:45 AM Backport #55239 (Resolved): quincy: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds...
- 06:45 AM Backport #55238 (Resolved): pacific: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-md...
- 06:41 AM Documentation #54551 (Pending Backport): docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-a...
- 05:39 AM Bug #55236 (Triaged): qa: fs/snaps tests fails with "hit max job timeout"
- Seen here: https://pulpito.ceph.com/vshankar-2022-04-07_15:19:12-fs-wip-vshankar-testing-55110-20220407-173953-testin...
- 05:37 AM Bug #54374: mgr/snap_schedule: include timezone information in scheduled snapshots
- 1. CephFS snapshot file names created by mgr/snap_schedule are always in UTC time zone
2. If the local time zone on ...
- 05:31 AM Bug #55235 (Duplicate): snap_schedule: ceph snapshots datestamps lack a timezone field
- duplicate of https://tracker.ceph.com/issues/54374
- 05:25 AM Bug #55235 (Duplicate): snap_schedule: ceph snapshots datestamps lack a timezone field
- add time zone suffix to the snap dir names
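The gist of the fix tracked here and in #54374 is to make the UTC basis of the snapshot name explicit; a sketch, where the exact name format is an assumption rather than the module's confirmed output:

```python
from datetime import datetime, timezone

def scheduled_snap_name(when: datetime, prefix: str = "scheduled") -> str:
    """Render a scheduled-snapshot name with an explicit UTC suffix so the
    datestamp is unambiguous regardless of the local time zone."""
    return when.astimezone(timezone.utc).strftime(f"{prefix}-%Y-%m-%d-%H_%M_%S_UTC")
```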
- 05:14 AM Bug #55234 (Resolved): snap_schedule: replace .snap with the client configured snap dir name
- snap_schedule assumes that the client snap dir is always ".snap"
the module functionality will break when the client...
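The shape of the fix can be sketched as follows (helper name hypothetical): take the snap dir name from configuration (the client's `client_snapdir` option) rather than hardcoding ".snap":

```python
def snapshot_path(dir_path: str, snap_name: str, snapdir: str = ".snap") -> str:
    """Build a snapshot path from the configured snap dir name instead of a
    hardcoded ".snap"; snapdir should come from the client's 'client_snapdir'
    setting."""
    return f"{dir_path.rstrip('/')}/{snapdir}/{snap_name}"
```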
04/07/2022
- 05:28 PM Bug #55190 (Fix Under Review): mgr/volumes: Show clone failure reason in clone status command
- 10:26 AM Bug #55217: pybind/mgr/volumes: Clone operation hangs
- The tests ran and mgr logs attached. The code was instrumented a bit to find the callers of the locks.
The mgr.x.l...
- 10:02 AM Bug #55217 (Resolved): pybind/mgr/volumes: Clone operation hangs
- The clone operation hangs while testing clone failure PR locally (teuthology via vstart_runner).
The hang is seen wh...
- 09:37 AM Bug #54701: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDentry*, MDR...
- Seen in a pacific cluster. Also, yet another similar backtrace from the same cluster:...
- 08:20 AM Bug #55216 (Resolved): cephfs-shell: creates directories in local file system even if file not found
- The "get" command in cephfs-shell when used to get a file that doesn't exist on ceph filesystem, would throw an error...
- 06:44 AM Feature #55215 (Resolved): mds: fragment directory snapshots
- Not sure if this has been discussed anywhere (I cannot find relevant trackers regarding this).
The MDS does not fr...
- 06:14 AM Feature #55214: mds: add asok/tell command to clear stale omap entries
- (formatting description)
It's rather easy to end up with an on-disk inode object that stays un-fragmented, with oma...
- 06:09 AM Feature #55214 (New): mds: add asok/tell command to clear stale omap entries
- It's rather easy to end up with an on-disk inode object that stays un-fragmented, with omap count exceeding `mds_bal_s...
04/06/2022
- 01:22 PM Feature #55126 (Fix Under Review): mds: add perf counter to record slow replies
- 12:20 PM Feature #55197 (Resolved): cephfs-top: make cephfs-top display scrollable like top
- Based on the discussions in the BZ [1],
* Make cephfs-top display scrollable like top does. Enable Up/Down, PgUp/...
- 10:56 AM Bug #55196 (In Progress): mgr/stats: perf stats command doesn't have filter option for fs names.
- Adding a filter option in the perf stats command for fs names (--fs_name), the way it is done for client_id and mds_rank, to ...
- 08:59 AM Backport #55191 (Duplicate): pacific: Client::setxattr always sends setxattr request to MDS
- 08:12 AM Backport #55191 (Duplicate): pacific: Client::setxattr always sends setxattr request to MDS
- duplicated to https://tracker.ceph.com/issues/55192
- 08:58 AM Bug #45434 (Resolved): qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- 08:57 AM Backport #51938 (Rejected): octopus: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)...
- This isn't needed in octopus; there are too many dependent patches required and some features aren't supported yet, wi...
- 08:14 AM Backport #55192 (In Progress): pacific: Client::setxattr always sends setxattr request to MDS
- 08:14 AM Backport #55192 (Resolved): pacific: Client::setxattr always sends setxattr request to MDS
- https://github.com/ceph/ceph/pull/45792
- 07:02 AM Bug #55190 (In Progress): mgr/volumes: Show clone failure reason in clone status command
- 07:01 AM Bug #55190 (Resolved): mgr/volumes: Show clone failure reason in clone status command
- If the clone has failed for some reason, show the failure reason in the
clone status command. This would help CSI to ...
- 05:42 AM Bug #53521 (Resolved): mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin
- 05:41 AM Backport #53759 (Resolved): pacific: mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin
04/05/2022
- 12:15 PM Bug #48203: qa: quota failure
- Dear all, I went through the discussion about the failed test and would like to ask you to consider an alternative so...
- 07:56 AM Bug #48863 (Fix Under Review): cephfs-shell should allow changing all mode bits
- 06:17 AM Bug #55173: qa: missing dbench binary?
- Similar failure for iozone: http://pulpito.front.sepia.ceph.com/yuriw-2022-04-04_17:00:37-fs-wip-yuri-testing-2022-04...
- 04:43 AM Bug #55134 (Fix Under Review): ceph pacific fails to perform fs/mirror test
04/04/2022
- 01:11 PM Bug #55173 (Can't reproduce): qa: missing dbench binary?
- Seen here: https://pulpito.ceph.com/vshankar-2022-04-03_15:22:48-fs-wip-vshankar-testing-20220403-170344-testing-defa...
- 12:54 PM Bug #55112 (Triaged): cephfs-shell: saving files doesn't work as expected
- 12:50 PM Bug #55148 (Triaged): snap_schedule: remove subvolume(-group) interfaces
- 12:50 PM Bug #55148: snap_schedule: remove subvolume(-group) interfaces
- ... since there are no users right now (and probably none in the near future).
- 12:45 PM Bug #55170 (Triaged): mds: crash during rejoin (CDir::fetch_keys)
- 10:09 AM Bug #55170 (Resolved): mds: crash during rejoin (CDir::fetch_keys)
- Seen here: https://pulpito.ceph.com/vshankar-2022-04-03_15:22:48-fs-wip-vshankar-testing-20220403-170344-testing-defa...
04/02/2022
- 01:38 AM Bug #55165 (Fix Under Review): client: validate pool against pool ids as well as pool names
- Problem:
If the pool is a numeric string (eg. 23134), the current validation assumes
that it is a pool id and looks...
- 01:17 AM Bug #53504: client: infinite loop "got ESTALE" after mds recovery
- Xiubo Li wrote:
> 吴羡 reported the same bug in his product use case, and he can see the client stuck almost every 2 wee...
- 12:59 AM Bug #53504: client: infinite loop "got ESTALE" after mds recovery
- 吴羡 reported the same bug in his product use case, and he can see the client stuck almost every 2 weeks.
I went throu...
04/01/2022
- 02:23 PM Backport #54533: quincy: mds,client: suppport getvxattr RPC
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45377
merged
03/31/2022
- 07:04 PM Backport #54221: quincy: mgr/volumes: `fs volume rename` command
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45541
merged
- 07:02 PM Backport #55055: quincy: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr re...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45672
merged
- 03:22 PM Backport #54574: quincy: mgr/volumes: The 'mode' argument is not honored on idempotent subvolume ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45405
merged
- 02:56 PM Bug #54460: snaptest-multiple-capsnaps.sh test failure
- Another instance - http://pulpito.front.sepia.ceph.com/yuriw-2022-03-29_20:09:22-fs-wip-yuri-testing-2022-03-29-0741-...
- 01:04 PM Bug #55148 (Closed): snap_schedule: remove subvolume(-group) interfaces
- * validation of subvolume and subvolume group is missing from all snap_schedule
- 12:26 PM Bug #54411: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem i...
- Why on earth are we trying to fix this in the client? This is an MDS bug plain and simple, and a security-sensitive o...
- 09:12 AM Bug #55144 (Fix Under Review): client: stop retrying the request when exceeding 256 times
- 07:19 AM Bug #55144 (Fix Under Review): client: stop retrying the request when exceeding 256 times
- The type of 'r_attempts' in kernel 'ceph_mds_request' is 'int',
while in 'ceph_mds_request_head' the type of 'num_re...
- 08:26 AM Bug #54743: crash: Client::_get_vino(Inode*)
- This seems to be the real culprit:...
- 07:40 AM Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
- scrape logs point to a crash in osd:...
- 07:16 AM Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- Thomas Roth wrote:
> Concerning my question on how to specify the host for the MDS, Google helped:
>
> - Add 2 ho... - 07:10 AM Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- Thomas Roth wrote:
> Hi Venky,
>
> yes and no - the change makes it clear that a directory has to be created.
>...
03/30/2022
- 01:42 PM Bug #55134 (Resolved): ceph pacific fails to perform fs/mirror test
- During execution of the integration tests (IBM Z, BE) the fs/mirror suite produces a set of errors related to segfault...
- 01:33 PM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- cephuser2345 user wrote:
> Hi
>
> Any update on this ?:)
Its been a while since I attempted to reproduce this ... - 12:15 PM Bug #55129: client: get stuck forever when the forward seq exceeds 256
- The kclient fixing patchwork is: https://patchwork.kernel.org/project/ceph-devel/list/?series=627352
- 12:14 PM Bug #55129 (Resolved): client: get stuck forever when the forward seq exceeds 256
- The type of 'num_fwd' in ceph 'MClientRequestForward' is 'int32_t',
while in 'ceph_mds_request_head' the type is '__...
- 11:43 AM Feature #55126: mds: add perf counter to record slow replies
- The reviewer suggested that the PR could be backported, so created the tracker here.
- 11:42 AM Feature #55126: mds: add perf counter to record slow replies
- Though we have the MDS_HEALTH_SLOW_METADATA_IO and MDS_HEALTH_SLOW_REQUEST health alerts, those are not
precise nor a...
- 11:42 AM Feature #55126 (Pending Backport): mds: add perf counter to record slow replies
- 09:23 AM Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- Concerning my question on how to specify the host for the MDS, Google helped:
- Add 2 hosts by @ceph orch host add...
- 09:17 AM Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- Hi Venky,
yes and no - the change makes it clear that a directory has to be created.
But is there a canonical pa...
- 09:00 AM Documentation #54551 (Fix Under Review): docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-a...
- Thomas,
Do the documentation changes make things clear - https://ceph--45639.org.readthedocs.build/en/45639/cep...
- 06:17 AM Feature #55121 (Resolved): cephfs-top: new options to limit and order-by
- Based on the suggestion in the BZ [1], create two new options for cephfs-top:
1. Limit the number of clients to be...
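The two proposed options reduce to a sort plus a slice over the per-client metric rows; a minimal sketch with made-up field names (the actual cephfs-top metric keys differ):

```python
def top_clients(rows, order_by, limit):
    """Sort per-client metric rows by one field, descending, and cap how many
    are shown -- the essence of an order-by plus limit option."""
    return sorted(rows, key=lambda r: r.get(order_by, 0), reverse=True)[:limit]
```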
03/29/2022
- 01:37 PM Bug #55110 (Fix Under Review): mount.ceph: mount helper incorrectly passes `ms_mode' mount option...
- 01:17 PM Bug #55110 (Pending Backport): mount.ceph: mount helper incorrectly passes `ms_mode' mount option...
- seen here - http://pulpito.front.sepia.ceph.com/yuriw-2022-03-28_19:24:40-smoke-quincy-distro-default-smithi/6765567/...
- 01:28 PM Bug #55112 (Resolved): cephfs-shell: saving files doesn't work as expected
- The "get" command in cephfs-shell doesn't behave as expected. Suppose I do this to save a file in a directory that li...
- 01:22 PM Bug #55111: cephfs-shell: tab completion throwing errors
- I have a subvolume on one of my filesystems, and when I try to walk down into that subvolume using tab-completion in ...
- 01:18 PM Bug #55111 (New): cephfs-shell: tab completion throwing errors
- 11:48 AM Bug #55108 (New): cephfs: recursive stats (ceph.dir.rbytes) on snapshotted directory shows invali...
- I am observing that the 'ceph.dir.rbytes' shows correct value on subvolume snapshot as long as the subvolume data is ...
- 10:03 AM Bug #54625 (Fix Under Review): Issue removing subvolume with retained snapshots - Possible quincy...
03/28/2022
- 03:13 PM Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
- Great, thank you for the assistance.
I've removed the unneeded subvolume delete action from the test and it is now p...
- 02:20 PM Backport #54573: pacific: mgr/volumes: The 'mode' argument is not honored on idempotent subvolume...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45474
merged
- 02:16 PM Backport #52635: pacific: mds sends cap updates with btime zeroed out
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45163
merged
- 02:16 PM Backport #52875: pacific: qa: test_dirfrag_limit
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45565
merged
- 02:15 PM Backport #52680: pacific: Add option in `fs new` command to start rank 0 in failed state
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45565
merged
- 02:14 PM Backport #52427: pacific: qa: "error reading sessionmap 'mds1_sessionmap'"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45564
merged
- 01:40 PM Backport #55055 (In Progress): quincy: mgr/snap-schedule: scheduled snapshots are not created aft...
- 01:40 PM Backport #55056 (In Progress): pacific: mgr/snap-schedule: scheduled snapshots are not created af...
03/25/2022
- 03:07 PM Backport #52629: octopus: pybind/mgr/volumes: first subvolume permissions set perms on /volumes a...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43224
merged
- 02:40 PM Backport #53759: pacific: mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44551
merged
- 06:45 AM Bug #54971 (Fix Under Review): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds...
- 01:55 AM Backport #55056 (Resolved): pacific: mgr/snap-schedule: scheduled snapshots are not created after...
- https://github.com/ceph/ceph/pull/45906
- 01:55 AM Backport #55055 (Resolved): quincy: mgr/snap-schedule: scheduled snapshots are not created after ...
- https://github.com/ceph/ceph/pull/45672
- 01:54 AM Bug #54052 (Pending Backport): mgr/snap-schedule: scheduled snapshots are not created after ceph-...
03/24/2022
- 05:39 PM Bug #54406: cephadm/mgr-nfs-upgrade: cluster [WRN] overall HEALTH_WARN no active mgr
- /a/yuriw-2022-03-23_14:51:02-rados-wip-yuri4-testing-2022-03-21-1648-pacific-distro-default-smithi/6756019
- 01:33 PM Bug #55041 (Resolved): mgr/volumes: display in-progress clones for a snapshot
- Right now, there is no way of knowing the set of clone operations in-progress/pending for a subvolume snapshot (unless...
- 11:07 AM Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
- Design/Code Behavior:
---------------------
Looked into the code further. It's designed in such a way that we sh...
- 09:36 AM Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
- Hi John,
> Now I expect that last "subvolume rm" to pass as the retained snapshot has been deleted on the previous...
- 11:07 AM Backport #55040 (Rejected): pacific: ceph-fuse: mount -a on already mounted folder should be ignored
- https://github.com/ceph/ceph/pull/45925
- 11:06 AM Backport #55039 (Resolved): quincy: ceph-fuse: mount -a on already mounted folder should be ignored
- https://github.com/ceph/ceph/pull/45939
- 11:01 AM Bug #46075 (Pending Backport): ceph-fuse: mount -a on already mounted folder should be ignored
- 08:09 AM Bug #54653 (Fix Under Review): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag...
- 07:02 AM Bug #54653: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map.count(stag))
- There is one issue in the ceph-fuse code: when doing lookup/create/mkdir/link/readdir, etc., it will make fake fuse inode n...
03/23/2022
- 07:46 PM Feature #54472 (Fix Under Review): mgr/volumes: allow users to add metadata (key-value pairs) to ...
- 09:06 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- Ivan Guan wrote:
> Xiubo Li wrote:
> > > t5: mds rdlock_start filelock and got the rdlock but can’t acquire the xlo...
- 08:53 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- Xiubo Li wrote:
> > t5: mds rdlock_start filelock and got the rdlock but can’t acquire the xlock of linklock, so wai...
- 08:20 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- > t5: mds rdlock_start filelock and got the rdlock but can’t acquire the xlock of linklock, so wait here
Here you ...
- 07:02 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- Ivan Guan wrote:
> *the logs after t6 moment:*
> *send safe reply after writing journal to disk*
>
> 2022-02-21...
- 07:01 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- h2. the logs after t6 moment:
h3. send safe reply after writing journal to disk
2022-02-21 15:11:40.015369 7f1d3...
03/22/2022
- 11:36 PM Backport #52636: pacific: MDSMonitor: removes MDS coming out of quorum election
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43698
merged
- 09:09 PM Backport #52680 (In Progress): pacific: Add option in `fs new` command to start rank 0 in failed ...
- 07:33 PM Backport #52875 (In Progress): pacific: qa: test_dirfrag_limit
- 01:32 PM Backport #52875: pacific: qa: test_dirfrag_limit
- Ramana,
Could you please check if backport is feasible and post one if it is?
Cheers,
Venky
- 06:50 PM Backport #52427 (In Progress): pacific: qa: "error reading sessionmap 'mds1_sessionmap'"
- 01:33 PM Backport #52427: pacific: qa: "error reading sessionmap 'mds1_sessionmap'"
- Ramana, please take this.
- 05:17 PM Backport #54223 (Rejected): pacific: mgr/volumes: `fs volume rename` command
- The feature is meant for quincy onwards.
- 05:17 PM Backport #54222 (Rejected): octopus: mgr/volumes: `fs volume rename` command
- The feature is meant for Ceph quincy onwards.
- 05:03 PM Feature #51162 (Pending Backport): mgr/volumes: `fs volume rename` command
- 05:03 PM Feature #51162 (Rejected): mgr/volumes: `fs volume rename` command
- The feature is meant for quincy and later releases.
- 12:31 AM Feature #51162: mgr/volumes: `fs volume rename` command
- Venky, this tracker was originally planned for quincy. It depends on a feature tracker https://tracker.ceph.com/issue...
- 04:36 PM Fix #54317 (Fix Under Review): qa: add testing in fs:workload for different kinds of subvolumes
- 02:18 PM Backport #54478: pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not exp...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45351
merged
- 02:15 PM Backport #54256: pacific: mgr/volumes: uid/gid of the clone is incorrect
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45205
merged
- 02:15 PM Backport #54335: pacific: mgr/volumes: A deleted subvolumegroup when listed using "ceph fs subvol...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45205
merged
https://tracker.ceph.com/issues/54332
- 02:15 PM Backport #54332: pacific: mgr/volumes: File Quota attributes not getting inherited to the cloned ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45205
merged
- 05:36 AM Feature #54978 (In Progress): cephfs-top:addition of filesystem menu(improving GUI)
03/21/2022
- 10:43 PM Backport #54223 (In Progress): pacific: mgr/volumes: `fs volume rename` command
- 10:36 PM Backport #54221 (In Progress): quincy: mgr/volumes: `fs volume rename` command
- 05:22 PM Bug #54670 (Duplicate): crash: Client::_get_vino(Inode*)
- Duplicate of https://tracker.ceph.com/issues/54743
- 01:04 PM Feature #54978 (Resolved): cephfs-top:addition of filesystem menu(improving GUI)
- It deals with the option of having a menu for selecting filesystems:
1. to display all filesystems metrics at once.
2...
- 12:54 PM Bug #54557 (Triaged): scrub repair does not clear earlier damage health status
- 12:53 PM Bug #54560 (Triaged): snap_schedule: avoid throwing traceback for bad or missing arguments
- 12:50 PM Bug #54606 (Triaged): check-counter task runs till max job timeout
- 12:49 PM Bug #54625 (Triaged): Issue removing subvolume with retained snapshots - Possible quincy regression?
- 12:47 PM Bug #54653 (Triaged): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map...
- 12:43 PM Bug #54743 (Triaged): crash: Client::_get_vino(Inode*)
- 12:42 PM Bug #54971 (Triaged): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics....
- 10:20 AM Bug #54971 (Resolved): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics...
- Seen here - https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-def...
- 11:43 AM Bug #54976 (Duplicate): mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recov...
- https://tracker.ceph.com/issues/44565
- 11:37 AM Bug #54976 (Duplicate): mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recov...
- https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/...