Activity
From 02/06/2022 to 03/07/2022
03/07/2022
- 01:47 PM Backport #54477: quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expe...
- Nikhil, please take this.
- 05:05 AM Backport #54477 (Resolved): quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path ...
- https://github.com/ceph/ceph/pull/45331
- 01:46 PM Backport #54480: quincy: mgr/stats: be resilient to offline MDS rank-0
- Jos, please take this.
- 05:11 AM Backport #54480 (Resolved): quincy: mgr/stats: be resilient to offline MDS rank-0
- https://github.com/ceph/ceph/pull/45291
- 01:46 PM Bug #54460 (Triaged): snaptest-multiple-capsnaps.sh test failure
- 01:45 PM Bug #54461 (Triaged): ffsb.sh test failure
- 01:44 PM Bug #54462 (Triaged): Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 w...
- Jeff thinks it might be a permission issue.
- 01:19 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Hitting this bug involves having hardlinks to inodes that are authoritative on another active MDS. When a non-primar...
- 12:18 PM Bug #52438: qa: ffsb timeout
- Created a PR in ffsb (https://github.com/ceph/ffsb/pull/3) to fix it.
- 12:16 PM Bug #52438: qa: ffsb timeout
- Actually, the `ffsb` test finished quickly, taking 346.70 seconds:
```
2022-02-28T08:29:36.007 INFO:tasks.worku...
```
- 05:11 AM Backport #54479 (Resolved): pacific: mgr/stats: be resilient to offline MDS rank-0
- https://github.com/ceph/ceph/pull/45293
- 05:07 AM Bug #50033 (Pending Backport): mgr/stats: be resilient to offline MDS rank-0
- 05:05 AM Backport #54478 (Resolved): pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then path...
- https://github.com/ceph/ceph/pull/45351
- 05:03 AM Bug #51062 (Resolved): mds,client: suppport getvxattr RPC
- 05:01 AM Bug #54049 (Pending Backport): ceph-fuse: If nonroot user runs ceph-fuse mount on then path is no...
03/04/2022
- 10:00 AM Feature #54472: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- Just FYI - mgr/volumes uses a .meta file as a metadata store for persisting subvolume-related information (path, state,...
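For context, a rough sketch of such a .meta file; mgr/volumes keeps it as an INI-style file at the subvolume base directory, and the exact path, keys, and values below are illustrative, not exhaustive:
```
# Illustrative only: the INI-style .meta store under the subvolume base dir.
# $ cat /volumes/_nogroup/sv1/.meta
[GLOBAL]
version = 2
type = subvolume
path = /volumes/_nogroup/sv1/<uuid>
state = complete
```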
- 09:46 AM Feature #54472 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- This is similar to RBD's `image-meta get/set/list/remove` interfaces (sketched below). Updating an existing key should be supported.
...
- 09:45 AM Bug #54237 (Fix Under Review): pybind/cephfs: Add mapping for Ernno 13:Permission Denied and addi...
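For reference, the RBD interface that Feature #54472 above proposes to mirror; the `rbd image-meta` commands are real, while the subvolume spelling in the comments is hypothetical:
```
# RBD's existing key-value metadata interface:
rbd image-meta set rbd/img1 owner alice
rbd image-meta get rbd/img1 owner
rbd image-meta list rbd/img1
rbd image-meta remove rbd/img1 owner

# A hypothetical subvolume equivalent, per the request above:
#   ceph fs subvolume metadata set <vol> <subvol> <key> <value>
#   ceph fs subvolume metadata get <vol> <subvol> <key>
```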
03/03/2022
- 04:00 PM Backport #54257 (Resolved): quincy: mgr/volumes: uid/gid of the clone is incorrect
- 03:44 PM Backport #54257: quincy: mgr/volumes: uid/gid of the clone is incorrect
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45165
merged
- 02:54 PM Bug #54463: mds: flush mdlog if locked and still has wanted caps not satisfied
- For more detail, see the BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2049653
- 02:45 PM Bug #54463 (Resolved): mds: flush mdlog if locked and still has wanted caps not satisfied
- In _do_cap_update(), if one client is releasing the Fw caps, the relevant client range will be erased, and then new_ma...
- 02:44 PM Bug #54462 (Duplicate): Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055...
- https://pulpito.ceph.com/yuriw-2022-03-01_20:21:46-fs-wip-yuri-testing-2022-02-28-0823-quincy-distro-default-smithi/6...
- 02:36 PM Bug #54461 (Resolved): ffsb.sh test failure
- https://pulpito.ceph.com/yuriw-2022-03-01_20:21:46-fs-wip-yuri-testing-2022-02-28-0823-quincy-distro-default-smithi/6...
- 02:03 PM Bug #54460 (Triaged): snaptest-multiple-capsnaps.sh test failure
- Test failure on quincy run:
https://pulpito.ceph.com/yuriw-2022-03-01_20:21:46-fs-wip-yuri-testing-2022-02-28-0823...
- 01:59 PM Bug #54459 (Fix Under Review): fs:upgrade fails with "hit max job timeout"
- 01:55 PM Bug #54459 (Rejected): fs:upgrade fails with "hit max job timeout"
- The fs:upgrade test upgrades from pacific v16.2.4 up to the latest release. When running with a distro kernel, which might not unders...
03/02/2022
- 05:08 PM Backport #51201: octopus: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44800
merged
- 04:42 PM Backport #53865: octopus: mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44624
merged
- 03:50 PM Backport #54242: octopus: mds: clients can send a "new" op (file operation) and crash the MDS
- Venky Shankar wrote:
> https://github.com/ceph/ceph/pull/44976
merged
03/01/2022
- 06:18 AM Backport #54256 (In Progress): pacific: mgr/volumes: uid/gid of the clone is incorrect
- 06:18 AM Backport #54335 (In Progress): pacific: mgr/volumes: A deleted subvolumegroup when listed using "...
- 06:18 AM Backport #54332 (In Progress): pacific: mgr/volumes: File Quota attributes not getting inherited ...
02/28/2022
- 05:59 PM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- Hi,
Any update on this? :)
- 03:48 PM Backport #54218: quincy: mds: seg fault in expire_recursive
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45097
merged
- 02:21 PM Bug #54411 (Triaged): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 fi...
- 02:21 PM Backport #54407 (In Progress): quincy: mds: seg fault in expire_recursive
- 01:45 PM Bug #54406 (Triaged): cephadm/mgr-nfs-upgrade: cluster [WRN] overall HEALTH_WARN no active mgr
- 01:04 PM Bug #54421 (Fix Under Review): mds: assert fail in Server::_dir_is_nonempty() because xlocker of ...
- 01:00 PM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- pr: https://github.com/ceph/ceph/pull/45195
- 09:45 AM Bug #54421 (Fix Under Review): mds: assert fail in Server::_dir_is_nonempty() because xlocker of ...
- ENV: Jewel ceph-10.2.2
Description:
Server::_dir_is_nonempty() always expects the inode to have the xlocker, but sometime...
- 10:24 AM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Jeff Layton wrote:
> I see this in the current batch of logs (mds.test.cephadm1.xlapqu.log):
>
> [...]
>
> ...... - 07:00 AM Backport #54420 (Rejected): octopus: mgr/volumes: uid/gid of the clone is incorrect
02/25/2022
- 06:06 PM Backport #54241: pacific: mds: clients can send a "new" op (file operation) and crash the MDS
- Venky Shankar wrote:
> https://github.com/ceph/ceph/pull/44975
merged
- 05:18 PM Bug #54411 (Resolved): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 f...
- /a/yuriw-2022-02-21_15:48:20-rados-wip-yuri7-testing-2022-02-17-0852-pacific-distro-default-smithi/6698603...
- 04:23 PM Backport #54217: pacific: client: client session state stuck in opening and hang all the time
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45100
merged
- 04:22 PM Backport #54220: pacific: mds: seg fault in expire_recursive
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45099
merged
- 04:21 PM Backport #54194: pacific: mds: mds_oft_prefetch_dirfrags default to false
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45016
merged
- 04:19 PM Backport #54161: pacific: mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44910
merged
- 04:19 PM Backport #53761: pacific: mds: mds_oft_prefetch_dirfrags = false is not qa tested
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44504
merged
- 04:18 PM Backport #53948: pacific: mgr/volumes: Failed to create clones if the source snapshot's quota is ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42932
merged
- 04:18 PM Backport #52384: pacific: pybind/mgr/volumes: Cloner threads stuck in loop trying to clone the st...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42932
merged
- 10:22 AM Backport #54333 (In Progress): quincy: mgr/volumes: File Quota attributes not getting inherited t...
- 10:21 AM Backport #54257 (In Progress): quincy: mgr/volumes: uid/gid of the clone is incorrect
- 10:21 AM Backport #54336 (In Progress): quincy: mgr/volumes: A deleted subvolumegroup when listed using "c...
- 08:29 AM Backport #52634 (In Progress): octopus: mds sends cap updates with btime zeroed out
- 08:29 AM Backport #52635 (In Progress): pacific: mds sends cap updates with btime zeroed out
- 08:28 AM Backport #52443 (In Progress): octopus: client: fix dump mds twice
- 08:27 AM Backport #51976 (Need More Info): octopus: client: make sure only to update dir dist from auth mds
- non-trivial backport
- 08:26 AM Backport #51938 (Need More Info): octopus: qa: test_full_fsync (tasks.cephfs.test_full.TestCluste...
- non-trivial backport
- 08:24 AM Backport #51936 (Need More Info): octopus: mds: improve debugging for mksnap denial
- non-trivial cherry-pick
- 08:23 AM Backport #51933 (In Progress): octopus: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather...
- 08:22 AM Backport #51831 (In Progress): octopus: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, an...
- 08:21 AM Backport #51545 (Need More Info): octopus: mgr/volumes: use a dedicated libcephfs handle for subv...
- non-trivial backport
- 08:19 AM Backport #51482 (Need More Info): octopus: osd: sent kickoff request to MDS and then stuck for 15...
- non-trivial cherry-pick
- 08:17 AM Backport #51323 (In Progress): octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionEr...
- 08:16 AM Backport #51202 (In Progress): octopus: mds: CephFS kclient gets stuck when getattr() on a certai...
- 08:15 AM Backport #50914 (In Progress): octopus: MDS heartbeat timed out between during executing MDCache:...
- 08:14 AM Backport #50849 (Need More Info): octopus: mds: "cluster [ERR] Error recovering journal 0x203: ...
- 08:13 AM Backport #50849: octopus: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file ...
- Non-trivial backport
- 08:12 AM Backport #50847 (In Progress): octopus: mds: journal recovery thread is possibly asserting with m...
- 08:10 AM Backport #50631 (In Progress): octopus: mds: Error ENOSYS: mds.a started profiler
- 02:00 AM Backport #54407 (Resolved): quincy: mds: seg fault in expire_recursive
- https://github.com/ceph/ceph/pull/45097
02/24/2022
- 10:51 PM Bug #54406 (Triaged): cephadm/mgr-nfs-upgrade: cluster [WRN] overall HEALTH_WARN no active mgr
- /a/yuriw-2022-02-21_15:48:20-rados-wip-yuri7-testing-2022-02-17-0852-pacific-distro-default-smithi/6698628...
- 07:23 PM Bug #53246: rhel 8.4 and centos stream unable to install cephfs-java
- /a/sseshasa-2022-02-24_11:27:07-rados-wip-45118-45121-quincy-testing-distro-default-smithi/6704247
- 05:09 PM Bug #54404 (New): snap-schedule retention not working as expected
- When hourly and daily snapshots are created on the same path, snap retention is not honored correctly. The daily snap...
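To make the reported setup concrete, a hedged illustration using the snap_schedule CLI; the path, periods, and retention counts are examples:
```
# Hourly and daily schedules plus retention on the same path, as in the
# report above; path and counts are illustrative.
ceph fs snap-schedule add /some/dir 1h
ceph fs snap-schedule add /some/dir 1d
ceph fs snap-schedule retention add /some/dir h 24
ceph fs snap-schedule retention add /some/dir d 7
ceph fs snap-schedule status /some/dir
```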
- 01:16 PM Bug #54384 (Fix Under Review): mds: crash due to seemingly unrecoverable metadata error
02/23/2022
- 02:27 PM Bug #54384 (Resolved): mds: crash due to seemingly unrecoverable metadata error
From: https://www.spinics.net/lists/ceph-users/msg71028.html
Reported by Wolfgang Mair...
- 09:39 AM Bug #54375 (Resolved): mgr/volumes: The 'mode' argument is not honored on idempotent subvolume cr...
- The 'mode' argument is not honored on idempotent subvolume creation of an existing subvolume.
Steps to reproduce:
1....
- 07:34 AM Bug #54285 (Fix Under Review): make stop.sh clear the evicted clients too
- 04:27 AM Bug #54374 (Resolved): mgr/snap_schedule: include timezone information in scheduled snapshots
- Scheduled snapshots are stamped with the local tz timestamp of the host/container. Including the tz information in the sn...
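A minimal sketch of the idea; the snapshot name format below is illustrative, not the module's actual format:
```
# Stamping with an explicit UTC offset makes the snapshot name unambiguous
# regardless of the host/container timezone (name format is illustrative).
date +scheduled-%Y-%m-%d-%H_%M_%S%z    # e.g. scheduled-2022-02-23-04_27_00+0000
```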
02/22/2022
- 05:03 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- I see this in the current batch of logs (mds.test.cephadm1.xlapqu.log):...
- 03:59 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- FWIW, I also turned up kernel debug logs and did this:...
- 02:27 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- I reproduced this morning again and gathered the mds logs:
ceph-post-file: bf0318cc-3e34-4d61-8895-03dfdae86c25
- 01:18 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Thx! I'll try that.
- 01:16 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Venky Shankar wrote:
> Also, I could not reproduce it running generic/070. Would it be possible to share kernel buff...
- 01:06 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Venky Shankar wrote:
> But, I cannot see the unlink request coming in or a failure with -EACCES for the path in ques...
- 09:25 AM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Jeff,
For this unlink request...
- 02:02 PM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Hey Venky,
yes, the workaround fixes my Ceph 13 cluster (until the next restart).
Whether it should be marked a...
- 09:43 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Niklas,
Were you able to get things to a stable state after following your note https://tracker.ceph.com/issues/54...
- 11:17 AM Bug #48673: High memory usage on standby replay MDS
- Yongseok/Mykola - Patrick is on PTO - I'll try to make progress on this issue.
Yongseok, you mention https://githu...
- 09:22 AM Bug #54052 (In Progress): mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr r...
- 04:29 AM Cleanup #54362 (Fix Under Review): client: do not release the global snaprealm until unmounting
- 04:26 AM Cleanup #54362 (Resolved): client: do not release the global snaprealm until unmounting
- The global snaprealm gets created and then destroyed immediately every time it is updated.
02/21/2022
- 03:08 PM Bug #54345 (Fix Under Review): mds: try to reset heartbeat when fetching or committing.
- 03:05 PM Bug #54345 (Resolved): mds: try to reset heartbeat when fetching or committing.
- When there are too many dentries to load, the heartbeat may not get a chance to be reset.
- 01:38 PM Bug #54271 (Triaged): mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- 10:05 AM Backport #54217 (In Progress): pacific: client: client session state stuck in opening and hang al...
- 10:02 AM Backport #54220 (In Progress): pacific: mds: seg fault in expire_recursive
- 09:59 AM Backport #54216 (In Progress): quincy: client: client session state stuck in opening and hang all...
- 09:55 AM Backport #54218 (In Progress): quincy: mds: seg fault in expire_recursive
- 09:10 AM Backport #54336 (Resolved): quincy: mgr/volumes: A deleted subvolumegroup when listed using "ceph...
- https://github.com/ceph/ceph/pull/45165
- 09:10 AM Backport #54335 (Resolved): pacific: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- https://github.com/ceph/ceph/pull/45205
- 09:10 AM Backport #54334 (Rejected): octopus: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- 09:10 AM Backport #54333 (Resolved): quincy: mgr/volumes: File Quota attributes not getting inherited to t...
- https://github.com/ceph/ceph/pull/45165
- 09:10 AM Backport #54332 (Resolved): pacific: mgr/volumes: File Quota attributes not getting inherited to ...
- https://github.com/ceph/ceph/pull/45205
- 09:10 AM Backport #54331 (Rejected): octopus: mgr/volumes: File Quota attributes not getting inherited to ...
- 09:07 AM Bug #54121 (Pending Backport): mgr/volumes: File Quota attributes not getting inherited to the cl...
- 09:07 AM Bug #54099 (Pending Backport): mgr/volumes: A deleted subvolumegroup when listed using "ceph fs s...
02/17/2022
- 05:52 PM Fix #54317 (Pending Backport): qa: add testing in fs:workload for different kinds of subvolumes
- Notably:
- subvolume with isolated namespace
- subvolume with (large) quota (a CLI sketch follows below)
- 09:10 AM Bug #54283 (Fix Under Review): qa/cephfs: is_mounted() depends on a mutable variable
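Circling back to Fix #54317 above, a hedged sketch of provisioning those subvolume flavors; the volume/subvolume names and size are examples:
```
# Subvolume with an isolated RADOS namespace:
ceph fs subvolume create cephfs sub_isolated --namespace-isolated
# Subvolume with a (large) quota; --size is in bytes (1 TiB here):
ceph fs subvolume create cephfs sub_quota --size 1099511627776
```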
02/16/2022
- 08:22 PM Backport #54234: quincy: qa: use cephadm to provision cephfs for fs:workloads
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44982
merged
- 02:50 PM Backport #54219 (In Progress): octopus: mds: seg fault in expire_recursive
02/15/2022
- 04:20 PM Backport #54160: quincy: mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44909
merged
- 04:19 PM Backport #54123: quincy:mgr/volumes: Failed to create clones if the source snapshot's quota is ex...
- Kotresh Hiremath Ravishankar wrote:
> https://github.com/ceph/ceph/pull/44875
merged
- 12:51 PM Bug #54111 (Fix Under Review): data pool attached to a file system can be attached to another fil...
- 12:37 PM Bug #44100 (Fix Under Review): cephfs rsync kworker high load.
- The patchwork: https://patchwork.kernel.org/project/ceph-devel/list/?series=614517
- 11:17 AM Bug #54285 (New): make stop.sh clear the evicted clients too
- patch [1] doesn't seem to clear the evicted/blocklisted clients, which means `df` still hangs.
Steps to reproduc...
- 09:37 AM Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- Just FYI - the workaround is to remove the mds<>_openfiles objects from the metadata pool.
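A hedged sketch of that workaround; the metadata pool name and the rank-0 object names are assumptions, so list before removing:
```
# List the per-rank openfiles objects, then remove them (rank 0 shown);
# pool name "cephfs_metadata" is an assumption.
rados -p cephfs_metadata ls | grep '_openfiles'
rados -p cephfs_metadata rm mds0_openfiles.0
```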
- 09:36 AM Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- This seems to be happening in nautilus. Should check if it can be hit in master (quincy, etc.).
- 06:15 AM Bug #52641 (Resolved): snap scheduler: Traceback seen when snapshot schedule remove command is pa...
- 05:14 AM Bug #52641: snap scheduler: Traceback seen when snapshot schedule remove command is passed withou...
- Also tried to reproduce the issue after adding a snap-schedule; still no traceback observed.
[nshelke@fedora build]$ ./b...
- 04:55 AM Bug #52641: snap scheduler: Traceback seen when snapshot schedule remove command is passed withou...
- Tried the command provided in the tracker; no traceback is seen if the command is passed without required parameters.
[nshelk...
- 05:30 AM Bug #54283 (Resolved): qa/cephfs: is_mounted() depends on a mutable variable
- This can lead to bugs when the variable's value is stale or incorrect. It's safer to actually check whether the file system is m...
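The gist of the safer check, as a minimal sketch: ask the kernel rather than a cached flag (the mountpoint path is an example).
```
# Check the live mount table instead of trusting a cached boolean.
grep -qs ' /mnt/cephfs ' /proc/mounts && echo mounted || echo not mounted
```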
02/14/2022
- 03:49 PM Bug #54271 (Triaged): mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- See: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NPRA7OGSNQUI32WPZQMKH3SJNPPOSBRK/?sort=date
M...
- 09:26 AM Backport #54196 (In Progress): quincy: mds: mds_oft_prefetch_dirfrags default to false
- 09:24 AM Backport #54194 (In Progress): pacific: mds: mds_oft_prefetch_dirfrags default to false
- 09:21 AM Backport #54195 (In Progress): octopus: mds: mds_oft_prefetch_dirfrags default to false
- 04:48 AM Backport #54257: quincy: mgr/volumes: uid/gid of the clone is incorrect
- Kotresh, please post the backport.
- 04:47 AM Backport #54256: pacific: mgr/volumes: uid/gid of the clone is incorrect
- Kotresh, please post the backport.
02/11/2022
- 11:38 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Thanks Dan, I will add it to the config of the Ceph 16 cluster.
Unfortunately I can't use it for the source cluste...
- 08:26 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Niklas, you don't have to wait for that PR -- just do `ceph config set mds mds_oft_prefetch_dirfrags false` now.
F...
- 02:11 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Niklas Hambuechen wrote:
> Today I had a multi-hour CephFS outage due to a bug that I believe was discussed in vario...
- 01:31 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Thanks! That sounds like it might, yes.
From that, it seems related bugs are (I think don't have Redmine permissio...
- 01:16 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- This PR may help you: https://github.com/ceph/ceph/pull/44667.
- 12:54 AM Bug #54253 (New): Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Today I had a multi-hour CephFS outage due to a bug that I believe was discussed in various mailing lists and posts a...
- 07:45 AM Backport #54257 (Resolved): quincy: mgr/volumes: uid/gid of the clone is incorrect
- https://github.com/ceph/ceph/pull/45165
- 07:45 AM Backport #54256 (Resolved): pacific: mgr/volumes: uid/gid of the clone is incorrect
- https://github.com/ceph/ceph/pull/45205
- 07:41 AM Bug #54066 (Pending Backport): mgr/volumes: uid/gid of the clone is incorrect
- 02:16 AM Bug #44100 (In Progress): cephfs rsync kworker high load.
- Will work on it.
- 01:47 AM Bug #52438 (Fix Under Review): qa: ffsb timeout
02/10/2022
- 05:36 PM Backport #54234 (In Progress): quincy: qa: use cephadm to provision cephfs for fs:workloads
- 12:48 PM Backport #54242 (In Progress): octopus: mds: clients can send a "new" op (file operation) and cra...
- 12:42 PM Backport #54242 (Resolved): octopus: mds: clients can send a "new" op (file operation) and crash ...
- https://github.com/ceph/ceph/pull/44976
- 12:46 PM Backport #54241 (In Progress): pacific: mds: clients can send a "new" op (file operation) and cra...
- 12:41 PM Backport #54241 (Resolved): pacific: mds: clients can send a "new" op (file operation) and crash ...
- https://github.com/ceph/ceph/pull/44975
- 06:46 AM Bug #54066 (Fix Under Review): mgr/volumes: uid/gid of the clone is incorrect
- 05:47 AM Bug #54066: mgr/volumes: uid/gid of the clone is incorrect
- This is a regression caused by commit 18b85c53a (https://tracker.ceph.com/issues/53848).
Modifying the subject ac...
- 05:08 AM Bug #52982 (Fix Under Review): client: Inode::hold_caps_until should be a time from a monotonic c...
- 02:58 AM Bug #44100: cephfs rsync kworker high load.
- Hmm. IIRC the equivalent code on the user space client has much less trouble because the SnapRealm has a list of inod...
02/09/2022
- 08:21 PM Bug #54237 (Resolved): pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding path ...
- opendir() in src/pybind/cephfs/cephfs.pyx returns a generic cephfs.OSError: Errno 13 for this tracker issue https://t...
- 06:51 PM Bug #54236 (Resolved): qa/cephfs: change default timeout from 900 secs to 300
- 15 minutes is unnecessarily large as a default command timeout. Not having to wait unnecessarily on a...
- 02:41 PM Backport #54234 (Resolved): quincy: qa: use cephadm to provision cephfs for fs:workloads
- https://github.com/ceph/ceph/pull/44982
- 02:35 PM Feature #51333 (Pending Backport): qa: use cephadm to provision cephfs for fs:workloads
- 08:59 AM Bug #52438: qa: ffsb timeout
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Recent failures:
> >
> > - https://pulpito.ceph.com/vshankar-2022-02-...
- 08:14 AM Bug #52438: qa: ffsb timeout
- Venky Shankar wrote:
> Recent failures:
>
> - https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshank... - 05:54 AM Bug #52438: qa: ffsb timeout
- Recent failures:
- https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-t...
- 06:12 AM Backport #54223 (Rejected): pacific: mgr/volumes: `fs volume rename` command
- https://github.com/ceph/ceph/pull/45542
- 06:12 AM Backport #54222 (Rejected): octopus: mgr/volumes: `fs volume rename` command
- 06:11 AM Backport #54221 (Resolved): quincy: mgr/volumes: `fs volume rename` command
- https://github.com/ceph/ceph/pull/45541
- 06:10 AM Backport #54220 (Resolved): pacific: mds: seg fault in expire_recursive
- https://github.com/ceph/ceph/pull/45099
- 06:10 AM Backport #54219 (Resolved): octopus: mds: seg fault in expire_recursive
- https://github.com/ceph/ceph/pull/45055
- 06:10 AM Backport #54218 (In Progress): quincy: mds: seg fault in expire_recursive
- https://github.com/ceph/ceph/pull/45097
- 06:10 AM Backport #54217 (Resolved): pacific: client: client session state stuck in opening and hang all t...
- https://github.com/ceph/ceph/pull/45100
- 06:10 AM Backport #54216 (Resolved): quincy: client: client session state stuck in opening and hang all th...
- https://github.com/ceph/ceph/pull/45098
- 06:09 AM Bug #53805 (Pending Backport): mds: seg fault in expire_recursive
- 06:07 AM Bug #53911 (Pending Backport): client: client session state stuck in opening and hang all the time
- 06:05 AM Feature #51162 (Pending Backport): mgr/volumes: `fs volume rename` command
- 06:01 AM Feature #53903 (Resolved): mount: add option to support fake mounts
02/08/2022
- 07:32 PM Bug #54107 (Fix Under Review): kclient: hang during umount
- 05:14 PM Feature #54205 (New): hard links: explore using a "referent" inode whenever hard linking
- https://docs.ceph.com/en/latest/dev/cephfs-snapshots/#hard-links
Part of the reason we have to snapshot hard linke...
- 01:10 PM Backport #54196 (Resolved): quincy: mds: mds_oft_prefetch_dirfrags default to false
- https://github.com/ceph/ceph/pull/45017
- 01:10 PM Backport #54195 (Resolved): octopus: mds: mds_oft_prefetch_dirfrags default to false
- https://github.com/ceph/ceph/pull/45015
- 01:10 PM Backport #54194 (Resolved): pacific: mds: mds_oft_prefetch_dirfrags default to false
- https://github.com/ceph/ceph/pull/45016
- 01:07 PM Bug #53952 (Pending Backport): mds: mds_oft_prefetch_dirfrags default to false
02/07/2022
- 08:54 AM Bug #52641: snap scheduler: Traceback seen when snapshot schedule remove command is passed withou...
- Nikhil, please take a look.
- 04:52 AM Bug #54066: mgr/volumes: uid/gid of the clone is incorrect
- https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/...
- 02:53 AM Bug #53859: qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Patrick Donnelly wrote:
> > > That explains the current behavior but wh...