Activity
From 02/21/2022 to 03/22/2022
03/22/2022
- 11:36 PM Backport #52636: pacific: MDSMonitor: removes MDS coming out of quorum election
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43698
merged
- 09:09 PM Backport #52680 (In Progress): pacific: Add option in `fs new` command to start rank 0 in failed ...
- 07:33 PM Backport #52875 (In Progress): pacific: qa: test_dirfrag_limit
- 01:32 PM Backport #52875: pacific: qa: test_dirfrag_limit
- Ramana,
Could you please check if backport is feasible and post one if it is?
Cheers,
Venky
- 06:50 PM Backport #52427 (In Progress): pacific: qa: "error reading sessionmap 'mds1_sessionmap'"
- 01:33 PM Backport #52427: pacific: qa: "error reading sessionmap 'mds1_sessionmap'"
- Ramana, please take this.
- 05:17 PM Backport #54223 (Rejected): pacific: mgr/volumes: `fs volume rename` command
- The feature is meant for quincy onwards.
- 05:17 PM Backport #54222 (Rejected): octopus: mgr/volumes: `fs volume rename` command
- The feature is meant for Ceph quincy onwards.
- 05:03 PM Feature #51162 (Pending Backport): mgr/volumes: `fs volume rename` command
- 05:03 PM Feature #51162 (Rejected): mgr/volumes: `fs volume rename` command
- The feature is meant for quincy and later releases.
- 12:31 AM Feature #51162: mgr/volumes: `fs volume rename` command
- Venky, this tracker was originally planned for quincy. It depends on a feature tracker https://tracker.ceph.com/issue...
- 04:36 PM Fix #54317 (Fix Under Review): qa: add testing in fs:workload for different kinds of subvolumes
- 02:18 PM Backport #54478: pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not exp...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45351
merged
- 02:15 PM Backport #54256: pacific: mgr/volumes: uid/gid of the clone is incorrect
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45205
merged
- 02:15 PM Backport #54335: pacific: mgr/volumes: A deleted subvolumegroup when listed using "ceph fs subvol...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45205
merged
https://tracker.ceph.com/issues/54332
- 02:15 PM Backport #54332: pacific: mgr/volumes: File Quota attributes not getting inherited to the cloned ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45205
merged
- 05:36 AM Feature #54978 (In Progress): cephfs-top: addition of filesystem menu (improving GUI)
03/21/2022
- 10:43 PM Backport #54223 (In Progress): pacific: mgr/volumes: `fs volume rename` command
- 10:36 PM Backport #54221 (In Progress): quincy: mgr/volumes: `fs volume rename` command
- 05:22 PM Bug #54670 (Duplicate): crash: Client::_get_vino(Inode*)
- Duplicate of https://tracker.ceph.com/issues/54743
- 01:04 PM Feature #54978 (Resolved): cephfs-top: addition of filesystem menu (improving GUI)
- It deals with the option of having a menu for selecting filesystems:
1. to display all filesystems' metrics at once.
2...
- 12:54 PM Bug #54557 (Triaged): scrub repair does not clear earlier damage health status
- 12:53 PM Bug #54560 (Triaged): snap_schedule: avoid throwing traceback for bad or missing arguments
- 12:50 PM Bug #54606 (Triaged): check-counter task runs till max job timeout
- 12:49 PM Bug #54625 (Triaged): Issue removing subvolume with retained snapshots - Possible quincy regression?
- 12:47 PM Bug #54653 (Triaged): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map...
- 12:43 PM Bug #54743 (Triaged): crash: Client::_get_vino(Inode*)
- 12:42 PM Bug #54971 (Triaged): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics....
- 10:20 AM Bug #54971 (Resolved): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics...
- Seen here - https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-def...
- 11:43 AM Bug #54976 (Duplicate): mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recov...
- https://tracker.ceph.com/issues/44565
- 11:37 AM Bug #54976 (Duplicate): mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recov...
- https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/...
03/19/2022
- 01:30 AM Bug #54961 (New): crash: std::_Rb_tree<metareqid_t, metareqid_t, std::_Identity<metareqid_t>, std...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9a2bf47292bd4dc84dcdf199...
- 01:30 AM Bug #54959 (New): crash: tcmalloc::ThreadCache::FetchFromCentralCache(unsigned int, int, void* (*...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4e5d7408c2347625b1145ba6...
- 01:29 AM Bug #54943 (Duplicate): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [w...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5b6323621cb40594236b5ec8...
- 01:27 AM Bug #54893 (New): crash: pthread_kill()
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9fb644e7578933dc792e46ec...
- 01:25 AM Bug #54840 (New): crash: void MDCache::handle_cache_rejoin_weak(ceph::cref_t<MMDSCacheRejoin>&): ...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e688dea9acbbc7db8676a075...
- 01:25 AM Bug #54834 (Duplicate): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&):...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fbdb1c22c9e19c3039dd2acf...
- 01:25 AM Bug #54833 (Pending Backport): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<ML...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=93eb25af7720dff70a580dca...
- 01:25 AM Bug #54824 (New): crash: pthread_kill()
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6cbe446e1c71d34d8441b9f2...
- 01:24 AM Bug #54798 (New): crash: double const ceph::common::ConfigProxy::get_val<double>(std::basic_strin...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a6be1eafcec3445c3e9779d3...
- 01:23 AM Bug #54765 (New): crash: void MDCache::rejoin_send_rejoins(): assert(auth >= 0)
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=671c68d3b0d2638ffcd6dd94...
- 01:23 AM Bug #54760 (Closed): crash: void CDir::try_remove_dentries_for_stray(): assert(dn->get_linkage()-...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3a7fca349de2f63168745e7a...
- 01:22 AM Bug #54747 (New): crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): assert(ino...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=13291700f923cdc78c6a9b9d...
- 01:22 AM Bug #54743 (Triaged): crash: Client::_get_vino(Inode*)
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7160918e1db8c7e75cc34967...
- 01:22 AM Bug #54741 (New): crash: MDSTableClient::got_journaled_ack(unsigned long)
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=366ab44cb3c1d002359d6d1d...
- 01:22 AM Bug #54730 (Resolved): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state ==...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0cc462317baee377357357c8...
- 01:21 AM Bug #54715 (New): crash: void MDCache::handle_cache_rejoin_weak(ceph::cref_t<MMDSCacheRejoin>&): ...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=749d32887b2f7880076f66d4...
- 01:20 AM Bug #54701 (Resolved): crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CD...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3aaddcbc011a81127704b874...
- 01:19 AM Bug #54680 (New): crash: int Client::_do_remount(bool): abort
- *New crash events were reported via Telemetry with newer versions (['16.2.4', '16.2.7']) than encountered in Tracke...
- 01:19 AM Bug #54670 (Duplicate): crash: Client::_get_vino(Inode*)
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bbf2e21ab50312877746899b...
- 01:18 AM Bug #54665 (New): crash: void ObjectCacher::bh_write_commit(int64_t, sobject_t, std::vector<std::...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=40f5abbee44ea6074376240b...
- 01:18 AM Bug #54653 (Resolved): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_ma...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=89d76bc0664d80c1ac55706e...
- 01:17 AM Bug #54644 (New): crash: void SessionMap::replay_open_sessions(version_t, std::map<client_t, enti...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b4f84116e68bc35269f3752a...
- 01:17 AM Bug #54643 (Duplicate): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): ass...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3bfe2bce55d41f67f48676a9...
- 01:17 AM Bug #54636 (New): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOC...
- http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b5ee69eea19cfb224a80033a...
03/18/2022
- 09:10 PM Bug #54625 (Resolved): Issue removing subvolume with retained snapshots - Possible quincy regress...
- I'm hitting a situation with test code that occurs only on quincy at this time.
To summarize:
* ceph fs subvolume c...
- 01:20 PM Bug #44565 (New): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == L...
- Hit this here: https://pulpito.ceph.com/vshankar-2022-03-18_02:56:29-fs:upgrade-wip-vshankar-testing-20220317-101203-...
03/17/2022
- 09:50 PM Backport #54477: quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expe...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45331
merged
- 02:40 PM Backport #53760 (In Progress): pacific: snap scheduler: cephfs snapshot schedule status doesn't l...
- 01:35 PM Feature #50470 (Fix Under Review): cephfs-top: multiple file system support
- 01:16 PM Backport #54532 (In Progress): pacific: mds,client: support getvxattr RPC
- 01:06 PM Bug #48805 (Resolved): mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blog...
- * now available on master and quincy
- 12:21 PM Bug #54606: check-counter task runs till max job timeout
- Maybe related to: https://tracker.ceph.com/issues/50546
- 12:20 PM Bug #54606 (Triaged): check-counter task runs till max job timeout
- Seen here - http://pulpito.front.sepia.ceph.com/yuriw-2022-03-14_18:57:01-fs-wip-yuri2-testing-2022-03-14-0946-quincy...
- 11:50 AM Bug #54557: scrub repair does not clear earlier damage health status
- Milind asked me to try: After you run "scrub repair" followed by a "scrub" without any issues, and if the "damage ls"...
- 06:49 AM Backport #54573 (In Progress): pacific: mgr/volumes: The 'mode' argument is not honored on idempo...
03/16/2022
- 10:20 AM Backport #54578 (Resolved): quincy: pybind/cephfs: Add mapping for Errno 13: Permission Denied and...
- https://github.com/ceph/ceph/pull/46647
- 10:20 AM Backport #54577 (Resolved): pacific: pybind/cephfs: Add mapping for Errno 13: Permission Denied an...
- https://github.com/ceph/ceph/pull/46646
- 10:15 AM Bug #54237 (Pending Backport): pybind/cephfs: Add mapping for Errno 13: Permission Denied and addi...
- 06:20 AM Backport #54574 (In Progress): quincy: mgr/volumes: The 'mode' argument is not honored on idempot...
- 03:55 AM Backport #54574 (Resolved): quincy: mgr/volumes: The 'mode' argument is not honored on idempotent...
- https://github.com/ceph/ceph/pull/45405
- 03:55 AM Backport #54573 (Resolved): pacific: mgr/volumes: The 'mode' argument is not honored on idempoten...
- https://github.com/ceph/ceph/pull/45474
- 03:54 AM Bug #54375 (Pending Backport): mgr/volumes: The 'mode' argument is not honored on idempotent subv...
03/15/2022
- 11:18 AM Bug #54560 (Pending Backport): snap_schedule: avoid throwing traceback for bad or missing arguments
- Validate all arguments before executing the snap_schedule and retention commands.
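The fix direction noted above (reject bad arguments up front with a clean error instead of throwing a traceback mid-command) can be sketched outside the mgr module. This is an illustrative sketch only; the spec format and function names are assumptions, not the actual snap_schedule code:

```python
import re

# Hypothetical retention-spec validator: accepts strings like "24h4d"
# (count + period letter, repeated). Illustrative, not the mgr module's grammar.
_SPEC_RE = re.compile(r"^(\d+[hdwmy])+$")

def validate_retention_spec(spec: str) -> None:
    """Raise a clean ValueError on bad input instead of failing deep inside."""
    if not isinstance(spec, str) or not _SPEC_RE.fullmatch(spec):
        raise ValueError(f"invalid retention spec: {spec!r}")

def run_retention_command(spec: str) -> list:
    # Validate *before* commencing execution, per the tracker note.
    validate_retention_spec(spec)
    # Only then parse and act on the spec.
    return [(int(n), p) for n, p in re.findall(r"(\d+)([hdwmy])", spec)]
```

A bad spec now fails with a single, user-facing error message rather than an unhandled traceback.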
- 08:39 AM Bug #53750 (Resolved): mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
- 08:36 AM Backport #53865 (Resolved): octopus: mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
- 02:54 AM Bug #54557 (Pending Backport): scrub repair does not clear earlier damage health status
- From Chris Palmer on cpeh-users.ceph.io mailing list ...
Reading this thread made me realise I had overlooked ceph... - 01:37 AM Bug #54411: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem i...
- Thanks Kotresh. The mgr connects to the cephfs cluster via libcephfs and performs the required internal filesystem operations:
...
03/14/2022
- 03:28 PM Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created an...
- Nikhil,
Please take this (Xiubo is a bit tied up with some other work).
Cheers,
Venky
- 02:26 PM Documentation #54551 (Resolved): docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds ca...
- The docu _ADDING AN MDS_ starts by @Create an mds data point /var/lib/ceph/mds/ceph-${id}@ (according to Google, this...
- 01:13 PM Backport #54533 (In Progress): quincy: mds,client: support getvxattr RPC
- 12:28 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- The issue happens when the auth mds for the primary hardlink sends a `dentry_unlink' message to replica MDSs and the repli...
- 09:16 AM Bug #54546 (New): mds: crash due to corrupt inode and omap entry
- A corrupted on-disk inode causes the MDS to crash with an assert. The backtrace looks something like:...
- 07:17 AM Bug #54411 (Fix Under Review): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon...
- 06:16 AM Bug #54411: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem i...
- This issue is similar to https://tracker.ceph.com/issues/53293, which is for the kclient. But this one is for libce...
- 06:08 AM Bug #54411: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem i...
- The mds crashed in :...
- 05:29 AM Bug #54411 (In Progress): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); ...
03/11/2022
- 10:16 AM Backport #54478 (In Progress): pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then p...
- 03:06 AM Backport #54533 (Resolved): quincy: mds,client: support getvxattr RPC
- https://github.com/ceph/ceph/pull/45377
- 03:06 AM Backport #54532 (Resolved): pacific: mds,client: support getvxattr RPC
- https://github.com/ceph/ceph/pull/45487
- 03:00 AM Bug #51062 (Pending Backport): mds,client: support getvxattr RPC
03/10/2022
- 10:48 PM Bug #48873: test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cluster deployment fa...
- /a/yuriw-2022-03-10_01:04:51-rados-wip-yuri5-testing-2022-03-07-0958-distro-default-smithi/6728547
- 08:51 AM Backport #54477 (In Progress): quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then pa...
- 06:29 AM Bug #54512 (Closed): 'client ls' shows only one client
- My bad, this is not a bug.
- 05:45 AM Bug #54512 (Closed): 'client ls' shows only one client
- 'client ls' shows only one client when there are two clients mounting 2 different filesystems.
Steps to reproduce:...
03/09/2022
- 03:55 PM Feature #54472: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- Venky Shankar wrote:
> The user-metadata could be in a separate section in .meta, probably having the subvolume uu...
- 06:43 AM Bug #54461 (Fix Under Review): ffsb.sh test failure
- I just reverted the buggy commit because these two issues contradict each other. The old issue needs it to close th...
03/08/2022
- 09:05 PM Bug #54501 (Pending Backport): libcephfs: client needs to update the mtime and change attr when s...
- This issue was identified by Jeff here, https://bugzilla.redhat.com/show_bug.cgi?id=1975689#c21
The libcephfs clie...
- 01:48 PM Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
- Venky Shankar wrote:
> Jeff thinks it might be a permission issue.
This is a different one - https://tracker.ceph...
- 10:32 AM Bug #53911: client: client session state stuck in opening and hang all the time
- This fix introduces a new bug: https://tracker.ceph.com/issues/54461.
- 10:28 AM Bug #54461: ffsb.sh test failure
- This bug was introduced by the fix for https://tracker.ceph.com/issues/53911.
- 08:09 AM Bug #54461: ffsb.sh test failure
- In remote/smithi156/log/ceph-mds.b.log.gz, we can see that the reconnection is denied by mds.b:...
- 07:50 AM Bug #54461: ffsb.sh test failure
- It failed in ffsb code:...
- 10:11 AM Backport #54479 (In Progress): pacific: mgr/stats: be resilient to offline MDS rank-0
- 10:03 AM Backport #54480 (In Progress): quincy: mgr/stats: be resilient to offline MDS rank-0
- 09:45 AM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- OK - super easy to reproduce with some directory pinning. Steps:
(requires a ceph user with path restricted caps)
...
- 09:25 AM Bug #54375 (Fix Under Review): mgr/volumes: The 'mode' argument is not honored on idempotent subv...
03/07/2022
- 01:47 PM Backport #54477: quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expe...
- Nikhil, please take this.
- 05:05 AM Backport #54477 (Resolved): quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path ...
- https://github.com/ceph/ceph/pull/45331
- 01:46 PM Backport #54480: quincy: mgr/stats: be resilient to offline MDS rank-0
- Jos, please take this.
- 05:11 AM Backport #54480 (Resolved): quincy: mgr/stats: be resilient to offline MDS rank-0
- https://github.com/ceph/ceph/pull/45291
- 01:46 PM Bug #54460 (Triaged): snaptest-multiple-capsnaps.sh test failure
- 01:45 PM Bug #54461 (Triaged): ffsb.sh test failure
- 01:44 PM Bug #54462 (Triaged): Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 w...
- Jeff thinks it might be a permission issue.
- 01:19 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Hitting this bug involves having hardlinks to inodes which are authoritative in another active mds. When a non-primar...
- 12:18 PM Bug #52438: qa: ffsb timeout
- Created a PR in ffsb, https://github.com/ceph/ffsb/pull/3, to fix it.
- 12:16 PM Bug #52438: qa: ffsb timeout
- Actually the `ffsb` test finished very fast and took 346.70 seconds:
```
2022-02-28T08:29:36.007 INFO:tasks.worku...
```
- 05:11 AM Backport #54479 (Resolved): pacific: mgr/stats: be resilient to offline MDS rank-0
- https://github.com/ceph/ceph/pull/45293
- 05:07 AM Bug #50033 (Pending Backport): mgr/stats: be resilient to offline MDS rank-0
- 05:05 AM Backport #54478 (Resolved): pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then path...
- https://github.com/ceph/ceph/pull/45351
- 05:03 AM Bug #51062 (Resolved): mds,client: support getvxattr RPC
- 05:01 AM Bug #54049 (Pending Backport): ceph-fuse: If nonroot user runs ceph-fuse mount on then path is no...
03/04/2022
- 10:00 AM Feature #54472: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- Just FYI - mgr/volumes uses .meta file as a metadata store for persisting subvolume related information (path, state,...
- 09:46 AM Feature #54472 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- This is similar to RBDs `image-meta get/set/list/remove' interfaces. Updating an existing key should be supported.
...
- 09:45 AM Bug #54237 (Fix Under Review): pybind/cephfs: Add mapping for Errno 13: Permission Denied and addi...
03/03/2022
- 04:00 PM Backport #54257 (Resolved): quincy: mgr/volumes: uid/gid of the clone is incorrect
- 03:44 PM Backport #54257: quincy: mgr/volumes: uid/gid of the clone is incorrect
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45165
merged
- 02:54 PM Bug #54463: mds: flush mdlog if locked and still has wanted caps not satisfied
- For more detail, please see the BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2049653
- 02:45 PM Bug #54463 (Resolved): mds: flush mdlog if locked and still has wanted caps not satisfied
- In _do_cap_update(), if one client is releasing the Fw caps, the
relevant client range will be erased, and then new_ma...
- 02:44 PM Bug #54462 (Duplicate): Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055...
- https://pulpito.ceph.com/yuriw-2022-03-01_20:21:46-fs-wip-yuri-testing-2022-02-28-0823-quincy-distro-default-smithi/6...
- 02:36 PM Bug #54461 (Resolved): ffsb.sh test failure
- https://pulpito.ceph.com/yuriw-2022-03-01_20:21:46-fs-wip-yuri-testing-2022-02-28-0823-quincy-distro-default-smithi/6...
- 02:03 PM Bug #54460 (Resolved): snaptest-multiple-capsnaps.sh test failure
- Test failure on quincy run:
https://pulpito.ceph.com/yuriw-2022-03-01_20:21:46-fs-wip-yuri-testing-2022-02-28-0823...
- 01:59 PM Bug #54459 (Fix Under Review): fs:upgrade fails with "hit max job timeout"
- 01:55 PM Bug #54459 (Rejected): fs:upgrade fails with "hit max job timeout"
- The fs:upgrade test upgrades from pacific v16.2.4 up to latest. When running with a distro kernel, which might not unders...
03/02/2022
- 05:08 PM Backport #51201: octopus: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44800
merged
- 04:42 PM Backport #53865: octopus: mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44624
merged
- 03:50 PM Backport #54242: octopus: mds: clients can send a "new" op (file operation) and crash the MDS
- Venky Shankar wrote:
> https://github.com/ceph/ceph/pull/44976
merged
03/01/2022
- 06:18 AM Backport #54256 (In Progress): pacific: mgr/volumes: uid/gid of the clone is incorrect
- 06:18 AM Backport #54335 (In Progress): pacific: mgr/volumes: A deleted subvolumegroup when listed using "...
- 06:18 AM Backport #54332 (In Progress): pacific: mgr/volumes: File Quota attributes not getting inherited ...
02/28/2022
- 05:59 PM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- Hi
Any update on this? :)
- 03:48 PM Backport #54218: quincy: mds: seg fault in expire_recursive
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45097
merged
- 02:21 PM Bug #54411 (Triaged): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 fi...
- 02:21 PM Backport #54407 (In Progress): quincy: mds: seg fault in expire_recursive
- 01:45 PM Bug #54406 (Triaged): cephadm/mgr-nfs-upgrade: cluster [WRN] overall HEALTH_WARN no active mgr
- 01:04 PM Bug #54421 (Fix Under Review): mds: assert fail in Server::_dir_is_nonempty() because xlocker of ...
- 01:00 PM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- pr: https://github.com/ceph/ceph/pull/45195
- 09:45 AM Bug #54421 (Fix Under Review): mds: assert fail in Server::_dir_is_nonempty() because xlocker of ...
- ENV: Jewel ceph-10.2.2
Description:
Server::_dir_is_nonempty() always expects the inode to have the xlocker, but sometime...
- 10:24 AM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Jeff Layton wrote:
> I see this in the current batch of logs (mds.test.cephadm1.xlapqu.log):
>
> [...]
>
> ...
- 07:00 AM Backport #54420 (Rejected): octopus: mgr/volumes: uid/gid of the clone is incorrect
02/25/2022
- 06:06 PM Backport #54241: pacific: mds: clients can send a "new" op (file operation) and crash the MDS
- Venky Shankar wrote:
> https://github.com/ceph/ceph/pull/44975
merged
- 05:18 PM Bug #54411 (Resolved): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 f...
- /a/yuriw-2022-02-21_15:48:20-rados-wip-yuri7-testing-2022-02-17-0852-pacific-distro-default-smithi/6698603...
- 04:23 PM Backport #54217: pacific: client: client session state stuck in opening and hang all the time
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45100
merged - 04:22 PM Backport #54220: pacific: mds: seg fault in expire_recursive
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45099
merged - 04:21 PM Backport #54194: pacific: mds: mds_oft_prefetch_dirfrags default to false
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45016
merged - 04:19 PM Backport #54161: pacific: mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44910
merged - 04:19 PM Backport #53761: pacific: mds: mds_oft_prefetch_dirfrags = false is not qa tested
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44504
merged - 04:18 PM Backport #53948: pacific: mgr/volumes: Failed to create clones if the source snapshot's quota is ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42932
merged - 04:18 PM Backport #52384: pacific: pybind/mgr/volumes: Cloner threads stuck in loop trying to clone the st...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42932
merged - 10:22 AM Backport #54333 (In Progress): quincy: mgr/volumes: File Quota attributes not getting inherited t...
- 10:21 AM Backport #54257 (In Progress): quincy: mgr/volumes: uid/gid of the clone is incorrect
- 10:21 AM Backport #54336 (In Progress): quincy: mgr/volumes: A deleted subvolumegroup when listed using "c...
- 08:29 AM Backport #52634 (In Progress): octopus: mds sends cap updates with btime zeroed out
- 08:29 AM Backport #52635 (In Progress): pacific: mds sends cap updates with btime zeroed out
- 08:28 AM Backport #52443 (In Progress): octopus: client: fix dump mds twice
- 08:27 AM Backport #51976 (Need More Info): octopus: client: make sure only to update dir dist from auth mds
- non-trivial backport
- 08:26 AM Backport #51938 (Need More Info): octopus: qa: test_full_fsync (tasks.cephfs.test_full.TestCluste...
- non-trivial backport
- 08:24 AM Backport #51936 (Need More Info): octopus: mds: improve debugging for mksnap denial
- non-trivial cherry-pick
- 08:23 AM Backport #51933 (In Progress): octopus: mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather...
- 08:22 AM Backport #51831 (In Progress): octopus: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, an...
- 08:21 AM Backport #51545 (Need More Info): octopus: mgr/volumes: use a dedicated libcephfs handle for subv...
- non-trivial backport
- 08:19 AM Backport #51482 (Need More Info): octopus: osd: sent kickoff request to MDS and then stuck for 15...
- non-trivial cherry-pick
- 08:17 AM Backport #51323 (In Progress): octopus: qa: test_data_scan.TestDataScan.test_pg_files AssertionEr...
- 08:16 AM Backport #51202 (In Progress): octopus: mds: CephFS kclient gets stuck when getattr() on a certai...
- 08:15 AM Backport #50914 (In Progress): octopus: MDS heartbeat timed out between during executing MDCache:...
- 08:14 AM Backport #50849 (Need More Info): octopus: mds: "cluster [ERR] Error recovering journal 0x203: ...
- 08:13 AM Backport #50849: octopus: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file ...
- Non-trivial backport
- 08:12 AM Backport #50847 (In Progress): octopus: mds: journal recovery thread is possibly asserting with m...
- 08:10 AM Backport #50631 (In Progress): octopus: mds: Error ENOSYS: mds.a started profiler
- 02:00 AM Backport #54407 (Resolved): quincy: mds: seg fault in expire_recursive
- https://github.com/ceph/ceph/pull/45097
02/24/2022
- 10:51 PM Bug #54406 (Triaged): cephadm/mgr-nfs-upgrade: cluster [WRN] overall HEALTH_WARN no active mgr
- /a/yuriw-2022-02-21_15:48:20-rados-wip-yuri7-testing-2022-02-17-0852-pacific-distro-default-smithi/6698628...
- 07:23 PM Bug #53246: rhel 8.4 and centos stream unable to install cephfs-java
- /a/sseshasa-2022-02-24_11:27:07-rados-wip-45118-45121-quincy-testing-distro-default-smithi/6704247
- 05:09 PM Bug #54404 (New): snap-schedule retention not working as expected
- When hourly and daily snapshots are created on the same path, snap retention is not honored correctly. The daily snap...
- 01:16 PM Bug #54384 (Fix Under Review): mds: crash due to seemingly unrecoverable metadata error
02/23/2022
- 02:27 PM Bug #54384 (Resolved): mds: crash due to seemingly unrecoverable metadata error
- From: https://www.spinics.net/lists/ceph-users/msg71028.html
Reported by Wolfgang Mair...
- 09:39 AM Bug #54375 (Resolved): mgr/volumes: The 'mode' argument is not honored on idempotent subvolume cr...
- The 'mode' argument is not honored on idempotent subvolume creation of an existing subvolume.
Steps to reproduce:
1....
- 07:34 AM Bug #54285 (Fix Under Review): make stop.sh clear the evicted clients too
- 04:27 AM Bug #54374 (Resolved): mgr/snap_schedule: include timezone information in scheduled snapshots
- Scheduled snapshots are stamped with local tz timestamp of the host/container. Including the tz information in the sn...
02/22/2022
- 05:03 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- I see this in the current batch of logs (mds.test.cephadm1.xlapqu.log):...
- 03:59 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- FWIW, I also turned up kernel debug logs and did this:...
- 02:27 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- I reproduced this morning again and gathered the mds logs:
ceph-post-file: bf0318cc-3e34-4d61-8895-03dfdae86c25
- 01:18 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Thx! I'll try that.
- 01:16 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Venky Shankar wrote:
> Also, I could not reproduce it running generic/070. Would it be possible to share kernel buff...
- 01:06 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Venky Shankar wrote:
> But, I cannot see the unlink request coming in or a failure with -EACCES for the path in ques...
- 09:25 AM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Jeff,
For this unlink request...
- 02:02 PM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Hey Venky,
yes, the workaround fixes my Ceph 13 cluster (until the next restart).
Whether it should be marked a...
- 09:43 AM Bug #54253: Avoid OOM exceeding 10x MDS cache limit on restart after many files were opened
- Niklas,
Were you able to get things to a stable state after following your note https://tracker.ceph.com/issues/54... - 11:17 AM Bug #48673: High memory usage on standby replay MDS
- Yongseok/Mykola - Patrick is on PTO - I'll try to make progress on this issue.
Yongseok, you mention https://githu...
- 09:22 AM Bug #54052 (In Progress): mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr r...
- 04:29 AM Cleanup #54362 (Fix Under Review): client: do not release the global snaprealm until unmounting
- 04:26 AM Cleanup #54362 (Resolved): client: do not release the global snaprealm until unmounting
- The global snaprealm is created and then immediately destroyed every time it is updated.
02/21/2022
- 03:08 PM Bug #54345 (Fix Under Review): mds: try to reset heartbeat when fetching or committing.
- 03:05 PM Bug #54345 (Resolved): mds: try to reset heartbeat when fetching or committing.
- When there are too many dentries to load, the heartbeat may not get a chance to be reset.
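The pattern behind this fix (periodically resetting a heartbeat from inside a long fetch/commit loop, so a busy daemon is not mistaken for a hung one) can be sketched as follows. This is an illustrative sketch only; the class and function names are assumptions, not the MDS C++ code:

```python
import time

class Heartbeat:
    """Toy watchdog: 'expired' means no reset within the timeout window."""
    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_reset = time.monotonic()

    def reset(self) -> None:
        self.last_reset = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_reset > self.timeout

def load_dentries(dentries, hb: Heartbeat, batch: int = 1000) -> int:
    """Process a long batch of items, resetting the heartbeat every
    `batch` items so the long loop cannot outlast the watchdog."""
    count = 0
    for i, _dentry in enumerate(dentries):
        if i % batch == 0:
            hb.reset()  # the point of the fix: reset *inside* the loop
        # ... fetch/commit work for the dentry would go here ...
        count += 1
    return count
```

Without the in-loop reset, a sufficiently large batch would let the watchdog fire even though the daemon is making progress.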
- 01:38 PM Bug #54271 (Triaged): mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
- 10:05 AM Backport #54217 (In Progress): pacific: client: client session state stuck in opening and hang al...
- 10:02 AM Backport #54220 (In Progress): pacific: mds: seg fault in expire_recursive
- 09:59 AM Backport #54216 (In Progress): quincy: client: client session state stuck in opening and hang all...
- 09:55 AM Backport #54218 (In Progress): quincy: mds: seg fault in expire_recursive
- 09:10 AM Backport #54336 (Resolved): quincy: mgr/volumes: A deleted subvolumegroup when listed using "ceph...
- https://github.com/ceph/ceph/pull/45165
- 09:10 AM Backport #54335 (Resolved): pacific: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- https://github.com/ceph/ceph/pull/45205
- 09:10 AM Backport #54334 (Rejected): octopus: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- 09:10 AM Backport #54333 (Resolved): quincy: mgr/volumes: File Quota attributes not getting inherited to t...
- https://github.com/ceph/ceph/pull/45165
- 09:10 AM Backport #54332 (Resolved): pacific: mgr/volumes: File Quota attributes not getting inherited to ...
- https://github.com/ceph/ceph/pull/45205
- 09:10 AM Backport #54331 (Rejected): octopus: mgr/volumes: File Quota attributes not getting inherited to ...
- 09:07 AM Bug #54121 (Pending Backport): mgr/volumes: File Quota attributes not getting inherited to the cl...
- 09:07 AM Bug #54099 (Pending Backport): mgr/volumes: A deleted subvolumegroup when listed using "ceph fs s...