Activity
From 03/01/2022 to 03/30/2022
03/30/2022
- 01:42 PM Bug #55134 (Resolved): ceph pacific fails to perform fs/mirror test
- During execution of the integration tests (IBM Z, BE) the fs/mirror suite produces a set of errors related to segfault...
- 01:33 PM Bug #52260: 1 MDSs are read only | pacific 16.2.5
- cephuser2345 user wrote:
> Hi
>
> Any update on this ?:)
It's been a while since I attempted to reproduce this ...
- 12:15 PM Bug #55129: client: get stuck forever when the forward seq exceeds 256
- The kclient fixing patchwork is: https://patchwork.kernel.org/project/ceph-devel/list/?series=627352
- 12:14 PM Bug #55129 (Resolved): client: get stuck forever when the forward seq exceeds 256
- The type of 'num_fwd' in ceph 'MClientRequestForward' is 'int32_t',
while in 'ceph_mds_request_head' the type is '__...
- 11:43 AM Feature #55126: mds: add perf counter to record slow replies
- The reviewer suggested that the PR could be backported, so I created the tracker here.
- 11:42 AM Feature #55126: mds: add perf counter to record slow replies
- Though we have the MDS_HEALTH_SLOW_METADATA_IO and MDS_HEALTH_SLOW_REQUEST health alerts, those are not
precise nor a...
- 11:42 AM Feature #55126 (Pending Backport): mds: add perf counter to record slow replies
- 09:23 AM Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- Concerning my question on how to specify the host for the MDS, Google helped:
- Add 2 hosts by @ceph orch host add...
- 09:17 AM Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- Hi Venky,
yes and no - the change makes it clear that a directory has to be created.
But is there a canonical pa...
- 09:00 AM Documentation #54551 (Fix Under Review): docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-a...
- Thomas,
Do the documentation changes make things clear - https://ceph--45639.org.readthedocs.build/en/45639/cep...
- 06:17 AM Feature #55121 (Resolved): cephfs-top: new options to limit and order-by
- Based on the suggestion in the BZ [1], create two new options for cephfs-top:
1. Limit the number of clients to be...
03/29/2022
- 01:37 PM Bug #55110 (Fix Under Review): mount.ceph: mount helper incorrectly passes `ms_mode' mount option...
- 01:17 PM Bug #55110 (Pending Backport): mount.ceph: mount helper incorrectly passes `ms_mode' mount option...
- seen here - http://pulpito.front.sepia.ceph.com/yuriw-2022-03-28_19:24:40-smoke-quincy-distro-default-smithi/6765567/...
- 01:28 PM Bug #55112 (Resolved): cephfs-shell: saving files doesn't work as expected
- The "get" command in cephfs-shell doesn't behave as expected. Suppose I do this to save a file in a directory that li...
- 01:22 PM Bug #55111: cephfs-shell: tab completion throwing errors
- I have a subvolume on one of my filesystems, and when I try to walk down into that subvolume using tab-completion in ...
- 01:18 PM Bug #55111 (New): cephfs-shell: tab completion throwing errors
- 11:48 AM Bug #55108 (New): cephfs: recursive stats (ceph.dir.rbytes) on snapshotted directory shows invali...
- I am observing that 'ceph.dir.rbytes' shows the correct value on a subvolume snapshot as long as the subvolume data is ...
- 10:03 AM Bug #54625 (Fix Under Review): Issue removing subvolume with retained snapshots - Possible quincy...
03/28/2022
- 03:13 PM Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
- Great, thank you for the assistance.
I've removed the unneeded subvolume delete action from the test and it is now p...
- 02:20 PM Backport #54573: pacific: mgr/volumes: The 'mode' argument is not honored on idempotent subvolume...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45474
merged
- 02:16 PM Backport #52635: pacific: mds sends cap updates with btime zeroed out
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45163
merged
- 02:16 PM Backport #52875: pacific: qa: test_dirfrag_limit
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45565
merged
- 02:15 PM Backport #52680: pacific: Add option in `fs new` command to start rank 0 in failed state
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45565
merged
- 02:14 PM Backport #52427: pacific: qa: "error reading sessionmap 'mds1_sessionmap'"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45564
merged
- 01:40 PM Backport #55055 (In Progress): quincy: mgr/snap-schedule: scheduled snapshots are not created aft...
- 01:40 PM Backport #55056 (In Progress): pacific: mgr/snap-schedule: scheduled snapshots are not created af...
03/25/2022
- 03:07 PM Backport #52629: octopus: pybind/mgr/volumes: first subvolume permissions set perms on /volumes a...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43224
merged
- 02:40 PM Backport #53759: pacific: mds: heartbeat timeout by _prefetch_dirfrags during up:rejoin
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44551
merged
- 06:45 AM Bug #54971 (Fix Under Review): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds...
- 01:55 AM Backport #55056 (Resolved): pacific: mgr/snap-schedule: scheduled snapshots are not created after...
- https://github.com/ceph/ceph/pull/45906
- 01:55 AM Backport #55055 (Resolved): quincy: mgr/snap-schedule: scheduled snapshots are not created after ...
- https://github.com/ceph/ceph/pull/45672
- 01:54 AM Bug #54052 (Pending Backport): mgr/snap-schedule: scheduled snapshots are not created after ceph-...
03/24/2022
- 05:39 PM Bug #54406: cephadm/mgr-nfs-upgrade: cluster [WRN] overall HEALTH_WARN no active mgr
- /a/yuriw-2022-03-23_14:51:02-rados-wip-yuri4-testing-2022-03-21-1648-pacific-distro-default-smithi/6756019
- 01:33 PM Bug #55041 (Resolved): mgr/volumes: display in-progress clones for a snapshot
- Right now, there is no way of knowing the set of clone operations in-progress/pending for a subvolume snapshot (unless...
- 11:07 AM Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
- Design/Code Behavior:
---------------------
Looked into the code further. It's designed in such a way that we sh...
- 09:36 AM Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
- Hi John,
> Now I expect that last "subvolume rm" to pass as the retained snapshot has been deleted on the previous...
- 11:07 AM Backport #55040 (Rejected): pacific: ceph-fuse: mount -a on already mounted folder should be ignored
- https://github.com/ceph/ceph/pull/45925
- 11:06 AM Backport #55039 (Resolved): quincy: ceph-fuse: mount -a on already mounted folder should be ignored
- https://github.com/ceph/ceph/pull/45939
- 11:01 AM Bug #46075 (Pending Backport): ceph-fuse: mount -a on already mounted folder should be ignored
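The intended behavior for #46075 (skip the mount when the target is already a mount point) can be sketched generically in Python; this is only an illustration of the check, not the actual ceph-fuse code, and the helper names are made up:

```python
# Illustrative sketch (not the ceph-fuse implementation): before mounting,
# check whether the target is already a mount point and no-op if so.
import os

def mount_if_needed(path: str, do_mount) -> bool:
    """Return True if a mount was performed, False if it was skipped."""
    # os.path.ismount compares st_dev/st_ino of path and its parent,
    # which is how a mount point is detected on POSIX systems.
    if os.path.ismount(path):
        return False  # already mounted: ignore, as the fix suggests
    do_mount(path)
    return True

# "/" is always a mount point, so the mount callback is never invoked:
assert mount_if_needed("/", lambda p: None) is False
```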
- 08:09 AM Bug #54653 (Fix Under Review): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag...
- 07:02 AM Bug #54653: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map.count(stag))
- There is an issue in the ceph-fuse code: for lookup/create/mkdir/link/readdir, etc., it will make fake fuse inode n...
03/23/2022
- 07:46 PM Feature #54472 (Fix Under Review): mgr/volumes: allow users to add metadata (key-value pairs) to ...
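Feature #54472 proposes RBD-image-meta-style key-value metadata on subvolumes. A hypothetical shape for such an interface, modeled on `rbd image-meta get/set/list/remove`; the subcommand names below are illustrative guesses, not a confirmed CLI:

```shell
# Hypothetical subvolume metadata commands (names illustrative only),
# following the rbd image-meta pattern:
ceph fs subvolume metadata set <vol_name> <subvol_name> <key> <value>   # add or update a key
ceph fs subvolume metadata get <vol_name> <subvol_name> <key>
ceph fs subvolume metadata ls <vol_name> <subvol_name>
ceph fs subvolume metadata rm <vol_name> <subvol_name> <key>
```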
- 09:06 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- Ivan Guan wrote:
> Xiubo Li wrote:
> > > t5: mds rdlock_start filelock and got the rdlock but can’t acquire the xlo...
- 08:53 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- Xiubo Li wrote:
> > t5: mds rdlock_start filelock and got the rdlock but can’t acquire the xlock of linklock, so wai...
- 08:20 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- > t5: mds rdlock_start filelock and got the rdlock but can’t acquire the xlock of linklock, so wait here
Here you ...
- 07:02 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- Ivan Guan wrote:
> *the logs after t6 moment:*
> *send safe reply after writing journal to disk*
>
> 2022-02-21...
- 07:01 AM Bug #54421: mds: assert fail in Server::_dir_is_nonempty() because xlocker of filelock is -1
- h2. the logs after t6 moment:
h3. send safe reply after writing journal to disk
2022-02-21 15:11:40.015369 7f1d3...
03/22/2022
- 11:36 PM Backport #52636: pacific: MDSMonitor: removes MDS coming out of quorum election
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/43698
merged
- 09:09 PM Backport #52680 (In Progress): pacific: Add option in `fs new` command to start rank 0 in failed ...
- 07:33 PM Backport #52875 (In Progress): pacific: qa: test_dirfrag_limit
- 01:32 PM Backport #52875: pacific: qa: test_dirfrag_limit
- Ramana,
Could you please check if backport is feasible and post one if it is?
Cheers,
Venky
- 06:50 PM Backport #52427 (In Progress): pacific: qa: "error reading sessionmap 'mds1_sessionmap'"
- 01:33 PM Backport #52427: pacific: qa: "error reading sessionmap 'mds1_sessionmap'"
- Ramana, please take this.
- 05:17 PM Backport #54223 (Rejected): pacific: mgr/volumes: `fs volume rename` command
- The feature is meant for quincy onwards.
- 05:17 PM Backport #54222 (Rejected): octopus: mgr/volumes: `fs volume rename` command
- The feature is meant for Ceph quincy onwards.
- 05:03 PM Feature #51162 (Pending Backport): mgr/volumes: `fs volume rename` command
- 05:03 PM Feature #51162 (Rejected): mgr/volumes: `fs volume rename` command
- The feature is meant for quincy and later releases.
- 12:31 AM Feature #51162: mgr/volumes: `fs volume rename` command
- Venky, this tracker was originally planned for quincy. It depends on a feature tracker https://tracker.ceph.com/issue...
- 04:36 PM Fix #54317 (Fix Under Review): qa: add testing in fs:workload for different kinds of subvolumes
- 02:18 PM Backport #54478: pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not exp...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45351
merged
- 02:15 PM Backport #54256: pacific: mgr/volumes: uid/gid of the clone is incorrect
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45205
merged
- 02:15 PM Backport #54335: pacific: mgr/volumes: A deleted subvolumegroup when listed using "ceph fs subvol...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45205
merged
https://tracker.ceph.com/issues/54332
- 02:15 PM Backport #54332: pacific: mgr/volumes: File Quota attributes not getting inherited to the cloned ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45205
merged - 05:36 AM Feature #54978 (In Progress): cephfs-top:addition of filesystem menu(improving GUI)
03/21/2022
- 10:43 PM Backport #54223 (In Progress): pacific: mgr/volumes: `fs volume rename` command
- 10:36 PM Backport #54221 (In Progress): quincy: mgr/volumes: `fs volume rename` command
- 05:22 PM Bug #54670 (Duplicate): crash: Client::_get_vino(Inode*)
- Duplicate of https://tracker.ceph.com/issues/54743
- 01:04 PM Feature #54978 (Resolved): cephfs-top:addition of filesystem menu(improving GUI)
- It deals with the option of having menu for selecting filesystems:
1. to display all filesystems metrics at once.
2...
- 12:54 PM Bug #54557 (Triaged): scrub repair does not clear earlier damage health status
- 12:53 PM Bug #54560 (Triaged): snap_schedule: avoid throwing traceback for bad or missing arguments
- 12:50 PM Bug #54606 (Triaged): check-counter task runs till max job timeout
- 12:49 PM Bug #54625 (Triaged): Issue removing subvolume with retained snapshots - Possible quincy regression?
- 12:47 PM Bug #54653 (Triaged): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_map...
- 12:43 PM Bug #54743 (Triaged): crash: Client::_get_vino(Inode*)
- 12:42 PM Bug #54971 (Triaged): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics....
- 10:20 AM Bug #54971 (Resolved): Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics...
- Seen here - https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-def...
- 11:43 AM Bug #54976 (Duplicate): mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recov...
- https://tracker.ceph.com/issues/44565
- 11:37 AM Bug #54976 (Duplicate): mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recov...
- https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/...
03/19/2022
- 01:30 AM Bug #54961 (New): crash: std::_Rb_tree<metareqid_t, metareqid_t, std::_Identity<metareqid_t>, std...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9a2bf47292bd4dc84dcdf199...
- 01:30 AM Bug #54959 (New): crash: tcmalloc::ThreadCache::FetchFromCentralCache(unsigned int, int, void* (*...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4e5d7408c2347625b1145ba6...
- 01:29 AM Bug #54943 (Duplicate): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [w...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5b6323621cb40594236b5ec8...
- 01:27 AM Bug #54893 (New): crash: pthread_kill()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9fb644e7578933dc792e46ec...
- 01:25 AM Bug #54840 (New): crash: void MDCache::handle_cache_rejoin_weak(ceph::cref_t<MMDSCacheRejoin>&): ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e688dea9acbbc7db8676a075...
- 01:25 AM Bug #54834 (Duplicate): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&):...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fbdb1c22c9e19c3039dd2acf...
- 01:25 AM Bug #54833 (Pending Backport): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<ML...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=93eb25af7720dff70a580dca...
- 01:25 AM Bug #54824 (New): crash: pthread_kill()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6cbe446e1c71d34d8441b9f2...
- 01:24 AM Bug #54798 (New): crash: double const ceph::common::ConfigProxy::get_val<double>(std::basic_strin...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a6be1eafcec3445c3e9779d3...
- 01:23 AM Bug #54765 (New): crash: void MDCache::rejoin_send_rejoins(): assert(auth >= 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=671c68d3b0d2638ffcd6dd94...
- 01:23 AM Bug #54760 (Closed): crash: void CDir::try_remove_dentries_for_stray(): assert(dn->get_linkage()-...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3a7fca349de2f63168745e7a...
- 01:22 AM Bug #54747 (New): crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): assert(ino...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=13291700f923cdc78c6a9b9d...
- 01:22 AM Bug #54743 (Triaged): crash: Client::_get_vino(Inode*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7160918e1db8c7e75cc34967...
- 01:22 AM Bug #54741 (New): crash: MDSTableClient::got_journaled_ack(unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=366ab44cb3c1d002359d6d1d...
- 01:22 AM Bug #54730 (Resolved): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state ==...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0cc462317baee377357357c8...
- 01:21 AM Bug #54715 (New): crash: void MDCache::handle_cache_rejoin_weak(ceph::cref_t<MMDSCacheRejoin>&): ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=749d32887b2f7880076f66d4...
- 01:20 AM Bug #54701 (Resolved): crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CD...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3aaddcbc011a81127704b874...
- 01:19 AM Bug #54680 (New): crash: int Client::_do_remount(bool): abort
*New crash events were reported via Telemetry with newer versions (['16.2.4', '16.2.7']) than encountered in Tracke...
- 01:19 AM Bug #54670 (Duplicate): crash: Client::_get_vino(Inode*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bbf2e21ab50312877746899b...
- 01:18 AM Bug #54665 (New): crash: void ObjectCacher::bh_write_commit(int64_t, sobject_t, std::vector<std::...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=40f5abbee44ea6074376240b...
- 01:18 AM Bug #54653 (Resolved): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_ma...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=89d76bc0664d80c1ac55706e...
- 01:17 AM Bug #54644 (New): crash: void SessionMap::replay_open_sessions(version_t, std::map<client_t, enti...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b4f84116e68bc35269f3752a...
- 01:17 AM Bug #54643 (Duplicate): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): ass...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3bfe2bce55d41f67f48676a9...
- 01:17 AM Bug #54636 (New): crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOC...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b5ee69eea19cfb224a80033a...
03/18/2022
- 09:10 PM Bug #54625 (Resolved): Issue removing subvolume with retained snapshots - Possible quincy regress...
- I'm hitting a situation with test code that occurs only on quincy at this time.
To summarize:
* ceph fs subvolume c...
- 01:20 PM Bug #44565 (New): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == L...
- Hit this here: https://pulpito.ceph.com/vshankar-2022-03-18_02:56:29-fs:upgrade-wip-vshankar-testing-20220317-101203-...
03/17/2022
- 09:50 PM Backport #54477: quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expe...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45331
merged
- 02:40 PM Backport #53760 (In Progress): pacific: snap scheduler: cephfs snapshot schedule status doesn't l...
- 01:35 PM Feature #50470 (Fix Under Review): cephfs-top: multiple file system support
- 01:16 PM Backport #54532 (In Progress): pacific: mds,client: suppport getvxattr RPC
- 01:06 PM Bug #48805 (Resolved): mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blog...
- * now available on master and quincy
- 12:21 PM Bug #54606: check-counter task runs till max job timeout
- Maybe related to: https://tracker.ceph.com/issues/50546
- 12:20 PM Bug #54606 (Triaged): check-counter task runs till max job timeout
- Seen here - http://pulpito.front.sepia.ceph.com/yuriw-2022-03-14_18:57:01-fs-wip-yuri2-testing-2022-03-14-0946-quincy...
- 11:50 AM Bug #54557: scrub repair does not clear earlier damage health status
- Milind asked me to try: After you run "scrub repair" followed by a "scrub" without any issues, and if the "damage ls"...
- 06:49 AM Backport #54573 (In Progress): pacific: mgr/volumes: The 'mode' argument is not honored on idempo...
03/16/2022
- 10:20 AM Backport #54578 (Resolved): quincy: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and...
- https://github.com/ceph/ceph/pull/46647
- 10:20 AM Backport #54577 (Resolved): pacific: pybind/cephfs: Add mapping for Ernno 13:Permission Denied an...
- https://github.com/ceph/ceph/pull/46646
- 10:15 AM Bug #54237 (Pending Backport): pybind/cephfs: Add mapping for Ernno 13:Permission Denied and addi...
- 06:20 AM Backport #54574 (In Progress): quincy: mgr/volumes: The 'mode' argument is not honored on idempot...
- 03:55 AM Backport #54574 (Resolved): quincy: mgr/volumes: The 'mode' argument is not honored on idempotent...
- https://github.com/ceph/ceph/pull/45405
- 03:55 AM Backport #54573 (Resolved): pacific: mgr/volumes: The 'mode' argument is not honored on idempoten...
- https://github.com/ceph/ceph/pull/45474
- 03:54 AM Bug #54375 (Pending Backport): mgr/volumes: The 'mode' argument is not honored on idempotent subv...
03/15/2022
- 11:18 AM Bug #54560 (Pending Backport): snap_schedule: avoid throwing traceback for bad or missing arguments
- Validate all arguments before executing the snap_schedule and retention commands.
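The approach described (validate every argument up front and return a clean error instead of throwing a traceback mid-execution) can be sketched generically; the function and pattern names below are illustrative, not the actual mgr/snap_schedule code:

```python
# Illustrative sketch: collect all validation errors before any
# command executes, so the caller gets a readable message rather
# than a traceback. Names here are hypothetical.
import re

# e.g. "24h", "7d", "4w" -- count followed by a period unit
RETENTION_SPEC = re.compile(r"^\d+[hdwmy]$")

def validate_args(path, retention):
    """Return a list of human-readable errors; an empty list means valid."""
    errors = []
    if not path.startswith("/"):
        errors.append("path must be absolute, got %r" % path)
    if not RETENTION_SPEC.match(retention):
        errors.append("bad retention spec %r" % retention)
    return errors

# All checks run before anything is executed:
assert validate_args("/volumes/grp/sv", "24h") == []
assert len(validate_args("volumes", "abc")) == 2
```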
- 08:39 AM Bug #53750 (Resolved): mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
- 08:36 AM Backport #53865 (Resolved): octopus: mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
- 02:54 AM Bug #54557 (Pending Backport): scrub repair does not clear earlier damage health status
- From Chris Palmer on the ceph-users.ceph.io mailing list ...
Reading this thread made me realise I had overlooked ceph...
- 01:37 AM Bug #54411: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem i...
- Thanks Kotresh. The mgr connects to the cephfs cluster via libcephfs and does the internally required filesystem operations:
...
03/14/2022
- 03:28 PM Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created an...
- Nikhil,
Please take this (Xiubo is a bit tied up with some other work).
Cheers,
Venky
- 02:26 PM Documentation #54551 (Resolved): docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds ca...
- The doc _ADDING AN MDS_ starts by @Create an mds data point /var/lib/ceph/mds/ceph-${id}@ (according to Google, this...
- 01:13 PM Backport #54533 (In Progress): quincy: mds,client: suppport getvxattr RPC
- 12:28 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- The issue happens when the auth mds for the primary hardlink sends a `dentry_unlink' message to replica MDSs and the repli...
- 09:16 AM Bug #54546 (New): mds: crash due to corrupt inode and omap entry
- A corrupted on-disk inode causes the MDS to crash with an assert. The backtrace looks something like:...
- 07:17 AM Bug #54411 (Fix Under Review): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon...
- 06:16 AM Bug #54411: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem i...
- This issue is similar to https://tracker.ceph.com/issues/53293, which is for the kclient. But this is for libce...
- 06:08 AM Bug #54411: mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); 1 filesystem i...
- The mds crashed in :...
- 05:29 AM Bug #54411 (In Progress): mds_upgrade_sequence: "overall HEALTH_WARN 4 failed cephadm daemon(s); ...
03/11/2022
- 10:16 AM Backport #54478 (In Progress): pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then p...
- 03:06 AM Backport #54533 (Resolved): quincy: mds,client: suppport getvxattr RPC
- https://github.com/ceph/ceph/pull/45377
- 03:06 AM Backport #54532 (Resolved): pacific: mds,client: suppport getvxattr RPC
- https://github.com/ceph/ceph/pull/45487
- 03:00 AM Bug #51062 (Pending Backport): mds,client: suppport getvxattr RPC
03/10/2022
- 10:48 PM Bug #48873: test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cluster deployment fa...
- /a/yuriw-2022-03-10_01:04:51-rados-wip-yuri5-testing-2022-03-07-0958-distro-default-smithi/6728547
- 08:51 AM Backport #54477 (In Progress): quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then pa...
- 06:29 AM Bug #54512 (Closed): 'client ls' shows only one client
- My bad, this is not a bug.
- 05:45 AM Bug #54512 (Closed): 'client ls' shows only one client
- 'client ls' shows only one client when there are two clients mounting 2 different filesystems.
Steps to reproduce:...
03/09/2022
- 03:55 PM Feature #54472: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- Venky Shankar wrote:
> The user-metadata could be in a separate section in .meta, probably having the subvolume uu...
- 06:43 AM Bug #54461 (Fix Under Review): ffsb.sh test failure
- I just reverted the buggy commit because these two issues contradict each other. The old issue needs it to close th...
03/08/2022
- 09:05 PM Bug #54501 (Pending Backport): libcephfs: client needs to update the mtime and change attr when s...
- This issue was identified by Jeff here, https://bugzilla.redhat.com/show_bug.cgi?id=1975689#c21
The libcephfs clie...
- 01:48 PM Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
- Venky Shankar wrote:
> Jeff thinks it might be a permission issue.
This is a different one - https://tracker.ceph...
- 10:32 AM Bug #53911: client: client session state stuck in opening and hang all the time
- This fix will introduce a new bug: https://tracker.ceph.com/issues/54461.
- 10:28 AM Bug #54461: ffsb.sh test failure
- This bug was introduced by the fix for https://tracker.ceph.com/issues/53911.
- 08:09 AM Bug #54461: ffsb.sh test failure
- In remote/smithi156/log/ceph-mds.b.log.gz, we can see that the reconnection is denied by mds.b:...
- 07:50 AM Bug #54461: ffsb.sh test failure
- It failed in ffsb code:...
- 10:11 AM Backport #54479 (In Progress): pacific: mgr/stats: be resilient to offline MDS rank-0
- 10:03 AM Backport #54480 (In Progress): quincy: mgr/stats: be resilient to offline MDS rank-0
- 09:45 AM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- OK - super easy to reproduce with some directory pinning. steps -
(requires a ceph user with path restricted caps)
...
- 09:25 AM Bug #54375 (Fix Under Review): mgr/volumes: The 'mode' argument is not honored on idempotent subv...
03/07/2022
- 01:47 PM Backport #54477: quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expe...
- Nikhil, please take this.
- 05:05 AM Backport #54477 (Resolved): quincy: ceph-fuse: If nonroot user runs ceph-fuse mount on then path ...
- https://github.com/ceph/ceph/pull/45331
- 01:46 PM Backport #54480: quincy: mgr/stats: be resilient to offline MDS rank-0
- Jos, please take this.
- 05:11 AM Backport #54480 (Resolved): quincy: mgr/stats: be resilient to offline MDS rank-0
- https://github.com/ceph/ceph/pull/45291
- 01:46 PM Bug #54460 (Triaged): snaptest-multiple-capsnaps.sh test failure
- 01:45 PM Bug #54461 (Triaged): ffsb.sh test failure
- 01:44 PM Bug #54462 (Triaged): Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 w...
- Jeff thinks it might be a permission issue.
- 01:19 PM Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
- Hitting this bug involves having hardlinks to inodes which are authoritative in another active mds. When a non-primar...
- 12:18 PM Bug #52438: qa: ffsb timeout
- Created a PR in ffsb, https://github.com/ceph/ffsb/pull/3, to fix it.
- 12:16 PM Bug #52438: qa: ffsb timeout
- Actually the `ffsb` test finished very fast and took 346.70 seconds:
```
2022-02-28T08:29:36.007 INFO:tasks.worku...
```
- 05:11 AM Backport #54479 (Resolved): pacific: mgr/stats: be resilient to offline MDS rank-0
- https://github.com/ceph/ceph/pull/45293
- 05:07 AM Bug #50033 (Pending Backport): mgr/stats: be resilient to offline MDS rank-0
- 05:05 AM Backport #54478 (Resolved): pacific: ceph-fuse: If nonroot user runs ceph-fuse mount on then path...
- https://github.com/ceph/ceph/pull/45351
- 05:03 AM Bug #51062 (Resolved): mds,client: suppport getvxattr RPC
- 05:01 AM Bug #54049 (Pending Backport): ceph-fuse: If nonroot user runs ceph-fuse mount on then path is no...
03/04/2022
- 10:00 AM Feature #54472: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- Just FYI - mgr/volumes uses .meta file as a metadata store for persisting subvolume related information (path, state,...
- 09:46 AM Feature #54472 (Resolved): mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
- This is similar to RBDs `image-meta get/set/list/remove' interfaces. Updating an existing key should be supported.
...
- 09:45 AM Bug #54237 (Fix Under Review): pybind/cephfs: Add mapping for Ernno 13:Permission Denied and addi...
03/03/2022
- 04:00 PM Backport #54257 (Resolved): quincy: mgr/volumes: uid/gid of the clone is incorrect
- 03:44 PM Backport #54257: quincy: mgr/volumes: uid/gid of the clone is incorrect
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45165
merged
- 02:54 PM Bug #54463: mds: flush mdlog if locked and still has wanted caps not satisfied
- More detail please see bz: https://bugzilla.redhat.com/show_bug.cgi?id=2049653
- 02:45 PM Bug #54463 (Resolved): mds: flush mdlog if locked and still has wanted caps not satisfied
- In _do_cap_update() if one client is releasing the Fw caps the
relevant client range will be erased, and then new_ma...
- 02:44 PM Bug #54462 (Duplicate): Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055...
- https://pulpito.ceph.com/yuriw-2022-03-01_20:21:46-fs-wip-yuri-testing-2022-02-28-0823-quincy-distro-default-smithi/6...
- 02:36 PM Bug #54461 (Resolved): ffsb.sh test failure
- https://pulpito.ceph.com/yuriw-2022-03-01_20:21:46-fs-wip-yuri-testing-2022-02-28-0823-quincy-distro-default-smithi/6...
- 02:03 PM Bug #54460 (Resolved): snaptest-multiple-capsnaps.sh test failure
- Test failure on quincy run:
https://pulpito.ceph.com/yuriw-2022-03-01_20:21:46-fs-wip-yuri-testing-2022-02-28-0823...
- 01:59 PM Bug #54459 (Fix Under Review): fs:upgrade fails with "hit max job timeout"
- 01:55 PM Bug #54459 (Rejected): fs:upgrade fails with "hit max job timeout"
- The fs:upgrade test upgrades from pacific v16.2.4 up to latest. When running with a distro kernel, which might not unders...
03/02/2022
- 05:08 PM Backport #51201: octopus: qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44800
merged
- 04:42 PM Backport #53865: octopus: mds: FAILED ceph_assert(mut->is_wrlocked(&pin->filelock))
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44624
merged
- 03:50 PM Backport #54242: octopus: mds: clients can send a "new" op (file operation) and crash the MDS
- Venky Shankar wrote:
> https://github.com/ceph/ceph/pull/44976
merged
03/01/2022
- 06:18 AM Backport #54256 (In Progress): pacific: mgr/volumes: uid/gid of the clone is incorrect
- 06:18 AM Backport #54335 (In Progress): pacific: mgr/volumes: A deleted subvolumegroup when listed using "...
- 06:18 AM Backport #54332 (In Progress): pacific: mgr/volumes: File Quota attributes not getting inherited ...