Activity

From 04/16/2023 to 05/15/2023

05/15/2023

04:52 PM Backport #61167 (Resolved): quincy: [WRN] : client.408214273 isn't responding to mclientcaps(revo...
https://github.com/ceph/ceph/pull/52851
Backported https://tracker.ceph.com/issues/62197 together with this tracker.
Backport Bot
04:52 PM Backport #61166 (Resolved): pacific: [WRN] : client.408214273 isn't responding to mclientcaps(rev...
https://github.com/ceph/ceph/pull/52852
Backported https://tracker.ceph.com/issues/62199 together with this tracker.
Backport Bot
04:52 PM Backport #61165 (Resolved): reef: [WRN] : client.408214273 isn't responding to mclientcaps(revoke...
https://github.com/ceph/ceph/pull/52850
Backported https://tracker.ceph.com/issues/62198 together with this tracker.
Backport Bot
04:51 PM Bug #57244 (Pending Backport): [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ...
Venky Shankar
04:02 PM Backport #59595 (Resolved): pacific: cephfs-top: fix help text for delay
Jos Collin
03:35 PM Backport #59595: pacific: cephfs-top: fix help text for delay
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50715
merged
Yuri Weinstein
04:01 PM Backport #59398 (Resolved): pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
Jos Collin
03:35 PM Backport #59398: pacific: cephfs-top: cephfs-top -d <seconds> not working as expected
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50715
merged
Yuri Weinstein
04:00 PM Backport #58807 (Resolved): pacific: cephfs-top: add an option to dump the computed values to stdout
Jos Collin
03:35 PM Backport #58807: pacific: cephfs-top: add an option to dump the computed values to stdout
Jos Collin wrote:
> Backport PR: https://github.com/ceph/ceph/pull/50715.
merged
Yuri Weinstein
03:37 PM Backport #58881: pacific: mds: Jenkins fails with skipping unrecognized type MClientRequest::Release
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50733
merged
Yuri Weinstein
03:37 PM Backport #58826: pacific: mds: make num_fwd and num_retry to __u32
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50733
merged
Yuri Weinstein
03:36 PM Backport #59007: pacific: mds stuck in 'up:replay' and crashed.
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50725
merged
Yuri Weinstein
03:34 PM Backport #58866: pacific: cephfs-top: Sort menu doesn't show 'No filesystem available' screen whe...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50596
merged
Yuri Weinstein
03:34 PM Backport #59720 (In Progress): pacific: client: read wild pointer when reconnect to mds
Venky Shankar
03:34 PM Backport #59019: pacific: cephfs-data-scan: multiple data pools are not supported
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50523
merged
Yuri Weinstein
03:32 PM Backport #59718 (In Progress): quincy: client: read wild pointer when reconnect to mds
Venky Shankar
03:24 PM Backport #59719 (In Progress): reef: client: read wild pointer when reconnect to mds
Venky Shankar
03:18 PM Backport #61158 (Resolved): reef: client: fix dump mds twice
Backport Bot
11:00 AM Bug #55446: mgr-nfs-ugrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
10:41 AM Bug #50719: xattr returning from the dead (sic!)
Hi Jeff!
Just wanted to let you know that this issue is still relevant and severe with more recent versions of bot...
Thomas Hukkelberg
08:27 AM Bug #61151 (Fix Under Review): libcephfs: incorrectly showing the size for snapshots when stating...
If *rstat* is enabled for the *.snap* snapdir, it should report the total size of all the snapshots. And at the s... Xiubo Li
07:42 AM Bug #61148: dbench test results in call trace in dmesg
More detailed call trace:... Xiubo Li
06:05 AM Bug #61148: dbench test results in call trace in dmesg
Another instance, but this time another workunit: https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshank... Venky Shankar
05:13 AM Bug #61148 (Rejected): dbench test results in call trace in dmesg
https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshankar-testing-20230509.090020-1-testing-default-smith... Venky Shankar
06:57 AM Fix #59667 (Resolved): qa: ignore cluster warning encountered in test_refuse_client_session_on_re...
Venky Shankar
04:50 AM Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- https://pulpito.ceph.com/vshankar-2023-05-12_08:25:27-fs-wip-vshankar-testing-20230509.090020-1-testing-default-smi... Venky Shankar
02:57 AM Bug #61009 (Fix Under Review): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0ea734a1639fc4740189dcd0...
Telemetry Bot
02:57 AM Bug #61008 (New): crash: void interval_set<T, C>::insert(T, T, T*, T*) [with T = inodeno_t; C = s...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8e3f5c0126b1f4f50f08ff5b...
Telemetry Bot
02:57 AM Bug #61004 (New): crash: MDSRank::is_stale_message(boost::intrusive_ptr<Message const> const&) const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=af82cc6e82ac3651d4918c4a...
Telemetry Bot
02:56 AM Bug #60986 (New): crash: void MDCache::rejoin_send_rejoins(): assert(auth >= 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71d482317bedfc17674af4f5...
Telemetry Bot
02:56 AM Bug #60980 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e2a2ae4253fafecb8b3ca014...
Telemetry Bot
02:55 AM Bug #60949 (New): crash: cephfs-journal-tool(

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0d4dcd3a80040e9cf7f44ee7...
Telemetry Bot
02:55 AM Bug #60945 (New): crash: virtual void C_Client_Remount::finish(int): abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=594102a81a05cdba00f19a82...
Telemetry Bot
02:49 AM Bug #60685 (New): crash: elist<T>::~elist() [with T = CInode*]: assert(_head.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c4679655cf1f90278c3b054b...
Telemetry Bot
02:49 AM Bug #60679 (New): crash: C_GatherBuilderBase<ContextType, GatherType>::~C_GatherBuilderBase() [wi...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5b6b5acff6ff7a2c7ee0565f...
Telemetry Bot
02:48 AM Bug #60669 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2cf0c9e2e9b09fc177e74e6f...
Telemetry Bot
02:48 AM Bug #60668 (New): crash: void Migrator::export_try_cancel(CDir*, bool): assert(it != export_state...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ff4594c6abd556deefbc7327...
Telemetry Bot
02:48 AM Bug #60665 (New): crash: void MDCache::open_snaprealms(): assert(rejoin_done)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0da14d340ca86eb98a0a866c...
Telemetry Bot
02:48 AM Bug #60664 (New): crash: elist<T>::~elist() [with T = CDentry*]: assert(_head.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=770f229d0a641695e3d43ffb...
Telemetry Bot
02:48 AM Bug #60660 (New): crash: std::_Rb_tree_rebalance_for_erase(std::_Rb_tree_node_base*, std::_Rb_tre...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6bb8c01a74a9dcf2f2500ef7...
Telemetry Bot
02:48 AM Bug #60640 (New): crash: void Journaler::_write_head(Context*): assert(last_written.write_pos >= ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9e5c5d1ecf602782154f9b19...
Telemetry Bot
02:48 AM Bug #60636 (New): crash: elist<T>::~elist() [with T = CDir*]: assert(_head.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5fc520e822550d2d86ab92a1...
Telemetry Bot
02:47 AM Bug #60630 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e4d4bb371a344df64ed9d22c...
Telemetry Bot
02:47 AM Bug #60629 (New): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=978159d0d2675e074cadedff...
Telemetry Bot
02:47 AM Bug #60628 (New): crash: MDCache::purge_inodes(const interval_set<inodeno_t>&, LogSegment*)::<lam...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=de50fd8802ec5732812d300d...
Telemetry Bot
02:47 AM Bug #60627 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9f4072660844962480cb518...
Telemetry Bot
02:47 AM Bug #60625 (Resolved): crash: MDSRank::send_message_client(boost::intrusive_ptr<Message> const&, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c2092a196eb69c3c08a39646...
Telemetry Bot
02:47 AM Bug #60622 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a081e9516cda7c2c8dfa596c...
Telemetry Bot
02:47 AM Bug #60618 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b0d320aceb93370daf216e36...
Telemetry Bot
02:47 AM Bug #60607 (New): crash: virtual void MDSCacheObject::bad_put(int): assert(ref_map[by] > 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7ad58204e57e59e10794c28d...
Telemetry Bot
02:47 AM Bug #60606 (New): crash: ceph::buffer::list::iterator_impl<true>::copy(unsigned int, char*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e841c48356848e8657d3cb5e...
Telemetry Bot
02:47 AM Bug #60600 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=183a087b3f731571c6337b66...
Telemetry Bot
02:46 AM Bug #60598 (New): crash: void MDCache::handle_dentry_unlink(ceph::cref_t<MDentryUnlink>&): assert...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1ebda7a102930caf786ba16e...
Telemetry Bot
02:41 AM Bug #60372 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=42c202ed03daf7e9cbd008db...
Telemetry Bot
02:40 AM Bug #60343 (New): crash: void MDCache::handle_cache_rejoin_ack(ceph::cref_t<MMDSCacheRejoin>&): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=374c1bb49952f4c442a939bb...
Telemetry Bot
02:40 AM Bug #60319 (New): crash: std::_Rb_tree<dirfrag_t, dirfrag_t, std::_Identity<dirfrag_t>, std::less...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=20a0ec6572610c3fb12e9f12...
Telemetry Bot
02:39 AM Bug #60303 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f88cc08b61611695f0a919ea...
Telemetry Bot
02:38 AM Bug #60241 (New): crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1e491c6d2002d8aa5fa17dac...
Telemetry Bot
02:35 AM Bug #60126 (New): crash: bool MDCache::shutdown_pass(): assert(!migrator->is_importing())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a36f9e1d71483b4075ca8625...
Telemetry Bot
02:34 AM Bug #60109 (New): crash: Session* MDSRank::get_session(ceph::cref_t<Message>&): assert(session->i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=abb374dbfa3649257e6590ea...
Telemetry Bot
02:34 AM Bug #60092 (New): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): asser...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=19e45b6ba87e5ddb0df16715...
Telemetry Bot
02:32 AM Bug #60014 (New): crash: void MDCache::remove_replay_cap_reconnect(inodeno_t, client_t): assert(c...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cc7ca4bafc15d4883c77e861...
Telemetry Bot
02:29 AM Bug #59865 (New): crash: CInode::get_dirfrags() const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=546c0379bf1bc4705b166c60...
Telemetry Bot
02:28 AM Bug #59833 (Pending Backport): crash: void MDLog::trim(int): assert(segments.size() >= pre_segmen...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=21725944feb692959f706d0f...
Telemetry Bot
02:27 AM Bug #59819 (New): crash: virtual CDentry::~CDentry(): assert(batch_ops.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5520f521a1ed7653b6505f60...
Telemetry Bot
02:25 AM Bug #59802 (New): crash: void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*): assert(in...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=78028e7ef848aec87e854972...
Telemetry Bot
02:25 AM Bug #59799 (New): crash: ProtocolV2::handle_auth_request(ceph::buffer::list&)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=34dce467397e70daae79ec8e...
Telemetry Bot
02:24 AM Bug #59785 (Closed): crash: void ScatterLock::set_xlock_snap_sync(MDSContext*): assert(state == L...

*New crash events were reported via Telemetry with newer versions (['17.2.1', '17.2.5']) than encountered in Tracke...
Telemetry Bot
02:23 AM Bug #59768 (Duplicate): crash: void EMetaBlob::replay(MDSRank*, LogSegment*, MDPeerUpdate*): asse...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1a84e31a4bc3ae6dc69d901c...
Telemetry Bot
02:23 AM Bug #59767 (New): crash: MDSDaemon::dump_status(ceph::Formatter*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=15752cc3020e5047d0724344...
Telemetry Bot
02:23 AM Bug #59766 (New): crash: virtual void ESession::replay(MDSRank*): assert(g_conf()->mds_wipe_sessi...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=36e909287876c9be42c068ca...
Telemetry Bot
02:22 AM Bug #59761 (New): crash: void MDLog::_replay_thread(): assert(journaler->is_readable() || mds->is...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=71bbbf63c8f73aa37e3aa82e...
Telemetry Bot
02:22 AM Bug #59751 (New): crash: MDSDaemon::respawn()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8bfff60bfa4ffd456fa49fb8...
Telemetry Bot
02:14 AM Bug #59749 (New): crash: virtual CInode::~CInode(): assert(batch_ops.empty())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=479afe0191403e023f1878c1...
Telemetry Bot
02:14 AM Bug #58934 (Duplicate): snaptest-git-ceph.sh failure with ceph-fuse
This should be the same issue as https://tracker.ceph.com/issues/59343. Xiubo Li
02:13 AM Bug #59741 (New): crash: void MDCache::remove_inode(CInode*): assert(o->get_num_ref() == 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f60159ef05cf6bfbf51c6688...
Telemetry Bot

05/12/2023

09:24 AM Bug #59736 (New): qa: add one test case for "kclient: ln: failed to create hard link 'file name':...
We need to add one test case for https://tracker.ceph.com/issues/59515.... Xiubo Li

05/11/2023

08:18 PM Bug #56774: crash: Client::_get_vino(Inode*)
Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.... Yaarit Hatuka
08:15 PM Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_...
Since this issue is marked as "Duplicate" it needs to specify what issue it duplicates in the "Related Issues" field.... Yaarit Hatuka
05:10 PM Bug #59716 (Fix Under Review): tools/cephfs/first-damage: unicode decode errors break iteration
Patrick Donnelly
12:55 PM Feature #59601: Provide way to abort kernel mount after lazy umount
Niklas Hambuechen wrote:
> Venky Shankar wrote:
> > Have you tried force unmounting the mount (umount -f)?
>
> A...
Venky Shankar
08:53 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
Seen in pacific run
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-...
Kotresh Hiremath Ravishankar
08:48 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
08:05 AM Bug #48773: qa: scrub does not complete
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
07:46 AM Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
https://pulpito.ceph.com/yuriw-2023-05-09_19:39:46-fs-wip-yuri4-testing-2023-05-08-0846-pacific-distro-default-smithi... Kotresh Hiremath Ravishankar
05:58 AM Backport #59726 (Resolved): quincy: mds: allow entries to be removed from lost+found directory
https://github.com/ceph/ceph/pull/51689 Backport Bot
05:58 AM Backport #59725 (Resolved): pacific: mds: allow entries to be removed from lost+found directory
https://github.com/ceph/ceph/pull/51687 Backport Bot
05:58 AM Backport #59724 (Resolved): reef: mds: allow entries to be removed from lost+found directory
https://github.com/ceph/ceph/pull/51607 Backport Bot
05:51 AM Bug #59569 (Pending Backport): mds: allow entries to be removed from lost+found directory
Venky Shankar
05:50 AM Backport #59723 (Resolved): reef: qa: run scrub post disaster recovery procedure
https://github.com/ceph/ceph/pull/51606 Backport Bot
05:50 AM Backport #59722 (In Progress): quincy: qa: run scrub post disaster recovery procedure
https://github.com/ceph/ceph/pull/51690 Backport Bot
05:50 AM Backport #59721 (Resolved): pacific: qa: run scrub post disaster recovery procedure
https://github.com/ceph/ceph/pull/51610 Backport Bot
05:50 AM Bug #59527 (Pending Backport): qa: run scrub post disaster recovery procedure
Venky Shankar
03:59 AM Backport #59720 (Resolved): pacific: client: read wild pointer when reconnect to mds
https://github.com/ceph/ceph/pull/51487 Backport Bot
03:59 AM Backport #59719 (Resolved): reef: client: read wild pointer when reconnect to mds
https://github.com/ceph/ceph/pull/51484 Backport Bot
03:59 AM Backport #59718 (Resolved): quincy: client: read wild pointer when reconnect to mds
https://github.com/ceph/ceph/pull/51486 Backport Bot
03:56 AM Bug #59514 (Pending Backport): client: read wild pointer when reconnect to mds
Venky Shankar

05/10/2023

04:54 PM Bug #59716 (Pending Backport): tools/cephfs/first-damage: unicode decode errors break iteration
... Patrick Donnelly
11:35 AM Feature #59714 (Pending Backport): mgr/volumes: Support to reject CephFS clones if cloner threads...
1. CephFS clone creation has a limit of 4 parallel clones at a time and the rest
of the clone create requests are queue...
Neeraj Pratap Singh
08:43 AM Backport #59708 (Resolved): reef: Mds crash and fails with assert on prepare_new_inode
https://github.com/ceph/ceph/pull/51506 Backport Bot
08:43 AM Backport #59707 (Resolved): quincy: Mds crash and fails with assert on prepare_new_inode
https://github.com/ceph/ceph/pull/51507 Backport Bot
08:43 AM Backport #59706 (Resolved): pacific: Mds crash and fails with assert on prepare_new_inode
https://github.com/ceph/ceph/pull/51508 Backport Bot
08:36 AM Bug #52280 (Pending Backport): Mds crash and fails with assert on prepare_new_inode
Venky Shankar
04:49 AM Bug #59705 (Fix Under Review): client: only wait for write MDS OPs when unmounting
Xiubo Li
04:46 AM Bug #59705 (Resolved): client: only wait for write MDS OPs when unmounting
We do not care about the read MDS ops and it is safe to just drop
them when unmounting.
Xiubo Li
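The approach described in #59705 can be sketched in a few lines; this is an illustrative helper only (not the actual libcephfs code), assuming ops are tagged by whether they write:

```python
# Hedged sketch of the idea behind this fix (illustrative helper, not the
# actual libcephfs code): on unmount, block only on in-flight write ops;
# pending read ops can simply be dropped since they dirty nothing.
def ops_to_wait_for(in_flight_ops):
    """in_flight_ops: list of dicts like {"tid": 1, "write": True}."""
    return [op for op in in_flight_ops if op["write"]]
```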

05/09/2023

07:50 PM Bug #59394: ACLs not fully supported.
Milind Changire wrote:
> Brian,
> The command you are using is correct.
> However, the config key is incorrect.
>...
Brian Woods
07:44 PM Bug #59394: ACLs not fully supported.
Milind Changire wrote:
> Brian,
> The command you are using is correct.
> However, the config key is incorrect.
>...
Brian Woods
01:30 PM Bug #59691 (Fix Under Review): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
Patrick Donnelly
01:22 PM Bug #59691 (Resolved): mon/MDSMonitor: may lookup non-existent fs in current MDSMap
... Patrick Donnelly
01:06 PM Feature #59601: Provide way to abort kernel mount after lazy umount
Venky Shankar wrote:
> Have you tried force unmounting the mount (umount -f)?
After *umount --lazy*, the mount po...
Niklas Hambuechen
12:48 PM Bug #59688 (Triaged): mds: idempotence issue in client request
Found that the MDS may process the same client request twice after the session with the client is rebuilt, because of a network issue.
...
Mer Xuanyi
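The usual remedy for this class of idempotence bug is to deduplicate requests by their identifier; a hedged sketch (names are assumptions, not the MDS code):

```python
# Hedged sketch (not the MDS code) of the usual fix for this class of bug:
# deduplicate by (client id, request tid) so a request retransmitted after a
# session rebuild replays the saved reply instead of being applied twice.
class DedupServer:
    def __init__(self):
        self._completed = {}              # (client, tid) -> saved reply

    def handle(self, client, tid, apply_fn):
        key = (client, tid)
        if key in self._completed:        # retransmission after reconnect
            return self._completed[key]
        reply = apply_fn()                # apply exactly once
        self._completed[key] = reply
        return reply
```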
06:41 AM Bug #59683 (Resolved): Error: Unable to find a match: userspace-rcu-devel libedit-devel device-ma...
- https://pulpito.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-testing-default-smith... Venky Shankar
05:52 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
Another instance: https://pulpito.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-testi... Venky Shankar
05:23 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Venky Shankar wrote:
> Xiubo Li wrote:
> > This is a failure with *libcephfs* and have the client side logs:
> >
...
Xiubo Li
04:19 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Xiubo Li wrote:
> This is a failure with *libcephfs* and have the client side logs:
>
> vshankar-2023-04-06_04:14...
Venky Shankar
01:54 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
This also needs to be fixed in kclient. Xiubo Li
03:52 AM Bug #59682: CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the unit file o...
Thanks for letting us know, Zac. Venky Shankar
02:28 AM Bug #59682 (Resolved): CephFS: Debian cephfs-mirror package in the Ceph repo doesn't install the ...
The Debian package "cephfs-mirror" in the Ceph repository doesn't install the unit file or the man page.
This was ...
Zac Dover

05/08/2023

06:33 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
Lots more in https://pulpito.ceph.com/yuriw-2023-05-06_14:41:44-rados-pacific-release-distro-default-smithi/ Laura Flores
02:59 PM Feature #59601: Provide way to abort kernel mount after lazy umount
Niklas Hambuechen wrote:
> In some situations, e.g. when changing monitor IPs during an emergency network reconfigur...
Venky Shankar
09:16 AM Bug #59163: mds: stuck in up:rejoin when it cannot "open" missing directory inode
The MDS got marked as down:damaged as it could not decode the CDir fnode:... Venky Shankar
09:09 AM Fix #59667 (In Progress): qa: ignore cluster warning encountered in test_refuse_client_session_on...
Dhairya Parmar
08:48 AM Fix #59667 (Pending Backport): qa: ignore cluster warning encountered in test_refuse_client_sessi...
Seen in http://pulpito.front.sepia.ceph.com/vshankar-2023-05-06_17:28:05-fs-wip-vshankar-testing-20230506.143554-test... Dhairya Parmar
06:34 AM Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
This is a failure with *libcephfs* and we have the client-side logs:
vshankar-2023-04-06_04:14:11-fs-wip-vshankar-tes...
Xiubo Li
06:13 AM Bug #59343 (Fix Under Review): qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
Xiubo Li
06:13 AM Bug #14557: snaps: failed snaptest-multiple-capsnaps.sh

Xiubo Li
03:16 AM Bug #14557: snaps: failed snaptest-multiple-capsnaps.sh
Xiubo Li wrote:
> vshankar-2023-04-06_04:14:11-fs-wip-vshankar-testing-20230330.105356-testing-default-smithi/723370...
Xiubo Li
03:03 AM Bug #14557: snaps: failed snaptest-multiple-capsnaps.sh
vshankar-2023-04-06_04:14:11-fs-wip-vshankar-testing-20230330.105356-testing-default-smithi/7233705/teuthology.log
...
Xiubo Li
03:06 AM Bug #59626 (Resolved): pacific: FSMissing: File system xxxx does not exist in the map
Venky Shankar

05/05/2023

06:24 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
https://github.com/ceph/ceph/pull/51344 merged Yuri Weinstein
03:38 PM Backport #59560: pacific: qa: RuntimeError: more than one file system available
Rishabh Dave wrote:
> https://github.com/ceph/ceph/pull/51232
merged
Yuri Weinstein
11:44 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
This is kind of strange; when I initially wanted to test cephfs-top, I chose a different virtual ceph cluster which al... Eugen Block
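The ValueError in this bug's title can be reproduced in isolation: `int('0x', 16)` fails because a bare `0x` prefix carries no hex digits. A defensive parse might look like this (hypothetical helper, not the actual mgr/stats code):

```python
# Hypothetical helper (not the actual mgr/stats code): int('0x', 16) raises
# ValueError because a bare '0x' prefix carries no hex digits, matching the
# exception in this tracker; a defensive parse falls back to a default.
def parse_hex(value, default=0):
    try:
        return int(value, 16)
    except (TypeError, ValueError):
        return default
```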
07:39 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Jos Collin wrote:
> Eugen Block wrote:
> > Not sure if required but I wanted to add some more information, while ru...
Eugen Block
06:04 AM Bug #59551 (Need More Info): mgr/stats: exception ValueError :invalid literal for int() with base...
xinyu wang wrote:
> 'ceph fs perf stats' command misses some metadata for the cephfs client, such as kernel_version.
>
...
Jos Collin
05:25 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Eugen Block wrote:
> Not sure if required but I wanted to add some more information, while running cephfs-top the mg...
Jos Collin
09:36 AM Bug #54017: Problem with ceph fs snapshot mirror and read-only folders
This seems to be the same issue: https://pulpito.ceph.com/vshankar-2023-05-03_16:19:11-fs-wip-vshankar-testing-20230503.142... Xiubo Li
09:17 AM Bug #59657 (Fix Under Review): qa: test with postgres failed (deadlock between link and migrate s...
Xiubo Li
08:40 AM Bug #59657: qa: test with postgres failed (deadlock between link and migrate straydn(rename))
From the logs, evicting the unresponsive client or closing the sessions could unblock the deadlock issue:... Xiubo Li
07:48 AM Bug #59657 (Resolved): qa: test with postgres failed (deadlock between link and migrate straydn(r...
https://pulpito.ceph.com/vshankar-2023-05-03_16:19:11-fs-wip-vshankar-testing-20230503.142424-testing-default-smithi/... Xiubo Li
08:49 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
Please evict the corresponding client to unblock this deadlock. Mostly this should work; if not, then please restart t... Xiubo Li
02:49 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
Another failure, the same as this one: https://pulpito.ceph.com/vshankar-2023-05-03_16:19:11-fs-wip-vshankar-testing-202... Xiubo Li

05/04/2023

06:06 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
/a/yuriw-2023-04-25_18:56:08-rados-wip-yuri5-testing-2023-04-25-0837-pacific-distro-default-smithi/7252409 Laura Flores
11:57 AM Bug #59551: mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
Not sure if required but I wanted to add some more information, while running cephfs-top the mgr module crashes all t... Eugen Block
10:57 AM Bug #59626 (Fix Under Review): pacific: FSMissing: File system xxxx does not exist in the map
Rishabh Dave
01:01 AM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
Rishabh Dave wrote:
> @setupfs()@ is not being called quincy onwards which is why we don't see this bug after paci...
Venky Shankar
01:00 AM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
... and also doesn't explain why we are seeing this failure now. The last pacific run https://tracker.ceph.com/projec... Venky Shankar
10:50 AM Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
This issue occurred in Pacific run - /ceph/teuthology-archive/yuriw-2023-04-25_19:03:49-fs-wip-yuri5-testing-2023-04-... Rishabh Dave

05/03/2023

06:30 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
Apparently, this exception is expected because @backup_fs@ is being deleted a little before the traceback is printed an... Rishabh Dave
01:34 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
This commit removed the createfs boolean:... Venky Shankar
01:31 PM Bug #59626: pacific: FSMissing: File system xxxx does not exist in the map
The pacific code is a bit different from other branches; the interesting bit is:
pacific:...
Venky Shankar
01:27 PM Bug #59626 (Resolved): pacific: FSMissing: File system xxxx does not exist in the map
Two separate jobs fail due to this issue
* TestMirroring.test_cephfs_mirror_cancel_sync: https://pulpito.ceph.com/yu...
Venky Shankar
10:09 AM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
Seen in https://pulpito.ceph.com/yuriw-2023-04-26_21:59:47-fs-wip-yuri2-testing-2023-04-26-1247-pacific-distro-defaul... Kotresh Hiremath Ravishankar
04:55 AM Backport #59596 (In Progress): reef: cephfs-top: fix help text for delay
Neeraj Pratap Singh
02:25 AM Backport #59620 (Resolved): quincy: client: fix dump mds twice
Backport Bot

05/02/2023

02:36 PM Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
Rishabh, can you post a backport for this? We are hitting this in Yuri's run: https://pulpito.ceph.com/yuriw-2023-04-... Venky Shankar
02:36 PM Feature #59601 (New): Provide way to abort kernel mount after lazy umount
In some situations, e.g. when changing monitor IPs during an emergency network reconfiguration, CephFS kernel mounts ... Niklas Hambuechen
01:30 PM Feature #59388: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid and/or...
The PR makes all 31 types of MDS caps parse successfully. Rishabh Dave
01:30 PM Feature #59388 (Fix Under Review): mds/MDSAuthCaps: "fsname", path, root_squash can't be in same ...
Rishabh Dave
01:13 PM Feature #59388: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid and/or...
We can have 5 elements in one MDS Cap -
1. fs name (string)
2. fs path (string)
3. root_squash (bool)
4. uid (i...
Rishabh Dave
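As an illustration of what the fix allows (the exact cap string below is an assumption based on the tracker discussion, not taken from the source), a single MDS cap combining these elements might look like:

```
caps mds = "allow rw fsname=cephfs path=/volumes root_squash uid=1000 gids=1000,1001"
```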
12:57 PM Bug #59552 (Triaged): mon: block osd pool mksnap for fs pools
Venky Shankar
10:59 AM Backport #59595 (In Progress): pacific: cephfs-top: fix help text for delay
Jos Collin
06:49 AM Backport #59595 (Resolved): pacific: cephfs-top: fix help text for delay
https://github.com/ceph/ceph/pull/50715 Backport Bot
10:22 AM Bug #58228: mgr/nfs: disallow non-existent paths when creating export
Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya, please pick up additional commits from https://github.com...
Dhairya Parmar
09:38 AM Bug #58228: mgr/nfs: disallow non-existent paths when creating export
Venky Shankar wrote:
> Dhairya, please pick up additional commits from https://github.com/ceph/ceph/pull/51005.
A...
Dhairya Parmar
06:01 AM Bug #58228: mgr/nfs: disallow non-existent paths when creating export
Dhairya, please pick up additional commits from https://github.com/ceph/ceph/pull/51005. Venky Shankar
09:53 AM Backport #59594 (In Progress): quincy: cephfs-top: fix help text for delay
Jos Collin
06:49 AM Backport #59594 (Resolved): quincy: cephfs-top: fix help text for delay
https://github.com/ceph/ceph/pull/50717 Backport Bot
06:49 AM Backport #59596 (Resolved): reef: cephfs-top: fix help text for delay
https://github.com/ceph/ceph/pull/50998 Backport Bot
06:48 AM Bug #59553 (Pending Backport): cephfs-top: fix help text for delay
Venky Shankar

04/28/2023

10:10 PM Bug #59530: mgr-nfs-upgrade: mds.foofs has 0/2
/a/yuriw-2023-04-26_20:20:05-rados-pacific-release-distro-default-smithi/7255292... Laura Flores
07:13 AM Bug #59582 (Pending Backport): snap-schedule: allow retention spec to specify max number of snaps...
Along with daily, weekly, monthly and yearly snaps, users also need a way to mention the max number of snaps they nee... Milind Changire
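The count-based retention requested above can be sketched as follows (the function name and the newest-first policy are assumptions for illustration, not the snap_schedule module's actual API): independent of the daily/weekly/monthly/yearly buckets, keep only the newest N snapshots.

```python
def prune_to_max(snapshots, max_snaps):
    """Return the snapshots to keep under a count-based retention
    spec: the max_snaps newest ones, newest first.
    Illustrative sketch only; not the snap_schedule module's API."""
    return sorted(snapshots, reverse=True)[:max_snaps]

# ISO-8601-style snapshot names sort chronologically, so a plain
# reverse sort yields newest-first.
snaps = [
    "2023-04-25T00:00", "2023-04-26T00:00",
    "2023-04-27T00:00", "2023-04-28T00:00",
]
print(prune_to_max(snaps, 2))  # ['2023-04-28T00:00', '2023-04-27T00:00']
```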

04/27/2023

12:21 PM Bug #51271 (Resolved): mgr/volumes: use a dedicated libcephfs handle for subvolume API calls
Konstantin Shalygin
12:20 PM Bug #51357 (Resolved): osd: sent kickoff request to MDS and then stuck for 15 minutes until MDS c...
Konstantin Shalygin
12:20 PM Bug #50389 (Resolved): mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or ...
Konstantin Shalygin
12:20 PM Backport #50849 (Rejected): octopus: mds: "cluster [ERR] Error recovering journal 0x203: (2) No...
Octopus is EOL Konstantin Shalygin
12:20 PM Backport #51482 (Rejected): octopus: osd: sent kickoff request to MDS and then stuck for 15 minut...
Octopus is EOL Konstantin Shalygin
12:20 PM Backport #51545 (Rejected): octopus: mgr/volumes: use a dedicated libcephfs handle for subvolume ...
Octopus is EOL Konstantin Shalygin
12:19 PM Bug #51857 (Resolved): client: make sure only to update dir dist from auth mds
Konstantin Shalygin
12:19 PM Backport #51976 (Rejected): octopus: client: make sure only to update dir dist from auth mds
Octopus is EOL Konstantin Shalygin
12:18 PM Backport #53304 (Rejected): octopus: Improve API documentation for struct ceph_client_callback_args
Octopus is EOL Konstantin Shalygin
09:52 AM Bug #59569 (In Progress): mds: allow entries to be removed from lost+found directory
Venky Shankar
09:38 AM Bug #59569 (Resolved): mds: allow entries to be removed from lost+found directory
Post file system recovery, files which have missing backtraces are recovered into the lost+found directory. Users could cho... Venky Shankar
09:07 AM Backport #50252 (Rejected): octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/cli...
Octopus is EOL Konstantin Shalygin
08:59 AM Backport #59411 (In Progress): reef: snap-schedule: handle non-existent path gracefully during sn...
Milind Changire
08:58 AM Bug #51600 (Resolved): mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and cache_hit_rate ...
Konstantin Shalygin
08:57 AM Backport #51831 (Rejected): octopus: mds: META_POP_READDIR, META_POP_FETCH, META_POP_STORE, and c...
Octopus is EOL Konstantin Shalygin
08:56 AM Backport #52442 (In Progress): pacific: client: fix dump mds twice
Konstantin Shalygin
08:55 AM Backport #52443 (Resolved): octopus: client: fix dump mds twice
Konstantin Shalygin
08:55 AM Backport #59017 (In Progress): pacific: snap-schedule: handle non-existent path gracefully during...
Milind Changire
08:53 AM Bug #51870 (Resolved): pybind/mgr/volumes: first subvolume permissions set perms on /volumes and ...
Konstantin Shalygin
08:53 AM Backport #52629 (Resolved): octopus: pybind/mgr/volumes: first subvolume permissions set perms on...
Konstantin Shalygin
08:53 AM Bug #56727 (Resolved): mgr/volumes: Subvolume creation failed on FIPs enabled system
Konstantin Shalygin
08:53 AM Backport #56979 (Resolved): quincy: mgr/volumes: Subvolume creation failed on FIPs enabled system
Konstantin Shalygin
08:52 AM Backport #56980 (Rejected): octopus: mgr/volumes: Subvolume creation failed on FIPs enabled system
Octopus is EOL Konstantin Shalygin
08:51 AM Backport #53995 (Rejected): octopus: qa: begin grepping kernel logs for kclient warnings/failures...
Octopus is EOL Konstantin Shalygin
06:49 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> Xiubo Li wrote:
> > This https://github.com/ceph/ceph/pull/47457 PR just merged 3 weeks ago ...
Xiubo Li
06:40 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> This https://github.com/ceph/ceph/pull/47457 PR just merged 3 weeks ago and changed the *MOSDOp* h...
Venky Shankar
05:17 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
This https://github.com/ceph/ceph/pull/47457 PR just merged 3 weeks ago and changed the *MOSDOp* head version to *9* ... Xiubo Li
05:13 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Sorry, the comments conflicted and it got assigned back when committing. Xiubo Li
05:13 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> Xiubo Li wrote:
> > https://pulpito.ceph.com/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing...
Xiubo Li
05:09 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Taking this from Xiubo (spoke verbally over call). Venky Shankar
04:43 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> https://pulpito.ceph.com/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing-20230412.053558-test...
Xiubo Li
04:14 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
This is a normal case: the client sent two requests with *e95* then the osd handled it and at the same time sent back... Xiubo Li
03:24 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> This one is using the *pacific* cluster and also has the same issue : https://pulpito.ceph.com/vsh...
Xiubo Li
02:47 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
This one is using the *pacific* cluster and also has the same issue : https://pulpito.ceph.com/vshankar-2023-04-20_04... Xiubo Li

04/26/2023

04:29 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
The log I copied is not actually showing the rmdir latency since I filtered incorrectly, sorry about that. This one i... Florian Pritz
04:24 PM Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
Has there been any progress on this in the last year? We believe we are hitting the same issue on a production cluste... Florian Pritz
02:26 PM Backport #59560 (In Progress): pacific: qa: RuntimeError: more than one file system available
Rishabh Dave
02:20 PM Backport #59560 (Resolved): pacific: qa: RuntimeError: more than one file system available
https://github.com/ceph/ceph/pull/51232 Rishabh Dave
02:24 PM Backport #59559 (In Progress): reef: qa: RuntimeError: more than one file system available
Rishabh Dave
02:20 PM Backport #59559 (Resolved): reef: qa: RuntimeError: more than one file system available
https://github.com/ceph/ceph/pull/51231 Rishabh Dave
02:20 PM Backport #59558 (In Progress): quincy: qa: RuntimeError: more than one file system available
https://github.com/ceph/ceph/pull/52241 Rishabh Dave
02:20 PM Bug #59425 (Pending Backport): qa: RuntimeError: more than one file system available
Rishabh Dave
12:48 PM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> BTW - all failures are for ceph-fuse. kclient seems fine.
Yeah, because the upgrade client...
Xiubo Li
12:26 PM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
BTW - all failures are for ceph-fuse. kclient seems fine. Venky Shankar
08:41 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> Xiubo Li wrote:
>
> >
> > This is a upgrade client test case from nautilus, and the clie...
Xiubo Li
06:04 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
>
> This is a upgrade client test case from nautilus, and the client sent two osd request with *...
Venky Shankar
04:44 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> >
> > [...]
> >
> > > The osd requests are just ...
Venky Shankar
06:03 AM Bug #59394: ACLs not fully supported.
Brian,
The command you are using is correct.
However, the config key is incorrect.
Set debug_mds to 20 for all mds...
Milind Changire
05:56 AM Bug #59553 (Fix Under Review): cephfs-top: fix help text for delay
Jos Collin
05:55 AM Bug #59553 (Resolved): cephfs-top: fix help text for delay
... Jos Collin
05:19 AM Bug #59552 (Resolved): mon: block osd pool mksnap for fs pools
Disabling mon-managed snaps for fs pools has been taken care of for the 'rados mksnap' path.
Unfortunately, the 'ceph ...
Milind Changire
03:18 AM Bug #59551 (Resolved): mgr/stats: exception ValueError :invalid literal for int() with base 16: '0x'
The 'ceph fs perf stats' command misses some metadata for the cephfs client, such as kernel_version. ... xinyu wang

04/25/2023

12:52 PM Bug #59530 (Triaged): mgr-nfs-upgrade: mds.foofs has 0/2
Venky Shankar
12:51 PM Bug #59534 (Triaged): qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Inp...
Venky Shankar
05:45 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> Xiubo, I saw updates in this tracker late and assigned the tracker to you in a rush. Do we kn...
Xiubo Li
05:44 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Venky Shankar wrote:
> Xiubo Li wrote:
>
> [...]
>
> > The osd requests are just dropped by *osd.3* because of...
Xiubo Li
05:12 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo Li wrote:
[...]
> The osd requests are just dropped by *osd.3* because of the osdmap versions were mismat...
Venky Shankar
04:44 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Xiubo, I saw updates in this tracker late and assigned the tracker to you in a rush. Do we know what is causing the o... Venky Shankar
04:36 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Another one: https://pulpito.ceph.com/vshankar-2023-04-21_04:40:06-fs-wip-vshankar-testing-20230420.132447-testing-de... Xiubo Li
02:21 AM Bug #59534: qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output ...
Another: http://qa-proxy.ceph.com/teuthology/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing-20230412.053558-tes... Xiubo Li
02:12 AM Bug #59534 (Triaged): qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Inp...
https://pulpito.ceph.com/vshankar-2023-04-20_04:50:51-fs-wip-vshankar-testing-20230412.053558-testing-default-smithi/... Xiubo Li
06:41 AM Bug #59527 (In Progress): qa: run scrub post disaster recovery procedure
Venky Shankar
06:11 AM Feature #58057: cephfs-top: enhance fstop tests to cover testing displayed data
Jos had a query regarding this - the idea is to dump (JSON) data which would otherwise be displayed via ncurses so as ... Venky Shankar
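The dump-instead-of-display idea can be sketched as follows (the `dump` flag and metric names are assumptions for illustration; cephfs-top's real internals differ): compute the values once, then either render them via curses or emit them as JSON so tests can parse stdout instead of scraping an ncurses screen.

```python
import json

def render(metrics, dump=False):
    """Either hand metrics to a curses UI or dump them as JSON.
    Tests use the JSON path to check the computed values; the
    ncurses path is elided in this sketch."""
    if dump:
        return json.dumps(metrics, sort_keys=True)
    raise NotImplementedError("ncurses rendering elided in this sketch")

# A test would invoke the dump path and parse the output:
out = render({"client_count": 2, "chit": 98.5}, dump=True)
print(out)
```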
05:40 AM Feature #58057 (In Progress): cephfs-top: enhance fstop tests to cover testing displayed data
Jos Collin
04:33 AM Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
Dhairya Parmar wrote:
> I was going through the config file content and found `pseudo_path` to be a bit doubtful, it...
Venky Shankar
03:40 AM Bug #59514 (Triaged): client: read wild pointer when reconnect to mds
Venky Shankar

04/24/2023

10:35 PM Bug #59530 (Triaged): mgr-nfs-upgrade: mds.foofs has 0/2
/a/yuriw-2023-04-06_15:37:58-rados-wip-yuri3-testing-2023-04-04-0833-pacific-distro-default-smithi/7234302... Laura Flores
03:49 PM Bug #59394: ACLs not fully supported.
Milind Changire wrote:
> Brian,
> Could you share the MDS debug logs for this specific operation.
> It'll help us ...
Brian Woods
09:32 AM Bug #59394: ACLs not fully supported.
Brian,
Could you share the MDS debug logs for this specific operation.
It'll help us identify the failure point.
...
Milind Changire
12:26 PM Bug #59527 (Pending Backport): qa: run scrub post disaster recovery procedure
test_data_scan does test a variety of data/metadata recovery steps; however, many tests do not run scrub, which is rec... Venky Shankar
03:47 AM Bug #59514 (Pending Backport): client: read wild pointer when reconnect to mds
We use `shallow_copy` (24279ef8) for `MetaRequest::set_caller_perms` in `Client::make_request`, but indeed the lifetim... Mer Xuanyi
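The lifetime hazard described above can be mimicked in a sketch (a Python analogue of the C++ issue; the class and method names only mirror the ones mentioned in the report, this is not Ceph code): a shallowly stored perms object reflects whatever the caller later does to it, while a deep copy is insulated.

```python
import copy

class UserPerm:
    # stand-in for the caller's credential object
    def __init__(self, uid, gid):
        self.uid, self.gid = uid, gid

class MetaRequest:
    # hypothetical mirror of set_caller_perms: shallow keeps a
    # reference to the caller's object, deep takes an independent copy
    def set_caller_perms(self, perms, deep=False):
        self.perms = copy.deepcopy(perms) if deep else perms

perm = UserPerm(1000, 1000)
shallow_req, deep_req = MetaRequest(), MetaRequest()
shallow_req.set_caller_perms(perm)
deep_req.set_caller_perms(perm, deep=True)

# The caller's object is reused before the request is replayed
# (e.g. on an MDS reconnect); in C++ this read would go through a
# wild pointer once the original perms are freed.
perm.uid = -1

print(shallow_req.perms.uid, deep_req.perms.uid)  # -1 1000
```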

04/23/2023

01:36 AM Bug #58340: mds: fsstress.sh hangs with multimds (deadlock between unlink and reintegrate straydn...
... Xiubo Li

04/21/2023

09:50 AM Fix #58758: qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
Laura Flores wrote:
> /a/yuriw-2023-03-30_21:29:24-rados-wip-yuri2-testing-2023-03-30-0826-distro-default-smithi/722...
Dhairya Parmar

04/20/2023

12:37 PM Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
I believe the problem is you're trying to set a config for an export that the module is managing itself. i.e. you're ... Patrick Donnelly
11:20 AM Bug #59463: mgr/nfs: Setting NFS export config using -i option is not working
I was going through the config file content and found `pseudo_path` to be a bit doubtful, it should be `pseudo-path` ... Dhairya Parmar

04/19/2023

09:25 AM Backport #59481 (In Progress): reef: cephfs-top, qa: test the current python version is supported
Jos Collin
06:07 AM Backport #59481 (Resolved): reef: cephfs-top, qa: test the current python version is supported
https://github.com/ceph/ceph/pull/51142 Backport Bot
09:20 AM Backport #59483 (In Progress): quincy: cephfs-top, qa: test the current python version is supported
Jos Collin
06:08 AM Backport #59483 (Resolved): quincy: cephfs-top, qa: test the current python version is supported
https://github.com/ceph/ceph/pull/51354 Backport Bot
09:09 AM Backport #59482 (In Progress): pacific: cephfs-top, qa: test the current python version is supported
Jos Collin
06:08 AM Backport #59482 (Resolved): pacific: cephfs-top, qa: test the current python version is supported
https://github.com/ceph/ceph/pull/51353 Backport Bot
06:41 AM Feature #45021: client: new asok commands for diagnosing cap handling issues
Kotresh, I'm taking this one and 44279 Venky Shankar
06:38 AM Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
Did we RCA this, Kotresh? Venky Shankar
06:02 AM Bug #58677: cephfs-top: test the current python version is supported
Jos, the PR id was incorrect, yes? (I just fixed it). Venky Shankar
06:01 AM Bug #58677 (Pending Backport): cephfs-top: test the current python version is supported
Venky Shankar
05:44 AM Backport #55749 (Resolved): quincy: snap_schedule: remove subvolume(-group) interfaces
Venky Shankar

04/18/2023

03:04 PM Backport #58986: pacific: cephfs-top: Handle `METRIC_TYPE_NONE` fields for sorting
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50597
merged
Yuri Weinstein
02:31 PM Bug #59349 (Fix Under Review): qa: FAIL: test_subvolume_group_quota_exceeded_subvolume_removal_re...
Xiubo Li
09:27 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
Venky Shankar wrote:
> Dhairya Parmar wrote:
> > cmd scrub status dumped following JSON:
> >
> > [...]
> >
> ...
Dhairya Parmar
09:00 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
Dhairya Parmar wrote:
> cmd scrub status dumped following JSON:
>
> [...]
>
> while it should've something lik...
Venky Shankar
07:03 AM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
ran yuri's branch on fs suite with scrub yaml: http://pulpito.front.sepia.ceph.com/dparmar-2023-04-17_19:11:31-fs:fun... Dhairya Parmar
08:51 AM Feature #57481: mds: enhance scrub to fragment/merge dirfrags
Rishabh, please take this one. Venky Shankar
05:17 AM Bug #59394: ACLs not fully supported.
With the root mount point being /CephFS, I do have several folders with specific EC and replication pools (hence P...
Brian Woods
05:15 AM Bug #59394: ACLs not fully supported.
The paths given were for illustration only. Exact paths are something closer to:... Brian Woods
04:07 AM Bug #59394: ACLs not fully supported.
Brian,
* Should /CephFS be assumed to be the mount point on the host system at which the cephfs is mounted?
* What wa...
Milind Changire
03:18 AM Bug #59342: qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
Venky Shankar wrote:
> The lkml link says:
>
> > Sure, but I'll hold that request for a while. I updated to binut...
Xiubo Li

04/17/2023

02:34 PM Bug #59350: qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
cmd scrub status dumped following JSON:... Dhairya Parmar
12:45 PM Bug #58878 (Can't reproduce): mds: FAILED ceph_assert(trim_to > trimming_pos)
This was suspected due to various metadata inconsistencies which probably surfaced due to destructive tools being run... Venky Shankar
12:41 PM Bug #59413 (Triaged): cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
Venky Shankar
12:40 PM Bug #59463 (Triaged): mgr/nfs: Setting NFS export config using -i option is not working
Venky Shankar
11:57 AM Bug #59463 (Closed): mgr/nfs: Setting NFS export config using -i option is not working
Unable to set NFS export configuration using config.conf
Steps followed...
Dhairya Parmar
 
