General

Profile

wei qiaomiao

Issues

                 Open  Closed  Total
Assigned issues     1       8      9
Reported issues     7      25     32

Activity

04/17/2021

01:24 AM CephFS Bug #50408 (New): mds_session state is stale after restart all mds daemon
After restarting all MDS daemons, the client-mds session turns into the stale state and the NFS service does not recover although... wei qiaomiao
01:14 AM CephFS Bug #50407 (Need More Info): mds_session state is stale after restart all mds
wei qiaomiao

02/04/2021

12:11 PM CephFS Bug #48763: mds memory leak
A similar mds memory leak has been found in our CephFS cluster with ceph version 12.2.12.
Cache status and dump_memp...
wei qiaomiao

02/03/2021

01:44 AM CephFS Bug #48148: mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
The client randomly reads, writes, setattrs, and rmdirs all the directories, but it is not clear which operations hav... wei qiaomiao

01/28/2021

11:18 AM Ceph Revision 19c0bf16 (ceph): cephfs: release client dentry_lease before send caps release to mds
Fixes: https://tracker.ceph.com/issues/47854
Signed-off-by: Wei Qiaomiao <wei.qiaomiao@zte.com.cn>
(cherry picked fr...
wei qiaomiao

12/02/2020

08:08 AM CephFS Bug #48422 (Resolved): mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(mds->get_nodeid()))
... wei qiaomiao

11/30/2020

12:34 PM Ceph Revision 21467347 (ceph): cephfs: release client dentry_lease before send caps release to mds
Fixes: https://tracker.ceph.com/issues/47854
Signed-off-by: Wei Qiaomiao <wei.qiaomiao@zte.com.cn>
(cherry picked fr...
wei qiaomiao

11/21/2020

03:01 AM CephFS Bug #48318 (Resolved): Client: the directory's capacity will not be updated after write data into the directory
The reproduction steps are as follows:... wei qiaomiao

11/09/2020

09:06 AM CephFS Bug #48148 (Triaged): mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
In my cluster with a single MDS (ceph version 12.2.13), the assert is encountered when a large number of deletion ... wei qiaomiao

10/20/2020

08:23 AM CephFS Bug #47881: mon/MDSMonitor: stop all MDS processes in the cluster at the same time. Some MDS cannot enter the "failed" state
Patrick Donnelly wrote:
> Would `ceph fs fail <fs_name>` not be the command you want?
"ceph mds fail <role_or_gid...
wei qiaomiao
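For reference, the two commands contrasted in the exchange above differ in scope; a minimal sketch, with <fs_name> and <role_or_gid> kept as placeholders for a real cluster:

    # Fail the whole file system: marks it not joinable and brings down all of its ranks.
    ceph fs fail <fs_name>

    # Fail a single MDS daemon (by rank, name, or GID) so a standby can take over its rank.
    ceph mds fail <role_or_gid>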
