Activity

From 04/11/2018 to 05/10/2018

05/10/2018

10:09 PM Bug #24090 (Resolved): mds: fragmentation in QA is slowing down ops enough for WRNs
http://pulpito.ceph.com/pdonnell-2018-05-08_18:15:09-fs-mimic-testing-basic-smithi/
http://pulpito.ceph.com/pdonnell...
Patrick Donnelly
08:48 PM Bug #24089 (Rejected): mds: print slow requests to debug log when sending health WRN to monitors ...
Nevermind, it is actually printed earlier in the log. Sorry for the noise. Patrick Donnelly
08:46 PM Bug #24089 (Rejected): mds: print slow requests to debug log when sending health WRN to monitors ...
... Patrick Donnelly
08:22 PM Bug #24088 (Duplicate): mon: slow remove_snaps op reported in cluster health log
... Patrick Donnelly
04:57 PM Bug #24087 (Duplicate): client: assert during shutdown after blacklisted
... Patrick Donnelly
11:48 AM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
the pjd: http://pulpito.ceph.com/pdonnell-2018-05-04_03:45:51-multimds-master-testing-basic-smithi/2475062/... Zheng Yan
11:25 AM Feature #22446: mds: ask idle client to trim more caps
Glad to see this :)
- Backport set to mimic,luminous
Thanks.
Webert Lima
09:44 AM Bug #23332: kclient: with fstab entry is not coming up reboot
I still don't think this is a kernel issue. Please patch the kernel with the change below and try again.... Zheng Yan
06:44 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
Xuehan Xu wrote:
> In our online clusters, we encountered bug #19593. Although we cherry-picked the fixing commits...
Xuehan Xu
04:38 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
https://github.com/ceph/ceph/pull/21923 Xuehan Xu
04:38 AM Bug #24073 (Resolved): PurgeQueue::_consume() could return true when there were no purge queue it...
In our online clusters, we encountered bug #19593. Although we cherry-picked the fixing commits, the purge queue's ... Xuehan Xu
06:39 AM Bug #24074 (Need More Info): Read ahead in fuse client is broken with large buffer size
If the read is larger than 128K (e.g. 4M as our object size), the fuse client will receive read requests as multiple ll_re... Chuan Qiu
04:10 AM Backport #23984 (In Progress): luminous: mds: scrub on fresh file system fails
https://github.com/ceph/ceph/pull/21922 Prashant D
04:07 AM Backport #23982 (In Progress): luminous: qa: TestVolumeClient.test_lifecycle needs updated for ne...
https://github.com/ceph/ceph/pull/21921 Prashant D
04:05 AM Bug #23826: mds: assert after daemon restart
The finish context of MDCache::open_undef_inodes_dirfrags() calls rejoin_gather_finish() without checking rejoin_gather. I t... Zheng Yan

05/09/2018

09:29 PM Bug #24072 (Resolved): mds: race with new session from connection and imported session
... Patrick Donnelly
09:14 PM Feature #22446 (New): mds: ask idle client to trim more caps
Patrick Donnelly
09:11 PM Documentation #23611: doc: add description of new fs-client auth profile
Blocked by resolution to #23751. Patrick Donnelly
09:08 PM Feature #22370 (In Progress): cephfs: add kernel client quota support
Patrick Donnelly
09:08 PM Feature #22372: kclient: implement quota handling using new QuotaRealm
Zheng, what's the status on those patches? Patrick Donnelly
09:05 PM Bug #23332 (Need More Info): kclient: with fstab entry is not coming up reboot
Zheng Yan wrote:
> kexec in dmesgs looks suspicious. client mounted cephfs, then used kexec to load kernel image aga...
Patrick Donnelly
09:02 PM Bug #23350: mds: deadlock during unlink and export
Well this is aggravating. I think it's time we plan evictions for clients that do not respond to cap release. Patrick Donnelly
08:56 PM Bug #23394 (Rejected): nfs-ganesha: check cache configuration when exporting FSAL_CEPH
Patrick Donnelly
08:52 PM Feature #14456 (Fix Under Review): mon: prevent older/incompatible clients from mounting the file...
https://github.com/ceph/ceph/pull/21885 Patrick Donnelly
07:06 PM Bug #23855: mds: MClientCaps should carry inode's dirstat
Testing; will revert Nathan Cutler
06:57 PM Bug #23291 (Resolved): client: add way to sync setattr operations to MDS
Nathan Cutler
06:57 PM Backport #23474 (Resolved): luminous: client: allow caller to request that setattr request be syn...
Nathan Cutler
02:55 PM Backport #23474: luminous: client: allow caller to request that setattr request be synchronous
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21109
merged
Yuri Weinstein
06:56 PM Bug #23602 (Resolved): mds: handle client requests when mds is stopping
Nathan Cutler
06:56 PM Backport #23632 (Resolved): luminous: mds: handle client requests when mds is stopping
Nathan Cutler
02:54 PM Backport #23632: luminous: mds: handle client requests when mds is stopping
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21346
merged
Yuri Weinstein
06:56 PM Bug #23541 (Resolved): client: fix request send_to_auth was never really used
Nathan Cutler
06:56 PM Backport #23635 (Resolved): luminous: client: fix request send_to_auth was never really used
Nathan Cutler
02:54 PM Backport #23635: luminous: client: fix request send_to_auth was never really used
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21354
merged
Yuri Weinstein
06:13 PM Bug #24040 (Need More Info): mds: assert in CDir::_committed
Patrick Donnelly
02:14 PM Bug #24040: mds: assert in CDir::_committed
Thanks for the report - it looks like you're using an 11.x ("kraken") version, which is no longer receiving bug fixes.... John Spray
01:56 PM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
the fsstress failure: http://pulpito.ceph.com/pdonnell-2018-05-04_03:45:51-multimds-master-testing-basic-smithi/24745... Zheng Yan
01:12 PM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
fsstress failure looks like new bug
pjd failure is similar to http://tracker.ceph.com/issues/23327
two dead tasks...
Zheng Yan
11:55 AM Bug #23327: qa: pjd test sees wrong ctime after unlink
http://pulpito.ceph.com/pdonnell-2018-05-04_03:45:51-multimds-master-testing-basic-smithi/2475062/ Zheng Yan
06:08 AM Backport #23951 (In Progress): luminous: mds: stuck during up:stopping
https://github.com/ceph/ceph/pull/21901 Prashant D
03:36 AM Backport #23946: luminous: mds: crash when failover
Opened backport PR#21900 (https://github.com/ceph/ceph/pull/21900). We need to cherry pick PR#21769 once it gets merg... Prashant D
03:29 AM Backport #23950 (In Progress): luminous: mds: stopping rank 0 cannot shutdown until log is trimmed
https://github.com/ceph/ceph/pull/21899 Prashant D

05/08/2018

10:49 PM Backport #24055 (In Progress): luminous: VolumeClient: allow ceph_volume_client to create 'volume...
Patrick Donnelly
10:45 PM Backport #24055 (Resolved): luminous: VolumeClient: allow ceph_volume_client to create 'volumes' ...
https://github.com/ceph/ceph/pull/21897 Patrick Donnelly
10:43 PM Feature #23695 (Pending Backport): VolumeClient: allow ceph_volume_client to create 'volumes' wit...
Mimic PR: https://github.com/ceph/ceph/pull/21896 Patrick Donnelly
10:37 PM Bug #24054 (Resolved): kceph: umount on evicted client blocks forever
Failed test:
/ceph/teuthology-archive/pdonnell-2018-05-08_01:06:46-kcephfs-mimic-testing-basic-smithi/2494030/teut...
Patrick Donnelly
10:33 PM Bug #24053 (Resolved): qa: kernel_mount.py umount must handle timeout arg
... Patrick Donnelly
10:27 PM Bug #24052 (Resolved): repeated eviction of idle client until some IO happens
We see repeated eviction of idle client sessions. We have client_reconnect_stale on the ceph-fuse clients, and these ... Dan van der Ster
08:56 PM Backport #24050 (Resolved): luminous: mds: MClientCaps should carry inode's dirstat
https://github.com/ceph/ceph/pull/22118 Nathan Cutler
08:56 PM Backport #24049 (Resolved): luminous: ceph-fuse: missing dentries in readdir result
https://github.com/ceph/ceph/pull/22119 Nathan Cutler
08:49 PM Bug #23530 (Resolved): mds: kicked out by monitor during rejoin
Nathan Cutler
08:49 PM Backport #23636 (Resolved): luminous: mds: kicked out by monitor during rejoin
Nathan Cutler
07:47 PM Backport #23636: luminous: mds: kicked out by monitor during rejoin
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21366
merged
Yuri Weinstein
08:49 PM Bug #23452 (Resolved): mds: assertion in MDSRank::validate_sessions
Nathan Cutler
08:48 PM Backport #23637 (Resolved): luminous: mds: assertion in MDSRank::validate_sessions
Nathan Cutler
07:46 PM Backport #23637: luminous: mds: assertion in MDSRank::validate_sessions
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21372
merged
Yuri Weinstein
08:48 PM Bug #23625 (Resolved): mds: sessions opened by journal replay do not get dirtied properly
Nathan Cutler
08:48 PM Backport #23702 (Resolved): luminous: mds: sessions opened by journal replay do not get dirtied p...
Nathan Cutler
07:46 PM Backport #23702: luminous: mds: sessions opened by journal replay do not get dirtied properly
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21441
merged
Yuri Weinstein
08:47 PM Bug #23582 (Resolved): MDSMonitor: mds health warnings printed in bad format
Nathan Cutler
08:47 PM Backport #23703 (Resolved): luminous: MDSMonitor: mds health warnings printed in bad format
Nathan Cutler
07:46 PM Backport #23703: luminous: MDSMonitor: mds health warnings printed in bad format
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21447
merged
Yuri Weinstein
08:47 PM Bug #23380 (Resolved): mds: ceph.dir.rctime follows dir ctime not inode ctime
Nathan Cutler
08:47 PM Backport #23750 (Resolved): luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
Nathan Cutler
07:45 PM Backport #23750: luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21448
merged
Yuri Weinstein
08:46 PM Bug #23764 (Resolved): MDSMonitor: new file systems are not initialized with the pending_fsmap epoch
Nathan Cutler
08:46 PM Backport #23791 (Resolved): luminous: MDSMonitor: new file systems are not initialized with the p...
Nathan Cutler
07:44 PM Backport #23791: luminous: MDSMonitor: new file systems are not initialized with the pending_fsma...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21512
merged
Yuri Weinstein
08:46 PM Bug #23714 (Resolved): slow ceph_ll_sync_inode calls after setattr
Nathan Cutler
08:45 PM Backport #23802 (Resolved): luminous: slow ceph_ll_sync_inode calls after setattr
Nathan Cutler
07:44 PM Backport #23802: luminous: slow ceph_ll_sync_inode calls after setattr
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21542
merged
Yuri Weinstein
08:45 PM Bug #23652 (Resolved): client: fix gid_count check in UserPerm->deep_copy_from()
Nathan Cutler
08:44 PM Backport #23771 (Resolved): luminous: client: fix gid_count check in UserPerm->deep_copy_from()
Nathan Cutler
07:43 PM Backport #23771: luminous: client: fix gid_count check in UserPerm->deep_copy_from()
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21596
merged
Yuri Weinstein
08:44 PM Bug #23762 (Resolved): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
Nathan Cutler
08:43 PM Backport #23792 (Resolved): luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not p...
Nathan Cutler
07:43 PM Backport #23792: luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21732
merged
Yuri Weinstein
08:43 PM Bug #23873 (Resolved): cephfs does not count st_nlink for directories correctly?
Nathan Cutler
08:43 PM Bug #19706: Laggy mon daemons causing MDS failover (symptom: failed to set counters on mds daemon...
I don't have reason to believe use of utime_t caused this issue but it's possible this could fix it: https://github.c... Patrick Donnelly
08:43 PM Backport #23987 (Resolved): luminous: cephfs does not count st_nlink for directories correctly?
Nathan Cutler
07:42 PM Backport #23987: luminous: cephfs does not count st_nlink for directories correctly?
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21796
merged
Yuri Weinstein
08:42 PM Bug #23880 (Resolved): mds: scrub code stuck at trimming log segments
Nathan Cutler
08:42 PM Backport #23930 (Resolved): luminous: mds: scrub code stuck at trimming log segments
Nathan Cutler
07:41 PM Backport #23930: luminous: mds: scrub code stuck at trimming log segments
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21840
merged
Yuri Weinstein
08:41 PM Bug #23813 (Resolved): client: "remove_session_caps still has dirty|flushing caps" when thrashing...
Nathan Cutler
08:41 PM Backport #23934 (Resolved): luminous: client: "remove_session_caps still has dirty|flushing caps"...
Nathan Cutler
07:41 PM Backport #23934: luminous: client: "remove_session_caps still has dirty|flushing caps" when thras...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21844
merged
Yuri Weinstein
06:25 PM Bug #21777 (New): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Patrick Donnelly
01:36 PM Bug #21777 (Fix Under Review): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Zheng Yan
12:40 PM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
-https://github.com/ceph/ceph/pull/21883- Zheng Yan
03:57 AM Bug #21777 (In Progress): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Zheng Yan
06:23 PM Bug #24047 (Fix Under Review): MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
https://github.com/ceph/ceph/pull/21883 Patrick Donnelly
06:23 PM Bug #24047 (Resolved): MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
... Patrick Donnelly
04:58 PM Bug #24030: ceph-fuse: double dash meaning
https://github.com/ceph/ceph/pull/21889 Jos Collin
04:07 PM Bug #23885 (Resolved): MDSMonitor: overzealous MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health warni...
Mimic PR: https://github.com/ceph/ceph/pull/21888 Patrick Donnelly
02:39 PM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
Right, there's something else wrong with the test. Patrick Donnelly
01:35 PM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
These are intentional crashes in table transaction test Zheng Yan
07:10 AM Backport #23936 (In Progress): luminous: cephfs-journal-tool: segfault during journal reset
https://github.com/ceph/ceph/pull/21874 Prashant D

05/07/2018

11:40 PM Bug #24040 (Need More Info): mds: assert in CDir::_committed
... zs 吴
10:56 PM Bug #23894 (Pending Backport): ceph-fuse: missing dentries in readdir result
Mimic: https://github.com/ceph/ceph/pull/21867 Patrick Donnelly
10:47 PM Bug #23855 (Pending Backport): mds: MClientCaps should carry inode's dirstat
Mimic PR: https://github.com/ceph/ceph/pull/21866 Patrick Donnelly
10:40 PM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
<deleted/> Patrick Donnelly
08:54 PM Bug #21777 (New): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Deleted: see #24047. Patrick Donnelly
10:00 PM Bug #24039 (Closed): MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
... Patrick Donnelly
02:41 PM Bug #24002 (Resolved): qa: check snap upgrade on multimds cluster
Patrick Donnelly
01:39 PM Bug #24030: ceph-fuse: double dash meaning
Jos, please take a crack at fixing this. Thanks! Patrick Donnelly
04:38 AM Bug #24030 (Closed): ceph-fuse: double dash meaning
... Jos Collin
01:37 PM Bug #23994 (Need More Info): mds: OSD space is not reclaimed until MDS is restarted
Patrick Donnelly
02:47 AM Bug #23994: mds: OSD space is not reclaimed until MDS is restarted
please try again and dump mds' cache (ceph daemon mds.xxx dump cache /tmp/cachedump.x) Zheng Yan
05:41 AM Backport #23934 (In Progress): luminous: client: "remove_session_caps still has dirty|flushing ca...
https://github.com/ceph/ceph/pull/21844 Prashant D
04:36 AM Bug #23768 (Fix Under Review): MDSMonitor: uncommitted state exposed to clients/mdss
https://github.com/ceph/ceph/pull/21842 Patrick Donnelly
04:02 AM Backport #23931 (In Progress): luminous: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops...
https://github.com/ceph/ceph/pull/21841 Prashant D
03:59 AM Backport #23930 (In Progress): luminous: mds: scrub code stuck at trimming log segments
https://github.com/ceph/ceph/pull/21840 Prashant D

05/06/2018

08:44 PM Bug #24028: CephFS flock() on a directory is broken
I tested the flock() logic on different hosts.
On one host:
flock my_dir sleep 1000
On the second:
flock my_dir e...
Марк Коренберг
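A rough sketch of the kind of two-host check described above, assuming flock(1) from util-linux and a directory my_dir on the shared CephFS mount; the second command is only an illustration (the exact command in the report is truncated):

# On host A: take an exclusive lock on the directory and hold it
flock my_dir sleep 1000
# On host B: try the same lock; with working flock() semantics this should
# fail immediately with -n (or block without -n) until host A releases it
flock -n my_dir echo "lock acquired"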
08:42 PM Bug #24028 (Resolved): CephFS flock() on a directory is broken
According to the man page, flock() semantics must also work on a directory. Actually, it works with, say, Ext4. It does not ... Марк Коренберг

05/04/2018

02:12 PM Bug #23994: mds: OSD space is not reclaimed until MDS is restarted
This was on the kernel client. I tried Ubuntu's 4.13.0-39-generic and 4.15.0-15-generic kernels.
With the fuse cli...
Niklas Hambuechen
01:44 PM Bug #23994: mds: OSD space is not reclaimed until MDS is restarted
What client (kernel or fuse), and what version of the client? John Spray
05:09 AM Bug #23885 (Fix Under Review): MDSMonitor: overzealous MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX heal...
https://github.com/ceph/ceph/pull/21810 Patrick Donnelly
02:07 AM Bug #24002 (Pending Backport): qa: check snap upgrade on multimds cluster
Patrick Donnelly

05/03/2018

10:33 PM Feature #23695: VolumeClient: allow ceph_volume_client to create 'volumes' without namespace isol...
https://github.com/ceph/ceph/pull/21808 Ramana Raja
09:27 PM Bug #24004 (Resolved): mds: curate priority of perf counters sent to mgr
Make sure we have the most interesting statistics available to Prometheus for dashboard use. Additionally, see if we... Patrick Donnelly
08:36 PM Bug #24002 (Fix Under Review): qa: check snap upgrade on multimds cluster
https://github.com/ceph/ceph/pull/21805 Patrick Donnelly
08:35 PM Bug #24002 (Resolved): qa: check snap upgrade on multimds cluster
To get an idea how the snap format upgrade works on a previously multimds cluster. (No need to exercise the two MDS s... Patrick Donnelly
07:48 PM Cleanup #24001 (Resolved): MDSMonitor: remove vestiges of `mds deactivate`
For Nautilus. Patrick Donnelly
06:02 PM Backport #23946: luminous: mds: crash when failover
Will also need: https://github.com/ceph/ceph/pull/21769 Patrick Donnelly
05:33 PM Feature #23623 (Resolved): mds: mark allow_snaps true by default
Patrick Donnelly
05:33 PM Documentation #23583 (Resolved): doc: update snapshot doc to account for recent changes
Patrick Donnelly
01:41 PM Backport #23987 (In Progress): luminous: cephfs does not count st_nlink for directories correctly?
Patrick Donnelly
10:28 AM Backport #23987 (Resolved): luminous: cephfs does not count st_nlink for directories correctly?
https://github.com/ceph/ceph/pull/21796 Nathan Cutler
01:27 PM Bug #23393 (Fix Under Review): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal...
Ramana Raja
01:26 PM Bug #23393: ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
https://github.com/ceph/ceph-ansible/pull/2556 Ramana Raja
01:02 PM Bug #23994 (Need More Info): mds: OSD space is not reclaimed until MDS is restarted
With my Luminous test cluster on Ubuntu I ran into a situation where I filled up an OSD by putting files on CephFS, a... Niklas Hambuechen
10:29 AM Backport #23991 (Resolved): luminous: client: hangs on umount if it had an MDS session evicted
https://github.com/ceph/ceph/pull/22018 Nathan Cutler
10:29 AM Backport #23990 (Rejected): jewel: client: hangs on umount if it had an MDS session evicted
Nathan Cutler
10:28 AM Backport #23989 (Resolved): luminous: mds: don't report slow request for blocked filelock request
https://github.com/ceph/ceph/pull/22782
follow-on fix: https://github.com/ceph/ceph/pull/26048 went into 12.2.11
Nathan Cutler
10:27 AM Backport #23984 (Resolved): luminous: mds: scrub on fresh file system fails
https://github.com/ceph/ceph/pull/21922 Nathan Cutler
10:27 AM Backport #23982 (Resolved): luminous: qa: TestVolumeClient.test_lifecycle needs updated for new e...
https://github.com/ceph/ceph/pull/21921 Nathan Cutler
12:00 AM Bug #23958: mds: scrub doesn't always return JSON results
Zheng Yan wrote:
> recursive scrub is async, it does not return anything
Good point, thanks. Even so, we should r...
Patrick Donnelly

05/02/2018

11:56 PM Bug #16842 (Can't reproduce): mds: replacement MDS crashes on InoTable release
Patrick Donnelly
10:57 PM Bug #23975 (Pending Backport): qa: TestVolumeClient.test_lifecycle needs updated for new eviction...
Patrick Donnelly
07:53 PM Bug #23975 (Fix Under Review): qa: TestVolumeClient.test_lifecycle needs updated for new eviction...
https://github.com/ceph/ceph/pull/21789 Patrick Donnelly
06:59 PM Bug #23975 (Resolved): qa: TestVolumeClient.test_lifecycle needs updated for new eviction behavior
... Patrick Donnelly
08:50 PM Bug #23768 (New): MDSMonitor: uncommitted state exposed to clients/mdss
Moving this back to fs. This is a different bug Josh. Patrick Donnelly
08:44 PM Bug #23768 (Resolved): MDSMonitor: uncommitted state exposed to clients/mdss
backport is tracked in the fs bug Josh Durgin
06:06 PM Bug #23972 (New): Ceph MDS Crash from client mounting aufs over cephfs

Here is a rough outline of my topology
https://pastebin.com/HQqbMxyj
---
I can reliably crash all (in my case...
Sean Sullivan
05:02 PM Feature #17230 (In Progress): ceph_volume_client: py3 compatible
Patrick Donnelly
04:08 PM Bug #10915 (Pending Backport): client: hangs on umount if it had an MDS session evicted
Patrick Donnelly
02:21 PM Bug #23960 (Pending Backport): mds: scrub on fresh file system fails
Patrick Donnelly
02:20 PM Bug #23873 (Pending Backport): cephfs does not count st_nlink for directories correctly?
Patrick Donnelly
02:20 PM Bug #22428 (Pending Backport): mds: don't report slow request for blocked filelock request
Patrick Donnelly
03:10 AM Bug #23958: mds: scrub doesn't always return JSON results
recursive scrub is async, it does not return anything Zheng Yan

05/01/2018

10:42 PM Bug #23960 (In Progress): mds: scrub on fresh file system fails
https://github.com/ceph/ceph/pull/21762 Patrick Donnelly
10:21 PM Bug #23960 (Resolved): mds: scrub on fresh file system fails
In a fresh vstart cluster:... Patrick Donnelly
04:01 PM Bug #23958 (Resolved): mds: scrub doesn't always return JSON results
On a vstart cluster:... Patrick Donnelly
06:52 AM Backport #23951 (Resolved): luminous: mds: stuck during up:stopping
https://github.com/ceph/ceph/pull/21901 Nathan Cutler
06:52 AM Backport #23950 (Resolved): luminous: mds: stopping rank 0 cannot shutdown until log is trimmed
https://github.com/ceph/ceph/pull/21899 Nathan Cutler
06:29 AM Bug #23826: mds: assert after daemon restart
checking MDSMap::is_rejoining() is not required here. If there are recovering mds which haven't entered rejoin state.... Zheng Yan
12:29 AM Bug #23923 (Pending Backport): mds: stopping rank 0 cannot shutdown until log is trimmed
Patrick Donnelly
12:29 AM Bug #23919 (Pending Backport): mds: stuck during up:stopping
Patrick Donnelly

04/30/2018

09:05 PM Bug #23448 (Resolved): nfs-ganesha: fails to parse rados URLs with '.' in object name
Yes. Jeff Layton
08:51 PM Bug #23448: nfs-ganesha: fails to parse rados URLs with '.' in object name
Is this resolved? Patrick Donnelly
08:00 PM Backport #23946 (Resolved): luminous: mds: crash when failover
https://github.com/ceph/ceph/pull/21900 Nathan Cutler
07:21 PM Bug #23826: mds: assert after daemon restart
Here's one possible way this could happen I think:
1. All MDS are rejoin or later.
2. A up:rejoin MDS does:
3....
Patrick Donnelly
07:00 PM Bug #23826: mds: assert after daemon restart
Adding log from failed MDS.
Looks like it's receiving handle_cache_rejoin_ack message while in replay.
Patrick Donnelly
06:53 PM Bug #23518 (Pending Backport): mds: crash when failover
Patrick Donnelly
01:43 PM Bug #23883: kclient: CephFS kernel client hang
v4.9 is quite old at this point, so it would be helpful to know if this is something that has already been fixed in m... Jeff Layton
06:54 AM Backport #23932 (In Progress): jewel: client: avoid second lock on client_lock
Jos Collin
04:38 AM Backport #23792 (In Progress): luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap no...
https://github.com/ceph/ceph/pull/21732 Patrick Donnelly
03:59 AM Backport #23933 (In Progress): luminous: client: avoid second lock on client_lock
Jos Collin

04/29/2018

08:31 PM Backport #23936 (Resolved): luminous: cephfs-journal-tool: segfault during journal reset
https://github.com/ceph/ceph/pull/21874 Nathan Cutler
08:30 PM Backport #23935 (Resolved): luminous: mds: may send LOCK_SYNC_MIX message to starting MDS
https://github.com/ceph/ceph/pull/21990 Nathan Cutler
08:30 PM Backport #23934 (Resolved): luminous: client: "remove_session_caps still has dirty|flushing caps"...
https://github.com/ceph/ceph/pull/21844 Nathan Cutler
08:30 PM Backport #23933 (Resolved): luminous: client: avoid second lock on client_lock
https://github.com/ceph/ceph/pull/21730 Nathan Cutler
08:30 PM Backport #23932 (Resolved): jewel: client: avoid second lock on client_lock
https://github.com/ceph/ceph/pull/21734 Nathan Cutler
08:30 PM Backport #23931 (Resolved): luminous: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < ...
https://github.com/ceph/ceph/pull/21841 Nathan Cutler
08:30 PM Backport #23930 (Resolved): luminous: mds: scrub code stuck at trimming log segments
https://github.com/ceph/ceph/pull/21840 Nathan Cutler
08:07 PM Bug #23815 (Pending Backport): client: avoid second lock on client_lock
Patrick Donnelly
08:06 PM Bug #23813 (Pending Backport): client: "remove_session_caps still has dirty|flushing caps" when t...
Patrick Donnelly
08:06 PM Bug #23812 (Pending Backport): mds: may send LOCK_SYNC_MIX message to starting MDS
Patrick Donnelly
08:06 PM Bug #20549 (Pending Backport): cephfs-journal-tool: segfault during journal reset
Patrick Donnelly
08:05 PM Bug #23829 (Pending Backport): qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_...
Patrick Donnelly
08:05 PM Bug #23880 (Pending Backport): mds: scrub code stuck at trimming log segments
Patrick Donnelly
01:44 AM Bug #23919 (Fix Under Review): mds: stuck during up:stopping
Zheng Yan wrote:
> I think we should call Locker::_readlock_kick in this case.
https://github.com/ceph/ceph/pull/...
Patrick Donnelly
01:15 AM Bug #23927 (Rejected): qa: test_full failure in test_barrier
https://github.com/ceph/ceph/pull/21668#pullrequestreview-116152567 Patrick Donnelly
12:54 AM Bug #23927: qa: test_full failure in test_barrier
Here too: http://pulpito.ceph.com/pdonnell-2018-04-28_06:20:24-fs-wip-pdonnell-testing-20180428.041811-testing-basic-... Patrick Donnelly
12:49 AM Bug #23927 (Rejected): qa: test_full failure in test_barrier
... Patrick Donnelly
12:36 AM Bug #23923 (Fix Under Review): mds: stopping rank 0 cannot shutdown until log is trimmed
https://github.com/ceph/ceph/pull/21719 Patrick Donnelly

04/28/2018

06:59 PM Bug #23923 (Resolved): mds: stopping rank 0 cannot shutdown until log is trimmed
... Patrick Donnelly
03:53 PM Bug #23883: kclient: CephFS kernel client hang
Hi Wei,
This is a very interesting problem. From your description, I would like to share my thought:
this shoul...
dongdong tao
10:10 AM Bug #23883: kclient: CephFS kernel client hang
client kernel dmesg:... wei jin
10:09 AM Bug #23883: kclient: CephFS kernel client hang
... wei jin
08:02 AM Bug #23883: kclient: CephFS kernel client hang
debug_mds = 10, only for the period when the MDS is recovering. Zheng Yan
07:53 AM Bug #23883: kclient: CephFS kernel client hang
Zheng Yan wrote:
> please upload mds log
which level?
after setting debug_mds = 20 and debug_ms = 1, log file is...
wei jin
05:03 AM Bug #23883: kclient: CephFS kernel client hang
please upload mds log Zheng Yan
10:34 AM Bug #22428 (Fix Under Review): mds: don't report slow request for blocked filelock request
https://github.com/ceph/ceph/pull/21715 Zheng Yan
07:50 AM Bug #23919: mds: stuck during up:stopping
I think we should call Locker::_readlock_kick in this case. Zheng Yan
04:02 AM Bug #23919: mds: stuck during up:stopping
/ceph/tmp/pdonnell/bz1566016/0x20000205a64.log.gz
holds the output of
zgrep -C5 0x20000205a64 ceph-mds.magna05...
Patrick Donnelly
03:52 AM Bug #23919: mds: stuck during up:stopping
crux of the issue appears to be here:... Patrick Donnelly
06:34 AM Bug #23920: Multiple ceph-fuse and one ceph-client.admin.log
I am using the method you suggested. After that I found three issues:
1. When I run ceph-fuse, there will be...
yuanli zhu
06:04 AM Bug #23920: Multiple ceph-fuse and one ceph-client.admin.log
Because I have two ceph-fuse instances, how can I set the config for each ceph-fuse using the command below:
ceph daemon clien...
yuanli zhu
05:00 AM Bug #23920 (Rejected): Multiple ceph-fuse and one ceph-client.admin.log
Config issue. You should set the log file config option like:
log file = /var/log/ceph/ceph-client.$pid.log
Zheng Yan
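A minimal sketch of where that option might go in ceph.conf, assuming the standard [client] section; $pid is expanded per process, so each ceph-fuse instance gets its own log file:

[client]
    log file = /var/log/ceph/ceph-client.$pid.log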
02:28 AM Bug #23920 (Rejected): Multiple ceph-fuse and one ceph-client.admin.log
I use the command as below:
/usr/bin/ceph-fuse -c /etc/ceph/ceph.conf /nas/test1 -r /test1
/usr/bin/ceph-fu...
yuanli zhu
04:54 AM Bug #23894 (Fix Under Review): ceph-fuse: missing dentries in readdir result
https://github.com/ceph/ceph/pull/21712 Zheng Yan
01:37 AM Bug #23894: ceph-fuse: missing dentries in readdir result
libcephfs does not handle the session stale message properly.
Steps to reproduce:
1. Create two ceph-fuse mounts, mo...
Zheng Yan

04/27/2018

10:27 PM Bug #23919 (Resolved): mds: stuck during up:stopping
... Patrick Donnelly
10:16 PM Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
I think they are separate issues but I will take a look. Patrick Donnelly
07:42 PM Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
@Patrick - this one looks like it could benefit from being done in a single PR along with http://tracker.ceph.com/iss... Nathan Cutler
05:34 PM Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
Travis Nielsen wrote:
> What is the timeline for the backport? Rook would like to see it in 12.2.6. Thanks!
It sh...
Patrick Donnelly
05:16 PM Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
What is the timeline for the backport? Rook would like to see it in 12.2.6. Thanks! Travis Nielsen
07:40 PM Backport #23792: luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
@Patrick could you take this one? Nathan Cutler
05:05 PM Bug #23658: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
When this issue hits, is there a way to recover? For example, to forcefully remove the multiple filesystems that are ... Travis Nielsen
11:58 AM Bug #23873: cephfs does not count st_nlink for directories correctly?
Peter Mauritius wrote:
> The Dovecot mail server does not work properly if mailbox files are stored on cephfs and a...
Jeff Layton
10:18 AM Bug #23883: kclient: CephFS kernel client hang
Zheng Yan wrote:
> besides, 4.4/4.9 kernel is too old for using multimds.
It is very difficult to upgrade kernel ...
wei jin
03:55 AM Documentation #23897 (In Progress): doc: create snapshot user doc
Include suggested upgrade procedure: https://github.com/ceph/ceph/pull/21374/commits/e05ebd08ea895626f4a2a52805f17e61... Patrick Donnelly
12:50 AM Bug #23894 (Resolved): ceph-fuse: missing dentries in readdir result
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-April/026224.html Zheng Yan

04/26/2018

11:54 PM Bug #23883: kclient: CephFS kernel client hang
besides, 4.4/4.9 kernel is too old for using multimds. Zheng Yan
11:47 PM Bug #23883: kclient: CephFS kernel client hang
need mds log to check what happened Zheng Yan
08:09 PM Bug #23883: kclient: CephFS kernel client hang

Patrick Donnelly
10:19 AM Bug #23883 (New): kclient: CephFS kernel client hang
ceph: 12.2.4/12.2.5
os: debian jessie
kernel: 4.9/4.4
After restarting all MDSs (6 in total, 5 active, 1 standby), cl...
wei jin
10:23 PM Backport #23638 (In Progress): luminous: ceph-fuse: getgroups failure causes exception
Patrick Donnelly
08:01 PM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
Jos Collin wrote:
> The hang doesn't exist in the latest code.
>
> The following is my latest finding:
>
> [.....
Patrick Donnelly
10:07 AM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
The hang doesn't exist in the latest code.
The following is my latest finding:...
Jos Collin
05:20 PM Bug #23873: cephfs does not count st_nlink for directories correctly?
The Dovecot mail server does not work properly if mailbox files are stored on cephfs and a mailbox prefix is configu... Peter Mauritius
04:39 AM Bug #23873: cephfs does not count st_nlink for directories correctly?
Zheng Yan wrote:
> If I remember right, this is not required by POSIX (btrfs does not do this). how NFS behaves depe...
Patrick Donnelly
02:34 AM Bug #23873: cephfs does not count st_nlink for directories correctly?
If I remember right, this is not required by POSIX (btrfs does not do this). how NFS behaves depends on the exported ... Zheng Yan
11:10 AM Bug #23885 (Resolved): MDSMonitor: overzealous MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health warni...

This is what we currently get when starting with vstart, which creates MDS daemons before creating the filesystem:
...
John Spray
10:35 AM Bug #23855 (Fix Under Review): mds: MClientCaps should carry inode's dirstat
https://github.com/ceph/ceph/pull/21668 Zheng Yan
09:49 AM Bug #23880 (Fix Under Review): mds: scrub code stuck at trimming log segments
https://github.com/ceph/ceph/pull/21664 Zheng Yan
07:49 AM Bug #23880 (Resolved): mds: scrub code stuck at trimming log segments
/a/pdonnell-2018-04-25_18:15:51-kcephfs-wip-pdonnell-testing-20180425.144904-testing-basic-smithi/2439034 Zheng Yan
01:26 AM Feature #17854: mds: only evict an unresponsive client when another client wants its caps
Rishabh Dave wrote:
> I am planning to start working on this feature. How can I get a client to be unresponsive with...
Zheng Yan
12:49 AM Bug #23332: kclient: with fstab entry is not coming up reboot
kexec in dmesgs looks suspicious. client mounted cephfs, then used kexec to load kernel image again. All issues happe... Zheng Yan

04/25/2018

09:08 PM Feature #17854 (In Progress): mds: only evict an unresponsive client when another client wants it...
Patrick Donnelly
07:39 PM Feature #17854: mds: only evict an unresponsive client when another client wants its caps
I am planning to start working on this feature. How can I get a client to be unresponsive without evicting it? Rishabh Dave
08:24 PM Bug #23873 (Fix Under Review): cephfs does not count st_nlink for directories correctly?
https://github.com/ceph/ceph/pull/21652 Patrick Donnelly
07:42 PM Bug #23873 (Resolved): cephfs does not count st_nlink for directories correctly?
Not sure if this behavior is intentional, but if you create an empty directory on cephfs and call stat on the directo... Danny Al-Gaaf
06:09 PM Bug #23332: kclient: with fstab entry is not coming up reboot
Luis Henriques wrote:
> Actually, the first failure seems to be a bit before:
> [...]
> The client seems to be try...
Shreekara Shastry
04:58 PM Bug #23848 (Rejected): mds: stuck shutdown procedure
Patrick Donnelly
04:06 AM Bug #23848: mds: stuck shutdown procedure
... Patrick Donnelly
04:00 AM Bug #23848 (Rejected): mds: stuck shutdown procedure
The following outputs in an infinite loop:... Patrick Donnelly
01:10 PM Bug #23855 (Resolved): mds: MClientCaps should carry inode's dirstat
The inode's dirstat gets updated by a request reply, but not by a cap message. This is problematic.
For example:
...
MDS...
Zheng Yan
08:33 AM Bug #22428: mds: don't report slow request for blocked filelock request
In case you need more examples, we're seeing this recently on 12.2.4:... Dan van der Ster
02:55 AM Bug #16842: mds: replacement MDS crashes on InoTable release
Maybe we should mark this as "need more info" or "can't reproduce". Zheng Yan
02:08 AM Backport #23698: luminous: mds: load balancer fixes
https://github.com/ceph/ceph/pull/21412 Zheng Yan

04/24/2018

07:50 PM Bug #23829 (Fix Under Review): qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_...
Zheng Yan wrote:
> It's a test case issue. The test has caused so much trouble; I'd like to drop/disable it.
Agreed.
...
Patrick Donnelly
12:24 PM Bug #23829: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_ops * 1.25)
It's a test case issue. The test has caused so much trouble; I'd like to drop/disable it. Zheng Yan
07:37 PM Bug #23837 (Fix Under Review): client: deleted inode's Bufferhead which was in STATE::Tx would le...
Patrick Donnelly
10:44 AM Bug #23837: client: deleted inode's Bufferhead which was in STATE::Tx would lead a assert fail
fixed by: https://github.com/ceph/ceph/pull/21615 Ivan Guan
09:45 AM Bug #23837 (Resolved): client: deleted inode's Bufferhead which was in STATE::Tx would lead a ass...
... Ivan Guan
07:07 PM Backport #23671 (In Progress): luminous: mds: MDBalancer using total (all time) request count in ...
https://github.com/ceph/ceph/pull/21412/commits/1a5b7eaac572f1810d0453b053781e6bc8185dd2 Patrick Donnelly
06:55 PM Tasks #23844 (In Progress): client: break client_lock
See past efforts on this. Matt Benjamin did some prototyping on Firefly. Those patches will likely be unusable but co... Patrick Donnelly
11:19 AM Backport #23835 (In Progress): luminous: mds: fix occasional dir rstat inconsistency between mult...
https://github.com/ceph/ceph/pull/21617 Prashant D
05:48 AM Backport #23835 (Resolved): luminous: mds: fix occasional dir rstat inconsistency between multi-M...
https://github.com/ceph/ceph/pull/21617 Nathan Cutler
11:10 AM Backport #23308 (In Progress): luminous: doc: Fix -d option in ceph-fuse doc
Jos Collin
08:24 AM Bug #20549 (Fix Under Review): cephfs-journal-tool: segfault during journal reset
https://github.com/ceph/ceph/pull/21610 Zheng Yan
07:09 AM Feature #23362: mds: add drop_cache command
https://github.com/ceph/ceph/pull/21566 Rishabh Dave
05:47 AM Backport #23834 (Rejected): jewel: MDSMonitor: crash after assigning standby-replay daemon in mul...
Nathan Cutler
05:47 AM Backport #23833 (Resolved): luminous: MDSMonitor: crash after assigning standby-replay daemon in ...
https://github.com/ceph/ceph/pull/22603 Nathan Cutler
04:42 AM Bug #23567 (Resolved): MDSMonitor: successive changes to max_mds can allow hole in ranks
Patrick Donnelly
04:35 AM Bug #23538 (Pending Backport): mds: fix occasional dir rstat inconsistency between multi-MDSes
Patrick Donnelly
04:34 AM Bug #23658 (Pending Backport): MDSMonitor: crash after assigning standby-replay daemon in multifs...
Patrick Donnelly
04:33 AM Bug #23799 (Resolved): MDSMonitor: creates invalid transition from up:creating to up:shutdown
Patrick Donnelly
04:32 AM Bug #23800 (Resolved): MDSMonitor: setting fs down twice will wipe old_max_mds
Patrick Donnelly

04/23/2018

08:26 PM Bug #23829 (Resolved): qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_ops * 1.25)
... Patrick Donnelly
07:51 PM Bug #20549: cephfs-journal-tool: segfault during journal reset
Another: http://pulpito.ceph.com/pdonnell-2018-04-23_17:22:02-kcephfs-wip-pdonnell-testing-20180423.033341-testing-ba... Patrick Donnelly
05:51 PM Bug #23814 (Rejected): mds: newly active mds aborts may abort in handle_file_lock
Patrick Donnelly
08:40 AM Bug #23814: mds: newly active mds aborts may abort in handle_file_lock
I think this is related to #23812. The patch for #23812 makes mds skip sending lock message to 'starting' mds. The sk... Zheng Yan
05:50 PM Bug #23812: mds: may send LOCK_SYNC_MIX message to starting MDS
https://github.com/ceph/ceph/pull/21601 Patrick Donnelly
05:10 PM Backport #22860 (Resolved): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercyc...
Looks like a different assertion so perhaps a new bug. I'll create a separate issue for this. Patrick Donnelly
03:49 PM Backport #22860 (In Progress): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in power...
... Sage Weil
03:36 PM Backport #23151 (In Progress): luminous: doc: update ceph-fuse with FUSE options
Jos Collin
01:54 PM Bug #23826 (Duplicate): mds: assert after daemon restart
... Patrick Donnelly
01:26 PM Backport #23475 (In Progress): luminous: ceph-fuse: trim ceph-fuse -V output
Jos Collin
11:54 AM Backport #23771 (In Progress): luminous: client: fix gid_count check in UserPerm->deep_copy_from()
Jos Collin
11:50 AM Backport #23771: luminous: client: fix gid_count check in UserPerm->deep_copy_from()
https://github.com/ceph/ceph/pull/21596 Jos Collin
10:45 AM Bug #23813 (Fix Under Review): client: "remove_session_caps still has dirty|flushing caps" when t...
https://github.com/ceph/ceph/pull/21593 Zheng Yan
08:53 AM Bug #23518 (Fix Under Review): mds: crash when failover
https://github.com/ceph/ceph/pull/21592 Zheng Yan
07:49 AM Bug #23815: client: avoid second lock on client_lock
supriti singh wrote:
> supriti singh wrote:
> > In function ll_get_stripe_osd client_lock is taken. But its acquire...
supriti singh
03:52 AM Backport #23818 (In Progress): luminous: client: add option descriptions and review levels (e.g. ...
https://github.com/ceph/ceph/pull/21589 Prashant D

04/21/2018

09:42 PM Backport #23818 (Resolved): luminous: client: add option descriptions and review levels (e.g. LEV...
https://github.com/ceph/ceph/pull/21589 Nathan Cutler
07:52 AM Bug #23815 (Fix Under Review): client: avoid second lock on client_lock
Jos Collin
07:43 AM Bug #23815: client: avoid second lock on client_lock
supriti singh wrote:
> In function ll_get_stripe_osd client_lock is taken. But its acquired again in ll_get_inodeno(...
supriti singh
07:35 AM Bug #23815 (Resolved): client: avoid second lock on client_lock
In function ll_get_stripe_osd client_lock is taken. But its acquired again in ll_get_inodeno(). Avoid double locking.... supriti singh
05:32 AM Bug #23814 (Rejected): mds: newly active mds aborts may abort in handle_file_lock
... Patrick Donnelly
05:03 AM Bug #23813 (Resolved): client: "remove_session_caps still has dirty|flushing caps" when thrashing...
While doing a simple copy of /usr with ceph-fuse and thrashing max_mds between 1 and 2, I got these errors from ceph-... Patrick Donnelly
12:30 AM Bug #23812 (Fix Under Review): mds: may send LOCK_SYNC_MIX message to starting MDS
-https://github.com/ceph/ceph/pull/21577- Patrick Donnelly
12:28 AM Bug #23812 (Resolved): mds: may send LOCK_SYNC_MIX message to starting MDS
From mds.0:... Patrick Donnelly

04/20/2018

04:36 PM Feature #14456: mon: prevent older/incompatible clients from mounting the file system
Pre-mimic clients, yes. Patrick Donnelly
06:58 AM Feature #14456: mon: prevent older/incompatible clients from mounting the file system
prevent pre-luminous client to connect? Zheng Yan
04:33 PM Bug #21848 (Fix Under Review): client: re-expand admin_socket metavariables in child process
Patrick Donnelly
03:52 AM Bug #21848: client: re-expand admin_socket metavariables in child process
https://github.com/ceph/ceph/pull/21544
Patrick, could you pls take a look at this new fix? Now it is not only for...
Zhi Zhang
11:41 AM Bug #23518 (In Progress): mds: crash when failover
Zheng Yan
08:31 AM Bug #23518: mds: crash when failover
This one is related to http://tracker.ceph.com/issues/23503. #23503 can explain why session was evicted Zheng Yan
07:29 AM Bug #23327: qa: pjd test sees wrong ctime after unlink
should close this if it does not happen again Zheng Yan
05:52 AM Documentation #23583 (In Progress): doc: update snapshot doc to account for recent changes
by the commit "mds: update dev document of cephfs snapshot" in PR https://github.com/ceph/ceph/pull/21374 Zheng Yan
02:31 AM Backport #23802 (In Progress): luminous: slow ceph_ll_sync_inode calls after setattr
https://github.com/ceph/ceph/pull/21542 Prashant D

04/19/2018

11:05 PM Bug #23755 (Resolved): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestStrays)
Patrick Donnelly
10:12 PM Fix #4708 (Rejected): MDS: journaler pre-zeroing is dangerous
Thanks for explaining Zheng. Closing this. Patrick Donnelly
01:44 PM Fix #4708 (Need More Info): MDS: journaler pre-zeroing is dangerous
I don't think it's still a problem. A new MDS takes over a rank after it sees the old MDS is blacklisted in the osdmap. There is... Zheng Yan
10:03 PM Backport #23790: luminous: mds: crash during shutdown_pass
Please just remove the global_snaprealm part of the backport. Patrick Donnelly
10:50 AM Backport #23790 (Need More Info): luminous: mds: crash during shutdown_pass
To backport this PR, we need complete PR#16779 (https://github.com/ceph/ceph/pull/16779) having changes related to mu... Prashant D
05:25 AM Backport #23790 (Resolved): luminous: mds: crash during shutdown_pass
https://github.com/ceph/ceph/pull/23015 Nathan Cutler
10:00 PM Bug #22933 (Pending Backport): client: add option descriptions and review levels (e.g. LEVEL_DEV)
Patrick Donnelly
08:06 PM Backport #23802 (Resolved): luminous: slow ceph_ll_sync_inode calls after setattr
https://github.com/ceph/ceph/pull/21542 Nathan Cutler
06:59 PM Bug #23800 (Fix Under Review): MDSMonitor: setting fs down twice will wipe old_max_mds
https://github.com/ceph/ceph/pull/21536 Patrick Donnelly
06:42 PM Bug #23800 (Resolved): MDSMonitor: setting fs down twice will wipe old_max_mds
Patrick Donnelly
06:49 PM Bug #23799 (Fix Under Review): MDSMonitor: creates invalid transition from up:creating to up:shut...
https://github.com/ceph/ceph/pull/21535 Patrick Donnelly
06:36 PM Bug #23799 (Resolved): MDSMonitor: creates invalid transition from up:creating to up:shutdown
... Patrick Donnelly
06:11 PM Bug #23714 (Pending Backport): slow ceph_ll_sync_inode calls after setattr
Patrick Donnelly
03:31 PM Bug #23797 (Can't reproduce): qa: cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
This is v12.2.5 QE validation
Run: http://pulpito.ceph.com/yuriw-2018-04-17_21:20:41-knfs-luminous-testing-basic-s...
Yuri Weinstein
09:52 AM Bug #23332: kclient: with fstab entry is not coming up reboot
Actually, the first failure seems to be a bit before:... Luis Henriques
08:12 AM Backport #23791 (In Progress): luminous: MDSMonitor: new file systems are not initialized with th...
https://github.com/ceph/ceph/pull/21512 Prashant D
05:25 AM Backport #23791 (Resolved): luminous: MDSMonitor: new file systems are not initialized with the p...
https://github.com/ceph/ceph/pull/21512 Nathan Cutler
05:25 AM Backport #23792 (Resolved): luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not p...
https://github.com/ceph/ceph/pull/21732 Nathan Cutler
03:24 AM Bug #23658 (Fix Under Review): MDSMonitor: crash after assigning standby-replay daemon in multifs...
Zheng Yan
02:48 AM Bug #23658: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
https://github.com/ceph/ceph/pull/21510 Zheng Yan

04/18/2018

09:43 PM Feature #20606 (Resolved): mds: improve usability of cluster rank manipulation and setting cluste...
Patrick Donnelly
09:42 PM Subtask #20864 (Resolved): kill allow_multimds
Patrick Donnelly
09:42 PM Feature #20610 (Resolved): MDSMonitor: add new command to shrink the cluster in an automated way
Patrick Donnelly
09:41 PM Feature #20608 (Resolved): MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs se...
Patrick Donnelly
09:41 PM Feature #20609 (Resolved): MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the ...
Patrick Donnelly
09:40 PM Bug #23764 (Pending Backport): MDSMonitor: new file systems are not initialized with the pending_...
Patrick Donnelly
09:39 PM Bug #23766 (Pending Backport): mds: crash during shutdown_pass
Patrick Donnelly
09:38 PM Bug #23762 (Pending Backport): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_...
Patrick Donnelly
06:36 PM Feature #3244 (New): qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha ...
Jeff, fixed the wording to be clear. Patrick Donnelly
06:14 PM Feature #3244 (Rejected): qa: integrate Ganesha into teuthology testing to regularly exercise Gan...
I'm going to suggest that we just close this bug. We're doing this as a matter of course with the current work to cle... Jeff Layton
05:43 PM Bug #23421 (Need More Info): ceph-fuse: stop ceph-fuse if no root permissions?
Jos, please get the client logs so we can diagnose. Patrick Donnelly
01:01 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
Thanks, dongdong! That seems to resolve the problem. Pull request is up here:
https://github.com/ceph/ceph/pull/21...
Jeff Layton
11:53 AM Backport #23770 (In Progress): luminous: ceph-fuse: return proper exit code
https://github.com/ceph/ceph/pull/21495 Prashant D
08:49 AM Documentation #23775: PendingReleaseNotes: add notes for major Mimic features
FYI: https://github.com/ceph/ceph/pull/21374 already includes the mds upgrade process. Zheng Yan

04/17/2018

11:10 PM Documentation #23775 (Resolved): PendingReleaseNotes: add notes for major Mimic features
mds upgrade process, snapshots, kernel quotas, etc. Patrick Donnelly
11:08 PM Bug #23421 (Closed): ceph-fuse: stop ceph-fuse if no root permissions?
Patrick Donnelly
08:31 AM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
Patrick Donnelly wrote:
> Jos, can you get more detailed debug logs when this happens? It is probably not related to...
Jos Collin
07:00 PM Backport #23771 (Resolved): luminous: client: fix gid_count check in UserPerm->deep_copy_from()
https://github.com/ceph/ceph/pull/21596 Nathan Cutler
07:00 PM Backport #23770 (Resolved): luminous: ceph-fuse: return proper exit code
https://github.com/ceph/ceph/pull/21495 Nathan Cutler
04:29 PM Bug #23755 (Fix Under Review): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestS...
Patrick Donnelly
04:29 PM Bug #23755 (Pending Backport): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestS...
Patrick Donnelly
01:11 PM Bug #23755 (Fix Under Review): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestS...
https://github.com/ceph/ceph/pull/21472 Zheng Yan
04:16 PM Bug #23768 (Resolved): MDSMonitor: uncommitted state exposed to clients/mdss
e.g.
https://github.com/ceph/ceph/pull/21458#discussion_r182041693
and
https://github.com/ceph/ceph/pull/214...
Patrick Donnelly
02:08 PM Backport #23704 (In Progress): luminous: ceph-fuse: broken directory permission checking
https://github.com/ceph/ceph/pull/21475 Prashant D
01:51 PM Bug #23665: ceph-fuse: return proper exit code
backporter note: please include https://github.com/ceph/ceph/pull/21473 Patrick Donnelly
11:59 AM Feature #23623 (Fix Under Review): mds: mark allow_snaps true by default
by one commit in https://github.com/ceph/ceph/pull/21374 Zheng Yan
03:50 AM Bug #23762 (Fix Under Review): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_...
https://github.com/ceph/ceph/pull/21458 Patrick Donnelly
01:50 AM Bug #23762: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
Patrick Donnelly
03:16 AM Bug #23766 (Fix Under Review): mds: crash during shutdown_pass
https://github.com/ceph/ceph/pull/21457 Patrick Donnelly
03:13 AM Bug #23766 (Resolved): mds: crash during shutdown_pass
... Patrick Donnelly
01:49 AM Bug #23764 (Fix Under Review): MDSMonitor: new file systems are not initialized with the pending_...
https://github.com/ceph/ceph/pull/21456 Patrick Donnelly
01:41 AM Bug #23764 (In Progress): MDSMonitor: new file systems are not initialized with the pending_fsmap...
Patrick Donnelly
01:41 AM Bug #23764 (Resolved): MDSMonitor: new file systems are not initialized with the pending_fsmap epoch
Problem here: https://github.com/ceph/ceph/blob/60e8a63fdc21654d7f199a67f3f410a9e33c8134/src/mds/FSMap.cc#L234
FSM...
Patrick Donnelly

04/16/2018

08:10 PM Bug #23762 (In Progress): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
Patrick Donnelly
08:07 PM Bug #23762 (Resolved): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
https://github.com/ceph/ceph/blob/60e8a63fdc21654d7f199a67f3f410a9e33c8134/src/mon/MDSMonitor.cc#L162-L166 Patrick Donnelly
02:13 PM Backport #23750 (In Progress): luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
https://github.com/ceph/ceph/pull/21448 Prashant D
02:09 PM Backport #23703 (In Progress): luminous: MDSMonitor: mds health warnings printed in bad format
https://github.com/ceph/ceph/pull/21447 Prashant D
12:20 PM Backport #23703: luminous: MDSMonitor: mds health warnings printed in bad format
I'm on it. Prashant D
01:50 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
I tried to roll a standalone testcase for this, but it didn't stall out in the same way. I'm not quite sure what caus... Jeff Layton
11:36 AM Feature #21156: mds: speed up recovery with many open inodes
Patrick Donnelly wrote:
> Very unlikely because of the new structure in the metadata pool adds unacceptable risk for...
Webert Lima
10:44 AM Backport #23702 (In Progress): luminous: mds: sessions opened by journal replay do not get dirtie...
https://github.com/ceph/ceph/pull/21441 Prashant D
03:29 AM Bug #23652 (Pending Backport): client: fix gid_count check in UserPerm->deep_copy_from()
Patrick Donnelly
03:29 AM Bug #23665 (Pending Backport): ceph-fuse: return proper exit code
Patrick Donnelly
03:25 AM Bug #23755 (Resolved): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestStrays)
... Patrick Donnelly
03:16 AM Bug #22933 (Fix Under Review): client: add option descriptions and review levels (e.g. LEVEL_DEV)
https://github.com/ceph/ceph/pull/21434 Patrick Donnelly

04/15/2018

06:30 PM Bug #23751: mon: use fs-client profile for fs authorize mon caps
Actually I think this gives blanket permission to read from OSDs for all pools so we may actually want to remove this... Patrick Donnelly
06:27 PM Bug #23751 (New): mon: use fs-client profile for fs authorize mon caps
This is simpler and consistent. Patrick Donnelly
05:40 PM Backport #23750 (Resolved): luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
https://github.com/ceph/ceph/pull/21448 Nathan Cutler
05:39 PM Bug #23724 (Resolved): qa: broad snapshot functionality testing across clients
Ganesha FSAL
ceph-fuse
kclient
Patrick Donnelly
02:02 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
I think Patrick is right; maybe we should call flush_mdlog_sync to make the MDS flush the mdlog before we wait on the ... dongdong tao
03:35 AM Bug #23714: slow ceph_ll_sync_inode calls after setattr
Sounds like Ganesha is blocked on a journal flush by the MDS. Patrick Donnelly
02:50 AM Bug #23723 (New): qa: incorporate smallfile workload
Add smallfile workload
https://github.com/distributed-system-analysis/smallfile
to fs:workloads suite.
Patrick Donnelly

04/14/2018

03:30 PM Bug #23715: "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-distro-basic-ovh
Patrick, any reason not to run k* suites on ovh? Yuri Weinstein
07:10 AM Bug #23715: "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-distro-basic-ovh
Yuri, I have not seen this on smithi, so I assume it only happens in virtual environments. Nathan Cutler
12:27 AM Cleanup #23718 (Resolved): qa: merge fs/kcephfs suites
and remove redundant tests (e.g. inline on/off with administrative tests like changing max_mds). Patrick Donnelly
12:15 AM Cleanup #23717 (New): cephfs: consider renaming max_mds to a better name
It is no longer considered a "max" and having fewer ranks than max_mds is considered a bad configuration which genera... Patrick Donnelly
12:12 AM Bug #23567 (Fix Under Review): MDSMonitor: successive changes to max_mds can allow hole in ranks
https://github.com/ceph/ceph/pull/16608
QA and fix here.
Patrick Donnelly

04/13/2018

09:53 PM Bug #23715: "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-distro-basic-ovh
Nathan, I am not sure if you have seen this.
Suspect also in http://pulpito.ceph.com/teuthology-2018-04-11_04:15:02-...
Yuri Weinstein
09:52 PM Bug #23715 (Closed): "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-dist...
Run: http://pulpito.ceph.com/teuthology-2018-04-11_04:10:03-fs-jewel-distro-basic-ovh/
Jobs: 40 jobs
Logs: teutholo...
Yuri Weinstein
09:40 PM Feature #21156: mds: speed up recovery with many open inodes
Webert Lima wrote:
> Hi, thank you very much for this.
>
> I see this
> > Target version: Ceph - v13.0.0
>
> ...
Patrick Donnelly
08:33 PM Feature #21156: mds: speed up recovery with many open inodes
Hi, thank you very much for this.
I see this
> Target version: Ceph - v13.0.0
So I'm not even asking for a bac...
Webert Lima
09:32 PM Backport #23698 (In Progress): luminous: mds: load balancer fixes
Nathan Cutler
09:07 AM Backport #23698: luminous: mds: load balancer fixes
https://github.com/ceph/ceph/pull/21412 Zheng Yan
08:54 AM Backport #23698 (New): luminous: mds: load balancer fixes
backport https://github.com/ceph/ceph/pull/19220 to luminous Nathan Cutler
08:41 AM Backport #23698 (Pending Backport): luminous: mds: load balancer fixes
Nathan Cutler
02:26 AM Backport #23698 (Resolved): luminous: mds: load balancer fixes
https://github.com/ceph/ceph/pull/21412 Zheng Yan
06:49 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
I'll see if I can cook up libcephfs standalone testcase for this. Jeff Layton
06:48 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
We recently added some calls to ceph_ll_sync_inode in ganesha, to be done after a setattr request. Testing with cthon... Jeff Layton
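For illustration, a minimal libcephfs sketch of that pattern (not the actual Ganesha FSAL code; the mount, inode lookup, and full error handling are assumed to exist elsewhere): an attribute change via ceph_ll_setattr() followed by ceph_ll_sync_inode(), the call that was observed to stall while the MDS flushes its journal.

#include <cephfs/libcephfs.h>
#include <cstring>

// Set the mtime on an already-looked-up inode, then force the dirty
// metadata out. syncdataonly = 0 flushes metadata as well as data.
int set_mtime_and_sync(struct ceph_mount_info *cmount, struct Inode *in,
                       UserPerm *perms)
{
  struct ceph_statx stx;
  memset(&stx, 0, sizeof(stx));
  stx.stx_mtime.tv_sec = 0;   // example value only

  int r = ceph_ll_setattr(cmount, in, &stx, CEPH_SETATTR_MTIME, perms);
  if (r < 0)
    return r;

  // This is the call that can block until the MDS commits its journal.
  return ceph_ll_sync_inode(cmount, in, 0);
}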
06:40 PM Bug #23714 (Resolved): slow ceph_ll_sync_inode calls after setattr
Jeff Layton
04:55 PM Bug #23697 (Pending Backport): mds: load balancer fixes
Patrick Donnelly
08:56 AM Bug #23697: mds: load balancer fixes
Sorry for the confusing edits. The "backport-create-issue" script is much better at this than I am. It's enough to cha... Nathan Cutler
08:36 AM Bug #23697 (New): mds: load balancer fixes
Nathan Cutler
02:23 AM Bug #23697 (Resolved): mds: load balancer fixes
https://github.com/ceph/ceph/pull/19220 Zheng Yan
09:48 AM Bug #21848: client: re-expand admin_socket metavariables in child process
Hi Patrick,
Sorry for missing this for so long. I will take a look soon to see whether there is a better f...
Zhi Zhang
08:35 AM Backport #23705 (Rejected): jewel: ceph-fuse: broken directory permission checking
Nathan Cutler
08:35 AM Backport #23704 (Resolved): luminous: ceph-fuse: broken directory permission checking
https://github.com/ceph/ceph/pull/21475 Nathan Cutler
08:35 AM Backport #23703 (Resolved): luminous: MDSMonitor: mds health warnings printed in bad format
https://github.com/ceph/ceph/pull/21447 Nathan Cutler
08:34 AM Backport #23702 (Resolved): luminous: mds: sessions opened by journal replay do not get dirtied p...
https://github.com/ceph/ceph/pull/21441 Nathan Cutler
02:07 AM Feature #17434: qa: background rsync task for FS workunits
Current work on this by Ramakrishnan: https://github.com/ceph/ceph/pull/12503 Patrick Donnelly
01:26 AM Bug #23509 (Pending Backport): ceph-fuse: broken directory permission checking
Patrick Donnelly
01:25 AM Bug #23582 (Pending Backport): MDSMonitor: mds health warnings printed in bad format
Patrick Donnelly
01:24 AM Bug #23625 (Pending Backport): mds: sessions opened by journal replay do not get dirtied properly
Patrick Donnelly
01:10 AM Feature #22417 (Resolved): support purge queue with cephfs-journal-tool
Zheng Yan

04/12/2018

10:50 PM Feature #20608 (In Progress): MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs...
Doug, thinking about this more, I'd like to keep "cluster_down" (as "joinable" or not) because it simplifies qa testi... Patrick Donnelly
09:40 PM Feature #23695 (Resolved): VolumeClient: allow ceph_volume_client to create 'volumes' without nam...
https://bugzilla.redhat.com/show_bug.cgi?id=1566194
to address the needs of
https://github.com/kubernetes-incub...
Patrick Donnelly
07:39 PM Documentation #23568 (Resolved): doc: outline the steps for upgrading an MDS cluster
Patrick Donnelly
07:39 PM Backport #23634 (Resolved): luminous: doc: outline the steps for upgrading an MDS cluster
Patrick Donnelly
07:27 PM Backport #23634: luminous: doc: outline the steps for upgrading an MDS cluster
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21352
merged
Yuri Weinstein
07:19 PM Bug #23665 (Fix Under Review): ceph-fuse: return proper exit code
https://github.com/ceph/ceph/pull/21396 Patrick Donnelly
12:53 AM Bug #23665 (Resolved): ceph-fuse: return proper exit code
from the mailing list... Zheng Yan
05:00 PM Feature #23689 (New): qa: test major/minor version upgrades
We should verify the upgrade process [1] works and that older clients with parallel I/O still function correctly.
...
Patrick Donnelly
10:19 AM Backport #23637 (In Progress): luminous: mds: assertion in MDSRank::validate_sessions
https://github.com/ceph/ceph/pull/21372 Prashant D
04:17 AM Backport #23636 (In Progress): luminous: mds: kicked out by monitor during rejoin
https://github.com/ceph/ceph/pull/21366 Prashant D
01:35 AM Backport #23671 (Resolved): luminous: mds: MDBalancer using total (all time) request count in loa...
https://github.com/ceph/ceph/pull/21412 Nathan Cutler
01:34 AM Backport #23669 (Resolved): luminous: doc: create doc outlining steps to bring down cluster
https://github.com/ceph/ceph/pull/22872 Nathan Cutler

04/11/2018

07:05 PM Feature #20611 (New): MDSMonitor: do not show cluster health warnings for file system intentional...
Doug, I was just thinking about this, and a valid reason not to want a HEALTH_ERR is if you have dozens or hundreds of... Patrick Donnelly
01:42 PM Feature #20611 (Fix Under Review): MDSMonitor: do not show cluster health warnings for file syste...
Douglas Fuller
01:42 PM Feature #20611: MDSMonitor: do not show cluster health warnings for file system intentionally mar...
See https://github.com/ceph/ceph/pull/16608, which implements the opposite of this behavior. Whenever a filesystem is... Douglas Fuller
06:54 PM Feature #20607 (Rejected): MDSMonitor: change "mds deactivate" to clearer "mds rejoin"
This is rejected in favor of removing `mds deactivate`. Patrick Donnelly
05:31 PM Documentation #23427 (Pending Backport): doc: create doc outlining steps to bring down cluster
Patrick Donnelly
05:29 PM Bug #23658 (Resolved): MDSMonitor: crash after assigning standby-replay daemon in multifs setup
From: https://github.com/rook/rook/issues/1027... Patrick Donnelly
05:19 PM Bug #23567: MDSMonitor: successive changes to max_mds can allow hole in ranks
Doug, I tested with master but I believe it also happened with your PR. I can't remember. Patrick Donnelly
03:02 PM Bug #23567 (Need More Info): MDSMonitor: successive changes to max_mds can allow hole in ranks
Was this before or after https://github.com/ceph/ceph/pull/16608 ? Douglas Fuller
03:14 PM Bug #23652 (Fix Under Review): client: fix gid_count check in UserPerm->deep_copy_from()
https://github.com/ceph/ceph/pull/21341 Jos Collin
03:10 PM Bug #23652 (Resolved): client: fix gid_count check in UserPerm->deep_copy_from()
Fix the gid_count check in UserPerm->deep_copy_from(): allocate gids only if gid_count > 0. Jos Collin
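For context, a hedged sketch of the pattern the fix describes (the struct and member names below are illustrative assumptions, not the actual UserPerm definition): allocate and copy the supplementary gid array only when gid_count is positive, so an empty source is never dereferenced.

#include <sys/types.h>
#include <algorithm>

// Illustrative stand-in for the deep-copy logic: only touch the source
// gid array when there is actually something to copy.
struct PermSketch {
  uid_t uid = 0;
  gid_t gid = 0;
  int gid_count = 0;
  gid_t *gids = nullptr;

  void deep_copy_from(const PermSketch &b) {
    uid = b.uid;
    gid = b.gid;
    delete[] gids;
    gids = nullptr;
    gid_count = b.gid_count;
    if (gid_count > 0) {               // the guard the fix adds
      gids = new gid_t[gid_count];
      std::copy(b.gids, b.gids + gid_count, gids);
    }
  }
};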
02:34 PM Backport #23635 (In Progress): luminous: client: fix request send_to_auth was never really used
https://github.com/ceph/ceph/pull/21354 Prashant D
01:46 PM Feature #20609 (In Progress): MDSMonitor: add new command `ceph fs set <fs_name> down` to bring t...
https://github.com/ceph/ceph/pull/16608 overhauls this behavior, and re-implements the cluster_down flag for this fun... Douglas Fuller
01:45 PM Feature #20606 (Fix Under Review): mds: improve usability of cluster rank manipulation and settin...
https://github.com/ceph/ceph/pull/16608 Douglas Fuller
01:44 PM Feature #20610 (Fix Under Review): MDSMonitor: add new command to shrink the cluster in an automa...
https://github.com/ceph/ceph/pull/16608 Douglas Fuller
01:43 PM Feature #20608 (Rejected): MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs se...
This behavior is overhauled in https://github.com/ceph/ceph/pull/16608 . Douglas Fuller
01:30 PM Backport #23634 (In Progress): luminous: doc: outline the steps for upgrading an MDS cluster
https://github.com/ceph/ceph/pull/21352 Prashant D
01:30 PM Bug #23393 (Resolved): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
Ramana Raja
01:26 PM Bug #23643 (Resolved): qa: osd_mon_report_interval typo in test_full.py
Sage Weil
10:27 AM Backport #23632 (In Progress): luminous: mds: handle client requests when mds is stopping
https://github.com/ceph/ceph/pull/21346 Prashant D
09:09 AM Backport #23632: luminous: mds: handle client requests when mds is stopping
I am on it. Prashant D
05:14 AM Bug #21745 (Pending Backport): mds: MDBalancer using total (all time) request count in load stati...
Patrick Donnelly
01:00 AM Feature #22372: kclient: implement quota handling using new QuotaRealm
Implemented by the following commits in the testing branch:
ceph: quota: report root dir quota usage in statfs …
ceph: quota: add cou...
Zheng Yan
12:47 AM Bug #18730 (Closed): mds: backtrace issues getxattr for every file with cap on rejoin
This should be resolved by the open file table (https://github.com/ceph/ceph/pull/20132). Zheng Yan
12:44 AM Fix #5268 (Closed): mds: fix/clean up file size/mtime recovery code
The current code does parallel object checks. Zheng Yan
12:39 AM Bug #4212 (Closed): mds: open_snap_parents isn't called all the times it needs to be
With the new snaprealm format, there is no need to open past parents. Zheng Yan
12:37 AM Bug #21412 (Closed): cephfs: too many cephfs snapshots chokes the system
Zheng Yan
12:37 AM Bug #21412: cephfs: too many cephfs snapshots chokes the system
This is actually an OSD issue. I talked to Josh at Cephalocon; he said it has already been fixed. Zheng Yan
 
