Activity

From 04/03/2018 to 05/02/2018

05/02/2018

11:56 PM Bug #16842 (Can't reproduce): mds: replacement MDS crashes on InoTable release
Patrick Donnelly
10:57 PM Bug #23975 (Pending Backport): qa: TestVolumeClient.test_lifecycle needs updated for new eviction...
Patrick Donnelly
07:53 PM Bug #23975 (Fix Under Review): qa: TestVolumeClient.test_lifecycle needs updated for new eviction...
https://github.com/ceph/ceph/pull/21789 Patrick Donnelly
06:59 PM Bug #23975 (Resolved): qa: TestVolumeClient.test_lifecycle needs updated for new eviction behavior
... Patrick Donnelly
08:50 PM Bug #23768 (New): MDSMonitor: uncommitted state exposed to clients/mdss
Moving this back to fs. This is a different bug, Josh. Patrick Donnelly
08:44 PM Bug #23768 (Resolved): MDSMonitor: uncommitted state exposed to clients/mdss
backport is tracked in the fs bug Josh Durgin
06:06 PM Bug #23972 (New): Ceph MDS Crash from client mounting aufs over cephfs

Here is a rough outline of my topology
https://pastebin.com/HQqbMxyj
---
I can reliably crash all (in my case...
Sean Sullivan
05:02 PM Feature #17230 (In Progress): ceph_volume_client: py3 compatible
Patrick Donnelly
04:08 PM Bug #10915 (Pending Backport): client: hangs on umount if it had an MDS session evicted
Patrick Donnelly
02:21 PM Bug #23960 (Pending Backport): mds: scrub on fresh file system fails
Patrick Donnelly
02:20 PM Bug #23873 (Pending Backport): cephfs does not count st_nlink for directories correctly?
Patrick Donnelly
02:20 PM Bug #22428 (Pending Backport): mds: don't report slow request for blocked filelock request
Patrick Donnelly
03:10 AM Bug #23958: mds: scrub doesn't always return JSON results
Recursive scrub is async; it does not return anything. Zheng Yan

05/01/2018

10:42 PM Bug #23960 (In Progress): mds: scrub on fresh file system fails
https://github.com/ceph/ceph/pull/21762 Patrick Donnelly
10:21 PM Bug #23960 (Resolved): mds: scrub on fresh file system fails
In a fresh vstart cluster:... Patrick Donnelly
04:01 PM Bug #23958 (Resolved): mds: scrub doesn't always return JSON results
On a vstart cluster:... Patrick Donnelly
06:52 AM Backport #23951 (Resolved): luminous: mds: stuck during up:stopping
https://github.com/ceph/ceph/pull/21901 Nathan Cutler
06:52 AM Backport #23950 (Resolved): luminous: mds: stopping rank 0 cannot shutdown until log is trimmed
https://github.com/ceph/ceph/pull/21899 Nathan Cutler
06:29 AM Bug #23826: mds: assert after daemon restart
Checking MDSMap::is_rejoining() is not required here. If there are recovering MDS which haven't entered the rejoin state.... Zheng Yan
12:29 AM Bug #23923 (Pending Backport): mds: stopping rank 0 cannot shutdown until log is trimmed
Patrick Donnelly
12:29 AM Bug #23919 (Pending Backport): mds: stuck during up:stopping
Patrick Donnelly

04/30/2018

09:05 PM Bug #23448 (Resolved): nfs-ganesha: fails to parse rados URLs with '.' in object name
Yes. Jeff Layton
08:51 PM Bug #23448: nfs-ganesha: fails to parse rados URLs with '.' in object name
Is this resolved? Patrick Donnelly
08:00 PM Backport #23946 (Resolved): luminous: mds: crash when failover
https://github.com/ceph/ceph/pull/21900 Nathan Cutler
07:21 PM Bug #23826: mds: assert after daemon restart
Here's one possible way this could happen I think:
1. All MDS are rejoin or later.
2. A up:rejoin MDS does:
3....
Patrick Donnelly
07:00 PM Bug #23826: mds: assert after daemon restart
Adding log from failed MDS.
Looks like it's receiving handle_cache_rejoin_ack message while in replay.
Patrick Donnelly
06:53 PM Bug #23518 (Pending Backport): mds: crash when failover
Patrick Donnelly
01:43 PM Bug #23883: kclient: CephFS kernel client hang
v4.9 is quite old at this point, so it would be helpful to know if this is something that has already been fixed in m... Jeff Layton
06:54 AM Backport #23932 (In Progress): jewel: client: avoid second lock on client_lock
Jos Collin
04:38 AM Backport #23792 (In Progress): luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap no...
https://github.com/ceph/ceph/pull/21732 Patrick Donnelly
03:59 AM Backport #23933 (In Progress): luminous: client: avoid second lock on client_lock
Jos Collin

04/29/2018

08:31 PM Backport #23936 (Resolved): luminous: cephfs-journal-tool: segfault during journal reset
https://github.com/ceph/ceph/pull/21874 Nathan Cutler
08:30 PM Backport #23935 (Resolved): luminous: mds: may send LOCK_SYNC_MIX message to starting MDS
https://github.com/ceph/ceph/pull/21990 Nathan Cutler
08:30 PM Backport #23934 (Resolved): luminous: client: "remove_session_caps still has dirty|flushing caps"...
https://github.com/ceph/ceph/pull/21844 Nathan Cutler
08:30 PM Backport #23933 (Resolved): luminous: client: avoid second lock on client_lock
https://github.com/ceph/ceph/pull/21730 Nathan Cutler
08:30 PM Backport #23932 (Resolved): jewel: client: avoid second lock on client_lock
https://github.com/ceph/ceph/pull/21734 Nathan Cutler
08:30 PM Backport #23931 (Resolved): luminous: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < ...
https://github.com/ceph/ceph/pull/21841 Nathan Cutler
08:30 PM Backport #23930 (Resolved): luminous: mds: scrub code stuck at trimming log segments
https://github.com/ceph/ceph/pull/21840 Nathan Cutler
08:07 PM Bug #23815 (Pending Backport): client: avoid second lock on client_lock
Patrick Donnelly
08:06 PM Bug #23813 (Pending Backport): client: "remove_session_caps still has dirty|flushing caps" when t...
Patrick Donnelly
08:06 PM Bug #23812 (Pending Backport): mds: may send LOCK_SYNC_MIX message to starting MDS
Patrick Donnelly
08:06 PM Bug #20549 (Pending Backport): cephfs-journal-tool: segfault during journal reset
Patrick Donnelly
08:05 PM Bug #23829 (Pending Backport): qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_...
Patrick Donnelly
08:05 PM Bug #23880 (Pending Backport): mds: scrub code stuck at trimming log segments
Patrick Donnelly
01:44 AM Bug #23919 (Fix Under Review): mds: stuck during up:stopping
Zheng Yan wrote:
> I think we should call Locker::_readlock_kick in this case.
https://github.com/ceph/ceph/pull/...
Patrick Donnelly
01:15 AM Bug #23927 (Rejected): qa: test_full failure in test_barrier
https://github.com/ceph/ceph/pull/21668#pullrequestreview-116152567 Patrick Donnelly
12:54 AM Bug #23927: qa: test_full failure in test_barrier
Here too: http://pulpito.ceph.com/pdonnell-2018-04-28_06:20:24-fs-wip-pdonnell-testing-20180428.041811-testing-basic-... Patrick Donnelly
12:49 AM Bug #23927 (Rejected): qa: test_full failure in test_barrier
... Patrick Donnelly
12:36 AM Bug #23923 (Fix Under Review): mds: stopping rank 0 cannot shutdown until log is trimmed
https://github.com/ceph/ceph/pull/21719 Patrick Donnelly

04/28/2018

06:59 PM Bug #23923 (Resolved): mds: stopping rank 0 cannot shutdown until log is trimmed
... Patrick Donnelly
03:53 PM Bug #23883: kclient: CephFS kernel client hang
Hi Wei,
This is a very interesting problem. From your description, I would like to share my thoughts:
this shoul...
dongdong tao
10:10 AM Bug #23883: kclient: CephFS kernel client hang
client kernel dmesg:... wei jin
10:09 AM Bug #23883: kclient: CephFS kernel client hang
... wei jin
08:02 AM Bug #23883: kclient: CephFS kernel client hang
debug_mds = 10, only for the period when the mds is recovering Zheng Yan
07:53 AM Bug #23883: kclient: CephFS kernel client hang
Zheng Yan wrote:
> please upload mds log
which level?
after setting debug_mds = 20 and debug_ms = 1, log file is...
wei jin
05:03 AM Bug #23883: kclient: CephFS kernel client hang
please upload mds log Zheng Yan
10:34 AM Bug #22428 (Fix Under Review): mds: don't report slow request for blocked filelock request
https://github.com/ceph/ceph/pull/21715 Zheng Yan
07:50 AM Bug #23919: mds: stuck during up:stopping
I think we should call Locker::_readlock_kick in this case. Zheng Yan
04:02 AM Bug #23919: mds: stuck during up:stopping
/ceph/tmp/pdonnell/bz1566016/0x20000205a64.log.gz
holds the output of
zgrep -C5 0x20000205a64 ceph-mds.magna05...
Patrick Donnelly
03:52 AM Bug #23919: mds: stuck during up:stopping
crux of the issue appears to be here:... Patrick Donnelly
06:34 AM Bug #23920: Multiple ceph-fuse and one ceph-client.admin.log
I am using the method you described to modify it. After that, I found three questions:
1. When I run ceph-fuse, there will be...
yuanli zhu
06:04 AM Bug #23920: Multiple ceph-fuse and one ceph-client.admin.log
Because I have two ceph-fuse processes, how can I set the config using the command below for each ceph-fuse:
ceph daemon clien...
yuanli zhu
05:00 AM Bug #23920 (Rejected): Multiple ceph-fuse and one ceph-client.admin.log
Config issue. You should set the log file config option like
log file = /var/log/ceph/ceph-client.$pid.log
Zheng Yan
02:28 AM Bug #23920 (Rejected): Multiple ceph-fuse and one ceph-client.admin.log
I use the command as below:
/usr/bin/ceph-fuse -c /etc/ceph/ceph.conf /nas/test1 -r /test1
/usr/bin/ceph-fu...
yuanli zhu
04:54 AM Bug #23894 (Fix Under Review): ceph-fuse: missing dentries in readdir result
https://github.com/ceph/ceph/pull/21712 Zheng Yan
01:37 AM Bug #23894: ceph-fuse: missing dentries in readdir result
libcephfs does not handle the session stale message properly.
Steps to reproduce:
1. Create two ceph-fuse mounts, mo...
Zheng Yan

04/27/2018

10:27 PM Bug #23919 (Resolved): mds: stuck during up:stopping
... Patrick Donnelly
10:16 PM Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
I think they are separate issues but I will take a look. Patrick Donnelly
07:42 PM Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
@Patrick - this one looks like it could benefit from being done in a single PR along with http://tracker.ceph.com/iss... Nathan Cutler
05:34 PM Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
Travis Nielsen wrote:
> What is the timeline for the backport? Rook would like to see it in 12.2.6. Thanks!
It sh...
Patrick Donnelly
05:16 PM Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
What is the timeline for the backport? Rook would like to see it in 12.2.6. Thanks! Travis Nielsen
07:40 PM Backport #23792: luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
@Patrick could you take this one? Nathan Cutler
05:05 PM Bug #23658: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
When this issue hits, is there a way to recover? For example, to forcefully remove the multiple filesystems that are ... Travis Nielsen
11:58 AM Bug #23873: cephfs does not count st_nlink for directories correctly?
Peter Mauritius wrote:
> The Dovecot mail server does not work properly, if mailbox files are stored on cephfs and a...
Jeff Layton
10:18 AM Bug #23883: kclient: CephFS kernel client hang
Zheng Yan wrote:
> besides, 4.4/4.9 kernel is too old for using multimds.
It is very difficult to upgrade kernel ...
wei jin
03:55 AM Documentation #23897 (In Progress): doc: create snapshot user doc
Include suggested upgrade procedure: https://github.com/ceph/ceph/pull/21374/commits/e05ebd08ea895626f4a2a52805f17e61... Patrick Donnelly
12:50 AM Bug #23894 (Resolved): ceph-fuse: missing dentries in readdir result
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-April/026224.html Zheng Yan

04/26/2018

11:54 PM Bug #23883: kclient: CephFS kernel client hang
besides, 4.4/4.9 kernel is too old for using multimds. Zheng Yan
11:47 PM Bug #23883: kclient: CephFS kernel client hang
need mds log to check what happened Zheng Yan
08:09 PM Bug #23883: kclient: CephFS kernel client hang

Patrick Donnelly
10:19 AM Bug #23883 (New): kclient: CephFS kernel client hang
ceph: 12.2.4/12.2.5
os: debian jessie
kernel: 4.9/4.4
After restarting all mds (6 in total, 5 active, 1 standby), cl...
wei jin
10:23 PM Backport #23638 (In Progress): luminous: ceph-fuse: getgroups failure causes exception
Patrick Donnelly
08:01 PM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
Jos Collin wrote:
> The hang doesn't exist in the latest code.
>
> The following is my latest finding:
>
> [.....
Patrick Donnelly
10:07 AM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
The hang doesn't exist in the latest code.
The following is my latest finding:...
Jos Collin
05:20 PM Bug #23873: cephfs does not count st_nlink for directories correctly?
The Dovecot mail server does not work properly, if mailbox files are stored on cephfs and a mailbox prefix is configu... Peter Mauritius
04:39 AM Bug #23873: cephfs does not count st_nlink for directories correctly?
Zheng Yan wrote:
> If I remember right, this is not required by POSIX (btrfs does not do this). how NFS behaves depe...
Patrick Donnelly
02:34 AM Bug #23873: cephfs does not count st_nlink for directories correctly?
If I remember right, this is not required by POSIX (btrfs does not do this). how NFS behaves depends on the exported ... Zheng Yan
11:10 AM Bug #23885 (Resolved): MDSMonitor: overzealous MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health warni...

This is what we currently get when starting with vstart, which creates MDS daemons before creating the filesystem:
...
John Spray
10:35 AM Bug #23855 (Fix Under Review): mds: MClientCaps should carry inode's dirstat
https://github.com/ceph/ceph/pull/21668 Zheng Yan
09:49 AM Bug #23880 (Fix Under Review): mds: scrub code stuck at trimming log segments
https://github.com/ceph/ceph/pull/21664 Zheng Yan
07:49 AM Bug #23880 (Resolved): mds: scrub code stuck at trimming log segments
/a/pdonnell-2018-04-25_18:15:51-kcephfs-wip-pdonnell-testing-20180425.144904-testing-basic-smithi/2439034 Zheng Yan
01:26 AM Feature #17854: mds: only evict an unresponsive client when another client wants its caps
Rishabh Dave wrote:
> I am planning to start working on this feature. How can I get a client to be unresponsive with...
Zheng Yan
12:49 AM Bug #23332: kclient: with fstab entry is not coming up reboot
kexec in dmesgs looks suspicious. client mounted cephfs, then used kexec to load kernel image again. All issues happe... Zheng Yan

04/25/2018

09:08 PM Feature #17854 (In Progress): mds: only evict an unresponsive client when another client wants it...
Patrick Donnelly
07:39 PM Feature #17854: mds: only evict an unresponsive client when another client wants its caps
I am planning to start working on this feature. How can I get a client to be unresponsive without evicting it? Rishabh Dave
08:24 PM Bug #23873 (Fix Under Review): cephfs does not count st_nlink for directories correctly?
https://github.com/ceph/ceph/pull/21652 Patrick Donnelly
07:42 PM Bug #23873 (Resolved): cephfs does not count st_nlink for directories correctly?
Not sure if this behavior is by intention, but if you create a empty directory on cephfs and call stat on the directo... Danny Al-Gaaf
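For context, a minimal sketch of the check described above, assuming a placeholder path on a CephFS mount: create an empty directory and read its st_nlink with stat(2). Traditional local filesystems report 2 for an empty directory ("." plus its entry in the parent), which is the expectation the report compares against.

    #include <sys/stat.h>
    #include <sys/types.h>
    #include <cstdio>

    int main() {
        // Placeholder path on a CephFS mount; adjust to your environment.
        const char *path = "/mnt/cephfs/empty_dir";
        if (mkdir(path, 0755) != 0) {
            std::perror("mkdir");
            return 1;
        }
        struct stat st;
        if (stat(path, &st) != 0) {
            std::perror("stat");
            return 1;
        }
        // An empty directory classically has st_nlink == 2; the report asks
        // whether CephFS should match that.
        std::printf("st_nlink = %ld\n", static_cast<long>(st.st_nlink));
        return 0;
    }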
06:09 PM Bug #23332: kclient: with fstab entry is not coming up reboot
Luis Henriques wrote:
> Actually, the first failure seems to be a bit before:
> [...]
> The client seems to be try...
Shreekara Shastry
04:58 PM Bug #23848 (Rejected): mds: stuck shutdown procedure
Patrick Donnelly
04:06 AM Bug #23848: mds: stuck shutdown procedure
... Patrick Donnelly
04:00 AM Bug #23848 (Rejected): mds: stuck shutdown procedure
The following outputs in an infinite loop:... Patrick Donnelly
01:10 PM Bug #23855 (Resolved): mds: MClientCaps should carry inode's dirstat
The inode's dirstat gets updated by request replies, but not by cap messages. This is problematic.
For example:
...
MDS...
Zheng Yan
08:33 AM Bug #22428: mds: don't report slow request for blocked filelock request
In case you need more examples, we're seeing this recently on 12.2.4:... Dan van der Ster
02:55 AM Bug #16842: mds: replacement MDS crashes on InoTable release
Maybe we should mark this as "need more info" or "can't reproduce". Zheng Yan
02:08 AM Backport #23698: luminous: mds: load balancer fixes
https://github.com/ceph/ceph/pull/21412 Zheng Yan

04/24/2018

07:50 PM Bug #23829 (Fix Under Review): qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_...
Zheng Yan wrote:
> It's test case issue. The test caused so much trouble. I'd like to drop/disable it
Agreed.
...
Patrick Donnelly
12:24 PM Bug #23829: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_ops * 1.25)
It's a test case issue. The test has caused so much trouble; I'd like to drop/disable it. Zheng Yan
07:37 PM Bug #23837 (Fix Under Review): client: deleted inode's Bufferhead which was in STATE::Tx would le...
Patrick Donnelly
10:44 AM Bug #23837: client: deleted inode's Bufferhead which was in STATE::Tx would lead a assert fail
fixed by: https://github.com/ceph/ceph/pull/21615 Ivan Guan
09:45 AM Bug #23837 (Resolved): client: deleted inode's Bufferhead which was in STATE::Tx would lead a ass...
... Ivan Guan
07:07 PM Backport #23671 (In Progress): luminous: mds: MDBalancer using total (all time) request count in ...
https://github.com/ceph/ceph/pull/21412/commits/1a5b7eaac572f1810d0453b053781e6bc8185dd2 Patrick Donnelly
06:55 PM Tasks #23844 (In Progress): client: break client_lock
See past efforts on this. Matt Benjamin did some prototyping on Firefly. Those patches will likely be unusable but co... Patrick Donnelly
11:19 AM Backport #23835 (In Progress): luminous: mds: fix occasional dir rstat inconsistency between mult...
https://github.com/ceph/ceph/pull/21617 Prashant D
05:48 AM Backport #23835 (Resolved): luminous: mds: fix occasional dir rstat inconsistency between multi-M...
https://github.com/ceph/ceph/pull/21617 Nathan Cutler
11:10 AM Backport #23308 (In Progress): luminous: doc: Fix -d option in ceph-fuse doc
Jos Collin
08:24 AM Bug #20549 (Fix Under Review): cephfs-journal-tool: segfault during journal reset
https://github.com/ceph/ceph/pull/21610 Zheng Yan
07:09 AM Feature #23362: mds: add drop_cache command
https://github.com/ceph/ceph/pull/21566 Rishabh Dave
05:47 AM Backport #23834 (Rejected): jewel: MDSMonitor: crash after assigning standby-replay daemon in mul...
Nathan Cutler
05:47 AM Backport #23833 (Resolved): luminous: MDSMonitor: crash after assigning standby-replay daemon in ...
https://github.com/ceph/ceph/pull/22603 Nathan Cutler
04:42 AM Bug #23567 (Resolved): MDSMonitor: successive changes to max_mds can allow hole in ranks
Patrick Donnelly
04:35 AM Bug #23538 (Pending Backport): mds: fix occasional dir rstat inconsistency between multi-MDSes
Patrick Donnelly
04:34 AM Bug #23658 (Pending Backport): MDSMonitor: crash after assigning standby-replay daemon in multifs...
Patrick Donnelly
04:33 AM Bug #23799 (Resolved): MDSMonitor: creates invalid transition from up:creating to up:shutdown
Patrick Donnelly
04:32 AM Bug #23800 (Resolved): MDSMonitor: setting fs down twice will wipe old_max_mds
Patrick Donnelly

04/23/2018

08:26 PM Bug #23829 (Resolved): qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_ops * 1.25)
... Patrick Donnelly
07:51 PM Bug #20549: cephfs-journal-tool: segfault during journal reset
Another: http://pulpito.ceph.com/pdonnell-2018-04-23_17:22:02-kcephfs-wip-pdonnell-testing-20180423.033341-testing-ba... Patrick Donnelly
05:51 PM Bug #23814 (Rejected): mds: newly active mds aborts may abort in handle_file_lock
Patrick Donnelly
08:40 AM Bug #23814: mds: newly active mds aborts may abort in handle_file_lock
I think this is related to #23812. The patch for #23812 makes mds skip sending lock message to 'starting' mds. The sk... Zheng Yan
05:50 PM Bug #23812: mds: may send LOCK_SYNC_MIX message to starting MDS
https://github.com/ceph/ceph/pull/21601 Patrick Donnelly
05:10 PM Backport #22860 (Resolved): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercyc...
Looks like a different assertion so perhaps a new bug. I'll create a separate issue for this. Patrick Donnelly
03:49 PM Backport #22860 (In Progress): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in power...
... Sage Weil
03:36 PM Backport #23151 (In Progress): luminous: doc: update ceph-fuse with FUSE options
Jos Collin
01:54 PM Bug #23826 (Duplicate): mds: assert after daemon restart
... Patrick Donnelly
01:26 PM Backport #23475 (In Progress): luminous: ceph-fuse: trim ceph-fuse -V output
Jos Collin
11:54 AM Backport #23771 (In Progress): luminous: client: fix gid_count check in UserPerm->deep_copy_from()
Jos Collin
11:50 AM Backport #23771: luminous: client: fix gid_count check in UserPerm->deep_copy_from()
https://github.com/ceph/ceph/pull/21596 Jos Collin
10:45 AM Bug #23813 (Fix Under Review): client: "remove_session_caps still has dirty|flushing caps" when t...
https://github.com/ceph/ceph/pull/21593 Zheng Yan
08:53 AM Bug #23518 (Fix Under Review): mds: crash when failover
https://github.com/ceph/ceph/pull/21592 Zheng Yan
07:49 AM Bug #23815: client: avoid second lock on client_lock
supriti singh wrote:
> supriti singh wrote:
> > In function ll_get_stripe_osd client_lock is taken. But its acquire...
supriti singh
03:52 AM Backport #23818 (In Progress): luminous: client: add option descriptions and review levels (e.g. ...
https://github.com/ceph/ceph/pull/21589 Prashant D

04/21/2018

09:42 PM Backport #23818 (Resolved): luminous: client: add option descriptions and review levels (e.g. LEV...
https://github.com/ceph/ceph/pull/21589 Nathan Cutler
07:52 AM Bug #23815 (Fix Under Review): client: avoid second lock on client_lock
Jos Collin
07:43 AM Bug #23815: client: avoid second lock on client_lock
supriti singh wrote:
> In function ll_get_stripe_osd client_lock is taken. But its acquired again in ll_get_inodeno(...
supriti singh
07:35 AM Bug #23815 (Resolved): client: avoid second lock on client_lock
In the function ll_get_stripe_osd, client_lock is taken. But it's acquired again in ll_get_inodeno(). Avoid double locking.... supriti singh
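For illustration only (this is not the actual Client.cc code; the names below are simplified stand-ins), a sketch of the double-locking hazard described above using std::mutex: a caller that already holds the lock calls a public entry point that locks the same non-recursive mutex again, which deadlocks or is undefined behavior; the usual fix is an internal helper that assumes the lock is held.

    #include <mutex>
    #include <cstdint>

    std::mutex client_lock;                 // stand-in for Client::client_lock
    uint64_t inode_number = 1099511627776;  // arbitrary example value

    // Internal helper that assumes client_lock is already held.
    static uint64_t get_inodeno_unlocked() {
        return inode_number;
    }

    // Public API: takes the lock itself.
    uint64_t ll_get_inodeno() {
        std::lock_guard<std::mutex> lock(client_lock);
        return get_inodeno_unlocked();
    }

    // Buggy pattern: holds client_lock, then calls the public API,
    // which tries to lock the same non-recursive mutex again.
    uint64_t ll_get_stripe_osd_buggy() {
        std::lock_guard<std::mutex> lock(client_lock);
        return ll_get_inodeno();            // deadlock / undefined behavior
    }

    // Fixed pattern: call the unlocked helper while holding the lock.
    uint64_t ll_get_stripe_osd_fixed() {
        std::lock_guard<std::mutex> lock(client_lock);
        return get_inodeno_unlocked();
    }

    int main() {
        // Only the fixed path is exercised; calling the buggy one would hang.
        return ll_get_stripe_osd_fixed() == inode_number ? 0 : 1;
    }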
05:32 AM Bug #23814 (Rejected): mds: newly active mds aborts may abort in handle_file_lock
... Patrick Donnelly
05:03 AM Bug #23813 (Resolved): client: "remove_session_caps still has dirty|flushing caps" when thrashing...
While doing a simple copy of /usr with ceph-fuse and thrashing max_mds between 1 and 2, I got these errors from ceph-... Patrick Donnelly
12:30 AM Bug #23812 (Fix Under Review): mds: may send LOCK_SYNC_MIX message to starting MDS
-https://github.com/ceph/ceph/pull/21577- Patrick Donnelly
12:28 AM Bug #23812 (Resolved): mds: may send LOCK_SYNC_MIX message to starting MDS
From mds.0:... Patrick Donnelly

04/20/2018

04:36 PM Feature #14456: mon: prevent older/incompatible clients from mounting the file system
Pre-mimic clients, yes. Patrick Donnelly
06:58 AM Feature #14456: mon: prevent older/incompatible clients from mounting the file system
Prevent pre-luminous clients from connecting? Zheng Yan
04:33 PM Bug #21848 (Fix Under Review): client: re-expand admin_socket metavariables in child process
Patrick Donnelly
03:52 AM Bug #21848: client: re-expand admin_socket metavariables in child process
https://github.com/ceph/ceph/pull/21544
Patrick, could you please take a look at this new fix? Now it is not only for...
Zhi Zhang
11:41 AM Bug #23518 (In Progress): mds: crash when failover
Zheng Yan
08:31 AM Bug #23518: mds: crash when failover
This one is related to http://tracker.ceph.com/issues/23503. #23503 can explain why session was evicted Zheng Yan
07:29 AM Bug #23327: qa: pjd test sees wrong ctime after unlink
should close this if it does not happen again Zheng Yan
05:52 AM Documentation #23583 (In Progress): doc: update snapshot doc to account for recent changes
by commit "mds: update dev document of cephfs snapshot" in RP https://github.com/ceph/ceph/pull/21374 Zheng Yan
02:31 AM Backport #23802 (In Progress): luminous: slow ceph_ll_sync_inode calls after setattr
https://github.com/ceph/ceph/pull/21542 Prashant D

04/19/2018

11:05 PM Bug #23755 (Resolved): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestStrays)
Patrick Donnelly
10:12 PM Fix #4708 (Rejected): MDS: journaler pre-zeroing is dangerous
Thanks for explaining Zheng. Closing this. Patrick Donnelly
01:44 PM Fix #4708 (Need More Info): MDS: journaler pre-zeroing is dangerous
I don't think it's still a problem. A new mds takes over a rank after it sees the old mds is blacklisted in the osdmap. There is...
10:03 PM Backport #23790: luminous: mds: crash during shutdown_pass
Please just remove the global_snaprealm part of the backport. Patrick Donnelly
10:50 AM Backport #23790 (Need More Info): luminous: mds: crash during shutdown_pass
To backport this PR, we need complete PR#16779 (https://github.com/ceph/ceph/pull/16779) having changes related to mu... Prashant D
05:25 AM Backport #23790 (Resolved): luminous: mds: crash during shutdown_pass
https://github.com/ceph/ceph/pull/23015 Nathan Cutler
10:00 PM Bug #22933 (Pending Backport): client: add option descriptions and review levels (e.g. LEVEL_DEV)
Patrick Donnelly
08:06 PM Backport #23802 (Resolved): luminous: slow ceph_ll_sync_inode calls after setattr
https://github.com/ceph/ceph/pull/21542 Nathan Cutler
06:59 PM Bug #23800 (Fix Under Review): MDSMonitor: setting fs down twice will wipe old_max_mds
https://github.com/ceph/ceph/pull/21536 Patrick Donnelly
06:42 PM Bug #23800 (Resolved): MDSMonitor: setting fs down twice will wipe old_max_mds
Patrick Donnelly
06:49 PM Bug #23799 (Fix Under Review): MDSMonitor: creates invalid transition from up:creating to up:shut...
https://github.com/ceph/ceph/pull/21535 Patrick Donnelly
06:36 PM Bug #23799 (Resolved): MDSMonitor: creates invalid transition from up:creating to up:shutdown
... Patrick Donnelly
06:11 PM Bug #23714 (Pending Backport): slow ceph_ll_sync_inode calls after setattr
Patrick Donnelly
03:31 PM Bug #23797 (Can't reproduce): qa: cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
This is v12.2.5 QE validation
Run: http://pulpito.ceph.com/yuriw-2018-04-17_21:20:41-knfs-luminous-testing-basic-s...
Yuri Weinstein
09:52 AM Bug #23332: kclient: with fstab entry is not coming up reboot
Actually, the first failure seems to be a bit before:... Luis Henriques
08:12 AM Backport #23791 (In Progress): luminous: MDSMonitor: new file systems are not initialized with th...
https://github.com/ceph/ceph/pull/21512 Prashant D
05:25 AM Backport #23791 (Resolved): luminous: MDSMonitor: new file systems are not initialized with the p...
https://github.com/ceph/ceph/pull/21512 Nathan Cutler
05:25 AM Backport #23792 (Resolved): luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not p...
https://github.com/ceph/ceph/pull/21732 Nathan Cutler
03:24 AM Bug #23658 (Fix Under Review): MDSMonitor: crash after assigning standby-replay daemon in multifs...
Zheng Yan
02:48 AM Bug #23658: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
https://github.com/ceph/ceph/pull/21510 Zheng Yan

04/18/2018

09:43 PM Feature #20606 (Resolved): mds: improve usability of cluster rank manipulation and setting cluste...
Patrick Donnelly
09:42 PM Subtask #20864 (Resolved): kill allow_multimds
Patrick Donnelly
09:42 PM Feature #20610 (Resolved): MDSMonitor: add new command to shrink the cluster in an automated way
Patrick Donnelly
09:41 PM Feature #20608 (Resolved): MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs se...
Patrick Donnelly
09:41 PM Feature #20609 (Resolved): MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the ...
Patrick Donnelly
09:40 PM Bug #23764 (Pending Backport): MDSMonitor: new file systems are not initialized with the pending_...
Patrick Donnelly
09:39 PM Bug #23766 (Pending Backport): mds: crash during shutdown_pass
Patrick Donnelly
09:38 PM Bug #23762 (Pending Backport): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_...
Patrick Donnelly
06:36 PM Feature #3244 (New): qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha ...
Jeff, fixed the wording to be clear. Patrick Donnelly
06:14 PM Feature #3244 (Rejected): qa: integrate Ganesha into teuthology testing to regularly exercise Gan...
I'm going to suggest that we just close this bug. We're doing this as a matter of course with the current work to cle... Jeff Layton
05:43 PM Bug #23421 (Need More Info): ceph-fuse: stop ceph-fuse if no root permissions?
Jos, please get the client logs so we can diagnose. Patrick Donnelly
01:01 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
Thanks, dongdong! That seems to resolve the problem. Pull request is up here:
https://github.com/ceph/ceph/pull/21...
Jeff Layton
11:53 AM Backport #23770 (In Progress): luminous: ceph-fuse: return proper exit code
https://github.com/ceph/ceph/pull/21495 Prashant D
08:49 AM Documentation #23775: PendingReleaseNotes: add notes for major Mimic features
FYI: https://github.com/ceph/ceph/pull/21374 already includes the mds upgrade process Zheng Yan

04/17/2018

11:10 PM Documentation #23775 (Resolved): PendingReleaseNotes: add notes for major Mimic features
mds upgrade process, snapshots, kernel quotas, etc. Patrick Donnelly
11:08 PM Bug #23421 (Closed): ceph-fuse: stop ceph-fuse if no root permissions?
Patrick Donnelly
08:31 AM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
Patrick Donnelly wrote:
> Jos, can you get more detailed debug logs when this happens? It is probably not related to...
Jos Collin
07:00 PM Backport #23771 (Resolved): luminous: client: fix gid_count check in UserPerm->deep_copy_from()
https://github.com/ceph/ceph/pull/21596 Nathan Cutler
07:00 PM Backport #23770 (Resolved): luminous: ceph-fuse: return proper exit code
https://github.com/ceph/ceph/pull/21495 Nathan Cutler
04:29 PM Bug #23755 (Fix Under Review): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestS...
Patrick Donnelly
04:29 PM Bug #23755 (Pending Backport): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestS...
Patrick Donnelly
01:11 PM Bug #23755 (Fix Under Review): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestS...
https://github.com/ceph/ceph/pull/21472 Zheng Yan
04:16 PM Bug #23768 (Resolved): MDSMonitor: uncommitted state exposed to clients/mdss
e.g.
https://github.com/ceph/ceph/pull/21458#discussion_r182041693
and
https://github.com/ceph/ceph/pull/214...
Patrick Donnelly
02:08 PM Backport #23704 (In Progress): luminous: ceph-fuse: broken directory permission checking
https://github.com/ceph/ceph/pull/21475 Prashant D
01:51 PM Bug #23665: ceph-fuse: return proper exit code
backporter note: please include https://github.com/ceph/ceph/pull/21473 Patrick Donnelly
11:59 AM Feature #23623 (Fix Under Review): mds: mark allow_snaps true by default
by one commit in https://github.com/ceph/ceph/pull/21374 Zheng Yan
03:50 AM Bug #23762 (Fix Under Review): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_...
https://github.com/ceph/ceph/pull/21458 Patrick Donnelly
01:50 AM Bug #23762: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
Patrick Donnelly
03:16 AM Bug #23766 (Fix Under Review): mds: crash during shutdown_pass
https://github.com/ceph/ceph/pull/21457 Patrick Donnelly
03:13 AM Bug #23766 (Resolved): mds: crash during shutdown_pass
... Patrick Donnelly
01:49 AM Bug #23764 (Fix Under Review): MDSMonitor: new file systems are not initialized with the pending_...
https://github.com/ceph/ceph/pull/21456 Patrick Donnelly
01:41 AM Bug #23764 (In Progress): MDSMonitor: new file systems are not initialized with the pending_fsmap...
Patrick Donnelly
01:41 AM Bug #23764 (Resolved): MDSMonitor: new file systems are not initialized with the pending_fsmap epoch
Problem here: https://github.com/ceph/ceph/blob/60e8a63fdc21654d7f199a67f3f410a9e33c8134/src/mds/FSMap.cc#L234
FSM...
Patrick Donnelly

04/16/2018

08:10 PM Bug #23762 (In Progress): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
Patrick Donnelly
08:07 PM Bug #23762 (Resolved): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
https://github.com/ceph/ceph/blob/60e8a63fdc21654d7f199a67f3f410a9e33c8134/src/mon/MDSMonitor.cc#L162-L166 Patrick Donnelly
02:13 PM Backport #23750 (In Progress): luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
https://github.com/ceph/ceph/pull/21448 Prashant D
02:09 PM Backport #23703 (In Progress): luminous: MDSMonitor: mds health warnings printed in bad format
https://github.com/ceph/ceph/pull/21447 Prashant D
12:20 PM Backport #23703: luminous: MDSMonitor: mds health warnings printed in bad format
I'm on it. Prashant D
01:50 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
I tried to roll a standalone testcase for this, but it didn't stall out in the same way. I'm not quite sure what caus... Jeff Layton
11:36 AM Feature #21156: mds: speed up recovery with many open inodes
Patrick Donnelly wrote:
> Very unlikely because of the new structure in the metadata pool adds unacceptable risk for...
Webert Lima
10:44 AM Backport #23702 (In Progress): luminous: mds: sessions opened by journal replay do not get dirtie...
https://github.com/ceph/ceph/pull/21441 Prashant D
03:29 AM Bug #23652 (Pending Backport): client: fix gid_count check in UserPerm->deep_copy_from()
Patrick Donnelly
03:29 AM Bug #23665 (Pending Backport): ceph-fuse: return proper exit code
Patrick Donnelly
03:25 AM Bug #23755 (Resolved): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestStrays)
... Patrick Donnelly
03:16 AM Bug #22933 (Fix Under Review): client: add option descriptions and review levels (e.g. LEVEL_DEV)
https://github.com/ceph/ceph/pull/21434 Patrick Donnelly

04/15/2018

06:30 PM Bug #23751: mon: use fs-client profile for fs authorize mon caps
Actually I think this gives blanket permission to read from OSDs for all pools so we may actually want to remove this... Patrick Donnelly
06:27 PM Bug #23751 (New): mon: use fs-client profile for fs authorize mon caps
This is simpler and consistent. Patrick Donnelly
05:40 PM Backport #23750 (Resolved): luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
https://github.com/ceph/ceph/pull/21448 Nathan Cutler
05:39 PM Bug #23724 (Resolved): qa: broad snapshot functionality testing across clients
Ganesha FSAL
ceph-fuse
kclient
Patrick Donnelly
02:02 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
I think Patrick is right; maybe we should call flush_mdlog_sync to make the mds do the mdlog flush before we wait on the ... dongdong tao
03:35 AM Bug #23714: slow ceph_ll_sync_inode calls after setattr
Sounds like Ganesha is blocked on a journal flush by the MDS. Patrick Donnelly
02:50 AM Bug #23723 (New): qa: incorporate smallfile workload
Add smallfile workload
https://github.com/distributed-system-analysis/smallfile
to fs:workloads suite.
Patrick Donnelly

04/14/2018

03:30 PM Bug #23715: "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-distro-basic-ovh
Patrick, any requirements against running k* suites on ovh? Yuri Weinstein
07:10 AM Bug #23715: "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-distro-basic-ovh
Yuri, I have not seen this on smithi, so I assume it only happens in virtual environments. Nathan Cutler
12:27 AM Cleanup #23718 (Resolved): qa: merge fs/kcephfs suites
and remove redundant tests (e.g. inline on/off with administrative tests like changing max_mds). Patrick Donnelly
12:15 AM Cleanup #23717 (New): cephfs: consider renaming max_mds to a better name
It is no longer considered a "max" and having fewer ranks than max_mds is considered a bad configuration which genera... Patrick Donnelly
12:12 AM Bug #23567 (Fix Under Review): MDSMonitor: successive changes to max_mds can allow hole in ranks
https://github.com/ceph/ceph/pull/16608
QA and fix here.
Patrick Donnelly

04/13/2018

09:53 PM Bug #23715: "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-distro-basic-ovh
Nathan, I am not sure if you have seen this.
Suspect also in http://pulpito.ceph.com/teuthology-2018-04-11_04:15:02-...
Yuri Weinstein
09:52 PM Bug #23715 (Closed): "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-dist...
Run: http://pulpito.ceph.com/teuthology-2018-04-11_04:10:03-fs-jewel-distro-basic-ovh/
Jobs: 40 jobs
Logs: teutholo...
Yuri Weinstein
09:40 PM Feature #21156: mds: speed up recovery with many open inodes
Webert Lima wrote:
> Hi, thank you very much for this.
>
> I see this
> > Target version: Ceph - v13.0.0
>
> ...
Patrick Donnelly
08:33 PM Feature #21156: mds: speed up recovery with many open inodes
Hi, thank you very much for this.
I see this
> Target version: Ceph - v13.0.0
So I'm not even asking for a bac...
Webert Lima
09:32 PM Backport #23698 (In Progress): luminous: mds: load balancer fixes
Nathan Cutler
09:07 AM Backport #23698: luminous: mds: load balancer fixes
https://github.com/ceph/ceph/pull/21412 Zheng Yan
08:54 AM Backport #23698 (New): luminous: mds: load balancer fixes
backport https://github.com/ceph/ceph/pull/19220 to luminous Nathan Cutler
08:41 AM Backport #23698 (Pending Backport): luminous: mds: load balancer fixes
Nathan Cutler
02:26 AM Backport #23698 (Resolved): luminous: mds: load balancer fixes
https://github.com/ceph/ceph/pull/21412 Zheng Yan
06:49 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
I'll see if I can cook up libcephfs standalone testcase for this. Jeff Layton
06:48 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
We recently added some calls to ceph_ll_sync_inode in ganesha, to be done after a setattr request. Testing with cthon... Jeff Layton
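For orientation, a rough sketch of the call pattern being discussed (a setattr on an inode followed by ceph_ll_sync_inode), written against libcephfs as recalled from roughly this era; the exact signatures and constants may differ between releases, so treat this as an untested illustration rather than the Ganesha code itself.

    #include <cephfs/libcephfs.h>
    #include <cstdio>
    #include <ctime>

    int main() {
        struct ceph_mount_info *cmount = nullptr;
        struct Inode *root = nullptr;
        struct ceph_statx stx = {};
        int r;

        if (ceph_create(&cmount, nullptr) != 0)
            return 1;
        ceph_conf_read_file(cmount, nullptr);   // default ceph.conf locations
        if (ceph_mount(cmount, "/") != 0)
            return 1;

        UserPerm *perms = ceph_userperm_new(0, 0, 0, nullptr);
        ceph_ll_lookup_root(cmount, &root);

        // setattr (here: bump mtime on the root inode) ...
        stx.stx_mtime.tv_sec = std::time(nullptr);
        r = ceph_ll_setattr(cmount, root, &stx, CEPH_SETATTR_MTIME, perms);
        // ... followed by the sync that was observed to be slow.
        if (r == 0)
            r = ceph_ll_sync_inode(cmount, root, 0);

        std::printf("setattr + sync_inode -> %d\n", r);
        ceph_userperm_destroy(perms);
        ceph_unmount(cmount);
        ceph_release(cmount);
        return r == 0 ? 0 : 1;
    }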
06:40 PM Bug #23714 (Resolved): slow ceph_ll_sync_inode calls after setattr
Jeff Layton
04:55 PM Bug #23697 (Pending Backport): mds: load balancer fixes
Patrick Donnelly
08:56 AM Bug #23697: mds: load balancer fixes
Sorry for the confused edits. The "backport-create-issue" script is much better at this than I am. It's enough to cha... Nathan Cutler
08:36 AM Bug #23697 (New): mds: load balancer fixes
Nathan Cutler
02:23 AM Bug #23697 (Resolved): mds: load balancer fixes
https://github.com/ceph/ceph/pull/19220 Zheng Yan
09:48 AM Bug #21848: client: re-expand admin_socket metavariables in child process
Hi Patrick,
Sorry for missing this for a long time. I will take a look soon to see whether there is a better f...
Zhi Zhang
08:35 AM Backport #23705 (Rejected): jewel: ceph-fuse: broken directory permission checking
Nathan Cutler
08:35 AM Backport #23704 (Resolved): luminous: ceph-fuse: broken directory permission checking
https://github.com/ceph/ceph/pull/21475 Nathan Cutler
08:35 AM Backport #23703 (Resolved): luminous: MDSMonitor: mds health warnings printed in bad format
https://github.com/ceph/ceph/pull/21447 Nathan Cutler
08:34 AM Backport #23702 (Resolved): luminous: mds: sessions opened by journal replay do not get dirtied p...
https://github.com/ceph/ceph/pull/21441 Nathan Cutler
02:07 AM Feature #17434: qa: background rsync task for FS workunits
Current work on this by Ramakrishnan: https://github.com/ceph/ceph/pull/12503 Patrick Donnelly
01:26 AM Bug #23509 (Pending Backport): ceph-fuse: broken directory permission checking
Patrick Donnelly
01:25 AM Bug #23582 (Pending Backport): MDSMonitor: mds health warnings printed in bad format
Patrick Donnelly
01:24 AM Bug #23625 (Pending Backport): mds: sessions opened by journal replay do not get dirtied properly
Patrick Donnelly
01:10 AM Feature #22417 (Resolved): support purge queue with cephfs-journal-tool
Zheng Yan

04/12/2018

10:50 PM Feature #20608 (In Progress): MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs...
Doug, thinking about this more, I'd like to keep "cluster_down" (as "joinable" or not) because it simplifies qa testi... Patrick Donnelly
09:40 PM Feature #23695 (Resolved): VolumeClient: allow ceph_volume_client to create 'volumes' without nam...
https://bugzilla.redhat.com/show_bug.cgi?id=1566194
to address the needs of
https://github.com/kubernetes-incub...
Patrick Donnelly
07:39 PM Documentation #23568 (Resolved): doc: outline the steps for upgrading an MDS cluster
Patrick Donnelly
07:39 PM Backport #23634 (Resolved): luminous: doc: outline the steps for upgrading an MDS cluster
Patrick Donnelly
07:27 PM Backport #23634: luminous: doc: outline the steps for upgrading an MDS cluster
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21352
merged
Yuri Weinstein
07:19 PM Bug #23665 (Fix Under Review): ceph-fuse: return proper exit code
https://github.com/ceph/ceph/pull/21396 Patrick Donnelly
12:53 AM Bug #23665 (Resolved): ceph-fuse: return proper exit code
from mailling list... Zheng Yan
05:00 PM Feature #23689 (New): qa: test major/minor version upgrades
We should verify the upgrade process [1] works and that older clients with parallel I/O still function correctly.
...
Patrick Donnelly
10:19 AM Backport #23637 (In Progress): luminous: mds: assertion in MDSRank::validate_sessions
https://github.com/ceph/ceph/pull/21372 Prashant D
04:17 AM Backport #23636 (In Progress): luminous: mds: kicked out by monitor during rejoin
https://github.com/ceph/ceph/pull/21366 Prashant D
01:35 AM Backport #23671 (Resolved): luminous: mds: MDBalancer using total (all time) request count in loa...
https://github.com/ceph/ceph/pull/21412 Nathan Cutler
01:34 AM Backport #23669 (Resolved): luminous: doc: create doc outlining steps to bring down cluster
https://github.com/ceph/ceph/pull/22872 Nathan Cutler

04/11/2018

07:05 PM Feature #20611 (New): MDSMonitor: do not show cluster health warnings for file system intentional...
Doug, I was just thinking about this and a valid reason to not want a HEALTH_ERR is if you have dozens or hundreds of... Patrick Donnelly
01:42 PM Feature #20611 (Fix Under Review): MDSMonitor: do not show cluster health warnings for file syste...
Douglas Fuller
01:42 PM Feature #20611: MDSMonitor: do not show cluster health warnings for file system intentionally mar...
See https://github.com/ceph/ceph/pull/16608, which implements the opposite of this behavior. Whenever a filesystem is... Douglas Fuller
06:54 PM Feature #20607 (Rejected): MDSMonitor: change "mds deactivate" to clearer "mds rejoin"
This is rejected in favor of removing `mds deactivate`. Patrick Donnelly
05:31 PM Documentation #23427 (Pending Backport): doc: create doc outlining steps to bring down cluster
Patrick Donnelly
05:29 PM Bug #23658 (Resolved): MDSMonitor: crash after assigning standby-replay daemon in multifs setup
From: https://github.com/rook/rook/issues/1027... Patrick Donnelly
05:19 PM Bug #23567: MDSMonitor: successive changes to max_mds can allow hole in ranks
Doug, I tested with master but I believe it also happened with your PR. I can't remember. Patrick Donnelly
03:02 PM Bug #23567 (Need More Info): MDSMonitor: successive changes to max_mds can allow hole in ranks
Was this before or after https://github.com/ceph/ceph/pull/16608 ? Douglas Fuller
03:14 PM Bug #23652 (Fix Under Review): client: fix gid_count check in UserPerm->deep_copy_from()
https://github.com/ceph/ceph/pull/21341 Jos Collin
03:10 PM Bug #23652 (Resolved): client: fix gid_count check in UserPerm->deep_copy_from()
Fix gid_count check in UserPerm->deep_copy_from(). Allocate gids only if gid_count > 0. Jos Collin
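As an illustration of the fix described (simplified, with a hypothetical Perm struct rather than the real UserPerm class): copy the supplementary gid list only when gid_count is positive, so a copy from a permission object with no gids neither allocates a zero-length array nor reads a null source pointer.

    #include <cstring>
    #include <sys/types.h>

    // Hypothetical stand-in for the permission structure; not the real UserPerm.
    struct Perm {
        uid_t uid = 0;
        gid_t gid = 0;
        int gid_count = 0;
        gid_t *gids = nullptr;

        void deep_copy_from(const Perm &b) {
            uid = b.uid;
            gid = b.gid;
            delete[] gids;
            gids = nullptr;
            gid_count = b.gid_count;
            // The guard from the fix: allocate and copy only if there are gids.
            if (gid_count > 0) {
                gids = new gid_t[gid_count];
                std::memcpy(gids, b.gids, gid_count * sizeof(gid_t));
            }
        }
        ~Perm() { delete[] gids; }
    };

    int main() {
        Perm a, b;              // b carries no supplementary gids
        a.deep_copy_from(b);    // with the guard, nothing is allocated or read
        return a.gid_count == 0 ? 0 : 1;
    }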
02:34 PM Backport #23635 (In Progress): luminous: client: fix request send_to_auth was never really used
https://github.com/ceph/ceph/pull/21354 Prashant D
01:46 PM Feature #20609 (In Progress): MDSMonitor: add new command `ceph fs set <fs_name> down` to bring t...
https://github.com/ceph/ceph/pull/16608 overhauls this behavior, and re-implements the cluster_down flag for this fun... Douglas Fuller
01:45 PM Feature #20606 (Fix Under Review): mds: improve usability of cluster rank manipulation and settin...
https://github.com/ceph/ceph/pull/16608 Douglas Fuller
01:44 PM Feature #20610 (Fix Under Review): MDSMonitor: add new command to shrink the cluster in an automa...
https://github.com/ceph/ceph/pull/16608 Douglas Fuller
01:43 PM Feature #20608 (Rejected): MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs se...
This behavior is overhauled in https://github.com/ceph/ceph/pull/16608 . Douglas Fuller
01:30 PM Backport #23634 (In Progress): luminous: doc: outline the steps for upgrading an MDS cluster
https://github.com/ceph/ceph/pull/21352 Prashant D
01:30 PM Bug #23393 (Resolved): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
Ramana Raja
01:26 PM Bug #23643 (Resolved): qa: osd_mon_report_interval typo in test_full.py
Sage Weil
10:27 AM Backport #23632 (In Progress): luminous: mds: handle client requests when mds is stopping
https://github.com/ceph/ceph/pull/21346 Prashant D
09:09 AM Backport #23632: luminous: mds: handle client requests when mds is stopping
I am on it. Prashant D
05:14 AM Bug #21745 (Pending Backport): mds: MDBalancer using total (all time) request count in load stati...
Patrick Donnelly
01:00 AM Feature #22372: kclient: implement quota handling using new QuotaRealm
By the following commits in the testing branch:
ceph: quota: report root dir quota usage in statfs …
ceph: quota: add cou...
Zheng Yan
12:47 AM Bug #18730 (Closed): mds: backtrace issues getxattr for every file with cap on rejoin
Should be resolved by the open file table: https://github.com/ceph/ceph/pull/20132 Zheng Yan
12:44 AM Fix #5268 (Closed): mds: fix/clean up file size/mtime recovery code
current code does parallel object checks. Zheng Yan
12:39 AM Bug #4212 (Closed): mds: open_snap_parents isn't called all the times it needs to be
With the new snaprealm format, there is no need to open past parents. Zheng Yan
12:37 AM Bug #21412 (Closed): cephfs: too many cephfs snapshots chokes the system
Zheng Yan
12:37 AM Bug #21412: cephfs: too many cephfs snapshots chokes the system
This is actually an OSD issue. I talked to Josh at Cephalocon; he said it has already been fixed. Zheng Yan

04/10/2018

11:22 PM Feature #13688 (Resolved): mds: performance: journal inodes with capabilities to limit rejoin tim...
Fixed by Zheng's openfile table: https://github.com/ceph/ceph/pull/20132 Patrick Donnelly
11:20 PM Feature #14456: mon: prevent older/incompatible clients from mounting the file system
I think the right direction is to allow setting a flag on the MDSMap to prevent older clients from connecting to the ... Patrick Donnelly
11:18 PM Bug #22482 (Won't Fix): qa: MDS can apparently journal new file on "full" metadata pool
This is expected. The MDS is treated specially by the OSDs to allow some writes when the pool is full. Patrick Donnelly
11:13 PM Feature #15507: MDS: support "watching" an inode/dentry
https://bugzilla.redhat.com/show_bug.cgi?id=1561326 Patrick Donnelly
10:36 PM Bug #23615 (Rejected): qa: test for "snapid allocation/deletion mismatch with monitor"
From Zheng:
> mds deletes old snapshots by MRemoveSnaps message. Monitor does not do snap_seq auto-increment wh...
Patrick Donnelly
06:45 PM Bug #23643 (Resolved): qa: osd_mon_report_interval typo in test_full.py
https://github.com/ceph/ceph/blob/577737d007c05bc7a3972158be8c520ab73a1517/qa/tasks/cephfs/test_full.py#L137 Patrick Donnelly
06:33 PM Bug #23624 (Resolved): cephfs-foo-tool crashes immediately it starts
Patrick Donnelly
08:27 AM Bug #23624 (Fix Under Review): cephfs-foo-tool crashes immediately it starts
https://github.com/ceph/ceph/pull/21321 Zheng Yan
08:14 AM Bug #23624 (Resolved): cephfs-foo-tool crashes immediately it starts
http://pulpito.ceph.com/pdonnell-2018-04-06_22:48:23-kcephfs-master-testing-basic-smithi/
Zheng Yan
05:54 PM Backport #23642 (Rejected): luminous: mds: the number of inode showed by "mds perf dump" not corr...
Nathan Cutler
05:54 PM Backport #23641 (Resolved): luminous: auth|doc: fs authorize error for existing credentials confu...
https://github.com/ceph/ceph/pull/22963 Nathan Cutler
05:53 PM Backport #23638 (Resolved): luminous: ceph-fuse: getgroups failure causes exception
https://github.com/ceph/ceph/pull/21687 Nathan Cutler
05:53 PM Backport #23637 (Resolved): luminous: mds: assertion in MDSRank::validate_sessions
https://github.com/ceph/ceph/pull/21372 Nathan Cutler
05:53 PM Backport #23636 (Resolved): luminous: mds: kicked out by monitor during rejoin
https://github.com/ceph/ceph/pull/21366 Nathan Cutler
05:53 PM Backport #23635 (Resolved): luminous: client: fix request send_to_auth was never really used
https://github.com/ceph/ceph/pull/21354 Nathan Cutler
05:53 PM Backport #23634 (Resolved): luminous: doc: outline the steps for upgrading an MDS cluster
https://github.com/ceph/ceph/pull/21352 Nathan Cutler
05:53 PM Backport #23632 (Resolved): luminous: mds: handle client requests when mds is stopping
https://github.com/ceph/ceph/pull/21346 Nathan Cutler
02:24 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
The two issues are not the same, but they are caused by the same reason: the mds takes too much time to handle MDSMap messa... dongdong tao
09:24 AM Bug #23625 (Fix Under Review): mds: sessions opened by journal replay do not get dirtied properly
https://github.com/ceph/ceph/pull/21323 Zheng Yan
09:12 AM Bug #23625 (Resolved): mds: sessions opened by journal replay do not get dirtied properly
http://pulpito.ceph.com/pdonnell-2018-04-06_01:22:39-multimds-wip-pdonnell-testing-20180405.233852-testing-basic-smit... Zheng Yan
07:07 AM Backport #23158 (Resolved): jewel: mds: underwater dentry check in CDir::_omap_fetched is racy
Nathan Cutler
04:46 AM Bug #23380 (Pending Backport): mds: ceph.dir.rctime follows dir ctime not inode ctime
Patrick Donnelly
04:45 AM Bug #23452 (Pending Backport): mds: assertion in MDSRank::validate_sessions
Patrick Donnelly
04:45 AM Bug #23446 (Pending Backport): ceph-fuse: getgroups failure causes exception
Patrick Donnelly
04:44 AM Bug #23530 (Pending Backport): mds: kicked out by monitor during rejoin
Patrick Donnelly
04:43 AM Bug #23602 (Pending Backport): mds: handle client requests when mds is stopping
Patrick Donnelly
04:43 AM Bug #23541 (Pending Backport): client: fix request send_to_auth was never really used
Patrick Donnelly
04:42 AM Feature #23623 (Resolved): mds: mark allow_snaps true by default
Patrick Donnelly
04:41 AM Bug #23491 (Resolved): fs: quota backward compatibility
Patrick Donnelly
01:22 AM Bug #21745 (Fix Under Review): mds: MDBalancer using total (all time) request count in load stati...
https://github.com/ceph/ceph/pull/19220/commits/e9689c1ff7e75394298c0e86aa9ed4e703391c3e Zheng Yan

04/09/2018

11:29 PM Bug #22824 (Resolved): Journaler::flush() may flush less data than expected, which causes flush w...
Patrick Donnelly
11:29 PM Backport #22967 (Resolved): luminous: Journaler::flush() may flush less data than expected, which...
Patrick Donnelly
11:17 PM Backport #22967: luminous: Journaler::flush() may flush less data than expected, which causes flu...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20431
merged
Yuri Weinstein
11:29 PM Bug #22221 (Resolved): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
Patrick Donnelly
11:29 PM Backport #22383 (Resolved): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
Patrick Donnelly
11:17 PM Backport #22383: luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21173
merged
Yuri Weinstein
11:17 PM Backport #23154 (Resolved): luminous: mds: FAILED assert (p != active_requests.end()) in MDReques...
Patrick Donnelly
11:16 PM Backport #23154: luminous: mds: FAILED assert (p != active_requests.end()) in MDRequestRef MDCach...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21176
merged
Yuri Weinstein
10:59 PM Bug #23571 (Resolved): mds: make sure that MDBalancer uses heartbeat info from the same epoch
Patrick Donnelly
10:58 PM Backport #23572 (Resolved): luminous: mds: make sure that MDBalancer uses heartbeat info from the...
Patrick Donnelly
10:58 PM Bug #23569 (Resolved): mds: counter decay incorrect
Patrick Donnelly
10:58 PM Backport #23570 (Resolved): luminous: mds: counter decay incorrect
Patrick Donnelly
10:58 PM Bug #23560 (Resolved): mds: mds gets significantly behind on trimming while creating millions of ...
Patrick Donnelly
10:57 PM Backport #23561 (Resolved): luminous: mds: mds gets significantly behind on trimming while creati...
Patrick Donnelly
10:34 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
dongdong tao wrote:
> Patrick Donnelly wrote:
> > Dongdong, I think fast dispatch may not be the answer here. We're...
Patrick Donnelly
08:12 PM Bug #23519 (In Progress): mds: mds got laggy because of MDSBeacon stuck in mqueue
Patrick Donnelly
02:27 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
Patrick Donnelly wrote:
> Dongdong, I think fast dispatch may not be the answer here. We're not yet sure on the caus...
dongdong tao
01:39 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
Dongdong, I think fast dispatch may not be the answer here. We're not yet sure on the cause. Do you have ideas? Patrick Donnelly
09:07 PM Bug #10423 (Closed): update hadoop gitbuilders
stale Patrick Donnelly
09:03 PM Bug #20593 (Pending Backport): mds: the number of inode showed by "mds perf dump" not correct aft...
Patrick Donnelly
09:01 PM Feature #21156 (Resolved): mds: speed up recovery with many open inodes
Patrick Donnelly
09:00 PM Bug #21765 (Pending Backport): auth|doc: fs authorize error for existing credentials confusing/un...
Please backport: https://github.com/ceph/ceph/pull/17678/commits/447b3d4852acd2db656c973cc224fb77d3fff590 Patrick Donnelly
08:56 PM Feature #22545 (Duplicate): add dump inode command to mds
Patrick Donnelly
08:52 PM Bug #6613 (Closed): samba is crashing in teuthology
Closing as stale. Patrick Donnelly
08:52 PM Feature #358: mds: efficient revert to snapshot
Also consider cloning snapshots. Patrick Donnelly
08:50 PM Documentation #21172 (Duplicate): doc: Export over NFS
Patrick Donnelly
08:48 PM Bug #21745: mds: MDBalancer using total (all time) request count in load statistics
https://github.com/ceph/ceph/pull/19220/commits/fb8d07772ffd3b061d2752c6b3375f6cb187be4b
Zheng, please amend the a...
Patrick Donnelly
08:43 PM Bug #19101 (Closed): "samba3error [Unknown error/failure. Missing torture_fail() or torture_asser...
Not looking at samba right now. Patrick Donnelly
08:39 PM Bug #23234 (Won't Fix): mds: damage detected while opening remote dentry
Sorry, we won't look at bugs for multiple actives pre-Luminous. Patrick Donnelly
08:37 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
Zheng, is this issue resolved with the snapshot changes for Mimic? Patrick Donnelly
08:36 PM Bug #20494 (Closed): cephfs_data_scan: try_remove_dentries_for_stray assertion failure
Closing due to inactivity. Patrick Donnelly
08:31 PM Bug #19255 (Can't reproduce): qa: test_full_fclose failure
Patrick Donnelly
08:27 PM Bug #22788 (Won't Fix): ceph-fuse performance issues with rsync
Patrick Donnelly
08:26 PM Feature #12274 (In Progress): mds: start forward scrubs from all subtree roots, skip non-auth met...
Patrick Donnelly
08:21 PM Bug #23615 (Rejected): qa: test for "snapid allocation/deletion mismatch with monitor"
See email thread. Patrick Donnelly
08:11 PM Bug #23556 (Closed): segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
Patrick Donnelly
07:45 PM Feature #21888 (Fix Under Review): Adding [--repair] option for cephfs-journal-tool make it can r...
Patrick Donnelly
07:29 PM Feature #23362 (In Progress): mds: add drop_cache command
Patrick Donnelly
06:39 PM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
The service map is a librados resource consumed by ceph-mgr. It periodically gets perfcounters, for example. When l... Matt Benjamin
06:23 PM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
I'm not aware of any bugs open on this. Is there any background on the rgw-nfs map at all? I've not looked at the ser... Jeff Layton
05:22 PM Documentation #23611 (Need More Info): doc: add description of new fs-client auth profile
On that page: http://docs.ceph.com/docs/master/rados/operations/user-management/#authorization-capabilities
https:...
Patrick Donnelly
04:00 PM Documentation #23568 (Pending Backport): doc: outline the steps for upgrading an MDS cluster
Patrick Donnelly
01:37 PM Bug #23518: mds: crash when failover
Are you still hitting the issue or has it gone away? If so `debug mds = 20` logs would be helpful.. Patrick Donnelly
01:32 PM Bug #23393 (Fix Under Review): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal...
https://github.com/ceph/ceph-ansible/pull/2503 Ramana Raja
01:26 PM Bug #23538 (Fix Under Review): mds: fix occasional dir rstat inconsistency between multi-MDSes
Patrick Donnelly
01:25 PM Bug #23530 (Fix Under Review): mds: kicked out by monitor during rejoin
Patrick Donnelly
12:59 PM Bug #23602 (Resolved): mds: handle client requests when mds is stopping
https://github.com/ceph/ceph/pull/21167 Zheng Yan

04/08/2018

04:35 PM Bug #23211 (Resolved): client: prevent fallback to remount when dentry_invalidate_cb is true but ...
Nathan Cutler
04:35 PM Backport #23356 (Resolved): jewel: client: prevent fallback to remount when dentry_invalidate_cb ...
Nathan Cutler
02:54 PM Bug #23332: kclient: with fstab entry is not coming up reboot
Zheng Yan wrote:
> In messages_ceph-sshreeka-run379-node5-client
> [...]
>
> looks like fstab didn't include c...
Shreekara Shastry
01:12 AM Bug #23332: kclient: with fstab entry is not coming up reboot
In messages_ceph-sshreeka-run379-node5-client ... Zheng Yan
06:36 AM Bug #23503: mds: crash during pressure test
Patrick Donnelly wrote:
> wei jin wrote:
> > Hi, Patrick, I have a question: after pinning base dir, will subdirs s...
wei jin

04/06/2018

10:28 PM Bug #23332 (New): kclient: with fstab entry is not coming up reboot
Patrick Donnelly
10:25 PM Documentation #23583 (Resolved): doc: update snapshot doc to account for recent changes
http://docs.ceph.com/docs/master/dev/cephfs-snapshots/ Patrick Donnelly
10:23 PM Bug #22741 (Resolved): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-mas...
Patrick Donnelly
10:21 PM Backport #22696 (In Progress): luminous: client: dirty caps may never get the chance to flush
https://github.com/ceph/ceph/pull/21278 Patrick Donnelly
10:11 PM Backport #22696: luminous: client: dirty caps may never get the chance to flush
I'd prefer not to; I'll try to resolve the conflicts. Patrick Donnelly
10:05 PM Bug #23582 (Fix Under Review): MDSMonitor: mds health warnings printed in bad format
https://github.com/ceph/ceph/pull/21276 Patrick Donnelly
08:29 PM Bug #23582 (Resolved): MDSMonitor: mds health warnings printed in bad format
Example:... Patrick Donnelly
05:32 PM Bug #23033 (Resolved): qa: ignore more warnings during mds-full test
Patrick Donnelly
05:32 PM Backport #23060 (Resolved): luminous: qa: ignore more warnings during mds-full test
Patrick Donnelly
05:28 PM Bug #22483 (Resolved): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete
Patrick Donnelly
05:27 PM Bug #21402 (Resolved): mds: move remaining containers in CDentry/CDir/CInode to mempool
Patrick Donnelly
05:27 PM Backport #22972 (Resolved): luminous: mds: move remaining containers in CDentry/CDir/CInode to me...
Patrick Donnelly
05:26 PM Bug #22288 (Resolved): mds: assert when inode moves during scrub
Patrick Donnelly
05:26 PM Backport #23016 (Resolved): luminous: mds: assert when inode moves during scrub
Patrick Donnelly
03:20 AM Backport #23572 (In Progress): luminous: mds: make sure that MDBalancer uses heartbeat info from ...
Patrick Donnelly
03:18 AM Backport #23572 (Resolved): luminous: mds: make sure that MDBalancer uses heartbeat info from the...
https://github.com/ceph/ceph/pull/21267 Patrick Donnelly
03:17 AM Bug #23571 (Resolved): mds: make sure that MDBalancer uses heartbeat info from the same epoch
Already fixed in: https://github.com/ceph/ceph/pull/18941/ Patrick Donnelly
03:15 AM Backport #23570 (In Progress): luminous: mds: counter decay incorrect
Patrick Donnelly
03:12 AM Backport #23570 (Resolved): luminous: mds: counter decay incorrect
https://github.com/ceph/ceph/pull/21266 Patrick Donnelly
03:12 AM Bug #23569 (Resolved): mds: counter decay incorrect
Fixed by https://github.com/ceph/ceph/pull/18776
This issue tracks the backport.
Patrick Donnelly
03:07 AM Documentation #23427 (Fix Under Review): doc: create doc outlining steps to bring down cluster
https://github.com/ceph/ceph/pull/21265 Patrick Donnelly
01:13 AM Bug #21584 (Resolved): FAILED assert(get_version() < pv) in CDir::mark_dirty
Nathan Cutler
01:13 AM Backport #22031 (Resolved): jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty
Nathan Cutler
12:02 AM Bug #23532 (Resolved): doc: create PendingReleaseNotes and add dev doc for openfile table purpose...
Patrick Donnelly

04/05/2018

10:52 PM Documentation #23568 (Fix Under Review): doc: outline the steps for upgrading an MDS cluster
https://github.com/ceph/ceph/pull/21263 Patrick Donnelly
10:37 PM Documentation #23568 (Resolved): doc: outline the steps for upgrading an MDS cluster
Until we have versioned MDS-MDS messages and feature flags obeyed by MDSs during upgrades (e.g. require_mimic_mds), t... Patrick Donnelly
09:22 PM Bug #23567 (Resolved): MDSMonitor: successive changes to max_mds can allow hole in ranks
With 3 MDS, approximately this sequence:... Patrick Donnelly
07:39 PM Backport #22384 (Resolved): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), act...
Nathan Cutler
07:09 PM Bug #23503: mds: crash during pressure test
wei jin wrote:
> Hi, Patrick, I have a question: after pinning base dir, will subdirs still be migrated to other act...
Patrick Donnelly
06:39 PM Bug #22263 (Resolved): client reconnect gather race
Nathan Cutler
06:39 PM Backport #22380 (Resolved): jewel: client reconnect gather race
Nathan Cutler
06:38 PM Bug #22631 (Resolved): mds: crashes because of old pool id in journal header
Nathan Cutler
06:38 PM Backport #22764 (Resolved): jewel: mds: crashes because of old pool id in journal header
Nathan Cutler
06:37 PM Bug #21383 (Resolved): qa: failures from pjd fstest
Nathan Cutler
06:37 PM Backport #21489 (Resolved): jewel: qa: failures from pjd fstest
Nathan Cutler
04:26 PM Bug #22821 (Resolved): mds: session reference leak
Nathan Cutler
04:26 PM Backport #22970 (Resolved): jewel: mds: session reference leak
Nathan Cutler
01:21 PM Bug #23529 (Resolved): TmapMigratePP.DataScan asserts in jewel
Nathan Cutler
04:19 AM Backport #23561 (In Progress): luminous: mds: mds gets significantly behind on trimming while cre...
https://github.com/ceph/ceph/pull/21256 Patrick Donnelly
04:14 AM Backport #23561 (Resolved): luminous: mds: mds gets significantly behind on trimming while creati...
https://github.com/ceph/ceph/pull/21256 Patrick Donnelly
04:14 AM Bug #23560 (Pending Backport): mds: mds gets significantly behind on trimming while creating mill...
Patrick Donnelly
03:43 AM Bug #23491 (Fix Under Review): fs: quota backward compatibility
https://github.com/ceph/ceph/pull/21255 Zheng Yan

04/04/2018

11:49 PM Bug #23560 (Fix Under Review): mds: mds gets significantly behind on trimming while creating mill...
https://github.com/ceph/ceph/pull/21254 Patrick Donnelly
11:44 PM Bug #23560 (Resolved): mds: mds gets significantly behind on trimming while creating millions of ...
Under create-heavy workloads, the MDS sometimes reaches ~60 untrimmed segments for brief periods. I suggest we bump mds_l... Patrick Donnelly
09:26 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
Luminous test revert PR: https://github.com/ceph/ceph/pull/21251 Nathan Cutler
09:22 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
Jeff Layton wrote:
> To be clear, I think we may want to leave off the patch that adds the new testcase from this se...
Nathan Cutler
09:13 PM Bug #20988 (Resolved): client: dual client segfault with racing ceph_shutdown
Patrick Donnelly
09:13 PM Backport #21526 (Closed): jewel: client: dual client segfault with racing ceph_shutdown
Dropping this due to the age of jewel, its dubious value, and the lack of a dual-client use case on jewel. Patrick Donnelly
08:48 PM Bug #23556: segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
Tentatively agreed to drop the PR, because "jewel is near EOL and we don't have a use-case with dual clients for jewel" Nathan Cutler
08:31 PM Bug #23556: segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
Looks to be coming from this commit, because it's the one that adds the ShutdownRace test case:
https://github.com...
Nathan Cutler
08:23 PM Bug #23556 (Closed): segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
The test runs libcephfs/test.sh workunit, which in turn runs the ceph_test_libcephfs binary from the ceph-test packag... Nathan Cutler
05:51 PM Bug #23210 (Resolved): ceph-fuse: exported nfs get "stale file handle" when mds migrating
Sorry, this should not be backported. ceph-fuse "support" for NFS is only for Mimic. Patrick Donnelly
06:05 AM Bug #23210 (Pending Backport): ceph-fuse: exported nfs get "stale file handle" when mds migrating
@Patrick, please confirm that this should be backported to luminous and which master PR. Nathan Cutler
07:50 AM Bug #16807 (Resolved): Crash in handle_slave_rename_prep
Already fixed; see http://tracker.ceph.com/issues/16768 Zheng Yan
07:47 AM Bug #22353 (Resolved): kclient: ceph_getattr() return zero st_dev for normal inode
Zheng Yan
07:44 AM Feature #4501 (Resolved): Identify fields in CDir which aren't permanently necessary
Zheng Yan
07:43 AM Tasks #4499 (Resolved): Identify fields in CInode which aren't permanently necessary
Zheng Yan
07:43 AM Feature #14427 (Resolved): qa: run snapshot tests under thrashing
Zheng Yan
07:41 AM Feature #21877 (Resolved): quota and snaprealm integation
Zheng Yan
07:38 AM Feature #22371 (Resolved): mds: implement QuotaRealm to obviate parent quota lookup
https://github.com/ceph/ceph/pull/18424 Zheng Yan
07:36 AM Bug #3254 (Resolved): mds: Replica inode's parent snaprealms are not open
Zheng Yan
07:36 AM Bug #3254: mds: Replica inode's parent snaprealms are not open
Opening snaprealm parents is no longer required with the new snaprealm format:
https://github.com/ceph/ceph/pull/16779
Zheng Yan
07:34 AM Bug #1938 (Resolved): mds: snaptest-2 doesn't pass with 3 MDS system
https://github.com/ceph/ceph/pull/16779 Zheng Yan
07:34 AM Bug #925 (Resolved): mds: update replica snaprealm on rename
https://github.com/ceph/ceph/pull/16779 Zheng Yan

04/03/2018

08:07 PM Documentation #23271 (Resolved): doc: create install/setup guide for NFS-Ganesha w/ CephFS
Patrick Donnelly
06:44 PM Bug #23436 (Resolved): Client::_read() always return 0 when reading from inline data
https://github.com/ceph/ceph/pull/21221
I wrote and tested that before I saw your PR; sorry, Zheng.
Patrick Donnelly
02:46 AM Bug #23436 (Fix Under Review): Client::_read() always return 0 when reading from inline data
The previous PR is buggy; the new one is:
https://github.com/ceph/ceph/pull/21186
Zheng Yan
06:41 PM Bug #23210 (Resolved): ceph-fuse: exported nfs get "stale file handle" when mds migrating
Patrick Donnelly
01:38 PM Bug #23509: ceph-fuse: broken directory permission checking
Pull request here:
https://github.com/ceph/ceph/pull/21181
Jeff Layton
12:04 PM Bug #23250 (Need More Info): mds: crash during replay: interval_set.h: 396: FAILED assert(p->firs...
Zheng Yan
11:54 AM Bug #21070 (Resolved): MDS: MDS is laggy or crashed When deleting a large number of files
Zheng Yan
10:08 AM Bug #23529 (Fix Under Review): TmapMigratePP.DataScan asserts in jewel
https://github.com/ceph/ceph/pull/21208 Zheng Yan
09:30 AM Backport #21450 (Closed): jewel: MDS: MDS is laggy or crashed When deleting a large number of files
jewel does not have this bug Zheng Yan
09:14 AM Bug #23541 (Fix Under Review): client: fix request send_to_auth was never really used
Nathan Cutler
03:20 AM Bug #23541: client: fix request send_to_auth was never really used
https://github.com/ceph/ceph/pull/21191 Zhi Zhang
03:20 AM Bug #23541 (Resolved): client: fix request send_to_auth was never really used
Client request's send_to_auth was never really used in choose_target_mds, although it would be set to true when getti... Zhi Zhang
09:09 AM Bug #23532 (Fix Under Review): doc: create PendingReleaseNotes and add dev doc for openfile table...
https://github.com/ceph/ceph/pull/21204 Zheng Yan
07:39 AM Bug #23491: fs: quota backward compatibility
Will there be a feature bit for Mimic? If there is, it's easy to add an MDS version check to the client. Zheng Yan
06:59 AM Bug #23491: fs: quota backward compatibility
An old userspace client talking to a new MDS is OK. A new client talking to an old MDS may have problems. Zheng Yan
02:59 AM Backport #23356 (In Progress): jewel: client: prevent fallback to remount when dentry_invalidate_...
Nathan Cutler
02:46 AM Backport #23157 (In Progress): luminous: mds: underwater dentry check in CDir::_omap_fetched is racy
Nathan Cutler
02:44 AM Backport #23158 (In Progress): jewel: mds: underwater dentry check in CDir::_omap_fetched is racy
Nathan Cutler
 
