Activity
From 01/02/2019 to 01/31/2019
01/31/2019
- 10:13 PM Backport #38132 (In Progress): luminous: mds: stopping MDS with a large cache (40+GB) causes it t...
- 08:11 PM Backport #38132 (Resolved): luminous: mds: stopping MDS with a large cache (40+GB) causes it to m...
- https://github.com/ceph/ceph/pull/26232
- 10:12 PM Backport #38130 (In Progress): luminous: mds: provide a limit for the maximum number of caps a cl...
- 08:11 PM Backport #38130 (Resolved): luminous: mds: provide a limit for the maximum number of caps a clien...
- https://github.com/ceph/ceph/pull/26232
- 09:25 PM Backport #38129 (In Progress): mimic: mds: provide a limit for the maximum number of caps a clien...
- 08:11 PM Backport #38129 (Resolved): mimic: mds: provide a limit for the maximum number of caps a client m...
- https://github.com/ceph/ceph/pull/28452
- 09:25 PM Backport #38131 (In Progress): mimic: mds: stopping MDS with a large cache (40+GB) causes it to m...
- 08:11 PM Backport #38131 (Resolved): mimic: mds: stopping MDS with a large cache (40+GB) causes it to miss...
- https://github.com/ceph/ceph/pull/28452
- 08:36 PM Bug #38087 (Resolved): mds: blacklists all clients on eviction
- 08:10 PM Bug #37723 (Pending Backport): mds: stopping MDS with a large cache (40+GB) causes it to miss hea...
- 08:10 PM Feature #38022 (Pending Backport): mds: provide a limit for the maximum number of caps a client m...
- 05:53 PM Bug #38128: msgr: unexpected "handle_cephx_auth got bad authorizer, auth_reply_len=0"
- What makes this stranger is that the connection mysteriously reconnects. I don't see how from the logs. Additionally, the cl...
- 05:41 PM Bug #38128 (Resolved): msgr: unexpected "handle_cephx_auth got bad authorizer, auth_reply_len=0"
- From the client:...
- 10:11 AM Bug #37944 (Resolved): qa: test_damage needs to silence MDS_READ_ONLY
- 10:11 AM Backport #37952 (Resolved): mimic: qa: test_damage needs to silence MDS_READ_ONLY
- 03:54 AM Backport #38104 (In Progress): luminous: client: session flush does not cause cap release message...
- -https://github.com/ceph/ceph/pull/26217-
- 02:22 AM Backport #38103 (In Progress): mimic: client: session flush does not cause cap release message flush
- -https://github.com/ceph/ceph/pull/26218-
- 12:43 AM Bug #36547 (Won't Fix): mds_beacon_grace and mds_beacon_interval should have a canonical setting
- This is addressed via the distributed config.
- 12:23 AM Backport #38102 (In Progress): luminous: mds: cache drop should trim cache before flushing journal
- https://github.com/ceph/ceph/pull/26215
- 12:18 AM Backport #38101 (In Progress): mimic: mds: cache drop should trim cache before flushing journal
- https://github.com/ceph/ceph/pull/26214
01/30/2019
- 06:19 PM Bug #35850 (Resolved): mds: runs out of file descriptors after several respawns
- 06:18 PM Backport #36209 (Resolved): mimic: mds: runs out of file descriptors after several respawns
- 05:01 PM Backport #36209: mimic: mds: runs out of file descriptors after several respawns
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25822
merged
- 06:18 PM Bug #36651 (Resolved): ceph-volume-client: cannot set mode for cephfs volumes as required by Open...
- 06:18 PM Backport #37426 (Resolved): mimic: ceph-volume-client: cannot set mode for cephfs volumes as requ...
- 04:52 PM Backport #37426: mimic: ceph-volume-client: cannot set mode for cephfs volumes as required by Ope...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25413
merged
- 06:17 PM Bug #36390 (Resolved): qa: teuthology may hang on diagnostic commands for fuse mount
- 06:17 PM Backport #36578 (Resolved): mimic: qa: teuthology may hang on diagnostic commands for fuse mount
- 04:52 PM Backport #36578: mimic: qa: teuthology may hang on diagnostic commands for fuse mount
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25515
merged
- 06:17 PM Bug #36676 (Resolved): qa: wrong setting for msgr failures
- 06:17 PM Backport #37424 (Resolved): mimic: qa: wrong setting for msgr failures
- 04:51 PM Backport #37424: mimic: qa: wrong setting for msgr failures
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25517
merged
- 06:16 PM Bug #37399 (Resolved): mds: severe internal fragment when decoding xattr_map from log event
- 06:16 PM Backport #37603 (Resolved): mimic: mds: severe internal fragment when decoding xattr_map from log...
- 04:51 PM Backport #37603: mimic: mds: severe internal fragment when decoding xattr_map from log event
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25519
merged
- 06:16 PM Bug #37368 (Resolved): mds: directories pinned keep being replicated back and forth between expor...
- 06:16 PM Backport #37607 (Resolved): mimic: mds: directories pinned keep being replicated back and forth b...
- 04:50 PM Backport #37607: mimic: mds: directories pinned keep being replicated back and forth between expo...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25521
merged
- 06:15 PM Bug #37394 (Resolved): mds: PurgeQueue write error handler does not handle EBLACKLISTED
- 06:15 PM Backport #37605 (Resolved): mimic: mds: PurgeQueue write error handler does not handle EBLACKLISTED
- 04:49 PM Backport #37605: mimic: mds: PurgeQueue write error handler does not handle EBLACKLISTED
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25523
merged
- 06:15 PM Bug #36594 (Resolved): qa: pjd test appears to require more than 3h timeout for some configurations
- 06:15 PM Backport #37611 (Resolved): mimic: qa: pjd test appears to require more than 3h timeout for some ...
- 04:49 PM Backport #37611: mimic: qa: pjd test appears to require more than 3h timeout for some configurations
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25557
merged
- 06:14 PM Bug #37567 (Resolved): mds: fix incorrect l_pq_executing_ops statistics when meet an invalid item...
- 06:14 PM Backport #37626 (Resolved): mimic: mds: fix incorrect l_pq_executing_ops statistics when meet an ...
- 04:48 PM Backport #37626: mimic: mds: fix incorrect l_pq_executing_ops statistics when meet an invalid ite...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25559
merged
- 06:14 PM Bug #37566 (Resolved): mds: do not call Journaler::_trim twice
- 06:14 PM Backport #37628 (Resolved): mimic: mds: do not call Journaler::_trim twice
- 04:47 PM Backport #37628: mimic: mds: do not call Journaler::_trim twice
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25561
merged
- 06:13 PM Bug #36703 (Resolved): MDS admin socket command `dump cache` with a very large cache will hang/ki...
- 06:13 PM Backport #37609 (Resolved): mimic: MDS admin socket command `dump cache` with a very large cache ...
- 04:47 PM Backport #37609: mimic: MDS admin socket command `dump cache` with a very large cache will hang/k...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25642
merged
- 06:13 PM Bug #37333 (Resolved): fuse client can't read file due to can't acquire Fr
- 06:13 PM Backport #37699 (Resolved): mimic: fuse client can't read file due to can't acquire Fr
- 04:46 PM Backport #37699: mimic: fuse client can't read file due to can't acquire Fr
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25676
merged
- 06:12 PM Backport #37695 (Resolved): mimic: client: fix failure in quota size limitation when using samba
- 04:45 PM Backport #37695: mimic: client: fix failure in quota size limitation when using samba
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25678
merged
- 06:12 PM Bug #37464 (Resolved): race of updating wanted caps
- 06:12 PM Backport #37634 (Resolved): mimic: race of updating wanted caps
- 04:45 PM Backport #37634: mimic: race of updating wanted caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25680
merged
- 06:11 PM Bug #37516 (Resolved): mds: remove duplicated l_mdc_num_strays perfcounter set
- 06:11 PM Backport #37632 (Resolved): mimic: mds: remove duplicated l_mdc_num_strays perfcounter set
- 04:43 PM Backport #37632: mimic: mds: remove duplicated l_mdc_num_strays perfcounter set
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25681
merged
- 06:11 PM Bug #37546 (Resolved): client: do not move f->pos untill success write
- 06:11 PM Backport #37631 (Resolved): luminous: client: do not move f->pos untill success write
- 06:10 PM Backport #37630 (Resolved): mimic: client: do not move f->pos untill success write
- 04:41 PM Backport #37630: mimic: client: do not move f->pos untill success write
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25683
merged
- 06:10 PM Bug #37724 (Resolved): MDSMonitor: ignores stopping MDS that was formerly laggy
- 06:10 PM Backport #37738 (Resolved): mimic: MDSMonitor: ignores stopping MDS that was formerly laggy
- 04:41 PM Backport #37738: mimic: MDSMonitor: ignores stopping MDS that was formerly laggy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25685
merged
- 06:10 PM Bug #37670 (Resolved): standby-replay MDS spews message to log every second
- 06:09 PM Backport #37757 (Resolved): mimic: standby-replay MDS spews message to log every second
- 04:40 PM Backport #37757: mimic: standby-replay MDS spews message to log every second
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25803
merged
- 06:09 PM Bug #36079 (Resolved): ceph-fuse: hang because it miss reconnect phase when hot standby mds switc...
- 06:09 PM Backport #37828 (Resolved): mimic: ceph-fuse: hang because it miss reconnect phase when hot stand...
- 04:40 PM Backport #37828: mimic: ceph-fuse: hang because it miss reconnect phase when hot standby mds swit...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25903
merged
- 06:08 PM Backport #37907 (Resolved): mimic: mds: wait shorter intervals to send beacon if laggy
- 04:39 PM Backport #37907: mimic: mds: wait shorter intervals to send beacon if laggy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25980
merged
- 06:08 PM Bug #37836 (Resolved): qa: test_damage performs truncate test on same object repeatedly
- 06:08 PM Backport #37923 (Resolved): mimic: qa: test_damage performs truncate test on same object repeatedly
- 04:39 PM Backport #37923: mimic: qa: test_damage performs truncate test on same object repeatedly
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26047
merged
- 06:07 PM Bug #37837 (Resolved): qa: test_damage expectations wrong for Truncate on some objects
- 06:07 PM Backport #37921 (Resolved): mimic: qa: test_damage expectations wrong for Truncate on some objects
- 04:39 PM Backport #37921: mimic: qa: test_damage expectations wrong for Truncate on some objects
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26047
merged
- 06:06 PM Backport #37759 (Resolved): mimic: mds: mds state change race
- 06:05 PM Backport #37988 (Resolved): mimic: MDSMonitor: missing osdmon writeable check
- 04:36 PM Backport #37988: mimic: MDSMonitor: missing osdmon writeable check
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26069
merged
- 04:36 PM Backport #37952: mimic: qa: test_damage needs to silence MDS_READ_ONLY
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/26072
merged
- 01:24 PM Backport #37906 (In Progress): mimic: make cephfs-data-scan reconstruct snaptable
- @Damian - please just go ahead. Once you have been doing upstream work for a while, you will be added to the relevant ...
- 12:56 PM Backport #38104 (Resolved): luminous: client: session flush does not cause cap release message flush
- https://github.com/ceph/ceph/pull/26271
- 12:56 PM Backport #38103 (Resolved): mimic: client: session flush does not cause cap release message flush
- https://github.com/ceph/ceph/pull/26424
- 12:56 PM Backport #38102 (Resolved): luminous: mds: cache drop should trim cache before flushing journal
- https://github.com/ceph/ceph/pull/26215
- 12:55 PM Backport #38101 (Resolved): mimic: mds: cache drop should trim cache before flushing journal
- https://github.com/ceph/ceph/pull/26214
- 12:55 PM Backport #38100 (Rejected): luminous: mds: remove cache drop admin socket command
- 12:55 PM Backport #38099 (Resolved): mimic: mds: remove cache drop admin socket command
- https://github.com/ceph/ceph/pull/29210
- 12:55 PM Backport #38098 (Resolved): luminous: mds: optimize revoking stale caps
- https://github.com/ceph/ceph/pull/26278
- 12:55 PM Backport #38097 (Resolved): mimic: mds: optimize revoking stale caps
- https://github.com/ceph/ceph/pull/28585
01/29/2019
- 10:24 PM Bug #38087 (Fix Under Review): mds: blacklists all clients on eviction
- 10:01 PM Bug #38087 (In Progress): mds: blacklists all clients on eviction
- 10:01 PM Bug #38087 (Resolved): mds: blacklists all clients on eviction
- This is due to the recent messenger overhaul, specifically:
https://github.com/ceph/ceph/blob/7fa1e3c37f8c7fb709ae...
- 08:05 PM Backport #38085 (Rejected): mimic: mds: log new client sessions with various metadata
- 08:05 PM Backport #38084 (Resolved): luminous: mds: log new client sessions with various metadata
- https://github.com/ceph/ceph/pull/26257
- 09:24 AM Backport #37906: mimic: make cephfs-data-scan reconstruct snaptable
- Please add me as assignee, as per mail to ceph-devel (Damian Wojsław)
01/27/2019
- 05:20 AM Feature #37678 (Pending Backport): mds: log new client sessions with various metadata
- 12:53 AM Bug #38054 (Fix Under Review): mds: broadcast quota message to client when disable quota
- 12:50 AM Bug #38020 (Pending Backport): mds: remove cache drop admin socket command
- 12:49 AM Bug #38010 (Pending Backport): mds: cache drop should trim cache before flushing journal
- 12:48 AM Bug #38009 (Pending Backport): client: session flush does not cause cap release message flush
01/26/2019
- 07:19 AM Bug #38054 (Resolved): mds: broadcast quota message to client when disable quota
- When the quota is disabled by setting quota.max_files or quota.max_bytes
to zero, the client does not receive the quota broadcas...
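For context, CephFS quotas are driven by virtual extended attributes on a directory of a mounted file system, so the case above can be exercised with plain xattr calls. A minimal Python sketch, assuming a CephFS mount at the hypothetical path /mnt/cephfs:

```python
import os

# Hypothetical directory inside a mounted CephFS file system.
d = "/mnt/cephfs/testdir"

# Enable a file-count quota on the directory.
os.setxattr(d, "ceph.quota.max_files", b"100")

# Disable it again by writing zero, the step this bug is about:
# other clients should receive a quota broadcast once this happens.
os.setxattr(d, "ceph.quota.max_files", b"0")
```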
01/25/2019
- 07:26 PM Feature #38052 (New): mds: provide interface to control/view internal operations
- Once a cache drop is in progress, it cannot be stopped until it completes. Provide a generic interface for viewing th...
- 06:53 PM Bug #38043 (Pending Backport): mds: optimize revoking stale caps
- 09:03 AM Bug #38043 (Resolved): mds: optimize revoking stale caps
01/23/2019
- 06:07 PM Cleanup #37954 (Resolved): ceph: cleanup status output for CephFS file systems, especially for mu...
- 05:51 PM Feature #38022 (Resolved): mds: provide a limit for the maximum number of caps a client may have
- This is to prevent unsustainable situations where a client has so many outstanding caps that a linear traversal/opera...
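A usage sketch of such a limit via the centralized config, assuming the feature is exposed as an MDS option named mds_max_caps_per_client (the option name and value here are assumptions, not taken from the ticket):

```python
import subprocess

# Assumed option name; applies to all MDS daemons via the centralized
# config store and requires admin credentials on this host.
subprocess.run(
    ["ceph", "config", "set", "mds", "mds_max_caps_per_client", "500000"],
    check=True,
)
```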
- 03:54 PM Backport #37977 (Resolved): luminous: infinite loop in OpTracker::check_ops_in_flight
- 09:37 AM Backport #37977 (In Progress): luminous: infinite loop in OpTracker::check_ops_in_flight
- https://github.com/ceph/ceph/pull/26088
- 09:26 AM Backport #37977: luminous: infinite loop in OpTracker::check_ops_in_flight
- you are right, sorry
- 05:01 AM Backport #37977: luminous: infinite loop in OpTracker::check_ops_in_flight
- Unless I'm mistaken, that pull request doesn't look like it does anything useful....
- 02:57 PM Bug #38020 (Fix Under Review): mds: remove cache drop admin socket command
- 02:48 PM Bug #38020 (Resolved): mds: remove cache drop admin socket command
- `cache drop` is a long running command that will block the asok interface
(while the tell version does not). Attempt...
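A sketch of driving the tell variant mentioned above from Python, assuming it accepts the same `cache drop [timeout]` arguments as the asok version; the MDS name and timeout are placeholders:

```python
import subprocess

# Ask the MDS daemon "a" to drop its cache through the tell interface
# rather than the admin socket; the trailing argument is the timeout
# in seconds that the command accepts per the related tickets.
subprocess.run(["ceph", "tell", "mds.a", "cache", "drop", "30"], check=True)
```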
01/22/2019
- 08:38 PM Bug #38010 (Fix Under Review): mds: cache drop should trim cache before flushing journal
- 08:33 PM Bug #38010 (Resolved): mds: cache drop should trim cache before flushing journal
- Otherwise, dirty inodes remain pinned and cannot be trimmed.
- 06:54 PM Bug #38009 (Fix Under Review): client: session flush does not cause cap release message flush
- 06:51 PM Bug #38009 (Resolved): client: session flush does not cause cap release message flush
- When the client receives CEPH_SESSION_FLUSHMSG, it simply sends an ACK back to the MDS. At least for the cache drop u...
- 12:50 PM Backport #37762 (Resolved): luminous: mds: deadlock when setting config value via admin socket
- 08:39 AM Backport #37952 (In Progress): mimic: qa: test_damage needs to silence MDS_READ_ONLY
- 06:06 AM Backport #37988 (In Progress): mimic: MDSMonitor: missing osdmon writeable check
- https://github.com/ceph/ceph/pull/26069
- 02:26 AM Backport #37989 (In Progress): luminous: MDSMonitor: missing osdmon writeable check
- https://github.com/ceph/ceph/pull/26065
01/21/2019
- 11:28 PM Backport #37977 (Resolved): luminous: infinite loop in OpTracker::check_ops_in_flight
- 11:26 PM Backport #37977: luminous: infinite loop in OpTracker::check_ops_in_flight
- https://github.com/ceph/ceph/pull/26048 merged
- 02:16 AM Backport #37977 (Fix Under Review): luminous: infinite loop in OpTracker::check_ops_in_flight
- 02:06 AM Backport #37977 (Resolved): luminous: infinite loop in OpTracker::check_ops_in_flight
- caused by backport 02faf3dc321dfa782cac62ffa7e9f46f90feedbd (#23989)
First attempt to fix: https://github.com/ceph...
- 06:02 PM Backport #37480 (In Progress): mimic: mds: MDCache.cc: 11673: abort()
- 02:38 PM Bug #37979: mds: use up to 80G memory when have large stress
- Thanks for the report. Not sure on the cause yet.
- 06:53 AM Bug #37979: mds: use up to 80G memory when have large stress
- I have encountered similar problems. The mon daemon's buffer_anon uses too much memory, about 10G, but I have no idea.
...
- 02:48 AM Bug #37979 (New): mds: use up to 80G memory when have large stress
- ceph version 12.2.7-569-gac5687a (ac5687af649a114f3ed6d6a73d8cf475fded987f) luminous (stable)...
- 02:37 PM Bug #37970 (Rejected): require help for two problems:failing to caps release/failing to cache pre...
- Please seek help on the ceph-users mailing list for this type of issue.
- 02:36 PM Bug #37971 (Duplicate): misbehaving cephfs mount
- 11:52 AM Backport #37898 (In Progress): mimic: mds: purge queue recovery hangs during boot if PQ journal i...
- 09:14 AM Backport #37989 (Resolved): luminous: MDSMonitor: missing osdmon writeable check
- https://github.com/ceph/ceph/pull/26065
- 09:14 AM Backport #37988 (Resolved): mimic: MDSMonitor: missing osdmon writeable check
- https://github.com/ceph/ceph/pull/26069
- 08:37 AM Backport #37759 (In Progress): mimic: mds: mds state change race
- 01:15 AM Backport #37923 (In Progress): mimic: qa: test_damage performs truncate test on same object repea...
- https://github.com/ceph/ceph/pull/26047
01/20/2019
- 01:22 AM Feature #20611 (Resolved): MDSMonitor: do not show cluster health warnings for file system intent...
- 01:19 AM Bug #37929 (Pending Backport): MDSMonitor: missing osdmon writeable check
- 01:18 AM Feature #37085 (Resolved): add command to bring cluster down rapidly
- 01:17 AM Bug #24721 (Resolved): mds: accept an inode number in hex for dump_inode command
01/19/2019
- 10:30 AM Bug #37970: require help for two problems:failing to caps release/failing to cache pressure release
- Same as Bug #37971.
- 10:24 AM Bug #37970 (Rejected): require help for two problems:failing to caps release/failing to cache pre...
- I use a CephFS mount in Kubernetes; the issue is that some apps get stuck on the client side when the two cases above arise...
- 10:29 AM Bug #37971 (Duplicate): misbehaving cephfs mount
- I use a CephFS mount in Kubernetes; the issue is that some apps get stuck on the client side when the two cases above arise...
01/18/2019
- 08:42 PM Backport #37758 (Resolved): luminous: standby-replay MDS spews message to log every second
- 08:34 PM Backport #37758: luminous: standby-replay MDS spews message to log every second
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25804
merged
- 08:42 PM Backport #37829 (Resolved): luminous: ceph-fuse: hang because it miss reconnect phase when hot st...
- 08:33 PM Backport #37829: luminous: ceph-fuse: hang because it miss reconnect phase when hot standby mds s...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25904
merged
- 08:42 PM Backport #37953 (Resolved): luminous: qa: test_damage needs to silence MDS_READ_ONLY
- 08:30 PM Backport #37953: luminous: qa: test_damage needs to silence MDS_READ_ONLY
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/26011
merged
- 05:29 PM Bug #37956 (Resolved): qa/workunits/cephtool/test.sh:1124: test_mon_mds: ceph mds stat fails
- https://github.com/ceph/ceph/pull/26019
- 03:09 AM Bug #37956 (Fix Under Review): qa/workunits/cephtool/test.sh:1124: test_mon_mds: ceph mds stat f...
- -https://github.com/ceph/ceph/pull/26018-
- 03:47 PM Backport #37762: luminous: mds: deadlock when setting config value via admin socket
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25833
merged
- 05:43 AM Bug #37723 (In Progress): mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
- 01:40 AM Cleanup #37954 (Fix Under Review): ceph: cleanup status output for CephFS file systems, especiall...
01/17/2019
- 11:45 PM Bug #37956: qa/workunits/cephtool/test.sh:1124: test_mon_mds: ceph mds stat fails
- From a cursory look, this appears to be because hidden commands are not included in the "get_command_descriptions" li...
- 10:22 PM Bug #37956 (Resolved): qa/workunits/cephtool/test.sh:1124: test_mon_mds: ceph mds stat fails
- ...
- 06:37 PM Cleanup #37954: ceph: cleanup status output for CephFS file systems, especially for multifs
- This should also take into account multiple file systems and the case where there are enough file systems to span mul...
- 06:36 PM Cleanup #37954 (Resolved): ceph: cleanup status output for CephFS file systems, especially for mu...
- In particular, there are extraneous spaces and the information is hard to parse without consulting documentation:
<p...
- 06:32 PM Feature #20611 (Fix Under Review): MDSMonitor: do not show cluster health warnings for file syste...
- 04:41 PM Feature #20611 (In Progress): MDSMonitor: do not show cluster health warnings for file system int...
- Suggest we silence the health warning only when the cluster is marked down (not failed).
- 06:22 PM Backport #37953 (In Progress): luminous: qa: test_damage needs to silence MDS_READ_ONLY
- 06:19 PM Backport #37953 (Resolved): luminous: qa: test_damage needs to silence MDS_READ_ONLY
- https://github.com/ceph/ceph/pull/26011
- 06:19 PM Backport #37952 (Resolved): mimic: qa: test_damage needs to silence MDS_READ_ONLY
- https://github.com/ceph/ceph/pull/26072
- 06:18 PM Bug #37944 (Pending Backport): qa: test_damage needs to silence MDS_READ_ONLY
- 06:00 PM Bug #37726 (In Progress): mds: high debug logging with many subtrees is slow
- Rishabh Dave wrote:
> I created 100 directories and wrote 10 files in each directory after pinning all the directori...
- 05:06 PM Bug #37726: mds: high debug logging with many subtrees is slow
- I created 100 directories and wrote 10 files in each directory after pinning all the directories on an MDS. If I repro...
- 05:02 PM Backport #37819 (In Progress): mimic: mds: create separate config for heartbeat timeout
- 04:15 PM Bug #36349 (Can't reproduce): mds: src/mds/MDCache.cc: 1637: FAILED ceph_assert(follows >= realm-...
- Haven't seen this since. Closing as can't reproduce. Probably noise from messenger changes?
- 03:23 PM Feature #26996 (In Progress): cephfs: get capability cache hits by clients to provide introspecti...
- 03:23 PM Feature #24285 (In Progress): mgr: add module which displays current usage of file system (`fs top`)
- 01:47 PM Backport #37481 (In Progress): luminous: mds: MDCache.cc: 11673: abort()
- 11:37 AM Bug #24872 (Resolved): qa: client socket inaccessible without sudo
- 04:06 AM Backport #37921 (In Progress): mimic: qa: test_damage expectations wrong for Truncate on some obj...
- -https://github.com/ceph/ceph/pull/26001-
01/16/2019
- 08:52 PM Backport #37924 (Resolved): luminous: qa: test_damage performs truncate test on same object repea...
- 08:31 PM Backport #37924: luminous: qa: test_damage performs truncate test on same object repeatedly
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25967
Reviewed-by: Yuri Weinstein <yweinste@redhat.com>
...
- 08:52 PM Backport #37899 (Resolved): luminous: mds: purge queue recovery hangs during boot if PQ journal i...
- 08:30 PM Backport #37899: luminous: mds: purge queue recovery hangs during boot if PQ journal is damaged
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25968
merged
- 08:51 PM Backport #37922 (Resolved): luminous: qa: test_damage expectations wrong for Truncate on some obj...
- 08:31 PM Backport #37922: luminous: qa: test_damage expectations wrong for Truncate on some objects
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25967
merged
- 06:54 PM Bug #37944 (Fix Under Review): qa: test_damage needs to silence MDS_READ_ONLY
- 06:50 PM Bug #37944 (Resolved): qa: test_damage needs to silence MDS_READ_ONLY
- *sigh*, my fault for not adding this to the whitelist:
http://pulpito.ceph.com/pdonnell-2019-01-15_17:43:35-fs-wip...
- 06:37 PM Backport #37760 (In Progress): luminous: mds: mds state change race
- 06:18 PM Documentation #24580 (Resolved): doc: complete documentation for `ceph fs` administration commands
- 01:41 AM Documentation #24580 (Fix Under Review): doc: complete documentation for `ceph fs` administration...
- 02:23 PM Backport #37635 (Resolved): luminous: race of updating wanted caps
- 01:03 PM Backport #37635: luminous: race of updating wanted caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25762
merged
- 02:22 PM Bug #36350 (Resolved): mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_mds t...
- 02:22 PM Backport #37092 (Resolved): luminous: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" d...
- 01:02 PM Backport #37092: luminous: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_m...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25826
merged
Reviewed-by: Venky Shankar <vshankar@redhat....
- 05:01 AM Backport #37907 (In Progress): mimic: mds: wait shorter intervals to send beacon if laggy
- https://github.com/ceph/ceph/pull/25980
- 04:32 AM Backport #37908 (In Progress): luminous: mds: wait shorter intervals to send beacon if laggy
- https://github.com/ceph/ceph/pull/25979
- 02:19 AM Cleanup #37931: MDSMonitor: rename `mds repaired` to `fs repaired`
- This should also include other commands like `ceph mds rmfailed`. I'll need to think about this more.
- 01:37 AM Cleanup #37931 (In Progress): MDSMonitor: rename `mds repaired` to `fs repaired`
- 01:36 AM Cleanup #37931 (New): MDSMonitor: rename `mds repaired` to `fs repaired`
- This command operates on ranks, not MDS daemons.
01/15/2019
- 10:27 PM Bug #37929 (Fix Under Review): MDSMonitor: missing osdmon writeable check
- 10:22 PM Bug #37929 (Resolved): MDSMonitor: missing osdmon writeable check
- https://github.com/ceph/ceph/blob/38a99f04f47854465d5545fbcc9b78dbfc119b9b/src/mon/FSCommands.cc#L721
failing gids...
- 10:24 PM Feature #37085 (Fix Under Review): add command to bring cluster down rapidly
- 08:35 PM Bug #24721 (Fix Under Review): mds: accept an inode number in hex for dump_inode command
- 06:06 PM Backport #37899 (In Progress): luminous: mds: purge queue recovery hangs during boot if PQ journa...
- 05:52 PM Backport #37922 (In Progress): luminous: qa: test_damage expectations wrong for Truncate on some ...
- 02:08 PM Backport #37922 (Resolved): luminous: qa: test_damage expectations wrong for Truncate on some obj...
- https://github.com/ceph/ceph/pull/25967
- 05:52 PM Backport #37924 (In Progress): luminous: qa: test_damage performs truncate test on same object re...
- 02:08 PM Backport #37924 (Resolved): luminous: qa: test_damage performs truncate test on same object repea...
- https://github.com/ceph/ceph/pull/25967
- 02:08 PM Backport #37923 (Resolved): mimic: qa: test_damage performs truncate test on same object repeatedly
- https://github.com/ceph/ceph/pull/26047
- 02:08 PM Backport #37921 (Resolved): mimic: qa: test_damage expectations wrong for Truncate on some objects
- https://github.com/ceph/ceph/pull/26047
01/14/2019
- 11:42 PM Cleanup #37864 (Resolved): client: use Message smart ptr to manage Message lifetime
- 11:42 PM Cleanup #37864 (Pending Backport): client: use Message smart ptr to manage Message lifetime
- 07:37 PM Bug #37837 (Pending Backport): qa: test_damage expectations wrong for Truncate on some objects
- 07:37 PM Bug #37836 (Pending Backport): qa: test_damage performs truncate test on same object repeatedly
- 07:31 PM Bug #37787 (Resolved): BUILDSTDERR: *** ERROR: ambiguous python shebang in /usr/sbin/mount.fuse.c...
- 02:38 PM Backport #37762 (In Progress): luminous: mds: deadlock when setting config value via admin socket
- 02:35 PM Bug #37853 (Rejected): remove snapped dir success but core dump by bad backtrace in _purge_stray_...
- 10:43 AM Backport #37908 (Resolved): luminous: mds: wait shorter intervals to send beacon if laggy
- https://github.com/ceph/ceph/pull/25979
- 10:43 AM Backport #37907 (Resolved): mimic: mds: wait shorter intervals to send beacon if laggy
- https://github.com/ceph/ceph/pull/25980
- 10:43 AM Backport #37906 (Resolved): mimic: make cephfs-data-scan reconstruct snaptable
- https://github.com/ceph/ceph/pull/31281
- 10:42 AM Backport #37899 (Resolved): luminous: mds: purge queue recovery hangs during boot if PQ journal i...
- https://github.com/ceph/ceph/pull/25968
- 10:41 AM Backport #37898 (Resolved): mimic: mds: purge queue recovery hangs during boot if PQ journal is d...
- https://github.com/ceph/ceph/pull/26055
01/12/2019
- 06:43 PM Feature #36413 (Pending Backport): make cephfs-data-scan reconstruct snaptable
- 06:40 PM Bug #36367 (Pending Backport): mds: wait shorter intervals to send beacon if laggy
- 12:11 AM Backport #37700 (Resolved): luminous: fuse client can't read file due to can't acquire Fr
- 12:09 AM Backport #37700: luminous: fuse client can't read file due to can't acquire Fr
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25677
merged
- 12:10 AM Backport #37633 (Resolved): luminous: mds: remove duplicated l_mdc_num_strays perfcounter set
- 12:08 AM Backport #37633: luminous: mds: remove duplicated l_mdc_num_strays perfcounter set
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25682
merged
- 12:10 AM Backport #37737 (Resolved): luminous: MDSMonitor: ignores stopping MDS that was formerly laggy
- 12:07 AM Backport #37737: luminous: MDSMonitor: ignores stopping MDS that was formerly laggy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25686
merged
- 12:10 AM Bug #36365 (Resolved): qa: increase rm timeout for workunit cleanup
- 12:10 AM Backport #36502 (Resolved): luminous: qa: increase rm timeout for workunit cleanup
- 12:07 AM Backport #36502: luminous: qa: increase rm timeout for workunit cleanup
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25696
merged
- 12:09 AM Bug #36335 (Resolved): qa: infinite timeout on asok command causes job to die
- 12:09 AM Backport #36504 (Resolved): luminous: qa: infinite timeout on asok command causes job to die
- 12:05 AM Backport #36504: luminous: qa: infinite timeout on asok command causes job to die
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25805
merged
Reviewed-by: Patrick Donnelly <pdonnell@redh...
- 12:09 AM Backport #37739 (Resolved): luminous: extend reconnect period when mds is busy
- 12:06 AM Backport #37739: luminous: extend reconnect period when mds is busy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25784
merged
01/11/2019
- 08:12 PM Backport #37820 (Resolved): luminous: mds: create separate config for heartbeat timeout
- 01:44 PM Backport #37820: luminous: mds: create separate config for heartbeat timeout
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25889
merged
- 06:32 PM Bug #37540 (Resolved): luminous: MDSMap session timeout cannot be modified
- 01:44 PM Backport #24912: luminous: qa: multifs requires 4 mds but gets only 2
- https://github.com/ceph/ceph/pull/25890 merged
- 03:09 AM Bug #36273: qa: add background task for some units which drops MDS cache
- Patrick Donnelly wrote:
> Venky, I believe you did some work on this?
I never got time to post the PR -- it needs...
- 12:59 AM Bug #36273: qa: add background task for some units which drops MDS cache
- Venky, I believe you did some work on this?
- 12:42 AM Backport #37829 (In Progress): luminous: ceph-fuse: hang because it miss reconnect phase when hot...
- https://github.com/ceph/ceph/pull/25904
- 12:40 AM Backport #37828 (In Progress): mimic: ceph-fuse: hang because it miss reconnect phase when hot st...
- https://github.com/ceph/ceph/pull/25903
01/10/2019
- 06:33 PM Cleanup #37864 (Fix Under Review): client: use Message smart ptr to manage Message lifetime
- 06:33 PM Cleanup #37864 (Resolved): client: use Message smart ptr to manage Message lifetime
- Follows #24306.
- 06:09 PM Backport #37627 (Resolved): luminous: mds: fix incorrect l_pq_executing_ops statistics when meet ...
- 04:33 PM Backport #37627: luminous: mds: fix incorrect l_pq_executing_ops statistics when meet an invalid ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25560
merged
- 06:08 PM Backport #37610 (Resolved): luminous: qa: pjd test appears to require more than 3h timeout for so...
- 04:33 PM Backport #37610: luminous: qa: pjd test appears to require more than 3h timeout for some configur...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25558
merged
- 06:08 PM Backport #37623 (Resolved): luminous: qa: client socket inaccessible without sudo
- 04:34 PM Backport #37623: luminous: qa: client socket inaccessible without sudo
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25516
merged
- 06:07 PM Backport #36577 (Resolved): luminous: qa: teuthology may hang on diagnostic commands for fuse mount
- 04:34 PM Backport #36577: luminous: qa: teuthology may hang on diagnostic commands for fuse mount
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25516
merged
- 06:06 PM Backport #37629 (Resolved): luminous: mds: do not call Journaler::_trim twice
- 04:32 PM Backport #37629: luminous: mds: do not call Journaler::_trim twice
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25562
merged
- 06:06 PM Backport #37608 (Resolved): luminous: MDS admin socket command `dump cache` with a very large cac...
- 04:32 PM Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will han...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25567
merged
- 05:05 PM Bug #37853: remove snapped dir success but core dump by bad backtrace in _purge_stray_purged
- Snapshots are not stable in Luminous. Why are you using them?
- 07:29 AM Bug #37853: remove snapped dir success but core dump by bad backtrace in _purge_stray_purged
- Reproduced a similar result:
2019-01-10 14:27:21.092670 7fa7bc8cc700 -1 log_channel(cluster) log [ERR] : bad backtrace...
- 07:21 AM Bug #37853 (Rejected): remove snapped dir success but core dump by bad backtrace in _purge_stray_...
- 2019-01-08 10:04:39.651547 7fc3d5e9a700 -1 log_channel(cluster) log [ERR] : bad backtrace on directory inode 0x100000...
- 01:57 PM Bug #26969: kclient: mount unexpectedly gets osdmap updates causing test to fail
- seen in luminous too: http://qa-proxy.ceph.com/teuthology/yuriw-2019-01-08_22:47:29-kcephfs-wip-yuri2-testing-2019-01...
- 05:17 AM Backport #24912: luminous: qa: multifs requires 4 mds but gets only 2
- https://github.com/ceph/ceph/pull/24328
https://github.com/ceph/ceph/pull/25890
- 03:50 AM Backport #37820 (In Progress): luminous: mds: create separate config for heartbeat timeout
- https://github.com/ceph/ceph/pull/25889
- 01:21 AM Backport #37818 (In Progress): mimic: mds crashes frequently when using snapshots in CephFS on mimic
- https://github.com/ceph/ceph/pull/25885
01/09/2019
- 05:21 AM Backport #37823 (In Progress): luminous: mds: output client IP of blacklisted/evicted clients to ...
- 05:18 AM Backport #37822 (In Progress): mimic: mds: output client IP of blacklisted/evicted clients to clu...
- 12:46 AM Bug #37837 (Fix Under Review): qa: test_damage expectations wrong for Truncate on some objects
- 12:35 AM Bug #37837 (In Progress): qa: test_damage expectations wrong for Truncate on some objects
- 12:34 AM Bug #37837 (Resolved): qa: test_damage expectations wrong for Truncate on some objects
- e.g. 500.* object expectation is ignored. Also, truncate on the directory contents objects (e.g. 1.00000..0) does not...
01/08/2019
- 11:58 PM Bug #37543 (Pending Backport): mds: purge queue recovery hangs during boot if PQ journal is damaged
- 11:55 PM Bug #37836 (Fix Under Review): qa: test_damage performs truncate test on same object repeatedly
- 11:55 PM Bug #37836 (In Progress): qa: test_damage performs truncate test on same object repeatedly
- 11:51 PM Bug #37836 (Resolved): qa: test_damage performs truncate test on same object repeatedly
- Tested with 2fb665194f61914711454c2084eb1539bd3588b5^.
See: https://github.com/ceph/ceph/blob/1eb33745a894d238e451...
- 04:28 PM Backport #37829 (Resolved): luminous: ceph-fuse: hang because it miss reconnect phase when hot st...
- https://github.com/ceph/ceph/pull/25904
- 04:27 PM Backport #37828 (Resolved): mimic: ceph-fuse: hang because it miss reconnect phase when hot stand...
- https://github.com/ceph/ceph/pull/25903
- 04:26 PM Backport #37823 (Resolved): luminous: mds: output client IP of blacklisted/evicted clients to clu...
- https://github.com/ceph/ceph/pull/25858
- 04:26 PM Backport #37822 (Resolved): mimic: mds: output client IP of blacklisted/evicted clients to cluste...
- https://github.com/ceph/ceph/pull/25857
- 04:26 PM Backport #37820 (Resolved): luminous: mds: create separate config for heartbeat timeout
- https://github.com/ceph/ceph/pull/25889
- 04:25 PM Backport #37819 (Resolved): mimic: mds: create separate config for heartbeat timeout
- https://github.com/ceph/ceph/pull/26010
- 04:25 PM Backport #37818 (Resolved): mimic: mds crashes frequently when using snapshots in CephFS on mimic
- https://github.com/ceph/ceph/pull/25885
- 03:03 PM Bug #37721 (Pending Backport): mds crashes frequently when using snapshots in CephFS on mimic
- 03:02 PM Bug #36079 (Pending Backport): ceph-fuse: hang because it miss reconnect phase when hot standby m...
- 03:00 PM Bug #36189 (Pending Backport): ceph-fuse client can't read or write due to backward cap_gen
- 01:57 AM Backport #37092 (In Progress): luminous: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)...
01/07/2019
- 10:58 PM Cleanup #4387 (Resolved): mds: EMetaBlob::client_reqs doesn't need to be a list
- Addressed in d12f3e311854d372d69dbf998e552a875ea9f621.
- 10:57 PM Documentation #3335 (Rejected): doc: Explain kernel dynamic printk debugging
- mail thread link is dead
- 10:55 PM Feature #20: client: recover from a killed session (w/ blacklist)
- I'm going to suggest attacking this problem from the other direction.
- 09:11 PM Backport #36209 (In Progress): mimic: mds: runs out of file descriptors after several respawns
- 08:38 PM Backport #36503 (Resolved): mimic: qa: infinite timeout on asok command causes job to die
- 08:38 PM Bug #27657 (Resolved): mds: retry remounting in ceph-fuse on dcache invalidation
- 08:37 PM Backport #35932 (Resolved): mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- 08:26 PM Bug #26961 (Resolved): mds: fix instances of wrongly sending client messages outside of MDSRank::...
- 08:26 PM Backport #36147 (Rejected): luminous: mds: fix instances of wrongly sending client messages outsi...
- Closing this, the original fix has unknown value.
- 08:26 PM Backport #36148 (Rejected): mimic: mds: fix instances of wrongly sending client messages outside ...
- Closing this, the original fix has unknown value.
- 08:24 PM Backport #37608 (In Progress): luminous: MDS admin socket command `dump cache` with a very large ...
- 08:22 PM Bug #26926 (Resolved): mds: migrate strays part by part when shutdown mds
- 08:22 PM Backport #32091 (Resolved): luminous: mds: migrate strays part by part when shutdown mds
- 08:21 PM Backport #37606 (Resolved): luminous: mds: directories pinned keep being replicated back and fort...
- 04:15 PM Backport #37606: luminous: mds: directories pinned keep being replicated back and forth between e...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25522
merged
- 08:21 PM Backport #37604 (Resolved): luminous: mds: PurgeQueue write error handler does not handle EBLACKL...
- 04:14 PM Backport #37604: luminous: mds: PurgeQueue write error handler does not handle EBLACKLISTED
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25524
merged
- 08:20 PM Backport #37602 (Resolved): luminous: mds: severe internal fragment when decoding xattr_map from ...
- 04:16 PM Backport #37602: luminous: mds: severe internal fragment when decoding xattr_map from log event
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25520
merged
- 08:20 PM Backport #37425 (Resolved): luminous: ceph-volume-client: cannot set mode for cephfs volumes as r...
- 04:17 PM Backport #37425: luminous: ceph-volume-client: cannot set mode for cephfs volumes as required by ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25407
merged
- 08:19 PM Backport #37423 (Resolved): luminous: qa: wrong setting for msgr failures
- 04:16 PM Backport #37423: luminous: qa: wrong setting for msgr failures
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25518
merged
- 08:19 PM Bug #36320 (Resolved): mds: cache drop command requires timeout argument when it is supposed to b...
- 08:19 PM Backport #36695 (Resolved): luminous: mds: cache drop command requires timeout argument when it i...
- 08:19 PM Bug #36668 (Resolved): client: request next osdmap for blacklisted client
- 08:18 PM Backport #36691 (Resolved): luminous: client: request next osdmap for blacklisted client
- 04:19 PM Backport #36691: luminous: client: request next osdmap for blacklisted client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24986
merged
- 08:18 PM Backport #36642 (Resolved): luminous: Internal fragment of ObjectCacher
- 08:16 PM Bug #36221 (Resolved): mds: rctime not set on system inode (root) at startup
- 08:16 PM Backport #36460 (Resolved): luminous: mds: rctime not set on system inode (root) at startup
- 04:18 PM Backport #36460: luminous: mds: rctime not set on system inode (root) at startup
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25043
merged
- 08:16 PM Feature #36352 (Resolved): client: explicitly show blacklisted state via asok status command
- 08:16 PM Backport #36456 (Resolved): luminous: client: explicitly show blacklisted state via asok status c...
- 04:18 PM Backport #36456: luminous: client: explicitly show blacklisted state via asok status command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24994
merged
- 08:16 PM Feature #23362 (Resolved): mds: add drop_cache command
- 08:16 PM Backport #36281 (Resolved): luminous: mds: add drop_cache command
- 08:15 PM Bug #35828 (Resolved): qa: RuntimeError: FSCID 10 has no rank 1
- 08:15 PM Backport #36279 (Resolved): luminous: qa: RuntimeError: FSCID 10 has no rank 1
- 08:15 PM Bug #24780 (Resolved): Some cephfs tool commands silently operate on only rank 0, even if multipl...
- 08:15 PM Backport #36217 (Resolved): luminous: Some cephfs tool commands silently operate on only rank 0, ...
- 08:14 PM Bug #35961 (Resolved): nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- 08:14 PM Backport #36206 (Resolved): luminous: nfs-ganesha: ceph_fsal_setattr2 returned Operation not perm...
- 08:14 PM Bug #36093 (Resolved): mds: fix mds damaged due to unexpected journal length
- 08:14 PM Backport #36200 (Resolved): luminous: mds: fix mds damaged due to unexpected journal length
- 08:14 PM Bug #24858 (Resolved): qa: test_recovery_pool tries asok on wrong node
- 08:14 PM Backport #24929 (Resolved): luminous: qa: test_recovery_pool tries asok on wrong node
- 08:13 PM Bug #24238 (Resolved): test gets ENOSPC from bluestore block device
- 08:13 PM Backport #24759 (Resolved): luminous: test gets ENOSPC from bluestore block device
01/06/2019
- 12:44 PM Backport #36504 (In Progress): luminous: qa: infinite timeout on asok command causes job to die
- 12:21 PM Backport #37758 (In Progress): luminous: standby-replay MDS spews message to log every second
- 12:13 PM Backport #37757 (In Progress): mimic: standby-replay MDS spews message to log every second
01/04/2019
- 10:31 PM Bug #37787 (Fix Under Review): BUILDSTDERR: *** ERROR: ambiguous python shebang in /usr/sbin/moun...
- 06:17 AM Bug #37787 (In Progress): BUILDSTDERR: *** ERROR: ambiguous python shebang in /usr/sbin/mount.fus...
- 06:09 AM Bug #37787 (Resolved): BUILDSTDERR: *** ERROR: ambiguous python shebang in /usr/sbin/mount.fuse.c...
- Due to https://fedoraproject.org/wiki/Changes/Make_ambiguous_python_shebangs_error the Fedora rawhide build fails on ...
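The fix amounts to naming the interpreter version explicitly in the script's first line. A minimal illustration (the script body is a stand-in, not the actual mount helper):

```python
#!/usr/bin/python3
# An unversioned "#!/usr/bin/python" shebang is what rawhide now rejects;
# spelling out the interpreter version avoids the ambiguous-shebang error.
import sys

if __name__ == "__main__":
    sys.exit(0)
```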
- 04:06 AM Backport #37740 (In Progress): mimic: extend reconnect period when mds is busy
- 04:03 AM Backport #37739 (In Progress): luminous: extend reconnect period when mds is busy
- 01:41 AM Cleanup #37674 (Pending Backport): mds: create separate config for heartbeat timeout
01/03/2019
- 06:22 PM Feature #12282 (Resolved): mds: progress/abort/pause interface for ongoing scrubs
- Cancelling backport of this. This will just be a new Nautilus feature.
Good work Venky!
- 04:43 PM Backport #36695: luminous: mds: cache drop command requires timeout argument when it is supposed ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24468
merged
- 04:43 PM Backport #36281: luminous: mds: add drop_cache command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24468
merged
- 04:42 PM Backport #36217: luminous: Some cephfs tool commands silently operate on only rank 0, even if mul...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24728
merged
- 04:42 PM Backport #24759: luminous: test gets ENOSPC from bluestore block device
- merged https://github.com/ceph/ceph/pull/24924
- 04:40 PM Backport #32091: luminous: mds: migrate strays part by part when shutdown mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24324
merged
- 04:39 PM Backport #36200: luminous: mds: fix mds damaged due to unexpected journal length
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24440
merged
- 04:38 PM Backport #36206: luminous: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24465
merged
- 04:38 PM Backport #36279: luminous: qa: RuntimeError: FSCID 10 has no rank 1
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24552
merged
- 04:37 PM Backport #36642: luminous: Internal fragment of ObjectCacher
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24872
merged
- 04:34 PM Backport #24929: luminous: qa: test_recovery_pool tries asok on wrong node
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25569
merged
- 04:26 PM Backport #35932: mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24695
merged
- 02:45 PM Bug #37639 (Pending Backport): mds: output client IP of blacklisted/evicted clients to cluster log
- 05:06 AM Backport #37635 (In Progress): luminous: race of updating wanted caps
- https://github.com/ceph/ceph/pull/25762
01/02/2019