Activity
From 12/13/2018 to 01/11/2019
01/11/2019
- 08:12 PM Backport #37820 (Resolved): luminous: mds: create separate config for heartbeat timeout
- 01:44 PM Backport #37820: luminous: mds: create separate config for heartbeat timeout
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25889
merged
- 06:32 PM Bug #37540 (Resolved): luminous: MDSMap session timeout cannot be modified
- 01:44 PM Backport #24912: luminous: qa: multifs requires 4 mds but gets only 2
- https://github.com/ceph/ceph/pull/25890 merged
- 03:09 AM Bug #36273: qa: add background task for some units which drops MDS cache
- Patrick Donnelly wrote:
> Venky, I believe you did some work on this?
I never got time to post the PR -- it needs...
- 12:59 AM Bug #36273: qa: add background task for some units which drops MDS cache
- Venky, I believe you did some work on this?
- 12:42 AM Backport #37829 (In Progress): luminous: ceph-fuse: hang because it misses reconnect phase when hot...
- https://github.com/ceph/ceph/pull/25904
- 12:40 AM Backport #37828 (In Progress): mimic: ceph-fuse: hang because it misses reconnect phase when hot st...
- https://github.com/ceph/ceph/pull/25903
01/10/2019
- 06:33 PM Cleanup #37864 (Fix Under Review): client: use Message smart ptr to manage Message lifetime
- 06:33 PM Cleanup #37864 (Resolved): client: use Message smart ptr to manage Message lifetime
- Follows #24306.
- 06:09 PM Backport #37627 (Resolved): luminous: mds: fix incorrect l_pq_executing_ops statistics when meet ...
- 04:33 PM Backport #37627: luminous: mds: fix incorrect l_pq_executing_ops statistics when meet an invalid ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25560
merged
- 06:08 PM Backport #37610 (Resolved): luminous: qa: pjd test appears to require more than 3h timeout for so...
- 04:33 PM Backport #37610: luminous: qa: pjd test appears to require more than 3h timeout for some configur...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25558
merged
- 06:08 PM Backport #37623 (Resolved): luminous: qa: client socket inaccessible without sudo
- 04:34 PM Backport #37623: luminous: qa: client socket inaccessible without sudo
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25516
merged
- 06:07 PM Backport #36577 (Resolved): luminous: qa: teuthology may hang on diagnostic commands for fuse mount
- 04:34 PM Backport #36577: luminous: qa: teuthology may hang on diagnostic commands for fuse mount
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25516
merged
- 06:06 PM Backport #37629 (Resolved): luminous: mds: do not call Journaler::_trim twice
- 04:32 PM Backport #37629: luminous: mds: do not call Journaler::_trim twice
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25562
merged
- 06:06 PM Backport #37608 (Resolved): luminous: MDS admin socket command `dump cache` with a very large cac...
- 04:32 PM Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will han...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25567
merged
- 05:05 PM Bug #37853: removing snapped dir succeeds but core dumps with bad backtrace in _purge_stray_purged
- Snapshots are not stable in Luminous. Why are you using them?
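Context for the snapshot question: in Luminous, CephFS snapshots are an experimental, off-by-default feature that must be explicitly enabled per filesystem. A minimal sketch of the opt-in, with the filesystem name illustrative:

    # snapshots are disabled by default on Luminous; enabling them is an
    # explicit opt-in (some releases also require --yes-i-really-mean-it)
    ceph fs set cephfs allow_new_snaps true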
- 07:29 AM Bug #37853: removing snapped dir succeeds but core dumps with bad backtrace in _purge_stray_purged
- reproduced a similar result:
2019-01-10 14:27:21.092670 7fa7bc8cc700 -1 log_channel(cluster) log [ERR] : bad backtrace...
- 07:21 AM Bug #37853 (Rejected): removing snapped dir succeeds but core dumps with bad backtrace in _purge_stray_...
- 2019-01-08 10:04:39.651547 7fc3d5e9a700 -1 log_channel(cluster) log [ERR] : bad backtrace on directory inode 0x100000...
- 01:57 PM Bug #26969: kclient: mount unexpectedly gets osdmap updates causing test to fail
- seen in luminous too: http://qa-proxy.ceph.com/teuthology/yuriw-2019-01-08_22:47:29-kcephfs-wip-yuri2-testing-2019-01...
- 05:17 AM Backport #24912: luminous: qa: multifs requires 4 mds but gets only 2
- https://github.com/ceph/ceph/pull/24328
https://github.com/ceph/ceph/pull/25890
- 03:50 AM Backport #37820 (In Progress): luminous: mds: create separate config for heartbeat timeout
- https://github.com/ceph/ceph/pull/25889
- 01:21 AM Backport #37818 (In Progress): mimic: mds crashes frequently when using snapshots in CephFS on mimic
- https://github.com/ceph/ceph/pull/25885
01/09/2019
- 05:21 AM Backport #37823 (In Progress): luminous: mds: output client IP of blacklisted/evicted clients to ...
- 05:18 AM Backport #37822 (In Progress): mimic: mds: output client IP of blacklisted/evicted clients to clu...
- 12:46 AM Bug #37837 (Fix Under Review): qa: test_damage expectations wrong for Truncate on some objects
- 12:35 AM Bug #37837 (In Progress): qa: test_damage expectations wrong for Truncate on some objects
- 12:34 AM Bug #37837 (Resolved): qa: test_damage expectations wrong for Truncate on some objects
- e.g. 500.* object expectation is ignored. Also, truncate on the directory contents objects (e.g. 1.00000..0) does not...
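For context, test_damage simulates this kind of corruption by rewriting or truncating metadata objects directly in RADOS; a minimal sketch of the truncate case, with the pool and object names illustrative:

    # truncate a CephFS metadata object to zero bytes to simulate damage
    # (pool and object names are illustrative)
    rados -p cephfs_metadata truncate 500.00000000 0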
01/08/2019
- 11:58 PM Bug #37543 (Pending Backport): mds: purge queue recovery hangs during boot if PQ journal is damaged
- 11:55 PM Bug #37836 (Fix Under Review): qa: test_damage performs truncate test on same object repeatedly
- 11:55 PM Bug #37836 (In Progress): qa: test_damage performs truncate test on same object repeatedly
- 11:51 PM Bug #37836 (Resolved): qa: test_damage performs truncate test on same object repeatedly
- Tested with 2fb665194f61914711454c2084eb1539bd3588b5^.
See: https://github.com/ceph/ceph/blob/1eb33745a894d238e451...
- 04:28 PM Backport #37829 (Resolved): luminous: ceph-fuse: hang because it misses reconnect phase when hot st...
- https://github.com/ceph/ceph/pull/25904
- 04:27 PM Backport #37828 (Resolved): mimic: ceph-fuse: hang because it misses reconnect phase when hot stand...
- https://github.com/ceph/ceph/pull/25903
- 04:26 PM Backport #37823 (Resolved): luminous: mds: output client IP of blacklisted/evicted clients to clu...
- https://github.com/ceph/ceph/pull/25858
- 04:26 PM Backport #37822 (Resolved): mimic: mds: output client IP of blacklisted/evicted clients to cluste...
- https://github.com/ceph/ceph/pull/25857
- 04:26 PM Backport #37820 (Resolved): luminous: mds: create separate config for heartbeat timeout
- https://github.com/ceph/ceph/pull/25889
- 04:25 PM Backport #37819 (Resolved): mimic: mds: create separate config for heartbeat timeout
- https://github.com/ceph/ceph/pull/26010
- 04:25 PM Backport #37818 (Resolved): mimic: mds crashes frequently when using snapshots in CephFS on mimic
- https://github.com/ceph/ceph/pull/25885
- 03:03 PM Bug #37721 (Pending Backport): mds crashes frequently when using snapshots in CephFS on mimic
- 03:02 PM Bug #36079 (Pending Backport): ceph-fuse: hang because it misses reconnect phase when hot standby m...
- 03:00 PM Bug #36189 (Pending Backport): ceph-fuse client can't read or write due to backward cap_gen
- 01:57 AM Backport #37092 (In Progress): luminous: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)...
01/07/2019
- 10:58 PM Cleanup #4387 (Resolved): mds: EMetaBlob::client_reqs doesn't need to be a list
- Addressed in d12f3e311854d372d69dbf998e552a875ea9f621.
- 10:57 PM Documentation #3335 (Rejected): doc: Explain kernel dynamic printk debugging
- mail thread link is dead
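For reference, the material the doc would have covered is the kernel's dynamic debug interface, which toggles the ceph kernel client's pr_debug output at runtime; a rough sketch, assuming a kernel built with CONFIG_DYNAMIC_DEBUG and a mounted debugfs:

    # enable all pr_debug output from the ceph and libceph modules
    echo 'module ceph +p' > /sys/kernel/debug/dynamic_debug/control
    echo 'module libceph +p' > /sys/kernel/debug/dynamic_debug/control
    # turn it back off
    echo 'module ceph -p' > /sys/kernel/debug/dynamic_debug/control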
- 10:55 PM Feature #20: client: recover from a killed session (w/ blacklist)
- I'm going to suggest attacking this problem from the other direction.
- 09:11 PM Backport #36209 (In Progress): mimic: mds: runs out of file descriptors after several respawns
- 08:38 PM Backport #36503 (Resolved): mimic: qa: infinite timeout on asok command causes job to die
- 08:38 PM Bug #27657 (Resolved): mds: retry remounting in ceph-fuse on dcache invalidation
- 08:37 PM Backport #35932 (Resolved): mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- 08:26 PM Bug #26961 (Resolved): mds: fix instances of wrongly sending client messages outside of MDSRank::...
- 08:26 PM Backport #36147 (Rejected): luminous: mds: fix instances of wrongly sending client messages outsi...
- Closing this; the original fix has unknown value.
- 08:26 PM Backport #36148 (Rejected): mimic: mds: fix instances of wrongly sending client messages outside ...
- Closing this; the original fix has unknown value.
- 08:24 PM Backport #37608 (In Progress): luminous: MDS admin socket command `dump cache` with a very large ...
- 08:22 PM Bug #26926 (Resolved): mds: migrate strays part by part when shutdown mds
- 08:22 PM Backport #32091 (Resolved): luminous: mds: migrate strays part by part when shutdown mds
- 08:21 PM Backport #37606 (Resolved): luminous: mds: directories pinned keep being replicated back and fort...
- 04:15 PM Backport #37606: luminous: mds: directories pinned keep being replicated back and forth between e...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25522
merged
- 08:21 PM Backport #37604 (Resolved): luminous: mds: PurgeQueue write error handler does not handle EBLACKL...
- 04:14 PM Backport #37604: luminous: mds: PurgeQueue write error handler does not handle EBLACKLISTED
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25524
merged
- 08:20 PM Backport #37602 (Resolved): luminous: mds: severe internal fragment when decoding xattr_map from ...
- 04:16 PM Backport #37602: luminous: mds: severe internal fragment when decoding xattr_map from log event
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25520
merged
- 08:20 PM Backport #37425 (Resolved): luminous: ceph-volume-client: cannot set mode for cephfs volumes as r...
- 04:17 PM Backport #37425: luminous: ceph-volume-client: cannot set mode for cephfs volumes as required by ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25407
merged
- 08:19 PM Backport #37423 (Resolved): luminous: qa: wrong setting for msgr failures
- 04:16 PM Backport #37423: luminous: qa: wrong setting for msgr failures
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25518
merged
- 08:19 PM Bug #36320 (Resolved): mds: cache drop command requires timeout argument when it is supposed to b...
- 08:19 PM Backport #36695 (Resolved): luminous: mds: cache drop command requires timeout argument when it i...
- 08:19 PM Bug #36668 (Resolved): client: request next osdmap for blacklisted client
- 08:18 PM Backport #36691 (Resolved): luminous: client: request next osdmap for blacklisted client
- 04:19 PM Backport #36691: luminous: client: request next osdmap for blacklisted client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24986
merged
- 08:18 PM Backport #36642 (Resolved): luminous: Internal fragment of ObjectCacher
- 08:16 PM Bug #36221 (Resolved): mds: rctime not set on system inode (root) at startup
- 08:16 PM Backport #36460 (Resolved): luminous: mds: rctime not set on system inode (root) at startup
- 04:18 PM Backport #36460: luminous: mds: rctime not set on system inode (root) at startup
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25043
merged
- 08:16 PM Feature #36352 (Resolved): client: explicitly show blacklisted state via asok status command
- 08:16 PM Backport #36456 (Resolved): luminous: client: explicitly show blacklisted state via asok status c...
- 04:18 PM Backport #36456: luminous: client: explicitly show blacklisted state via asok status command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24994
merged
- 08:16 PM Feature #23362 (Resolved): mds: add drop_cache command
- 08:16 PM Backport #36281 (Resolved): luminous: mds: add drop_cache command
- 08:15 PM Bug #35828 (Resolved): qa: RuntimeError: FSCID 10 has no rank 1
- 08:15 PM Backport #36279 (Resolved): luminous: qa: RuntimeError: FSCID 10 has no rank 1
- 08:15 PM Bug #24780 (Resolved): Some cephfs tool commands silently operate on only rank 0, even if multipl...
- 08:15 PM Backport #36217 (Resolved): luminous: Some cephfs tool commands silently operate on only rank 0, ...
- 08:14 PM Bug #35961 (Resolved): nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- 08:14 PM Backport #36206 (Resolved): luminous: nfs-ganesha: ceph_fsal_setattr2 returned Operation not perm...
- 08:14 PM Bug #36093 (Resolved): mds: fix mds damaged due to unexpected journal length
- 08:14 PM Backport #36200 (Resolved): luminous: mds: fix mds damaged due to unexpected journal length
- 08:14 PM Bug #24858 (Resolved): qa: test_recovery_pool tries asok on wrong node
- 08:14 PM Backport #24929 (Resolved): luminous: qa: test_recovery_pool tries asok on wrong node
- 08:13 PM Bug #24238 (Resolved): test gets ENOSPC from bluestore block device
- 08:13 PM Backport #24759 (Resolved): luminous: test gets ENOSPC from bluestore block device
01/06/2019
- 12:44 PM Backport #36504 (In Progress): luminous: qa: infinite timeout on asok command causes job to die
- 12:21 PM Backport #37758 (In Progress): luminous: standby-replay MDS spews message to log every second
- 12:13 PM Backport #37757 (In Progress): mimic: standby-replay MDS spews message to log every second
01/04/2019
- 10:31 PM Bug #37787 (Fix Under Review): BUILDSTDERR: *** ERROR: ambiguous python shebang in /usr/sbin/moun...
- 06:17 AM Bug #37787 (In Progress): BUILDSTDERR: *** ERROR: ambiguous python shebang in /usr/sbin/mount.fus...
- 06:09 AM Bug #37787 (Resolved): BUILDSTDERR: *** ERROR: ambiguous python shebang in /usr/sbin/mount.fuse.c...
- Due to https://fedoraproject.org/wiki/Changes/Make_ambiguous_python_shebangs_error the Fedora rawhide build fails on ...
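The rawhide check rejects shebangs that do not name a concrete Python major version; the usual fix is to pin the interpreter. A sketch of the before/after for such a script:

    # ambiguous: rejected by the rpmbuild check on rawhide
    #!/usr/bin/env python
    # unambiguous: names a concrete interpreter
    #!/usr/bin/python3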
- 04:06 AM Backport #37740 (In Progress): mimic: extend reconnect period when mds is busy
- 04:03 AM Backport #37739 (In Progress): luminous: extend reconnect period when mds is busy
- 01:41 AM Cleanup #37674 (Pending Backport): mds: create separate config for heartbeat timeout
01/03/2019
- 06:22 PM Feature #12282 (Resolved): mds: progress/abort/pause interface for ongoing scrubs
- Cancelling backport of this. This will just be a new Nautilus feature.
Good work Venky!
- 04:43 PM Backport #36695: luminous: mds: cache drop command requires timeout argument when it is supposed ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24468
merged
- 04:43 PM Backport #36281: luminous: mds: add drop_cache command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24468
merged
- 04:42 PM Backport #36217: luminous: Some cephfs tool commands silently operate on only rank 0, even if mul...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24728
merged
- 04:42 PM Backport #24759: luminous: test gets ENOSPC from bluestore block device
- merged https://github.com/ceph/ceph/pull/24924
- 04:40 PM Backport #32091: luminous: mds: migrate strays part by part when shutdown mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24324
merged
- 04:39 PM Backport #36200: luminous: mds: fix mds damaged due to unexpected journal length
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24440
merged
- 04:38 PM Backport #36206: luminous: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24465
merged
- 04:38 PM Backport #36279: luminous: qa: RuntimeError: FSCID 10 has no rank 1
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24552
merged
- 04:37 PM Backport #36642: luminous: Internal fragment of ObjectCacher
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24872
merged
- 04:34 PM Backport #24929: luminous: qa: test_recovery_pool tries asok on wrong node
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25569
merged
- 04:26 PM Backport #35932: mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24695
merged
- 02:45 PM Bug #37639 (Pending Backport): mds: output client IP of blacklisted/evicted clients to cluster log
- 05:06 AM Backport #37635 (In Progress): luminous: race of updating wanted caps
- https://github.com/ceph/ceph/pull/25762
01/02/2019
- 08:32 AM Bug #37721 (Fix Under Review): mds crashes frequently when using snapshots in CephFS on mimic
12/25/2018
- 10:46 PM Backport #37762 (Resolved): luminous: mds: deadlock when setting config value via admin socket
- https://github.com/ceph/ceph/pull/25833
- 10:46 PM Backport #37761 (Rejected): mimic: mds: deadlock when setting config value via admin socket
- https://github.com/ceph/ceph/pull/29664
- 10:45 PM Backport #37760 (Resolved): luminous: mds: mds state change race
- https://github.com/ceph/ceph/pull/26005
- 10:45 PM Backport #37759 (Resolved): mimic: mds: mds state change race
- https://github.com/ceph/ceph/pull/26051
- 10:45 PM Backport #37758 (Resolved): luminous: standby-replay MDS spews message to log every second
- https://github.com/ceph/ceph/pull/25804
- 10:45 PM Backport #37757 (Resolved): mimic: standby-replay MDS spews message to log every second
- https://github.com/ceph/ceph/pull/25803
- 06:21 PM Bug #37670 (Pending Backport): standby-replay MDS spews message to log every second
- 05:56 AM Backport #36502 (In Progress): luminous: qa: increase rm timeout for workunit cleanup
12/24/2018
- 10:17 AM Documentation #37746: doc: how to mount a subdir with ceph-fuse/kclient
- In order not to lose the linked info from https://forum.proxmox.com/threads/mount-cephfs-using-ceph-fuse-on-boot.23608/,...
- 10:16 AM Documentation #37746 (Resolved): doc: how to mount a subdir with ceph-fuse/kclient
I looked in these links:
http://docs.ceph.com/docs/master/cephfs/fuse/
http://docs.ceph.com/docs/master/cephfs/fs...
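For reference, both clients already support subdirectory mounts; the doc just needs to spell it out. Hedged examples, with hosts and paths illustrative:

    # ceph-fuse: mount only the given subdirectory of the filesystem
    ceph-fuse -r /some/subdir /mnt/cephfs
    # kernel client: put the subdirectory in the device path
    mount -t ceph mon-host:6789:/some/subdir /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret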
12/23/2018
- 03:24 AM Backport #37737 (In Progress): luminous: MDSMonitor: ignores stopping MDS that was formerly laggy
- 03:20 AM Backport #37738 (In Progress): mimic: MDSMonitor: ignores stopping MDS that was formerly laggy
12/22/2018
- 08:33 PM Bug #37594 (Pending Backport): mds: mds state change race
- 08:32 PM Bug #24823 (Pending Backport): mds: deadlock when setting config value via admin socket
- 02:33 PM Backport #37740 (Resolved): mimic: extend reconnect period when mds is busy
- https://github.com/ceph/ceph/pull/25785
- 02:33 PM Backport #37739 (Resolved): luminous: extend reconnect period when mds is busy
- https://github.com/ceph/ceph/pull/25784
- 02:33 PM Backport #37738 (Resolved): mimic: MDSMonitor: ignores stopping MDS that was formerly laggy
- https://github.com/ceph/ceph/pull/25685
- 02:32 PM Backport #37737 (Resolved): luminous: MDSMonitor: ignores stopping MDS that was formerly laggy
- https://github.com/ceph/ceph/pull/25686
- 04:42 AM Backport #37631 (In Progress): luminous: client: do not move f->pos until successful write
- 04:38 AM Backport #37630 (In Progress): mimic: client: do not move f->pos until successful write
- 04:34 AM Backport #37633 (In Progress): luminous: mds: remove duplicated l_mdc_num_strays perfcounter set
- 04:30 AM Backport #37632 (In Progress): mimic: mds: remove duplicated l_mdc_num_strays perfcounter set
- 04:13 AM Backport #37634 (In Progress): mimic: race of updating wanted caps
- 04:04 AM Backport #37696 (In Progress): luminous: client: fix failure in quota size limitation when using ...
- 03:53 AM Backport #37695 (In Progress): mimic: client: fix failure in quota size limitation when using samba
- 03:48 AM Backport #37700 (In Progress): luminous: fuse client can't read file due to can't acquire Fr
- 03:44 AM Backport #37699 (In Progress): mimic: fuse client can't read file due to can't acquire Fr
- 01:18 AM Bug #37644 (Pending Backport): extend reconnect period when mds is busy
- 01:13 AM Bug #37724 (Pending Backport): MDSMonitor: ignores stopping MDS that was formerly laggy
12/21/2018
- 10:30 AM Bug #23262: kclient: nofail option not supported
- any update on this?
thanks!
12/20/2018
- 08:06 PM Bug #37724 (Fix Under Review): MDSMonitor: ignores stopping MDS that was formerly laggy
- 03:50 PM Bug #37724 (Resolved): MDSMonitor: ignores stopping MDS that was formerly laggy
- An MDS that was marked laggy (but not removed) is ignored by the MDSMonitor if it is stopping:...
- 05:56 PM Bug #37725: mds: stopping MDS with subtrees pinned cannot finish stopping
- Actually, this just seems to be really slow when there are lots of subtrees (and large cache without outstanding caps...
- 04:30 PM Bug #37725 (Can't reproduce): mds: stopping MDS with subtrees pinned cannot finish stopping
- Apparently due to checks that prevent export of pinned directories.
This should be reproducible with:
ceph fs s...
- 05:46 PM Bug #37726 (Resolved): mds: high debug logging with many subtrees is slow
- In various places the MDS prints subtrees to the debug log. We should truncate the list if the number of subtrees is ...
- 03:14 PM Bug #37723 (Resolved): mds: stopping MDS with a large cache (40+GB) causes it to miss heartbeats
- ...
- 12:34 PM Bug #37721 (Resolved): mds crashes frequently when using snapshots in CephFS on mimic
- After we started to use snapshots in CephFS we have seen frequent crashes (every couple of hours) of the active mds daem...
- 06:40 AM Backport #37609 (In Progress): mimic: MDS admin socket command `dump cache` with a very large cac...
12/19/2018
- 12:17 AM Bug #37543 (Fix Under Review): mds: purge queue recovery hangs during boot if PQ journal is damaged
12/18/2018
- 11:11 AM Backport #37700 (Resolved): luminous: fuse client can't read file due to can't acquire Fr
- https://github.com/ceph/ceph/pull/25677
- 11:11 AM Backport #37699 (Resolved): mimic: fuse client can't read file due to can't acquire Fr
- https://github.com/ceph/ceph/pull/25676
- 11:10 AM Backport #37696 (Rejected): luminous: client: fix failure in quota size limitation when using samba
- 11:10 AM Backport #37695 (Resolved): mimic: client: fix failure in quota size limitation when using samba
- https://github.com/ceph/ceph/pull/25678
- 04:18 AM Bug #37547 (Pending Backport): client: fix failure in quota size limitation when using samba
- 04:17 AM Bug #37333 (Pending Backport): fuse client can't read file due to can't acquire Fr
- 04:10 AM Bug #37681 (Resolved): qa: power off still resulted in client sending session close
- ...
12/17/2018
- 10:34 PM Feature #37678 (Fix Under Review): mds: log new client sessions with various metadata
- 10:27 PM Feature #37678 (Resolved): mds: log new client sessions with various metadata
- Including time to create/journal the new session, any throttling on the new session message, mount point, and client ...
- 04:35 PM Cleanup #37674 (Fix Under Review): mds: create separate config for heartbeat timeout
- 04:30 PM Cleanup #37674 (Resolved): mds: create separate config for heartbeat timeout
- Currently the MDS uses the mds_beacon_grace for the heartbeat timeout. If we need to increase the beacon grace becaus...
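With the split in place, the two timeouts can be tuned independently. A hedged sketch, assuming the new option is named mds_heartbeat_grace (per the fix under review) and with the daemon name and values illustrative:

    # raise the mon's tolerance for missed beacons without also
    # inflating the MDS-internal heartbeat timeout
    ceph tell mds.a injectargs '--mds_beacon_grace 60'
    ceph tell mds.a injectargs '--mds_heartbeat_grace 15'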
- 02:52 PM Bug #37670 (Fix Under Review): standby-replay MDS spews message to log every second
- 02:40 PM Bug #37670 (Resolved): standby-replay MDS spews message to log every second
- I used the mgr volumes module on my rook cluster to create a new cephfs. The orchestrator started up two MDS daemons:...
- 02:37 PM Bug #37547 (Fix Under Review): client: fix failure in quota size limitation when using samba
- 04:13 AM Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will han...
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > btw, cannot see the `Pull request ID` section to update the PR i...
12/14/2018
- 07:32 PM Backport #24929 (In Progress): luminous: qa: test_recovery_pool tries asok on wrong node
- 07:12 PM Bug #37617: CephFS did not recover re-plugging network cable
- Patrick Donnelly wrote:
> We do not have a tracker yet. Work is planned in the near future on this and we'll create ...
- 05:26 PM Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will han...
- Venky Shankar wrote:
> btw, cannot see the `Pull request ID` section to update the PR id...
Backports still follo...
- 04:53 PM Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will han...
- btw, cannot see the `Pull request ID` section to update the PR id...
- 10:36 AM Backport #37608 (Need More Info): luminous: MDS admin socket command `dump cache` with a very lar...
- 10:21 AM Backport #37608 (In Progress): luminous: MDS admin socket command `dump cache` with a very large ...
- 10:37 AM Backport #37609 (Need More Info): mimic: MDS admin socket command `dump cache` with a very large ...
- 10:18 AM Backport #37609 (In Progress): mimic: MDS admin socket command `dump cache` with a very large cac...
- 10:26 AM Backport #37629 (In Progress): luminous: mds: do not call Journaler::_trim twice
- 10:25 AM Backport #37628 (In Progress): mimic: mds: do not call Journaler::_trim twice
- 10:24 AM Backport #37627 (In Progress): luminous: mds: fix incorrect l_pq_executing_ops statistics when me...
- 10:24 AM Backport #37626 (In Progress): mimic: mds: fix incorrect l_pq_executing_ops statistics when meet ...
- 10:23 AM Backport #37610 (In Progress): luminous: qa: pjd test appears to require more than 3h timeout for...
- 10:22 AM Backport #37611 (In Progress): mimic: qa: pjd test appears to require more than 3h timeout for so...
12/13/2018
- 11:31 PM Bug #37617: CephFS did not recover re-plugging network cable
- Niklas Hambuechen wrote:
> Hey Patrick,
>
> > Currently, it is necessary to restart the client when this happens....
- 11:22 PM Bug #37617: CephFS did not recover re-plugging network cable
- Hey Patrick,
> Currently, it is necessary to restart the client when this happens.
Is there already a feature r...
- 05:26 PM Bug #37617 (Rejected): CephFS did not recover re-plugging network cable
- > I would expect Ceph to recover automatically from this short 11-minute network interruption.
Ceph will recover b...
- 11:29 PM Feature #9755 (Resolved): Fence late clients during reconnect timeout
- This has been corrected but this issue was never closed.
- 01:35 PM Bug #37644 (Fix Under Review): extend reconnect period when mds is busy
- 01:25 PM Bug #37644 (Resolved): extend reconnect period when mds is busy
- 03:59 AM Bug #21754 (Rejected): mds: src/osdc/Journaler.cc: 402: FAILED assert(!r)
- Seems this no longer happens.
- 01:17 AM Backport #37481: luminous: mds: MDCache.cc: 11673: abort()
- this one too please
- 01:16 AM Backport #37480: mimic: mds: MDCache.cc: 11673: abort()
- Zheng, please handle this.
- 01:01 AM Bug #37639 (Fix Under Review): mds: output client IP of blacklisted/evicted clients to cluster log