Bug #23848

closed

mds: stuck shutdown procedure

Added by Patrick Donnelly almost 6 years ago. Updated almost 6 years ago.

Status: Rejected
Priority: High
Assignee: -
Category: -
Target version:
% Done: 0%
Source: Development
Tags:
Backport: luminous
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): MDS
Labels (FS): multimds
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

The following output repeats in an infinite loop:

2018-04-25 03:59:12.370831 7f5fe3724700 20 mds.beacon.li569-18 0 slow request found
2018-04-25 03:59:13.863322 7f5fe2f23700 10 mds.beacon.li569-18 _send up:stopping seq 6125
2018-04-25 03:59:13.865970 7f5fe5f29700 10 mds.beacon.li569-18 handle_mds_beacon up:stopping seq 6125 rtt 0.002612
2018-04-25 03:59:17.370052 7f5fe3724700  7 mds.1.cache trim bytes_used=9kB limit=4GB reservation=0.05% count=0
2018-04-25 03:59:17.370120 7f5fe3724700  7 mds.1.cache trim_lru trimming 0 items from LRU size=2 mid=0 pintail=2 pinned=2
2018-04-25 03:59:17.370135 7f5fe3724700  7 mds.1.cache trim_lru trimmed 0 items
2018-04-25 03:59:17.370144 7f5fe3724700 20 mds.1.cache trim: maybe trimming base: [inode 0x1 [...2,head] / rep@0.1 v98 snaprealm=0x55bb5134a780 f(v0 m2018-04-25 03:45:17.892060 4=0+4) n(v15 rc2018-04-25 03:55:44.238018 b1629718024 56859=56846+13)/n(v0 1=0+1) (inest mix) (iversion lock) | dirfrag=1 dirwaiter=0 discoverbase=0 0x55bb50687500]
2018-04-25 03:59:17.370179 7f5fe3724700 10 mds.1.cache trim_client_leases
2018-04-25 03:59:17.370737 7f5fe3724700  2 mds.1.cache check_memory_usage total 415688, rss 34236, heap 313964, baseline 313964, buffers 0, 0 / 4 inodes have caps, 0 caps, 0 caps per inode
2018-04-25 03:59:17.370763 7f5fe3724700 10 mds.1.log trim 1 / 30 segments, 1 / -1 events, 0 (0) expiring, 0 (0) expired
2018-04-25 03:59:17.370770 7f5fe3724700 10 mds.1.log _trim_expired_segments waiting for 1/4194304 to expire
2018-04-25 03:59:17.370778 7f5fe3724700 10 mds.1.server find_idle_sessions.  laggy until 0.000000
2018-04-25 03:59:17.370784 7f5fe3724700 10 mds.1.locker scatter_tick
2018-04-25 03:59:17.370787 7f5fe3724700 20 mds.1.locker caps_tick 0 revoking caps
2018-04-25 03:59:17.370791 7f5fe3724700 20 mds.1.1445 updating export targets, currently 0 ranks are targets
2018-04-25 03:59:17.370794 7f5fe3724700 10 mds.1.log trim 1 / 30 segments, 1 / -1 events, 0 (0) expiring, 0 (0) expired
2018-04-25 03:59:17.370797 7f5fe3724700 10 mds.1.log _trim_expired_segments waiting for 1/4194304 to expire
2018-04-25 03:59:17.370800 7f5fe3724700  7 mds.1.cache shutdown_pass
2018-04-25 03:59:17.370803 7f5fe3724700 10 mds.1.cache shutdown_export_strays
2018-04-25 03:59:17.370808 7f5fe3724700  7 mds.1.cache trim bytes_used=9kB limit=4GB reservation=0.05% count=18446744073709551615
2018-04-25 03:59:17.370820 7f5fe3724700  7 mds.1.cache trim_lru trimming 18446744073709551615 items from LRU size=2 mid=0 pintail=2 pinned=2
2018-04-25 03:59:17.370831 7f5fe3724700  7 mds.1.cache trim_lru trimmed 0 items
2018-04-25 03:59:17.370835 7f5fe3724700 20 mds.1.cache trim: maybe trimming base: [inode 0x1 [...2,head] / rep@0.1 v98 snaprealm=0x55bb5134a780 f(v0 m2018-04-25 03:45:17.892060 4=0+4) n(v15 rc2018-04-25 03:55:44.238018 b1629718024 56859=56846+13)/n(v0 1=0+1) (inest mix) (iversion lock) | dirfrag=1 dirwaiter=0 discoverbase=0 0x55bb50687500]
2018-04-25 03:59:17.370850 7f5fe3724700  5 mds.1.cache lru size now 2/0
2018-04-25 03:59:17.370853 7f5fe3724700  7 mds.1.cache looking for subtrees to export to mds0
2018-04-25 03:59:17.370856 7f5fe3724700  7 mds.1.cache still have 2 subtrees
2018-04-25 03:59:17.370859 7f5fe3724700 15 mds.1.cache show_subtrees
2018-04-25 03:59:17.370867 7f5fe3724700 10 mds.1.cache |__ 1    auth [dir 0x101 ~mds1/ [2,head] auth v=1 cv=0/0 dir_auth=1 state=1073741824 f(v0 10=0+10) n(v0 10=0+10) hs=0+0,ss=0+0 | child=0 subtree=1 subtreetemp=0 waiter=0 authpin=0 0x55bb5079e000]
2018-04-25 03:59:17.370887 7f5fe3724700 10 mds.1.cache |__ 0     rep [dir 0x1 / [2,head] rep@0.1 REP dir_auth=0 state=0 f(v0 m2018-04-25 03:45:17.892060 4=0+4) n(v13 rc2018-04-25 03:55:40.094123 b1579203286 54824=54812+12)/n(v13 rc2018-04-25 03:55:39.878990 b1573821568 54445=54433+12) hs=2+0,ss=0+0 | dnwaiter=0 child=1 subtree=1 0x55bb5079ea80]
2018-04-25 03:59:17.370905 7f5fe3724700 10 mds.1.migrator show_importing
2018-04-25 03:59:17.370909 7f5fe3724700 10 mds.1.migrator  importing from 0: (2) discovered 0x100000003e9
2018-04-25 03:59:17.370916 7f5fe3724700 10 mds.1.migrator  importing from 0: (2) discovered 0x10000000bbb
2018-04-25 03:59:17.370919 7f5fe3724700 10 mds.1.migrator show_exporting
2018-04-25 03:59:17.370921 7f5fe3724700  7 mds.1.1445 shutdown_pass=false

This happened while deactivating rank 1, during an attempt to reproduce an unrelated issue with 2 active MDS daemons and some client I/O (crefi).
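For context, a deactivation like the one described above would be triggered with the luminous-era CLI roughly as follows (a sketch only; the filesystem name "cephfs" is an assumption, and the exact workload is not specified beyond "crefi"):

# Grow the filesystem to two active MDS ranks:
ceph fs set cephfs max_mds 2
# ... run client I/O against the mounts (e.g. crefi) ...
# Deactivate rank 1; it enters up:stopping and should reach down:stopped
# once its subtrees are exported back to rank 0 and its journal is trimmed:
ceph mds deactivate 1
# Observe the stopping rank's state:
ceph mds stat

In this report the rank remained in up:stopping, with shutdown_pass=false looping, because the stray export/subtree handoff never completed.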

Actions #1

Updated by Patrick Donnelly almost 6 years ago

  • Description updated
Actions #2

Updated by Patrick Donnelly almost 6 years ago

2018-04-25 04:01:10.485603 7f5fe5f29700  5 mds.li569-18 handle_mds_map epoch 1458 from mds.0
2018-04-25 04:01:10.485606 7f5fe5f29700  5 mds.li569-18  old map epoch 1458 <= 1458, discarding
2018-04-25 04:01:10.485639 7f5fe5f29700  7 mds.1.cache add_replica_dir added [dir 0x100000003e9 /tmp.z4bjp4xdRQ/ [2,head] rep@0.1 state=0 f(v0 m2018-04-25 03:45:17.855933 1=0+1) n(v12 rc2018-04-25 03:55:44.238018 b449796252 15880=15878+2) hs=0+0,ss=0+0 0x55bb5079e380] nonce 1
2018-04-25 04:01:10.485674 7f5fe5f29700  7 mds.1.migrator handle_export_prep on [dir 0x100000003e9 /tmp.z4bjp4xdRQ/ [2,head] rep@0.1 state=0 f(v0 m2018-04-25 03:45:17.855933 1=0+1) n(v12 rc2018-04-25 03:55:44.238018 b449796252 15880=15878+2) hs=0+0,ss=0+0 0x55bb5079e380] (first pass)
2018-04-25 04:01:10.485684 7f5fe5f29700 15 mds.1.cache show_subtrees
2018-04-25 04:01:10.485692 7f5fe5f29700 10 mds.1.cache |__ 1    auth [dir 0x101 ~mds1/ [2,head] auth v=1 cv=0/0 dir_auth=1 state=1073741824 f(v0 10=0+10) n(v0 10=0+10) hs=0+0,ss=0+0 | child=0 subtree=1 subtreetemp=0 waiter=0 authpin=0 0x55bb5079e000]
2018-04-25 04:01:10.485704 7f5fe5f29700 10 mds.1.cache |__ 0     rep [dir 0x1 / [2,head] rep@0.1 REP dir_auth=0 state=0 f(v0 m2018-04-25 03:45:17.892060 4=0+4) n(v13 rc2018-04-25 03:55:40.094123 b1579203286 54824=54812+12)/n(v13 rc2018-04-25 03:55:39.878990 b1573821568 54445=54433+12) hs=2+0,ss=0+0 | dnwaiter=0 child=1 subtree=1 0x55bb5079ea80]
2018-04-25 04:01:10.485714 7f5fe5f29700  7 mds.1.migrator doing assim on [dir 0x100000003e9 /tmp.z4bjp4xdRQ/ [2,head] rep@0.1 state=0 f(v0 m2018-04-25 03:45:17.855933 1=0+1) n(v12 rc2018-04-25 03:55:44.238018 b449796252 15880=15878+2) hs=0+0,ss=0+0 0x55bb5079e380]
...
2018-04-25 04:01:12.373739 7f5fe3724700 15 mds.1.cache.ino(0x1) close_snaprealm snaprealm(0x1 seq 1 lc 0 cr 0 cps 1 snaps={} 0x55bb5134a780)
2018-04-25 04:01:12.373752 7f5fe3724700  7 mds.1.cache sending cache_expire to 0
2018-04-25 04:01:12.373818 7f5fe3724700  5 mds.1.cache lru size now 0/0
2018-04-25 04:01:12.373823 7f5fe3724700  7 mds.1.cache looking for subtrees to export to mds0
2018-04-25 04:01:12.373828 7f5fe3724700 10 mds.1.log trim 1 / 30 segments, 1 / 0 events, 0 (0) expiring, 0 (0) expired
2018-04-25 04:01:12.373834 7f5fe3724700  6 mds.1.journal LogSegment(1/4194304).try_to_expire
2018-04-25 04:01:12.373841 7f5fe3724700  4 mds.1.sessionmap save_if_dirty: writing 0
2018-04-25 04:01:12.373845 7f5fe3724700  6 mds.1.journal LogSegment(1/4194304).try_to_expire success
2018-04-25 04:01:12.373847 7f5fe3724700 10 mds.1.log try_expire expired segment 1/4194304
2018-04-25 04:01:12.373850 7f5fe3724700  5 mds.1.log _expired segment 1/4194304, 1 events
2018-04-25 04:01:12.373853 7f5fe3724700  5 mds.1.log _expired not expiring 1/4194304, last one and !capped
2018-04-25 04:01:12.373857 7f5fe3724700 10 mds.1.log _trim_expired_segments waiting for 1/4194304 to expire
2018-04-25 04:01:12.373859 7f5fe3724700  7 mds.1.cache capping the log
2018-04-25 04:01:12.373861 7f5fe3724700  5 mds.1.log cap
2018-04-25 04:01:12.373863 7f5fe3724700 10 mds.1.log trim 1 / 30 segments, 1 / -1 events, 0 (0) expiring, 0 (0) expired
2018-04-25 04:01:12.373866 7f5fe3724700 10 mds.1.log _trim_expired_segments waiting for 1/4194304 to expire
2018-04-25 04:01:12.373867 7f5fe3724700  7 mds.1.cache waiting for log to flush.. 1 in 1 segments
2018-04-25 04:01:12.373870 7f5fe3724700  7 mds.1.1445 shutdown_pass=false
2018-04-25 04:01:12.373874 7f5fe3724700 20 mds.beacon.li569-18 0 slow request found
2018-04-25 04:01:13.101080 7f5fe5f29700  7 mds.1.locker get_lock don't have ino 0x100000003e9
2018-04-25 04:01:13.101087 7f5fe5f29700 10 mds.1.locker don't have object 0x100000003e9.head, must have trimmed, dropping
2018-04-25 04:01:13.102566 7f5fe5f29700  7 mds.1.locker get_lock don't have ino 0x10000000bbb
2018-04-25 04:01:13.102568 7f5fe5f29700 10 mds.1.locker don't have object 0x10000000bbb.head, must have trimmed, dropping
2018-04-25 04:01:13.869738 7f5fe2f23700 10 mds.beacon.li569-18 _send up:stopping seq 6155
2018-04-25 04:01:13.873017 7f5fe5f29700 10 mds.beacon.li569-18 handle_mds_beacon up:stopping seq 6155 rtt 0.003216
2018-04-25 04:01:17.057406 7f5fe5f29700  5 mds.li569-18 handle_mds_map epoch 1459 from mon.2
2018-04-25 04:01:17.057442 7f5fe5f29700 10 mds.li569-18      my compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=file layout v2}
2018-04-25 04:01:17.057469 7f5fe5f29700 10 mds.li569-18  mdsmap compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
2018-04-25 04:01:17.057473 7f5fe5f29700 10 mds.li569-18 map says I am 192.168.176.106:6800/2712896550 mds.1.1445 state up:stopping
2018-04-25 04:01:17.057477 7f5fe5f29700 10 mds.li569-18 handle_mds_map: handling map as rank 1
2018-04-25 04:01:17.372448 7f5fe3724700  7 mds.1.cache trim bytes_used=2kB limit=4GB reservation=0.05% count=0
2018-04-25 04:01:17.372492 7f5fe3724700  7 mds.1.cache trim_lru trimming 0 items from LRU size=0 mid=0 pintail=0 pinned=0
2018-04-25 04:01:17.372502 7f5fe3724700  7 mds.1.cache trim_lru trimmed 0 items
2018-04-25 04:01:17.372509 7f5fe3724700 10 mds.1.cache trim_client_leases
2018-04-25 04:01:17.372904 7f5fe3724700  2 mds.1.cache check_memory_usage total 415688, rss 34012, heap 313964, baseline 313964, buffers 0, 0 / 1 inodes have caps, 0 caps, 0 caps per inode
2018-04-25 04:01:17.372920 7f5fe3724700 10 mds.1.log trim 1 / 30 segments, 1 / -1 events, 0 (0) expiring, 0 (0) expired
2018-04-25 04:01:17.372925 7f5fe3724700 10 mds.1.log _trim_expired_segments waiting for 1/4194304 to expire
2018-04-25 04:01:17.372930 7f5fe3724700 10 mds.1.server find_idle_sessions.  laggy until 0.000000
2018-04-25 04:01:17.372935 7f5fe3724700 10 mds.1.locker scatter_tick
2018-04-25 04:01:17.372938 7f5fe3724700 20 mds.1.locker caps_tick 0 revoking caps
2018-04-25 04:01:17.372941 7f5fe3724700 20 mds.1.1445 updating export targets, currently 0 ranks are targets
2018-04-25 04:01:17.372943 7f5fe3724700 10 mds.1.log trim 1 / 30 segments, 1 / -1 events, 0 (0) expiring, 0 (0) expired
2018-04-25 04:01:17.372945 7f5fe3724700 10 mds.1.log _trim_expired_segments waiting for 1/4194304 to expire
2018-04-25 04:01:17.372946 7f5fe3724700  7 mds.1.cache shutdown_pass
2018-04-25 04:01:17.372948 7f5fe3724700 10 mds.1.cache shutdown_export_strays
2018-04-25 04:01:17.372951 7f5fe3724700  7 mds.1.cache trim bytes_used=2kB limit=4GB reservation=0.05% count=18446744073709551615
2018-04-25 04:01:17.372957 7f5fe3724700  7 mds.1.cache trim_lru trimming 18446744073709551615 items from LRU size=0 mid=0 pintail=0 pinned=0
2018-04-25 04:01:17.372963 7f5fe3724700  7 mds.1.cache trim_lru trimmed 0 items
2018-04-25 04:01:17.372965 7f5fe3724700  5 mds.1.cache lru size now 0/0
2018-04-25 04:01:17.372966 7f5fe3724700  7 mds.1.cache looking for subtrees to export to mds0
2018-04-25 04:01:17.372969 7f5fe3724700 10 mds.1.log trim 1 / 30 segments, 1 / 0 events, 0 (0) expiring, 0 (0) expired
2018-04-25 04:01:17.372972 7f5fe3724700  6 mds.1.journal LogSegment(1/4194304).try_to_expire
2018-04-25 04:01:17.372978 7f5fe3724700  4 mds.1.sessionmap save_if_dirty: writing 0
2018-04-25 04:01:17.372980 7f5fe3724700  6 mds.1.journal LogSegment(1/4194304).try_to_expire success
2018-04-25 04:01:17.372982 7f5fe3724700 10 mds.1.log try_expire expired segment 1/4194304
2018-04-25 04:01:17.372984 7f5fe3724700  5 mds.1.log _expired segment 1/4194304, 1 events
2018-04-25 04:01:17.372987 7f5fe3724700 10 mds.1.log _trim_expired_segments trimming expired 1/0x400000
2018-04-25 04:01:17.373113 7f5fe3724700  7 mds.1.cache writing header for (now-empty) journal
2018-04-25 04:01:17.373163 7f5fe3724700  7 mds.1.1445 shutdown_pass=false
2018-04-25 04:01:17.373168 7f5fe3724700 20 mds.beacon.li569-18 0 slow request found
2018-04-25 04:01:17.869858 7f5fe2f23700 10 mds.beacon.li569-18 _send up:stopping seq 6156
2018-04-25 04:01:17.873113 7f5fe5f29700 10 mds.beacon.li569-18 handle_mds_beacon up:stopping seq 6156 rtt 0.003222
2018-04-25 04:01:20.102572 7f5fe5f29700  5 mds.li569-18 handle_mds_map epoch 1460 from mon.2
2018-04-25 04:01:20.102607 7f5fe5f29700 10 mds.li569-18      my compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=file layout v2}
2018-04-25 04:01:20.102612 7f5fe5f29700 10 mds.li569-18  mdsmap compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
2018-04-25 04:01:20.102615 7f5fe5f29700 10 mds.li569-18 map says I am 192.168.176.106:6800/2712896550 mds.1.1445 state up:stopping
2018-04-25 04:01:20.102619 7f5fe5f29700 10 mds.li569-18 handle_mds_map: handling map as rank 1
2018-04-25 04:01:21.870002 7f5fe2f23700 10 mds.beacon.li569-18 _send up:stopping seq 6157
2018-04-25 04:01:21.873076 7f5fe5f29700 10 mds.beacon.li569-18 handle_mds_beacon up:stopping seq 6157 rtt 0.003041
2018-04-25 04:01:22.372565 7f5fe3724700  7 mds.1.cache trim bytes_used=2kB limit=4GB reservation=0.05% count=0
2018-04-25 04:01:22.372622 7f5fe3724700  7 mds.1.cache trim_lru trimming 0 items from LRU size=0 mid=0 pintail=0 pinned=0
2018-04-25 04:01:22.372635 7f5fe3724700  7 mds.1.cache trim_lru trimmed 0 items
2018-04-25 04:01:22.372643 7f5fe3724700 10 mds.1.cache trim_client_leases
2018-04-25 04:01:22.373201 7f5fe3724700  2 mds.1.cache check_memory_usage total 415688, rss 34012, heap 313964, baseline 313964, buffers 0, 0 / 1 inodes have caps, 0 caps, 0 caps per inode
2018-04-25 04:01:22.373227 7f5fe3724700 10 mds.1.log trim 0 / 30 segments, 0 / -1 events, 0 (0) expiring, 0 (0) expired
2018-04-25 04:01:22.373237 7f5fe3724700 10 mds.1.server find_idle_sessions.  laggy until 0.000000
2018-04-25 04:01:22.373245 7f5fe3724700 10 mds.1.locker scatter_tick
2018-04-25 04:01:22.373248 7f5fe3724700 20 mds.1.locker caps_tick 0 revoking caps
2018-04-25 04:01:22.373252 7f5fe3724700 20 mds.1.1445 updating export targets, currently 0 ranks are targets
2018-04-25 04:01:22.373256 7f5fe3724700 10 mds.1.log trim 0 / 30 segments, 0 / -1 events, 0 (0) expiring, 0 (0) expired
2018-04-25 04:01:22.373259 7f5fe3724700  7 mds.1.cache shutdown_pass
2018-04-25 04:01:22.373262 7f5fe3724700 10 mds.1.cache shutdown_export_strays
2018-04-25 04:01:22.373268 7f5fe3724700  7 mds.1.cache trim bytes_used=2kB limit=4GB reservation=0.05% count=18446744073709551615
2018-04-25 04:01:22.373279 7f5fe3724700  7 mds.1.cache trim_lru trimming 18446744073709551615 items from LRU size=0 mid=0 pintail=0 pinned=0
2018-04-25 04:01:22.373292 7f5fe3724700  7 mds.1.cache trim_lru trimmed 0 items
2018-04-25 04:01:22.373296 7f5fe3724700  5 mds.1.cache lru size now 0/0
2018-04-25 04:01:22.373299 7f5fe3724700  7 mds.1.cache looking for subtrees to export to mds0
2018-04-25 04:01:22.373302 7f5fe3724700 10 mds.1.log trim 0 / 30 segments, 0 / 0 events, 0 (0) expiring, 0 (0) expired
2018-04-25 04:01:22.373310 7f5fe3724700 10 mds.1.cache remove_subtree [dir 0x101 ~mds1/ [2,head] auth v=1 cv=0/0 dir_auth=1 state=1073741824 f(v0 10=0+10) n(v0 10=0+10) hs=0+0,ss=0+0 | child=0 subtree=1 subtreetemp=0 waiter=0 authpin=0 0x55bb5079e000]
2018-04-25 04:01:22.373334 7f5fe3724700 14 mds.1.cache.ino(0x101) close_dirfrag *
2018-04-25 04:01:22.373339 7f5fe3724700 12 mds.1.cache.dir(0x101) remove_null_dentries [dir 0x101 ~mds1/ [2,head] auth v=1 cv=0/0 dir_auth=1 state=1073741824 f(v0 10=0+10) n(v0 10=0+10) hs=0+0,ss=0+0 0x55bb5079e000]
2018-04-25 04:01:22.373358 7f5fe3724700 14 mds.1.cache remove_inode [inode 0x101 [...2,head] ~mds1/ auth v1 snaprealm=0x55bb50539980 f(v0 10=0+10) n(v0 11=0+11) (iversion lock) 0x55bb50686e00]
2018-04-25 04:01:22.373374 7f5fe3724700 15 mds.1.cache.ino(0x101) close_snaprealm snaprealm(0x101 seq 1 lc 0 cr 0 cps 1 snaps={} 0x55bb50539980)
2018-04-25 04:01:22.373410 7f5fe3724700  2 mds.1.cache shutdown done.
2018-04-25 04:01:22.373415 7f5fe3724700  7 mds.1.1445 shutdown_pass=true, finished w/ shutdown, moving to down:stopped
2018-04-25 04:01:22.373418 7f5fe3724700  2 mds.1.1445 stopping_done
2018-04-25 04:01:22.373421 7f5fe3724700  3 mds.1.1445 request_state down:stopped
2018-04-25 04:01:22.373424 7f5fe3724700 10 mds.beacon.li569-18 set_want_state: up:stopping -> down:stopped
2018-04-25 04:01:22.373443 7f5fe3724700 10 mds.beacon.li569-18 _send down:stopped seq 6158
2018-04-25 04:01:22.373532 7f5fe3724700 20 mds.beacon.li569-18 0 slow request found
2018-04-25 04:01:22.462566 7f59fc3a11c0  0 ceph version 12.2.1-11-gac31005 (ac31005c1ca3a663b20fc037c9a54a0e47412181) luminous (stable), process (unknown), pid 24767
2018-04-25 04:01:22.466402 7f59fc3a11c0  0 pidfile_write: ignore empty --pid-file
2018-04-25 04:01:26.582236 7f59f5457700  1 mds.li569-18 handle_mds_map standby

Finally finished...

Actions #3

Updated by Patrick Donnelly almost 6 years ago

  • Status changed from New to Rejected