Bug #13135 (closed): high mon memory usage after upgrade firefly -> hammer

Added by Corin Langosch over 8 years ago. Updated over 8 years ago.

Status: Rejected
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Yesterday I upgraded a firefly cluster to hammer 0.94.3. More info about my cluster and another issue can be found here: http://tracker.ceph.com/issues/13134

I noticed the mon daemons are constantly growing in memory usage, starting at 50-80 MB and growing to over 2000 MB (at which point I restarted them). The cluster has been in a healthy state ever since I last restarted all mons, yet after only about 30 minutes the memory usage of, for example, mon.c grew from 80 MB to 190 MB. Here are the memory usages:

root@r-ch103:~# ps aux | grep ceph
root     12081  2.9  0.9 1020820 640492 ?      Sl   Sep16  19:16 /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf --cluster ceph
root     19382 23.2  0.7 1637528 525732 ?      Ssl  Sep16 141:49 /usr/bin/ceph-osd -i 16 --pid-file /var/run/ceph/osd.16.pid -c /etc/ceph/ceph.conf --cluster ceph
root     27656  0.0  0.0   9388   928 pts/0    S+   08:18   0:00 grep --color=auto ceph
root     28243 21.1  0.7 1663072 498556 ?      Ssl  Sep16 125:01 /usr/bin/ceph-osd -i 3 --pid-file /var/run/ceph/osd.3.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch103:~# /etc/init.d/ceph restart mon
=== mon.a ===
=== mon.a ===
Stopping Ceph mon.a on r-ch103...kill 12081...done
=== mon.a ===
Starting Ceph mon.a on r-ch103...
2015-09-17 08:19:29.843757 7eff5a918780 -1 compacting monitor store ...
Starting ceph-create-keys on r-ch103...
root@r-ch103:~# ps aux | grep ceph
root     19382 23.2  0.7 1637528 524852 ?      Ssl  Sep16 142:13 /usr/bin/ceph-osd -i 16 --pid-file /var/run/ceph/osd.16.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28068  5.0  0.0 224456 36252 pts/0    Sl   08:19   0:01 /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28205  0.0  0.0   9388   932 pts/0    S+   08:20   0:00 grep --color=auto ceph
root     28243 21.1  0.7 1662304 497800 ?      Ssl  Sep16 125:22 /usr/bin/ceph-osd -i 3 --pid-file /var/run/ceph/osd.3.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch103:~# ps aux | grep ceph
root     19382 23.2  0.7 1637016 508476 ?      Ssl  Sep16 143:07 /usr/bin/ceph-osd -i 16 --pid-file /var/run/ceph/osd.16.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28068  2.0  0.0 252144 56416 pts/0    Sl   08:19   0:05 /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28243 21.1  0.7 1663072 478140 ?      Ssl  Sep16 126:15 /usr/bin/ceph-osd -i 3 --pid-file /var/run/ceph/osd.3.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28667  0.0  0.0   9388   932 pts/0    S+   08:23   0:00 grep --color=auto ceph
root@r-ch103:~# ps aux | grep ceph
root     19382 23.2  0.7 1637528 482944 ?      Ssl  Sep16 145:13 /usr/bin/ceph-osd -i 16 --pid-file /var/run/ceph/osd.16.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28068  1.8  0.1 276136 80460 pts/0    Sl   08:19   0:15 /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28243 21.1  0.6 1663072 460252 ?      Ssl  Sep16 128:12 /usr/bin/ceph-osd -i 3 --pid-file /var/run/ceph/osd.3.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29426  0.0  0.0   9388   932 pts/0    S+   08:33   0:00 grep --color=auto ceph
root@r-ch103:~# ps aux | grep ceph
root     19382 23.2  0.7 1637528 461884 ?      Ssl  Sep16 148:52 /usr/bin/ceph-osd -i 16 --pid-file /var/run/ceph/osd.16.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28068  1.9  0.2 329844 134076 pts/0   Sl   08:19   0:34 /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28243 21.1  0.6 1662176 452392 ?      Ssl  Sep16 131:33 /usr/bin/ceph-osd -i 3 --pid-file /var/run/ceph/osd.3.pid -c /etc/ceph/ceph.conf --cluster ceph
root     30495  0.0  0.0   9388   928 pts/0    S+   08:49   0:00 grep --color=auto ceph
root@r-ch103:~# ps aux | grep ceph
root     19382 23.2  0.6 1637528 453500 ?      Ssl  Sep16 151:13 /usr/bin/ceph-osd -i 16 --pid-file /var/run/ceph/osd.16.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28068  2.0  0.2 368436 171068 pts/0   Sl   08:19   0:48 /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28243 21.1  0.6 1663072 431108 ?      Ssl  Sep16 133:47 /usr/bin/ceph-osd -i 3 --pid-file /var/run/ceph/osd.3.pid -c /etc/ceph/ceph.conf --cluster ceph
root     31300  0.0  0.0   9388   932 pts/0    S+   08:59   0:00 grep --color=auto ceph
root@r-ch104:~# ps aux | grep ceph
root     13772  0.0  0.0   9384   928 pts/0    S+   08:18   0:00 grep --color=auto ceph
root     23698  1.7  3.6 2659996 2416880 ?     Sl   Sep16  11:24 /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf --cluster ceph
root     26540 20.7  0.7 1620936 504968 ?      Ssl  Sep16 131:21 /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29198 22.3  0.7 1728640 510976 ?      Ssl  Sep16 136:12 /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch104:~# /etc/init.d/ceph restart mon
=== mon.b ===
=== mon.b ===
Stopping Ceph mon.b on r-ch104...kill 23698...done
=== mon.b ===
Starting Ceph mon.b on r-ch104...
2015-09-17 08:19:18.013282 7fa8e167a780 -1 compacting monitor store ...
Starting ceph-create-keys on r-ch104...
root@r-ch104:~# ps aux | grep ceph
root     14082  3.9  0.0 326068 57272 pts/0    Sl   08:19   0:01 /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf --cluster ceph
root     14336  0.0  0.0   9384   932 pts/0    S+   08:20   0:00 grep --color=auto ceph
root     26540 20.7  0.7 1620936 487968 ?      Ssl  Sep16 131:41 /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29198 22.3  0.7 1728640 511332 ?      Ssl  Sep16 136:34 /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch104:~# ps aux | grep ceph
root     14082  1.6  0.1 354812 82920 pts/0    Sl   08:19   0:04 /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf --cluster ceph
root     14431  0.0  0.0   9384   924 pts/0    S+   08:23   0:00 grep --color=auto ceph
root     26540 20.8  0.7 1620936 487556 ?      Ssl  Sep16 132:34 /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29198 22.3  0.7 1728640 493852 ?      Ssl  Sep16 137:33 /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch104:~# ps aux | grep ceph
root     14082  1.5  0.1 382004 111520 pts/0   Sl   08:19   0:12 /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf --cluster ceph
root     14722  0.0  0.0   9384   932 pts/0    S+   08:33   0:00 grep --color=auto ceph
root     26540 20.8  0.7 1620936 472456 ?      Ssl  Sep16 134:34 /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29198 22.3  0.7 1728640 481416 ?      Ssl  Sep16 139:40 /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch104:~# ps aux | grep ceph
root     14082  1.5  0.2 441696 171620 pts/0   Sl   08:19   0:27 /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf --cluster ceph
root     15222  0.0  0.0   9384   932 pts/0    S+   08:49   0:00 grep --color=auto ceph
root     26540 20.8  0.6 1620936 438600 ?      Ssl  Sep16 138:05 /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29198 22.4  0.6 1728640 453088 ?      Ssl  Sep16 143:36 /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch104:~# ps aux | grep ceph
root     14082  1.6  0.3 482292 210324 pts/0   Sl   08:19   0:39 /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf --cluster ceph
root     15708  0.0  0.0   9384   932 pts/0    S+   09:00   0:00 grep --color=auto ceph
root     26540 20.8  0.6 1620936 430284 ?      Ssl  Sep16 140:24 /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29198 22.4  0.6 1728640 425096 ?      Ssl  Sep16 146:07 /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:~# ps aux | grep ceph
root     20835  0.0  0.0   9384   932 pts/0    S+   08:10   0:00 grep --color=auto ceph
root     23550  1.7  3.5 2659800 2364780 ?     Sl   Sep16  11:18 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     25512 21.3  0.8 1671084 555572 ?      Ssl  Sep16 137:32 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28444 22.7  1.0 1728056 668176 ?      Ssl  Sep16 142:01 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 25.8  0.7 1506260 466856 ?      Ssl  08:16   1:48 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.5  0.1 397484 83584 pts/0    Sl   08:18   0:04 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22744  0.0  0.0   9388   932 pts/0    R+   08:23   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 521548 ?      Ssl  Sep16 140:45 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 25.7  0.7 1506260 467868 ?      Ssl  08:16   1:49 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.6  0.1 399052 85124 pts/0    Sl   08:18   0:05 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22746  0.0  0.0   9388   932 pts/0    R+   08:24   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 521752 ?      Ssl  Sep16 140:47 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 25.7  0.7 1506260 467868 ?      Ssl  08:16   1:49 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.6  0.1 399052 85124 pts/0    Sl   08:18   0:05 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22748  0.0  0.0   9388   932 pts/0    R+   08:24   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 521936 ?      Ssl  Sep16 140:47 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 22.0  0.7 1512428 469344 ?      Ssl  08:16   3:40 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.4  0.1 418364 104800 pts/0   Sl   08:18   0:12 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     23295  0.0  0.0   9388   932 pts/0    R+   08:33   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 505556 ?      Ssl  Sep16 142:44 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 22.0  0.6 1512428 444644 ?      Ssl  08:16   3:40 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.4  0.1 418364 104876 pts/0   Sl   08:18   0:12 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     23299  0.0  0.0   9388   928 pts/0    S+   08:33   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 505556 ?      Ssl  Sep16 142:44 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 22.0  0.6 1512428 444660 ?      Ssl  08:16   3:40 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.4  0.1 418364 103960 pts/0   Sl   08:18   0:12 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     23301  0.0  0.0   9388   928 pts/0    S+   08:33   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 505812 ?      Ssl  Sep16 142:45 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 22.0  0.6 1512428 444660 ?      Ssl  08:16   3:40 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.4  0.1 418364 103960 pts/0   Sl   08:18   0:12 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     23303  0.0  0.0   9388   928 pts/0    R+   08:33   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 505812 ?      Ssl  Sep16 142:45 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 22.0  0.6 1512428 443740 ?      Ssl  08:16   3:40 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.4  0.1 418364 105524 pts/0   Sl   08:18   0:12 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     23325  0.0  0.0   9388   932 pts/0    R+   08:33   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 505812 ?      Ssl  Sep16 142:45 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 22.0  0.6 1512428 443740 ?      Ssl  08:16   3:40 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.4  0.1 418364 105524 pts/0   Sl   08:18   0:12 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     23328  0.0  0.0   9388   932 pts/0    R+   08:33   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 505812 ?      Ssl  Sep16 142:45 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 22.0  0.6 1512428 443740 ?      Ssl  08:16   3:40 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.4  0.1 418364 105524 pts/0   Sl   08:18   0:12 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     23330  0.0  0.0   9388   932 pts/0    S+   08:33   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 505812 ?      Ssl  Sep16 142:45 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 21.7  0.6 1516540 428032 ?      Ssl  08:16   7:05 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.4  0.2 469632 157040 pts/0   Sl   08:18   0:26 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     23950  0.0  0.0   9388   928 pts/0    S+   08:49   0:00 grep --color=auto ceph
root     25512 21.4  0.7 1671980 470492 ?      Ssl  Sep16 146:24 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root@r-ch105:/var/log/ceph# ps aux | grep ceph
root     21467 21.7  0.6 1517696 408620 ?      Ssl  08:16   9:24 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.4  0.2 508536 194144 pts/0   Sl   08:18   0:36 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     24513  0.0  0.0   9388   932 pts/0    S+   09:00   0:00 grep --color=auto ceph
root     25512 21.4  0.6 1671980 453616 ?      Ssl  Sep16 148:49 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
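To keep watching this without manual ps runs, a loop along these lines would sample the mon RSS once a minute (a sketch, not something from the sessions above):

# Log the resident set size of every ceph-mon process once per minute.
while true; do
    date
    ps -C ceph-mon -o pid,rss,etime
    sleep 60
done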
  • Why do the memory usages differ so much (mon.a at about 640 MB, while mon.b and mon.c are around 2000 MB)?
  • What can I do to investigate and fix this further? (It's a production cluster, so I can't "experiment freely"; one candidate approach is sketched below.)
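One low-risk starting point (assuming the stock packages are built with tcmalloc, whose heap profiler Ceph exposes at runtime; the dump path below is a guess based on the default log directory):

# Query tcmalloc's view of the heap and ask it to return freed pages:
ceph tell mon.a heap stats
ceph tell mon.a heap release

# If the usage is real (not just unreturned free pages), profile for a
# while and inspect the dump with pprof from google-perftools:
ceph tell mon.a heap start_profiler
ceph tell mon.a heap dump
ceph tell mon.a heap stop_profiler
pprof --text /usr/bin/ceph-mon /var/log/ceph/mon.a.profile.0001.heap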
#1

Updated by Corin Langosch over 8 years ago

Update about 1 hour after the mon restart (cluster healthy the whole time):

root@r-ch103:~# date; ps aux | grep ceph
Thu Sep 17 09:19:02 CEST 2015
root     19382 23.2  0.6 1637656 432892 ?      Ssl  Sep16 155:27 /usr/bin/ceph-osd -i 16 --pid-file /var/run/ceph/osd.16.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28068  2.0  0.3 402200 206052 pts/0   Sl   08:19   1:14 /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28243 21.1  0.6 1663072 413328 ?      Ssl  Sep16 137:39 /usr/bin/ceph-osd -i 3 --pid-file /var/run/ceph/osd.3.pid -c /etc/ceph/ceph.conf --cluster ceph
root     32755  0.0  0.0   9388   932 pts/0    S+   09:19   0:00 grep --color=auto ceph

root@r-ch104:~# date; ps aux | grep ceph
Thu Sep 17 09:19:00 CEST 2015
root     14082  1.6  0.4 576804 305116 pts/0   Sl   08:19   1:00 /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf --cluster ceph
root     16404  0.0  0.0   9384   924 pts/0    S+   09:18   0:00 grep --color=auto ceph
root     26540 20.8  0.6 1620936 438476 ?      Ssl  Sep16 144:17 /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29198 22.4  0.5 1728128 390352 ?      Ssl  Sep16 150:22 /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf --cluster ceph

root@r-ch105:/var/log/ceph# date; ps aux | grep ceph
Thu Sep 17 09:18:54 CEST 2015
root     21467 21.2  0.5 1517696 388012 ?      Ssl  08:16  13:09 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.5  0.4 604024 291560 pts/0   Sl   08:18   0:56 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     25414  0.0  0.0   9388   928 pts/0    S+   09:18   0:00 grep --color=auto ceph
root     25512 21.4  0.6 1671980 429272 ?      Ssl  Sep16 152:51 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
#2

Updated by Corin Langosch over 8 years ago

Here's another update: now already over 460 MB (started at 80 MB)...

root@r-ch103:~# date; ps aux | grep ceph
Thu Sep 17 10:04:50 CEST 2015
root      3538  0.0  0.0   9384   932 pts/0    S+   10:04   0:00 grep --color=auto ceph
root     19382 23.2  0.6 1637656 441452 ?      Ssl  Sep16 166:17 /usr/bin/ceph-osd -i 16 --pid-file /var/run/ceph/osd.16.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28068  2.1  0.4 505760 309852 pts/0   Sl   08:19   2:15 /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28243 21.2  0.5 1663072 364160 ?      Ssl  Sep16 148:11 /usr/bin/ceph-osd -i 3 --pid-file /var/run/ceph/osd.3.pid -c /etc/ceph/ceph.conf --cluster ceph

root@r-ch104:~# date; ps aux | grep ceph
Thu Sep 17 10:04:54 CEST 2015
root     14082  1.6  0.7 733680 461948 pts/0   Sl   08:19   1:43 /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf --cluster ceph
root     17843  0.0  0.0   9384   928 pts/0    S+   10:04   0:00 grep --color=auto ceph
root     26540 20.7  0.6 1620936 429644 ?      Ssl  Sep16 153:04 /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29198 22.4  0.6 1728640 398644 ?      Ssl  Sep16 160:29 /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf --cluster ceph

root@r-ch105:/var/log/ceph# date; ps aux | grep ceph
Thu Sep 17 10:05:01 CEST 2015
root     21467 21.5  0.5 1520780 340332 ?      Ssl  08:16  23:14 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.5  0.6 761972 448144 pts/0   Sl   08:18   1:40 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     25512 21.5  0.6 1672108 427268 ?      Ssl  Sep16 163:06 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root     27512  0.0  0.0   9388   932 pts/0    S+   10:05   0:00 grep --color=auto ceph
#3

Updated by Corin Langosch over 8 years ago

Here's another update: now already over 610 MB (started at 80 MB)...

root@r-ch103:~# date; ps aux | grep ceph
Thu Sep 17 11:43:32 CEST 2015
root     11882  0.0  0.0   9384   932 pts/0    S+   11:43   0:00 grep --color=auto ceph
root     19382 23.1  0.7 1636888 470232 ?      Ssl  Sep16 188:34 /usr/bin/ceph-osd -i 16 --pid-file /var/run/ceph/osd.16.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28068  2.2  0.8 770172 571624 pts/0   Sl   08:19   4:29 /usr/bin/ceph-mon -i a --pid-file /var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf --cluster ceph
root     28243 21.8  0.7 1663328 474996 ?      Ssl  Sep16 173:48 /usr/bin/ceph-osd -i 3 --pid-file /var/run/ceph/osd.3.pid -c /etc/ceph/ceph.conf --cluster ceph

root@r-ch104:~# date; ps aux | grep ceph
Thu Sep 17 11:43:37 CEST 2015
root     14082  1.6  0.9 892620 616360 pts/0   Sl   08:19   3:27 /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf --cluster ceph
root     21311  0.0  0.0   9384   932 pts/0    S+   11:43   0:00 grep --color=auto ceph
root     26540 20.7  0.6 1621064 459432 ?      Ssl  Sep16 173:18 /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf --cluster ceph
root     29198 22.4  0.6 1728640 412092 ?      Ssl  Sep16 182:59 /usr/bin/ceph-osd -i 6 --pid-file /var/run/ceph/osd.6.pid -c /etc/ceph/ceph.conf --cluster ceph

root@r-ch105:/var/log/ceph# date; ps aux | grep ceph; date
Thu Sep 17 11:43:43 CEST 2015
root     21467 21.7  0.5 1524892 360144 ?      Ssl  08:16  44:56 /usr/bin/ceph-osd -i 9 --pid-file /var/run/ceph/osd.9.pid -c /etc/ceph/ceph.conf --cluster ceph
root     22286  1.6  0.9 929192 612404 pts/0   Sl   08:18   3:22 /usr/bin/ceph-mon -i c --pid-file /var/run/ceph/mon.c.pid -c /etc/ceph/ceph.conf --cluster ceph
root     25512 21.5  0.6 1672108 439276 ?      Ssl  Sep16 184:47 /usr/bin/ceph-osd -i 18 --pid-file /var/run/ceph/osd.18.pid -c /etc/ceph/ceph.conf --cluster ceph
root     32729  0.0  0.0   9388   928 pts/0    S+   11:43   0:00 grep --color=auto ceph
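A rough rate from the samples above: mon.b went from about 57 MB RSS right after the 08:19 restart to about 616 MB at 11:43, i.e. roughly 560 MB in about 3.4 hours, or around 165 MB per hour.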

There's nothing suspicious in the logs:

2015-09-17 08:19:00.628157 7fa1560fa780  0 ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b), process ceph-mon, pid 22282
2015-09-17 08:19:00.701497 7fa1560fa780  0 starting mon.c rank 2 at 10.0.0.7:6789/0 mon_data /var/lib/ceph/mon/ceph-c fsid 4ac0e21b-6ea2-4ac7-8114-122bd9ba55d6
2015-09-17 08:19:00.759098 7fa1560fa780  0 mon.c@-1(probing).mds e1 print_map
epoch    1
flags    0
created    2013-02-17 13:50:11.545034
modified    2013-02-17 13:50:11.545056
tableserver    0
root    0
session_timeout    60
session_autoclose    300
max_file_size    1099511627776
last_failure    0
last_failure_osd_epoch    0
compat    compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object}
max_mds    1
in    
up    {}
failed    
stopped    
data_pools    0
metadata_pool    1
inline_data    disabled

2015-09-17 08:19:00.760264 7fa1560fa780  0 mon.c@-1(probing).osd e21077 crush map has features 33816576, adjusting msgr requires
2015-09-17 08:19:00.760276 7fa1560fa780  0 mon.c@-1(probing).osd e21077 crush map has features 33816576, adjusting msgr requires
2015-09-17 08:19:00.760278 7fa1560fa780  0 mon.c@-1(probing).osd e21077 crush map has features 33816576, adjusting msgr requires
2015-09-17 08:19:00.760281 7fa1560fa780  0 mon.c@-1(probing).osd e21077 crush map has features 33816576, adjusting msgr requires
2015-09-17 08:19:00.763224 7fa1560fa780 -1 compacting monitor store ...
2015-09-17 08:19:02.620117 7fa1560fa780 -1 done compacting
2015-09-17 08:19:02.620674 7fa1560fa780  0 mon.c@-1(probing) e5  my rank is now 2 (was -1)
2015-09-17 08:19:02.622646 7fa1560df700  0 -- 10.0.0.7:6789/0 >> 10.0.0.5:6789/0 pipe(0x4a00000 sd=19 :6789 s=0 pgs=0 cs=0 l=0 c=0x5d9e840).accept connect_seq 2 vs existing 0 state connecting
2015-09-17 08:19:02.622677 7fa1560df700  0 -- 10.0.0.7:6789/0 >> 10.0.0.5:6789/0 pipe(0x4a00000 sd=19 :6789 s=0 pgs=0 cs=0 l=0 c=0x5d9e840).accept we reset (peer sent cseq 2, 0x4a04500.cseq = 0), sending RESETSESSION
2015-09-17 08:19:02.623544 7fa1560df700  0 -- 10.0.0.7:6789/0 >> 10.0.0.5:6789/0 pipe(0x4a00000 sd=19 :6789 s=0 pgs=0 cs=0 l=0 c=0x5d9e840).accept connect_seq 0 vs existing 0 state connecting
2015-09-17 08:19:02.625088 7fa14cc6d700  0 -- 10.0.0.7:6789/0 >> 10.0.0.6:6789/0 pipe(0x4930000 sd=21 :54740 s=2 pgs=404051 cs=1 l=0 c=0x4548b00).reader missed message?  skipped from seq 0 to 850922193
2015-09-17 08:19:02.625350 7fa14ee73700  0 log_channel(cluster) log [INF] : mon.c calling new monitor election
2015-09-17 08:19:02.625640 7fa1560df700  0 -- 10.0.0.7:6789/0 >> 10.0.0.5:6789/0 pipe(0x4a00000 sd=19 :6789 s=2 pgs=252830 cs=1 l=0 c=0x4548dc0).reader missed message?  skipped from seq 0 to 691064278
2015-09-17 08:19:02.772312 7fa151f66700  0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
2015-09-17 08:19:02.772480 7fa151f66700  0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
2015-09-17 08:19:16.502890 7fa14cc6d700  0 -- 10.0.0.7:6789/0 >> 10.0.0.6:6789/0 pipe(0x4930000 sd=21 :54740 s=2 pgs=404051 cs=1 l=0 c=0x4548b00).fault with nothing to send, going to standby
2015-09-17 08:19:19.808949 7fa14ca67700  0 -- 10.0.0.7:6789/0 >> 10.0.0.6:6789/0 pipe(0x4a04500 sd=20 :6789 s=0 pgs=0 cs=0 l=0 c=0x5d9e000).accept connect_seq 0 vs existing 1 state standby
2015-09-17 08:19:19.808984 7fa14ca67700  0 -- 10.0.0.7:6789/0 >> 10.0.0.6:6789/0 pipe(0x4a04500 sd=20 :6789 s=0 pgs=0 cs=0 l=0 c=0x5d9e000).accept peer reset, then tried to connect to us, replacing
2015-09-17 08:19:22.300710 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:19:22.300756 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1011793' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:19:28.435567 7fa1560df700  0 -- 10.0.0.7:6789/0 >> 10.0.0.5:6789/0 pipe(0x4a00000 sd=19 :6789 s=2 pgs=252830 cs=1 l=0 c=0x4548dc0).fault, initiating reconnect
2015-09-17 08:19:28.435983 7fa14d670700  0 -- 10.0.0.7:6789/0 >> 10.0.0.5:6789/0 pipe(0x4a00000 sd=19 :6789 s=1 pgs=252830 cs=2 l=0 c=0x4548dc0).fault
2015-09-17 08:19:31.551376 7fa14d670700  0 -- 10.0.0.7:6789/0 >> 10.0.0.5:6789/0 pipe(0x4a00000 sd=19 :35665 s=1 pgs=252830 cs=2 l=0 c=0x4548dc0).connect got RESETSESSION
2015-09-17 08:19:31.551953 7fa144eee700  0 -- 10.0.0.7:6789/0 >> 10.0.0.5:6789/0 pipe(0x6ce6500 sd=120 :6789 s=0 pgs=0 cs=0 l=0 c=0x6bffb00).accept connect_seq 0 vs existing 0 state connecting
2015-09-17 08:20:00.761394 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:20:01.281578 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:20:01.281648 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1028168' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:20:01.824710 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:20:01.824783 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1011867' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:20:32.112520 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:20:32.112581 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1012016' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:21:00.762184 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:22:00.762936 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:22:31.997888 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:22:31.997966 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1012304' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:23:00.763626 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:23:32.006140 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:23:32.006196 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1012431' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:24:00.764318 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:25:00.765066 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:25:01.594450 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:25:01.594496 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1028747' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:26:00.765834 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:27:00.766573 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:28:00.767271 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:29:00.767932 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:30:00.768677 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:30:01.896690 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:30:01.896846 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1029104' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:31:00.769421 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:32:00.770266 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:33:00.771028 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:33:32.974124 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:33:32.974237 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1013708' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:34:00.771783 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:34:33.093714 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:34:33.093770 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1013835' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:35:00.772577 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:35:01.727964 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:35:01.728059 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1012495' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:35:02.211883 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:35:02.211950 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.7:0/1023370' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:36:00.773349 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:37:00.774112 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:38:00.774861 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:39:00.775657 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:39:33.189555 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:39:33.189704 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1014474' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:40:00.776177 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:40:01.513350 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:40:01.513431 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1029843' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:40:01.758183 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:40:01.758275 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1014930' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:40:33.232252 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:40:33.232316 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1014601' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:41:00.776865 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:42:00.777617 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:43:00.778363 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:44:00.779099 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:45:00.779858 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:45:01.815035 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:45:01.815129 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1030172' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:45:02.060311 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:45:02.060374 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1015085' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:45:33.259082 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:45:33.259242 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1015250' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:46:00.780599 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:47:00.781065 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:47:33.207628 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:47:33.207686 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1015506' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:48:00.781750 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:49:00.782552 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:50:00.783310 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:50:02.109266 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 08:50:02.109351 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1030507' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 08:51:00.784095 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:52:00.784925 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:52:33.312997 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:52:33.313095 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1016144' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:53:00.785670 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:53:33.364320 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:53:33.364376 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1016271' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:54:00.786374 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:54:33.259905 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:54:33.259960 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1016398' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:55:00.787116 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:56:00.787885 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:56:33.392800 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 08:56:33.392857 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1016659' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 08:57:00.788653 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:58:00.789357 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 08:59:00.790136 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:00:00.790941 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:00:01.700510 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:00:01.700608 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1031305' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:00:33.555479 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:00:33.555670 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1017511' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:01:00.791670 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:02:00.792384 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:03:00.793174 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:04:00.793853 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:05:00.794580 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:06:00.795356 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:07:00.796009 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:08:00.796620 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:09:00.797420 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:10:00.798321 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:10:01.301040 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:10:01.301123 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1032114' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:10:01.319468 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:10:01.319533 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.7:0/1024999' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:10:01.916523 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:10:01.916593 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1014319' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:10:33.760938 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:10:33.761015 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1018888' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:11:00.799048 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:11:33.581119 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:11:33.581184 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1019016' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:12:00.799544 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:12:33.608414 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:12:33.608469 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1019143' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:13:00.800027 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:13:33.836590 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:13:33.836652 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1019270' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:14:00.800992 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:15:00.801643 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:16:00.802406 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:16:33.831014 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:16:33.831108 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1019658' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:17:00.802935 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:18:00.803657 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:19:00.804257 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:20:00.805799 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:20:02.133291 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:20:02.133386 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.7:0/1025450' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:21:00.807002 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:22:00.807704 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:23:00.808445 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:24:00.809190 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:25:00.809917 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:26:00.810574 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:27:00.811302 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:27:33.848215 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:27:33.848293 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1021074' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:28:00.812040 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:28:34.328784 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:28:34.328843 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1021201' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:29:00.812853 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:29:33.935911 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:29:33.935986 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1021328' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:30:00.813642 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:30:01.884029 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:30:01.884108 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1016734' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:31:00.814386 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:31:35.617319 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:31:35.617501 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1021584' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:32:00.815092 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:32:35.612158 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:32:35.612207 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1021711' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:33:00.815935 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:34:00.816650 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:35:00.817389 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:35:02.186153 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:35:02.186223 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1016894' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:35:35.884813 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:35:35.884869 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1022095' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:36:00.818146 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:37:00.818922 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:37:35.502821 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:37:35.502881 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1022350' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:38:00.819628 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:38:35.637057 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:38:35.637097 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1022477' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:39:00.820394 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:40:00.821074 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:40:01.373180 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:40:01.373270 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.7:0/1026439' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:40:01.481954 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:40:01.482017 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1017037' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:41:00.821734 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:42:00.822587 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:43:00.823290 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:44:00.824042 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:45:00.824718 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:45:01.439770 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:45:01.439840 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1002108' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:45:01.784841 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:45:01.784903 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1017191' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:45:35.534717 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:45:35.534757 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1023376' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:46:00.825481 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:46:35.801283 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:46:35.801340 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1023504' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:47:00.826123 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:47:35.804591 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:47:35.804647 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1023631' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:48:00.826882 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:49:00.827648 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:50:00.828398 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:50:01.724195 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:50:01.724272 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1002459' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:51:00.828858 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:52:00.829593 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:53:00.830225 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:54:00.830931 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:55:00.831600 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:55:02.035334 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 09:55:02.035418 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1002832' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 09:56:00.832342 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:57:00.833065 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:57:35.756795 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:57:35.756856 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1024908' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:58:00.833783 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 09:58:35.715522 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 09:58:35.715584 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1025035' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 09:59:00.834405 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:00:00.834856 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:00:01.638069 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:00:01.638159 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1017659' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:00:02.278329 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:00:02.278384 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1016596' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:01:00.835538 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:02:00.836239 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:03:00.836939 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:04:00.837567 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:05:00.838314 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:05:01.926884 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:05:01.926952 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1017850' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:06:00.839038 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:06:33.541804 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:06:33.541862 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1026104' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:07:00.839731 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:08:00.840424 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:09:00.841115 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:10:00.841804 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:10:01.888820 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:10:01.888890 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1017095' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:11:00.842561 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:12:00.843391 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:13:00.844157 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:14:00.844847 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:15:00.845572 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:15:01.521816 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:15:01.521892 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1018154' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:15:02.197014 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:15:02.197110 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1017303' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:15:02.222625 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:15:02.222726 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1004247' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:15:33.734603 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:15:33.734662 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1027370' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:16:00.846270 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:16:33.605200 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:16:33.605238 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1027503' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:17:00.846953 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:18:00.847456 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:18:33.923599 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:18:33.923660 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1027780' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:19:00.848199 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:20:00.848925 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:20:01.650675 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:20:01.650755 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1017580' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:20:01.654992 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:20:01.655040 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1004651' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:20:01.847152 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:20:01.847225 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.7:0/1028317' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:21:00.849624 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:22:00.850332 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:22:33.698363 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:22:33.698424 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1030492' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:23:00.851083 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:24:00.851932 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:24:33.650951 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:24:33.651016 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1000492' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:25:00.852658 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:25:01.956235 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:25:01.956305 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1017989' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:25:02.512287 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:25:02.512471 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1018643' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:26:00.853241 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:27:00.854001 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:28:00.854765 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:28:33.839444 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:28:33.839506 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1002237' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:28:45.397900 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:28:45.397939 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1002268' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:29:00.855524 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:30:00.856211 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:30:01.632615 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:30:01.632689 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1005602' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:30:01.718377 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:30:01.718434 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1018382' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:30:01.750327 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:30:01.750395 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.7:0/1029397' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:30:49.586414 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "status"} v 0) v1
2015-09-17 10:30:49.586464 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1018440' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2015-09-17 10:31:00.856997 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:32:00.857734 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:33:00.858589 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:33:28.874221 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:33:28.874287 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1002928' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:34:00.859367 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:34:28.769755 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:34:28.769815 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1003055' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:35:00.860063 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:35:02.233809 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:35:02.233859 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1019041' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:36:00.860819 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:37:00.861564 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:38:00.862277 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:39:00.862985 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:40:00.863682 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:41:00.864478 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:42:00.865013 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:43:00.865633 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:44:00.866375 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:44:28.840118 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:44:28.840321 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1004332' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:45:00.867092 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:45:01.525221 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 10:45:01.525310 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1006730' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 10:46:00.867778 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:47:00.868465 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:48:00.869081 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:49:00.869703 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:50:00.870385 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:51:00.871017 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:52:00.871638 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:53:00.872087 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:54:00.872815 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:55:00.873527 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 24% total 945 GB, used 710 GB, avail 235 GB
2015-09-17 10:56:00.874199 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 708 GB, avail 236 GB
2015-09-17 10:56:29.453987 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:56:29.454121 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1008288' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:57:00.874866 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 10:58:00.875537 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 10:58:29.175568 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:58:29.175626 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1008544' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 10:59:00.876196 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 10:59:29.298922 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 10:59:29.298977 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1008671' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 11:00:00.876644 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:00:01.415067 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:00:01.415138 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1008182' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:01:00.877498 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:01:22.829345 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 11:01:22.829426 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1008926' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 11:02:00.878200 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:03:00.878830 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:04:00.879555 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:05:00.880254 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:05:23.046109 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 11:05:23.046171 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1009437' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 11:06:00.880968 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:06:23.059144 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 11:06:23.059208 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1009565' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 11:07:00.881646 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:08:00.882334 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:09:00.883028 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:10:00.883771 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:10:02.002278 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:10:02.002357 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1008969' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:10:02.094598 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:10:02.094640 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.7:0/1031225' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:11:00.884564 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:12:00.885273 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:13:00.885974 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:13:23.724018 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 11:13:23.724088 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1010475' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 11:14:00.886806 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:14:23.367175 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 11:14:23.367230 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1010602' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 11:15:00.887534 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:15:01.671740 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:15:01.671884 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1020361' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:16:00.888271 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:17:00.889009 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:17:23.603992 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 11:17:23.604055 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1010990' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 11:18:00.889779 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:18:23.491726 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 11:18:23.491796 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1011117' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
2015-09-17 11:19:00.890574 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:20:00.891346 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:20:01.870948 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:20:01.871020 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.7:0/1031705' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:20:02.038641 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:20:02.038715 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1020872' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:20:02.073843 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:20:02.073890 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.6:0/1020541' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:21:00.892089 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:22:00.892867 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:23:00.893594 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:24:00.894251 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:25:00.894957 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:25:02.041433 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:25:02.041494 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1010576' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:26:00.895689 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:27:00.896430 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:28:00.897160 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:29:00.897881 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:30:00.898564 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:30:01.359214 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:30:01.359304 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.5:0/1010911' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:30:01.600556 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
2015-09-17 11:30:01.600725 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.8:0/1021478' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2015-09-17 11:31:00.899297 7fa14f674700  0 mon.c@2(peon).data_health(804) update_stats avail 25% total 945 GB, used 707 GB, avail 237 GB
2015-09-17 11:31:22.500432 7fa14ee73700  0 mon.c@2(peon) e5 handle_command mon_command({"prefix": "health"} v 0) v1
2015-09-17 11:31:22.500488 7fa14ee73700  0 log_channel(audit) log [DBG] : from='client.? 10.0.0.9:0/1012867' entity='client.admin' cmd=[{"prefix": "health"}]: dispatch
#4

Updated by Sage Weil over 8 years ago

  • Status changed from New to Rejected

600 MB is not unreasonable; the default leveldb cache size is 512 MB. You can adjust that down if you like!
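
For anyone wanting to try this, a minimal ceph.conf sketch of turning the cache down. This assumes hammer's mon_leveldb_cache_size option (which defaults to 512 MB); the 128 MB value below is purely illustrative, not a recommendation:

[mon]
    # hypothetical example value: cap the monitor's leveldb cache at
    # 128 MB (134217728 bytes) instead of the 512 MB default
    mon leveldb cache size = 134217728

The setting should also be injectable at runtime without restarting the daemon, e.g.

    ceph tell mon.a injectargs '--mon_leveldb_cache_size 134217728'

though an injected value does not survive a daemon restart, so persist it in ceph.conf as well.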
