Bug #15636 » ceph-mon.ceph4.log

Anonymous, 04/28/2016 12:26 AM

 
2016-04-28 00:57:04.766662 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1762: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 16381 kB/s wr, 7 op/s
2016-04-28 00:57:09.867274 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1763: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 7844 kB/s wr, 3 op/s
2016-04-28 00:57:10.991129 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1764: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 86162 B/s rd, 26511 kB/s wr, 14 op/s
2016-04-28 00:57:14.807351 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1765: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 103 kB/s rd, 53223 kB/s wr, 28 op/s
2016-04-28 00:57:16.157572 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1766: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 70441 kB/s wr, 34 op/s
2016-04-28 00:57:19.866637 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1767: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 70337 kB/s wr, 34 op/s
2016-04-28 00:57:20.968365 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1768: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 77109 kB/s wr, 38 op/s
2016-04-28 00:57:24.864974 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1769: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 73734 kB/s wr, 36 op/s
2016-04-28 00:57:26.139348 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1770: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 62117 kB/s wr, 30 op/s
2016-04-28 00:57:29.813864 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1771: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 71263 kB/s wr, 34 op/s
2016-04-28 00:57:31.038730 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1772: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 73906 kB/s wr, 36 op/s
2016-04-28 00:57:34.821634 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1773: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 65376 kB/s wr, 32 op/s
2016-04-28 00:57:36.063280 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1774: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 73536 kB/s wr, 36 op/s
2016-04-28 00:57:39.829419 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1775: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 68837 kB/s wr, 34 op/s
2016-04-28 00:57:40.962705 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1776: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 62999 kB/s wr, 31 op/s
2016-04-28 00:57:43.448006 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1852 MB, avail 287 GB
2016-04-28 00:57:44.820514 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1777: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 69608 kB/s wr, 34 op/s
2016-04-28 00:57:44.847983 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 00:57:44.848020 7f9991a5d700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/662036855' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 00:57:44.849367 7f9991a5d700 1 mon.ceph4@0(leader).log v1912 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1912)
2016-04-28 00:57:45.961895 7f99935d2700 1 mon.ceph4@0(leader).log v1913 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1913)
2016-04-28 00:57:46.053736 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1778: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 73738 kB/s wr, 36 op/s
2016-04-28 00:57:47.195267 7f99935d2700 1 mon.ceph4@0(leader).log v1914 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1914)
2016-04-28 00:57:49.812373 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1779: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 76116 kB/s wr, 37 op/s
2016-04-28 00:57:50.928188 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1780: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 70648 kB/s wr, 34 op/s
2016-04-28 00:57:51.144638 7f99935d2700 1 mon.ceph4@0(leader).log v1915 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1915)
2016-04-28 00:57:52.219429 7f99935d2700 1 mon.ceph4@0(leader).log v1916 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1916)
2016-04-28 00:57:54.869902 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1781: 208 pgs: 208 active+clean; 141 GB data, 320 GB used, 610 GB / 931 GB avail; 74360 kB/s wr, 36 op/s
2016-04-28 00:57:56.052685 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1782: 208 pgs: 208 active+clean; 141 GB data, 321 GB used, 610 GB / 931 GB avail; 78563 kB/s wr, 38 op/s
2016-04-28 00:57:56.135743 7f99935d2700 1 mon.ceph4@0(leader).log v1917 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1917)
2016-04-28 00:57:57.276887 7f99935d2700 1 mon.ceph4@0(leader).log v1918 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1918)
2016-04-28 00:57:59.836015 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1783: 208 pgs: 208 active+clean; 141 GB data, 321 GB used, 609 GB / 931 GB avail; 75336 kB/s wr, 36 op/s
2016-04-28 00:58:00.918792 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1784: 208 pgs: 208 active+clean; 141 GB data, 321 GB used, 609 GB / 931 GB avail; 64332 kB/s wr, 31 op/s
2016-04-28 00:58:00.996484 7f99935d2700 1 mon.ceph4@0(leader).log v1919 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1919)
2016-04-28 00:58:02.118401 7f99935d2700 1 mon.ceph4@0(leader).log v1920 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1920)
2016-04-28 00:58:04.859956 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1785: 208 pgs: 208 active+clean; 142 GB data, 321 GB used, 609 GB / 931 GB avail; 62162 kB/s wr, 31 op/s
2016-04-28 00:58:05.976249 7f99935d2700 1 mon.ceph4@0(leader).log v1921 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1921)
2016-04-28 00:58:06.068048 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1786: 208 pgs: 208 active+clean; 142 GB data, 322 GB used, 608 GB / 931 GB avail; 75508 kB/s wr, 37 op/s
2016-04-28 00:58:07.126028 7f99935d2700 1 mon.ceph4@0(leader).log v1922 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1922)
2016-04-28 00:58:09.825881 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1787: 208 pgs: 208 active+clean; 142 GB data, 322 GB used, 608 GB / 931 GB avail; 59804 kB/s wr, 29 op/s
2016-04-28 00:58:10.900731 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1788: 208 pgs: 208 active+clean; 142 GB data, 322 GB used, 608 GB / 931 GB avail; 106 kB/s rd, 22610 kB/s wr, 13 op/s
2016-04-28 00:58:10.967313 7f99935d2700 1 mon.ceph4@0(leader).log v1923 check_sub sending message to client.? [2001:470:20ed:26::4]:0/662036855 with 1 entries (version 1923)
2016-04-28 00:58:14.808725 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1789: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 103 kB/s rd, 4095 kB/s wr, 4 op/s
2016-04-28 00:58:15.891795 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1790: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail
2016-04-28 00:58:43.448302 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1853 MB, avail 287 GB
2016-04-28 00:58:50.854410 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1791: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 673 B/s rd, 4 op/s
2016-04-28 00:59:43.448576 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1853 MB, avail 287 GB
2016-04-28 01:00:43.449232 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1853 MB, avail 287 GB
2016-04-28 01:00:44.949640 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1792: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 8 B/s rd, 790 kB/s wr, 4 op/s
2016-04-28 01:00:46.269277 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1793: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 4726 B/s rd, 1599 kB/s wr, 8 op/s
2016-04-28 01:00:47.658763 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:00:47.658815 7f9991a5d700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3260048676' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:00:47.665896 7f9991a5d700 1 mon.ceph4@0(leader).log v1929 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1929)
2016-04-28 01:00:48.591119 7f99935d2700 1 mon.ceph4@0(leader).log v1930 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 0 entries (version 1930)
2016-04-28 01:00:49.915696 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1794: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 106 kB/s rd, 49119 kB/s wr, 116 op/s
2016-04-28 01:00:51.032759 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1795: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 69060 kB/s wr, 48 op/s
2016-04-28 01:00:51.132370 7f99935d2700 1 mon.ceph4@0(leader).log v1931 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1931)
2016-04-28 01:00:52.224134 7f99935d2700 1 mon.ceph4@0(leader).log v1932 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1932)
2016-04-28 01:00:54.948488 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1796: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 67133 kB/s wr, 45 op/s
2016-04-28 01:00:56.048442 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1797: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 68364 kB/s wr, 44 op/s
2016-04-28 01:00:56.181504 7f99935d2700 1 mon.ceph4@0(leader).log v1933 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1933)
2016-04-28 01:00:57.281547 7f99935d2700 1 mon.ceph4@0(leader).log v1934 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1934)
2016-04-28 01:00:59.865497 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1798: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 67873 kB/s wr, 45 op/s
2016-04-28 01:01:00.995410 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1799: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 58320 kB/s wr, 38 op/s
2016-04-28 01:01:01.331302 7f99935d2700 1 mon.ceph4@0(leader).log v1935 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1935)
2016-04-28 01:01:02.505793 7f99935d2700 1 mon.ceph4@0(leader).log v1936 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1936)
2016-04-28 01:01:04.931178 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1800: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 63129 kB/s wr, 40 op/s
2016-04-28 01:01:06.038701 7f99935d2700 1 mon.ceph4@0(leader).log v1937 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1937)
2016-04-28 01:01:06.122170 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1801: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 67037 kB/s wr, 46 op/s
2016-04-28 01:01:07.214306 7f99935d2700 1 mon.ceph4@0(leader).log v1938 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1938)
2016-04-28 01:01:09.881332 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1802: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 58850 kB/s wr, 40 op/s
2016-04-28 01:01:10.979295 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1803: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 62020 kB/s wr, 40 op/s
2016-04-28 01:01:11.121643 7f99935d2700 1 mon.ceph4@0(leader).log v1939 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1939)
2016-04-28 01:01:12.222491 7f99935d2700 1 mon.ceph4@0(leader).log v1940 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1940)
2016-04-28 01:01:14.971144 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1804: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 65523 kB/s wr, 46 op/s
2016-04-28 01:01:16.104487 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1805: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 62001 kB/s wr, 42 op/s
2016-04-28 01:01:16.225101 7f99935d2700 1 mon.ceph4@0(leader).log v1941 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1941)
2016-04-28 01:01:17.404254 7f99935d2700 1 mon.ceph4@0(leader).log v1942 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1942)
2016-04-28 01:01:19.912696 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1806: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 61419 kB/s wr, 39 op/s
2016-04-28 01:01:21.011941 7f99935d2700 1 mon.ceph4@0(leader).log v1943 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1943)
2016-04-28 01:01:21.128845 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1807: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 68002 kB/s wr, 44 op/s
2016-04-28 01:01:22.295174 7f99935d2700 1 mon.ceph4@0(leader).log v1944 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1944)
2016-04-28 01:01:24.995016 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1808: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 68658 kB/s wr, 45 op/s
2016-04-28 01:01:26.122445 7f99935d2700 1 mon.ceph4@0(leader).log v1945 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1945)
2016-04-28 01:01:26.253221 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1809: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 58118 kB/s wr, 37 op/s
2016-04-28 01:01:27.387299 7f99935d2700 1 mon.ceph4@0(leader).log v1946 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1946)
2016-04-28 01:01:29.944459 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1810: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 68102 kB/s wr, 44 op/s
2016-04-28 01:01:31.077656 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1811: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 69386 kB/s wr, 48 op/s
2016-04-28 01:01:31.202461 7f99935d2700 1 mon.ceph4@0(leader).log v1947 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1947)
2016-04-28 01:01:32.285579 7f99935d2700 1 mon.ceph4@0(leader).log v1948 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1948)
2016-04-28 01:01:34.860450 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1812: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 40953 kB/s wr, 30 op/s
2016-04-28 01:01:35.926929 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1813: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 11664 kB/s wr, 8 op/s
2016-04-28 01:01:35.993463 7f99935d2700 1 mon.ceph4@0(leader).log v1949 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1949)
2016-04-28 01:01:37.068359 7f99935d2700 1 mon.ceph4@0(leader).log v1950 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1950)
2016-04-28 01:01:43.449538 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1854 MB, avail 287 GB
2016-04-28 01:02:05.898545 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1814: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail
2016-04-28 01:02:06.964902 7f99935d2700 1 mon.ceph4@0(leader).log v1951 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1951)
2016-04-28 01:02:10.881218 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1815: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 205 B/s rd, 0 op/s
2016-04-28 01:02:11.947666 7f99935d2700 1 mon.ceph4@0(leader).log v1952 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1952)
2016-04-28 01:02:43.449817 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1855 MB, avail 287 GB
2016-04-28 01:03:25.031238 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1816: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 941 kB/s wr, 0 op/s
2016-04-28 01:03:26.122460 7f99935d2700 1 mon.ceph4@0(leader).log v1953 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1953)
2016-04-28 01:03:26.259551 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1817: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 28819 B/s rd, 2190 kB/s wr, 2 op/s
2016-04-28 01:03:27.373941 7f99935d2700 1 mon.ceph4@0(leader).log v1954 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1954)
2016-04-28 01:03:29.956081 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1818: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 423 kB/s rd, 46953 kB/s wr, 46 op/s
2016-04-28 01:03:31.055258 7f99935d2700 1 mon.ceph4@0(leader).log v1955 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1955)
2016-04-28 01:03:31.121963 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1819: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 61349 kB/s wr, 40 op/s
2016-04-28 01:03:32.303171 7f99935d2700 1 mon.ceph4@0(leader).log v1956 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1956)
2016-04-28 01:03:34.971663 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1820: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 63391 kB/s wr, 43 op/s
2016-04-28 01:03:36.113851 7f99935d2700 1 mon.ceph4@0(leader).log v1957 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1957)
2016-04-28 01:03:36.206424 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1821: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 62768 kB/s wr, 39 op/s
2016-04-28 01:03:37.354513 7f99935d2700 1 mon.ceph4@0(leader).log v1958 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1958)
2016-04-28 01:03:39.939236 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1822: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 58957 kB/s wr, 37 op/s
2016-04-28 01:03:41.096020 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1823: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 63430 kB/s wr, 41 op/s
2016-04-28 01:03:41.304125 7f99935d2700 1 mon.ceph4@0(leader).log v1959 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1959)
2016-04-28 01:03:42.491152 7f99935d2700 1 mon.ceph4@0(leader).log v1960 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1960)
2016-04-28 01:03:43.450112 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1855 MB, avail 287 GB
2016-04-28 01:03:44.987821 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1824: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 69514 kB/s wr, 43 op/s
2016-04-28 01:03:46.104220 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1825: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 56777 kB/s wr, 36 op/s
2016-04-28 01:03:46.241038 7f99935d2700 1 mon.ceph4@0(leader).log v1961 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1961)
2016-04-28 01:03:47.375287 7f99935d2700 1 mon.ceph4@0(leader).log v1962 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1962)
2016-04-28 01:03:50.055070 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1826: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 52421 kB/s wr, 33 op/s
2016-04-28 01:03:51.153043 7f99935d2700 1 mon.ceph4@0(leader).log v1963 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1963)
2016-04-28 01:03:51.294826 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1827: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 73535 kB/s wr, 47 op/s
2016-04-28 01:03:52.377967 7f99935d2700 1 mon.ceph4@0(leader).log v1964 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1964)
2016-04-28 01:03:54.972927 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1828: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 71300 kB/s wr, 45 op/s
2016-04-28 01:03:56.185268 7f99935d2700 1 mon.ceph4@0(leader).log v1965 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1965)
2016-04-28 01:03:56.319108 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1829: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 51551 kB/s wr, 34 op/s
2016-04-28 01:03:57.402431 7f99935d2700 1 mon.ceph4@0(leader).log v1966 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1966)
2016-04-28 01:04:00.010498 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1830: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 56285 kB/s wr, 37 op/s
2016-04-28 01:04:01.135282 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1831: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 65748 kB/s wr, 43 op/s
2016-04-28 01:04:01.236537 7f99935d2700 1 mon.ceph4@0(leader).log v1967 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1967)
2016-04-28 01:04:02.426600 7f99935d2700 1 mon.ceph4@0(leader).log v1968 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1968)
2016-04-28 01:04:04.959862 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1832: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 60748 kB/s wr, 41 op/s
2016-04-28 01:04:06.026106 7f99935d2700 1 mon.ceph4@0(leader).log v1969 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1969)
2016-04-28 01:04:06.084585 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1833: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 56074 kB/s wr, 36 op/s
2016-04-28 01:04:07.142643 7f99935d2700 1 mon.ceph4@0(leader).log v1970 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1970)
2016-04-28 01:04:09.917423 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1834: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 30307 kB/s wr, 18 op/s
2016-04-28 01:04:11.008975 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1835: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 831 kB/s wr, 0 op/s
2016-04-28 01:04:11.075498 7f99935d2700 1 mon.ceph4@0(leader).log v1971 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1971)
2016-04-28 01:04:12.150412 7f99935d2700 1 mon.ceph4@0(leader).log v1972 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1972)
2016-04-28 01:04:30.948337 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1836: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 340 B/s rd, 0 op/s
2016-04-28 01:04:32.031419 7f99935d2700 1 mon.ceph4@0(leader).log v1973 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3260048676 with 1 entries (version 1973)
2016-04-28 01:04:43.450404 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1856 MB, avail 287 GB
2016-04-28 01:05:03.608254 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:05:03.608288 7f9991a5d700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3312936816' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:05:03.609778 7f9991a5d700 1 mon.ceph4@0(leader).log v1973 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1973)
2016-04-28 01:05:03.777680 7f99935d2700 1 mon.ceph4@0(leader).log v1974 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 0 entries (version 1974)
2016-04-28 01:05:43.450850 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1856 MB, avail 287 GB
2016-04-28 01:05:46.090200 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1837: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 14211 B/s rd, 273 kB/s wr, 0 op/s
2016-04-28 01:05:47.189598 7f99935d2700 1 mon.ceph4@0(leader).log v1975 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1975)
2016-04-28 01:05:49.980946 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1838: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 13497 B/s rd, 2178 kB/s wr, 1 op/s
2016-04-28 01:05:51.172491 7f99935d2700 1 mon.ceph4@0(leader).log v1976 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1976)
2016-04-28 01:05:51.357436 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1839: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 1784 B/s rd, 63286 kB/s wr, 43 op/s
2016-04-28 01:05:52.572735 7f99935d2700 1 mon.ceph4@0(leader).log v1977 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1977)
2016-04-28 01:05:55.038783 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1840: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 1842 B/s rd, 61260 kB/s wr, 43 op/s
2016-04-28 01:05:56.180417 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1841: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 55244 kB/s wr, 37 op/s
2016-04-28 01:05:56.271689 7f99935d2700 1 mon.ceph4@0(leader).log v1978 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1978)
2016-04-28 01:05:57.488323 7f99935d2700 1 mon.ceph4@0(leader).log v1979 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1979)
2016-04-28 01:06:00.049427 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1842: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 60739 kB/s wr, 40 op/s
2016-04-28 01:06:01.204684 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1843: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 66395 kB/s wr, 46 op/s
2016-04-28 01:06:01.371138 7f99935d2700 1 mon.ceph4@0(leader).log v1980 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1980)
2016-04-28 01:06:02.490097 7f99935d2700 1 mon.ceph4@0(leader).log v1981 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1981)
2016-04-28 01:06:05.054204 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1844: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 69518 kB/s wr, 47 op/s
2016-04-28 01:06:06.287271 7f99935d2700 1 mon.ceph4@0(leader).log v1982 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1982)
2016-04-28 01:06:06.370992 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1845: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 62903 kB/s wr, 40 op/s
2016-04-28 01:06:07.525077 7f99935d2700 1 mon.ceph4@0(leader).log v1983 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1983)
2016-04-28 01:06:09.961937 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1846: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 59796 kB/s wr, 43 op/s
2016-04-28 01:06:11.120879 7f99935d2700 1 mon.ceph4@0(leader).log v1984 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1984)
2016-04-28 01:06:11.228688 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1847: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 66346 kB/s wr, 46 op/s
2016-04-28 01:06:12.319844 7f99935d2700 1 mon.ceph4@0(leader).log v1985 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1985)
2016-04-28 01:06:14.972169 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1848: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 62230 kB/s wr, 37 op/s
2016-04-28 01:06:16.077928 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1849: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 66782 kB/s wr, 45 op/s
2016-04-28 01:06:16.244510 7f99935d2700 1 mon.ceph4@0(leader).log v1986 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1986)
2016-04-28 01:06:17.377790 7f99935d2700 1 mon.ceph4@0(leader).log v1987 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1987)
2016-04-28 01:06:20.031883 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1850: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 64732 kB/s wr, 44 op/s
2016-04-28 01:06:21.219004 7f99935d2700 1 mon.ceph4@0(leader).log v1988 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1988)
2016-04-28 01:06:21.344202 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1851: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 57698 kB/s wr, 36 op/s
2016-04-28 01:06:22.453967 7f99935d2700 1 mon.ceph4@0(leader).log v1989 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1989)
2016-04-28 01:06:25.068512 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1852: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 66202 kB/s wr, 42 op/s
2016-04-28 01:06:26.168382 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1853: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 73194 kB/s wr, 51 op/s
2016-04-28 01:06:26.364510 7f99935d2700 1 mon.ceph4@0(leader).log v1990 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1990)
2016-04-28 01:06:27.485061 7f99935d2700 1 mon.ceph4@0(leader).log v1991 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1991)
2016-04-28 01:06:30.076373 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1854: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 64594 kB/s wr, 44 op/s
2016-04-28 01:06:31.176275 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1855: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 54928 kB/s wr, 34 op/s
2016-04-28 01:06:31.259471 7f99935d2700 1 mon.ceph4@0(leader).log v1992 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1992)
2016-04-28 01:06:32.353048 7f99935d2700 1 mon.ceph4@0(leader).log v1993 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1993)
2016-04-28 01:06:34.975684 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1856: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 67327 kB/s wr, 47 op/s
2016-04-28 01:06:36.124158 7f99935d2700 1 mon.ceph4@0(leader).log v1994 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1994)
2016-04-28 01:06:36.234445 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1857: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 61550 kB/s wr, 44 op/s
2016-04-28 01:06:37.400407 7f99935d2700 1 mon.ceph4@0(leader).log v1995 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1995)
2016-04-28 01:06:40.076504 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1858: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 47346 kB/s wr, 31 op/s
2016-04-28 01:06:41.188624 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1859: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 59806 kB/s wr, 39 op/s
2016-04-28 01:06:41.393537 7f99935d2700 1 mon.ceph4@0(leader).log v1996 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1996)
2016-04-28 01:06:42.570358 7f99935d2700 1 mon.ceph4@0(leader).log v1997 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1997)
2016-04-28 01:06:43.451138 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1856 MB, avail 287 GB
2016-04-28 01:06:45.060805 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1860: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 73073 kB/s wr, 48 op/s
2016-04-28 01:06:46.190929 7f99935d2700 1 mon.ceph4@0(leader).log v1998 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1998)
2016-04-28 01:06:46.312663 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1861: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 71134 kB/s wr, 46 op/s
2016-04-28 01:06:47.457656 7f99935d2700 1 mon.ceph4@0(leader).log v1999 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 1999)
2016-04-28 01:06:50.040647 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1862: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 70314 kB/s wr, 48 op/s
2016-04-28 01:06:51.148871 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1863: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 65835 kB/s wr, 46 op/s
2016-04-28 01:06:51.231955 7f99935d2700 1 mon.ceph4@0(leader).log v2000 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2000)
2016-04-28 01:06:52.402688 7f99935d2700 1 mon.ceph4@0(leader).log v2001 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2001)
2016-04-28 01:06:55.041709 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1864: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 608 GB / 931 GB avail; 67644 kB/s wr, 45 op/s
2016-04-28 01:06:56.148324 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1865: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 607 GB / 931 GB avail; 68800 kB/s wr, 44 op/s
2016-04-28 01:06:56.256440 7f99935d2700 1 mon.ceph4@0(leader).log v2002 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2002)
2016-04-28 01:06:57.448920 7f99935d2700 1 mon.ceph4@0(leader).log v2003 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2003)
2016-04-28 01:07:00.064621 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1866: 208 pgs: 208 active+clean; 142 GB data, 323 GB used, 607 GB / 931 GB avail; 67004 kB/s wr, 44 op/s
2016-04-28 01:07:00.588361 7f998e2ca700 1 leveldb: Level-0 table #282: started
2016-04-28 01:07:00.749158 7f998e2ca700 1 leveldb: Level-0 table #282: 4145302 bytes OK
2016-04-28 01:07:00.789851 7f998e2ca700 1 leveldb: Delete type=0 #279

2016-04-28 01:07:00.795243 7f998e2ca700 1 leveldb: Manual compaction at level-0 from 'logm\x001251' @ 72057594037927935 : 1 .. 'logm\x001503' @ 0 : 0; will stop at 'pgmap_pg\x009.7' @ 77046 : 1

2016-04-28 01:07:00.795284 7f998e2ca700 1 leveldb: Compacting 1@0 + 3@1 files
2016-04-28 01:07:00.905952 7f998e2ca700 1 leveldb: Generated table #283: 787 keys, 2140203 bytes
2016-04-28 01:07:01.039390 7f998e2ca700 1 leveldb: Generated table #284: 183 keys, 2164286 bytes
2016-04-28 01:07:01.218061 7f998e2ca700 1 leveldb: Generated table #285: 232 keys, 2142775 bytes
2016-04-28 01:07:01.221224 7f99935d2700 1 mon.ceph4@0(leader).log v2004 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2004)
2016-04-28 01:07:01.330963 7f998e2ca700 1 leveldb: Generated table #286: 241 keys, 2164165 bytes
2016-04-28 01:07:01.389292 7f998e2ca700 1 leveldb: Generated table #287: 853 keys, 398046 bytes
2016-04-28 01:07:01.389329 7f998e2ca700 1 leveldb: Compacted 1@0 + 3@1 files => 9009475 bytes
2016-04-28 01:07:01.455399 7f998e2ca700 1 leveldb: compacted to: files[ 0 5 5 0 0 0 0 ]
2016-04-28 01:07:01.455752 7f998e2ca700 1 leveldb: Delete type=2 #276

2016-04-28 01:07:01.456462 7f998e2ca700 1 leveldb: Delete type=2 #277

2016-04-28 01:07:01.456990 7f998e2ca700 1 leveldb: Delete type=2 #278

2016-04-28 01:07:01.457457 7f998e2ca700 1 leveldb: Delete type=2 #282

2016-04-28 01:07:01.458324 7f998e2ca700 1 leveldb: Manual compaction at level-0 from 'pgmap_pg\x009.7' @ 77046 : 1 .. 'logm\x001503' @ 0 : 0; will stop at (end)

2016-04-28 01:07:01.458394 7f998e2ca700 1 leveldb: Manual compaction at level-1 from 'logm\x001251' @ 72057594037927935 : 1 .. 'logm\x001503' @ 0 : 0; will stop at 'paxos\x003337' @ 69122 : 1

2016-04-28 01:07:01.458401 7f998e2ca700 1 leveldb: Compacting 1@1 + 5@2 files
2016-04-28 01:07:01.564298 7f998e2ca700 1 leveldb: Generated table #288: 717 keys, 2111075 bytes
2016-04-28 01:07:01.697556 7f998e2ca700 1 leveldb: Generated table #289: 163 keys, 2155350 bytes
2016-04-28 01:07:01.805978 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1867: 208 pgs: 208 active+clean; 143 GB data, 323 GB used, 607 GB / 931 GB avail; 60553 kB/s wr, 40 op/s
2016-04-28 01:07:01.839281 7f998e2ca700 1 leveldb: Generated table #290: 145 keys, 2129346 bytes
2016-04-28 01:07:01.914432 7f998e2ca700 1 leveldb: Generated table #291: 296 keys, 1418491 bytes
2016-04-28 01:07:01.914489 7f998e2ca700 1 leveldb: Compacted 1@1 + 5@2 files => 7814262 bytes
2016-04-28 01:07:02.014104 7f998e2ca700 1 leveldb: compacted to: files[ 0 4 4 0 0 0 0 ]
2016-04-28 01:07:02.015681 7f998e2ca700 1 leveldb: Delete type=2 #269

2016-04-28 01:07:02.017244 7f998e2ca700 1 leveldb: Delete type=2 #270

2016-04-28 01:07:02.018375 7f998e2ca700 1 leveldb: Delete type=2 #271

2016-04-28 01:07:02.019619 7f998e2ca700 1 leveldb: Delete type=2 #272

2016-04-28 01:07:02.020647 7f998e2ca700 1 leveldb: Delete type=2 #273

2016-04-28 01:07:02.021419 7f998e2ca700 1 leveldb: Delete type=2 #283

2016-04-28 01:07:02.022220 7f998e2ca700 1 leveldb: Manual compaction at level-1 from 'paxos\x003337' @ 69122 : 1 .. 'logm\x001503' @ 0 : 0; will stop at (end)

2016-04-28 01:07:02.028745 7f998e2ca700 1 leveldb: Level-0 table #293: started
2016-04-28 01:07:02.113834 7f998e2ca700 1 leveldb: Level-0 table #293: 36416 bytes OK
2016-04-28 01:07:02.172730 7f998e2ca700 1 leveldb: Delete type=0 #281

2016-04-28 01:07:02.173034 7f998e2ca700 1 leveldb: Manual compaction at level-0 from 'logm\x00full_1251' @ 72057594037927935 : 1 .. 'logm\x00full_1503' @ 0 : 0; will stop at 'pgmap_pg\x009.7' @ 80269 : 1

2016-04-28 01:07:02.173048 7f998e2ca700 1 leveldb: Compacting 1@0 + 4@1 files
2016-04-28 01:07:02.273383 7f998e2ca700 1 leveldb: Generated table #294: 187 keys, 2179525 bytes
2016-04-28 01:07:02.389318 7f998e2ca700 1 leveldb: Generated table #295: 232 keys, 2142775 bytes
2016-04-28 01:07:02.530840 7f998e2ca700 1 leveldb: Generated table #296: 241 keys, 2164165 bytes
2016-04-28 01:07:02.588901 7f998e2ca700 1 leveldb: Generated table #297: 856 keys, 416432 bytes
2016-04-28 01:07:02.588924 7f998e2ca700 1 leveldb: Compacted 1@0 + 4@1 files => 6902897 bytes
2016-04-28 01:07:02.638669 7f998e2ca700 1 leveldb: compacted to: files[ 0 4 4 0 0 0 0 ]
2016-04-28 01:07:02.638977 7f998e2ca700 1 leveldb: Delete type=2 #284

2016-04-28 01:07:02.639688 7f998e2ca700 1 leveldb: Delete type=2 #285

2016-04-28 01:07:02.640274 7f998e2ca700 1 leveldb: Delete type=2 #286

2016-04-28 01:07:02.640745 7f998e2ca700 1 leveldb: Delete type=2 #287

2016-04-28 01:07:02.641007 7f998e2ca700 1 leveldb: Delete type=2 #293

2016-04-28 01:07:02.641114 7f998e2ca700 1 leveldb: Manual compaction at level-0 from 'pgmap_pg\x009.7' @ 80269 : 1 .. 'logm\x00full_1503' @ 0 : 0; will stop at (end)

2016-04-28 01:07:02.641164 7f998e2ca700 1 leveldb: Manual compaction at level-1 from 'logm\x00full_1251' @ 72057594037927935 : 1 .. 'logm\x00full_1503' @ 0 : 0; will stop at 'paxos\x003520' @ 72985 : 1

2016-04-28 01:07:02.641170 7f998e2ca700 1 leveldb: Compacting 1@1 + 4@2 files
2016-04-28 01:07:02.730844 7f998e2ca700 1 leveldb: Generated table #298: 718 keys, 2111384 bytes
2016-04-28 01:07:02.855845 7f998e2ca700 1 leveldb: Generated table #299: 163 keys, 2155350 bytes
2016-04-28 01:07:03.005752 7f99935d2700 1 mon.ceph4@0(leader).log v2005 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2005)
2016-04-28 01:07:03.005815 7f998e2ca700 1 leveldb: Generated table #300: 145 keys, 2129346 bytes
2016-04-28 01:07:03.110785 7f998e2ca700 1 leveldb: Generated table #301: 372 keys, 2139747 bytes
2016-04-28 01:07:03.197331 7f998e2ca700 1 leveldb: Generated table #302: 108 keys, 1457866 bytes
2016-04-28 01:07:03.197367 7f998e2ca700 1 leveldb: Compacted 1@1 + 4@2 files => 9993693 bytes
2016-04-28 01:07:03.255293 7f998e2ca700 1 leveldb: compacted to: files[ 0 3 5 0 0 0 0 ]
2016-04-28 01:07:03.255818 7f998e2ca700 1 leveldb: Delete type=2 #288

2016-04-28 01:07:03.257152 7f998e2ca700 1 leveldb: Delete type=2 #289

2016-04-28 01:07:03.258482 7f998e2ca700 1 leveldb: Delete type=2 #290

2016-04-28 01:07:03.259621 7f998e2ca700 1 leveldb: Delete type=2 #291

2016-04-28 01:07:03.260472 7f998e2ca700 1 leveldb: Delete type=2 #294

2016-04-28 01:07:03.263223 7f998e2ca700 1 leveldb: Manual compaction at level-1 from 'paxos\x003520' @ 72985 : 1 .. 'logm\x00full_1503' @ 0 : 0; will stop at (end)

2016-04-28 01:07:05.022831 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1868: 208 pgs: 208 active+clean; 143 GB data, 324 GB used, 606 GB / 931 GB avail; 67151 kB/s wr, 46 op/s
2016-04-28 01:07:06.113828 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1869: 208 pgs: 208 active+clean; 143 GB data, 324 GB used, 606 GB / 931 GB avail; 80363 kB/s wr, 56 op/s
2016-04-28 01:07:06.221926 7f99935d2700 1 mon.ceph4@0(leader).log v2006 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2006)
2016-04-28 01:07:07.330337 7f99935d2700 1 mon.ceph4@0(leader).log v2007 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2007)
2016-04-28 01:07:07.407724 7f998e2ca700 1 leveldb: Level-0 table #304: started
2016-04-28 01:07:07.463294 7f998e2ca700 1 leveldb: Level-0 table #304: 114621 bytes OK
2016-04-28 01:07:07.515796 7f998e2ca700 1 leveldb: Delete type=0 #292

2016-04-28 01:07:07.516204 7f998e2ca700 1 leveldb: Manual compaction at level-0 from 'paxos\x003263' @ 72057594037927935 : 1 .. 'paxos\x003515' @ 0 : 0; will stop at 'pgmap_pg\x009.7' @ 80309 : 1

2016-04-28 01:07:07.516221 7f998e2ca700 1 leveldb: Compacting 1@0 + 3@1 files
2016-04-28 01:07:07.614174 7f998e2ca700 1 leveldb: Generated table #305: 483 keys, 2118926 bytes
2016-04-28 01:07:07.713536 7f998e2ca700 1 leveldb: Generated table #306: 241 keys, 2163733 bytes
2016-04-28 01:07:07.774404 7f998e2ca700 1 leveldb: Generated table #307: 872 keys, 549618 bytes
2016-04-28 01:07:07.774511 7f998e2ca700 1 leveldb: Compacted 1@0 + 3@1 files => 4832277 bytes
2016-04-28 01:07:07.829936 7f998e2ca700 1 leveldb: compacted to: files[ 0 3 5 0 0 0 0 ]
2016-04-28 01:07:07.830301 7f998e2ca700 1 leveldb: Delete type=2 #295

2016-04-28 01:07:07.831654 7f998e2ca700 1 leveldb: Delete type=2 #296

2016-04-28 01:07:07.832435 7f998e2ca700 1 leveldb: Delete type=2 #297

2016-04-28 01:07:07.833392 7f998e2ca700 1 leveldb: Delete type=2 #304

2016-04-28 01:07:07.838218 7f998e2ca700 1 leveldb: Manual compaction at level-0 from 'pgmap_pg\x009.7' @ 80309 : 1 .. 'paxos\x003515' @ 0 : 0; will stop at (end)

2016-04-28 01:07:07.838409 7f998e2ca700 1 leveldb: Manual compaction at level-1 from 'paxos\x003263' @ 72057594037927935 : 1 .. 'paxos\x003515' @ 0 : 0; will stop at 'paxos\x003744' @ 76137 : 1

2016-04-28 01:07:07.838438 7f998e2ca700 1 leveldb: Compacting 1@1 + 5@2 files
2016-04-28 01:07:07.939484 7f998e2ca700 1 leveldb: Generated table #308: 721 keys, 2112311 bytes
2016-04-28 01:07:08.047123 7f998e2ca700 1 leveldb: Generated table #309: 163 keys, 2155350 bytes
2016-04-28 01:07:08.164800 7f998e2ca700 1 leveldb: Generated table #310: 145 keys, 2129346 bytes
2016-04-28 01:07:08.311462 7f998e2ca700 1 leveldb: Generated table #311: 326 keys, 2104845 bytes
2016-04-28 01:07:08.373009 7f998e2ca700 1 leveldb: Generated table #312: 130 keys, 1083590 bytes
2016-04-28 01:07:08.373145 7f998e2ca700 1 leveldb: Compacted 1@1 + 5@2 files => 9585442 bytes
2016-04-28 01:07:08.471432 7f998e2ca700 1 leveldb: compacted to: files[ 0 2 5 0 0 0 0 ]
2016-04-28 01:07:08.471925 7f998e2ca700 1 leveldb: Delete type=2 #298

2016-04-28 01:07:08.472888 7f998e2ca700 1 leveldb: Delete type=2 #299

2016-04-28 01:07:08.473631 7f998e2ca700 1 leveldb: Delete type=2 #300

2016-04-28 01:07:08.474260 7f998e2ca700 1 leveldb: Delete type=2 #301

2016-04-28 01:07:08.474837 7f998e2ca700 1 leveldb: Delete type=2 #302

2016-04-28 01:07:08.475461 7f998e2ca700 1 leveldb: Delete type=2 #305

2016-04-28 01:07:08.476163 7f998e2ca700 1 leveldb: Manual compaction at level-1 from 'paxos\x003744' @ 76137 : 1 .. 'paxos\x003515' @ 0 : 0; will stop at (end)

2016-04-28 01:07:08.711345 7f998eecf700 0 -- [2001:470:20ed:26::4]:6789/0 >> [2001:470:20ed:15::5:a902]:0/519970039 pipe(0x5654d689c800 sd=24 :6789 s=0 pgs=0 cs=0 l=0 c=0x5654d629c600).accept peer addr is really [2001:470:20ed:15::5:a902]:0/519970039 (socket is [2001:470:20ed:15::5:a902]:37624/0)
2016-04-28 01:07:10.048423 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1870: 208 pgs: 208 active+clean; 143 GB data, 324 GB used, 606 GB / 931 GB avail; 68755 kB/s wr, 47 op/s
2016-04-28 01:07:11.129320 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1871: 208 pgs: 208 active+clean; 143 GB data, 325 GB used, 606 GB / 931 GB avail; 70904 kB/s wr, 48 op/s
2016-04-28 01:07:11.288434 7f99935d2700 1 mon.ceph4@0(leader).log v2008 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2008)
2016-04-28 01:07:12.404742 7f99935d2700 1 mon.ceph4@0(leader).log v2009 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2009)
2016-04-28 01:07:15.071496 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1872: 208 pgs: 208 active+clean; 143 GB data, 325 GB used, 605 GB / 931 GB avail; 67043 kB/s wr, 45 op/s
2016-04-28 01:07:16.197241 7f99935d2700 1 mon.ceph4@0(leader).log v2010 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2010)
2016-04-28 01:07:16.333209 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1873: 208 pgs: 208 active+clean; 143 GB data, 325 GB used, 605 GB / 931 GB avail; 58761 kB/s wr, 40 op/s
2016-04-28 01:07:17.485507 7f99935d2700 1 mon.ceph4@0(leader).log v2011 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2011)
2016-04-28 01:07:20.070776 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1874: 208 pgs: 208 active+clean; 144 GB data, 326 GB used, 605 GB / 931 GB avail; 68015 kB/s wr, 46 op/s
2016-04-28 01:07:21.311793 7f99935d2700 1 mon.ceph4@0(leader).log v2012 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2012)
2016-04-28 01:07:21.395291 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1875: 208 pgs: 208 active+clean; 144 GB data, 326 GB used, 604 GB / 931 GB avail; 69192 kB/s wr, 48 op/s
2016-04-28 01:07:22.539065 7f99935d2700 1 mon.ceph4@0(leader).log v2013 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2013)
2016-04-28 01:07:25.128961 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1876: 208 pgs: 208 active+clean; 144 GB data, 326 GB used, 604 GB / 931 GB avail; 64685 kB/s wr, 44 op/s
2016-04-28 01:07:26.236464 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1877: 208 pgs: 208 active+clean; 144 GB data, 327 GB used, 604 GB / 931 GB avail; 69339 kB/s wr, 43 op/s
2016-04-28 01:07:26.328282 7f99935d2700 1 mon.ceph4@0(leader).log v2014 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2014)
2016-04-28 01:07:27.502844 7f99935d2700 1 mon.ceph4@0(leader).log v2015 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2015)
2016-04-28 01:07:30.086862 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1878: 208 pgs: 208 active+clean; 144 GB data, 327 GB used, 603 GB / 931 GB avail; 70349 kB/s wr, 49 op/s
2016-04-28 01:07:31.209135 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1879: 208 pgs: 208 active+clean; 144 GB data, 328 GB used, 603 GB / 931 GB avail; 64423 kB/s wr, 46 op/s
2016-04-28 01:07:31.345128 7f99935d2700 1 mon.ceph4@0(leader).log v2016 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2016)
2016-04-28 01:07:32.460629 7f99935d2700 1 mon.ceph4@0(leader).log v2017 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2017)
2016-04-28 01:07:35.094105 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1880: 208 pgs: 208 active+clean; 145 GB data, 328 GB used, 602 GB / 931 GB avail; 67923 kB/s wr, 47 op/s
2016-04-28 01:07:36.235297 7f99935d2700 1 mon.ceph4@0(leader).log v2018 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2018)
2016-04-28 01:07:36.335404 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1881: 208 pgs: 208 active+clean; 145 GB data, 328 GB used, 602 GB / 931 GB avail; 77174 kB/s wr, 53 op/s
2016-04-28 01:07:37.418286 7f99935d2700 1 mon.ceph4@0(leader).log v2019 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2019)
2016-04-28 01:07:40.088366 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1882: 208 pgs: 208 active+clean; 145 GB data, 328 GB used, 602 GB / 931 GB avail; 62292 kB/s wr, 41 op/s
2016-04-28 01:07:41.285166 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1883: 208 pgs: 208 active+clean; 145 GB data, 329 GB used, 602 GB / 931 GB avail; 64325 kB/s wr, 42 op/s
2016-04-28 01:07:41.459574 7f99935d2700 1 mon.ceph4@0(leader).log v2020 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2020)
2016-04-28 01:07:42.593626 7f99935d2700 1 mon.ceph4@0(leader).log v2021 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2021)
2016-04-28 01:07:43.451478 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1849 MB, avail 287 GB
2016-04-28 01:07:45.025960 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1884: 208 pgs: 208 active+clean; 145 GB data, 329 GB used, 601 GB / 931 GB avail; 72084 kB/s wr, 47 op/s
2016-04-28 01:07:46.117509 7f99935d2700 1 mon.ceph4@0(leader).log v2022 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2022)
2016-04-28 01:07:46.234550 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1885: 208 pgs: 208 active+clean; 145 GB data, 329 GB used, 601 GB / 931 GB avail; 62299 kB/s wr, 43 op/s
2016-04-28 01:07:47.359011 7f99935d2700 1 mon.ceph4@0(leader).log v2023 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2023)
2016-04-28 01:07:47.450557 7f99935d2700 1 mon.ceph4@0(leader).osd e68 e68: 2 osds: 2 up, 2 in
2016-04-28 01:07:47.627403 7f99935d2700 0 log_channel(cluster) log [INF] : osdmap e68: 2 osds: 2 up, 2 in
2016-04-28 01:07:48.067697 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1886: 208 pgs: 208 active+clean; 145 GB data, 329 GB used, 601 GB / 931 GB avail; 57258 kB/s wr, 39 op/s
2016-04-28 01:07:48.569573 7f99935d2700 1 mon.ceph4@0(leader).log v2024 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 2 entries (version 2024)
2016-04-28 01:07:48.675825 7f99935d2700 1 mon.ceph4@0(leader).osd e69 e69: 2 osds: 2 up, 2 in
2016-04-28 01:07:48.775734 7f99935d2700 0 log_channel(cluster) log [INF] : osdmap e69: 2 osds: 2 up, 2 in
2016-04-28 01:07:48.825403 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1887: 208 pgs: 208 active+clean; 145 GB data, 329 GB used, 601 GB / 931 GB avail
2016-04-28 01:07:49.833596 7f99935d2700 1 mon.ceph4@0(leader).log v2025 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 2 entries (version 2025)
2016-04-28 01:07:50.963212 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1888: 208 pgs: 208 active+clean; 146 GB data, 329 GB used, 601 GB / 931 GB avail; 52182 kB/s wr, 32 op/s
2016-04-28 01:07:52.058380 7f99935d2700 1 mon.ceph4@0(leader).log v2026 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2026)
2016-04-28 01:07:52.143113 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1889: 208 pgs: 208 active+clean; 146 GB data, 330 GB used, 600 GB / 931 GB avail; 622 B/s rd, 111 MB/s wr, 75 op/s
2016-04-28 01:07:53.199764 7f99935d2700 1 mon.ceph4@0(leader).log v2027 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2027)
2016-04-28 01:07:55.033097 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1890: 208 pgs: 208 active+clean; 146 GB data, 330 GB used, 600 GB / 931 GB avail; 743 B/s rd, 73364 kB/s wr, 54 op/s
2016-04-28 01:07:56.132790 7f99935d2700 1 mon.ceph4@0(leader).log v2028 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2028)
2016-04-28 01:07:56.207907 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1891: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail; 250 B/s rd, 33106 kB/s wr, 27 op/s
2016-04-28 01:07:57.290966 7f99935d2700 1 mon.ceph4@0(leader).log v2029 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2029)
2016-04-28 01:08:00.032403 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1892: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail; 7372 kB/s wr, 5 op/s
2016-04-28 01:08:01.107185 7f99935d2700 1 mon.ceph4@0(leader).log v2030 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2030)
2016-04-28 01:08:06.081751 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1893: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:08:07.156526 7f99935d2700 1 mon.ceph4@0(leader).log v2031 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2031)
2016-04-28 01:08:36.476531 7f998f0d1700 0 -- [2001:470:20ed:26::4]:6789/0 >> [2001:470:20ed:15::5:a902]:0/463643546 pipe(0x5654d689a000 sd=8 :6789 s=0 pgs=0 cs=0 l=0 c=0x5654d629d080).accept peer addr is really [2001:470:20ed:15::5:a902]:0/463643546 (socket is [2001:470:20ed:15::5:a902]:37640/0)
2016-04-28 01:08:40.044490 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1894: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:08:41.119213 7f99935d2700 1 mon.ceph4@0(leader).log v2032 check_sub sending message to client.? [2001:470:20ed:26::4]:0/3312936816 with 1 entries (version 2032)
2016-04-28 01:08:43.451818 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1851 MB, avail 287 GB
2016-04-28 01:09:43.452102 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1851 MB, avail 287 GB
2016-04-28 01:10:43.452388 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1851 MB, avail 287 GB
2016-04-28 01:11:10.068756 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1895: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail; 20 B/s rd, 0 B/s wr, 0 op/s
2016-04-28 01:11:11.201941 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1896: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail; 20 B/s rd, 0 B/s wr, 0 op/s
2016-04-28 01:11:43.452761 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1851 MB, avail 287 GB
2016-04-28 01:12:01.139501 7f998f3d3700 0 -- [2001:470:20ed:26::4]:6789/0 >> [2001:470:20ed:26::4]:0/334494525 pipe(0x5654d5e8e800 sd=8 :6789 s=0 pgs=0 cs=0 l=0 c=0x5654d6143c80).accept peer addr is really [2001:470:20ed:26::4]:0/334494525 (socket is [2001:470:20ed:26::4]:37470/0)
2016-04-28 01:12:06.120637 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1897: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail; 55 B/s wr, 0 op/s
2016-04-28 01:12:11.111668 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1898: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail; 68 B/s wr, 0 op/s
2016-04-28 01:12:31.134378 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1899: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail; 122 B/s wr, 0 op/s
2016-04-28 01:12:36.117104 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1900: 208 pgs: 208 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail; 122 B/s wr, 0 op/s
2016-04-28 01:12:43.453016 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1851 MB, avail 287 GB
2016-04-28 01:12:56.299735 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd pool rm", "pool": "bench"} v 0) v1
2016-04-28 01:12:56.299794 7f9991a5d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/503390377' entity='client.admin' cmd=[{"prefix": "osd pool rm", "pool": "bench"}]: dispatch
2016-04-28 01:13:10.310026 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"pool2": "bench", "prefix": "osd pool rm", "sure": "--yes-i-really-really-mean-it", "pool": "bench"} v 0) v1
2016-04-28 01:13:10.310085 7f9991a5d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2975964864' entity='client.admin' cmd=[{"pool2": "bench", "prefix": "osd pool rm", "sure": "--yes-i-really-really-mean-it", "pool": "bench"}]: dispatch
2016-04-28 01:13:10.462910 7f99935d2700 1 mon.ceph4@0(leader).osd e70 e70: 2 osds: 2 up, 2 in
2016-04-28 01:13:10.496217 7f99935d2700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2975964864' entity='client.admin' cmd='[{"pool2": "bench", "prefix": "osd pool rm", "sure": "--yes-i-really-really-mean-it", "pool": "bench"}]': finished
2016-04-28 01:13:10.537910 7f99935d2700 0 log_channel(cluster) log [INF] : osdmap e70: 2 osds: 2 up, 2 in
2016-04-28 01:13:10.588411 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1901: 200 pgs: 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:13:11.531638 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:13:11.531712 7f9991a5d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3502909753' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:13:11.531853 7f9991a5d700 0 mon.ceph4@0(leader).osd e70 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:13:15.859523 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:13:15.859603 7f9991a5d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1796329353' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:13:15.859737 7f9991a5d700 0 mon.ceph4@0(leader).osd e70 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:13:27.826074 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:13:27.826178 7f9991a5d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1362166736' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:13:27.826310 7f9991a5d700 0 mon.ceph4@0(leader).osd e70 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:13:29.542508 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pg_num": 8, "pool": "bench"} v 0) v1
2016-04-28 01:13:29.542573 7f9991a5d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/777183855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pool": "bench"}]: dispatch
2016-04-28 01:13:29.994198 7f99935d2700 1 mon.ceph4@0(leader).osd e71 e71: 2 osds: 2 up, 2 in
2016-04-28 01:13:30.027466 7f99935d2700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/777183855' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pool": "bench"}]': finished
2016-04-28 01:13:30.068984 7f99935d2700 0 log_channel(cluster) log [INF] : osdmap e71: 2 osds: 2 up, 2 in
2016-04-28 01:13:30.103122 7f99935d2700 0 log_channel(cluster) log [INF] : pgmap v1902: 208 pgs: 8 creating, 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:13:35.857191 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:13:35.857255 7f9991a5d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3623723983' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:13:35.857395 7f9991a5d700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:13:43.453660 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1856 MB, avail 287 GB
2016-04-28 01:13:45.113819 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:13:45.113883 7f9991a5d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1627104395' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:13:45.114021 7f9991a5d700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:13:52.833485 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:13:52.833566 7f9991a5d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2092068622' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:13:52.833739 7f9991a5d700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:14:41.622949 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:14:41.622987 7f9991a5d700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2754382340' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:14:43.454007 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1866 MB, avail 287 GB
2016-04-28 01:14:46.367131 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:14:46.367194 7f9991a5d700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/618589263' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:14:47.265292 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:14:47.265327 7f9991a5d700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3677456899' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:15:43.454271 7f999225e700 0 mon.ceph4@0(leader).data_health(8) update_stats avail 99% total 290 GB, used 1867 MB, avail 287 GB
2016-04-28 01:15:46.795168 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:15:46.795205 7f9991a5d700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2191688157' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:15:51.236265 7f9991a5d700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:15:51.236301 7f9991a5d700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/893041781' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:16:02.636301 7f999025a700 -1 mon.ceph4@0(leader) e1 *** Got Signal Terminated ***
2016-04-28 01:16:02.636332 7f999025a700 1 mon.ceph4@0(leader) e1 shutdown
2016-04-28 01:16:02.636405 7f999025a700 0 quorum service shutdown
2016-04-28 01:16:02.636409 7f999025a700 0 mon.ceph4@0(shutdown).health(8) HealthMonitor::service_shutdown 1 services
2016-04-28 01:16:02.636412 7f999025a700 0 quorum service shutdown
2016-04-28 01:16:02.783235 7f500b0034c0 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
2016-04-28 01:16:02.783245 7f500b0034c0 0 set uid:gid to 167:167 (ceph:ceph)
2016-04-28 01:16:02.783268 7f500b0034c0 0 ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9), process ceph-mon, pid 22390
2016-04-28 01:16:02.783358 7f500b0034c0 0 pidfile_write: ignore empty --pid-file
2016-04-28 01:16:02.815757 7f500b0034c0 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
2016-04-28 01:16:02.834831 7f500b0034c0 1 leveldb: Recovering log #303
2016-04-28 01:16:02.840418 7f500b0034c0 1 leveldb: Level-0 table #314: started
2016-04-28 01:16:02.909689 7f500b0034c0 1 leveldb: Level-0 table #314: 1814263 bytes OK
2016-04-28 01:16:02.976324 7f500b0034c0 1 leveldb: Delete type=3 #29

2016-04-28 01:16:02.976412 7f500b0034c0 1 leveldb: Delete type=0 #303

2016-04-28 01:16:02.977358 7f500b0034c0 0 starting mon.ceph4 rank 0 at [2001:470:20ed:26::4]:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph4 fsid fd797c45-3bc6-4185-a057-3ec84cbf99c5
2016-04-28 01:16:02.977669 7f500b0034c0 1 mon.ceph4@-1(probing) e1 preinit fsid fd797c45-3bc6-4185-a057-3ec84cbf99c5
2016-04-28 01:16:02.977964 7f500b0034c0 1 mon.ceph4@-1(probing).paxosservice(pgmap 1252..1902) refresh upgraded, format 0 -> 1
2016-04-28 01:16:02.977979 7f500b0034c0 1 mon.ceph4@-1(probing).pg v0 on_upgrade discarding in-core PGMap
2016-04-28 01:16:02.979272 7f500b0034c0 0 mon.ceph4@-1(probing).mds e34 print_map
e34
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 34
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 38
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {0=54097}
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
54097: [2001:470:20ed:26::4]:6800/683 'ceph4' mds.0.6 up:active seq 5

2016-04-28 01:16:02.979529 7f500b0034c0 0 mon.ceph4@-1(probing).osd e71 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:16:02.979534 7f500b0034c0 0 mon.ceph4@-1(probing).osd e71 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:16:02.979536 7f500b0034c0 0 mon.ceph4@-1(probing).osd e71 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:16:02.979538 7f500b0034c0 0 mon.ceph4@-1(probing).osd e71 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:16:02.979809 7f500b0034c0 1 mon.ceph4@-1(probing).paxosservice(auth 1..47) refresh upgraded, format 0 -> 1
2016-04-28 01:16:02.981300 7f500b0034c0 0 mon.ceph4@-1(probing) e1 my rank is now 0 (was -1)
2016-04-28 01:16:02.981316 7f500b0034c0 1 mon.ceph4@0(probing) e1 win_standalone_election
2016-04-28 01:16:02.981340 7f500b0034c0 1 mon.ceph4@0(probing).elector(8) init, last seen epoch 8
2016-04-28 01:16:03.074987 7f500b0034c0 0 log_channel(cluster) log [INF] : mon.ceph4@0 won leader election with quorum 0
2016-04-28 01:16:03.075066 7f500b0034c0 0 log_channel(cluster) log [INF] : monmap e1: 1 mons at {ceph4=[2001:470:20ed:26::4]:6789/0}
2016-04-28 01:16:03.075133 7f500b0034c0 0 log_channel(cluster) log [INF] : pgmap v1902: 208 pgs: 8 creating, 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:16:03.075183 7f500b0034c0 0 log_channel(cluster) log [INF] : fsmap e34: 1/1/1 up {1:0=ceph4=up:active}
2016-04-28 01:16:03.075242 7f500b0034c0 0 log_channel(cluster) log [INF] : osdmap e71: 2 osds: 2 up, 2 in
2016-04-28 01:16:05.305530 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:16:05.305572 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1365324532' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:16:07.339225 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:16:07.339259 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1625066346' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:16:11.083259 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:16:11.083295 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/4003567000' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:16:31.283678 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:16:31.283713 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2448057784' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:16:46.788446 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:16:46.788555 7f5002f7b700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3997479574' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:16:46.788728 7f5002f7b700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:16:55.898378 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:16:55.898443 7f5002f7b700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/950209277' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:16:55.898606 7f5002f7b700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:16:59.559818 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:16:59.559881 7f5002f7b700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3206210459' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:16:59.560031 7f5002f7b700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:17:00.436090 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:17:00.436120 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1984724238' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:17:02.986828 7f500377c700 0 mon.ceph4@0(leader).data_health(9) update_stats avail 99% total 290 GB, used 1866 MB, avail 287 GB
2016-04-28 01:17:03.042101 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:17:03.042131 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3433099964' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:17:06.277657 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:17:06.277687 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/239307387' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:17:13.065755 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:17:13.065819 7f5002f7b700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1135341847' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:17:13.065956 7f5002f7b700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:17:17.608807 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:17:17.608870 7f5002f7b700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3859697936' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:17:17.609037 7f5002f7b700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:17:22.658357 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:17:22.658387 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2342546125' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:17:26.979933 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:17:26.979996 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2248354545' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:17:30.861066 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:17:30.861093 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2414753617' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:17:30.949672 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:17:30.949740 7f5002f7b700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1477338379' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:17:30.949889 7f5002f7b700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:17:41.819163 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:17:41.819196 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3590100905' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:17:45.112688 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:17:45.112725 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1509791598' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:17:57.244573 7f5000af3700 0 -- [2001:470:20ed:26::4]:6789/0 >> [2001:470:20ed:26::4]:0/3000917631 pipe(0x559202238800 sd=20 :6789 s=0 pgs=0 cs=0 l=0 c=0x5592024cdf80).accept peer addr is really [2001:470:20ed:26::4]:0/3000917631 (socket is [2001:470:20ed:26::4]:37672/0)
2016-04-28 01:18:02.987118 7f500377c700 0 mon.ceph4@0(leader).data_health(9) update_stats avail 99% total 290 GB, used 1867 MB, avail 287 GB
2016-04-28 01:19:02.987411 7f500377c700 0 mon.ceph4@0(leader).data_health(9) update_stats avail 99% total 290 GB, used 1868 MB, avail 287 GB
2016-04-28 01:19:40.709420 7f5002f7b700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:19:40.709455 7f5002f7b700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1582241218' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:20:02.987667 7f500377c700 0 mon.ceph4@0(leader).data_health(9) update_stats avail 99% total 290 GB, used 1868 MB, avail 287 GB
2016-04-28 01:21:02.987963 7f500377c700 0 mon.ceph4@0(leader).data_health(9) update_stats avail 98% total 290 GB, used 1868 MB, avail 286 GB
2016-04-28 01:21:14.092143 7f5001778700 -1 mon.ceph4@0(leader) e1 *** Got Signal Terminated ***
2016-04-28 01:21:14.092182 7f5001778700 1 mon.ceph4@0(leader) e1 shutdown
2016-04-28 01:21:14.092242 7f5001778700 0 quorum service shutdown
2016-04-28 01:21:14.092244 7f5001778700 0 mon.ceph4@0(shutdown).health(9) HealthMonitor::service_shutdown 1 services
2016-04-28 01:21:14.092247 7f5001778700 0 quorum service shutdown
2016-04-28 01:22:02.551357 7f05cc8dc4c0 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
2016-04-28 01:22:02.551390 7f05cc8dc4c0 0 set uid:gid to 167:167 (ceph:ceph)
2016-04-28 01:22:02.551445 7f05cc8dc4c0 0 ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9), process ceph-mon, pid 687
2016-04-28 01:22:02.552810 7f05cc8dc4c0 0 pidfile_write: ignore empty --pid-file
2016-04-28 01:22:02.778811 7f05cc8dc4c0 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
2016-04-28 01:22:03.080880 7f05cc8dc4c0 1 leveldb: Recovering log #315
2016-04-28 01:22:03.216695 7f05cc8dc4c0 1 leveldb: Level-0 table #317: started
2016-04-28 01:22:03.325365 7f05cc8dc4c0 1 leveldb: Level-0 table #317: 624255 bytes OK
2016-04-28 01:22:03.544080 7f05cc8dc4c0 1 leveldb: Delete type=0 #315

2016-04-28 01:22:03.624695 7f05cc8dc4c0 1 leveldb: Delete type=3 #313

2016-04-28 01:22:03.929789 7f05cc8dc4c0 0 starting mon.ceph4 rank 0 at [2001:470:20ed:26::4]:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph4 fsid fd797c45-3bc6-4185-a057-3ec84cbf99c5
2016-04-28 01:22:03.930054 7f05cc8dc4c0 -1 accepter.accepter.bind unable to bind to [2001:470:20ed:26::4]:6789: (99) Cannot assign requested address
2016-04-28 01:22:03.930071 7f05cc8dc4c0 -1 accepter.accepter.bind was unable to bind. Trying again in 5 seconds
2016-04-28 01:22:08.930490 7f05cc8dc4c0 1 mon.ceph4@-1(probing) e1 preinit fsid fd797c45-3bc6-4185-a057-3ec84cbf99c5
2016-04-28 01:22:08.948984 7f05cc8dc4c0 1 mon.ceph4@-1(probing).paxosservice(pgmap 1252..1902) refresh upgraded, format 0 -> 1
2016-04-28 01:22:08.949004 7f05cc8dc4c0 1 mon.ceph4@-1(probing).pg v0 on_upgrade discarding in-core PGMap
2016-04-28 01:22:08.950349 7f05cc8dc4c0 0 mon.ceph4@-1(probing).mds e34 print_map
e34
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 34
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 38
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {0=54097}
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
54097: [2001:470:20ed:26::4]:6800/683 'ceph4' mds.0.6 up:active seq 5

2016-04-28 01:22:08.950619 7f05cc8dc4c0 0 mon.ceph4@-1(probing).osd e71 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:22:08.950628 7f05cc8dc4c0 0 mon.ceph4@-1(probing).osd e71 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:22:08.950632 7f05cc8dc4c0 0 mon.ceph4@-1(probing).osd e71 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:22:08.950634 7f05cc8dc4c0 0 mon.ceph4@-1(probing).osd e71 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:22:08.987337 7f05cc8dc4c0 1 mon.ceph4@-1(probing).paxosservice(auth 1..49) refresh upgraded, format 0 -> 1
2016-04-28 01:22:08.989223 7f05cc8dc4c0 0 mon.ceph4@-1(probing) e1 my rank is now 0 (was -1)
2016-04-28 01:22:08.989241 7f05cc8dc4c0 1 mon.ceph4@0(probing) e1 win_standalone_election
2016-04-28 01:22:08.989275 7f05cc8dc4c0 1 mon.ceph4@0(probing).elector(9) init, last seen epoch 9
2016-04-28 01:22:09.027702 7f05cc8dc4c0 0 log_channel(cluster) log [INF] : mon.ceph4@0 won leader election with quorum 0
2016-04-28 01:22:09.027825 7f05cc8dc4c0 0 log_channel(cluster) log [INF] : monmap e1: 1 mons at {ceph4=[2001:470:20ed:26::4]:6789/0}
2016-04-28 01:22:09.027874 7f05cc8dc4c0 0 log_channel(cluster) log [INF] : pgmap v1902: 208 pgs: 8 creating, 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:22:09.027921 7f05cc8dc4c0 0 log_channel(cluster) log [INF] : fsmap e34: 1/1/1 up {1:0=ceph4=up:active}
2016-04-28 01:22:09.027984 7f05cc8dc4c0 0 log_channel(cluster) log [INF] : osdmap e71: 2 osds: 2 up, 2 in
2016-04-28 01:22:09.385047 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:22:09.385143 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/971915677' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:22:09.385288 7f05c4720700 0 mon.ceph4@0(leader).osd e71 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:22:10.702885 7f05c66f5700 1 mon.ceph4@0(leader).osd e72 e72: 2 osds: 2 up, 2 in
2016-04-28 01:22:10.777707 7f05c66f5700 0 log_channel(cluster) log [INF] : osdmap e72: 2 osds: 2 up, 2 in
2016-04-28 01:22:10.811408 7f05c66f5700 0 log_channel(cluster) log [INF] : pgmap v1903: 208 pgs: 8 creating, 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:22:11.436073 7f05c66f5700 0 mon.ceph4@0(leader).mds e35 print_map
e35
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 35
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 72
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {}
failed 0
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
Standby daemons:
74098: [2001:470:20ed:26::4]:6800/690 'ceph4' mds.-1.0 up:standby seq 1

2016-04-28 01:22:11.436366 7f05c66f5700 0 log_channel(cluster) log [INF] : mds.? [2001:470:20ed:26::4]:6800/690 up:boot
2016-04-28 01:22:11.436438 7f05c66f5700 0 mon.ceph4@0(leader).mds e35 taking over failed mds.0 with 74098/ceph4 [2001:470:20ed:26::4]:6800/690
2016-04-28 01:22:11.469167 7f05c66f5700 0 log_channel(cluster) log [INF] : fsmap e35: 0/1/1 up, 1 up:standby, 1 failed
2016-04-28 01:22:11.494376 7f05c66f5700 0 mon.ceph4@0(leader).mds e36 print_map
e36
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 36
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 72
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {0=74098}
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
74098: [2001:470:20ed:26::4]:6800/690 'ceph4' mds.0.7 up:replay seq 1

2016-04-28 01:22:11.494828 7f05c66f5700 0 log_channel(cluster) log [INF] : fsmap e36: 1/1/1 up {1:0=ceph4=up:replay}
2016-04-28 01:22:11.621111 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:22:11.621182 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/822632061' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:22:11.621349 7f05c4720700 0 mon.ceph4@0(leader).osd e72 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:22:28.466738 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:22:28.466807 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2710220881' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:22:28.466930 7f05c4720700 0 mon.ceph4@0(leader).osd e72 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:22:30.232825 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:22:30.232886 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2030598261' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:22:30.233023 7f05c4720700 0 mon.ceph4@0(leader).osd e72 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:22:44.207117 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:22:44.207145 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1012093945' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:22:47.171899 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:22:47.171955 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2703782582' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:22:47.172071 7f05c4720700 0 mon.ceph4@0(leader).osd e72 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:22:48.682963 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:22:48.683028 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1245884808' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:22:48.683170 7f05c4720700 0 mon.ceph4@0(leader).osd e72 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:22:49.311228 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:22:49.311254 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2001081571' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:23:04.897147 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:23:04.897208 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2317684714' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:23:04.897345 7f05c4720700 0 mon.ceph4@0(leader).osd e72 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:23:07.020651 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:23:07.020718 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2759652731' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:23:07.020851 7f05c4720700 0 mon.ceph4@0(leader).osd e72 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:23:08.989715 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1867 MB, avail 286 GB
2016-04-28 01:24:06.832722 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"pool2": "bench", "prefix": "osd pool rm", "pool": "bench"} v 0) v1
2016-04-28 01:24:06.832781 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2650269378' entity='client.admin' cmd=[{"pool2": "bench", "prefix": "osd pool rm", "pool": "bench"}]: dispatch
2016-04-28 01:24:08.990196 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1868 MB, avail 286 GB
2016-04-28 01:24:11.671461 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"pool2": "bench", "prefix": "osd pool rm", "sure": "--yes-i-really-really-mean-it", "pool": "bench"} v 0) v1
2016-04-28 01:24:11.671536 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1709637203' entity='client.admin' cmd=[{"pool2": "bench", "prefix": "osd pool rm", "sure": "--yes-i-really-really-mean-it", "pool": "bench"}]: dispatch
2016-04-28 01:24:11.812148 7f05c66f5700 1 mon.ceph4@0(leader).osd e73 e73: 2 osds: 2 up, 2 in
2016-04-28 01:24:11.853784 7f05c66f5700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1709637203' entity='client.admin' cmd='[{"pool2": "bench", "prefix": "osd pool rm", "sure": "--yes-i-really-really-mean-it", "pool": "bench"}]': finished
2016-04-28 01:24:11.886988 7f05c66f5700 0 log_channel(cluster) log [INF] : osdmap e73: 2 osds: 2 up, 2 in
2016-04-28 01:24:11.921115 7f05c66f5700 0 log_channel(cluster) log [INF] : pgmap v1904: 200 pgs: 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:24:13.502630 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:24:13.502666 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3181177477' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:24:31.702358 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:24:31.702431 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/190633359' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:24:31.702570 7f05c4720700 0 mon.ceph4@0(leader).osd e73 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:24:34.436798 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:24:34.436862 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1846938236' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:24:34.437017 7f05c4720700 0 mon.ceph4@0(leader).osd e73 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:24:36.836989 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:24:36.837018 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/730907101' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:24:41.220972 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:24:41.220997 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3117022887' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:24:44.974267 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:24:44.974296 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2537201443' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:24:49.476877 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:24:49.476937 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1485192004' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:24:49.477093 7f05c4720700 0 mon.ceph4@0(leader).osd e73 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:24:52.769766 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:24:52.769836 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3663510728' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:24:52.769971 7f05c4720700 0 mon.ceph4@0(leader).osd e73 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:24:56.029956 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:24:56.029985 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3551250897' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:25:06.739260 7f05c66f5700 1 mon.ceph4@0(leader).osd e74 e74: 2 osds: 2 up, 2 in
2016-04-28 01:25:06.822234 7f05c66f5700 0 log_channel(cluster) log [INF] : osdmap e74: 2 osds: 2 up, 2 in
2016-04-28 01:25:06.864294 7f05c66f5700 0 log_channel(cluster) log [INF] : pgmap v1905: 200 pgs: 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:25:06.930855 7f05c66f5700 0 mon.ceph4@0(leader).mds e37 print_map
e37
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 37
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 74
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {}
failed 0
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled

2016-04-28 01:25:06.931113 7f05c66f5700 0 log_channel(cluster) log [INF] : mds.0 [2001:470:20ed:26::4]:6800/690 down:dne
2016-04-28 01:25:06.931217 7f05c66f5700 0 log_channel(cluster) log [INF] : fsmap e37: 0/1/1 up, 1 failed
2016-04-28 01:25:07.046949 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:25:07.047012 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2304911065' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:25:07.047161 7f05c4720700 0 mon.ceph4@0(leader).osd e74 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:25:08.997691 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1869 MB, avail 286 GB
2016-04-28 01:25:10.890126 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:25:10.890190 7f05c4720700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3297128259' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:25:10.890360 7f05c4720700 0 mon.ceph4@0(leader).osd e74 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:26:08.998012 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1869 MB, avail 286 GB
2016-04-28 01:26:37.078641 7f05c66f5700 0 mon.ceph4@0(leader).mds e38 print_map
e38
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 37
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 74
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {}
failed 0
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
Standby daemons:
74121: [2001:470:20ed:26::4]:6800/2940 'ceph4' mds.-1.0 up:standby seq 1

2016-04-28 01:26:37.078895 7f05c66f5700 0 log_channel(cluster) log [INF] : mds.? [2001:470:20ed:26::4]:6800/2940 up:boot
2016-04-28 01:26:37.078980 7f05c66f5700 0 mon.ceph4@0(leader).mds e38 taking over failed mds.0 with 74121/ceph4 [2001:470:20ed:26::4]:6800/2940
2016-04-28 01:26:37.111795 7f05c66f5700 0 log_channel(cluster) log [INF] : fsmap e38: 0/1/1 up, 1 up:standby, 1 failed
2016-04-28 01:26:37.145318 7f05c66f5700 0 mon.ceph4@0(leader).mds e39 print_map
e39
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 39
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 74
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {0=74121}
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
74121: [2001:470:20ed:26::4]:6800/2940 'ceph4' mds.0.8 up:replay seq 1

2016-04-28 01:26:37.145558 7f05c66f5700 0 log_channel(cluster) log [INF] : fsmap e39: 1/1/1 up {1:0=ceph4=up:replay}
2016-04-28 01:27:08.998309 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1869 MB, avail 286 GB
2016-04-28 01:27:37.654444 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:27:37.654480 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/4261945254' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:27:48.038608 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree"} v 0) v1
2016-04-28 01:27:48.038676 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3666506729' entity='client.admin' cmd=[{"prefix": "osd crush tree"}]: dispatch
2016-04-28 01:28:08.998600 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1869 MB, avail 286 GB
2016-04-28 01:29:08.998865 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1869 MB, avail 286 GB
2016-04-28 01:30:08.999127 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1869 MB, avail 286 GB
2016-04-28 01:31:08.999388 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1869 MB, avail 286 GB
2016-04-28 01:32:08.999648 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1869 MB, avail 286 GB
2016-04-28 01:33:08.999909 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:34:09.000170 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:35:09.000430 7f05c4f21700 0 mon.ceph4@0(leader).data_health(10) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:35:13.096983 7f05c4720700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:35:13.097019 7f05c4720700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2937186848' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:35:43.300943 7f05c2f1d700 -1 mon.ceph4@0(leader) e1 *** Got Signal Terminated ***
2016-04-28 01:35:43.300975 7f05c2f1d700 1 mon.ceph4@0(leader) e1 shutdown
2016-04-28 01:35:43.301033 7f05c2f1d700 0 quorum service shutdown
2016-04-28 01:35:43.301036 7f05c2f1d700 0 mon.ceph4@0(shutdown).health(10) HealthMonitor::service_shutdown 1 services
2016-04-28 01:35:43.301039 7f05c2f1d700 0 quorum service shutdown
2016-04-28 01:35:47.675019 7f66fd61d4c0 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
2016-04-28 01:35:47.675032 7f66fd61d4c0 0 set uid:gid to 167:167 (ceph:ceph)
2016-04-28 01:35:47.675054 7f66fd61d4c0 0 ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9), process ceph-mon, pid 3223
2016-04-28 01:35:47.675119 7f66fd61d4c0 0 pidfile_write: ignore empty --pid-file
2016-04-28 01:35:47.706785 7f66fd61d4c0 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
2016-04-28 01:35:47.715945 7f66fd61d4c0 1 leveldb: Recovering log #318
2016-04-28 01:35:47.718677 7f66fd61d4c0 1 leveldb: Level-0 table #320: started
2016-04-28 01:35:47.806704 7f66fd61d4c0 1 leveldb: Level-0 table #320: 961021 bytes OK
2016-04-28 01:35:47.881727 7f66fd61d4c0 1 leveldb: Delete type=0 #318

2016-04-28 01:35:47.882078 7f66fd61d4c0 1 leveldb: Delete type=3 #316

2016-04-28 01:35:47.882579 7f66fd61d4c0 0 starting mon.ceph4 rank 0 at [2001:470:20ed:26::4]:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph4 fsid fd797c45-3bc6-4185-a057-3ec84cbf99c5
2016-04-28 01:35:47.882879 7f66fd61d4c0 1 mon.ceph4@-1(probing) e1 preinit fsid fd797c45-3bc6-4185-a057-3ec84cbf99c5
2016-04-28 01:35:47.883227 7f66fd61d4c0 1 mon.ceph4@-1(probing).paxosservice(pgmap 1252..1905) refresh upgraded, format 0 -> 1
2016-04-28 01:35:47.883237 7f66fd61d4c0 1 mon.ceph4@-1(probing).pg v0 on_upgrade discarding in-core PGMap
2016-04-28 01:35:47.885297 7f66fd61d4c0 0 mon.ceph4@-1(probing).mds e39 print_map
e39
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 39
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 74
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {0=74121}
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
74121: [2001:470:20ed:26::4]:6800/2940 'ceph4' mds.0.8 up:replay seq 1

2016-04-28 01:35:47.885567 7f66fd61d4c0 0 mon.ceph4@-1(probing).osd e74 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:35:47.885575 7f66fd61d4c0 0 mon.ceph4@-1(probing).osd e74 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:35:47.885579 7f66fd61d4c0 0 mon.ceph4@-1(probing).osd e74 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:35:47.885581 7f66fd61d4c0 0 mon.ceph4@-1(probing).osd e74 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:35:47.885926 7f66fd61d4c0 1 mon.ceph4@-1(probing).paxosservice(auth 1..51) refresh upgraded, format 0 -> 1
2016-04-28 01:35:47.887756 7f66fd61d4c0 0 mon.ceph4@-1(probing) e1 my rank is now 0 (was -1)
2016-04-28 01:35:47.887776 7f66fd61d4c0 1 mon.ceph4@0(probing) e1 win_standalone_election
2016-04-28 01:35:47.887806 7f66fd61d4c0 1 mon.ceph4@0(probing).elector(10) init, last seen epoch 10
2016-04-28 01:35:47.987026 7f66fd61d4c0 0 log_channel(cluster) log [INF] : mon.ceph4@0 won leader election with quorum 0
2016-04-28 01:35:47.987125 7f66fd61d4c0 0 log_channel(cluster) log [INF] : monmap e1: 1 mons at {ceph4=[2001:470:20ed:26::4]:6789/0}
2016-04-28 01:35:47.987166 7f66fd61d4c0 0 log_channel(cluster) log [INF] : pgmap v1905: 200 pgs: 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:35:47.987213 7f66fd61d4c0 0 log_channel(cluster) log [INF] : fsmap e39: 1/1/1 up {1:0=ceph4=up:replay}
2016-04-28 01:35:47.987272 7f66fd61d4c0 0 log_channel(cluster) log [INF] : osdmap e74: 2 osds: 2 up, 2 in
2016-04-28 01:35:50.363000 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:35:50.363044 7f66f53da700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2699852328' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:36:09.978096 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:36:09.978132 7f66f53da700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2805365276' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:36:16.455306 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
2016-04-28 01:36:16.455359 7f66f53da700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2194465755' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2016-04-28 01:36:37.929561 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
2016-04-28 01:36:37.929606 7f66f53da700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1150410492' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2016-04-28 01:36:47.887482 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1869 MB, avail 286 GB
2016-04-28 01:37:34.246381 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd blacklist clear"} v 0) v1
2016-04-28 01:37:34.246432 7f66f53da700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/237431323' entity='client.admin' cmd=[{"prefix": "osd blacklist clear"}]: dispatch
2016-04-28 01:37:34.369524 7f66f7436700 1 mon.ceph4@0(leader).osd e75 e75: 2 osds: 2 up, 2 in
2016-04-28 01:37:34.402816 7f66f7436700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/237431323' entity='client.admin' cmd='[{"prefix": "osd blacklist clear"}]': finished
2016-04-28 01:37:34.435951 7f66f7436700 0 log_channel(cluster) log [INF] : osdmap e75: 2 osds: 2 up, 2 in
2016-04-28 01:37:34.470063 7f66f7436700 0 log_channel(cluster) log [INF] : pgmap v1906: 200 pgs: 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:37:36.266797 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
2016-04-28 01:37:36.266845 7f66f53da700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/310131699' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2016-04-28 01:37:47.887868 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:38:47.888220 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:38:54.734959 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:38:54.735031 7f66f53da700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3960537116' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:38:54.735178 7f66f53da700 0 mon.ceph4@0(leader).osd e75 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:39:14.186773 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:39:14.186847 7f66f53da700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2883443978' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:39:14.186997 7f66f53da700 0 mon.ceph4@0(leader).osd e75 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:39:26.680514 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:39:26.680587 7f66f53da700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3101418897' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:39:26.680744 7f66f53da700 0 mon.ceph4@0(leader).osd e75 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:39:47.888511 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:40:47.888831 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:41:47.889093 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:42:47.889388 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:43:47.889654 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:44:47.136530 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:44:47.136565 7f66f53da700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/759715760' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:44:47.889904 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:45:06.389405 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd getmap"} v 0) v1
2016-04-28 01:45:06.389450 7f66f53da700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/976925897' entity='client.admin' cmd=[{"prefix": "osd getmap"}]: dispatch
2016-04-28 01:45:14.262673 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd getmap"} v 0) v1
2016-04-28 01:45:14.262719 7f66f53da700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1422798310' entity='client.admin' cmd=[{"prefix": "osd getmap"}]: dispatch
2016-04-28 01:45:47.890161 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:46:47.890424 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:47:47.890686 7f66f5bdb700 0 mon.ceph4@0(leader).data_health(11) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:48:35.297185 7f66f53da700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:48:35.297221 7f66f53da700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1684650752' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:48:44.671660 7f66f3bd7700 -1 mon.ceph4@0(leader) e1 *** Got Signal Terminated ***
2016-04-28 01:48:44.671681 7f66f3bd7700 1 mon.ceph4@0(leader) e1 shutdown
2016-04-28 01:48:44.671737 7f66f3bd7700 0 quorum service shutdown
2016-04-28 01:48:44.671739 7f66f3bd7700 0 mon.ceph4@0(shutdown).health(11) HealthMonitor::service_shutdown 1 services
2016-04-28 01:48:44.671742 7f66f3bd7700 0 quorum service shutdown
2016-04-28 01:49:08.424997 7f1c755154c0 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
2016-04-28 01:49:08.425010 7f1c755154c0 0 set uid:gid to 167:167 (ceph:ceph)
2016-04-28 01:49:08.425035 7f1c755154c0 0 ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9), process ceph-mon, pid 4193
2016-04-28 01:49:08.425097 7f1c755154c0 0 pidfile_write: ignore empty --pid-file
2016-04-28 01:49:08.457215 7f1c755154c0 -1 WARNING: the following dangerous and experimental features are enabled: bluestore,rocksdb
2016-04-28 01:49:08.466543 7f1c755154c0 1 leveldb: Recovering log #321
2016-04-28 01:49:08.467803 7f1c755154c0 1 leveldb: Level-0 table #323: started
2016-04-28 01:49:08.539303 7f1c755154c0 1 leveldb: Level-0 table #323: 471804 bytes OK
2016-04-28 01:49:08.605929 7f1c755154c0 1 leveldb: Delete type=0 #321

2016-04-28 01:49:08.606153 7f1c755154c0 1 leveldb: Delete type=3 #319

2016-04-28 01:49:08.606313 7f1c6f32e700 1 leveldb: Compacting 4@0 + 2@1 files
2016-04-28 01:49:08.607018 7f1c755154c0 0 starting mon.ceph4 rank 0 at [2001:470:20ed:26::4]:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph4 fsid fd797c45-3bc6-4185-a057-3ec84cbf99c5
2016-04-28 01:49:08.608915 7f1c755154c0 1 mon.ceph4@-1(probing) e1 preinit fsid fd797c45-3bc6-4185-a057-3ec84cbf99c5
2016-04-28 01:49:08.609467 7f1c755154c0 1 mon.ceph4@-1(probing).paxosservice(pgmap 1252..1906) refresh upgraded, format 0 -> 1
2016-04-28 01:49:08.609481 7f1c755154c0 1 mon.ceph4@-1(probing).pg v0 on_upgrade discarding in-core PGMap
2016-04-28 01:49:08.611286 7f1c755154c0 0 mon.ceph4@-1(probing).mds e39 print_map
e39
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 39
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 74
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {0=74121}
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
74121: [2001:470:20ed:26::4]:6800/2940 'ceph4' mds.0.8 up:replay seq 1

2016-04-28 01:49:08.611725 7f1c755154c0 0 mon.ceph4@-1(probing).osd e75 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:49:08.611731 7f1c755154c0 0 mon.ceph4@-1(probing).osd e75 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:49:08.611735 7f1c755154c0 0 mon.ceph4@-1(probing).osd e75 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:49:08.611738 7f1c755154c0 0 mon.ceph4@-1(probing).osd e75 crush map has features 283675107524608, adjusting msgr requires
2016-04-28 01:49:08.612155 7f1c755154c0 1 mon.ceph4@-1(probing).paxosservice(auth 1..53) refresh upgraded, format 0 -> 1
2016-04-28 01:49:08.614330 7f1c755154c0 0 mon.ceph4@-1(probing) e1 my rank is now 0 (was -1)
2016-04-28 01:49:08.614353 7f1c755154c0 1 mon.ceph4@0(probing) e1 win_standalone_election
2016-04-28 01:49:08.614393 7f1c755154c0 1 mon.ceph4@0(probing).elector(11) init, last seen epoch 11
2016-04-28 01:49:08.744534 7f1c755154c0 0 log_channel(cluster) log [INF] : mon.ceph4@0 won leader election with quorum 0
2016-04-28 01:49:08.744642 7f1c755154c0 0 log_channel(cluster) log [INF] : monmap e1: 1 mons at {ceph4=[2001:470:20ed:26::4]:6789/0}
2016-04-28 01:49:08.744679 7f1c755154c0 0 log_channel(cluster) log [INF] : pgmap v1906: 200 pgs: 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:49:08.744723 7f1c755154c0 0 log_channel(cluster) log [INF] : fsmap e39: 1/1/1 up {1:0=ceph4=up:replay}
2016-04-28 01:49:08.744778 7f1c755154c0 0 log_channel(cluster) log [INF] : osdmap e75: 2 osds: 2 up, 2 in
2016-04-28 01:49:08.744962 7f1c6f32e700 1 leveldb: Generated table #325: 306 keys, 2142970 bytes
2016-04-28 01:49:08.922768 7f1c6f32e700 1 leveldb: Generated table #326: 239 keys, 2168405 bytes
2016-04-28 01:49:09.022712 7f1c6f32e700 1 leveldb: Generated table #327: 1018 keys, 2072000 bytes
2016-04-28 01:49:09.022741 7f1c6f32e700 1 leveldb: Compacted 4@0 + 2@1 files => 6383375 bytes
2016-04-28 01:49:09.063889 7f1c6f32e700 1 leveldb: compacted to: files[ 0 3 5 0 0 0 0 ]
2016-04-28 01:49:09.064099 7f1c6f32e700 1 leveldb: Delete type=2 #306

2016-04-28 01:49:09.064587 7f1c6f32e700 1 leveldb: Delete type=2 #307

2016-04-28 01:49:09.064812 7f1c6f32e700 1 leveldb: Delete type=2 #314

2016-04-28 01:49:09.065197 7f1c6f32e700 1 leveldb: Delete type=2 #317

2016-04-28 01:49:09.065422 7f1c6f32e700 1 leveldb: Delete type=2 #320

2016-04-28 01:49:09.065680 7f1c6f32e700 1 leveldb: Delete type=2 #323

2016-04-28 01:49:10.375911 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:49:10.375953 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3741090642' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:49:22.155681 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:49:22.155717 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/257266947' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:49:27.280813 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
2016-04-28 01:49:27.280884 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3567410673' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2016-04-28 01:49:28.720164 7f1c6eb2d700 0 mon.ceph4@0(leader).mds e40 print_map
e40
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 40
flags 0
created 2016-04-26 20:17:34.661005
modified 2016-04-26 20:17:34.661005
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 74
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {0=74121}
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
74121: [2001:470:20ed:26::4]:6800/2940 'ceph4' mds.0.8 up:replay seq 1 laggy since 2016-04-28 01:49:28.655487

2016-04-28 01:49:28.720379 7f1c6eb2d700 0 log_channel(cluster) log [INF] : fsmap e40: 1/1/1 up {1:0=ceph4=up:replay(laggy or crashed)}
2016-04-28 01:50:01.175731 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd down", "ids": ["osd.0"]} v 0) v1
2016-04-28 01:50:01.175801 7f1c6c7ba700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/320944031' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["osd.0"]}]: dispatch
2016-04-28 01:50:01.299993 7f1c6eb2d700 1 mon.ceph4@0(leader).osd e76 e76: 2 osds: 1 up, 2 in
2016-04-28 01:50:01.325133 7f1c6eb2d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/320944031' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["osd.0"]}]': finished
2016-04-28 01:50:01.366385 7f1c6eb2d700 0 log_channel(cluster) log [INF] : osdmap e76: 2 osds: 1 up, 2 in
2016-04-28 01:50:01.408545 7f1c6eb2d700 0 log_channel(cluster) log [INF] : pgmap v1907: 200 pgs: 200 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:50:03.246419 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:50:03.246456 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3668837482' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:50:06.677351 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd down", "ids": ["osd.1"]} v 0) v1
2016-04-28 01:50:06.677406 7f1c6c7ba700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2573385851' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["osd.1"]}]: dispatch
2016-04-28 01:50:06.799337 7f1c6eb2d700 1 mon.ceph4@0(leader).osd e77 e77: 2 osds: 0 up, 2 in
2016-04-28 01:50:06.832776 7f1c6eb2d700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/2573385851' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["osd.1"]}]': finished
2016-04-28 01:50:06.890855 7f1c6eb2d700 0 log_channel(cluster) log [INF] : osdmap e77: 2 osds: 0 up, 2 in
2016-04-28 01:50:06.925885 7f1c6eb2d700 0 log_channel(cluster) log [INF] : pgmap v1908: 200 pgs: 92 stale+active+clean, 108 active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:50:08.342246 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:50:08.342282 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3710518993' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:50:08.613931 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1870 MB, avail 286 GB
2016-04-28 01:50:08.757381 7f1c6eb2d700 1 mon.ceph4@0(leader).osd e78 e78: 2 osds: 0 up, 2 in
2016-04-28 01:50:08.823947 7f1c6eb2d700 0 log_channel(cluster) log [INF] : osdmap e78: 2 osds: 0 up, 2 in
2016-04-28 01:50:08.858688 7f1c6eb2d700 0 log_channel(cluster) log [INF] : pgmap v1909: 200 pgs: 200 stale+active+clean; 146 GB data, 331 GB used, 600 GB / 931 GB avail
2016-04-28 01:50:33.838512 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:50:33.838585 7f1c6c7ba700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1671244163' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:50:33.838765 7f1c6c7ba700 0 mon.ceph4@0(leader).osd e78 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:50:46.963777 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:50:46.963844 7f1c6c7ba700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3626764681' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:50:46.963983 7f1c6c7ba700 0 mon.ceph4@0(leader).osd e78 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:50:52.042637 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:50:52.042671 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/797454485' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:50:53.477086 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:50:53.477160 7f1c6c7ba700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1716404354' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:50:53.477300 7f1c6c7ba700 0 mon.ceph4@0(leader).osd e78 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:51:02.114082 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:51:02.114111 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/4120708433' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:51:08.614256 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1871 MB, avail 286 GB
2016-04-28 01:51:12.203938 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001} v 0) v1
2016-04-28 01:51:12.203999 7f1c6c7ba700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/1416630478' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 0, "weight": 0.0001}]: dispatch
2016-04-28 01:51:12.204122 7f1c6c7ba700 0 mon.ceph4@0(leader).osd e78 create-or-move crush item name 'osd.0' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:51:12.491525 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001} v 0) v1
2016-04-28 01:51:12.491592 7f1c6c7ba700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/401072586' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph4", "root=default"], "id": 1, "weight": 0.0001}]: dispatch
2016-04-28 01:51:12.491748 7f1c6c7ba700 0 mon.ceph4@0(leader).osd e78 create-or-move crush item name 'osd.1' initial_weight 0.0001 at location {host=ceph4,root=default}
2016-04-28 01:51:28.611032 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:51:28.611069 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2515702369' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:52:08.614676 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 01:52:50.255187 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:52:50.255222 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1252174104' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:53:08.614945 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 01:54:08.615223 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 01:54:17.507911 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
2016-04-28 01:54:17.507955 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/3033683654' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
2016-04-28 01:55:08.615491 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 01:55:26.878297 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "mon scrub"} v 0) v1
2016-04-28 01:55:26.878334 7f1c6c7ba700 0 log_channel(audit) log [INF] : from='client.? [2001:470:20ed:26::4]:0/3717855929' entity='client.admin' cmd=[{"prefix": "mon scrub"}]: dispatch
2016-04-28 01:55:29.333392 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:55:29.333428 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/1229997440' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:55:39.459742 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:55:39.459777 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/2573972251' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:55:44.868105 7f1c6c7ba700 0 mon.ceph4@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-28 01:55:44.868141 7f1c6c7ba700 0 log_channel(audit) log [DBG] : from='client.? [2001:470:20ed:26::4]:0/4217940534' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2016-04-28 01:56:08.615847 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 01:57:08.616078 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 01:58:08.616343 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 01:59:08.616592 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:00:08.616862 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:01:08.617174 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:02:08.617442 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:03:08.617690 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:04:08.617941 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:05:08.618190 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:06:08.618577 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:07:08.618832 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:08:08.619124 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:09:08.619403 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:10:08.619673 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:11:08.619948 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:12:08.620218 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:13:08.620485 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:14:08.620770 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:15:08.621028 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:16:08.621291 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:17:08.621560 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:18:08.621833 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:19:08.622091 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:20:08.622345 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:21:08.622629 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:22:08.622891 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:23:08.623157 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:24:08.623423 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:25:08.623688 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:26:08.623959 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB
2016-04-28 02:27:08.624218 7f1c6cfbb700 0 mon.ceph4@0(leader).data_health(12) update_stats avail 98% total 290 GB, used 1872 MB, avail 286 GB