Bug #4703 (closed): ceph health hangs when upgrading from bobtail to next branch

Added by Tamilarasi muthamizhan about 11 years ago. Updated almost 11 years ago.

Status: Can't reproduce
Priority: Urgent
Assignee:
Category: Monitor
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When upgrading from bobtail to next [ceph version 0.60-451-g3888a12], restarting all daemons at once [sudo service ceph -a restart after the upgrade] works fine. Strangely, upgrading the monitors first, then the OSDs and the MDS, causes the ceph health command to hang, and all I see in the monitor logs is:

2013-04-10 14:28:01.475513 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:01.475527 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47636 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:01.475572 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47636 s=1 pgs=0 cs=0 l=0).fault
2013-04-10 14:28:01.476053 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:01.476063 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47637 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:01.676928 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:01.676942 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47638 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:02.077732 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:02.077746 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47639 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:02.878605 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:02.878618 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47640 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:03.475230 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:03.475243 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47641 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:05.475256 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:05.475269 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47642 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:07.475500 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:07.475513 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47643 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:09.475658 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:09.475672 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47644 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:11.475794 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:11.475807 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47645 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:13.475955 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:13.475969 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47646 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:15.476074 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:15.476088 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47647 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:17.476239 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:17.476253 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47648 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:19.476374 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:19.476387 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47649 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:21.476527 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:21.476540 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47653 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:23.476658 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:23.476671 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47654 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:25.476815 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:25.476828 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47655 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:26.475037 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:26.475129 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:26.475180 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:26.475210 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:26.475255 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:36.475623 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:36.475656 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:36.475663 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:36.475670 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:36.475676 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:37.477708 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:37.477722 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47699 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:39.477872 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:39.477884 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47700 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:41.475841 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:41.475869 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:41.475876 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:41.475883 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:41.475892 7ff0bf7fe700  1 mon.a@1(probing) e1 discarding message auth(proto 0 26 bytes epoch 1) v1 and sending client elsewhere; we are not in quorum
2013-04-10 14:28:41.478003 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
2013-04-10 14:28:41.478017 7ff0c8332700  0 -- 10.214.134.26:6789/0 >> 10.214.134.24:6789/0 pipe(0x3057410 sd=21 :47705 s=1 pgs=0 cs=0 l=0).failed verifying authorize reply
2013-04-10 14:28:43.478101 7ff0c8332700  0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption

ubuntu@burnupi13:~$ sudo service ceph stop mon.a
=== mon.a === 
Stopping Ceph mon.a on burnupi13...kill 8059...done
ubuntu@burnupi13:~$ sudo service ceph start mon.a
=== mon.a === 
Starting Ceph mon.a on burnupi13...
Invalid argument: /var/lib/ceph/mon/ceph-a/store.db: does not exist (create_if_missing is false)
starting mon.a rank 1 at 10.214.134.26:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid 4775b5a9-2dd6-4ca1-90a8-0928d3115b79
Starting ceph-create-keys on burnupi13...
ubuntu@burnupi13:~$ sudo service ceph stop osd.1
=== osd.1 === 
Stopping Ceph osd.1 on burnupi13...kill 8273...done
ubuntu@burnupi13:~$ sudo service ceph start osd.1
=== osd.1 === 
Starting Ceph osd.1 on burnupi13...
starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
ubuntu@burnupi13:~$ sudo service ceph restart osd.2
=== osd.2 === 
=== osd.2 === 
Stopping Ceph osd.2 on burnupi13...kill 8394...done
=== osd.2 === 
Starting Ceph osd.2 on burnupi13...
starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
ubuntu@burnupi13:~$ sudo service ceph restart mds.a
=== mds.a === 
=== mds.a === 
Stopping Ceph mds.a on burnupi13...kill 8189...done
=== mds.a === 
Starting Ceph mds.a on burnupi13...
starting mds.a at :/0

Also, when restarting the first monitor, the command output contains "Invalid argument: /var/lib/ceph/mon/ceph-a/store.db: does not exist (create_if_missing is false)". Do we really need to print this out?
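
For reference, a minimal sketch of the rolling restart order described above, plus one way to query a monitor locally while ceph health hangs. This assumes the sysvinit service names used on this cluster and the default admin-socket path:

# rolling restart: monitors first, then OSDs, then the MDS
sudo service ceph restart mon.a     # repeat for each monitor
sudo service ceph restart osd.1     # repeat for each OSD
sudo service ceph restart mds.a

# while 'ceph health' hangs, the local monitor can still be queried over its
# admin socket, which bypasses cephx and the normal client path
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status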

Actions #1

Updated by Tamilarasi muthamizhan about 11 years ago

Hit this issue on the burnupi13 and burnupi14 cluster. Leaving the setup as it is for now so you can take a look at the logs.

Actions #2

Updated by Joao Eduardo Luis about 11 years ago

  • Status changed from New to 4

Ah! mon.c (on burnupi14) is still running 0.56. The monitors will be unable to talk to each other unless they are all upgraded. Upgrading burnupi14 to 0.60 should fix it.
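
For completeness, a quick way to check which version each daemon is actually running (assuming the packaged sysvinit script and binaries on each host):

# reports the running version of every daemon defined in ceph.conf
sudo service ceph -a status

# or check the installed monitor binary and packages directly on each host
ceph-mon --version
dpkg -l | grep ceph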

Actions #3

Updated by Tamilarasi muthamizhan about 11 years ago

  • Status changed from 4 to In Progress

The problem still persists even after upgrading the whole cluster. The commands 'ceph -s' and 'ceph health' seem to hang some time after the upgrade.

ubuntu@burnupi13:~$ sudo service ceph -a status
=== mon.a ===
mon.a: running {"version":"0.60-467-g6b98162"}
=== mon.b ===
mon.b: running {"version":"0.60-467-g6b98162"}
=== mon.c ===
mon.c: running {"version":"0.60-467-g6b98162"}
=== mds.a ===
mds.a: running {"version":"0.60-467-g6b98162"}
=== osd.1 ===
osd.1: running {"version":"0.60-467-g6b98162"}
=== osd.2 ===
osd.2: running {"version":"0.60-467-g6b98162"}
=== osd.3 ===
osd.3: running {"version":"0.60-467-g6b98162"}
=== osd.4 ===
osd.4: running {"version":"0.60-467-g6b98162"}
ubuntu@burnupi13:~$ sudo ceph -w
health HEALTH_OK
monmap e1: 3 mons at {a=10.214.134.26:6789/0,b=10.214.134.24:6789/0,c=10.214.134.26:6790/0}, election epoch 12, quorum 0,1,2 a,b,c
osdmap e35: 4 osds: 4 up, 4 in
pgmap v357: 640 pgs: 640 active+clean; 8699 bytes data, 31065 MB used, 3389 GB / 3602 GB avail; 0B/s wr, 0op/s
mdsmap e26: 1/1/1 up {0=a=up:active}

2013-04-15 11:28:50.814409 mon.0 [INF] pgmap v357: 640 pgs: 640 active+clean; 8699 bytes data, 31065 MB used, 3389 GB / 3602 GB avail; 0B/s wr, 0op/s
^Cubuntu@burnupi13:~$ sudo ceph -s
health HEALTH_OK
monmap e1: 3 mons at {a=10.214.134.26:6789/0,b=10.214.134.24:6789/0,c=10.214.134.26:6790/0}, election epoch 12, quorum 0,1,2 a,b,c
osdmap e35: 4 osds: 4 up, 4 in
pgmap v357: 640 pgs: 640 active+clean; 8699 bytes data, 31065 MB used, 3389 GB / 3602 GB avail; 0B/s wr, 0op/s
mdsmap e26: 1/1/1 up {0=a=up:active}

ubuntu@burnupi13:~$ sudo ceph -s

^Cubuntu@burnupi13:~$
ubuntu@burnupi13:~$
ubuntu@burnupi13:~$ sudo ceph health
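
When the command hangs like this, turning up the client-side debug settings can show which monitor the client is stuck on (these are standard ceph CLI debug options, shown here only as a diagnostic sketch):

# verbose messenger/monclient/auth logging for the hung command; output goes to stderr
sudo ceph health --debug-ms 1 --debug-monc 20 --debug-auth 20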

Also, the monitor logs are heavily populated with the messages below. It would be nice to avoid this as well.

2013-04-15 11:23:40.563737 7f92f1ffb700 1 mon.c@2(peon).osd e32 e32: 4 osds: 4 up, 4 in
2013-04-15 11:23:41.441018 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:42.459122 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:42.459294 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:42.462255 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:43.477044 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:43.480264 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:45.512696 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:47.548309 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:48.565380 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:50.601384 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:52.634403 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:52.635723 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:55.688049 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:55.688848 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:56.706100 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:56.706390 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:56.706712 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:57.723860 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:57.724161 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:57.724268 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:58.741864 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:59.759440 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:23:59.759586 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:00.777467 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:02.812465 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:03.830300 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:04.847810 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:04.848282 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:04.848781 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:05.866552 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:06.883011 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:06.883293 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:06.884455 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:07.026962 7f92f1ffb700 1 mon.c@2(peon).osd e33 e33: 4 osds: 4 up, 4 in
2013-04-15 11:24:09.026849 7f92f27fc700 0 mon.c@2(peon).data_health(12) update_stats avail 93% total 944399796 used 8710952 avail 887716032
2013-04-15 11:24:09.942182 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:12.049386 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:12.049798 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:16.119532 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:18.104294 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:18.153572 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:20.186512 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:22.220457 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:23.191937 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:23.237257 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:24.210116 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:24.254025 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:25.194937 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
2013-04-15 11:24:25.228169 7f92f1ffb700 0 mon.c@2(peon) e1 handle_command mon_command(auth get-or-create client.admin mon allow * osd allow * mds allow v 0) v1
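
The flood of auth get-or-create client.admin commands looks like it could be the ceph-create-keys helper that the init script starts ("Starting ceph-create-keys on burnupi13..." above) retrying against a peon; that is only a guess from these logs. A quick way to check whether such a helper is still looping on a host:

# list any ceph-create-keys processes still running on this host
ps aux | grep [c]eph-create-keys

# if one is stuck retrying, the repeated auth commands should stop once it is killed
sudo kill <pid>        # <pid> taken from the ps output above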

Actions #4

Updated by Ian Colle almost 11 years ago

  • Assignee changed from Joao Eduardo Luis to Greg Farnum

Greg, can you please take a look at this?

Actions #5

Updated by Sage Weil almost 11 years ago

  • Assignee deleted (Greg Farnum)
Actions #6

Updated by Sage Weil almost 11 years ago

  • Assignee set to Greg Farnum
Actions #7

Updated by Sage Weil almost 11 years ago

  • Assignee changed from Greg Farnum to Sage Weil
Actions #8

Updated by Sage Weil almost 11 years ago

  • Status changed from In Progress to Can't reproduce

This appears to be resolved; unable to reproduce (whereas it used to be triggered pretty frequently).
