Bug #7692 (closed): mon: monitor fails to form quorum

Added by Alfredo Deza about 10 years ago. Updated about 10 years ago.

Status: Resolved
Priority: Urgent
Category: Monitor
Target version: -
% Done: 0%
Source: Q/A
Severity: 3 - minor

Description

For some reason the monitor is trying to connect to 0.0.0.0, which of course fails:

2014-03-12 05:22:07.058374 7fb95bcbc700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=-1 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).writer: state = connecting policy.server=0
2014-03-12 05:22:07.058406 7fb95bcbc700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=-1 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).connect 0
2014-03-12 05:22:07.058438 7fb95bcbc700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).connecting to 0.0.0.0:0/1
2014-03-12 05:22:07.058503 7fb954399700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=-1 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).writer: state = connecting policy.server=0
2014-03-12 05:22:07.058534 7fb954399700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=-1 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).connect 0
2014-03-12 05:22:07.058563 7fb954399700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).connecting to 0.0.0.0:0/2
2014-03-12 05:22:07.058600 7fb954298700 10 -- 10.214.138.102:6789/0 >> 10.214.138.121:6789/0 pipe(0x39f9680 sd=-1 :0 s=1 pgs=0 cs=0 l=0 c=0x3951a20).writer: state = connecting policy.server=0
2014-03-12 05:22:07.058633 7fb954298700 10 -- 10.214.138.102:6789/0 >> 10.214.138.121:6789/0 pipe(0x39f9680 sd=-1 :0 s=1 pgs=0 cs=0 l=0 c=0x3951a20).connect 0
2014-03-12 05:22:07.058661 7fb954298700 10 -- 10.214.138.102:6789/0 >> 10.214.138.121:6789/0 pipe(0x39f9680 sd=23 :0 s=1 pgs=0 cs=0 l=0 c=0x3951a20).connecting to 10.214.138.121:6789/0
2014-03-12 05:22:07.058781 7fb954197700 10 -- 10.214.138.102:6789/0 >> 10.214.138.140:6789/0 pipe(0x39f9180 sd=-1 :0 s=1 pgs=0 cs=0 l=0 c=0x3951760).writer: state = connecting policy.server=0
2014-03-12 05:22:07.058817 7fb954197700 10 -- 10.214.138.102:6789/0 >> 10.214.138.140:6789/0 pipe(0x39f9180 sd=-1 :0 s=1 pgs=0 cs=0 l=0 c=0x3951760).connect 0
2014-03-12 05:22:07.058846 7fb954197700 10 -- 10.214.138.102:6789/0 >> 10.214.138.140:6789/0 pipe(0x39f9180 sd=24 :0 s=1 pgs=0 cs=0 l=0 c=0x3951760).connecting to 10.214.138.140:6789/0
2014-03-12 05:22:07.058925 7fb954399700  2 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).connect error 0.0.0.0:0/2, 111: Connection refused
2014-03-12 05:22:07.059003 7fb954399700  2 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).fault 111: Connection refused
2014-03-12 05:22:07.059032 7fb954399700  0 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).fault
2014-03-12 05:22:07.059054 7fb954399700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).writer: state = connecting policy.server=0
2014-03-12 05:22:07.059075 7fb954399700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).connect 0
2014-03-12 05:22:07.059104 7fb954399700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).connecting to 0.0.0.0:0/2
2014-03-12 05:22:07.059186 7fb95bcbc700  2 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).connect error 0.0.0.0:0/1, 111: Connection refused
2014-03-12 05:22:07.059217 7fb95bcbc700  2 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).fault 111: Connection refused
2014-03-12 05:22:07.059239 7fb95bcbc700  0 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).fault
2014-03-12 05:22:07.059258 7fb95bcbc700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).writer: state = connecting policy.server=0
2014-03-12 05:22:07.059284 7fb95bcbc700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).connect 0
2014-03-12 05:22:07.059345 7fb95bcbc700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).connecting to 0.0.0.0:0/1
2014-03-12 05:22:07.059400 7fb95bcbc700  2 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).connect error 0.0.0.0:0/1, 111: Connection refused
2014-03-12 05:22:07.059429 7fb95bcbc700  2 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).fault 111: Connection refused
2014-03-12 05:22:07.059452 7fb95bcbc700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/1 pipe(0x39f8280 sd=21 :0 s=1 pgs=0 cs=0 l=0 c=0x3951080).fault waiting 0.200000
2014-03-12 05:22:07.059475 7fb954399700  2 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).connect error 0.0.0.0:0/2, 111: Connection refused
2014-03-12 05:22:07.059498 7fb954399700  2 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).fault 111: Connection refused
2014-03-12 05:22:07.059520 7fb954399700 10 -- 10.214.138.102:6789/0 >> 0.0.0.0:0/2 pipe(0x39f8a00 sd=22 :0 s=1 pgs=0 cs=0 l=0 c=0x3950dc0).fault waiting 0.200000

The above excerpt is from the mon log at http://qa-proxy.ceph.com/teuthology/teuthology-2014-03-12_01:10:21-ceph-deploy-firefly-distro-basic-vps/127125/remote/ubuntu@vpm052.front.sepia.ceph.com/log/

Other logs for the failed test: http://qa-proxy.ceph.com/teuthology/teuthology-2014-03-12_01:10:21-ceph-deploy-firefly-distro-basic-vps/127125/
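For reference, the monmap a monitor is actually using can be inspected through the same admin-socket mon_status command that ceph-deploy runs (see the log in comment #2 below). A minimal diagnostic sketch; the mon id (vpm052, taken from the log URL above) stands in for whichever host is being debugged, and the JSON field names follow the usual mon_status output:

import json
import subprocess

# Same invocation ceph-deploy issues in the logs below; adjust the
# mon id in the socket path for the host being debugged.
out = subprocess.check_output(
    ["sudo", "ceph", "--cluster=ceph",
     "--admin-daemon", "/var/run/ceph/ceph-mon.vpm052.asok", "mon_status"])
status = json.loads(out)

print(status["state"])  # e.g. "probing" while the mon has no quorum
for m in status["monmap"]["mons"]:
    # Peers this mon has not learned yet show up as 0.0.0.0:0/N,
    # matching the connect attempts in the excerpt above.
    print(m["rank"], m["name"], m["addr"])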

Actions #1

Updated by Joao Eduardo Luis about 10 years ago

  • Subject changed from monitor fails to form quorum, gets connection refused to mon: monitor fails to form quorum
  • Source changed from other to Q/A

At first glance, the problem has nothing to do with the 0.0.0.0 IP.

This looks like early deployment, so the mon has a blank monmap. It still connects to the other 2 monitors, as evidenced in the above excerpt -- those are on the mon_initial_members/mon_host list. The monitor should eventually form quorum, but Alfredo tells me that the other two monitors do form quorum and this one just never gets to participate, so I'm inclined to think something else is going on that requires further inspection.
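To make the blank-monmap state concrete: a mon bootstrapped from mon_initial_members/mon_host alone knows its peers' names but not yet their addresses, so they appear in its monmap as 0.0.0.0:0 until probing learns them. A rough classifier over the mon_status JSON, offered only as an illustrative sketch (field names per the usual mon_status output):

def diagnose(status):
    # Peers whose addresses have not been learned yet are listed
    # at 0.0.0.0:0/N in the monmap, as in the description's excerpt.
    unlearned = [m["name"] for m in status["monmap"]["mons"]
                 if m["addr"].startswith("0.0.0.0")]
    if status["state"] == "probing" and unlearned:
        return "still probing; unlearned peers: " + ", ".join(unlearned)
    if not status.get("quorum"):
        return "no quorum yet (state=%s)" % status["state"]
    return "in quorum, ranks %s" % status["quorum"]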

Actions #2

Updated by Alfredo Deza about 10 years ago

We are now seeing a more generalized failure where no monitor is able to form quorum (from ceph-deploy):

2014-03-14T01:33:46.637 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm035][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm035.asok mon_status
2014-03-14T01:33:46.754 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][INFO  ] processing monitor mon.vpm037
2014-03-14T01:33:46.779 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][DEBUG ] connected to host: vpm037
2014-03-14T01:33:46.779 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm037.asok mon_status
2014-03-14T01:33:46.898 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][ERROR ] admin_socket: exception getting command descriptions: [Errno 111] Connection refused
2014-03-14T01:33:46.898 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm037 monitor is not yet in quorum, tries left: 5
2014-03-14T01:33:46.898 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 5 seconds before retrying
2014-03-14T01:33:51.906 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm037.asok mon_status
2014-03-14T01:33:52.021 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][ERROR ] admin_socket: exception getting command descriptions: [Errno 111] Connection refused
2014-03-14T01:33:52.021 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm037 monitor is not yet in quorum, tries left: 4
2014-03-14T01:33:52.021 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 10 seconds before retrying
2014-03-14T01:34:02.034 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm037.asok mon_status
2014-03-14T01:34:02.153 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][ERROR ] admin_socket: exception getting command descriptions: [Errno 111] Connection refused
2014-03-14T01:34:02.154 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm037 monitor is not yet in quorum, tries left: 3
2014-03-14T01:34:02.154 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 10 seconds before retrying
2014-03-14T01:34:12.166 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm037.asok mon_status
2014-03-14T01:34:12.284 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][ERROR ] admin_socket: exception getting command descriptions: [Errno 111] Connection refused
2014-03-14T01:34:12.284 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm037 monitor is not yet in quorum, tries left: 2
2014-03-14T01:34:12.284 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 15 seconds before retrying
2014-03-14T01:34:27.302 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm037.asok mon_status
2014-03-14T01:34:27.423 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm037][ERROR ] admin_socket: exception getting command descriptions: [Errno 111] Connection refused
2014-03-14T01:34:27.423 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm037 monitor is not yet in quorum, tries left: 1
2014-03-14T01:34:27.423 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 20 seconds before retrying
2014-03-14T01:34:47.446 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][INFO  ] processing monitor mon.vpm038
2014-03-14T01:34:47.483 INFO:teuthology.orchestra.run.err:[10.214.138.100]: Warning: Permanently added 'vpm038,10.214.138.107' (RSA) to the list of known hosts.
2014-03-14T01:34:47.598 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm038][DEBUG ] connected to host: vpm038
2014-03-14T01:34:47.598 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm038][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm038.asok mon_status
2014-03-14T01:34:47.715 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm038 monitor is not yet in quorum, tries left: 5
2014-03-14T01:34:47.716 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 5 seconds before retrying
2014-03-14T01:34:52.726 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm038][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm038.asok mon_status
2014-03-14T01:34:52.841 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm038 monitor is not yet in quorum, tries left: 4
2014-03-14T01:34:52.842 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 10 seconds before retrying
2014-03-14T01:35:02.854 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm038][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm038.asok mon_status
2014-03-14T01:35:02.970 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm038 monitor is not yet in quorum, tries left: 3
2014-03-14T01:35:02.970 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 10 seconds before retrying
2014-03-14T01:35:12.983 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm038][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm038.asok mon_status
2014-03-14T01:35:13.101 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm038 monitor is not yet in quorum, tries left: 2
2014-03-14T01:35:13.101 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 15 seconds before retrying
2014-03-14T01:35:28.119 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm038][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm038.asok mon_status
2014-03-14T01:35:28.234 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm038 monitor is not yet in quorum, tries left: 1
2014-03-14T01:35:28.234 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 20 seconds before retrying
2014-03-14T01:35:48.257 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][INFO  ] processing monitor mon.vpm035
2014-03-14T01:35:48.286 INFO:teuthology.orchestra.run.err:[10.214.138.100]: Warning: Permanently added 'vpm035,10.214.138.103' (RSA) to the list of known hosts.
2014-03-14T01:35:48.466 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm035][DEBUG ] connected to host: vpm035
2014-03-14T01:35:48.466 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm035][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm035.asok mon_status
2014-03-14T01:35:48.632 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm035 monitor is not yet in quorum, tries left: 5
2014-03-14T01:35:48.633 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 5 seconds before retrying
2014-03-14T01:35:53.640 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm035][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm035.asok mon_status
2014-03-14T01:35:53.806 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm035 monitor is not yet in quorum, tries left: 4
2014-03-14T01:35:53.806 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 10 seconds before retrying
2014-03-14T01:36:03.819 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm035][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm035.asok mon_status
2014-03-14T01:36:03.934 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm035 monitor is not yet in quorum, tries left: 3
2014-03-14T01:36:03.934 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 10 seconds before retrying
2014-03-14T01:36:13.947 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm035][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm035.asok mon_status
2014-03-14T01:36:14.065 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm035 monitor is not yet in quorum, tries left: 2
2014-03-14T01:36:14.065 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 15 seconds before retrying
2014-03-14T01:36:29.083 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [vpm035][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm035.asok mon_status
2014-03-14T01:36:29.200 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] mon.vpm035 monitor is not yet in quorum, tries left: 1
2014-03-14T01:36:29.200 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][WARNING] waiting 20 seconds before retrying
2014-03-14T01:36:49.214 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
2014-03-14T01:36:49.214 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][ERROR ] vpm037
2014-03-14T01:36:49.214 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][ERROR ] vpm035
2014-03-14T01:36:49.215 INFO:teuthology.orchestra.run.err:[10.214.138.100]: [ceph_deploy.mon][ERROR ] vpm038
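The cadence above (waits of 5, 10, 10, 15 and 20 seconds, with "tries left" counting down from 5) is ceph-deploy polling each mon's admin socket until it either reports quorum membership or runs out of tries. A condensed sketch of that loop; the helper name and error handling are illustrative, not ceph-deploy's actual internals:

import subprocess
import time

WAITS = [5, 10, 10, 15, 20]  # backoff schedule visible in the log

def wait_for_quorum(mon_status_fn, name):
    # mon_status_fn should return parsed mon_status JSON, or raise
    # if the admin socket is dead ([Errno 111] Connection refused).
    for i, wait in enumerate(WAITS):
        try:
            status = mon_status_fn()
            if status["rank"] in status.get("quorum", []):
                return True
        except (OSError, subprocess.CalledProcessError):
            pass  # socket refused: the mon is not even answering yet
        print("mon.%s monitor is not yet in quorum, tries left: %d"
              % (name, len(WAITS) - i))
        time.sleep(wait)
    return False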

Then it looks like the monitor started hitting assertion errors:

2014-03-14 04:33:37.554506 7fc2db029700 -1 mon/Monitor.cc: In function 'void Monitor::handle_timecheck_leader(MTimeCheck*)' thread 7fc2db029700 time 2014-03-14 04:33:37.553579
mon/Monitor.cc: 3262: FAILED assert(timecheck_waiting.count(other) > 0)

 ceph version 0.77-868-g4f43e53 (4f43e53ced66a5a24f5cbd5ef56b2b5937b73b97)
 1: (Monitor::handle_timecheck_leader(MTimeCheck*)+0x997) [0x5582d7]
 2: (Monitor::handle_timecheck(MTimeCheck*)+0x3e3) [0x562583]
 3: (Monitor::dispatch(MonSession*, Message*, bool)+0x2b5) [0x57c7e5]
 4: (Monitor::_ms_dispatch(Message*)+0x20e) [0x57ce3e]
 5: (Monitor::ms_dispatch(Message*)+0x32) [0x599ba2]
 6: (DispatchQueue::entry()+0x5a2) [0x837872]
 7: (DispatchQueue::DispatchThread::entry()+0xd) [0x8322ed]
 8: (()+0x7851) [0x7fc2e095d851]
 9: (clone()+0x6d) [0x7fc2df69a67d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
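The assert means the leader received a timecheck reply from a peer it had no outstanding ping for: the leader records each ping it sends in timecheck_waiting, and the assertion at mon/Monitor.cc:3262 requires every pong to match an entry there. A loose Python model of that invariant, mirroring the C++ bookkeeping only approximately:

class TimecheckLeader:
    def __init__(self, quorum):
        self.quorum = quorum
        self.timecheck_waiting = {}  # peer -> time the ping was sent

    def send_pings(self, now):
        # The leader pings every quorum peer and expects one pong each.
        for peer in self.quorum:
            self.timecheck_waiting[peer] = now

    def handle_pong(self, peer):
        # The invariant the crash trips: a pong must come from a peer
        # that still has an outstanding ping. If the waiting map is
        # cleared (e.g. around an election) before a late pong arrives,
        # the real code hits FAILED assert(timecheck_waiting.count(other) > 0).
        assert peer in self.timecheck_waiting
        del self.timecheck_waiting[peer]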

Not sure if this warrants a completely new ticket though.

Error logs for this failure: http://qa-proxy.ceph.com/teuthology/teuthology-2014-03-14_01:11:24-ceph-deploy-firefly-distro-basic-vps/129900/

All of the mon failures occur on RHEL6 only.

Actions #3

Updated by Sage Weil about 10 years ago

  • Project changed from teuthology to Ceph
  • Category set to Monitor
  • Priority changed from Normal to Urgent

Actions #4

Updated by Ian Colle about 10 years ago

  • Assignee set to Joao Eduardo Luis

Actions #5

Updated by Sage Weil about 10 years ago

  • Assignee changed from Joao Eduardo Luis to Sage Weil

Actions #6

Updated by Sage Weil about 10 years ago

  • Status changed from New to Resolved