Bug #10824

giant: teuthology pool-snaps-few-objects.yaml: monitors fail to start or quickly die

Added by Loïc Dachary about 9 years ago. Updated almost 9 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

http://pulpito.ceph.com/loic-2015-02-02_23:31:31-rados-giant-backports---basic-multi/736421/, which runs rados/thrash/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/fastclose.yaml thrashers/default.yaml workloads/pool-snaps-few-objects.yaml}, hangs forever because of

monclient: hunting for new mon

which appears immediately after the monitors are started, with no error displayed:
2015-02-02T18:08:11.756 INFO:tasks.ceph:Starting mon daemons...
2015-02-02T18:08:11.756 INFO:tasks.ceph.mon.a:Restarting daemon
2015-02-02T18:08:11.757 INFO:teuthology.orchestra.run.burnupi28:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'
2015-02-02T18:08:11.759 INFO:tasks.ceph.mon.a:Started
2015-02-02T18:08:11.759 INFO:tasks.ceph.mon.c:Restarting daemon
2015-02-02T18:08:11.760 INFO:teuthology.orchestra.run.burnupi28:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i c'
2015-02-02T18:08:11.762 INFO:tasks.ceph.mon.c:Started
2015-02-02T18:08:11.762 INFO:tasks.ceph.mon.b:Restarting daemon
2015-02-02T18:08:11.762 INFO:teuthology.orchestra.run.plana41:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i b'
2015-02-02T18:08:11.828 INFO:tasks.ceph.mon.b:Started
2015-02-02T18:08:11.828 INFO:tasks.ceph:Starting osd daemons...
2015-02-02T18:08:11.829 INFO:tasks.ceph.osd.0:Restarting daemon
2015-02-02T18:08:11.829 INFO:teuthology.orchestra.run.burnupi28:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 0'
2015-02-02T18:08:11.832 INFO:tasks.ceph.osd.0:Started
2015-02-02T18:08:11.832 INFO:tasks.ceph.osd.1:Restarting daemon
2015-02-02T18:08:11.832 INFO:teuthology.orchestra.run.burnupi28:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 1'
2015-02-02T18:08:11.834 INFO:tasks.ceph.osd.1:Started
2015-02-02T18:08:11.835 INFO:tasks.ceph.osd.2:Restarting daemon
2015-02-02T18:08:11.835 INFO:teuthology.orchestra.run.burnupi28:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 2'
2015-02-02T18:08:11.837 INFO:tasks.ceph.osd.2:Started
2015-02-02T18:08:11.837 INFO:tasks.ceph.osd.3:Restarting daemon
2015-02-02T18:08:11.838 INFO:teuthology.orchestra.run.plana41:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 3'
2015-02-02T18:08:11.840 INFO:tasks.ceph.osd.3:Started
2015-02-02T18:08:11.840 INFO:tasks.ceph.osd.4:Restarting daemon
2015-02-02T18:08:11.840 INFO:teuthology.orchestra.run.plana41:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 4'
2015-02-02T18:08:11.842 INFO:tasks.ceph.osd.4:Started
2015-02-02T18:08:11.842 INFO:tasks.ceph.osd.5:Restarting daemon
2015-02-02T18:08:11.842 INFO:teuthology.orchestra.run.plana41:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 5'
2015-02-02T18:08:11.844 INFO:tasks.ceph.osd.5:Started
2015-02-02T18:08:11.844 INFO:tasks.ceph:Starting mds daemons...
2015-02-02T18:08:11.845 INFO:tasks.ceph:Waiting until ceph is healthy...
2015-02-02T18:08:11.882 INFO:tasks.ceph.osd.0.burnupi28.stdout:starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
2015-02-02T18:08:11.883 INFO:tasks.ceph.osd.1.burnupi28.stdout:starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
2015-02-02T18:08:11.884 INFO:tasks.ceph.osd.2.burnupi28.stdout:starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
2015-02-02T18:08:11.901 INFO:tasks.ceph.osd.3.plana41.stdout:starting osd.3 at :/0 osd_data /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal
2015-02-02T18:08:11.909 INFO:tasks.ceph.osd.1.burnupi28.stderr:2015-02-02 18:08:11.910148 7f688ca11900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2015-02-02T18:08:11.909 INFO:tasks.ceph.osd.0.burnupi28.stderr:2015-02-02 18:08:11.910246 7f80ddc08900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2015-02-02T18:08:11.910 INFO:tasks.ceph.osd.2.burnupi28.stderr:2015-02-02 18:08:11.912576 7f52a3dfe900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2015-02-02T18:08:11.911 INFO:tasks.ceph.osd.4.plana41.stdout:starting osd.4 at :/0 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
2015-02-02T18:08:11.912 INFO:tasks.ceph.osd.5.plana41.stdout:starting osd.5 at :/0 osd_data /var/lib/ceph/osd/ceph-5 /var/lib/ceph/osd/ceph-5/journal
2015-02-02T18:08:28.669 INFO:teuthology.misc.health.burnupi28.stderr:2015-02-02 18:08:28.670656 7f6cfc1b1700 0 -- :/1030320 >> 10.214.132.37:6789/0 pipe(0x7f6cf8034b90 sd=7 :0 s=1 pgs=0 cs=0 l=1 c=0x7f6cf8034e20).fault
2015-02-02T18:08:37.030 INFO:teuthology.misc.health.burnupi28.stderr:2015-02-02 18:08:37.031580 7fbabd045700 0 monclient: hunting for new mon
2015-02-02T18:08:45.389 INFO:teuthology.misc.health.burnupi28.stderr:2015-02-02 18:08:45.390581 7fece3fff700 0 monclient: hunting for new mon
2015-02-02T18:09:55.594 INFO:teuthology.misc.health.burnupi28.stderr:2015-02-02 18:09:55.595684 7fe80a7fc700 0 monclient: hunting for new mon

No monitor or OSD logs were collected.
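
Since nothing was archived, the following is a minimal sketch of what could be checked on the monitor hosts during a live reproduction; the daemon id "a" and the default /var/run/ceph and /var/log/ceph paths are assumptions based on the standard teuthology layout, not taken from this run:

# Check whether the ceph-mon process started by daemon-helper is still running.
ps aux | grep '[c]eph-mon'
# If it is running, ask the monitor for its state over the admin socket.
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status
# Try to reach quorum with a bounded timeout instead of hanging forever.
sudo ceph -s --connect-timeout 30
# Look at the on-disk monitor log in case the daemon died before anything was archived.
sudo tail -n 100 /var/log/ceph/ceph-mon.a.log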

History

#1 Updated by Loïc Dachary almost 9 years ago

  • Status changed from Need More Info to Won't Fix
  • Regression set to No

It has not shown up in a long time and giant is retired.
