Bug #7259
closed
ceph mon crash in master branch
Status:
Resolved
Priority:
Normal
Assignee:
Joao Eduardo Luis
Category:
Monitor
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
logs: ubuntu@teuthology:/a/teuthology-2014-01-25_19:40:02-upgrade:parallel-master-testing-basic-plana/53135
2014-01-25T19:44:42.432 DEBUG:teuthology.orchestra.run:Running [10.214.132.22]: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dumpling TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
2014-01-25T19:44:42.505 INFO:teuthology.task.workunit.client.0.err:[10.214.132.22]: + ceph_test_rados_api_aio
2014-01-25T19:44:42.518 INFO:teuthology.task.workunit.client.0.out:[10.214.132.22]: Running main() from gtest_main.cc
2014-01-25T19:44:42.518 INFO:teuthology.task.workunit.client.0.out:[10.214.132.22]: [==========] Running 29 tests from 1 test case.
2014-01-25T19:44:42.518 INFO:teuthology.task.workunit.client.0.out:[10.214.132.22]: [----------] Global test environment set-up.
2014-01-25T19:44:42.519 INFO:teuthology.task.workunit.client.0.out:[10.214.132.22]: [----------] 29 tests from LibRadosAio
2014-01-25T19:44:42.519 INFO:teuthology.task.workunit.client.0.out:[10.214.132.22]: [ RUN ] LibRadosAio.SimpleWrite
2014-01-25T19:44:43.166 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'
2014-01-25T19:44:43.167 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: what(): buffer::end_of_buffer
2014-01-25T19:44:43.167 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: *** Caught signal (Aborted) **
2014-01-25T19:44:43.167 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: in thread 7f21d5474700
2014-01-25T19:44:43.167 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: ceph version 0.67.5-21-g5817078 (5817078ba9b2aa38f39e1f62d8d08e943646c0bb)
2014-01-25T19:44:43.167 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 1: ceph-mon() [0x6430ca]
2014-01-25T19:44:43.168 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 2: (()+0xfcb0) [0x7f21d9e45cb0]
2014-01-25T19:44:43.168 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 3: (gsignal()+0x35) [0x7f21d80cf425]
2014-01-25T19:44:43.168 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 4: (abort()+0x17b) [0x7f21d80d2b8b]
2014-01-25T19:44:43.168 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7f21d8a2269d]
2014-01-25T19:44:43.168 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 6: (()+0xb5846) [0x7f21d8a20846]
2014-01-25T19:44:43.169 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 7: (()+0xb5873) [0x7f21d8a20873]
2014-01-25T19:44:43.169 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 8: (()+0xb596e) [0x7f21d8a2096e]
2014-01-25T19:44:43.169 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 9: ceph-mon() [0x715cef]
2014-01-25T19:44:43.169 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 10: (OSDMap::Incremental::decode(ceph::buffer::list::iterator&)+0x272) [0x6f2192]
2014-01-25T19:44:43.169 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 11: (OSDMonitor::update_from_paxos(bool*)+0x1413) [0x5a3783]
2014-01-25T19:44:43.170 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 12: (PaxosService::refresh(bool*)+0x331) [0x595551]
2014-01-25T19:44:43.170 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 13: (Monitor::refresh_from_paxos(bool*)+0x57) [0x537167]
2014-01-25T19:44:43.170 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 14: (Paxos::do_refresh()+0x31) [0x581a61]
2014-01-25T19:44:43.170 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 15: (Paxos::handle_commit(MMonPaxos*)+0x21a) [0x58688a]
2014-01-25T19:44:43.170 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 16: (Paxos::dispatch(PaxosServiceMessage*)+0x23b) [0x58e52b]
2014-01-25T19:44:43.171 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 17: (Monitor::_ms_dispatch(Message*)+0x112c) [0x56a34c]
2014-01-25T19:44:43.171 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 18: (Monitor::ms_dispatch(Message*)+0x32) [0x5811b2]
2014-01-25T19:44:43.171 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 19: (DispatchQueue::entry()+0x549) [0x7cb539]
2014-01-25T19:44:43.171 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 20: (DispatchQueue::DispatchThread::entry()+0xd) [0x7003cd]
2014-01-25T19:44:43.171 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 21: (()+0x7e9a) [0x7f21d9e3de9a]
2014-01-25T19:44:43.172 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 22: (clone()+0x6d) [0x7f21d818d3fd]
2014-01-25T19:44:43.172 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 2014-01-25 19:44:43.165036 7f21d5474700 -1 *** Caught signal (Aborted) **
2014-01-25T19:44:43.172 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: in thread 7f21d5474700
2014-01-25T19:44:43.172 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: ceph version 0.67.5-21-g5817078 (5817078ba9b2aa38f39e1f62d8d08e943646c0bb)
2014-01-25T19:44:43.173 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 1: ceph-mon() [0x6430ca]
2014-01-25T19:44:43.173 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 2: (()+0xfcb0) [0x7f21d9e45cb0]
2014-01-25T19:44:43.173 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 3: (gsignal()+0x35) [0x7f21d80cf425]
2014-01-25T19:44:43.173 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 4: (abort()+0x17b) [0x7f21d80d2b8b]
2014-01-25T19:44:43.173 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7f21d8a2269d]
2014-01-25T19:44:43.174 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 6: (()+0xb5846) [0x7f21d8a20846]
2014-01-25T19:44:43.174 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 7: (()+0xb5873) [0x7f21d8a20873]
2014-01-25T19:44:43.174 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 8: (()+0xb596e) [0x7f21d8a2096e]
2014-01-25T19:44:43.174 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 9: ceph-mon() [0x715cef]
2014-01-25T19:44:43.174 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 10: (OSDMap::Incremental::decode(ceph::buffer::list::iterator&)+0x272) [0x6f2192]
2014-01-25T19:44:43.175 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 11: (OSDMonitor::update_from_paxos(bool*)+0x1413) [0x5a3783]
2014-01-25T19:44:43.175 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 12: (PaxosService::refresh(bool*)+0x331) [0x595551]
2014-01-25T19:44:43.175 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 13: (Monitor::refresh_from_paxos(bool*)+0x57) [0x537167]
2014-01-25T19:44:43.175 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 14: (Paxos::do_refresh()+0x31) [0x581a61]
2014-01-25T19:44:43.175 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 15: (Paxos::handle_commit(MMonPaxos*)+0x21a) [0x58688a]
2014-01-25T19:44:43.176 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 16: (Paxos::dispatch(PaxosServiceMessage*)+0x23b) [0x58e52b]
2014-01-25T19:44:43.176 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 17: (Monitor::_ms_dispatch(Message*)+0x112c) [0x56a34c]
2014-01-25T19:44:43.176 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 18: (Monitor::ms_dispatch(Message*)+0x32) [0x5811b2]
2014-01-25T19:44:43.176 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 19: (DispatchQueue::entry()+0x549) [0x7cb539]
2014-01-25T19:44:43.176 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 20: (DispatchQueue::DispatchThread::entry()+0xd) [0x7003cd]
2014-01-25T19:44:43.177 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 21: (()+0x7e9a) [0x7f21d9e3de9a]
2014-01-25T19:44:43.177 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: 22: (clone()+0x6d) [0x7f21d818d3fd]
2014-01-25T19:44:43.177 INFO:teuthology.task.ceph.mon.c.err:[10.214.132.25]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
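The backtrace shows the monitor aborting because `OSDMap::Incremental::decode` threw `ceph::buffer::end_of_buffer`, i.e. the decoder tried to read past the end of the encoded data it was handed. A minimal sketch of that failure mode (illustrative only; names like `BufferIterator` are hypothetical stand-ins for `ceph::buffer::list::iterator`, not Ceph's actual code):

```python
import struct

class EndOfBuffer(Exception):
    """Raised when a decode reads past the available data."""

class BufferIterator:
    """Bounded cursor over encoded bytes, loosely mimicking a
    buffer-list iterator. Illustrative, not the real implementation."""
    def __init__(self, data: bytes):
        self.data = data
        self.off = 0

    def read(self, n: int) -> bytes:
        if self.off + n > len(self.data):
            raise EndOfBuffer()  # decode ran off the end
        chunk = self.data[self.off:self.off + n]
        self.off += n
        return chunk

    def get_u32(self) -> int:
        return struct.unpack("<I", self.read(4))[0]

# The encoder (say, a peer on a different version) wrote one 32-bit field...
encoded = struct.pack("<I", 7)
it = BufferIterator(encoded)
a = it.get_u32()          # ok: 4 bytes available
try:
    b = it.get_u32()      # ...but this decoder expects a second field
except EndOfBuffer:
    print("caught EndOfBuffer")
```

In the crash above the uncaught C++ exception propagated out of the dispatch thread, `std::terminate` ran, and the process aborted, which is why the log shows both the `terminate called` message and the signal handler's backtrace.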
ubuntu@teuthology:/a/teuthology-2014-01-25_19:40:02-upgrade:parallel-master-testing-basic-plana/53135$ cat config.yaml
archive_path: /var/lib/teuthworker/archive/teuthology-2014-01-25_19:40:02-upgrade:parallel-master-testing-basic-plana/53135
description: upgrade/parallel/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/radosbench.yaml 6-next-mon/monb.yaml 7-workload/rados_api_tests.yaml 8-next-mon/monc.yaml 9-workload/rados_api_tests.yaml distro/ubuntu_12.04.yaml}
email: null
job_id: '53135'
kernel: &id001
  kdb: true
  sha1: 80213a84a96c3040f5824bce646a184d5dd3dd2b
last_in_suite: false
machine_type: plana
name: teuthology-2014-01-25_19:40:02-upgrade:parallel-master-testing-basic-plana
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: master
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 5
    log-whitelist:
    - slow request
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: 97edd2fcad04e8766f965e3c487d469afbdb5a3f
  ceph-deploy:
    branch:
      dev: master
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 97edd2fcad04e8766f965e3c487d469afbdb5a3f
  s3tests:
    branch: master
  workunit:
    sha1: 97edd2fcad04e8766f965e3c487d469afbdb5a3f
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mon.b
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - osd.3
  - osd.4
  - osd.5
  - mon.c
- - client.0
targets:
  ubuntu@plana53.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCi//gOdhmcqnr0Uy2Se/8d63AABU1o8Gi4t+abhcgUyQXThK+npOwSKEyjzWzuAGTm6eoDLW10oLNlpDHDnkPKpFA9iFsZJ3GfBHq5zssqgohieOJ3qH+TNcAoIg1pqqZWpEWRqeGq9Q1mKlpjaoRPYuFBmN5rSEwtnJLM4PdUFpeRya6+942nstW5AarO872AXA+NdpI8vrzeJRCcnPhmBtuX3YchKO08hpD66iAPDTL3LsP1WGPzN24KP/nhCwKIBXBT5Rdxo+b+qlDF34USxngre2y5zRKmnrK1jCVhV1Ifqy2JvM9/6lRsQgFL8/qCvUhkep5/tljGqYEEoxM1
  ubuntu@plana56.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQChqN59I/UigCA3uzbghGcb+LdsAtxqAFx8bF44opH4LqITzYYSGeOi/6PxZG5WzYPpkmkM1MoUNpO4eH5OI8wrXv0t7QKUBp1g9MsQYrzdDSOj5v3b7y8bmfJqfZNYIuccLNmIPJvXVbxMH/ASe58gqAPiPjaBF+iHu3rARVGdyCsyaYIpEAyzq45Lo4AsYROHUUxanLcq4tfaw7kkgbTdMsWQuY0OHUg6gFZWyhrRR2ZQkPORWS2INbJrtPG2p0E0uE6IJRX0bkSa6Ss8OjDUu2wKk22deWbQyZ7vQKVDmtcZR2G+SSkGvH7m18oPYHm0m/5xK7wEKyxi8KNlJd73
  ubuntu@plana57.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCjX0wuGXHymrU74OOVo6NtuKIyHCZAxcqQEGKS95e/lhQCXB+3VijZUaL/AcEup+h8FqY5bpK3dnFwdyGxQDUMuLxWdjBQ7IF7KaFsHT94sb38jxk9IzDS5wVZBI+u3RUtxoMhle80Vu5pPIYPwUvh2EAxJjk4HZ6Q8S8YLRRdK4z9L9/1+uXYTeKd08BwbnY7PfPj2HX3E3IrE/bmJUIq1RMB0SqX06rCkjzPQ5p1kT2jx0k8qYP/UeLMidPALnj4kOdpLqq4inLZgb1qyKKAWN7ZlgGwlYDqdP/eHm9mko9hsL7vwdQALr65BMQtkOcIAfQ/m/SzfPYLJtwBkEuP
tasks:
- internal.lock_machines:
  - 3
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- install.upgrade:
    osd.0: null
- ceph.restart:
    daemons:
    - osd.0
    - osd.1
    - osd.2
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    timeout: 1200
- ceph.restart:
    daemons:
    - mon.a
    wait-for-healthy: false
    wait-for-osds-up: true
- radosbench:
    clients:
    - client.0
    time: 1800
- ceph.restart:
    daemons:
    - mon.b
    wait-for-healthy: false
    wait-for-osds-up: true
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test.sh
- install.upgrade:
    mon.c: null
- ceph.restart:
    daemons:
    - mon.c
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum:
  - a
  - b
  - c
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test.sh
Updated by Joao Eduardo Luis about 10 years ago
- Status changed from New to 4
Tamil, I suspect this is related to #7215, which should have been fixed by https://github.com/ceph/ceph/pull/1148 (merged about half an hour ago).
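For context on why a mixed-version cluster can hit a decode crash like this: Ceph's encoding framework (the ENCODE_START/DECODE_START macros) prefixes each encoded struct with a version and a payload length, so a decoder only consumes the fields it knows about and skips trailing data written by a newer encoder. A hedged sketch of that idea, with hypothetical field names, not the actual macros:

```python
import struct

def encode_v2(a: int, b: int) -> bytes:
    """Newer encoder: version byte, payload length, then fields a and b."""
    payload = struct.pack("<II", a, b)
    return struct.pack("<BI", 2, len(payload)) + payload

def decode(buf: bytes, max_known_version: int = 1) -> dict:
    """Decoder that reads the version and length first, consumes only the
    fields it understands, and skips anything newer."""
    struct_v, length = struct.unpack_from("<BI", buf, 0)
    off = struct.calcsize("<BI")        # 5: one byte + one u32
    end = off + length                  # end of this struct's payload
    fields = {"a": struct.unpack_from("<I", buf, off)[0]}
    off += 4
    if struct_v >= 2 and max_known_version >= 2:
        fields["b"] = struct.unpack_from("<I", buf, off)[0]
        off += 4
    off = end  # skip fields added by encoders newer than this decoder
    return fields

# A v1-only decoder reads a v2 encoding safely, ignoring the unknown field;
# a v2-aware decoder gets both.
print(decode(encode_v2(7, 9)))                        # {'a': 7}
print(decode(encode_v2(7, 9), max_known_version=2))   # {'a': 7, 'b': 9}
```

Crashes like the one in this ticket typically occur when a decode path reads fields unconditionally instead of gating them on the advertised version, which is the class of bug the referenced pull request addresses.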