Bug #8409 (closed)

Parent task: Bug #5426: librbd: mutex assert in perfcounters::tinc in librbd::AioCompletion::complete()

"Segmentation fault" in rbd in upgrade:dumpling-dumpling-testing-basic-vps

Added by Yuri Weinstein almost 10 years ago. Updated almost 10 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: Q/A
Severity: 3 - minor

Description

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-05-19_19:15:03-upgrade:dumpling-dumpling-testing-basic-vps/263505/

This could be a duplicate of, or a backport issue related to, #7997, which was resolved in firefly.

> 10.214.138.137:6808/4332 -- osd_op(client.4123.0:249 rbd_data.101874b0dc51.00000000000000f2 [sparse-read 0~4194304] 2.4e772a40 e5) v4 -- ?+0 0x1bbe6c0 con 0x1bb2ea0
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:     -5> 2014-05-20 05:10:45.225000 7fc458f07700  1 -- 10.214.138.138:0/1004923 <== osd.1 10.214.138.137:6808/4332 62 ==== osd_op_reply(249 rbd_data.101874b0dc51.00000000000000f2 [sparse-read 0~4194304] ack = -2 (No such file or directory)) v4 ==== 137+0+0 (2632525278 0 0) 0x7fc44c000bd0 con 0x1bb2ea0
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:     -4> 2014-05-20 05:10:45.227138 7fc45d708780  1 -- 10.214.138.138:0/1004923 --> 10.214.138.138:6800/4300 -- osd_op(client.4123.0:250 rbd_data.101874b0dc51.00000000000000f3 [sparse-read 0~4194304] 2.220d3227 e5) v4 -- ?+0 0x1bbe6c0 con 0x1bbe420
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:     -3> 2014-05-20 05:10:45.227712 7fc458f07700  1 -- 10.214.138.138:0/1004923 <== osd.4 10.214.138.138:6800/4300 18 ==== osd_op_reply(250 rbd_data.101874b0dc51.00000000000000f3 [sparse-read 0~4194304] ack = -2 (No such file or directory)) v4 ==== 137+0+0 (3279498718 0 0) 0x7fc450000df0 con 0x1bbe420
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:     -2> 2014-05-20 05:10:45.229825 7fc45d708780  1 -- 10.214.138.138:0/1004923 --> 10.214.138.137:6808/4332 -- osd_op(client.4123.0:251 rbd_data.101874b0dc51.00000000000000f4 [sparse-read 0~749112] 2.d39be0a0 e5) v4 -- ?+0 0x1bbe6c0 con 0x1bb2ea0
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:     -1> 2014-05-20 05:10:45.236884 7fc458f07700  1 -- 10.214.138.138:0/1004923 <== osd.1 10.214.138.137:6808/4332 63 ==== osd_op_reply(251 rbd_data.101874b0dc51.00000000000000f4 [sparse-read 0~749112] ondisk = 0) v4 ==== 137+0+749136 (118748341 0 3703616972) 0x7fc44c001ac0 con 0x1bb2ea0
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:      0> 2014-05-20 05:10:45.637086 7fc456601700 -1 *** Caught signal (Segmentation fault) **
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  in thread 7fc456601700
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]: 
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  ceph version 0.67.2 (eb4380dd036a0b644c6283869911d615ed729ac8)
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  1: rbd() [0x41ef9a]
2014-05-19T22:10:45.813 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  2: (()+0xfcb0) [0x7fc45c201cb0]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  3: (PerfCounters::tinc(int, utime_t)+0x24) [0x7fc45c5f8014]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  4: (librbd::AioCompletion::complete()+0x210) [0x7fc45d2855a0]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  5: (librbd::AioCompletion::complete_request(CephContext*, long)+0x1c7) [0x7fc45d2849f7]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  6: (librbd::C_AioRead::finish(int)+0x104) [0x7fc45d284d84]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  7: (Context::complete(int)+0xa) [0x7fc45d28521a]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  8: (librbd::rados_req_cb(void*, void*)+0x47) [0x7fc45d298f07]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  9: (librados::C_AioComplete::finish(int)+0x1d) [0x7fc45c597a4d]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  10: (Context::complete(int)+0xa) [0x7fc45c5784fa]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  11: (Finisher::finisher_thread_entry()+0x1c0) [0x7fc45c61bd10]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  12: (()+0x7e9a) [0x7fc45c1f9e9a]
2014-05-19T22:10:45.814 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  13: (clone()+0x6d) [0x7fc45b80c3fd]
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]: 
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]: --- logging levels ---
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 none
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 1 lockdep
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 1 context
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 1 crush
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 mds
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 mds_balancer
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 mds_locker
2014-05-19T22:10:45.815 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 mds_log
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 mds_log_expire
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 mds_migrator
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 1 buffer
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 1 timer
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 1 filer
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 1 striper
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 1 objecter
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 rados
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 rbd
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 journaler
2014-05-19T22:10:45.816 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 objectcacher
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 client
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 osd
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 optracker
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 objclass
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 3 filestore
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 3 journal
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 ms
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 mon
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/10 monc
2014-05-19T22:10:45.817 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 paxos
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    0/ 5 tp
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 auth
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 crypto
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 1 finisher
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 heartbeatmap
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 perfcounter
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 rgw
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 hadoop
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 javaclient
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 5 asok
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:    1/ 1 throttle
2014-05-19T22:10:45.818 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:   -2/-2 (syslog threshold)
2014-05-19T22:10:45.819 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:   99/99 (stderr threshold)
2014-05-19T22:10:45.819 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:   max_recent       500
2014-05-19T22:10:45.819 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:   max_new         1000
2014-05-19T22:10:45.819 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]:   log_file /var/log/ceph/ceph-client.admin.4923.log
2014-05-19T22:10:45.819 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]: --- end dump of recent events ---
2014-05-19T22:10:45.819 INFO:teuthology.task.workunit.client.0.err:[10.214.138.138]: Segmentation fault (core dumped)
2014-05-19T22:10:45.819 INFO:teuthology.task.workunit:Stopping rbd/import_export.sh on client.0...
2014-05-19T22:10:45.819 DEBUG:teuthology.orchestra.run:Running [10.214.138.138]: 'rm -rf -- /home/ubuntu/cephtest/workunits.list /home/ubuntu/cephtest/workunit.client.0'
2014-05-19T22:10:46.010 ERROR:teuthology.run_tasks:Saw exception from tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 25, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 14, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/teuthology-dumpling/teuthology/task/parallel.py", line 41, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/teuthology-dumpling/teuthology/parallel.py", line 88, in __exit__
    raise
CommandFailedError: Command failed on 10.214.138.138 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bd5d6f116416d1b410d57ce00cb3e2abf6de102b TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/home/ubuntu/cephtest/binary/usr/local/lib/python2.7/dist-packages:/home/ubuntu/cephtest/binary/usr/local/lib/python2.6/dist-packages" RBD_CREATE_ARGS=--new-format /home/ubuntu/cephtest/adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
2014-05-19T22:10:46.010 DEBUG:teuthology.run_tasks:Unwinding manager <contextlib.GeneratorContextManager object at 0x267d4d0>
2014-05-19T22:10:46.010 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-dumpling/teuthology/contextutil.py", line 27, in nested
    yield vars
  File "/home/teuthworker/teuthology-dumpling/teuthology/task/ceph.py", line 1168, in task
    yield
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 25, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 14, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/teuthology-dumpling/teuthology/task/parallel.py", line 41, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/teuthology-dumpling/teuthology/parallel.py", line 88, in __exit__
    raise
CommandFailedError: Command failed on 10.214.138.138 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bd5d6f116416d1b410d57ce00cb3e2abf6de102b TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/home/ubuntu/cephtest/binary/usr/local/lib/python2.7/dist-packages:/home/ubuntu/cephtest/binary/usr/local/lib/python2.6/dist-packages" RBD_CREATE_ARGS=--new-format /home/ubuntu/cephtest/adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
2014-05-19T22:10:46.011 INFO:teuthology.misc:Shutting down mds daemons...
2014-05-19T22:10:46.011 DEBUG:teuthology.task.ceph.mds.a:waiting for process to exit
2014-05-19T22:10:46.021 INFO:teuthology.task.ceph.mds.a:Stopped
2014-05-19T22:10:46.021 INFO:teuthology.misc:Shutting down osd daemons...
2014-05-19T22:10:46.021 DEBUG:teuthology.task.ceph.osd.1:waiting for process to exit
2014-05-19T22:10:46.070 INFO:teuthology.task.ceph.osd.1:Stopped
2014-05-19T22:10:46.071 DEBUG:teuthology.task.ceph.osd.0:waiting for process to exit
2014-05-19T22:10:46.118 INFO:teuthology.task.ceph.osd.0:Stopped
archive_path: /var/lib/teuthworker/archive/teuthology-2014-05-19_19:15:03-upgrade:dumpling-dumpling-testing-basic-vps/263505
branch: dumpling
description: upgrade/dumpling/rbd/{0-cluster/start.yaml 1-dumpling-install/v0.67.2.yaml
  2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/monthrash.yaml}
email: null
job_id: '263505'
kernel: &id001
  kdb: true
  sha1: 335cb91ce950ce0e12294af671c64a468d89194c
last_in_suite: false
machine_type: vps
name: teuthology-2014-05-19_19:15:03-upgrade:dumpling-dumpling-testing-basic-vps
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: dumpling
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: xfs
    log-whitelist:
    - slow request
    - scrub
    sha1: bd5d6f116416d1b410d57ce00cb3e2abf6de102b
  ceph-deploy:
    branch:
      dev: dumpling
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: bd5d6f116416d1b410d57ce00cb3e2abf6de102b
  s3tests:
    branch: dumpling
  workunit:
    sha1: bd5d6f116416d1b410d57ce00cb3e2abf6de102b
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mon.c
  - osd.3
  - osd.4
  - osd.5
  - client.0
suite: upgrade:dumpling
targets:
  ubuntu@vpm073.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDIOqWtBTwTGLzgFGKHVJVdvdzThIU4197MNgcsIZaXxtl+dHf5Kbg0WrQZo3xEzqOCtqxBzwz1oQkaTP6EHAYNoq0I5roqIEn35hzDf6J5r3w4K0K90wkJgyU7Mm2VCYoRlJ0U1uV1Mb4w/rsNRnQipGqhd/oMFl1Tsi/55hrFaDlsSUgU2CjOqNN8lWNkV8jgBpucfGyIdyXe5OgwC0KfG8kxTwoTJ4kF6m/H76Q6lG1Ormru+KR41JHIitaVFpdoz2XqndfYc/5yhMHGxhlSw+knXqUKQVoEVyQHLV2Y1GBdsI2myVc5UddZSNyJLeg1MAs+QR6oN8qSw5Epp/+Z
  ubuntu@vpm074.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNZECCV7baNzvzUKbkh0DZDqNpSPY6595oCX0y7iD25F60puVg+2A0iWHyqaWfKJunb7Rxx5MH9SyLYGS2fluz8gredCmGXJyFhofXVak/llZz7wOzMrKhI+rSSO+UfcZze3fi1ZDaYlevMkNg8k0ouGb1rx3tqdd3zt//4wy+mfawew99OBixpfG+EKoasBypsTZY6n67GPnKZrGKTCW2gbS2heO+zrLM6/bCK9EOchaM1HibaVo8a6r28XTbCFtvv3T46921Rd0RjLkoHBv1ELRAHZ2nsdkjTVyH3kUZyPOEbyoPQ/fqMmq23xidDO5U9DFLYu8anm/x1ndxejSL
tasks:
- internal.lock_machines:
  - 2
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    tag: v0.67.2
- ceph: null
- parallel:
  - workload
  - upgrade-sequence
- mon_thrash:
    revive_delay: 20
    thrash_delay: 1
- workunit:
    clients:
      client.0:
      - rbd/copy.sh
    env:
      RBD_CREATE_ARGS: --new-format
teuthology_branch: dumpling
upgrade-sequence:
  sequential:
  - install.upgrade:
      all:
        branch: dumpling
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.19361
workload:
  sequential:
  - workunit:
      clients:
        client.0:
        - rbd/import_export.sh
      env:
        RBD_CREATE_ARGS: --new-format
  - workunit:
      clients:
        client.0:
        - cls/test_cls_rbd.sh
description: upgrade/dumpling/rbd/{0-cluster/start.yaml 1-dumpling-install/v0.67.2.yaml
  2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/monthrash.yaml}
duration: 249.3072419166565
failure_reason: 'Command failed on 10.214.138.138 with status 139: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
  && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
  CEPH_REF=bd5d6f116416d1b410d57ce00cb3e2abf6de102b TESTDIR="/home/ubuntu/cephtest" 
  CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/home/ubuntu/cephtest/binary/usr/local/lib/python2.7/dist-packages:/home/ubuntu/cephtest/binary/usr/local/lib/python2.6/dist-packages" 
  RBD_CREATE_ARGS=--new-format /home/ubuntu/cephtest/adjust-ulimits ceph-coverage
  /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'''
flavor: basic
mon.a-kernel-sha1: 335cb91ce950ce0e12294af671c64a468d89194c
mon.b-kernel-sha1: 335cb91ce950ce0e12294af671c64a468d89194c
owner: scheduled_teuthology@teuthology
success: false
#1

Updated by Zack Cerza almost 10 years ago

#2

Updated by Yuri Weinstein almost 10 years ago

  • Subject changed from "Segmentation fault" in upgrade:dumpling-dumpling-testing-basic-vps to "Segmentation fault" in rbd in upgrade:dumpling-dumpling-testing-basic-vps
#3

Updated by Josh Durgin almost 10 years ago

  • Status changed from New to Duplicate

This is #5426, which was already backported and released in 0.67.6. Since the test starts from an older dumpling release (0.67.2), the crash can still appear there.
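
One plausible reading of the backtrace, consistent with the parent bug's title: the completion invokes the user callback first, the callback can release the image context (which owns the perf counters), and the subsequent PerfCounters::tinc() call then touches freed memory. The sketch below is a minimal, hypothetical C++ illustration of that ordering hazard; the type names, fields, and the shape of the "fix" are assumptions for illustration, not the actual librbd source.

// Hypothetical sketch only -- illustrative names, not the actual librbd code.
#include <atomic>
#include <cstdio>

struct PerfCounters {
  // Record a latency sample; in librbd this path takes the counters' mutex,
  // which is what blows up (frame 3 of the backtrace) if the counters have
  // already been destroyed.
  void tinc(int idx, double elapsed) {
    std::printf("counter %d += %f\n", idx, elapsed);
  }
};

struct AioCompletion {
  std::atomic<int> ref{1};
  PerfCounters *perfcounter = nullptr;  // owned by the image context, not by us
  void (*complete_cb)(AioCompletion *, void *) = nullptr;
  void *cb_arg = nullptr;
  double start_time = 0.0;

  void get() { ref.fetch_add(1); }
  void put() {
    if (ref.fetch_sub(1) == 1)
      delete this;
  }

  // Hazardous ordering: the user callback can close the image, tearing down
  // the PerfCounters that `perfcounter` points at; the tinc() call that
  // follows then touches freed memory.
  void complete_buggy(double now) {
    if (complete_cb)
      complete_cb(this, cb_arg);
    perfcounter->tinc(0, now - start_time);  // use-after-free window
  }

  // General shape of the fix: finish touching shared state before handing
  // control to user code that can release it.
  void complete_fixed(double now) {
    perfcounter->tinc(0, now - start_time);
    if (complete_cb)
      complete_cb(this, cb_arg);
  }
};

int main() {
  PerfCounters pc;
  auto *c = new AioCompletion;
  c->perfcounter = &pc;
  c->complete_fixed(1.0);  // safe ordering: counters updated before callback
  c->put();                // drops the last ref and frees the completion
}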

#4

Updated by Josh Durgin almost 10 years ago

  • Project changed from teuthology to rbd
#5

Updated by Tamilarasi muthamizhan almost 10 years ago

  • Parent task set to #5426

recent log:
ubuntu@teuthology:/a/teuthology-2014-06-01_19:15:10-upgrade:dumpling-dumpling-testing-basic-vps/284023/
