Bug #8133 (closed): "Segmentation fault" in upgrade:dumpling-x:parallel-firefly---basic-plana suite

Added by Yuri Weinstein about 10 years ago. Updated about 10 years ago.

Status: Duplicate
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Q/A
Severity: 2 - major

Description

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-04-16_19:33:25-upgrade:dumpling-x:parallel-firefly---basic-plana/197229/

2014-04-17T07:40:58.316 INFO:teuthology.task.workunit.client.1.out:[10.214.131.12]: [ RUN      ] LibRadosSnapshotsSelfManagedECPP.SnapPP
2014-04-17T07:40:58.316 INFO:teuthology.task.workunit.client.1.out:[10.214.131.12]: test/librados/TestCase.cc:162: Failure
2014-04-17T07:40:58.317 INFO:teuthology.task.workunit.client.1.out:[10.214.131.12]: Value of: cluster.ioctx_create(pool_name.c_str(), ioctx)
2014-04-17T07:40:58.317 INFO:teuthology.task.workunit.client.1.out:[10.214.131.12]:   Actual: -2
2014-04-17T07:40:58.317 INFO:teuthology.task.workunit.client.1.out:[10.214.131.12]: Expected: 0
2014-04-17T07:40:58.356 INFO:teuthology.task.workunit.client.1.err:[10.214.131.12]: Segmentation fault (core dumped)
2014-04-17T07:40:58.357 INFO:teuthology.task.workunit:Stopping rados/test.sh on client.1...
2014-04-17T07:40:58.357 DEBUG:teuthology.orchestra.run:Running [10.214.131.12]: 'rm -rf -- /home/ubuntu/cephtest/workunits.list /home/ubuntu/cephtest/workunit.client.1'
2014-04-17T07:40:58.366 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/teuthology-firefly/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/teuthology-firefly/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/task/workunit.py", line 359, in _run_tests
    args=args,
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.131.12 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d6c71b76241b6c5cd2ac5d812250d4bb044ac537 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
2014-04-17T07:40:58.443 ERROR:teuthology.run_tasks:Saw exception from tasks.
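
The -2 returned by cluster.ioctx_create() is -ENOENT, i.e. the pool the EC test expects does not exist at that point; the segmentation fault that follows is consistent with the test continuing to use an IoCtx that was never successfully initialized. A minimal, self-contained sketch (hypothetical pool name, not the actual gtest fixture) of the failing call and the return-code check involved:

// Minimal sketch, not the actual test code: what a -ENOENT (-2) from
// ioctx_create means and why carrying on past it can end in a segfault.
#include <rados/librados.hpp>
#include <cerrno>
#include <iostream>

int main() {
  librados::Rados cluster;
  cluster.init("admin");               // assumes a client.admin keyring
  cluster.conf_read_file(nullptr);     // default ceph.conf search path
  if (cluster.connect() < 0) {
    std::cerr << "connect failed" << std::endl;
    return 1;
  }

  librados::IoCtx ioctx;
  // "ec-test-pool" is a hypothetical name; the suite builds its own pool names.
  int r = cluster.ioctx_create("ec-test-pool", ioctx);
  if (r == -ENOENT) {
    // Pool does not exist. Continuing to use this IoCtx (as the fixture
    // effectively does after the reported gtest failure) leaves it
    // uninitialized, and later operations on it can crash.
    std::cerr << "pool missing: " << r << std::endl;
    cluster.shutdown();
    return 1;
  }

  ioctx.close();
  cluster.shutdown();
  return 0;
}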
archive_path: /var/lib/teuthworker/archive/teuthology-2014-04-16_19:33:25-upgrade:dumpling-x:parallel-firefly---basic-plana/197229
description: upgrade/dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/cuttlefish-dumpling.yaml
  2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-upgrade/client.yaml
  5-final-workload/rados_mon_thrash.yaml distros/ubuntu_12.04.yaml}
email: null
job_id: '197229'
last_in_suite: false
machine_type: plana
name: teuthology-2014-04-16_19:33:25-upgrade:dumpling-x:parallel-firefly---basic-plana
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - scrub mismatch
    - ScrubResult
    sha1: d6c71b76241b6c5cd2ac5d812250d4bb044ac537
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: d6c71b76241b6c5cd2ac5d812250d4bb044ac537
  s3tests:
    branch: master
  workunit:
    sha1: d6c71b76241b6c5cd2ac5d812250d4bb044ac537
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
  - client.1
targets:
  ubuntu@plana28.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC32atIx3fIZ5gApmLh00izDrNipjvlSfCR8ZLfaap7SiWY1zwmgXZgHzqKb7Fm7slTrBqdMLGqnCwvO7zzLL14u6EppAHUyG8bDo2XAvuC43qm6bueORWk+yJN/U5gih9dZYItSLmSisDY3lAMuEt3dPLfXfkV6Na+2FhHp7l3asu7pHT+X83JTVaWRHKdSPIwnNqTk/2RNrBmehr/gnKqDVXyw4DzvG2nTBoFwVfTefEp2PGmfpp1hXWd9luyclYYyVkOATNDkn00+jQCHDeg3hxk/A/CTxPetbGibvmgTbkI+Y/T9fzbhE25vK4m4syE1i6GWgjI2XKKGW4WIcrX
  ubuntu@plana30.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9Ep9D3784OUaaOcLgeZKuhi/iRY6CjvVzdXGzpUkooP2gBf8eWSNpkfkvOI7DtyKhIdDW0oRbQTwPAOBabHuEUH8LHtyFVQf55zHvf1YR8OYufXWzqe0hnFAmf7YV+1o/11fDiN7y6LULFgcn5zZPmgvREySdeXyj00ojeUi8bb2PJ5482qTWK/pI2/nhBI8NyrAm4q78kGRXX2cARi5GHUBCJb2zynOHUtYxO0Q2oLUp0ZS1psK9omr0+bobtIp7B3znGA422gbCBB2oRh6uiMZfIOt9A/OFOzz01HUfoiEvTjNotD4Q1N9RkiYa6NJNsFdc/8pfmHtqT5HjvMUd
  ubuntu@plana49.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsupc3tilpTrHGdiLAeSUO0fLwE4P+zq+yuyvbs8WD8arny3m7vQvsW3Ef2Pi8rOAkW8LphHmgzwMng5Zx2q8suCJZUawM72Hl9nplLTIf1JrjvyWFUym6kyDc41cohnNkYR9r90/l8jcalTeQQNa3i66YmGG/ZQrlz7n5OjZ78tpZcBWVD1pIU1RVG/V6JinyUMIFjUNr3HToidgcK38jEwwIlyPCzqzsrP5lB/GPiVthKXZaI3DvGHkWydaqADKJrSjz/C8K641TBpxAc4i/tZa99b2kt6K6WbJQjvz0dsfn//b8u3qHkrgOaKmlAn/oFXf8aMk1G7qSpJtRBdXl
tasks:
- internal.lock_machines:
  - 3
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: cuttlefish
- print: '**** done cuttlefish install'
- ceph:
    fs: xfs
- print: '**** done ceph'
- install.upgrade:
    all:
      branch: dumpling
- ceph.restart: null
- parallel:
  - workload
  - upgrade-sequence
- print: '**** done parallel'
- install.upgrade:
    client.0: null
- print: '**** done install.upgrade'
- mon_thrash:
    revive_delay: 20
    thrash_delay: 1
- workunit:
    clients:
      client.1:
      - rados/test.sh
teuthology_branch: firefly
upgrade-sequence:
  sequential:
  - install.upgrade:
      mon.a: null
      mon.b: null
  - ceph.restart:
    - mon.a
    - mon.b
    - mon.c
    - mds.a
    - osd.0
    - osd.1
    - osd.2
    - osd.3
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.10168
workload:
  sequential:
  - workunit:
      branch: dumpling
      clients:
        client.0:
        - rbd/test_librbd.sh
description: upgrade/dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/cuttlefish-dumpling.yaml
  2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-upgrade/client.yaml
  5-final-workload/rados_mon_thrash.yaml distros/ubuntu_12.04.yaml}
duration: 2392.9443271160126
failure_reason: 'Command failed on 10.214.131.12 with status 139: ''mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp
  && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
  CEPH_REF=d6c71b76241b6c5cd2ac5d812250d4bb044ac537 TESTDIR="/home/ubuntu/cephtest" 
  CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage
  timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'''
flavor: basic
owner: scheduled_teuthology@teuthology
sentry_event: http://sentry.ceph.com/inktank/teuthology/search?q=fe060b434a8b40ffa1138d5ea4456481
success: false
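
For context, the "status 139" in failure_reason is the shell's encoding of termination by a signal: 128 + 11, where 11 is SIGSEGV, which matches the "Segmentation fault (core dumped)" line in the log. A small sketch (not teuthology code) of that decoding:

// Minimal sketch, not teuthology code: decode a shell exit status of 139
// as termination by signal 11 (SIGSEGV).
#include <csignal>
#include <cstdio>

int main() {
  int exitstatus = 139;            // the status reported in failure_reason
  if (exitstatus > 128) {
    int sig = exitstatus - 128;    // shell convention: 128 + signal number
    std::printf("killed by signal %d%s\n", sig,
                sig == SIGSEGV ? " (SIGSEGV)" : "");
  } else {
    std::printf("exited with status %d\n", exitstatus);
  }
  return 0;
}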

Related issues: 1 (0 open, 1 closed)

Related to Ceph - Bug #7997: handle_get_version returns old map epochs (Resolved, 04/05/2014)

#1 - Updated by Yuri Weinstein about 10 years ago

FYI - Manual re-run did not produce errors.

#2 - Updated by Sage Weil about 10 years ago

  • Status changed from New to Duplicate
  • Source changed from other to Q/A

dup #7997
