Bug #9855

rbd "Segmentation fault" in upgrade:firefly:singleton-firefly-distro-basic-vps run

Added by Yuri Weinstein over 9 years ago. Updated over 9 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
-
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

On:
os_type: rhel
os_version: '6.4'

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-10-20_20:50:01-upgrade:firefly:singleton-firefly-distro-basic-vps/562159/

2014-10-21T07:52:40.693 INFO:tasks.workunit.client.0.vpm165.stderr:+ rbd rm sparse1
2014-10-21T07:52:40.704 INFO:tasks.workunit.client.0.vpm165.stderr:*** Caught signal (Segmentation fault) **
2014-10-21T07:52:40.704 INFO:tasks.workunit.client.0.vpm165.stderr: in thread 7f8006581700
2014-10-21T07:52:41.190 INFO:tasks.workunit.client.0.vpm165.stderr:/home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh: line 103: 13943 Segmentation fault      (core dumped) rbd rm sparse1
2014-10-21T07:52:41.191 INFO:tasks.workunit:Stopping rbd/import_export.sh on client.0...
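When triaging runs like this one, the crash signature in the stderr stream is the key line. A small script can scan a teuthology log for such signatures; this is a hypothetical triage helper written for this report, not part of teuthology itself:

```python
import re

# Hypothetical helper: scan teuthology log lines for crash signatures
# of the form "*** Caught signal (<name>) **" seen in the excerpt above.
CRASH_RE = re.compile(r"Caught signal \((?P<sig>[^)]+)\)")

def find_crashes(lines):
    """Return (line_number, signal_name) for every crash signature found."""
    hits = []
    for n, line in enumerate(lines, start=1):
        m = CRASH_RE.search(line)
        if m:
            hits.append((n, m.group("sig")))
    return hits

sample = [
    "2014-10-21T07:52:40.693 INFO:tasks.workunit.client.0.vpm165.stderr:+ rbd rm sparse1",
    "2014-10-21T07:52:40.704 INFO:tasks.workunit.client.0.vpm165.stderr:*** Caught signal (Segmentation fault) **",
]
print(find_crashes(sample))  # → [(2, 'Segmentation fault')]
```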
archive_path: /var/lib/teuthworker/archive/teuthology-2014-10-20_20:50:01-upgrade:firefly:singleton-firefly-distro-basic-vps/562159
branch: firefly
description: upgrade:firefly:singleton/all/{distros/rhel_6.4.yaml versions-steps.yaml}
email: ceph-qa@ceph.com
job_id: '562159'
kernel:
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: vps
name: teuthology-2014-10-20_20:50:01-upgrade:firefly:singleton-firefly-distro-basic-vps
nuke-on-error: true
os_type: rhel
os_version: '6.4'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      global:
        osd heartbeat grace: 100
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: xfs
    log-whitelist:
    - slow request
    - scrub
    sha1: 5a10b95f7968ecac1f2af4abf9fb91347a290544
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 5a10b95f7968ecac1f2af4abf9fb91347a290544
  rgw:
    default_idle_timeout: 1200
  s3tests:
    branch: firefly
    idle_timeout: 1200
  workunit:
    sha1: 5a10b95f7968ecac1f2af4abf9fb91347a290544
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mon.c
  - osd.3
  - osd.4
  - osd.5
  - client.0
- - client.1
suite: upgrade:firefly:singleton
suite_branch: firefly
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_firefly
tasks:
- chef: null
- clock.check: null
- install:
    tag: v0.80.4
- print: '**** done v0.80.4 install'
- ceph:
    fs: xfs
- print: '**** done ceph xfs'
- sequential:
  - workload
- print: '**** done workload v0.80.4'
- parallel:
  - workload1
  - upgrade-sequence1
- print: '**** done parallel v0.80.5'
- parallel:
  - workload2
  - upgrade-sequence2
- print: '**** done parallel v0.80.6'
- parallel:
  - workload_firefly
  - upgrade-sequence_firefly
- print: '**** done parallel firefly branch'
teuthology_branch: master
tube: vps
upgrade-sequence1:
  sequential:
  - install.upgrade:
      client.1:
        tag: v0.80.5
      mon.a:
        tag: v0.80.5
      mon.b:
        tag: v0.80.5
  - print: '**** done v0.80.5 install.upgrade'
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
  - sleep:
      duration: 30
  - print: '**** done ceph.restart all 1 mon/mds/osd'
upgrade-sequence2:
  sequential:
  - install.upgrade:
      client.1:
        tag: v0.80.6
      mon.a:
        tag: v0.80.6
      mon.b:
        tag: v0.80.6
  - print: '**** done v0.80.6 install.upgrade'
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - print: '**** done ceph.restart all 2 osd/mon/mds'
upgrade-sequence_firefly:
  sequential:
  - install.upgrade:
      client.1:
        branch: firefly
      mon.a:
        branch: firefly
      mon.b:
        branch: firefly
  - print: '**** done branch: firefly install.upgrade'
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - print: '**** done ceph.restart all firefly current branch mds/osd/mon'
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.3061
workload:
  workunit:
    clients:
      client.0:
      - suites/blogbench.sh
workload1:
  sequential:
  - workunit:
      clients:
        client.0:
        - rados/load-gen-big.sh
  - print: '**** done rados/load-gen-big.sh'
  - workunit:
      clients:
        client.0:
        - rados/test.sh
        - cls
  - print: '**** done rados/test.sh &  cls'
  - workunit:
      clients:
        client.0:
        - rbd/test_librbd.sh
  - print: '**** done rbd/test_librbd.sh'
workload2:
  sequential:
  - workunit:
      clients:
        client.0:
        - rbd/import_export.sh
      env:
        RBD_CREATE_ARGS: --new-format
  - workunit:
      clients:
        client.0:
        - cls/test_cls_rbd.sh
workload_firefly:
  sequential:
  - rgw:
    - client.0
  - s3tests:
      client.0:
        force-branch: firefly-original
        rgw_server: client.0
description: upgrade:firefly:singleton/all/{distros/rhel_6.4.yaml versions-steps.yaml}
duration: 3973.592614889145
failure_reason: 'Command failed on vpm165 with status 139: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
  && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
  CEPH_REF=5a10b95f7968ecac1f2af4abf9fb91347a290544 TESTDIR="/home/ubuntu/cephtest" 
  CEPH_ID="0" RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage
  timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'''
flavor: basic
owner: scheduled_teuthology@teuthology
status: fail
success: false
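The "status 139" in the failure_reason is consistent with the segfault: a POSIX shell reports a command killed by a signal as 128 + the signal number, so 139 decodes to signal 11, SIGSEGV. A minimal sketch of that decoding (the helper function is illustrative, not from teuthology):

```python
import signal

# Shell convention: exit status > 128 means the command died from a
# signal, with status = 128 + signal number.  139 - 128 = 11 = SIGSEGV,
# matching the "Segmentation fault (core dumped)" in the log above.
def decode_exit_status(status):
    """Return the signal name for a signal-death exit status, else None."""
    if status > 128:
        return signal.Signals(status - 128).name
    return None

print(decode_exit_status(139))  # → SIGSEGV
```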

History

#1 Updated by Tamilarasi muthamizhan over 9 years ago

  • Assignee deleted (Josh Durgin)

#2 Updated by Tamilarasi muthamizhan over 9 years ago

logs: ubuntu@teuthology:/a/teuthology-2014-10-20_19:10:01-upgrade:firefly:newer-firefly-distro-basic-vps/561993

#3 Updated by Tamilarasi muthamizhan over 9 years ago

more logs:

ubuntu@teuthology:/a/teuthology-2014-10-20_18:40:02-upgrade:firefly:older-firefly-distro-basic-vps/561562

#4 Updated by Tamilarasi muthamizhan over 9 years ago

I think this issue could be related to bug #9288, upgrading clients while a workload is in progress.

#5 Updated by Sage Weil over 9 years ago

Tamilarasi muthamizhan wrote:

I think this issue could be related to bug #9288, upgrading clients while a workload is in progress.

yup!

#6 Updated by Sage Weil over 9 years ago

  • Status changed from New to Resolved

fixed test
