Bug #7936

"failed: rados" in upgrade:dumpling-x:parallel-firefly-distro-basic-vps suite

Added by Yuri Weinstein almost 10 years ago. Updated almost 10 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-03-31_19:33:26-upgrade:dumpling-x:parallel-firefly-distro-basic-vps/156204/

2014-04-01T04:50:23.560 INFO:teuthology.task.rados.rados.1.out:[10.214.138.113]: 445:  finishing write tid 3 to vpm05017423-13
2014-04-01T04:50:23.560 INFO:teuthology.task.rados.rados.1.out:[10.214.138.113]: update_object_version oid 13 v 271 (ObjNum 196 snap 67 seq_num 196) dirty exists
2014-04-01T04:50:23.610 ERROR:teuthology.run_tasks:Manager failed: rados
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 92, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/rados.py", line 170, in task
    running.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 331, in get
    raise self._exception
CommandCrashedError: Command crashed: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write 100 --op delete 50 --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_create 50 --op snap_remove 50 --op rollback 50 --pool unique_pool_0'
2014-04-01T04:50:23.636 DEBUG:teuthology.run_tasks:Unwinding manager install.upgrade
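
The traceback is the standard teuthology unwind path: the rados task runs ceph_test_rados in a background gevent greenlet, and on task teardown running.get() re-raises whatever exception the greenlet captured (the "raise self._exception" frame above), which is how the CommandCrashedError surfaces in run_tasks. A minimal, self-contained sketch of that propagation pattern (illustrative names and exception only, not teuthology's actual rados.py):

import gevent

def run_workload():
    # Stand-in for the ceph_test_rados subprocess; in the real task this
    # raises CommandCrashedError when the test binary dies mid-run.
    raise RuntimeError("Command crashed: ceph_test_rados ...")

# The task spawns the workload in the background and keeps the greenlet.
running = gevent.spawn(run_workload)

# On teardown, get() blocks until the greenlet finishes and then
# re-raises any exception it captured, in the caller's frame.
try:
    running.get()
except RuntimeError as exc:
    print("propagated from greenlet:", exc)

Job config and summary for the run follow:
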
archive_path: /var/lib/teuthworker/archive/teuthology-2014-03-31_19:33:26-upgrade:dumpling-x:parallel-firefly-distro-basic-vps/156204
description: upgrade/dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/cuttlefish-dumpling.yaml
  2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-upgrade/client.yaml
  5-final-workload/rados-snaps-few-objects.yaml distros/debian_7.0.yaml}
email: null
job_id: '156204'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: vps
name: teuthology-2014-03-31_19:33:26-upgrade:dumpling-x:parallel-firefly-distro-basic-vps
nuke-on-error: true
os_type: debian
os_version: '7.0'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - scrub mismatch
    - ScrubResult
    sha1: 5c9b8a271588e39fe6e77bd7a88bcf6b535e1d3e
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 5c9b8a271588e39fe6e77bd7a88bcf6b535e1d3e
  s3tests:
    branch: master
  workunit:
    sha1: 5c9b8a271588e39fe6e77bd7a88bcf6b535e1d3e
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
  - client.1
targets:
  ubuntu@vpm049.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6ZX2y/lS/JgXSUkIdv1FvzCDHwX5eGDUQxAdMxfFe1X3B8pAl+1O/uJXBeL19MHCJd/Q/4DAkWCB2KAWo6fkPE2diG3Nl9GNyy8zbIQ8K0sLxu+Z8F2dYd6h50M6o+wNKDg4vQunuECGi1N3U/pCZtYWZs8IIo6yfiYtjVK6NxUP660PZ6v/bk4Jzh/R3vPfD7s8XT9JW//35dojONcLpgwFagBFZJFwo/PP7GO93HaR9urDnbolVpHvZakobSJUyh3S8UhvC85GMC+uMiBhe8He29CZmxzFfI0j0YjefOXWCr2lvq03+HTOHTyRO7Kl1aqg0oXivNJZXn4p1Armn
  ubuntu@vpm050.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDUaS3mxw8DfoQDcX6Nx4M1Dv0z42FW48QpNOtTR+8E+ROn32ncceJCNol1L/9qIHSENZSf8STABYkkqsAmbfVgyYB3/nwmsqR4+Lo4GFuw9JDxJDm3/uaorrVKTvywxR6bAoGt0lsTIdJzpAGD5tUwK4YISSBlI6t7G86Wl45+IkcOM8jPJoHsCZtZO1LNaZoGpDncasFl0qgtHTRx4awGIQhx7WDux0DqLp7ltEoFbnl63o+TYKjArrThn+bX8yiy4PfhdzkL+uH0TErOGYnpsTNmF5TzUqH69ggy0aIDhGzUbDFkiCeLiLN9DlKJXmai/nDNiePZkF8GSJML2iGj
  ubuntu@vpm051.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqTP8rdoA/r7OfPafwRJGbmlHbOQR0njClRd4UgbUngqRK6pHTjhloo7BBSoR2DqCW+vgWJryXd2jemrqXlsCB/Q/bWSsCEXuqYNj1a19b3XhZrXZy6aX2yFfsKTW/WcBeJ00jhgViHoVpm5bi5Je/yXGk9ZAnSXKGgrIdid0BnKikxfdZQu/KvhTjjrKM/UUt+qiFM1er2ob2Z+Bnh4R2bX63VYaxdJQ8WbSfhaHbsCUz/ZxdRZXT/7VKMXw/JkW9fe5DgzhoDLOdQD8TGp7eUf1imWO0bRrI0EF8dnkPNcyu4a7E0Et8JJFhH0MXQvEgw7LweL4puFXzHiwY7fcj
tasks:
- internal.lock_machines:
  - 3
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: cuttlefish
- print: '**** done cuttlefish install'
- ceph:
    fs: xfs
- print: '**** done ceph'
- install.upgrade:
    all:
      branch: dumpling
- ceph.restart: null
- parallel:
  - workload
  - upgrade-sequence
- print: '**** done parallel'
- install.upgrade:
    client.0: null
- print: '**** done install.upgrade'
- rados:
    clients:
    - client.1
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: firefly
upgrade-sequence:
  sequential:
  - install.upgrade:
      mon.a: null
      mon.b: null
  - ceph.restart:
      daemons:
      - mon.a
      wait-for-healthy: false
      wait-for-osds-up: true
  - sleep:
      duration: 60
  - ceph.restart:
      daemons:
      - mon.b
      wait-for-healthy: false
      wait-for-osds-up: true
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.11574
workload:
  sequential:
  - workunit:
      branch: dumpling
      clients:
        client.0:
        - rbd/test_librbd_python.sh
description: upgrade/dumpling-x/parallel/{0-cluster/start.yaml 1-dumpling-install/cuttlefish-dumpling.yaml
  2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-upgrade/client.yaml
  5-final-workload/rados-snaps-few-objects.yaml distros/debian_7.0.yaml}
duration: 1518.2113060951233
failure_reason: 'Command crashed: ''CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage
  /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write
  100 --op delete 50 --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000
  --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_create
  50 --op snap_remove 50 --op rollback 50 --pool unique_pool_0'''
flavor: basic
owner: scheduled_teuthology@teuthology
success: false
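
The failure_reason command line maps directly onto the rados task stanza in the config above: ops and objects become --max-ops and --objects, and each op_weights entry becomes an "--op <name> <weight>" pair; the remaining flags (--max-in-flight, --size, the stride sizes) presumably come from the task's defaults. A short sketch of that mapping (hypothetical helper written for illustration, not the actual teuthology rados.py code):

def build_ceph_test_rados_args(config):
    # ops/objects map to --max-ops/--objects; each weight becomes an --op triple.
    args = ['ceph_test_rados',
            '--max-ops', str(config['ops']),
            '--objects', str(config['objects'])]
    for op, weight in sorted(config.get('op_weights', {}).items()):
        args += ['--op', op, str(weight)]
    return args

# The stanza from this job reproduces the op flags seen in failure_reason.
print(' '.join(build_ceph_test_rados_args({
    'ops': 4000,
    'objects': 50,
    'op_weights': {'read': 100, 'write': 100, 'delete': 50,
                   'snap_create': 50, 'snap_remove': 50, 'rollback': 50},
})))
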

History

#1 Updated by Sage Weil almost 10 years ago

  • Status changed from New to Can't reproduce
