Bug #8250 (closed)

osd crashed "pthread lock: Invalid argument" in upgrade:dumpling-x:stress-split-firefly-distro-basic-vps

Added by Yuri Weinstein almost 10 years ago. Updated almost 10 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Severity: 2 - major

Description

Logs are at http://qa-proxy.ceph.com/teuthology/teuthology-2014-04-28_20:35:04-upgrade:dumpling-x:stress-split-firefly-distro-basic-vps/221112/

2014-04-28T20:52:24.826 DEBUG:teuthology.orchestra.run:Running [10.214.138.80]: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --mkfs --mkkey -i 3 --monmap /home/ubuntu/cephtest/monmap'
2014-04-28T20:52:26.580 INFO:teuthology.orchestra.run.err:[10.214.138.80]: 2014-04-28 23:52:26.579299 7f836cb6a7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-04-28T20:52:26.588 INFO:teuthology.orchestra.run.err:[10.214.138.80]: pthread lock: Invalid argument
2014-04-28T20:52:26.589 INFO:teuthology.orchestra.run.err:[10.214.138.80]: *** Caught signal (Aborted) **
2014-04-28T20:52:26.589 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  in thread 7f836cb6a7a0
2014-04-28T20:52:26.591 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  ceph version 0.67.7-75-g735a90a (735a90a95eea01dbcce5026758895117c2842627)
2014-04-28T20:52:26.592 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  1: ceph-osd() [0x810c01]
2014-04-28T20:52:26.592 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  2: (()+0xf500) [0x7f836bfd9500]
2014-04-28T20:52:26.592 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  3: (gsignal()+0x35) [0x7f836a7e18a5]
2014-04-28T20:52:26.593 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  4: (abort()+0x175) [0x7f836a7e3085]
2014-04-28T20:52:26.593 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  5: (()+0x34fc6) [0x7f836b316fc6]
2014-04-28T20:52:26.593 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  6: (leveldb::DBImpl::Get(leveldb::ReadOptions const&, leveldb::Slice const&, leveldb::Value*)+0x63) [0x7f836b2fc233]
2014-04-28T20:52:26.593 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  7: (LevelDBStore::_get_iterator()+0x41) [0x7f8961]
2014-04-28T20:52:26.594 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  8: (KeyValueDB::get_iterator(std::string const&)+0x2a) [0x7f593a]
2014-04-28T20:52:26.594 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  9: (LevelDBStore::get(std::string const&, std::set<std::string, std::less<std::string>, std::allocator<std::string> > const&, std::map<std::string, ceph::buffer::list, std::less<std::string>, std::allocator<std::pair<std::string const, ceph::buffer::list> > >*)+0x46) [0x7f7cb6]
2014-04-28T20:52:26.594 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  10: (DBObjectMap::init(bool)+0xd5) [0x7ea095]
2014-04-28T20:52:26.594 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  11: (FileStore::mount()+0x27fc) [0x79cbfc]
2014-04-28T20:52:26.595 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  12: (OSD::mkfs(std::string const&, std::string const&, uuid_d, int)+0xea) [0x65b04a]
2014-04-28T20:52:26.595 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  13: (main()+0xd7c) [0x5ae80c]
2014-04-28T20:52:26.595 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  14: (__libc_start_main()+0xfd) [0x7f836a7cdcdd]
2014-04-28T20:52:26.595 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  15: ceph-osd() [0x5ad6d9]
2014-04-28T20:52:26.595 INFO:teuthology.orchestra.run.err:[10.214.138.80]: 2014-04-28 23:52:26.590187 7f836cb6a7a0 -1 *** Caught signal (Aborted) **
2014-04-28T20:52:26.596 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  in thread 7f836cb6a7a0
2014-04-28T20:52:26.596 INFO:teuthology.orchestra.run.err:[10.214.138.80]: 
2014-04-28T20:52:26.596 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  ceph version 0.67.7-75-g735a90a (735a90a95eea01dbcce5026758895117c2842627)
2014-04-28T20:52:26.596 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  1: ceph-osd() [0x810c01]
2014-04-28T20:52:26.597 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  2: (()+0xf500) [0x7f836bfd9500]
2014-04-28T20:52:26.597 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  3: (gsignal()+0x35) [0x7f836a7e18a5]
2014-04-28T20:52:26.597 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  4: (abort()+0x175) [0x7f836a7e3085]
2014-04-28T20:52:26.597 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  5: (()+0x34fc6) [0x7f836b316fc6]
2014-04-28T20:52:26.597 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  6: (leveldb::DBImpl::Get(leveldb::ReadOptions const&, leveldb::Slice const&, leveldb::Value*)+0x63) [0x7f836b2fc233]
2014-04-28T20:52:26.598 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  7: (LevelDBStore::_get_iterator()+0x41) [0x7f8961]
2014-04-28T20:52:26.598 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  8: (KeyValueDB::get_iterator(std::string const&)+0x2a) [0x7f593a]
2014-04-28T20:52:26.598 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  9: (LevelDBStore::get(std::string const&, std::set<std::string, std::less<std::string>, std::allocator<std::string> > const&, std::map<std::string, ceph::buffer::list, std::less<std::string>, std::allocator<std::pair<std::string const, ceph::buffer::list> > >*)+0x46) [0x7f7cb6]
2014-04-28T20:52:26.598 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  10: (DBObjectMap::init(bool)+0xd5) [0x7ea095]
2014-04-28T20:52:26.599 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  11: (FileStore::mount()+0x27fc) [0x79cbfc]
2014-04-28T20:52:26.599 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  12: (OSD::mkfs(std::string const&, std::string const&, uuid_d, int)+0xea) [0x65b04a]
2014-04-28T20:52:26.599 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  13: (main()+0xd7c) [0x5ae80c]
2014-04-28T20:52:26.599 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  14: (__libc_start_main()+0xfd) [0x7f836a7cdcdd]
2014-04-28T20:52:26.599 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  15: ceph-osd() [0x5ad6d9]
2014-04-28T20:52:26.600 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2014-04-28T20:52:26.600 INFO:teuthology.orchestra.run.err:[10.214.138.80]: 
2014-04-28T20:52:26.600 INFO:teuthology.orchestra.run.err:[10.214.138.80]:    -20> 2014-04-28 23:52:26.579299 7f836cb6a7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-04-28T20:52:26.600 INFO:teuthology.orchestra.run.err:[10.214.138.80]:      0> 2014-04-28 23:52:26.590187 7f836cb6a7a0 -1 *** Caught signal (Aborted) **
2014-04-28T20:52:26.601 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  in thread 7f836cb6a7a0
2014-04-28T20:52:26.601 INFO:teuthology.orchestra.run.err:[10.214.138.80]: 
2014-04-28T20:52:26.601 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  ceph version 0.67.7-75-g735a90a (735a90a95eea01dbcce5026758895117c2842627)
2014-04-28T20:52:26.601 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  1: ceph-osd() [0x810c01]
2014-04-28T20:52:26.601 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  2: (()+0xf500) [0x7f836bfd9500]
2014-04-28T20:52:26.602 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  3: (gsignal()+0x35) [0x7f836a7e18a5]
2014-04-28T20:52:26.602 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  4: (abort()+0x175) [0x7f836a7e3085]
2014-04-28T20:52:26.602 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  5: (()+0x34fc6) [0x7f836b316fc6]
2014-04-28T20:52:26.602 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  6: (leveldb::DBImpl::Get(leveldb::ReadOptions const&, leveldb::Slice const&, leveldb::Value*)+0x63) [0x7f836b2fc233]
2014-04-28T20:52:26.603 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  7: (LevelDBStore::_get_iterator()+0x41) [0x7f8961]
2014-04-28T20:52:26.603 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  8: (KeyValueDB::get_iterator(std::string const&)+0x2a) [0x7f593a]
2014-04-28T20:52:26.603 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  9: (LevelDBStore::get(std::string const&, std::set<std::string, std::less<std::string>, std::allocator<std::string> > const&, std::map<std::string, ceph::buffer::list, std::less<std::string>, std::allocator<std::pair<std::string const, ceph::buffer::list> > >*)+0x46) [0x7f7cb6]
2014-04-28T20:52:26.603 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  10: (DBObjectMap::init(bool)+0xd5) [0x7ea095]
2014-04-28T20:52:26.604 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  11: (FileStore::mount()+0x27fc) [0x79cbfc]
2014-04-28T20:52:26.604 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  12: (OSD::mkfs(std::string const&, std::string const&, uuid_d, int)+0xea) [0x65b04a]
2014-04-28T20:52:26.604 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  13: (main()+0xd7c) [0x5ae80c]
2014-04-28T20:52:26.604 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  14: (__libc_start_main()+0xfd) [0x7f836a7cdcdd]
2014-04-28T20:52:26.604 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  15: ceph-osd() [0x5ad6d9]
2014-04-28T20:52:26.605 INFO:teuthology.orchestra.run.err:[10.214.138.80]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2014-04-28T20:52:26.605 INFO:teuthology.orchestra.run.err:[10.214.138.80]: 
2014-04-28T20:52:26.622 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph.py", line 798, in cluster
    '--monmap', '{tdir}/monmap'.format(tdir=testdir),
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.138.80 with status 134: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --mkfs --mkkey -i 3 --monmap /home/ubuntu/cephtest/monmap'
2014-04-28T20:52:26.693 INFO:teuthology.task.ceph:Checking for errors in any valgrind logs...
2014-04-28T20:52:26.693 DEBUG:teuthology.orchestra.run:Running [10.214.138.80]: "sudo zgrep '<kind>' /var/log/ceph/valgrind/* /dev/null | sort | uniq" 
2014-04-28T20:52:26.697 DEBUG:teuthology.orchestra.run:Running [10.214.138.62]: "sudo zgrep '<kind>' /var/log/ceph/valgrind/* /dev/null | sort | uniq" 
2014-04-28T20:52:26.700 DEBUG:teuthology.orchestra.run:Running [10.214.138.82]: "sudo zgrep '<kind>' /var/log/ceph/valgrind/* /dev/null | sort | uniq" 
2014-04-28T20:52:26.773 INFO:teuthology.orchestra.run.err:[10.214.138.62]: gzip: /var/log/ceph/valgrind/*.gz: No such file or directory
2014-04-28T20:52:26.790 INFO:teuthology.orchestra.run.err:[10.214.138.82]: gzip: /var/log/ceph/valgrind/*.gz: No such file or directory
2014-04-28T20:52:26.969 INFO:teuthology.orchestra.run.err:[10.214.138.80]: gzip: /var/log/ceph/valgrind/*.gz: No such file or directory
2014-04-28T20:52:26.980 INFO:teuthology.task.ceph:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/kcon_most...
2014-04-28T20:52:26.980 DEBUG:teuthology.orchestra.run:Running [10.214.138.80]: 'sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/kcon_most'
2014-04-28T20:52:26.984 DEBUG:teuthology.orchestra.run:Running [10.214.138.62]: 'sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/kcon_most'
2014-04-28T20:52:26.988 DEBUG:teuthology.orchestra.run:Running [10.214.138.82]: 'sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/kcon_most'
2014-04-28T20:52:27.018 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 41, in run_tasks
    manager.__enter__()
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph.py", line 1451, in task
    lambda: run_daemon(ctx=ctx, config=config, type_='mds'),
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph.py", line 798, in cluster
    '--monmap', '{tdir}/monmap'.format(tdir=testdir),
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.138.80 with status 134: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --mkfs --mkkey -i 3 --monmap /home/ubuntu/cephtest/monmap'
2014-04-28T20:52:27.038 DEBUG:teuthology.run_tasks:Unwinding manager install
2014-04-28T20:52:27.038 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/contextutil.py", line 29, in nested
    yield vars
  File "/home/teuthworker/teuthology-firefly/teuthology/task/install.py", line 1159, in task
    yield
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 41, in run_tasks
    manager.__enter__()
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph.py", line 1451, in task
    lambda: run_daemon(ctx=ctx, config=config, type_='mds'),
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph.py", line 798, in cluster
    '--monmap', '{tdir}/monmap'.format(tdir=testdir),
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.138.80 with status 134: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --mkfs --mkkey -i 3 --monmap /home/ubuntu/cephtest/monmap'
2014-04-28T20:52:27.052 DEBUG:teuthology.orchestra.run:Running [10.214.138.80]: 'sudo lsb_release -is'
2014-04-28T20:52:27.122 DEBUG:teuthology.misc:System to be installed: RedHatEnterpriseServer
archive_path: /var/lib/teuthworker/archive/teuthology-2014-04-28_20:35:04-upgrade:dumpling-x:stress-split-firefly-distro-basic-vps/221112
description: upgrade/dumpling-x/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/rados_api_tests.yaml
  6-next-mon/monb.yaml 7-workload/rados_api_tests.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml
  rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/rhel_6.4.yaml}
email: null
job_id: '221112'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: vps
name: teuthology-2014-04-28_20:35:04-upgrade:dumpling-x:stress-split-firefly-distro-basic-vps
nuke-on-error: true
os_type: rhel
os_version: '6.4'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: a0271000c12486d3c5adb2b0732e1c70c3789a4f
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: a0271000c12486d3c5adb2b0732e1c70c3789a4f
  s3tests:
    branch: master
  workunit:
    sha1: a0271000c12486d3c5adb2b0732e1c70c3789a4f
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mon.b
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - osd.3
  - osd.4
  - osd.5
  - mon.c
- - client.0
targets:
  ubuntu@vpm008.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvU5tyGQajJG5YFdJtz0c/e9YzZimWOzq7bJprpdoCujf+oMMINClXbvoz0VuXg33GmsAu987FGntUpxqE7gZpvI8EKtge8np3Qvj7extxuE2Q4F1wRhJ1KpBgPbAuwcxEozG4D8ula/YcNBXInuY/1aqCyu4glxe/dqMMbPMWxbvUGfN/VEzSnlG1Es4a9hXj02FTo2Gsx3sNSHXCbPkZmmEuhRHuQ0oT8v7vsDSNpmNdaDZ93Q03r6gLWZgWHpFokM9JgKLVsV7CY6+mGGfFePIjzth+lgBDPhRsR4jehTM9jf0InTte5ya987u5p2EgchM6Sf/jCMI2jpUcul5aw==
  ubuntu@vpm013.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1I7tdMuRGJ/akNKw+zO97o+lq4vUdn4foNfpCizvJHOnLUrAuFpTVTC8uZ+PT2e8f001qlHuCIuTW3no18t/JEBrAbwqQJibSibPrzd07QzgDbXCGkals3jY0AJ7lw348YVehzJBQqvMwEe3I7cbYoiAOhAAMgR20tPQRZBB25BcuQyGbM98Q+Vt9ysnMyxKgn0Z6hre6NM+9N8UB2dq+LwThT2cVtQjGc8bumKbRXV/cLgH34dvLjyH+03QcHpUjaE4J1sJ2ZGPOrDTo2bnL3yTqrvOe8fIgCV43OqHWL+quuj08s/Al49ya+J2xAyK9mJSBT1BjzlZ/EzvQeJVgw==
  ubuntu@vpm014.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqKPznSM2/TCq1wy8BBJr9SQf5/ApsZGVqOjktHCHWgELLf0POS0Wtjt70omdryc6NfpTx6BMAY7t9PSD4sx6cCSYUfB4tYxuIK4pxBM+/yEmx89D9DfN3XruSQ/TapDsVlRy/MUiCqM7EfG4NqzNJIvwuDe78gLcqBtMjU6YyhBDbV7L7nqrnP9K49/3AGLpQiuVu+gyJwaZSzBoje5FCoMUeJInJCUrhOfopNMrHfLtzYKldoHCW1k8/VZarzHSt6kQ30UrUVuqvVCBAcUB6zPvBkBV4GQ5xtf7RWjijJbt9oQcXdBqLIodvpZrK/EgnK10nwIv1VVz+luo7llRVw==
tasks:
- internal.lock_machines:
  - 3
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- install.upgrade:
    osd.0: null
- ceph.restart:
    daemons:
    - osd.0
    - osd.1
    - osd.2
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    thrash_primary_affinity: false
    timeout: 1200
- ceph.restart:
    daemons:
    - mon.a
    wait-for-healthy: false
    wait-for-osds-up: true
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test-upgrade-firefly.sh
- ceph.restart:
    daemons:
    - mon.b
    wait-for-healthy: false
    wait-for-osds-up: true
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test-upgrade-firefly.sh
- install.upgrade:
    mon.c: null
- ceph.restart:
    daemons:
    - mon.c
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum:
  - a
  - b
  - c
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test-upgrade-firefly.sh
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rbd/test_librbd_python.sh
- rgw:
    client.0:
      idle_timeout: 300
- swift:
    client.0:
      rgw_server: client.0
- rados:
    clients:
    - client.0
    objects: 500
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: firefly
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.11504
description: upgrade/dumpling-x/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/rados_api_tests.yaml
  6-next-mon/monb.yaml 7-workload/rados_api_tests.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml
  rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/rhel_6.4.yaml}
duration: 400.3017530441284
failure_reason: 'Command failed on 10.214.138.80 with status 134: ''sudo MALLOC_CHECK_=3
  adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --mkfs
  --mkkey -i 3 --monmap /home/ubuntu/cephtest/monmap'''
flavor: basic
owner: scheduled_teuthology@teuthology
success: false

#1 Updated by Sage Weil almost 10 years ago

  • Status changed from New to Resolved
