Bug #19416 (closed): ceph-disk failed on ovh

Added by Yuri Weinstein about 7 years ago. Updated about 3 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
ceph-disk
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

This appears to be OVH-related.

Run: http://pulpito.ceph.com/teuthology-2017-03-28_05:10:06-ceph-disk-kraken-distro-basic-ovh/
Jobs: all
Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2017-03-28_05:10:06-ceph-disk-kraken-distro-basic-ovh/954862/teuthology.log

2017-03-29T00:04:50.790 INFO:tasks.workunit.client.0.ovh094.stdout:command = 'ceph-disk --verbose prepare --bluestore --osd-uuid ee0580f4-1412-11e7-a880-fa163e5ff9bb /dev/vdc --block.db /dev/vdd --block.wal /dev/vdd'
2017-03-29T00:04:50.790 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.790 INFO:tasks.workunit.client.0.ovh094.stdout:    @staticmethod
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:    def sh(command):
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:        LOG.debug(":sh: " + command)
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:        proc = subprocess.Popen(
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:            args=command,
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:            stdout=subprocess.PIPE,
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:            stderr=subprocess.STDOUT,
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:            shell=True,
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:            bufsize=1)
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:        lines = []
2017-03-29T00:04:50.791 INFO:tasks.workunit.client.0.ovh094.stdout:        with proc.stdout:
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:            for line in iter(proc.stdout.readline, b''):
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:                line = line.decode('utf-8')
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:                if 'dangerous and experimental' in line:
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:                    LOG.debug('SKIP dangerous and experimental')
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:                    continue
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:                lines.append(line)
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:                LOG.debug(line.strip().encode('ascii', 'ignore'))
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:        if proc.wait() != 0:
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:            raise subprocess.CalledProcessError(
2017-03-29T00:04:50.792 INFO:tasks.workunit.client.0.ovh094.stdout:                returncode=proc.returncode,
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:>               cmd=command
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:            )
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:E           CalledProcessError: Command 'ceph-disk --verbose prepare --bluestore --osd-uuid ee0580f4-1412-11e7-a880-fa163e5ff9bb /dev/vdc --block.db /dev/vdd --block.wal /dev/vdd' returned non-zero exit status 1
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:../../../clone.client.0/qa/workunits/ceph-disk/ceph-disk-test.py:95: CalledProcessError
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:_________________ TestCephDisk.test_activate_separated_journal _________________
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:self = <ceph-disk-test.TestCephDisk object at 0x1b2c950>
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:    def test_activate_separated_journal(self):
2017-03-29T00:04:50.793 INFO:tasks.workunit.client.0.ovh094.stdout:        c = CephDisk()
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:        disks = c.unused_disks()
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:        data_disk = disks[0]
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:>       journal_disk = disks[1]
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:E       IndexError: list index out of range
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:../../../clone.client.0/qa/workunits/ceph-disk/ceph-disk-test.py:604: IndexError
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:_________ TestCephDisk.test_activate_separated_journal_dev_is_symlink __________
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:self = <ceph-disk-test.TestCephDisk object at 0x1b33890>
2017-03-29T00:04:50.794 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:    def test_activate_separated_journal_dev_is_symlink(self):
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:        c = CephDisk()
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:        disks = c.unused_disks()
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:        data_disk = disks[0]
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:>       journal_disk = disks[1]
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:E       IndexError: list index out of range
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:../../../clone.client.0/qa/workunits/ceph-disk/ceph-disk-test.py:614: IndexError
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:_______________ TestCephDisk.test_activate_two_separated_journal _______________
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.795 INFO:tasks.workunit.client.0.ovh094.stdout:self = <ceph-disk-test.TestCephDisk object at 0x1995850>
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:    def test_activate_two_separated_journal(self):
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:        c = CephDisk()
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:        disks = c.unused_disks()
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:        data_disk = disks[0]
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:>       other_data_disk = disks[1]
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:E       IndexError: list index out of range
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:../../../clone.client.0/qa/workunits/ceph-disk/ceph-disk-test.py:652: IndexError
2017-03-29T00:04:50.796 INFO:tasks.workunit.client.0.ovh094.stdout:___________________ TestCephDisk.test_activate_reuse_journal ___________________
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:self = <ceph-disk-test.TestCephDisk object at 0x1995c10>
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:    def test_activate_reuse_journal(self):
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:        c = CephDisk()
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:        disks = c.unused_disks()
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:        data_disk = disks[0]
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:>       journal_disk = disks[1]
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:E       IndexError: list index out of range
2017-03-29T00:04:50.797 INFO:tasks.workunit.client.0.ovh094.stdout:
2017-03-29T00:04:50.798 INFO:tasks.workunit.client.0.ovh094.stdout:../../../clone.client.0/qa/workunits/ceph-disk/ceph-disk-test.py:674: IndexError
2017-03-29T00:04:50.798 INFO:tasks.workunit.client.0.ovh094.stdout:==================== 6 failed, 18 passed in 1092.27 seconds ====================
2017-03-29T00:04:50.802 INFO:tasks.workunit.client.0.ovh094.stderr:+ result=1
2017-03-29T00:04:50.802 INFO:tasks.workunit.client.0.ovh094.stderr:+ sudo rm -f /lib/udev/rules.d/60-ceph-by-partuuid.rules
2017-03-29T00:04:50.816 INFO:tasks.workunit.client.0.ovh094.stderr:++ id -u
2017-03-29T00:04:50.817 INFO:tasks.workunit.client.0.ovh094.stderr:++ dirname /home/ubuntu/cephtest/clone.client.0/qa/workunits/ceph-disk/ceph-disk.sh
2017-03-29T00:04:50.818 INFO:tasks.workunit.client.0.ovh094.stderr:+ sudo chown -R 1000 /home/ubuntu/cephtest/clone.client.0/qa/workunits/ceph-disk
2017-03-29T00:04:50.826 INFO:tasks.workunit.client.0.ovh094.stderr:+ exit 1
2017-03-29T00:04:50.827 INFO:tasks.workunit:Stopping ['ceph-disk/ceph-disk.sh'] on client.0...
2017-03-29T00:04:50.827 INFO:teuthology.orchestra.run.ovh094:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0'
2017-03-29T00:04:51.155 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 83, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_ceph_kraken/qa/tasks/workunit.py", line 415, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 193, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 414, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 149, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 171, in _raise_for_status
    node=self.hostname, label=self.label
CommandFailedError: Command failed (workunit test ceph-disk/ceph-disk.sh) on ovh094 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98a87fa97c9b23e21a05130c72730f5034691310 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/ceph-disk/ceph-disk.sh'
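
From the log, the first failure is the bluestore prepare command exiting non-zero, and the remaining four failures all follow the same pattern: the test calls c.unused_disks() and then indexes disks[1], but on this OVH node the returned list has fewer than two entries (plausibly because the earlier prepare on /dev/vdc and /dev/vdd left those devices partitioned and no longer counted as unused). A minimal, hypothetical guard along these lines, assuming the pytest-based ceph-disk-test.py harness, would turn the hard IndexError into a skip; the helper name below is illustrative only and is not the fix that was applied (the ticket was closed as Won't Fix):

    import pytest

    def unused_disks_or_skip(ceph_disk, count):
        # Hypothetical helper (not part of ceph-disk-test.py): fetch the
        # node's unused disks and skip the test when fewer than `count`
        # are available, instead of letting disks[1] raise IndexError as
        # seen in the log above.
        disks = ceph_disk.unused_disks()
        if len(disks) < count:
            pytest.skip("need %d unused disks, found %d: %s"
                        % (count, len(disks), disks))
        return disks

    # Possible usage inside a test such as test_activate_separated_journal:
    #     c = CephDisk()
    #     disks = unused_disks_or_skip(c, 2)
    #     data_disk, journal_disk = disks[0], disks[1]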
#1

Updated by Sage Weil about 3 years ago

  • Status changed from New to Won't Fix