Bug #16206 » run-tox.sh.log

Yuri Weinstein, 06/08/2016 10:26 PM

 
flake8 create: /home/yuriw/ceph/src/ceph-disk/.tox/flake8
flake8 installdeps: --use-wheel, --find-links=file:///home/yuriw/ceph/src/ceph-disk/wheelhouse, -r/home/yuriw/ceph/src/ceph-disk/requirements.txt, -r/home/yuriw/ceph/src/ceph-disk/test-requirements.txt, ../ceph-detect-init
flake8 develop-inst: /home/yuriw/ceph/src/ceph-disk
flake8 installed: ceph-detect-init==1.0.1,-e git+git@github.com:ceph/ceph@169947697975bd8bccdf040e7b691bb823190e08#egg=ceph_disk&subdirectory=src/ceph-disk,configobj==5.0.6,coverage==4.1,discover==0.4.0,extras==1.0.0,fixtures==3.0.0,flake8==2.5.4,funcsigs==1.0.2,linecache2==1.0.0,mccabe==0.4.0,mock==2.0.0,pbr==1.10.0,pep8==1.7.0,pluggy==0.3.1,py==1.4.31,pyflakes==1.0.0,pytest==2.9.2,python-mimeparse==1.5.2,python-subunit==1.2.0,six==1.10.0,testrepository==0.0.20,testtools==2.2.0,tox==2.3.1,traceback2==1.4.0,unittest2==1.1.0,virtualenv==15.0.2
flake8 runtests: PYTHONHASHSEED='1587613977'
flake8 runtests: commands[0] | flake8 --ignore=H105,H405 ceph_disk tests
py27 create: /home/yuriw/ceph/src/ceph-disk/.tox/py27
py27 installdeps: --use-wheel, --find-links=file:///home/yuriw/ceph/src/ceph-disk/wheelhouse, -r/home/yuriw/ceph/src/ceph-disk/requirements.txt, -r/home/yuriw/ceph/src/ceph-disk/test-requirements.txt, ../ceph-detect-init
py27 develop-inst: /home/yuriw/ceph/src/ceph-disk
py27 installed: ceph-detect-init==1.0.1,-e git+git@github.com:ceph/ceph@169947697975bd8bccdf040e7b691bb823190e08#egg=ceph_disk&subdirectory=src/ceph-disk,configobj==5.0.6,coverage==4.1,discover==0.4.0,extras==1.0.0,fixtures==3.0.0,flake8==2.5.4,funcsigs==1.0.2,linecache2==1.0.0,mccabe==0.4.0,mock==2.0.0,pbr==1.10.0,pep8==1.7.0,pluggy==0.3.1,py==1.4.31,pyflakes==1.0.0,pytest==2.9.2,python-mimeparse==1.5.2,python-subunit==1.2.0,six==1.10.0,testrepository==0.0.20,testtools==2.2.0,tox==2.3.1,traceback2==1.4.0,unittest2==1.1.0,virtualenv==15.0.2
py27 runtests: PYTHONHASHSEED='1587613977'
py27 runtests: commands[0] | coverage run --append --source=ceph_disk /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/py.test -vv tests/test_main.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.6, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/python2.7
cachedir: .cache
rootdir: /home/yuriw/ceph/src/ceph-disk, inifile:
collecting ... collected 25 items

tests/test_main.py::TestCephDisk::test_main_list_json PASSED
tests/test_main.py::TestCephDisk::test_main_list_plain PASSED
tests/test_main.py::TestCephDisk::test_list_format_more_osd_info_plain PASSED
tests/test_main.py::TestCephDisk::test_list_format_plain PASSED
tests/test_main.py::TestCephDisk::test_list_format_dev_plain PASSED
tests/test_main.py::TestCephDisk::test_list_dev_osd PASSED
tests/test_main.py::TestCephDisk::test_list_all_partitions PASSED
tests/test_main.py::TestCephDisk::test_list_data PASSED
tests/test_main.py::TestCephDisk::test_list_dmcrypt_data PASSED
tests/test_main.py::TestCephDisk::test_list_multipath PASSED
tests/test_main.py::TestCephDisk::test_list_default PASSED
tests/test_main.py::TestCephDisk::test_list_bluestore PASSED
tests/test_main.py::TestCephDisk::test_list_other PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_check_osd_status PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_deallocate_osd_id_fail PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_delete_osd_auth_key_fail PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_main_deactivate PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_main_destroy PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_mark_out_out PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_mount PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_path_set_context PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_remove_from_crush_map_fail PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_remove_osd_directory_files PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_stop_daemon PASSED
tests/test_main.py::TestCephDiskDeactivateAndDestroy::test_umount PASSED

========================== 25 passed in 4.98 seconds ===========================
py27 runtests: commands[1] | coverage run --append --source=ceph_disk /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/py.test -vv tests/test_prepare.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.6, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/python2.7
cachedir: .cache
rootdir: /home/yuriw/ceph/src/ceph-disk, inifile:
collecting ... collected 15 items

tests/test_prepare.py::TestPrepare::test_init_dir PASSED
tests/test_prepare.py::TestPrepare::test_init_dev PASSED
tests/test_prepare.py::TestPrepare::test_set_subparser PASSED
tests/test_prepare.py::TestDevice::test_init PASSED
tests/test_prepare.py::TestDevice::test_create_partition PASSED
tests/test_prepare.py::TestDevicePartition::test_init PASSED
tests/test_prepare.py::TestDevicePartition::test_get_uuid PASSED
tests/test_prepare.py::TestDevicePartition::test_get_ptype PASSED
tests/test_prepare.py::TestDevicePartition::test_partition_number PASSED
tests/test_prepare.py::TestDevicePartition::test_dev PASSED
tests/test_prepare.py::TestDevicePartition::test_factory PASSED
tests/test_prepare.py::TestDevicePartitionMultipath::test_init PASSED
tests/test_prepare.py::TestDevicePartitionCrypt::test_luks PASSED
tests/test_prepare.py::TestCryptHelpers::test_get_dmcrypt_type PASSED
tests/test_prepare.py::TestPrepareData::test_set_variables PASSED

========================== 15 passed in 0.52 seconds ===========================
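
Both pytest runs above execute under coverage with --append, so their data accumulates in a single .coverage file together with the ceph-disk.sh run that follows. A minimal sketch of reproducing the combined measurement by hand, assuming the .tox/py27 virtualenv from this log already exists:

    cd /home/yuriw/ceph/src/ceph-disk
    # each invocation appends to the same .coverage data file
    .tox/py27/bin/coverage run --append --source=ceph_disk \
        .tox/py27/bin/py.test -vv tests/test_main.py
    .tox/py27/bin/coverage run --append --source=ceph_disk \
        .tox/py27/bin/py.test -vv tests/test_prepare.py
    .tox/py27/bin/coverage report    # combined totals for ceph_disk
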
py27 runtests: commands[2] | bash -x tests/ceph-disk.sh
WARNING:test command found but not installed in testenv
cmd: /bin/bash
env: /home/yuriw/ceph/src/ceph-disk/.tox/py27
Maybe you forgot to specify a dependency? See also the whitelist_externals envconfig setting.
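
The WARNING above means commands[2] invokes bash, which tox found on the PATH but which is neither a testenv dependency nor declared as an allowed external. A minimal tox.ini fragment that would silence it, assuming tox 2.3.x where the setting is still spelled whitelist_externals (later versions rename it allowlist_externals):

    [testenv]
    # declare bash as an allowed external command for commands[2]
    whitelist_externals =
        bash
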
+ PS4='${BASH_SOURCE[0]}:$LINENO: ${FUNCNAME[0]}: '
tests/ceph-disk.sh:38: main: export PATH=..:.:/home/yuriw/ceph/src/ceph-disk/.tox/py27/bin:/tmp/ceph-disk-virtualenv/bin:/home/yuriw/ceph/src:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
tests/ceph-disk.sh:38: main: PATH=..:.:/home/yuriw/ceph/src/ceph-disk/.tox/py27/bin:/tmp/ceph-disk-virtualenv/bin:/home/yuriw/ceph/src:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
tests/ceph-disk.sh:39: main: export PATH=../ceph-detect-init/virtualenv/bin:..:.:/home/yuriw/ceph/src/ceph-disk/.tox/py27/bin:/tmp/ceph-disk-virtualenv/bin:/home/yuriw/ceph/src:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
tests/ceph-disk.sh:39: main: PATH=../ceph-detect-init/virtualenv/bin:..:.:/home/yuriw/ceph/src/ceph-disk/.tox/py27/bin:/tmp/ceph-disk-virtualenv/bin:/home/yuriw/ceph/src:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
tests/ceph-disk.sh:40: main: export PATH=virtualenv/bin:../ceph-detect-init/virtualenv/bin:..:.:/home/yuriw/ceph/src/ceph-disk/.tox/py27/bin:/tmp/ceph-disk-virtualenv/bin:/home/yuriw/ceph/src:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
tests/ceph-disk.sh:40: main: PATH=virtualenv/bin:../ceph-detect-init/virtualenv/bin:..:.:/home/yuriw/ceph/src/ceph-disk/.tox/py27/bin:/tmp/ceph-disk-virtualenv/bin:/home/yuriw/ceph/src:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
tests/ceph-disk.sh:41: main: DIR=test-ceph-disk
tests/ceph-disk.sh:42: main: : /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/coverage run --append --source=ceph_disk -- /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/ceph-disk
tests/ceph-disk.sh:43: main: OSD_DATA=test-ceph-disk/osd
tests/ceph-disk.sh:44: main: MON_ID=a
tests/ceph-disk.sh:45: main: MONA=127.0.0.1:7451
tests/ceph-disk.sh:46: main: TEST_POOL=rbd
ttests/ceph-disk.sh:47: main: uuidgen
tests/ceph-disk.sh:47: main: FSID=7cd85c56-6cfc-4bcb-bce7-1d73246e6f18
tests/ceph-disk.sh:48: main: export CEPH_CONF=test-ceph-disk/ceph.conf
tests/ceph-disk.sh:48: main: CEPH_CONF=test-ceph-disk/ceph.conf
tests/ceph-disk.sh:49: main: export 'CEPH_ARGS=--fsid 7cd85c56-6cfc-4bcb-bce7-1d73246e6f18'
tests/ceph-disk.sh:49: main: CEPH_ARGS='--fsid 7cd85c56-6cfc-4bcb-bce7-1d73246e6f18'
tests/ceph-disk.sh:50: main: CEPH_ARGS+=' --chdir='
tests/ceph-disk.sh:51: main: CEPH_ARGS+=' --journal-dio=false'
tests/ceph-disk.sh:52: main: CEPH_ARGS+=' --run-dir=test-ceph-disk'
tests/ceph-disk.sh:53: main: CEPH_ARGS+=' --osd-failsafe-full-ratio=.99'
tests/ceph-disk.sh:54: main: CEPH_ARGS+=' --mon-host=127.0.0.1:7451'
tests/ceph-disk.sh:55: main: CEPH_ARGS+=' --log-file=test-ceph-disk/$name.log'
tests/ceph-disk.sh:56: main: CEPH_ARGS+=' --pid-file=test-ceph-disk/$name.pidfile'
tests/ceph-disk.sh:57: main: test -d ../.libs
tests/ceph-disk.sh:58: main: CEPH_ARGS+=' --erasure-code-dir=../.libs'
tests/ceph-disk.sh:59: main: CEPH_ARGS+=' --compression-dir=../.libs'
tests/ceph-disk.sh:61: main: CEPH_ARGS+=' --auth-supported=none'
tests/ceph-disk.sh:62: main: CEPH_ARGS+=' --osd-journal-size=100'
tests/ceph-disk.sh:63: main: CEPH_ARGS+=' --debug-mon=20'
tests/ceph-disk.sh:64: main: CEPH_ARGS+=' --debug-osd=20'
tests/ceph-disk.sh:65: main: CEPH_ARGS+=' --debug-bdev=20'
tests/ceph-disk.sh:66: main: CEPH_ARGS+=' --debug-bluestore=20'
tests/ceph-disk.sh:67: main: CEPH_ARGS+=' --osd-max-object-name-len=460'
tests/ceph-disk.sh:68: main: CEPH_ARGS+=' --osd-max-object-namespace-len=64'
tests/ceph-disk.sh:69: main: CEPH_DISK_ARGS=
tests/ceph-disk.sh:70: main: CEPH_DISK_ARGS+=' --statedir=test-ceph-disk'
tests/ceph-disk.sh:71: main: CEPH_DISK_ARGS+=' --sysconfdir=test-ceph-disk'
tests/ceph-disk.sh:72: main: CEPH_DISK_ARGS+=' --prepend-to-path='
tests/ceph-disk.sh:73: main: CEPH_DISK_ARGS+=' --verbose'
tests/ceph-disk.sh:74: main: TIMEOUT=360
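
Note that the --log-file and --pid-file arguments above are appended inside single quotes, so $name reaches the daemons literally: Ceph expands metavariables such as $name and $cluster itself. A minimal sketch of the effect, assuming Ceph's documented metavariable behavior:

    export CEPH_ARGS='--log-file=test-ceph-disk/$name.log'
    ceph-mon --id a ...    # $name expands to mon.a -> test-ceph-disk/mon.a.log
    ceph-osd --id 0 ...    # $name expands to osd.0 -> test-ceph-disk/osd.0.log
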
ttests/ceph-disk.sh:76: main: which cat
tests/ceph-disk.sh:76: main: cat=/bin/cat
ttests/ceph-disk.sh:77: main: which timeout
tests/ceph-disk.sh:77: main: timeout=/usr/bin/timeout
ttests/ceph-disk.sh:78: main: which diff
tests/ceph-disk.sh:78: main: diff=/usr/bin/diff
ttests/ceph-disk.sh:79: main: which mkdir
tests/ceph-disk.sh:79: main: mkdir=/bin/mkdir
ttests/ceph-disk.sh:80: main: which rm
tests/ceph-disk.sh:80: main: rm=/bin/rm
ttests/ceph-disk.sh:81: main: which uuidgen
tests/ceph-disk.sh:81: main: uuidgen=/usr/bin/uuidgen
tests/ceph-disk.sh:415: main: run
tests/ceph-disk.sh:386: run: local default_actions
tests/ceph-disk.sh:387: run: default_actions+='test_path '
tests/ceph-disk.sh:388: run: default_actions+='test_no_path '
tests/ceph-disk.sh:389: run: default_actions+='test_find_cluster_by_uuid '
tests/ceph-disk.sh:390: run: default_actions+='test_prepend_to_path '
tests/ceph-disk.sh:391: run: default_actions+='test_activate_dir_magic '
tests/ceph-disk.sh:392: run: default_actions+='test_activate_dir '
tests/ceph-disk.sh:393: run: default_actions+='test_keyring_path '
tests/ceph-disk.sh:394: run: default_actions+='test_mark_init '
tests/ceph-disk.sh:395: run: default_actions+='test_zap '
tests/ceph-disk.sh:396: run: default_actions+='test_activate_dir_bluestore '
tests/ceph-disk.sh:397: run: default_actions+='test_ceph_osd_mkfs '
tests/ceph-disk.sh:398: run: local 'actions=test_path test_no_path test_find_cluster_by_uuid test_prepend_to_path test_activate_dir_magic test_activate_dir test_keyring_path test_mark_init test_zap test_activate_dir_bluestore test_ceph_osd_mkfs '
tests/ceph-disk.sh:399: run: local status
tests/ceph-disk.sh:400: run: for action in '$actions'
tests/ceph-disk.sh:401: run: setup
tests/ceph-disk.sh:84: setup: teardown
tests/ceph-disk.sh:91: teardown: kill_daemons
tests/ceph-disk.sh:123: kill_daemons: test -e test-ceph-disk
tests/ceph-disk.sh:124: kill_daemons: return
ttests/ceph-disk.sh:92: teardown: stat -f -c %T .
tests/ceph-disk.sh:92: teardown: '[' ext2/ext3 == btrfs ']'
tests/ceph-disk.sh:96: teardown: read mounted rest
ttests/ceph-disk.sh:96: teardown: pwd
tests/ceph-disk.sh:96: teardown: grep ' /home/yuriw/ceph/src/ceph-disk/test-ceph-disk/'
tests/ceph-disk.sh:99: teardown: rm -fr test-ceph-disk
tests/ceph-disk.sh:85: setup: mkdir test-ceph-disk
tests/ceph-disk.sh:86: setup: mkdir test-ceph-disk/osd
tests/ceph-disk.sh:87: setup: touch test-ceph-disk/ceph.conf
tests/ceph-disk.sh:402: run: set -x
tests/ceph-disk.sh:403: run: test_path
tests/ceph-disk.sh:195: test_path: tweak_path use_path
tests/ceph-disk.sh:149: tweak_path: local tweaker=use_path
tests/ceph-disk.sh:151: tweak_path: setup
tests/ceph-disk.sh:84: setup: teardown
tests/ceph-disk.sh:91: teardown: kill_daemons
tests/ceph-disk.sh:123: kill_daemons: test -e test-ceph-disk
ttests/ceph-disk.sh:126: kill_daemons: find test-ceph-disk
ttests/ceph-disk.sh:126: kill_daemons: grep pidfile
ttests/ceph-disk.sh:92: teardown: stat -f -c %T .
tests/ceph-disk.sh:92: teardown: '[' ext2/ext3 == btrfs ']'
tests/ceph-disk.sh:96: teardown: read mounted rest
ttests/ceph-disk.sh:96: teardown: pwd
tests/ceph-disk.sh:96: teardown: grep ' /home/yuriw/ceph/src/ceph-disk/test-ceph-disk/'
tests/ceph-disk.sh:99: teardown: rm -fr test-ceph-disk
tests/ceph-disk.sh:85: setup: mkdir test-ceph-disk
tests/ceph-disk.sh:86: setup: mkdir test-ceph-disk/osd
tests/ceph-disk.sh:87: setup: touch test-ceph-disk/ceph.conf
tests/ceph-disk.sh:153: tweak_path: command_fixture ceph-conf
tests/ceph-disk.sh:136: command_fixture: local command=ceph-conf
tttests/ceph-disk.sh:137: command_fixture: which ceph-conf
ttests/ceph-disk.sh:137: command_fixture: readlink -f ../ceph-conf
tests/ceph-disk.sh:137: command_fixture: local fpath=/home/yuriw/ceph/src/ceph-conf
ttests/ceph-disk.sh:138: command_fixture: readlink -f ../ceph-conf
tests/ceph-disk.sh:138: command_fixture: '[' /home/yuriw/ceph/src/ceph-conf = /home/yuriw/ceph/src/ceph-conf ']'
tests/ceph-disk.sh:140: command_fixture: cat
tests/ceph-disk.sh:145: command_fixture: chmod +x test-ceph-disk/ceph-conf
tests/ceph-disk.sh:154: tweak_path: command_fixture ceph-osd
tests/ceph-disk.sh:136: command_fixture: local command=ceph-osd
tttests/ceph-disk.sh:137: command_fixture: which ceph-osd
ttests/ceph-disk.sh:137: command_fixture: readlink -f ../ceph-osd
tests/ceph-disk.sh:137: command_fixture: local fpath=/home/yuriw/ceph/src/ceph-osd
ttests/ceph-disk.sh:138: command_fixture: readlink -f ../ceph-osd
tests/ceph-disk.sh:138: command_fixture: '[' /home/yuriw/ceph/src/ceph-osd = /home/yuriw/ceph/src/ceph-osd ']'
tests/ceph-disk.sh:140: command_fixture: cat
tests/ceph-disk.sh:145: command_fixture: chmod +x test-ceph-disk/ceph-osd
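
From the trace, command_fixture resolves a binary with which/readlink, verifies it is the in-tree ../<command>, and installs an executable wrapper into test-ceph-disk via cat and chmod +x. A plausible reconstruction of the function's shape, assuming a heredoc body (which bash -x does not display) that records the call and delegates to the real binary:

    function command_fixture() {
        local command=$1
        local fpath=$(readlink -f $(which $command))
        # only install the wrapper when the resolved binary is the in-tree one
        test "$fpath" = "$(readlink -f ../$command)" || return 1
        cat > test-ceph-disk/$command <<EOF
    #!/bin/bash
    touch test-ceph-disk/used-$command  # hypothetical marker; real body not shown by -x
    exec ../$command "\$@"
    EOF
        chmod +x test-ceph-disk/$command
    }

Presumably the use_path tweaker then arranges the PATH so that ceph-disk picks up these wrappers instead of the real binaries, which is what test_path is exercising.
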
tests/ceph-disk.sh:156: tweak_path: test_activate_dir
tests/ceph-disk.sh:313: test_activate_dir: run_mon
tests/ceph-disk.sh:103: run_mon: local mon_dir=test-ceph-disk/a
tests/ceph-disk.sh:106: run_mon: ceph-mon --id a --mkfs --mon-data=test-ceph-disk/a --mon-initial-members=a
ceph-mon: mon.noname-a 127.0.0.1:7451/0 is local, renaming to mon.a
ceph-mon: set fsid to 7cd85c56-6cfc-4bcb-bce7-1d73246e6f18
ceph-mon: created monfs at test-ceph-disk/a for mon.a
tests/ceph-disk.sh:113: run_mon: ceph-mon --id a --mon-data=test-ceph-disk/a --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-cluster-log-file=test-ceph-disk/a/log --public-addr 127.0.0.1:7451
tests/ceph-disk.sh:315: test_activate_dir: local osd_data=test-ceph-disk/dir
tests/ceph-disk.sh:316: test_activate_dir: /bin/mkdir -p test-ceph-disk/dir
tests/ceph-disk.sh:317: test_activate_dir: test_activate test-ceph-disk/dir test-ceph-disk/dir
tests/ceph-disk.sh:295: test_activate: local to_prepare=test-ceph-disk/dir
tests/ceph-disk.sh:296: test_activate: local to_activate=test-ceph-disk/dir
ttests/ceph-disk.sh:297: test_activate: /usr/bin/uuidgen
tests/ceph-disk.sh:297: test_activate: local osd_uuid=61264e6a-467c-4a67-8cca-0c413b4db43b
tests/ceph-disk.sh:299: test_activate: /bin/mkdir -p test-ceph-disk/osd
tests/ceph-disk.sh:301: test_activate: /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/coverage run --append --source=ceph_disk -- /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose prepare --osd-uuid 61264e6a-467c-4a67-8cca-0c413b4db43b test-ceph-disk/dir
command: Running command: ../ceph-osd --cluster=ceph --show-config-value=fsid
command: Running command: ../ceph-osd --check-allows-journal -i 0 --cluster ceph
command: Running command: ../ceph-osd --check-wants-journal -i 0 --cluster ceph
command: Running command: ../ceph-osd --check-needs-journal -i 0 --cluster ceph
command: Running command: ../ceph-osd --cluster=ceph --show-config-value=osd_journal_size
populate_data_path: Preparing osd data dir test-ceph-disk/dir
command: Running command: /bin/chown -R ceph:ceph test-ceph-disk/dir/ceph_fsid.20628.tmp
command: Running command: /bin/chown -R ceph:ceph test-ceph-disk/dir/fsid.20628.tmp
command: Running command: /bin/chown -R ceph:ceph test-ceph-disk/dir/magic.20628.tmp
tests/ceph-disk.sh:304: test_activate: /usr/bin/timeout 360 /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/coverage run --append --source=ceph_disk -- /home/yuriw/ceph/src/ceph-disk/.tox/py27/bin/ceph-disk --statedir=test-ceph-disk --sysconfdir=test-ceph-disk --prepend-to-path= --verbose activate --mark-init=none test-ceph-disk/dir
main_activate: path = test-ceph-disk/dir
activate: Cluster uuid is 7cd85c56-6cfc-4bcb-bce7-1d73246e6f18
command: Running command: ../ceph-osd --cluster=ceph --show-config-value=fsid
activate: Cluster name is ceph
activate: OSD uuid is 61264e6a-467c-4a67-8cca-0c413b4db43b
allocate_osd_id: Allocating OSD id...
command: Running command: ../ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring osd create --concise 61264e6a-467c-4a67-8cca-0c413b4db43b
command: Running command: /bin/chown -R ceph:ceph test-ceph-disk/dir/whoami.20650.tmp
activate: OSD id is 0
activate: Initializing OSD...
command_check_call: Running command: ../ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring mon getmap -o test-ceph-disk/dir/activate.monmap
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-ceph-disk/dir/activate.monmap --osd-data test-ceph-disk/dir --osd-journal test-ceph-disk/dir/journal --osd-uuid 61264e6a-467c-4a67-8cca-0c413b4db43b --keyring test-ceph-disk/dir/keyring --setuser ceph --setgroup ceph
activate: Authorizing OSD key...
command_check_call: Running command: ../ceph --cluster ceph --name client.bootstrap-osd --keyring test-ceph-disk/bootstrap-osd/ceph.keyring auth add osd.0 -i test-ceph-disk/dir/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.0
command: Running command: /bin/chown -R ceph:ceph test-ceph-disk/dir/active.20650.tmp
activate: ceph osd.0 data dir is ready at test-ceph-disk/dir
command_check_call: Running command: ../ceph-osd --cluster=ceph --id=0 --osd-data=test-ceph-disk/dir --osd-journal=test-ceph-disk/dir/journal
starting osd.0 at :/0 osd_data test-ceph-disk/dir test-ceph-disk/dir/journal
tests/ceph-disk.sh:309: test_activate: test_pool_read_write 61264e6a-467c-4a67-8cca-0c413b4db43b
tests/ceph-disk.sh:281: test_pool_read_write: local osd_uuid=61264e6a-467c-4a67-8cca-0c413b4db43b
tests/ceph-disk.sh:283: test_pool_read_write: /usr/bin/timeout 360 ceph osd pool set rbd size 1
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
set pool 0 size to 1
ttests/ceph-disk.sh:285: test_pool_read_write: ceph osd create 61264e6a-467c-4a67-8cca-0c413b4db43b
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
tests/ceph-disk.sh:285: test_pool_read_write: local id=0
tests/ceph-disk.sh:286: test_pool_read_write: local weight=1
tests/ceph-disk.sh:287: test_pool_read_write: ceph osd crush add osd.0 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
add item id 0 name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map
tests/ceph-disk.sh:288: test_pool_read_write: echo FOO
tests/ceph-disk.sh:289: test_pool_read_write: /usr/bin/timeout 360 rados --pool rbd put BAR test-ceph-disk/BAR
tests/ceph-disk.sh:289: test_pool_read_write: return 1
tests/ceph-disk.sh:309: test_activate: return 1
tests/ceph-disk.sh:317: test_activate_dir: return 1
tests/ceph-disk.sh:156: tweak_path: return 1
tests/ceph-disk.sh:195: test_path: return 1
tests/ceph-disk.sh:404: run: status=1
tests/ceph-disk.sh:405: run: set +x
ERROR: InvocationError: '/bin/bash -x tests/ceph-disk.sh'
___________________________________ summary ____________________________________
flake8: commands succeeded
ERROR: py27: commands failed
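
Where it failed: flake8 and both Python test suites pass; the first non-zero status in the shell test is the object write at tests/ceph-disk.sh:289, where timeout 360 rados --pool rbd put BAR test-ceph-disk/BAR returns 1 (either rados failed outright or the 360-second timeout expired; bash -x does not show the redirection on the preceding echo FOO, which presumably creates the payload file). That status then propagates up through test_pool_read_write, test_activate, test_activate_dir, tweak_path, and test_path, failing the py27 env. A few standard commands for triaging a wedged single-OSD cluster of this shape, assuming CEPH_CONF and CEPH_ARGS are still exported as above:

    ceph -s                          # overall health and PG states
    ceph health detail               # e.g. PGs stuck creating/peering at size 1
    ceph osd tree                    # confirm osd.0 is up and in
    cat test-ceph-disk/mon.a.log     # monitor log, path set via $name above
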