Bug #9823

closed

ceph-osd mkfs or ceph auth add : exit -9

Added by Loïc Dachary over 9 years ago. Updated over 9 years ago.

Status: Won't Fix
Priority: Normal
Assignee: Loïc Dachary
Category: OSD
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

While running src/test/erasure-code/test-erasure-code.sh in a loop, the run traced below failed. The -9 exit code suggests an EBADF (errno 9). No other logs were collected.
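
The -9 comes out of ceph-disk's command_check_call (see the Python traceback at the end of the trace). A minimal sketch, assuming the Python 2.7 interpreter from the traceback and a ceph-osd on the PATH as in the test environment, of how that code can be read; the abbreviated ceph-osd arguments are illustrative:

import errno, os, subprocess

try:
    # Same kind of call that fails in the traceback below, arguments abbreviated.
    subprocess.check_call(['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '6'])
except subprocess.CalledProcessError as e:
    if e.returncode < 0:
        # subprocess reports a child killed by a signal as a negative code,
        # so -9 may also mean the process received signal 9.
        print('killed by signal %d' % -e.returncode)
    else:
        print('exited with status %d' % e.returncode)

# errno 9 is EBADF, the reading suggested above.
print('%s: %s' % (errno.errorcode[9], os.strerror(9)))  # EBADF: Bad file descriptor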

run: 23: local dir=test-erasure-code
run: 25: export CEPH_MON=127.0.0.1:7101
run: 25: CEPH_MON=127.0.0.1:7101
run: 26: export CEPH_ARGS
rrun: 27: uuidgen
run: 27: CEPH_ARGS+='--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none '
run: 28: CEPH_ARGS+='--mon-host=127.0.0.1:7101 '
run: 30: setup test-erasure-code
setup: 19: local dir=test-erasure-code
setup: 20: teardown test-erasure-code
teardown: 25: local dir=test-erasure-code
teardown: 26: kill_daemons test-erasure-code
kill_daemons: 63: local dir=test-erasure-code
kkill_daemons: 62: find test-erasure-code
kkill_daemons: 62: grep '\.pid'
teardown: 27: rm -fr test-erasure-code
setup: 21: mkdir test-erasure-code
run: 31: run_mon test-erasure-code a --public-addr 127.0.0.1:7101
run_mon: 31: local dir=test-erasure-code
run_mon: 32: shift
run_mon: 33: local id=a
run_mon: 34: shift
run_mon: 35: dir+=/a
run_mon: 38: ./ceph-mon --id a --mkfs --mon-data=test-erasure-code/a --run-dir=test-erasure-code/a --public-addr 127.0.0.1:7101
./ceph-mon: renaming mon.noname-a 127.0.0.1:7101/0 to mon.a
./ceph-mon: set fsid to 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
./ceph-mon: created monfs at test-erasure-code/a for mon.a
run_mon: 44: ./ceph-mon --id a --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=test-erasure-code/a --log-file=test-erasure-code/a/log --mon-cluster-log-file=test-erasure-code/a/log --run-dir=test-erasure-code/a '--pid-file=test-erasure-code/a/$name.pid' --public-addr 127.0.0.1:7101
run: 33: CEPH_ARGS=
run: 33: ./ceph --admin-daemon test-erasure-code/a/ceph-mon.a.asok log flush
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
{}
run: 34: grep 'load: jerasure.*lrc' test-erasure-code/a/log
2014-10-19 11:20:40.792089 7fd5c0814840 10 load: jerasure load: lrc load: isa 
rrun: 22: seq 0 10
run: 35: for id in '$(seq 0 10)'
run: 36: run_osd test-erasure-code 0
run_osd: 20: local dir=test-erasure-code
run_osd: 21: shift
run_osd: 22: local id=0
run_osd: 23: shift
run_osd: 24: local osd_data=test-erasure-code/0
run_osd: 26: local ceph_disk_args
run_osd: 27: ceph_disk_args+=' --statedir=test-erasure-code'
run_osd: 28: ceph_disk_args+=' --sysconfdir=test-erasure-code'
run_osd: 29: ceph_disk_args+=' --prepend-to-path='
run_osd: 30: ceph_disk_args+=' --verbose'
run_osd: 32: touch test-erasure-code/ceph.conf
run_osd: 34: mkdir -p test-erasure-code/0
run_osd: 35: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/0
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size
DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/0
run_osd: 38: local 'ceph_args=--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101 '
run_osd: 39: ceph_args+=' --osd-backfill-full-ratio=.99'
run_osd: 40: ceph_args+=' --osd-failsafe-full-ratio=.99'
run_osd: 41: ceph_args+=' --osd-journal-size=100'
run_osd: 42: ceph_args+=' --osd-data=test-erasure-code/0'
run_osd: 43: ceph_args+=' --chdir='
run_osd: 44: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
run_osd: 45: ceph_args+=' --run-dir=test-erasure-code'
run_osd: 46: ceph_args+=' --debug-osd=20'
run_osd: 47: ceph_args+=' --log-file=test-erasure-code/osd-$id.log'
run_osd: 48: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pid'
run_osd: 49: ceph_args+=' '
run_osd: 50: ceph_args+=
run_osd: 51: mkdir -p test-erasure-code/0
run_osd: 52: CEPH_ARGS='--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101  --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=test-erasure-code/0 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pid '
run_osd: 52: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/0
DEBUG:ceph-disk:Cluster uuid is 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is 29b1a57d-f648-4146-971a-a866c246a060
DEBUG:ceph-disk:Allocating OSD id...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 29b1a57d-f648-4146-971a-a866c246a060
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
DEBUG:ceph-disk:OSD id is 0
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/0/activate.monmap
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap test-erasure-code/0/activate.monmap --osd-data test-erasure-code/0 --osd-journal test-erasure-code/0/journal --osd-uuid 29b1a57d-f648-4146-971a-a866c246a060 --keyring test-erasure-code/0/keyring
2014-10-19 11:20:42.114098 7f54350cd840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:42.216186 7f54350cd840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:42.216822 7f54350cd840 -1 filestore(test-erasure-code/0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-19 11:20:42.389138 7f54350cd840 -1 created object store test-erasure-code/0 journal test-erasure-code/0/journal for osd.0 fsid 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
2014-10-19 11:20:42.389211 7f54350cd840 -1 auth: error reading file: test-erasure-code/0/keyring: can't open test-erasure-code/0/keyring: (2) No such file or directory
2014-10-19 11:20:42.389340 7f54350cd840 -1 created new key in keyring test-erasure-code/0/keyring
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.0 -i test-erasure-code/0/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.0
DEBUG:ceph-disk:ceph osd.0 data dir is ready at test-erasure-code/0
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=0 --osd-data=test-erasure-code/0 --osd-journal=test-erasure-code/0/journal
starting osd.0 at :/0 osd_data test-erasure-code/0 test-erasure-code/0/journal
rrun_osd: 57: cat test-erasure-code/0/whoami
run_osd: 57: '[' 0 = 0 ']'
run_osd: 59: ./ceph osd crush create-or-move 0 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map
run_osd: 61: status=1
run_osd: 63: (( i=0 ))
run_osd: 63: (( i < 60 ))
run_osd: 64: ceph osd dump
run_osd: 64: grep 'osd.0 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.0 up   in  weight 1 up_from 3 up_thru 0 down_at 0 last_clean_interval [0,0) 127.0.0.1:6804/25297 127.0.0.1:6805/25297 127.0.0.1:6806/25297 127.0.0.1:6807/25297 exists,up 29b1a57d-f648-4146-971a-a866c246a060
run_osd: 67: status=0
run_osd: 68: break
run_osd: 72: return 0
run: 35: for id in '$(seq 0 10)'
run: 36: run_osd test-erasure-code 1
run_osd: 20: local dir=test-erasure-code
run_osd: 21: shift
run_osd: 22: local id=1
run_osd: 23: shift
run_osd: 24: local osd_data=test-erasure-code/1
run_osd: 26: local ceph_disk_args
run_osd: 27: ceph_disk_args+=' --statedir=test-erasure-code'
run_osd: 28: ceph_disk_args+=' --sysconfdir=test-erasure-code'
run_osd: 29: ceph_disk_args+=' --prepend-to-path='
run_osd: 30: ceph_disk_args+=' --verbose'
run_osd: 32: touch test-erasure-code/ceph.conf
run_osd: 34: mkdir -p test-erasure-code/1
run_osd: 35: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/1
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size
DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/1
run_osd: 38: local 'ceph_args=--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101 '
run_osd: 39: ceph_args+=' --osd-backfill-full-ratio=.99'
run_osd: 40: ceph_args+=' --osd-failsafe-full-ratio=.99'
run_osd: 41: ceph_args+=' --osd-journal-size=100'
run_osd: 42: ceph_args+=' --osd-data=test-erasure-code/1'
run_osd: 43: ceph_args+=' --chdir='
run_osd: 44: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
run_osd: 45: ceph_args+=' --run-dir=test-erasure-code'
run_osd: 46: ceph_args+=' --debug-osd=20'
run_osd: 47: ceph_args+=' --log-file=test-erasure-code/osd-$id.log'
run_osd: 48: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pid'
run_osd: 49: ceph_args+=' '
run_osd: 50: ceph_args+=
run_osd: 51: mkdir -p test-erasure-code/1
run_osd: 52: CEPH_ARGS='--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101  --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=test-erasure-code/1 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pid '
run_osd: 52: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/1
DEBUG:ceph-disk:Cluster uuid is 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is 6f6a5931-b230-49ef-83f3-ae974bc54021
DEBUG:ceph-disk:Allocating OSD id...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 6f6a5931-b230-49ef-83f3-ae974bc54021
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
DEBUG:ceph-disk:OSD id is 1
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/1/activate.monmap
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap test-erasure-code/1/activate.monmap --osd-data test-erasure-code/1 --osd-journal test-erasure-code/1/journal --osd-uuid 6f6a5931-b230-49ef-83f3-ae974bc54021 --keyring test-erasure-code/1/keyring
2014-10-19 11:20:44.959381 7f4f286da840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:45.003106 7f4f286da840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:45.005193 7f4f286da840 -1 filestore(test-erasure-code/1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-19 11:20:45.037876 7f4f286da840 -1 created object store test-erasure-code/1 journal test-erasure-code/1/journal for osd.1 fsid 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
2014-10-19 11:20:45.037945 7f4f286da840 -1 auth: error reading file: test-erasure-code/1/keyring: can't open test-erasure-code/1/keyring: (2) No such file or directory
2014-10-19 11:20:45.038074 7f4f286da840 -1 created new key in keyring test-erasure-code/1/keyring
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.1 -i test-erasure-code/1/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.1
DEBUG:ceph-disk:ceph osd.1 data dir is ready at test-erasure-code/1
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=1 --osd-data=test-erasure-code/1 --osd-journal=test-erasure-code/1/journal
starting osd.1 at :/0 osd_data test-erasure-code/1 test-erasure-code/1/journal
rrun_osd: 57: cat test-erasure-code/1/whoami
run_osd: 57: '[' 1 = 1 ']'
run_osd: 59: ./ceph osd crush create-or-move 1 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.1' weight 1 at location {host=localhost,root=default} to crush map
run_osd: 61: status=1
run_osd: 63: (( i=0 ))
run_osd: 63: (( i < 60 ))
run_osd: 64: ceph osd dump
run_osd: 64: grep 'osd.1 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.1 up   in  weight 1 up_from 6 up_thru 7 down_at 0 last_clean_interval [0,0) 127.0.0.1:6828/31905 127.0.0.1:6829/31905 127.0.0.1:6830/31905 127.0.0.1:6831/31905 exists,up 6f6a5931-b230-49ef-83f3-ae974bc54021
run_osd: 67: status=0
run_osd: 68: break
run_osd: 72: return 0
run: 35: for id in '$(seq 0 10)'
run: 36: run_osd test-erasure-code 2
run_osd: 20: local dir=test-erasure-code
run_osd: 21: shift
run_osd: 22: local id=2
run_osd: 23: shift
run_osd: 24: local osd_data=test-erasure-code/2
run_osd: 26: local ceph_disk_args
run_osd: 27: ceph_disk_args+=' --statedir=test-erasure-code'
run_osd: 28: ceph_disk_args+=' --sysconfdir=test-erasure-code'
run_osd: 29: ceph_disk_args+=' --prepend-to-path='
run_osd: 30: ceph_disk_args+=' --verbose'
run_osd: 32: touch test-erasure-code/ceph.conf
run_osd: 34: mkdir -p test-erasure-code/2
run_osd: 35: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/2
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size
DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/2
run_osd: 38: local 'ceph_args=--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101 '
run_osd: 39: ceph_args+=' --osd-backfill-full-ratio=.99'
run_osd: 40: ceph_args+=' --osd-failsafe-full-ratio=.99'
run_osd: 41: ceph_args+=' --osd-journal-size=100'
run_osd: 42: ceph_args+=' --osd-data=test-erasure-code/2'
run_osd: 43: ceph_args+=' --chdir='
run_osd: 44: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
run_osd: 45: ceph_args+=' --run-dir=test-erasure-code'
run_osd: 46: ceph_args+=' --debug-osd=20'
run_osd: 47: ceph_args+=' --log-file=test-erasure-code/osd-$id.log'
run_osd: 48: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pid'
run_osd: 49: ceph_args+=' '
run_osd: 50: ceph_args+=
run_osd: 51: mkdir -p test-erasure-code/2
run_osd: 52: CEPH_ARGS='--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101  --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=test-erasure-code/2 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pid '
run_osd: 52: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/2
DEBUG:ceph-disk:Cluster uuid is 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is df18cf8a-9cad-4760-80d8-1c2092bbe99e
DEBUG:ceph-disk:Allocating OSD id...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise df18cf8a-9cad-4760-80d8-1c2092bbe99e
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
DEBUG:ceph-disk:OSD id is 2
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/2/activate.monmap
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 2 --monmap test-erasure-code/2/activate.monmap --osd-data test-erasure-code/2 --osd-journal test-erasure-code/2/journal --osd-uuid df18cf8a-9cad-4760-80d8-1c2092bbe99e --keyring test-erasure-code/2/keyring
2014-10-19 11:20:47.634438 7fe23fc91840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:47.707341 7fe23fc91840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:47.708002 7fe23fc91840 -1 filestore(test-erasure-code/2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-19 11:20:47.742501 7fe23fc91840 -1 created object store test-erasure-code/2 journal test-erasure-code/2/journal for osd.2 fsid 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
2014-10-19 11:20:47.742579 7fe23fc91840 -1 auth: error reading file: test-erasure-code/2/keyring: can't open test-erasure-code/2/keyring: (2) No such file or directory
2014-10-19 11:20:47.742715 7fe23fc91840 -1 created new key in keyring test-erasure-code/2/keyring
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.2 -i test-erasure-code/2/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.2
DEBUG:ceph-disk:ceph osd.2 data dir is ready at test-erasure-code/2
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=2 --osd-data=test-erasure-code/2 --osd-journal=test-erasure-code/2/journal
starting osd.2 at :/0 osd_data test-erasure-code/2 test-erasure-code/2/journal
rrun_osd: 57: cat test-erasure-code/2/whoami
run_osd: 57: '[' 2 = 2 ']'
run_osd: 59: ./ceph osd crush create-or-move 2 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.2' weight 1 at location {host=localhost,root=default} to crush map
run_osd: 61: status=1
run_osd: 63: (( i=0 ))
run_osd: 63: (( i < 60 ))
run_osd: 64: ceph osd dump
run_osd: 64: grep 'osd.2 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.2 up   in  weight 1 up_from 10 up_thru 10 down_at 0 last_clean_interval [0,0) 127.0.0.1:6847/5405 127.0.0.1:6848/5405 127.0.0.1:6849/5405 127.0.0.1:6850/5405 exists,up df18cf8a-9cad-4760-80d8-1c2092bbe99e
run_osd: 67: status=0
run_osd: 68: break
run_osd: 72: return 0
run: 35: for id in '$(seq 0 10)'
run: 36: run_osd test-erasure-code 3
run_osd: 20: local dir=test-erasure-code
run_osd: 21: shift
run_osd: 22: local id=3
run_osd: 23: shift
run_osd: 24: local osd_data=test-erasure-code/3
run_osd: 26: local ceph_disk_args
run_osd: 27: ceph_disk_args+=' --statedir=test-erasure-code'
run_osd: 28: ceph_disk_args+=' --sysconfdir=test-erasure-code'
run_osd: 29: ceph_disk_args+=' --prepend-to-path='
run_osd: 30: ceph_disk_args+=' --verbose'
run_osd: 32: touch test-erasure-code/ceph.conf
run_osd: 34: mkdir -p test-erasure-code/3
run_osd: 35: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/3
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size
DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/3
run_osd: 38: local 'ceph_args=--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101 '
run_osd: 39: ceph_args+=' --osd-backfill-full-ratio=.99'
run_osd: 40: ceph_args+=' --osd-failsafe-full-ratio=.99'
run_osd: 41: ceph_args+=' --osd-journal-size=100'
run_osd: 42: ceph_args+=' --osd-data=test-erasure-code/3'
run_osd: 43: ceph_args+=' --chdir='
run_osd: 44: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
run_osd: 45: ceph_args+=' --run-dir=test-erasure-code'
run_osd: 46: ceph_args+=' --debug-osd=20'
run_osd: 47: ceph_args+=' --log-file=test-erasure-code/osd-$id.log'
run_osd: 48: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pid'
run_osd: 49: ceph_args+=' '
run_osd: 50: ceph_args+=
run_osd: 51: mkdir -p test-erasure-code/3
run_osd: 52: CEPH_ARGS='--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101  --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=test-erasure-code/3 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pid '
run_osd: 52: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/3
DEBUG:ceph-disk:Cluster uuid is 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is 8956b717-f792-40e0-9bb2-825be52a60d7
DEBUG:ceph-disk:Allocating OSD id...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 8956b717-f792-40e0-9bb2-825be52a60d7
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
DEBUG:ceph-disk:OSD id is 3
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/3/activate.monmap
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 3 --monmap test-erasure-code/3/activate.monmap --osd-data test-erasure-code/3 --osd-journal test-erasure-code/3/journal --osd-uuid 8956b717-f792-40e0-9bb2-825be52a60d7 --keyring test-erasure-code/3/keyring
2014-10-19 11:20:50.203412 7fd4b76b8840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:50.241310 7fd4b76b8840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:50.242236 7fd4b76b8840 -1 filestore(test-erasure-code/3) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-19 11:20:50.275632 7fd4b76b8840 -1 created object store test-erasure-code/3 journal test-erasure-code/3/journal for osd.3 fsid 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
2014-10-19 11:20:50.275705 7fd4b76b8840 -1 auth: error reading file: test-erasure-code/3/keyring: can't open test-erasure-code/3/keyring: (2) No such file or directory
2014-10-19 11:20:50.275849 7fd4b76b8840 -1 created new key in keyring test-erasure-code/3/keyring
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.3 -i test-erasure-code/3/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.3
DEBUG:ceph-disk:ceph osd.3 data dir is ready at test-erasure-code/3
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=3 --osd-data=test-erasure-code/3 --osd-journal=test-erasure-code/3/journal
starting osd.3 at :/0 osd_data test-erasure-code/3 test-erasure-code/3/journal
rrun_osd: 57: cat test-erasure-code/3/whoami
run_osd: 57: '[' 3 = 3 ']'
run_osd: 59: ./ceph osd crush create-or-move 3 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.3' weight 1 at location {host=localhost,root=default} to crush map
run_osd: 61: status=1
run_osd: 63: (( i=0 ))
run_osd: 63: (( i < 60 ))
run_osd: 64: ceph osd dump
run_osd: 64: grep 'osd.3 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.3 up   in  weight 1 up_from 13 up_thru 14 down_at 0 last_clean_interval [0,0) 127.0.0.1:6853/10572 127.0.0.1:6854/10572 127.0.0.1:6855/10572 127.0.0.1:6856/10572 exists,up 8956b717-f792-40e0-9bb2-825be52a60d7
run_osd: 67: status=0
run_osd: 68: break
run_osd: 72: return 0
run: 35: for id in '$(seq 0 10)'
run: 36: run_osd test-erasure-code 4
run_osd: 20: local dir=test-erasure-code
run_osd: 21: shift
run_osd: 22: local id=4
run_osd: 23: shift
run_osd: 24: local osd_data=test-erasure-code/4
run_osd: 26: local ceph_disk_args
run_osd: 27: ceph_disk_args+=' --statedir=test-erasure-code'
run_osd: 28: ceph_disk_args+=' --sysconfdir=test-erasure-code'
run_osd: 29: ceph_disk_args+=' --prepend-to-path='
run_osd: 30: ceph_disk_args+=' --verbose'
run_osd: 32: touch test-erasure-code/ceph.conf
run_osd: 34: mkdir -p test-erasure-code/4
run_osd: 35: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/4
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size
DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/4
run_osd: 38: local 'ceph_args=--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101 '
run_osd: 39: ceph_args+=' --osd-backfill-full-ratio=.99'
run_osd: 40: ceph_args+=' --osd-failsafe-full-ratio=.99'
run_osd: 41: ceph_args+=' --osd-journal-size=100'
run_osd: 42: ceph_args+=' --osd-data=test-erasure-code/4'
run_osd: 43: ceph_args+=' --chdir='
run_osd: 44: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
run_osd: 45: ceph_args+=' --run-dir=test-erasure-code'
run_osd: 46: ceph_args+=' --debug-osd=20'
run_osd: 47: ceph_args+=' --log-file=test-erasure-code/osd-$id.log'
run_osd: 48: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pid'
run_osd: 49: ceph_args+=' '
run_osd: 50: ceph_args+=
run_osd: 51: mkdir -p test-erasure-code/4
run_osd: 52: CEPH_ARGS='--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101  --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=test-erasure-code/4 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pid '
run_osd: 52: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/4
DEBUG:ceph-disk:Cluster uuid is 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is 50b1abbc-0606-4550-8ff4-2d329c2e6879
DEBUG:ceph-disk:Allocating OSD id...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise 50b1abbc-0606-4550-8ff4-2d329c2e6879
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
DEBUG:ceph-disk:OSD id is 4
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/4/activate.monmap
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 4 --monmap test-erasure-code/4/activate.monmap --osd-data test-erasure-code/4 --osd-journal test-erasure-code/4/journal --osd-uuid 50b1abbc-0606-4550-8ff4-2d329c2e6879 --keyring test-erasure-code/4/keyring
2014-10-19 11:20:52.875433 7fbf774fb840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:52.970265 7fbf774fb840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:52.970958 7fbf774fb840 -1 filestore(test-erasure-code/4) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-19 11:20:53.006380 7fbf774fb840 -1 created object store test-erasure-code/4 journal test-erasure-code/4/journal for osd.4 fsid 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
2014-10-19 11:20:53.006456 7fbf774fb840 -1 auth: error reading file: test-erasure-code/4/keyring: can't open test-erasure-code/4/keyring: (2) No such file or directory
2014-10-19 11:20:53.006570 7fbf774fb840 -1 created new key in keyring test-erasure-code/4/keyring
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.4 -i test-erasure-code/4/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.4
DEBUG:ceph-disk:ceph osd.4 data dir is ready at test-erasure-code/4
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=4 --osd-data=test-erasure-code/4 --osd-journal=test-erasure-code/4/journal
starting osd.4 at :/0 osd_data test-erasure-code/4 test-erasure-code/4/journal
rrun_osd: 57: cat test-erasure-code/4/whoami
run_osd: 57: '[' 4 = 4 ']'
run_osd: 59: ./ceph osd crush create-or-move 4 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.4' weight 1 at location {host=localhost,root=default} to crush map
run_osd: 61: status=1
run_osd: 63: (( i=0 ))
run_osd: 63: (( i < 60 ))
run_osd: 64: grep 'osd.4 up'
run_osd: 64: ceph osd dump
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.4 up   in  weight 1 up_from 17 up_thru 18 down_at 0 last_clean_interval [0,0) 127.0.0.1:6860/16998 127.0.0.1:6861/16998 127.0.0.1:6862/16998 127.0.0.1:6863/16998 exists,up 50b1abbc-0606-4550-8ff4-2d329c2e6879
run_osd: 67: status=0
run_osd: 68: break
run_osd: 72: return 0
run: 35: for id in '$(seq 0 10)'
run: 36: run_osd test-erasure-code 5
run_osd: 20: local dir=test-erasure-code
run_osd: 21: shift
run_osd: 22: local id=5
run_osd: 23: shift
run_osd: 24: local osd_data=test-erasure-code/5
run_osd: 26: local ceph_disk_args
run_osd: 27: ceph_disk_args+=' --statedir=test-erasure-code'
run_osd: 28: ceph_disk_args+=' --sysconfdir=test-erasure-code'
run_osd: 29: ceph_disk_args+=' --prepend-to-path='
run_osd: 30: ceph_disk_args+=' --verbose'
run_osd: 32: touch test-erasure-code/ceph.conf
run_osd: 34: mkdir -p test-erasure-code/5
run_osd: 35: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/5
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size
DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/5
run_osd: 38: local 'ceph_args=--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101 '
run_osd: 39: ceph_args+=' --osd-backfill-full-ratio=.99'
run_osd: 40: ceph_args+=' --osd-failsafe-full-ratio=.99'
run_osd: 41: ceph_args+=' --osd-journal-size=100'
run_osd: 42: ceph_args+=' --osd-data=test-erasure-code/5'
run_osd: 43: ceph_args+=' --chdir='
run_osd: 44: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
run_osd: 45: ceph_args+=' --run-dir=test-erasure-code'
run_osd: 46: ceph_args+=' --debug-osd=20'
run_osd: 47: ceph_args+=' --log-file=test-erasure-code/osd-$id.log'
run_osd: 48: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pid'
run_osd: 49: ceph_args+=' '
run_osd: 50: ceph_args+=
run_osd: 51: mkdir -p test-erasure-code/5
run_osd: 52: CEPH_ARGS='--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101  --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=test-erasure-code/5 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pid '
run_osd: 52: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/5
DEBUG:ceph-disk:Cluster uuid is 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is b082dbc1-b31b-4f05-b0ba-000d2389a12d
DEBUG:ceph-disk:Allocating OSD id...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise b082dbc1-b31b-4f05-b0ba-000d2389a12d
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
DEBUG:ceph-disk:OSD id is 5
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/5/activate.monmap
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 5 --monmap test-erasure-code/5/activate.monmap --osd-data test-erasure-code/5 --osd-journal test-erasure-code/5/journal --osd-uuid b082dbc1-b31b-4f05-b0ba-000d2389a12d --keyring test-erasure-code/5/keyring
2014-10-19 11:20:55.889553 7f3c1442e840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:55.926793 7f3c1442e840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:55.927396 7f3c1442e840 -1 filestore(test-erasure-code/5) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-10-19 11:20:55.962405 7f3c1442e840 -1 created object store test-erasure-code/5 journal test-erasure-code/5/journal for osd.5 fsid 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
2014-10-19 11:20:55.962526 7f3c1442e840 -1 auth: error reading file: test-erasure-code/5/keyring: can't open test-erasure-code/5/keyring: (2) No such file or directory
2014-10-19 11:20:55.962647 7f3c1442e840 -1 created new key in keyring test-erasure-code/5/keyring
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.5 -i test-erasure-code/5/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.5
DEBUG:ceph-disk:ceph osd.5 data dir is ready at test-erasure-code/5
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --id=5 --osd-data=test-erasure-code/5 --osd-journal=test-erasure-code/5/journal
starting osd.5 at :/0 osd_data test-erasure-code/5 test-erasure-code/5/journal
rrun_osd: 57: cat test-erasure-code/5/whoami
run_osd: 57: '[' 5 = 5 ']'
run_osd: 59: ./ceph osd crush create-or-move 5 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.5' weight 1 at location {host=localhost,root=default} to crush map
run_osd: 61: status=1
run_osd: 63: (( i=0 ))
run_osd: 63: (( i < 60 ))
run_osd: 64: grep 'osd.5 up'
run_osd: 64: ceph osd dump
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.5 up   in  weight 1 up_from 21 up_thru 22 down_at 0 last_clean_interval [0,0) 127.0.0.1:6841/22314 127.0.0.1:6858/22314 127.0.0.1:6864/22314 127.0.0.1:6865/22314 exists,up b082dbc1-b31b-4f05-b0ba-000d2389a12d
run_osd: 67: status=0
run_osd: 68: break
run_osd: 72: return 0
run: 35: for id in '$(seq 0 10)'
run: 36: run_osd test-erasure-code 6
run_osd: 20: local dir=test-erasure-code
run_osd: 21: shift
run_osd: 22: local id=6
run_osd: 23: shift
run_osd: 24: local osd_data=test-erasure-code/6
run_osd: 26: local ceph_disk_args
run_osd: 27: ceph_disk_args+=' --statedir=test-erasure-code'
run_osd: 28: ceph_disk_args+=' --sysconfdir=test-erasure-code'
run_osd: 29: ceph_disk_args+=' --prepend-to-path='
run_osd: 30: ceph_disk_args+=' --verbose'
run_osd: 32: touch test-erasure-code/ceph.conf
run_osd: 34: mkdir -p test-erasure-code/6
run_osd: 35: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose prepare test-erasure-code/6
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=osd_journal_size
DEBUG:ceph-disk:Preparing osd data dir test-erasure-code/6
run_osd: 38: local 'ceph_args=--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101 '
run_osd: 39: ceph_args+=' --osd-backfill-full-ratio=.99'
run_osd: 40: ceph_args+=' --osd-failsafe-full-ratio=.99'
run_osd: 41: ceph_args+=' --osd-journal-size=100'
run_osd: 42: ceph_args+=' --osd-data=test-erasure-code/6'
run_osd: 43: ceph_args+=' --chdir='
run_osd: 44: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
run_osd: 45: ceph_args+=' --run-dir=test-erasure-code'
run_osd: 46: ceph_args+=' --debug-osd=20'
run_osd: 47: ceph_args+=' --log-file=test-erasure-code/osd-$id.log'
run_osd: 48: ceph_args+=' --pid-file=test-erasure-code/osd-$id.pid'
run_osd: 49: ceph_args+=' '
run_osd: 50: ceph_args+=
run_osd: 51: mkdir -p test-erasure-code/6
run_osd: 52: CEPH_ARGS='--fsid=5171f315-c9e1-4ba2-b7bd-7f9915a1dafa --auth-supported=none --mon-host=127.0.0.1:7101  --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=test-erasure-code/6 --chdir= --osd-pool-default-erasure-code-directory=.libs --run-dir=test-erasure-code --debug-osd=20 --log-file=test-erasure-code/osd-$id.log --pid-file=test-erasure-code/osd-$id.pid '
run_osd: 52: ./ceph-disk --statedir=test-erasure-code --sysconfdir=test-erasure-code --prepend-to-path= --verbose activate --mark-init=none test-erasure-code/6
DEBUG:ceph-disk:Cluster uuid is 5171f315-c9e1-4ba2-b7bd-7f9915a1dafa
INFO:ceph-disk:Running command: ceph-osd --cluster=ceph --show-config-value=fsid
DEBUG:ceph-disk:Cluster name is ceph
DEBUG:ceph-disk:OSD uuid is ceda34a6-d76b-46a9-a4f3-f5dc179374bd
DEBUG:ceph-disk:Allocating OSD id...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring osd create --concise ceda34a6-d76b-46a9-a4f3-f5dc179374bd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
DEBUG:ceph-disk:OSD id is 6
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring mon getmap -o test-erasure-code/6/activate.monmap
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
INFO:ceph-disk:Running command: ceph-osd --cluster ceph --mkfs --mkkey -i 6 --monmap test-erasure-code/6/activate.monmap --osd-data test-erasure-code/6 --osd-journal test-erasure-code/6/journal --osd-uuid ceda34a6-d76b-46a9-a4f3-f5dc179374bd --keyring test-erasure-code/6/keyring
2014-10-19 11:20:58.460171 7f7937a7e840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:58.524300 7f7937a7e840 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-10-19 11:20:58.526488 7f7937a7e840 -1 filestore(test-erasure-code/6) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
Traceback (most recent call last):
  File "./ceph-disk", line 2792, in <module>
    main()
  File "./ceph-disk", line 2770, in main
    args.func(args)
  File "./ceph-disk", line 2011, in main_activate
    init=args.mark_init,
  File "./ceph-disk", line 1841, in activate_dir
    (osd_id, cluster) = activate(path, activate_key_template, init)
  File "./ceph-disk", line 1943, in activate
    keyring=keyring,
  File "./ceph-disk", line 1573, in mkfs
    '--keyring', os.path.join(path, 'keyring'),
  File "./ceph-disk", line 316, in command_check_call
    return subprocess.check_call(arguments)
  File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '6', '--monmap', 'test-erasure-code/6/activate.monmap', '--osd-data', 'test-erasure-code/6', '--osd-journal', 'test-erasure-code/6/journal', '--osd-uuid', 'ceda34a6-d76b-46a9-a4f3-f5dc179374bd', '--keyring', 'test-erasure-code/6/keyring']' returned non-zero exit status -9
run_osd: 55: return 1
run: 36: return 1
main: 113: code=1
main: 115: teardown test-erasure-code
teardown: 25: local dir=test-erasure-code


Files

log.gz (422 KB): mon log, Loïc Dachary, 10/19/2014 03:02 PM
second.log.gz (4.31 KB): failure log, Loïc Dachary, 10/19/2014 03:12 PM
#1

Updated by Loïc Dachary over 9 years ago

It was reproduced with a change to the script to keep the logs.

#2

Updated by Loïc Dachary over 9 years ago

The error in the run matching the attached mon log is different: this time it is ceph auth add that exits with -9, not ceph-osd mkfs.

2014-10-19 14:25:47.767003 7fddf713a840 -1 auth: error reading file: test-erasure-code/6/keyring: can't open test-erasure-code/6/keyring: (2) No such file or directory
2014-10-19 14:25:47.767114 7fddf713a840 -1 created new key in keyring test-erasure-code/6/keyring
DEBUG:ceph-disk:Authorizing OSD key...
INFO:ceph-disk:Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring test-erasure-code/bootstrap-osd/ceph.keyring auth add osd.6 -i test-erasure-code/6/keyring osd allow * mon allow profile osd
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Traceback (most recent call last):
  File "./ceph-disk", line 2792, in <module>
    main()
  File "./ceph-disk", line 2770, in main
    args.func(args)
  File "./ceph-disk", line 2011, in main_activate
    init=args.mark_init,
  File "./ceph-disk", line 1841, in activate_dir
    (osd_id, cluster) = activate(path, activate_key_template, init)
  File "./ceph-disk", line 1979, in activate
    keyring=keyring,
  File "./ceph-disk", line 1597, in auth_key
    'mon', 'allow profile osd',
  File "./ceph-disk", line 316, in command_check_call
    return subprocess.check_call(arguments)
  File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', 'test-erasure-code/bootstrap-osd/ceph.keyring', 'auth', 'add', 'osd.6', '-i', 'test-erasure-code/6/keyring', 'osd', 'allow *', 'mon', 'allow profile osd']' returned non-zero exit status -9

#3

Updated by Loïc Dachary over 9 years ago

  • Subject changed from ceph-osd mkfs exit -9 / EBADF to ceph-osd mkfs or ceph auth add : exit -9
#4

Updated by Loïc Dachary over 9 years ago

Maybe it runs out of file descriptors because of the parallel (//) test runs. Since the erasure-code test is the one using the most daemons, that would explain why it hits the default limit of 1024 file descriptors on f20. Trying again with:

$ cat /etc/security/limits.d/85-nofile.conf
# so that running ceph tests in // do not run out of file descriptors
*          soft    nofile    16384
*          hard    nofile    16384
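
A minimal sketch, assuming the same Python 2.7 used by ceph-disk, to confirm that a freshly started test process actually inherits the raised limit (16384 mirrors the limits.d entry above):

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print('open files: soft=%d hard=%d' % (soft, hard))
if soft < 16384:
    # limits.d entries are applied at login, so an existing session may
    # still report the old 1024 limit until a fresh login.
    print('nofile limit not raised yet for this session')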

#5

Updated by Samuel Just over 9 years ago

  • Assignee set to Loïc Dachary
#6

Updated by Loïc Dachary over 9 years ago

  • Status changed from Need More Info to Won't Fix

The failure did not show up again after the fix. The tests use far more than the default limit of 1024 file descriptors. Marking this Won't Fix because it is not a Ceph bug: it is an environmental problem.
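
For completeness, a hedged sketch (assuming a Linux /proc filesystem) of how one could measure how many descriptors the test daemons hold, by counting the open fds of every running ceph-mon and ceph-osd:

import os

def open_fds(pid):
    try:
        return len(os.listdir('/proc/%s/fd' % pid))
    except OSError:
        return 0  # process exited between listing and reading

total = 0
for pid in os.listdir('/proc'):
    if not pid.isdigit():
        continue
    try:
        with open('/proc/%s/comm' % pid) as f:
            name = f.read().strip()
    except IOError:
        continue
    if name in ('ceph-mon', 'ceph-osd'):
        n = open_fds(pid)
        total += n
        print('%s %s: %d open fds' % (name, pid, n))
print('total: %d (default soft limit on f20 is 1024)' % total)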
