Bug #12457 (closed)

tests: run_osd sometimes fails

Added by Loïc Dachary almost 9 years ago. Updated over 8 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

In roughly one out of 20 runs of make check, run_osd fails: the osd is never reported up by ceph osd dump. This is a transient error that does not appear to be related to low resources on the machine. There are no details about why the osd failed; at least the tail of the osd log and a ceph report would be needed to figure that out.
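
A minimal sketch of the missing diagnostics capture, assuming it were bolted on at the point where run_osd currently gives up (the wrapper name below is hypothetical and not part of ceph-helpers.sh; the log path follows the --log-file=$dir/$name.log convention visible in the trace below):

# Hypothetical wrapper around run_osd: on failure, dump the diagnostics
# this ticket says are missing (tail of the osd log plus a ceph report).
run_osd_with_diagnostics() {
    local dir=$1
    local id=$2

    if ! run_osd "$dir" "$id" ; then
        echo "run_osd failed for osd.$id, collecting diagnostics" >&2
        # The osd log lands in $dir/osd.$id.log because the daemon is
        # started with --log-file=$dir/$name.log (see the trace below).
        tail -n 200 "$dir/osd.$id.log" >&2 || true
        # Full cluster report, for post-mortem analysis.
        ceph report > "$dir/report-osd.$id.json" 2>/dev/null || true
        return 1
    fi
}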

./test/mon/osd-crush.sh:202: TEST_add_ruleset_failed:  grep 'Error ENOSPC'
Error ENOSPC: failed to add rule 256 because (28) No space left on device
./test/mon/osd-crush.sh:33: run:  teardown testdir/osd-crush
../qa/workunits/ceph-helpers.sh:117: teardown:  local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:118: teardown:  kill_daemons testdir/osd-crush
.../qa/workunits/ceph-helpers.sh:192: kill_daemons:  shopt -q -o xtrace
.../qa/workunits/ceph-helpers.sh:192: kill_daemons:  echo true
../qa/workunits/ceph-helpers.sh:192: kill_daemons:  local trace=true
../qa/workunits/ceph-helpers.sh:193: kill_daemons:  true
../qa/workunits/ceph-helpers.sh:193: kill_daemons:  shopt -u -o xtrace
../qa/workunits/ceph-helpers.sh:219: kill_daemons:  return 0
.../qa/workunits/ceph-helpers.sh:119: teardown:  stat -f -c %T .
../qa/workunits/ceph-helpers.sh:119: teardown:  '[' ext2/ext3 == btrfs ']'
../qa/workunits/ceph-helpers.sh:122: teardown:  rm -fr testdir/osd-crush
./test/mon/osd-crush.sh:30: run:  for func in '$funcs'
./test/mon/osd-crush.sh:31: run:  setup testdir/osd-crush
../qa/workunits/ceph-helpers.sh:92: setup:  local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:93: setup:  teardown testdir/osd-crush
../qa/workunits/ceph-helpers.sh:117: teardown:  local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:118: teardown:  kill_daemons testdir/osd-crush
.../qa/workunits/ceph-helpers.sh:192: kill_daemons:  shopt -q -o xtrace
.../qa/workunits/ceph-helpers.sh:192: kill_daemons:  echo true
../qa/workunits/ceph-helpers.sh:192: kill_daemons:  local trace=true
../qa/workunits/ceph-helpers.sh:193: kill_daemons:  true
../qa/workunits/ceph-helpers.sh:193: kill_daemons:  shopt -u -o xtrace
../qa/workunits/ceph-helpers.sh:219: kill_daemons:  return 0
.../qa/workunits/ceph-helpers.sh:119: teardown:  stat -f -c %T .
../qa/workunits/ceph-helpers.sh:119: teardown:  '[' ext2/ext3 == btrfs ']'
../qa/workunits/ceph-helpers.sh:122: teardown:  rm -fr testdir/osd-crush
../qa/workunits/ceph-helpers.sh:94: setup:  mkdir -p testdir/osd-crush
./test/mon/osd-crush.sh:32: run:  TEST_crush_reject_empty testdir/osd-crush
./test/mon/osd-crush.sh:220: TEST_crush_reject_empty:  local dir=testdir/osd-crush
./test/mon/osd-crush.sh:221: TEST_crush_reject_empty:  run_mon testdir/osd-crush a
../qa/workunits/ceph-helpers.sh:282: run_mon:  local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:283: run_mon:  shift
../qa/workunits/ceph-helpers.sh:284: run_mon:  local id=a
../qa/workunits/ceph-helpers.sh:285: run_mon:  shift
../qa/workunits/ceph-helpers.sh:286: run_mon:  local data=testdir/osd-crush/a
../qa/workunits/ceph-helpers.sh:289: run_mon:  ceph-mon --id a --mkfs --mon-data=testdir/osd-crush/a --run-dir=testdir/osd-crush
ceph-mon: mon.noname-a 127.0.0.1:7104/0 is local, renaming to mon.a
ceph-mon: set fsid to 49a65d30-da54-4f53-bbe5-0a20540fc282
ceph-mon: created monfs at testdir/osd-crush/a for mon.a
../qa/workunits/ceph-helpers.sh:296: run_mon:  ceph-mon --id a --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=testdir/osd-crush/a '--log-file=testdir/osd-crush/$name.log' '--admin-socket=testdir/osd-crush/$cluster-$name.asok' --mon-cluster-log-file=testdir/osd-crush/log --run-dir=testdir/osd-crush '--pid-file=testdir/osd-crush/$name.pid'
../qa/workunits/ceph-helpers.sh:314: run_mon:  cat
.../qa/workunits/ceph-helpers.sh:314: run_mon:  get_config mon a fsid
.../qa/workunits/ceph-helpers.sh:651: get_config:  local daemon=mon
.../qa/workunits/ceph-helpers.sh:652: get_config:  local id=a
.../qa/workunits/ceph-helpers.sh:653: get_config:  local config=fsid
.../qa/workunits/ceph-helpers.sh:655: get_config:  CEPH_ARGS=
.../qa/workunits/ceph-helpers.sh:655: get_config:  ceph --format xml daemon testdir/osd-crush/ceph-mon.a.asok config get fsid
.../qa/workunits/ceph-helpers.sh:658: get_config:  xmlstarlet sel -t -m //fsid -v . -n
.../qa/workunits/ceph-helpers.sh:314: run_mon:  get_config mon a mon_host
.../qa/workunits/ceph-helpers.sh:651: get_config:  local daemon=mon
.../qa/workunits/ceph-helpers.sh:652: get_config:  local id=a
.../qa/workunits/ceph-helpers.sh:653: get_config:  local config=mon_host
.../qa/workunits/ceph-helpers.sh:655: get_config:  CEPH_ARGS=
.../qa/workunits/ceph-helpers.sh:655: get_config:  ceph --format xml daemon testdir/osd-crush/ceph-mon.a.asok config get mon_host
.../qa/workunits/ceph-helpers.sh:658: get_config:  xmlstarlet sel -t -m //mon_host -v . -n
.../qa/workunits/ceph-helpers.sh:319: run_mon:  get_config mon a mon_initial_members
.../qa/workunits/ceph-helpers.sh:651: get_config:  local daemon=mon
.../qa/workunits/ceph-helpers.sh:652: get_config:  local id=a
.../qa/workunits/ceph-helpers.sh:653: get_config:  local config=mon_initial_members
.../qa/workunits/ceph-helpers.sh:655: get_config:  CEPH_ARGS=
.../qa/workunits/ceph-helpers.sh:655: get_config:  ceph --format xml daemon testdir/osd-crush/ceph-mon.a.asok config get mon_initial_members
.../qa/workunits/ceph-helpers.sh:658: get_config:  xmlstarlet sel -t -m //mon_initial_members -v . -n
../qa/workunits/ceph-helpers.sh:319: run_mon:  test -z ''
../qa/workunits/ceph-helpers.sh:320: run_mon:  ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'rbd' removed
../qa/workunits/ceph-helpers.sh:321: run_mon:  ceph osd pool create rbd 4
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'rbd' created
./test/mon/osd-crush.sh:223: TEST_crush_reject_empty:  run_osd testdir/osd-crush 0
../qa/workunits/ceph-helpers.sh:397: run_osd:  local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:398: run_osd:  shift
../qa/workunits/ceph-helpers.sh:399: run_osd:  local id=0
../qa/workunits/ceph-helpers.sh:400: run_osd:  shift
../qa/workunits/ceph-helpers.sh:401: run_osd:  local osd_data=testdir/osd-crush/0
../qa/workunits/ceph-helpers.sh:403: run_osd:  local ceph_disk_args
../qa/workunits/ceph-helpers.sh:404: run_osd:  ceph_disk_args+=' --statedir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:405: run_osd:  ceph_disk_args+=' --sysconfdir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:406: run_osd:  ceph_disk_args+=' --prepend-to-path='
../qa/workunits/ceph-helpers.sh:408: run_osd:  mkdir -p testdir/osd-crush/0
../qa/workunits/ceph-helpers.sh:409: run_osd:  ceph-disk --statedir=testdir/osd-crush --sysconfdir=testdir/osd-crush --prepend-to-path= prepare testdir/osd-crush/0
2015-07-24 03:28:37.827889 7fc4c79bc7c0 -1 warning: unable to create /var/run/ceph: (13) Permission denied
2015-07-24 03:28:37.841755 7ffdc34f77c0 -1 warning: unable to create /var/run/ceph: (13) Permission denied
2015-07-24 03:28:37.856222 7f24220147c0 -1 warning: unable to create /var/run/ceph: (13) Permission denied
../qa/workunits/ceph-helpers.sh:412: run_osd:  activate_osd testdir/osd-crush 0
../qa/workunits/ceph-helpers.sh:478: activate_osd:  local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:479: activate_osd:  shift
../qa/workunits/ceph-helpers.sh:480: activate_osd:  local id=0
../qa/workunits/ceph-helpers.sh:481: activate_osd:  shift
../qa/workunits/ceph-helpers.sh:482: activate_osd:  local osd_data=testdir/osd-crush/0
../qa/workunits/ceph-helpers.sh:484: activate_osd:  local ceph_disk_args
../qa/workunits/ceph-helpers.sh:485: activate_osd:  ceph_disk_args+=' --statedir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:486: activate_osd:  ceph_disk_args+=' --sysconfdir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:487: activate_osd:  ceph_disk_args+=' --prepend-to-path='
../qa/workunits/ceph-helpers.sh:489: activate_osd:  local 'ceph_args=--fsid=49a65d30-da54-4f53-bbe5-0a20540fc282 --auth-supported=none --mon-host=127.0.0.1:7104 '
../qa/workunits/ceph-helpers.sh:490: activate_osd:  ceph_args+=' --osd-backfill-full-ratio=.99'
../qa/workunits/ceph-helpers.sh:491: activate_osd:  ceph_args+=' --osd-failsafe-full-ratio=.99'
../qa/workunits/ceph-helpers.sh:492: activate_osd:  ceph_args+=' --osd-journal-size=100'
../qa/workunits/ceph-helpers.sh:493: activate_osd:  ceph_args+=' --osd-data=testdir/osd-crush/0'
../qa/workunits/ceph-helpers.sh:494: activate_osd:  ceph_args+=' --chdir='
../qa/workunits/ceph-helpers.sh:495: activate_osd:  ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
../qa/workunits/ceph-helpers.sh:496: activate_osd:  ceph_args+=' --osd-class-dir=.libs'
../qa/workunits/ceph-helpers.sh:497: activate_osd:  ceph_args+=' --run-dir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:498: activate_osd:  ceph_args+=' --debug-osd=20'
../qa/workunits/ceph-helpers.sh:499: activate_osd:  ceph_args+=' --log-file=testdir/osd-crush/$name.log'
../qa/workunits/ceph-helpers.sh:500: activate_osd:  ceph_args+=' --pid-file=testdir/osd-crush/$name.pid'
../qa/workunits/ceph-helpers.sh:501: activate_osd:  ceph_args+=' '
../qa/workunits/ceph-helpers.sh:502: activate_osd:  ceph_args+=
../qa/workunits/ceph-helpers.sh:503: activate_osd:  mkdir -p testdir/osd-crush/0
../qa/workunits/ceph-helpers.sh:504: activate_osd:  CEPH_ARGS='--fsid=49a65d30-da54-4f53-bbe5-0a20540fc282 --auth-supported=none --mon-host=127.0.0.1:7104  --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=testdir/osd-crush/0 --chdir= --osd-pool-default-erasure-code-directory=.libs --osd-class-dir=.libs --run-dir=testdir/osd-crush --debug-osd=20 --log-file=testdir/osd-crush/$name.log --pid-file=testdir/osd-crush/$name.pid  '
../qa/workunits/ceph-helpers.sh:504: activate_osd:  ceph-disk --statedir=testdir/osd-crush --sysconfdir=testdir/osd-crush --prepend-to-path= activate --mark-init=none testdir/osd-crush/0
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
2015-07-24 03:28:40.269384 7f626f3c07c0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-07-24 03:28:40.377129 7f626f3c07c0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-07-24 03:28:40.380868 7f626f3c07c0 -1 filestore(testdir/osd-crush/0) could not find -1/23c2fcde/osd_superblock/0 in index: (2) No such file or directory
2015-07-24 03:28:40.457312 7f626f3c07c0 -1 created object store testdir/osd-crush/0 journal testdir/osd-crush/0/journal for osd.0 fsid 49a65d30-da54-4f53-bbe5-0a20540fc282
2015-07-24 03:28:40.457366 7f626f3c07c0 -1 auth: error reading file: testdir/osd-crush/0/keyring: can't open testdir/osd-crush/0/keyring: (2) No such file or directory
2015-07-24 03:28:40.457499 7f626f3c07c0 -1 created new key in keyring testdir/osd-crush/0/keyring
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.0
starting osd.0 at :/0 osd_data testdir/osd-crush/0 testdir/osd-crush/0/journal
.../qa/workunits/ceph-helpers.sh:509: activate_osd:  cat testdir/osd-crush/0/whoami
../qa/workunits/ceph-helpers.sh:509: activate_osd:  '[' 0 = 0 ']'
../qa/workunits/ceph-helpers.sh:511: activate_osd:  ceph osd crush create-or-move 0 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map
../qa/workunits/ceph-helpers.sh:513: activate_osd:  wait_for_osd up 0
../qa/workunits/ceph-helpers.sh:548: wait_for_osd:  local state=up
../qa/workunits/ceph-helpers.sh:549: wait_for_osd:  local id=0
../qa/workunits/ceph-helpers.sh:551: wait_for_osd:  status=1
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i=0 ))
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i < 120 ))
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  ceph osd dump
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  grep 'osd.0 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
../qa/workunits/ceph-helpers.sh:554: wait_for_osd:  sleep 1
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i++ ))
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i < 120 ))
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  ceph osd dump
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  grep 'osd.0 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
../qa/workunits/ceph-helpers.sh:554: wait_for_osd:  sleep 1
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i++ ))
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i < 120 ))
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  ceph osd dump
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  grep 'osd.0 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
../qa/workunits/ceph-helpers.sh:554: wait_for_osd:  sleep 1
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i++ ))
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i < 120 ))
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  ceph osd dump
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  grep 'osd.0 up'
...
../qa/workunits/ceph-helpers.sh:554: wait_for_osd:  sleep 1
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i++ ))
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i < 120 ))
../qa/workunits/ceph-helpers.sh:560: wait_for_osd:  return 1
../qa/workunits/ceph-helpers.sh:513: activate_osd:  return 1
./test/mon/osd-crush.sh:223: TEST_crush_reject_empty:  return 1
./test/mon/osd-crush.sh:32: run:  return 1
../qa/workunits/ceph-helpers.sh:1174: main:  code=1
../qa/workunits/ceph-helpers.sh:1176: main:  teardown testdir/osd-crush
../qa/workunits/ceph-helpers.sh:117: teardown:  local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:118: teardown:  kill_daemons testdir/osd-crush
.../qa/workunits/ceph-helpers.sh:192: kill_daemons:  shopt -q -o xtrace
.../qa/workunits/ceph-helpers.sh:192: kill_daemons:  echo true
../qa/workunits/ceph-helpers.sh:192: kill_daemons:  local trace=true
../qa/workunits/ceph-helpers.sh:193: kill_daemons:  true
../qa/workunits/ceph-helpers.sh:193: kill_daemons:  shopt -u -o xtrace
../qa/workunits/ceph-helpers.sh:219: kill_daemons:  return 0
.../qa/workunits/ceph-helpers.sh:119: teardown:  stat -f -c %T .
../qa/workunits/ceph-helpers.sh:119: teardown:  '[' ext2/ext3 == btrfs ']'
../qa/workunits/ceph-helpers.sh:122: teardown:  rm -fr testdir/osd-crush
../qa/workunits/ceph-helpers.sh:1177: main:  return 1
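
For context, the polling loop visible above corresponds to a wait_for_osd helper along these lines (a reconstruction from the xtrace output, not the verbatim qa/workunits/ceph-helpers.sh code):

# Reconstructed from the xtrace: poll `ceph osd dump` once per second
# for up to 120 seconds, looking for "osd.$id $state" (e.g. "osd.0 up").
wait_for_osd() {
    local state=$1
    local id=$2

    local status=1
    for ((i=0; i < 120; i++)); do
        if ceph osd dump | grep "osd.$id $state" ; then
            status=0
            break
        fi
        sleep 1
    done
    return $status    # 1 means the osd never reached the wanted state
}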

Update #1 by Loïc Dachary, almost 9 years ago

Marked as urgent because it creates false negatives from the make check bot on a daily basis.

Update #2 by Loïc Dachary, over 8 years ago

  • Priority changed from Urgent to Normal

Has not happened in a week, back to priority normal. But it happened again at https://github.com/ceph/ceph/pull/5443#issuecomment-126879157:

...
./test/erasure-code/test-erasure-code.sh:36: run:  for id in '$(seq 0 10)'
./test/erasure-code/test-erasure-code.sh:37: run:  run_osd testdir/test-erasure-code 6
../qa/workunits/ceph-helpers.sh:397: run_osd:  local dir=testdir/test-erasure-code
../qa/workunits/ceph-helpers.sh:398: run_osd:  shift
../qa/workunits/ceph-helpers.sh:399: run_osd:  local id=6
../qa/workunits/ceph-helpers.sh:400: run_osd:  shift
../qa/workunits/ceph-helpers.sh:401: run_osd:  local osd_data=testdir/test-erasure-code/6
../qa/workunits/ceph-helpers.sh:403: run_osd:  local ceph_disk_args
../qa/workunits/ceph-helpers.sh:404: run_osd:  ceph_disk_args+=' --statedir=testdir/test-erasure-code'
../qa/workunits/ceph-helpers.sh:405: run_osd:  ceph_disk_args+=' --sysconfdir=testdir/test-erasure-code'
../qa/workunits/ceph-helpers.sh:406: run_osd:  ceph_disk_args+=' --prepend-to-path='
../qa/workunits/ceph-helpers.sh:408: run_osd:  mkdir -p testdir/test-erasure-code/6
../qa/workunits/ceph-helpers.sh:409: run_osd:  ceph-disk --statedir=testdir/test-erasure-code --sysconfdir=testdir/test-erasure-code --prepend-to-path= prepare testdir/test-erasure-code/6
2015-08-01 07:18:30.968334 7f7257cb37c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:30.968370 7f7257cb37c0 -1 warning: unable to create /var/run/ceph: (13) Permission denied
2015-08-01 07:18:30.968473 7f7257cb37c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:30.991354 7ff85cfec7c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:30.991405 7ff85cfec7c0 -1 warning: unable to create /var/run/ceph: (13) Permission denied
2015-08-01 07:18:30.991521 7ff85cfec7c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:31.007545 7f6f79b777c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:31.007588 7f6f79b777c0 -1 warning: unable to create /var/run/ceph: (13) Permission denied
2015-08-01 07:18:31.007671 7f6f79b777c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:31.346508 7f8f6763d780 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:31.498904 7f3751a78780 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:31.625340 7fe6915a7780 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:31.774234 7f1c6c232780 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:31.898699 7f0df24aa780 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:32.046964 7ff46be67780 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:32.322123 7f638fc8c780 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:32.460869 7fda6503b780 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:32.599855 7f73a8a35780 -1 WARNING: the following dangerous and experimental features are enabled: shec
../qa/workunits/ceph-helpers.sh:412: run_osd:  activate_osd testdir/test-erasure-code 6
../qa/workunits/ceph-helpers.sh:478: activate_osd:  local dir=testdir/test-erasure-code
../qa/workunits/ceph-helpers.sh:479: activate_osd:  shift
../qa/workunits/ceph-helpers.sh:480: activate_osd:  local id=6
../qa/workunits/ceph-helpers.sh:481: activate_osd:  shift
../qa/workunits/ceph-helpers.sh:482: activate_osd:  local osd_data=testdir/test-erasure-code/6
../qa/workunits/ceph-helpers.sh:484: activate_osd:  local ceph_disk_args
../qa/workunits/ceph-helpers.sh:485: activate_osd:  ceph_disk_args+=' --statedir=testdir/test-erasure-code'
../qa/workunits/ceph-helpers.sh:486: activate_osd:  ceph_disk_args+=' --sysconfdir=testdir/test-erasure-code'
../qa/workunits/ceph-helpers.sh:487: activate_osd:  ceph_disk_args+=' --prepend-to-path='
../qa/workunits/ceph-helpers.sh:489: activate_osd:  local 'ceph_args=--fsid=c5791cf6-e857-4346-b76a-e5bb6230d2eb --auth-supported=none --enable-experimental-unrecoverable-data-corrupting-features=shec --mon-host=127.0.0.1:7101 '
../qa/workunits/ceph-helpers.sh:490: activate_osd:  ceph_args+=' --osd-backfill-full-ratio=.99'
../qa/workunits/ceph-helpers.sh:491: activate_osd:  ceph_args+=' --osd-failsafe-full-ratio=.99'
../qa/workunits/ceph-helpers.sh:492: activate_osd:  ceph_args+=' --osd-journal-size=100'
../qa/workunits/ceph-helpers.sh:493: activate_osd:  ceph_args+=' --osd-data=testdir/test-erasure-code/6'
../qa/workunits/ceph-helpers.sh:494: activate_osd:  ceph_args+=' --chdir='
../qa/workunits/ceph-helpers.sh:495: activate_osd:  ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
../qa/workunits/ceph-helpers.sh:496: activate_osd:  ceph_args+=' --osd-class-dir=.libs'
../qa/workunits/ceph-helpers.sh:497: activate_osd:  ceph_args+=' --run-dir=testdir/test-erasure-code'
../qa/workunits/ceph-helpers.sh:498: activate_osd:  ceph_args+=' --debug-osd=20'
../qa/workunits/ceph-helpers.sh:499: activate_osd:  ceph_args+=' --log-file=testdir/test-erasure-code/$name.log'
../qa/workunits/ceph-helpers.sh:500: activate_osd:  ceph_args+=' --pid-file=testdir/test-erasure-code/$name.pid'
../qa/workunits/ceph-helpers.sh:501: activate_osd:  ceph_args+=' '
../qa/workunits/ceph-helpers.sh:502: activate_osd:  ceph_args+=
../qa/workunits/ceph-helpers.sh:503: activate_osd:  mkdir -p testdir/test-erasure-code/6
../qa/workunits/ceph-helpers.sh:504: activate_osd:  CEPH_ARGS='--fsid=c5791cf6-e857-4346-b76a-e5bb6230d2eb --auth-supported=none --enable-experimental-unrecoverable-data-corrupting-features=shec --mon-host=127.0.0.1:7101  --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=testdir/test-erasure-code/6 --chdir= --osd-pool-default-erasure-code-directory=.libs --osd-class-dir=.libs --run-dir=testdir/test-erasure-code --debug-osd=20 --log-file=testdir/test-erasure-code/$name.log --pid-file=testdir/test-erasure-code/$name.pid  '
../qa/workunits/ceph-helpers.sh:504: activate_osd:  ceph-disk --statedir=testdir/test-erasure-code --sysconfdir=testdir/test-erasure-code --prepend-to-path= activate --mark-init=none testdir/test-erasure-code/6
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2015-08-01 07:18:33.060009 7f57b4c81700 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:33.101813 7f57b4c81700 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:33.104832 7f57b4c81700 -1 WARNING: the following dangerous and experimental features are enabled: shec
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2015-08-01 07:18:33.600874 7ff607ba0700 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:33.639973 7ff607ba0700 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:33.643127 7ff607ba0700 -1 WARNING: the following dangerous and experimental features are enabled: shec
got monmap epoch 1
2015-08-01 07:18:33.833098 7f7e942e87c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:33.833301 7f7e942e87c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:33.865543 7f7e942e87c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:33.870815 7f7e942e87c0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-08-01 07:18:33.963415 7f7e942e87c0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-08-01 07:18:33.964090 7f7e942e87c0 -1 filestore(testdir/test-erasure-code/6) could not find -1/23c2fcde/osd_superblock/0 in index: (2) No such file or directory
2015-08-01 07:18:34.126061 7f7e942e87c0 -1 created object store testdir/test-erasure-code/6 journal testdir/test-erasure-code/6/journal for osd.6 fsid c5791cf6-e857-4346-b76a-e5bb6230d2eb
2015-08-01 07:18:34.126129 7f7e942e87c0 -1 auth: error reading file: testdir/test-erasure-code/6/keyring: can't open testdir/test-erasure-code/6/keyring: (2) No such file or directory
2015-08-01 07:18:34.126341 7f7e942e87c0 -1 created new key in keyring testdir/test-erasure-code/6/keyring
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2015-08-01 07:18:34.321720 7f72290ad700 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:34.349790 7f72290ad700 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:34.352130 7f72290ad700 -1 WARNING: the following dangerous and experimental features are enabled: shec
added key for osd.6
2015-08-01 07:18:34.573793 7fc61ea227c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:18:34.574052 7fc61ea227c0 -1 WARNING: the following dangerous and experimental features are enabled: shec
starting osd.6 at :/0 osd_data testdir/test-erasure-code/6 testdir/test-erasure-code/6/journal
.../qa/workunits/ceph-helpers.sh:509: activate_osd:  cat testdir/test-erasure-code/6/whoami
../qa/workunits/ceph-helpers.sh:509: activate_osd:  '[' 6 = 6 ']'
../qa/workunits/ceph-helpers.sh:511: activate_osd:  ceph osd crush create-or-move 6 1 root=default host=localhost
create-or-move updating item name 'osd.6' weight 1 at location {host=localhost,root=default} to crush map
../qa/workunits/ceph-helpers.sh:513: activate_osd:  wait_for_osd up 6
../qa/workunits/ceph-helpers.sh:548: wait_for_osd:  local state=up
../qa/workunits/ceph-helpers.sh:549: wait_for_osd:  local id=6
../qa/workunits/ceph-helpers.sh:551: wait_for_osd:  status=1
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i=0 ))
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i < 120 ))
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  ceph osd dump
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  grep 'osd.6 up'
...
../qa/workunits/ceph-helpers.sh:554: wait_for_osd:  sleep 1
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i++ ))
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i < 120 ))
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  ceph osd dump
../qa/workunits/ceph-helpers.sh:553: wait_for_osd:  grep 'osd.6 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2015-08-01 07:21:26.114506 7f09787bf700 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:21:26.138316 7f09787bf700 -1 WARNING: the following dangerous and experimental features are enabled: shec
2015-08-01 07:21:26.140419 7f09787bf700 -1 WARNING: the following dangerous and experimental features are enabled: shec
../qa/workunits/ceph-helpers.sh:554: wait_for_osd:  sleep 1
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i++ ))
../qa/workunits/ceph-helpers.sh:552: wait_for_osd:  (( i < 120 ))
../qa/workunits/ceph-helpers.sh:560: wait_for_osd:  return 1
../qa/workunits/ceph-helpers.sh:513: activate_osd:  return 1
./test/erasure-code/test-erasure-code.sh:37: run:  return 1
../qa/workunits/ceph-helpers.sh:1174: main:  code=1
../qa/workunits/ceph-helpers.sh:1176: main:  teardown testdir/test-erasure-code
../qa/workunits/ceph-helpers.sh:117: teardown:  local dir=testdir/test-erasure-code
../qa/workunits/ceph-helpers.sh:118: teardown:  kill_daemons testdir/test-erasure-code
.../qa/workunits/ceph-helpers.sh:192: kill_daemons:  shopt -q -o xtrace
.../qa/workunits/ceph-helpers.sh:192: kill_daemons:  echo true
../qa/workunits/ceph-helpers.sh:192: kill_daemons:  local trace=true
../qa/workunits/ceph-helpers.sh:193: kill_daemons:  true
../qa/workunits/ceph-helpers.sh:193: kill_daemons:  shopt -u -o xtrace
../qa/workunits/ceph-helpers.sh:219: kill_daemons:  return 0
.../qa/workunits/ceph-helpers.sh:119: teardown:  stat -f -c %T .
../qa/workunits/ceph-helpers.sh:119: teardown:  '[' ext2/ext3 == btrfs ']'
../qa/workunits/ceph-helpers.sh:122: teardown:  rm -fr testdir/test-erasure-code
../qa/workunits/ceph-helpers.sh:1177: main:  return 1
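
Both failures take the same path. Folding run_osd and activate_osd together, the sequence reconstructed from the two traces is roughly the following (a sketch under that assumption, not the verbatim helper):

# Sketch of the run_osd/activate_osd sequence seen in both traces:
# prepare the osd data dir with ceph-disk, activate it with the test
# cluster's CEPH_ARGS, place the osd in the crush map, then wait for
# it to come up.
run_osd() {
    local dir=$1
    local id=$2
    local osd_data=$dir/$id

    local ceph_disk_args
    ceph_disk_args+=" --statedir=$dir"
    ceph_disk_args+=" --sysconfdir=$dir"
    ceph_disk_args+=" --prepend-to-path="

    mkdir -p $osd_data
    ceph-disk $ceph_disk_args prepare $osd_data || return 1

    # activate_osd exports CEPH_ARGS (fsid, mon host, osd data dir,
    # --debug-osd=20, log/pid file templates, ...) before this call.
    ceph-disk $ceph_disk_args activate --mark-init=none $osd_data || return 1

    ceph osd crush create-or-move $id 1 root=default host=localhost || return 1
    wait_for_osd up $id || return 1
}
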
Update #3 by Loïc Dachary, over 8 years ago

  • Status changed from 12 to Can't reproduce

This has not happened in a long time (according to the failure logs of the make check bot).
