Bug #12419 (closed)

TEST_crush_rule_create_erasure consistently fails on i386 builder

Added by Kefu Chai over 8 years ago. Updated over 8 years ago.

Status: Resolved
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Development
Tags: -
Backport: hammer
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

See http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-tarball-trusty-i386-basic/log.cgi?log=787fa80c2746fde44ac0583ff7995ec5be9a672d

The failure is only spotted on the i386 builder.
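
The failing step is at the very end of the trace: the test defines an lrc erasure-code profile whose ruleset-steps require three datacenters, then asserts via expect_failure that creating a pool from that profile fails with 'Error EINVAL'. On this builder the pool is created instead, so the test returns 1. A minimal sketch of that step, paraphrased from the trace below (the exact source in osd-crush.sh may differ slightly):

    ceph osd erasure-code-profile set myprofile plugin=lrc mapping=__DD__DD \
        'layers=[[ "_cDD_cDD", "" ],[ "cDDD____", "" ],[ "____cDDD", "" ],]' \
        'ruleset-steps=[ [ "choose", "datacenter", 3 ], [ "chooseleaf", "osd", 0] ]'
    # the test expects pool creation to fail with 'Error EINVAL' (the ruleset-steps
    # require three datacenters, which this single-host test cluster presumably
    # cannot provide), but on the i386 builder the pool is created successfully
    expect_failure $dir 'Error EINVAL' \
        ./ceph osd pool create mypool 1 1 erasure myprofile || return 1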

../qa/workunits/ceph-helpers.sh:94: setup: mkdir -p testdir/osd-crush
./test/mon/osd-crush.sh:32: run: TEST_crush_rule_create_erasure testdir/osd-crush
./test/mon/osd-crush.sh:86: TEST_crush_rule_create_erasure: local dir=testdir/osd-crush
./test/mon/osd-crush.sh:88: TEST_crush_rule_create_erasure: run_mon testdir/osd-crush a
../qa/workunits/ceph-helpers.sh:282: run_mon: local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:283: run_mon: shift
../qa/workunits/ceph-helpers.sh:284: run_mon: local id=a
../qa/workunits/ceph-helpers.sh:285: run_mon: shift
../qa/workunits/ceph-helpers.sh:286: run_mon: local data=testdir/osd-crush/a
../qa/workunits/ceph-helpers.sh:289: run_mon: ceph-mon --id a --mkfs --mon-data=testdir/osd-crush/a --run-dir=testdir/osd-crush
ceph-mon: mon.noname-a 127.0.0.1:7104/0 is local, renaming to mon.a
ceph-mon: set fsid to 330fc7c7-ceb9-41e5-8033-c0554ec321b4
ceph-mon: created monfs at testdir/osd-crush/a for mon.a
../qa/workunits/ceph-helpers.sh:296: run_mon: ceph-mon --id a --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --osd-pool-default-erasure-code-directory=.libs --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=testdir/osd-crush/a '--log-file=testdir/osd-crush/$name.log' '--admin-socket=testdir/osd-crush/$cluster-$name.asok' --mon-cluster-log-file=testdir/osd-crush/log --run-dir=testdir/osd-crush '--pid-file=testdir/osd-crush/$name.pid'
../qa/workunits/ceph-helpers.sh:314: run_mon: cat
.../qa/workunits/ceph-helpers.sh:314: run_mon: get_config mon a fsid
.../qa/workunits/ceph-helpers.sh:653: get_config: local daemon=mon
.../qa/workunits/ceph-helpers.sh:654: get_config: local id=a
.../qa/workunits/ceph-helpers.sh:655: get_config: local config=fsid
.../qa/workunits/ceph-helpers.sh:657: get_config: CEPH_ARGS=
.../qa/workunits/ceph-helpers.sh:657: get_config: ceph --format xml daemon testdir/osd-crush/ceph-mon.a.asok config get fsid
.../qa/workunits/ceph-helpers.sh:660: get_config: xmlstarlet sel -t -m //fsid -v . -n
.../qa/workunits/ceph-helpers.sh:314: run_mon: get_config mon a mon_host
.../qa/workunits/ceph-helpers.sh:653: get_config: local daemon=mon
.../qa/workunits/ceph-helpers.sh:654: get_config: local id=a
.../qa/workunits/ceph-helpers.sh:655: get_config: local config=mon_host
.../qa/workunits/ceph-helpers.sh:657: get_config: CEPH_ARGS=
.../qa/workunits/ceph-helpers.sh:657: get_config: ceph --format xml daemon testdir/osd-crush/ceph-mon.a.asok config get mon_host
.../qa/workunits/ceph-helpers.sh:660: get_config: xmlstarlet sel -t -m //mon_host -v . -n
.../qa/workunits/ceph-helpers.sh:319: run_mon: get_config mon a mon_initial_members
.../qa/workunits/ceph-helpers.sh:653: get_config: local daemon=mon
.../qa/workunits/ceph-helpers.sh:654: get_config: local id=a
.../qa/workunits/ceph-helpers.sh:655: get_config: local config=mon_initial_members
.../qa/workunits/ceph-helpers.sh:657: get_config: CEPH_ARGS=
.../qa/workunits/ceph-helpers.sh:657: get_config: ceph --format xml daemon testdir/osd-crush/ceph-mon.a.asok config get mon_initial_members
.../qa/workunits/ceph-helpers.sh:660: get_config: xmlstarlet sel -t -m //mon_initial_members -v . -n
../qa/workunits/ceph-helpers.sh:319: run_mon: test -z ''
../qa/workunits/ceph-helpers.sh:320: run_mon: ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'rbd' removed
../qa/workunits/ceph-helpers.sh:321: run_mon: ceph osd pool create rbd 4
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'rbd' created
./test/mon/osd-crush.sh:90: TEST_crush_rule_create_erasure: run_osd testdir/osd-crush 0
../qa/workunits/ceph-helpers.sh:397: run_osd: local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:398: run_osd: shift
../qa/workunits/ceph-helpers.sh:399: run_osd: local id=0
../qa/workunits/ceph-helpers.sh:400: run_osd: shift
../qa/workunits/ceph-helpers.sh:401: run_osd: local osd_data=testdir/osd-crush/0
../qa/workunits/ceph-helpers.sh:403: run_osd: local ceph_disk_args
../qa/workunits/ceph-helpers.sh:404: run_osd: ceph_disk_args+=' --statedir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:405: run_osd: ceph_disk_args+=' --sysconfdir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:406: run_osd: ceph_disk_args+=' --prepend-to-path='
../qa/workunits/ceph-helpers.sh:408: run_osd: mkdir -p testdir/osd-crush/0
../qa/workunits/ceph-helpers.sh:409: run_osd: ceph-disk --statedir=testdir/osd-crush --sysconfdir=testdir/osd-crush --prepend-to-path= prepare testdir/osd-crush/0
2015-07-21 11:08:12.563087 4060ef00 -1 warning: unable to create /var/run/ceph: (13) Permission denied
2015-07-21 11:08:12.578601 4060ef00 -1 warning: unable to create /var/run/ceph: (13) Permission denied
2015-07-21 11:08:12.594835 4060ef00 -1 warning: unable to create /var/run/ceph: (13) Permission denied
../qa/workunits/ceph-helpers.sh:412: run_osd: activate_osd testdir/osd-crush 0
../qa/workunits/ceph-helpers.sh:479: activate_osd: local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:480: activate_osd: shift
../qa/workunits/ceph-helpers.sh:481: activate_osd: local id=0
../qa/workunits/ceph-helpers.sh:482: activate_osd: shift
../qa/workunits/ceph-helpers.sh:483: activate_osd: local osd_data=testdir/osd-crush/0
../qa/workunits/ceph-helpers.sh:485: activate_osd: local ceph_disk_args
../qa/workunits/ceph-helpers.sh:486: activate_osd: ceph_disk_args+=' --statedir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:487: activate_osd: ceph_disk_args+=' --sysconfdir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:488: activate_osd: ceph_disk_args+=' --prepend-to-path='
../qa/workunits/ceph-helpers.sh:490: activate_osd: local 'ceph_args=--fsid=330fc7c7-ceb9-41e5-8033-c0554ec321b4 --auth-supported=none --mon-host=127.0.0.1:7104 '
../qa/workunits/ceph-helpers.sh:491: activate_osd: ceph_args+=' --osd-backfill-full-ratio=.99'
../qa/workunits/ceph-helpers.sh:492: activate_osd: ceph_args+=' --osd-failsafe-full-ratio=.99'
../qa/workunits/ceph-helpers.sh:493: activate_osd: ceph_args+=' --osd-journal-size=100'
../qa/workunits/ceph-helpers.sh:494: activate_osd: ceph_args+=' --osd-data=testdir/osd-crush/0'
../qa/workunits/ceph-helpers.sh:495: activate_osd: ceph_args+=' --chdir='
../qa/workunits/ceph-helpers.sh:496: activate_osd: ceph_args+=' --osd-pool-default-erasure-code-directory=.libs'
../qa/workunits/ceph-helpers.sh:497: activate_osd: ceph_args+=' --osd-class-dir=.libs'
../qa/workunits/ceph-helpers.sh:498: activate_osd: ceph_args+=' --run-dir=testdir/osd-crush'
../qa/workunits/ceph-helpers.sh:499: activate_osd: ceph_args+=' --debug-osd=20'
../qa/workunits/ceph-helpers.sh:500: activate_osd: ceph_args+=' --log-file=testdir/osd-crush/$name.log'
../qa/workunits/ceph-helpers.sh:501: activate_osd: ceph_args+=' --pid-file=testdir/osd-crush/$name.pid'
../qa/workunits/ceph-helpers.sh:502: activate_osd: ceph_args+=' '
../qa/workunits/ceph-helpers.sh:503: activate_osd: ceph_args+=
../qa/workunits/ceph-helpers.sh:504: activate_osd: mkdir -p testdir/osd-crush/0
../qa/workunits/ceph-helpers.sh:505: activate_osd: CEPH_ARGS='--fsid=330fc7c7-ceb9-41e5-8033-c0554ec321b4 --auth-supported=none --mon-host=127.0.0.1:7104 --osd-backfill-full-ratio=.99 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-data=testdir/osd-crush/0 --chdir= --osd-pool-default-erasure-code-directory=.libs --osd-class-dir=.libs --run-dir=testdir/osd-crush --debug-osd=20 --log-file=testdir/osd-crush/$name.log --pid-file=testdir/osd-crush/$name.pid '
../qa/workunits/ceph-helpers.sh:505: activate_osd: ceph-disk --statedir=testdir/osd-crush --sysconfdir=testdir/osd-crush --prepend-to-path= activate --mark-init=none testdir/osd-crush/0
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
got monmap epoch 1
2015-07-21 11:08:13.430236 4060ef00 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2015-07-21 11:08:13.468715 4060ef00 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2015-07-21 11:08:13.469721 4060ef00 -1 filestore(testdir/osd-crush/0) could not find -1/23c2fcde/osd_superblock/0 in index: (2) No such file or directory
2015-07-21 11:08:13.495371 4060ef00 -1 created object store testdir/osd-crush/0 journal testdir/osd-crush/0/journal for osd.0 fsid 330fc7c7-ceb9-41e5-8033-c0554ec321b4
2015-07-21 11:08:13.495511 4060ef00 -1 auth: error reading file: testdir/osd-crush/0/keyring: can't open testdir/osd-crush/0/keyring: (2) No such file or directory
2015-07-21 11:08:13.495684 4060ef00 -1 created new key in keyring testdir/osd-crush/0/keyring
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
added key for osd.0
starting osd.0 at :/0 osd_data testdir/osd-crush/0 testdir/osd-crush/0/journal
.../qa/workunits/ceph-helpers.sh:510: activate_osd: cat testdir/osd-crush/0/whoami
../qa/workunits/ceph-helpers.sh:510: activate_osd: '[' 0 = 0 ']'
../qa/workunits/ceph-helpers.sh:512: activate_osd: ceph osd crush create-or-move 0 1 root=default host=localhost
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
create-or-move updating item name 'osd.0' weight 1 at location {host=localhost,root=default} to crush map
../qa/workunits/ceph-helpers.sh:514: activate_osd: wait_for_osd up 0
../qa/workunits/ceph-helpers.sh:550: wait_for_osd: local state=up
../qa/workunits/ceph-helpers.sh:551: wait_for_osd: local id=0
../qa/workunits/ceph-helpers.sh:553: wait_for_osd: status=1
../qa/workunits/ceph-helpers.sh:554: wait_for_osd: (( i=0 ))
../qa/workunits/ceph-helpers.sh:554: wait_for_osd: (( i < 120 ))
../qa/workunits/ceph-helpers.sh:555: wait_for_osd: grep 'osd.0 up'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.0 up in weight 1 up_from 5 up_thru 6 down_at 0 last_clean_interval [0,0) 127.0.0.1:6800/12155 127.0.0.1:6801/12155 127.0.0.1:6802/12155 127.0.0.1:6803/12155 exists,up 0e2503af-b6b7-4cb8-8f70-3ec79d0172ca
../qa/workunits/ceph-helpers.sh:558: wait_for_osd: status=0
../qa/workunits/ceph-helpers.sh:559: wait_for_osd: break
../qa/workunits/ceph-helpers.sh:562: wait_for_osd: return 0
./test/mon/osd-crush.sh:92: TEST_crush_rule_create_erasure: local ruleset=ruleset3
./test/mon/osd-crush.sh:96: TEST_crush_rule_create_erasure: ./ceph osd crush rule create-erasure ruleset3
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
created ruleset ruleset3 at 1
./test/mon/osd-crush.sh:97: TEST_crush_rule_create_erasure: ./ceph osd crush rule create-erasure ruleset3
./test/mon/osd-crush.sh:98: TEST_crush_rule_create_erasure: grep 'ruleset3 already exists'
rule ruleset3 already exists
./test/mon/osd-crush.sh:99: TEST_crush_rule_create_erasure: ./ceph --format xml osd crush rule dump ruleset3
./test/mon/osd-crush.sh:100: TEST_crush_rule_create_erasure: egrep 'take[^<]+default'
./test/mon/osd-crush.sh:101: TEST_crush_rule_create_erasure: grep 'chooseleaf_indep0host'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
1ruleset31333set_chooseleaf_tries5set_choose_tries100take-1defaultchooseleaf_indep0hostemit
./test/mon/osd-crush.sh:102: TEST_crush_rule_create_erasure: ./ceph osd crush rule rm ruleset3
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
./test/mon/osd-crush.sh:103: TEST_crush_rule_create_erasure: ./ceph osd crush rule ls
./test/mon/osd-crush.sh:103: TEST_crush_rule_create_erasure: grep ruleset3
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
./test/mon/osd-crush.sh:107: TEST_crush_rule_create_erasure: ./ceph osd crush rule create-erasure ruleset3 default
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
created ruleset ruleset3 at 1
./test/mon/osd-crush.sh:108: TEST_crush_rule_create_erasure: ./ceph osd crush rule ls
./test/mon/osd-crush.sh:108: TEST_crush_rule_create_erasure: grep ruleset3
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
"ruleset3" 
./test/mon/osd-crush.sh:109: TEST_crush_rule_create_erasure: ./ceph osd crush rule rm ruleset3
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
./test/mon/osd-crush.sh:110: TEST_crush_rule_create_erasure: ./ceph osd crush rule ls
./test/mon/osd-crush.sh:110: TEST_crush_rule_create_erasure: grep ruleset3
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
./test/mon/osd-crush.sh:114: TEST_crush_rule_create_erasure: ./ceph osd erasure-code-profile rm default
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
./test/mon/osd-crush.sh:115: TEST_crush_rule_create_erasure: ./ceph osd erasure-code-profile ls
./test/mon/osd-crush.sh:115: TEST_crush_rule_create_erasure: grep default
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
./test/mon/osd-crush.sh:116: TEST_crush_rule_create_erasure: ./ceph osd crush rule create-erasure ruleset3
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
created ruleset ruleset3 at 1
./test/mon/osd-crush.sh:117: TEST_crush_rule_create_erasure: CEPH_ARGS=
./test/mon/osd-crush.sh:117: TEST_crush_rule_create_erasure: ./ceph --admin-daemon testdir/osd-crush/ceph-mon.a.asok log flush
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
{}
./test/mon/osd-crush.sh:118: TEST_crush_rule_create_erasure: grep 'profile set default' testdir/osd-crush/mon.a.log
2015-07-21 11:08:17.917500 41ffab40 20 mon.a@0(leader).osd e12 erasure code profile set default={directory=.libs,jerasure-per-chunk-alignment=false,k=2,m=1,plugin=jerasure,ruleset-failure-domain=host,ruleset-root=default,technique=reed_sol_van,w=8}
./test/mon/osd-crush.sh:119: TEST_crush_rule_create_erasure: ./ceph osd erasure-code-profile ls
./test/mon/osd-crush.sh:119: TEST_crush_rule_create_erasure: grep default
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
default
./test/mon/osd-crush.sh:120: TEST_crush_rule_create_erasure: ./ceph osd crush rule rm ruleset3
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
./test/mon/osd-crush.sh:121: TEST_crush_rule_create_erasure: ./ceph osd crush rule ls
./test/mon/osd-crush.sh:121: TEST_crush_rule_create_erasure: grep ruleset3
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
./test/mon/osd-crush.sh:126: TEST_crush_rule_create_erasure: ceph osd erasure-code-profile set myprofile plugin=lrc mapping=__DD__DD 'layers=[[ "_cDD_cDD", "" ],[ "cDDD____", "" ],[ "____cDDD", "" ],]' 'ruleset-steps=[ [ "choose", "datacenter", 3 ], [ "chooseleaf", "osd", 0] ]'
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
./test/mon/osd-crush.sh:127: TEST_crush_rule_create_erasure: expect_failure testdir/osd-crush 'Error EINVAL' ./ceph osd pool create mypool 1 1 erasure myprofile
../qa/workunits/ceph-helpers.sh:1071: expect_failure: local dir=testdir/osd-crush
../qa/workunits/ceph-helpers.sh:1072: expect_failure: shift
../qa/workunits/ceph-helpers.sh:1073: expect_failure: local 'expected=Error EINVAL'
../qa/workunits/ceph-helpers.sh:1074: expect_failure: shift
../qa/workunits/ceph-helpers.sh:1075: expect_failure: local success
../qa/workunits/ceph-helpers.sh:1077: expect_failure: ./ceph osd pool create mypool 1 1 erasure myprofile
../qa/workunits/ceph-helpers.sh:1078: expect_failure: success=true
../qa/workunits/ceph-helpers.sh:1083: expect_failure: true
../qa/workunits/ceph-helpers.sh:1084: expect_failure: cat testdir/osd-crush/out
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
pool 'mypool' created
../qa/workunits/ceph-helpers.sh:1085: expect_failure: return 1
./test/mon/osd-crush.sh:128: TEST_crush_rule_create_erasure: return 1
./test/mon/osd-crush.sh:32: run: return 1
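
For context, expect_failure (from qa/workunits/ceph-helpers.sh) returns 1 when the command either succeeds or fails without printing the expected message. An approximate reconstruction, based only on the helper lines exercised in the trace above (the actual implementation may differ in detail):

    expect_failure() {
        local dir=$1
        shift
        local expected="$1"
        shift
        local success

        # run the command, capturing all output in $dir/out
        if "$@" > $dir/out 2>&1 ; then
            success=true
        else
            success=false
        fi

        # fail if the command succeeded, or if it failed without the expected error
        if $success || ! grep --quiet "$expected" $dir/out ; then
            cat $dir/out
            return 1
        fi
        return 0
    }

Here the captured output is "pool 'mypool' created", i.e. the command succeeded, so success=true and the helper returns 1, which propagates up and fails the whole test.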

Related issues (2): 0 open, 2 closed

Related to Ceph - Bug #11814: implicit erasure code crush ruleset is not validated (Resolved, Loïc Dachary, 05/29/2015)

Copied to Ceph - Backport #13410: TEST_crush_rule_create_erasure consistently fails on i386 builder (Resolved, Loïc Dachary, 07/21/2015)
#1 - Updated by Loïc Dachary over 8 years ago
  • Assignee set to Loïc Dachary
  • Priority changed from Normal to High

#2 - Updated by Loïc Dachary over 8 years ago
  • Status changed from New to Fix Under Review

#4 - Updated by Kefu Chai over 8 years ago
  • Status changed from Fix Under Review to Resolved

#5 - Updated by Loïc Dachary over 8 years ago
  • Status changed from Resolved to Pending Backport
  • Backport set to hammer

#6 - Updated by Loïc Dachary over 8 years ago
  • Status changed from Pending Backport to Resolved