Bug #8318 (closed): "rbd: create error" in upgrade:dumpling-dumpling-testing-basic-plana suite
Status: Can't reproduce
Priority: Urgent
Assignee: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
2014-05-08T10:11:20.219 INFO:teuthology.task.workunit.client.0.out:[10.214.131.33]: zeros export to sparse file
2014-05-08T10:11:20.219 INFO:teuthology.task.workunit.client.0.err:[10.214.131.33]: + echo zeros export to sparse file
2014-05-08T10:11:20.219 INFO:teuthology.task.workunit.client.0.err:[10.214.131.33]: + rbd create sparse --size 4
2014-05-08T10:11:20.241 INFO:teuthology.task.workunit.client.0.err:[10.214.131.33]: rbd: create error: (38) Function not implemented
2014-05-08 10:11:20.241248 7fb52e426780 -1 librbd: librbd does not support requested features.
2014-05-08T10:11:20.242 INFO:teuthology.task.workunit.client.0.err:[10.214.131.33]:
2014-05-08T10:11:20.243 INFO:teuthology.task.workunit:Stopping rbd/import_export.sh on client.0...
2014-05-08T10:11:20.243 DEBUG:teuthology.orchestra.run:Running [10.214.131.33]: 'rm -rf -- /home/ubuntu/cephtest/workunits.list /home/ubuntu/cephtest/workunit.client.0'
2014-05-08T10:11:20.252 ERROR:teuthology.run_tasks:Saw exception from tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 25, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 14, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/teuthology-dumpling/teuthology/task/parallel.py", line 41, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/teuthology-dumpling/teuthology/parallel.py", line 88, in __exit__
    raise
CommandFailedError: Command failed on 10.214.131.33 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e685c68aa6a500aa7fa433cd9b8246f70c5383e TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/home/ubuntu/cephtest/binary/usr/local/lib/python2.7/dist-packages:/home/ubuntu/cephtest/binary/usr/local/lib/python2.6/dist-packages" RBD_CREATE_ARGS=--new-format /home/ubuntu/cephtest/adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
2014-05-08T10:11:20.332 DEBUG:teuthology.run_tasks:Unwinding manager <contextlib.GeneratorContextManager object at 0x2d4f090>
2014-05-08T10:11:20.333 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-dumpling/teuthology/contextutil.py", line 27, in nested
    yield vars
  File "/home/teuthworker/teuthology-dumpling/teuthology/task/ceph.py", line 1168, in task
    yield
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 25, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 14, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/teuthology-dumpling/teuthology/task/parallel.py", line 41, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/teuthology-dumpling/teuthology/parallel.py", line 88, in __exit__
    raise
CommandFailedError: Command failed on 10.214.131.33 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e685c68aa6a500aa7fa433cd9b8246f70c5383e TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/home/ubuntu/cephtest/binary/usr/local/lib/python2.7/dist-packages:/home/ubuntu/cephtest/binary/usr/local/lib/python2.6/dist-packages" RBD_CREATE_ARGS=--new-format /home/ubuntu/cephtest/adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
2014-05-08T10:11:20.334 INFO:teuthology.misc:Shutting down mds daemons...
archive_path: /var/lib/teuthworker/archive/teuthology-2014-05-07_19:15:07-upgrade:dumpling-dumpling-testing-basic-plana/242025
branch: dumpling
description: upgrade/dumpling/rbd/{0-cluster/start.yaml 1-dumpling-install/v0.67.5.yaml
  2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/osdthrash.yaml}
email: null
job_id: '242025'
kernel:
  kdb: true
  sha1: d35a5d13758c7a7aeb19dbd17e51b4c9a712ca2c
last_in_suite: false
machine_type: plana
name: teuthology-2014-05-07_19:15:07-upgrade:dumpling-dumpling-testing-basic-plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: dumpling
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: xfs
    log-whitelist:
    - slow request
    - scrub
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: 0e685c68aa6a500aa7fa433cd9b8246f70c5383e
  ceph-deploy:
    branch:
      dev: dumpling
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 0e685c68aa6a500aa7fa433cd9b8246f70c5383e
  s3tests:
    branch: dumpling
  workunit:
    sha1: 0e685c68aa6a500aa7fa433cd9b8246f70c5383e
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mon.c
  - osd.3
  - osd.4
  - osd.5
  - client.0
suite: upgrade:dumpling
tasks:
- chef: null
- clock.check: null
- install:
    tag: v0.67.5
- ceph: null
- parallel:
  - workload
  - upgrade-sequence
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    timeout: 1200
- workunit:
    clients:
      client.0:
      - rbd/test_lock_fence.sh
teuthology_branch: dumpling
upgrade-sequence:
  sequential:
  - install.upgrade:
      all:
        branch: dumpling
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.19265
workload:
  sequential:
  - workunit:
      clients:
        client.0:
        - rbd/import_export.sh
      env:
        RBD_CREATE_ARGS: --new-format
  - workunit:
      clients:
        client.0:
        - cls/test_cls_rbd.sh
description: upgrade/dumpling/rbd/{0-cluster/start.yaml 1-dumpling-install/v0.67.5.yaml
  2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/osdthrash.yaml}
duration: 215.8999218940735
failure_reason: 'Command failed on 10.214.131.33 with status 1: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e685c68aa6a500aa7fa433cd9b8246f70c5383e TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/home/ubuntu/cephtest/binary/usr/local/lib/python2.7/dist-packages:/home/ubuntu/cephtest/binary/usr/local/lib/python2.6/dist-packages" RBD_CREATE_ARGS=--new-format /home/ubuntu/cephtest/adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'''
flavor: basic
mon.a-kernel-sha1: d35a5d13758c7a7aeb19dbd17e51b4c9a712ca2c
mon.b-kernel-sha1: d35a5d13758c7a7aeb19dbd17e51b4c9a712ca2c
owner: scheduled_teuthology@teuthology
success: false
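The failing step is narrow: the import_export.sh workunit runs `rbd create` with RBD_CREATE_ARGS=--new-format, which requests a format-2 image carrying feature bits, and the error text pairs "(38) Function not implemented" with "librbd does not support requested features". A minimal sketch of a feature-bit check of that shape (illustrative only; the constants and function below are hypothetical, not the real librbd source):

```python
import errno

# Hypothetical feature bit and support mask -- illustrative, not the real
# librbd constants.
RBD_FEATURE_LAYERING = 1 << 0
SUPPORTED_FEATURES = 0  # an old client that supports no format-2 features

def create_image(requested_features: int) -> int:
    """Mimic a C-style librbd return: 0 on success, negative errno on failure."""
    if requested_features & ~SUPPORTED_FEATURES:
        # "librbd does not support requested features."
        return -errno.ENOSYS
    return 0

rc = create_image(RBD_FEATURE_LAYERING)
assert rc == -errno.ENOSYS  # the CLI reports this as "(38) Function not implemented"
```

Under this reading, the failure is a client/cluster feature mismatch surfaced by the test environment rather than a data-path bug.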
Updated by Sage Weil almost 10 years ago
- Project changed from Ceph to rbd
- Priority changed from Normal to Urgent
Is this a bug in the test? ENOSYS, probably using rbd_create3 or something?
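For what it's worth, error code 38 in the log does map to ENOSYS on Linux, consistent with the guess above; a quick check:

```python
import errno
import os

# On Linux, errno 38 is ENOSYS, matching the "(38) Function not implemented"
# text in the rbd output above.
print(errno.ENOSYS)               # 38
print(os.strerror(errno.ENOSYS))  # Function not implemented
```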
Updated by Sage Weil over 9 years ago
- Status changed from New to Can't reproduce