Bug #8264
"unknown op copy_from" error in ubuntu-2014-04-30_14:23:02-rados-dumpling-testing-basic-plana
Status: Rejected
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Logs are in http://qa-proxy.ceph.com/teuthology/ubuntu-2014-04-30_14:23:02-rados-dumpling-testing-basic-plana/224887/
2014-04-30T15:48:22.724 INFO:teuthology.task.rados.rados.0.out:[10.214.131.33]: adding op weight read -> 100
2014-04-30T15:48:22.724 INFO:teuthology.task.rados.rados.0.out:[10.214.131.33]: adding op weight write -> 100
2014-04-30T15:48:22.724 INFO:teuthology.task.rados.rados.0.out:[10.214.131.33]: adding op weight delete -> 50
2014-04-30T15:48:22.725 INFO:teuthology.task.rados.rados.0.err:[10.214.131.33]: unknown op copy_from
2014-04-30T15:48:22.747 ERROR:teuthology.run_tasks:Manager failed: rados
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-master/teuthology/run_tasks.py", line 92, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/teuthology-master/teuthology/task/rados.py", line 170, in task
    running.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 331, in get
    raise self._exception
CommandFailedError: Command failed on 10.214.131.33 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write 100 --op delete 50 --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op copy_from 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --pool unique_pool_0'
2014-04-30T15:48:22.810 DEBUG:teuthology.run_tasks:Unwinding manager mon_thrash
2014-04-30T15:48:22.810 INFO:teuthology.task.mon_thrash:joining mon_thrasher
2014-04-30T15:48:41.931 INFO:teuthology.task.mon_thrash.mon_thrasher:killing mon.c
2014-04-30T15:48:41.931 INFO:teuthology.task.mon_thrash.mon_thrasher:reviving mon.c
2014-04-30T15:48:41.932 INFO:teuthology.task.ceph.mon.c:Restarting
archive_path: /var/lib/teuthworker/archive/ubuntu-2014-04-30_14:23:02-rados-dumpling-testing-basic-plana/224887
description: rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml msgr-failures/few.yaml
  thrashers/one.yaml workloads/snaps-few-objects.yaml}
email: null
job_id: '224887'
kernel: &id001
  kdb: true
  sha1: f74d66a3ec1b62a663451083091ccb8341d721ec
last_in_suite: false
machine_type: plana
name: ubuntu-2014-04-30_14:23:02-rados-dumpling-testing-basic-plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: dumpling
  ceph:
    conf:
      global:
        ms inject socket failures: 5000
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon min osdmap epochs: 25
        paxos service trim min: 5
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
        osd sloppy crc: true
    fs: xfs
    log-whitelist:
    - slow request
    sha1: 5a6b35160417423db7c6ff892627f084ab610dfe
  ceph-deploy:
    branch:
      dev: dumpling
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 5a6b35160417423db7c6ff892627f084ab610dfe
  s3tests:
    branch: master
  workunit:
    sha1: 5a6b35160417423db7c6ff892627f084ab610dfe
owner: scheduled_ubuntu@yw
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mds.a
  - osd.3
  - osd.4
  - osd.5
  - client.0
targets:
  ubuntu@plana07.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCq+3SwRGoWpVwyqApVRi3WdgCS7y8P0cTFmPKGBuyQ+83S7pBqZXam49inu9GuU/o1c6XdhjX180KAjQGpFP/bnm1ktuIZ16j0Ro+Ib+Tf43b9Y4rv20jEBlTvJurLwy7/RYfi+Na3E2wWx2VPLxdww4wuRteM0/QvTs6/kfDtwrZclLrs2EaQxidFi+cyB696/AJ4T5uv3xSKHiSL9F2arLYL/i2PqpuX3hqElqBCi+zRkhMtcYWT3wG+ojU1/nywZJSo8tGfDE28b4C0gSXUoZftz0nk/hxLTJfOQ2Ge3Sibt4AfaMBn8+x6BOjxXkbfcBzp+/s6roCtkR9WEwGx
  ubuntu@plana08.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmrPbNrKBRrF2sFfhWGBXBrxP1c9wi06uF6USYU0ubQXxm+xTpudp52IT+VnIL12Lnkg/552C6VJEQhZbWyH1Y/0Udx6lkzW+jedLgQVjeK8gPVxRl3/xtP0C5b9Spao/iX8EiBZ1ijq+CIb6hAej90nEUrfh8dBrpSwX3d2b5ECpkjypoboF7OOWYOEOtUrxVxXzTC6VysQeVoJh15u3lMa1otYbOvlFvdeFE5fQqfg7yPsQqX48CptUT3H6UZXMfpXY5axu69Wqhpj4wQdAFGWW6oZtGeRYyUyowK+aFJRWQKHXIZGi4KOvH8Bgf87u+BOg/t3bFJGuUUZYokOPJ
tasks:
- internal.lock_machines:
  - 2
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install: null
- ceph: null
- mon_thrash:
    revive_delay: 20
    thrash_delay: 1
- rados:
    clients:
    - client.0
    objects: 50
    op_weights:
      copy_from: 50
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: master
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.11494
description: rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml msgr-failures/few.yaml
  thrashers/one.yaml workloads/snaps-few-objects.yaml}
duration: 342.0721859931946
failure_reason: 'Command failed on 10.214.131.33 with status 1: ''CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write 100 --op delete 50 --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op copy_from 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --pool unique_pool_0'''
flavor: basic
mon.a-kernel-sha1: f74d66a3ec1b62a663451083091ccb8341d721ec
mon.b-kernel-sha1: f74d66a3ec1b62a663451083091ccb8341d721ec
owner: scheduled_ubuntu@yw
success: false
History
#1 Updated by Sage Weil almost 10 years ago
- Status changed from New to Rejected
If you are seeing the copy-from failure, it is because you are using the master/firefly branch of teuthology or ceph-qa-suite against dumpling code (which does not support copy-from). You probably need to pass schedule_suite.sh the argument that specifies the teuthology branch (dumpling), and make sure that your ceph-qa-suite checkout is also on the latest dumpling branch.
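For reference, a minimal sketch of what that looks like; the checkout paths and the schedule_suite.sh invocation shown in the comments are illustrative assumptions, so check the script's usage text for the exact positional arguments:

# Assumption: local checkouts of ceph-qa-suite and teuthology live under ~/src
cd ~/src/ceph-qa-suite && git checkout dumpling && git pull   # suite definitions must match the branch under test
cd ~/src/teuthology && git checkout dumpling && git pull      # dumpling teuthology does not schedule copy_from ops
# When scheduling, pass "dumpling" for the teuthology-branch argument to schedule_suite.sh,
# e.g. something along the lines of (argument order is an assumption, see the script's usage):
#   ./schedule_suite.sh rados dumpling testing you@example.com basic dumpling plana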