Bug #8019 (closed)
os/JournalingObjectStore.cc: 121: FAILED assert(op > committed_seq) on wheezy
Status: Resolved
Priority: Urgent
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
2014-04-07T08:05:44.519 ERROR:teuthology.run_tasks:Manager failed: rados
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 92, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/rados.py", line 170, in task
    running.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 308, in get
    raise self._exception
CommandFailedError: Command failed on 10.214.138.149 with status 22: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool create unique_pool_0 16'
2014-04-07T08:05:44.573 DEBUG:teuthology.run_tasks:Unwinding manager ceph.restart
2014-04-07T08:05:44.573 DEBUG:teuthology.run_tasks:Unwinding manager thrashosds
2014-04-07T08:05:44.574 INFO:teuthology.task.thrashosds:joining thrashosds
2014-04-07T08:05:44.574 ERROR:teuthology.run_tasks:Manager failed: thrashosds
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 92, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/teuthworker/teuthology-firefly/teuthology/task/thrashosds.py", line 172, in task
    thrash_proc.do_join()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph_manager.py", line 153, in do_join
    self.thread.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 308, in get
    raise self._exception
Exception: timed out waiting for admin_socket to appear after osd.0 restart
2014-04-07T08:05:44.624 DEBUG:teuthology.run_tasks:Unwinding manager ceph.restart
2014-04-07T08:05:44.625 DEBUG:teuthology.run_tasks:Unwinding manager install.upgrade
2014-04-07T08:05:44.625 DEBUG:teuthology.run_tasks:Unwinding manager ceph
2014-04-07T08:05:44.625 DEBUG:teuthology.orchestra.run:Running [10.214.138.133]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-04-07T08:05:45.929 INFO:teuthology.orchestra.run.err:[10.214.138.133]: dumped all in format json
2014-04-07T08:05:46.961 INFO:teuthology.task.ceph:Scrubbing osd osd.0
2014-04-07T08:05:46.961 DEBUG:teuthology.orchestra.run:Running [10.214.138.133]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd scrub osd.0'
2014-04-07T08:05:47.126 INFO:teuthology.orchestra.run.err:[10.214.138.133]: Error EAGAIN: osd.0 is not up
2014-04-07T08:05:47.133 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/contextutil.py", line 29, in nested
    yield vars
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph.py", line 1458, in task
    osd_scrub_pgs(ctx, config)
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph.py", line 1090, in osd_scrub_pgs
    'ceph', 'osd', 'scrub', role])
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 330, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 326, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.138.133 with status 11: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd scrub osd.0'
2014-04-07T08:05:48.650 INFO:teuthology.misc:Shutting down mds daemons...
2014-04-07T08:05:48.650 DEBUG:teuthology.task.ceph.mds.a:waiting for process to exit
2014-04-07T08:05:49.104 INFO:teuthology.task.ceph.mds.a:Stopped
2014-04-07T08:05:49.104 INFO:teuthology.misc:Shutting down osd daemons...
2014-04-07T08:05:49.104 DEBUG:teuthology.task.ceph.osd.1:waiting for process to exit
2014-04-07T08:05:49.125 INFO:teuthology.task.ceph.osd.1:Stopped
2014-04-07T08:05:49.125 DEBUG:teuthology.task.ceph.osd.0:waiting for process to exit
2014-04-07T08:05:49.125 ERROR:teuthology.misc:Saw exception from osd.0
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/misc.py", line 1128, in stop_daemons_of_type
    daemon.stop()
  File "/home/teuthworker/teuthology-firefly/teuthology/task/ceph.py", line 57, in stop
    run.wait([self.proc])
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 356, in wait
    proc.exitstatus.get()
  File "/usr/lib/python2.7/dist-packages/gevent/event.py", line 207, in get
    raise self._exception
CommandFailedError: Command failed on 10.214.138.149 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 0'
2014-04-07T08:05:49.149 DEBUG:teuthology.task.ceph.osd.3:waiting for process to exit
archive_path: /var/lib/teuthworker/archive/teuthology-2014-04-06_22:35:23-upgrade:dumpling-x:stress-split-firefly-distro-basic-vps/175603
description: upgrade/dumpling-x/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/snaps-few-objects.yaml 6-next-mon/monb.yaml 7-workload/radosbench.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/debian_7.0.yaml}
email: null
job_id: '175603'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: vps
name: teuthology-2014-04-06_22:35:23-upgrade:dumpling-x:stress-split-firefly-distro-basic-vps
nuke-on-error: true
os_type: debian
os_version: '7.0'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: 4aef403dbc2ba3dd572d13c43b5192f04941dc07
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 4aef403dbc2ba3dd572d13c43b5192f04941dc07
  s3tests:
    branch: master
  workunit:
    sha1: 4aef403dbc2ba3dd572d13c43b5192f04941dc07
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mon.b
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - osd.3
  - osd.4
  - osd.5
  - mon.c
- - client.0
targets:
  ubuntu@vpm080.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDd8LavsXqSh7OetjhEoxqVGyEC5XvY1uYkbjtwXjJpbnFDKZy9F9t3B9vW7/F0Sw9yMgIRDcDIX1L2K3FX0wbslT3U4mZQ3GppHAZQW9Npn/+xq2V9ozDtl4n7LsFH4nTNifXrLqNxc1CODk705mXOMtRpVugHgh0gV7RLspwtkdz1QH1XAKwPb6Tzdn0NkuQcpI6tudwABdrG385xu0Gp7+BQhbxCKqKaZmx8EqhlRmRLp7aEcAgD+r0LO//0fj/YIr9hSIzryYD74YsxZ1aAMRe64rSmPiYC2lG0vcex00NI6ZOFEbt6im1hyeZ/eUJwUUDfWZ2QELwL2jUV8Lyf
  ubuntu@vpm087.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDaS7BFDTzRt7XOTU0cN0MHkMQNW3Us/2C8ySIPPrrp4UhY0VcmWSVLm2Dwvs1eN9l7b9SiwMEYlZwvnzDjlSL+7xBY5O+IVhpttxLOW2lF5Hm2Q/e6utPhpHd8eQUfamX6VvXLFK6/dM/nRWUIP+Z1d5ofVsHoEjQy808GhXJleEPoUyGnD21r4Jc5+eWNrjZoWbqXGmYyXyOuynT3LOE3fPB8IgfuO2HtvFWsUkyCSDjxAZ3ecw6QtW7I71imEsgY2QjPDQshLQdE5rCDyOlBssKZxRCO+RCPqIDX2KAwlpmPfyETTrMV47itJVORIzT68pW4Spq2lG5U28hloc95
  ubuntu@vpm088.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDs8UIsFtgkYTxDLQ+BjWQuEmWJ27N2facdHWANAdhcz+WYfr/dSGR81b1KuyGelfZLmcLum6g5iQ4lCb6dTrNyjCYEozIXM3WcO4V/8HJ1JPDUKHn7WKeE/OX37tUIQjg/tZey2Z1zM1cnf4XpqLQAeDINlEShZWgUI6g3c2oNjsVnkKJa0V2I81YdZH8w61A8dutO09KtEJNAakjxWtMxPhBJZ6Iwq1Kv73shhqpGdeXDXlp6jwHMs8/5Dtya1WWEkanN2IoWPvCyhvY3ndtJVa4I31KymgJFZQEeMKmemC8rcCuEt2UqxZo3idshhMxin9AkAnMzu3ICEJvJmprr
tasks:
- internal.lock_machines:
  - 3
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- install.upgrade:
    osd.0: null
- ceph.restart:
    daemons:
    - osd.0
    - osd.1
    - osd.2
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    thrash_primary_affinity: false
    timeout: 1200
- ceph.restart:
    daemons:
    - mon.a
    wait-for-healthy: false
    wait-for-osds-up: true
- rados:
    clients:
    - client.0
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
- ceph.restart:
    daemons:
    - mon.b
    wait-for-healthy: false
    wait-for-osds-up: true
- radosbench:
    clients:
    - client.0
    time: 1800
- install.upgrade:
    mon.c: null
- ceph.restart:
    daemons:
    - mon.c
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum:
  - a
  - b
  - c
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test-upgrade-firefly.sh
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rbd/test_librbd_python.sh
- rgw:
    client.0:
      idle_timeout: 120
- swift:
    client.0:
      rgw_server: client.0
- rados:
    clients:
    - client.0
    objects: 500
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: firefly
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.17017
description: upgrade/dumpling-x/stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/snaps-few-objects.yaml 6-next-mon/monb.yaml 7-workload/radosbench.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/debian_7.0.yaml}
duration: 5868.566761016846
failure_reason: 'Command failed on 10.214.138.149 with status 22: ''adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool create unique_pool_0 16'''
flavor: basic
owner: scheduled_teuthology@teuthology
success: false
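For reference, the assertion named in the subject enforces that each journaled operation's sequence number is strictly greater than committed_seq, the sequence of the last operation known to be committed; an operation arriving at or below that watermark aborts the daemon, which is consistent with osd.0 never bringing its admin socket back after the restart above. The following is a minimal illustrative sketch of that invariant, with hypothetical types and names, not the actual JournalingObjectStore code:

// Illustrative sketch only: a journal tracks the last committed sequence
// number, and every op it applies must carry a strictly greater sequence.
#include <cassert>
#include <cstdint>
#include <initializer_list>
#include <iostream>

struct Journal {
  uint64_t committed_seq = 0;  // sequence number of the last committed op

  // Record that everything up to and including 'seq' has been committed.
  void committed_thru(uint64_t seq) {
    if (seq > committed_seq)
      committed_seq = seq;
  }

  // Apply one operation; its sequence number must advance strictly.
  void apply_op(uint64_t op) {
    assert(op > committed_seq);  // the invariant reported in this ticket
    // ... apply the transaction here ...
    committed_thru(op);
  }
};

int main() {
  Journal j;
  j.committed_thru(100);               // state recovered at daemon restart
  for (uint64_t op : {101, 102, 103})
    j.apply_op(op);                    // fine: sequence numbers keep advancing
  std::cout << "committed_seq=" << j.committed_seq << "\n";
  // j.apply_op(50);                   // would trip the assert: 50 <= 100
  return 0;
}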
Updated by Sage Weil about 10 years ago
- Subject changed from "failed: rados"...in "create unique_pool_0 16" in upgrade:dumpling-x:stress-split-firefly-distro-basic-vps to os/JournalingObjectStore.cc: 121: FAILED assert(op > committed_seq) on wheezy
- Priority changed from Normal to Urgent
- Source changed from other to Q/A
Updated by Sage Weil about 10 years ago
- Status changed from New to Fix Under Review
Updated by Sage Weil about 10 years ago
- Status changed from Fix Under Review to Resolved
Updated by Sage Weil about 10 years ago
- Status changed from Resolved to Pending Backport
Updated by Sage Weil about 10 years ago
- Status changed from Pending Backport to Resolved