Bug #14136
"Error EINVAL: crushtool check failed with -22" in upgrade:infernalis-infernalis-distro-basic-openstack
Status: Closed
% Done: 0%
Description
Run: http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-12-20_13:18:02-upgrade:infernalis-infernalis-distro-basic-openstack/
Job: 45528
Logs: http://teuthology.ovh.sepia.ceph.com/teuthology/teuthology-2015-12-20_13:18:02-upgrade:infernalis-infernalis-distro-basic-openstack/45528/teuthology.log
2015-12-20T15:08:27.806 INFO:tasks.ceph.ceph_manager:recovered!
2015-12-20T15:08:27.806 INFO:teuthology.run_tasks:Running task sequential...
2015-12-20T15:08:27.806 INFO:teuthology.task.sequential:In sequential, running task rados...
2015-12-20T15:08:27.806 INFO:tasks.rados:Beginning rados...
2015-12-20T15:08:27.807 INFO:tasks.rados:joining rados
2015-12-20T15:08:27.807 INFO:tasks.rados:clients are ['client.1', 'client.0']
2015-12-20T15:08:27.807 INFO:tasks.rados:starting run 0 out of 1
2015-12-20T15:08:27.807 INFO:tasks.ceph.ceph_manager:creating pool_name unique_pool_1
2015-12-20T15:08:27.807 INFO:teuthology.orchestra.run.target066068:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool create unique_pool_1 16'
2015-12-20T15:08:34.045 INFO:teuthology.orchestra.run.target066068.stderr:Error EINVAL: crushtool check failed with -22: crushtool: timed out (5 sec)
2015-12-20T15:08:34.065 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 53, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 41, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/task/sequential.py", line 55, in task
    mgr.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/src/ceph-qa-suite_infernalis/tasks/rados.py", line 246, in task
    running.get()
  File "/home/teuthworker/src/teuthology_master/virtualenv/local/lib/python2.7/site-packages/gevent/greenlet.py", line 274, in get
    raise self._exception
CommandFailedError: Command failed on target066068 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool create unique_pool_1 16'
Updated by Yuri Weinstein about 8 years ago
- ceph-qa-suite rados added
Also in rados run:
http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-01-23_02:00:01-rados-infernalis-distro-basic-openstack/
Job: 8070
Logs: http://teuthology.ovh.sepia.ceph.com/teuthology/teuthology-2016-01-23_02:00:01-rados-infernalis-distro-basic-openstack/8070/teuthology.log
2016-01-23T02:28:07.693 INFO:teuthology.orchestra.run.target084035.stderr:pool 'ec-ca' created
2016-01-23T02:28:08.573 INFO:teuthology.orchestra.run.target084035:Running: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph osd pool create ec 1 1 erasure default'"
2016-01-23T02:28:21.585 INFO:teuthology.orchestra.run.target084035.stderr:Error EINVAL: crushtool check failed with -22: crushtool: timed out (5 sec)
2016-01-23T02:28:21.599 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 53, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 41, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/task/exec.py", line 54, in task
    c],
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 156, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 378, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 114, in wait
    label=self.label)
CommandFailedError: Command failed on target084035 with status 22: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph osd pool create ec 1 1 erasure default'"
Updated by Loïc Dachary about 8 years ago
It's bound to happen from time to time when a machine is slow: crushtool hits its 5-second timeout and the pool create fails with EINVAL. The real fix would be to make the crushtool check asynchronous, but that's a complicated change and unlikely to land in infernalis. I suggest we keep this issue as a reminder that such an error should be treated as noise.
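To make the "asynchronous check" idea concrete, here is a minimal, hypothetical Python sketch (not the actual monitor code, which is C++): a synchronous check blocks the caller and turns a timeout into an EINVAL-style failure, while an asynchronous variant runs the subprocess in a worker thread and reports the result via a callback. `sleep` stands in for the crushtool invocation.

```python
import subprocess
import threading

def check_blocking(cmd, timeout):
    """Synchronous check: the caller is stuck until the subprocess
    finishes, and exceeding the timeout is reported as an error,
    like "crushtool check failed with -22" in the logs above."""
    try:
        subprocess.run(cmd, timeout=timeout, check=True)
        return 0
    except subprocess.TimeoutExpired:
        return -22  # EINVAL

def check_async(cmd, on_done):
    """Asynchronous variant: run the check in a worker thread and
    deliver the exit code via a callback, so a slow machine never
    blocks the caller or fails the request outright."""
    def worker():
        proc = subprocess.run(cmd)
        on_done(proc.returncode)
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

# A check that outlives its timeout fails synchronously...
print(check_blocking(["sleep", "2"], timeout=0.1))  # -22

# ...while the asynchronous version returns immediately and the
# result arrives once the subprocess finishes.
results = []
t = check_async(["sleep", "0.2"], results.append)
print(results)  # [] -- caller was not blocked
t.join()
print(results)  # [0]
```

The sketch only illustrates the trade-off Loïc describes: the blocking form converts slowness into a spurious error, while the asynchronous form decouples validation latency from the command that triggered it.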
Updated by Loïc Dachary about 8 years ago
- Related to Bug #11907: crushmap validation must not block the monitor added