Bug #7493 (closed): cephtool/pool_ops failure

Added by Samuel Just about 10 years ago. Updated about 10 years ago.

Status: Resolved
Priority: Urgent
Assignee: -
Category: OSD
Target version: -
% Done: 0%
Source: other
Tags: -
Backport: -
Regression: -
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

ubuntu@teuthology:/a/teuthology-2014-02-19_23:00:21-rados-master-testing-basic-plana/90946

{description: 'rados/singleton/{all/cephtool.yaml fs/btrfs.yaml msgr-failures/few.yaml}',
duration: 1333.5854132175446, failure_reason: 'Command failed on 10.214.131.10 with
status 1: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
&& CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c5c7f6c8e8643c57f92cf7048a9beec080a59fbe
TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage
/home/ubuntu/cephtest/workunit.client.0/cephtool/test.sh''', flavor: basic, mon.a-kernel-sha1: bee26897097fea59a969654b36d25b5844ba0b63,
owner: scheduled_teuthology@teuthology, sentry_event: 'http://sentry.ceph.com/inktank/teuthology/search?q=de4d951c757047cc8933ae0b5305281b',
success: false}
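The status-1 failure above is surfaced by teuthology's remote-command layer, which converts any nonzero exit status into a `CommandFailedError` (visible in `run.py:_check_status` in the tracebacks below). A simplified, illustrative model of that pattern, not teuthology's actual code:

```python
class CommandFailedError(Exception):
    """Raised when a remote command exits with a nonzero status."""

    def __init__(self, command, exitstatus, node=None):
        self.command = command
        self.exitstatus = exitstatus
        self.node = node
        super().__init__(
            "Command failed on %s with status %d: %r"
            % (node, exitstatus, command))


def check_status(command, status, node):
    # Mirrors the shape of run.py:_check_status: any nonzero
    # exit status raises; zero is passed through.
    if status != 0:
        raise CommandFailedError(command=command, exitstatus=status, node=node)
    return status
```

Any failing step in the workunit script therefore propagates up through `parallel.py` and `run_tasks.py` as the single `CommandFailedError` seen twice in the log.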

2014-02-20T00:04:35.637 INFO:teuthology.task.workunit:Stopping cephtool on client.0...
2014-02-20T00:04:35.638 DEBUG:teuthology.orchestra.run:Running [10.214.131.10]: 'rm -rf -- /home/ubuntu/cephtest/workunits.list /home/ubuntu/cephtest/workunit.client.0'
2014-02-20T00:04:35.648 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 82, in __exit__
for result in self:
File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 101, in next
resurrect_traceback(result)
File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 19, in capture_traceback
return func(*args, **kwargs)
File "/home/teuthworker/teuthology-master/teuthology/task/workunit.py", line 345, in _run_tests
args=args,
File "/home/teuthworker/teuthology-master/teuthology/orchestra/remote.py", line 68, in run
r = self._runner(client=self.ssh, **kwargs)
File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 328, in run
r.exitstatus = _check_status(r.exitstatus)
File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 324, in _check_status
raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.131.10 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c5c7f6c8e8643c57f92cf7048a9beec080a59fbe TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/cephtool/test.sh'
2014-02-20T00:04:35.649 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/teuthology-master/teuthology/run_tasks.py", line 31, in run_tasks
manager = run_one_task(taskname, ctx=ctx, config=config)
File "/home/teuthworker/teuthology-master/teuthology/run_tasks.py", line 19, in run_one_task
return fn(**kwargs)
File "/home/teuthworker/teuthology-master/teuthology/task/workunit.py", line 100, in task
_spawn_on_all_clients(ctx, refspec, all_tasks, config.get('env'), config.get('subdir'))
File "/home/teuthworker/teuthology-master/teuthology/task/workunit.py", line 246, in _spawn_on_all_clients
p.spawn(_run_tests, ctx, refspec, role, [unit], env, subdir)
File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 82, in __exit__
for result in self:
File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 101, in next
resurrect_traceback(result)
File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 19, in capture_traceback
return func(*args, **kwargs)
File "/home/teuthworker/teuthology-master/teuthology/task/workunit.py", line 345, in _run_tests
args=args,
File "/home/teuthworker/teuthology-master/teuthology/orchestra/remote.py", line 68, in run
r = self._runner(client=self.ssh, **kwargs)
File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 328, in run
r.exitstatus = _check_status(r.exitstatus)
File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 324, in _check_status
raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.131.10 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c5c7f6c8e8643c57f92cf7048a9beec080a59fbe TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/cephtool/test.sh'
2014-02-20T00:04:35.686 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/inktank/teuthology/search?q=de4d951c757047cc8933ae0b5305281b
CommandFailedError: Command failed on 10.214.131.10 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c5c7f6c8e8643c57f92cf7048a9beec080a59fbe TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/cephtool/test.sh'
2014-02-20T00:04:35.686 DEBUG:teuthology.run_tasks:Unwinding manager ceph
2014-02-20T00:04:35.686 DEBUG:teuthology.orchestra.run:Running [10.214.131.10]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-02-20T00:04:35.907 INFO:teuthology.orchestra.run.err:[10.214.131.10]: dumped all in format json
2014-02-20T00:04:36.912 INFO:teuthology.task.ceph:Scrubbing osd osd.0
2014-02-20T00:04:36.913 DEBUG:teuthology.orchestra.run:Running [10.214.131.10]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd scrub osd.0'
2014-02-20T00:04:37.131 INFO:teuthology.orchestra.run.err:[10.214.131.10]: Error EAGAIN: osd.0 is not up
2014-02-20T00:04:37.142 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
File "/home/teuthworker/teuthology-master/teuthology/contextutil.py", line 27, in nested
yield vars
File "/home/teuthworker/teuthology-master/teuthology/task/ceph.py", line 1442, in task
osd_scrub_pgs(ctx, config)
File "/home/teuthworker/teuthology-master/teuthology/task/ceph.py", line 1078, in osd_scrub_pgs
'ceph', 'osd', 'scrub', role])
File "/home/teuthworker/teuthology-master/teuthology/orchestra/remote.py", line 68, in run
r = self._runner(client=self.ssh, **kwargs)
File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 328, in run
r.exitstatus = _check_status(r.exitstatus)
File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 324, in _check_status
raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.131.10 with status 11: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd scrub osd.0'
2014-02-20T00:04:37.142 INFO:teuthology.misc:Shutting down mds daemons...
2014-02-20T00:04:37.143 DEBUG:teuthology.task.ceph.mds.a:waiting for process to exit
2014-02-20T00:04:37.143 INFO:teuthology.task.ceph.mds.a:Stopped
2014-02-20T00:04:37.143 INFO:teuthology.misc:Shutting down osd daemons...
2014-02-20T00:04:37.144 DEBUG:teuthology.task.ceph.osd.1:waiting for process to exit
2014-02-20T00:04:37.152 INFO:teuthology.task.ceph.osd.1:Stopped
2014-02-20T00:04:37.152 DEBUG:teuthology.task.ceph.osd.0:waiting for process to exit
2014-02-20T00:04:37.200 INFO:teuthology.task.ceph.osd.0:Stopped

#1

Updated by Sage Weil about 10 years ago

  • Status changed from New to Resolved