Bug #9819
closed: EBUSY during scrub
Status: Won't Fix
Priority: Urgent
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
logs: ubuntu@teuthology:/a/teuthology-2014-10-17_02:32:01-rados-giant-distro-basic-multi/552986
2014-10-17T18:27:38.372 INFO:teuthology.orchestra.run.burnupi25:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph scrub'
2014-10-17T18:27:39.504 INFO:teuthology.orchestra.run.burnupi25.stderr:2014-10-17 18:27:39.491474 7f2dd77fe700 0 monclient: hunting for new mon
2014-10-17T18:27:39.515 INFO:teuthology.orchestra.run.burnupi25.stderr:Error EBUSY:
2014-10-17T18:27:39.536 ERROR:tasks.mon_thrash:Saw exception while triggering scrub
Traceback (most recent call last):
  File "/var/lib/teuthworker/src/ceph-qa-suite_giant/tasks/mon_thrash.py", line 301, in do_thrash
    self.manager.raw_cluster_cmd('scrub')
  File "/var/lib/teuthworker/src/ceph-qa-suite_giant/tasks/ceph_manager.py", line 547, in raw_cluster_cmd
    stdout=StringIO(),
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 128, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 364, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 105, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on burnupi25 with status 16: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph scrub'
Updated by Sage Weil over 9 years ago
- Status changed from New to Won't Fix
This is expected and harmless. We just report the failure and move on. It happens when Paxos is busy when we request a scrub; we'll just try again later.
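The retry behavior described above can be sketched in Python. This is a hypothetical illustration, not the teuthology code: `scrub_with_retry` and its `run_scrub` callable are made-up names, and the only facts taken from the ticket are that `ceph scrub` exits with status 16 (EBUSY) when Paxos is busy and that the request can simply be retried later.

```python
import errno
import time

def scrub_with_retry(run_scrub, retries=5, delay=2.0):
    """Retry a scrub request while the monitors report EBUSY.

    run_scrub is any callable returning the command's exit status
    (hypothetical wrapper around 'ceph scrub'). Per the ticket, the
    command exits with EBUSY (16) when Paxos is mid-proposal; that
    condition is transient, so we back off and try again.
    """
    for attempt in range(retries):
        status = run_scrub()
        if status == 0:
            return True                    # scrub accepted
        if status != errno.EBUSY:          # errno.EBUSY == 16 on Linux
            raise RuntimeError('scrub failed with status %d' % status)
        time.sleep(delay)                  # Paxos was busy; wait and retry
    return False                           # still busy after all attempts
```

A caller would pass in something that actually runs the command (e.g. a `subprocess.run(['ceph', 'scrub'])` wrapper) and treat a `False` return as "monitors still busy", exactly as the thrasher treats the logged failure: report it and move on.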