Bug #11259

s3_bucket_quota.pl fails

Added by Abhishek Lekshmanan about 9 years ago. Updated almost 8 years ago.

Status: Closed
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: other
Severity: 3 - minor
ceph-qa-suite: rgw

#1

Updated by Abhishek Lekshmanan about 9 years ago

Primary failure logs from the run:

2015-03-27T15:31:30.155 INFO:teuthology.orchestra.run.mira012:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/workunit.client.0'
2015-03-27T15:31:30.257 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/var/lib/teuthworker/src/ceph-qa-suite_giant/tasks/workunit.py", line 361, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 137, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 378, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 114, in wait
    label=self.label)
CommandFailedError: Command failed (workunit test rgw/s3_bucket_quota.pl) on mira012 with status 141: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=90b37d9bdcc044e26f978632cd68f19ece82d19a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_bucket_quota.pl'
2015-03-27T15:31:30.303 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 53, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 41, in run_one_task
    return fn(**kwargs)
  File "/var/lib/teuthworker/src/ceph-qa-suite_giant/tasks/workunit.py", line 105, in task
    config.get('env'), timeout=timeout)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/var/lib/teuthworker/src/ceph-qa-suite_giant/tasks/workunit.py", line 361, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 137, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 378, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 114, in wait
    label=self.label)
CommandFailedError: Command failed (workunit test rgw/s3_bucket_quota.pl) on mira012 with status 141: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=90b37d9bdcc044e26f978632cd68f19ece82d19a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_bucket_quota.pl'
2015-03-27T15:31:30.377 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/sepia/teuthology/search?q=66a0369003314103a38ce2db687a235e
CommandFailedError: Command failed (workunit test rgw/s3_bucket_quota.pl) on mira012 with status 141: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=90b37d9bdcc044e26f978632cd68f19ece82d19a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_bucket_quota.pl'
2015-03-27T15:31:30.377 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/contextutil.py", line 30, in nested
    yield vars
  File "/var/lib/teuthworker/src/ceph-qa-suite_giant/tasks/rgw.py", line 846, in task
    yield
CommandFailedError: Command failed (workunit test rgw/s3_bucket_quota.pl) on mira012 with status 141: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=90b37d9bdcc044e26f978632cd68f19ece82d19a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_bucket_quota.pl'
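
For context (my reading, not stated in the log): an exit status of 141 from a shell is 128 + 13, i.e. the workunit's Perl process was killed by SIGPIPE rather than failing an assertion. A minimal Python sketch for decoding such shell-reported statuses:

import signal

def describe_shell_status(status):
    # Shells report signal deaths as 128 + signal number, so 141 -> signal 13.
    if status > 128:
        sig = signal.Signals(status - 128)
        return "killed by signal %d (%s)" % (sig.value, sig.name)
    return "exited with code %d" % status

print(describe_shell_status(141))  # killed by signal 13 (SIGPIPE)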

#2

Updated by Abhishek Lekshmanan about 9 years ago

Currently, looking at the patches merged into giant [[https://github.com/ceph/ceph/pulls?utf8=%E2%9C%93&q=is%3Aclosed+milestone%3Agiant+rgw+]], only one patch touches this code path: https://github.com/ceph/ceph/pull/3580, which fixes an error when the quota was set to a negative number. While s3_bucket_quota.pl does test this scenario, the run appears to have failed even before that test case was reached. I am not aware of any other patches that made it into giant that affect bucket/object quotas.
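
For illustration only (a sketch, not part of the workunit): the negative-quota scenario that PR #3580 addresses can be exercised with radosgw-admin, where a negative max-objects value means "no limit". Something along these lines, assuming a user "testid" already exists:

import subprocess

def set_bucket_quota(uid, max_objects):
    # "radosgw-admin quota set" accepts a negative value to mean "unlimited";
    # PR #3580 fixed an error when such negative values were supplied.
    subprocess.check_call([
        "radosgw-admin", "quota", "set",
        "--quota-scope=bucket", "--uid=" + uid,
        "--max-objects=%d" % max_objects,
    ])
    subprocess.check_call([
        "radosgw-admin", "quota", "enable",
        "--quota-scope=bucket", "--uid=" + uid,
    ])

set_bucket_quota("testid", -1)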

#3

Updated by Abhishek Lekshmanan about 9 years ago

Re-run of the test suite to rule out transient failures:

http://pulpito.ceph.com/abhi-2015-03-29_17:16:17-rgw-giant---basic-multi/

The tests have passed this time. Comparing the logs of the failed run against this run, however, there is little difference that would indicate a network failure; the primary difference observable between the two runs is that in the failed run, as the test starts, we see:

2015-03-27T15:31:24.082 INFO:tasks.rgw.client.0.mira012.stdout:2015-03-27 15:31:24.082177 7f1bc0ff9700 -1 failed to list objects pool_iterate returned r=-2

which probably indicates the cause of the failure. This line is not seen in the successful run.
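
As an aside (my interpretation, not stated in the log): Ceph logs negated errno values, so r=-2 is -ENOENT, suggesting the pool (or an object in it) did not exist when rgw tried to list it. A quick Python check:

import errno, os

r = -2  # return value from the failed pool_iterate above
print(errno.errorcode[-r])  # ENOENT
print(os.strerror(-r))      # No such file or directory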

#4

Updated by Yehuda Sadeh almost 8 years ago

  • Status changed from New to Closed