Bug #7528 (closed)

Error in "parallel execution" in rgw-firefly-distro-basic-plana suite

Added by Yuri Weinstein about 10 years ago. Updated about 10 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
% Done:
0%
Source:
other
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

Logs are in qa-proxy.ceph.com/teuthology/teuthology-2014-02-22_23:02:21-rgw-firefly-distro-basic-plana/98501/

2014-02-23T20:39:49.556 DEBUG:teuthology.orchestra.run:Running [10.214.131.11]: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6d8cb22e5887b3289bf3e0809f207442874aaccc TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rgw/s3_user_quota.pl'
2014-02-23T20:39:50.581 INFO:teuthology.task.workunit:Stopping rgw/s3_user_quota.pl on client.0...
2014-02-23T20:39:50.581 DEBUG:teuthology.orchestra.run:Running [10.214.131.11]: 'rm -rf -- /home/ubuntu/cephtest/workunits.list /home/ubuntu/cephtest/workunit.client.0'
2014-02-23T20:39:50.589 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/teuthology-firefly/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/teuthology-firefly/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/task/workunit.py", line 345, in _run_tests
    args=args,
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 328, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 324, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.131.11 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6d8cb22e5887b3289bf3e0809f207442874aaccc TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rgw/s3_user_quota.pl'
2014-02-23T20:39:50.590 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 31, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/teuthology-firefly/teuthology/run_tasks.py", line 19, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/task/workunit.py", line 96, in task
    all_spec = True
  File "/home/teuthworker/teuthology-firefly/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/teuthology-firefly/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/teuthology-firefly/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/task/workunit.py", line 345, in _run_tests
    args=args,
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/remote.py", line 106, in run
    r = self._runner(client=self.ssh, **kwargs)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 328, in run
    r.exitstatus = _check_status(r.exitstatus)
  File "/home/teuthworker/teuthology-firefly/teuthology/orchestra/run.py", line 324, in _check_status
    raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.131.11 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6d8cb22e5887b3289bf3e0809f207442874aaccc TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rgw/s3_user_quota.pl'
2014-02-23T20:39:50.616 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/inktank/teuthology/search?q=d529000d2ee64292ad1d3dbffee5d119
CommandFailedError: Command failed on 10.214.131.11 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6d8cb22e5887b3289bf3e0809f207442874aaccc TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rgw/s3_user_quota.pl'
2014-02-23T20:39:50.616 DEBUG:teuthology.run_tasks:Unwinding manager rgw
2014-02-23T20:39:50.617 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
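The archived log cuts off mid-traceback at this point; what follows is the job's YAML configuration and summary. For reference, the CommandFailedError seen in both tracebacks originates in teuthology's exit-status check on the remote command: orchestra/run.py raises it for any nonzero exit status, and parallel.py captures the worker's traceback so it can be re-raised in the parent task. A minimal sketch of that pattern (simplified; the signatures here are assumptions, not teuthology's actual API):

# Minimal sketch of the failure path visible in the tracebacks above.
# Names mirror the traceback, but signatures are simplified assumptions.

class CommandFailedError(Exception):
    def __init__(self, command, exitstatus, node):
        self.command = command
        self.exitstatus = exitstatus
        self.node = node
        super(CommandFailedError, self).__init__(
            'Command failed on %s with status %d: %r'
            % (node, exitstatus, command))

def _check_status(command, exitstatus, node):
    # Mirrors orchestra/run.py in the traceback: any nonzero exit status
    # from the remote shell becomes an exception instead of a return value.
    if exitstatus != 0:
        raise CommandFailedError(command=command, exitstatus=exitstatus,
                                 node=node)
    return exitstatus

Because the workunit runs inside the parallel context manager, the same traceback appears twice in the log: once logged by teuthology.parallel when the worker fails, and again by run_tasks as the resurrected exception propagates out of parallel.py's __exit__.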
archive_path: /var/lib/teuthworker/archive/teuthology-2014-02-22_23:02:21-rgw-firefly-distro-basic-plana/98501
description: rgw/multifs/{clusters/fixed-2.yaml fs/xfs.yaml tasks/rgw_user_quota.yaml}
email: null
job_id: '98501'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: plana
name: teuthology-2014-02-22_23:02:21-rgw-firefly-distro-basic-plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug ms: 1
        debug osd: 20
        osd sloppy crc: true
    fs: xfs
    log-whitelist:
    - slow request
    sha1: 6d8cb22e5887b3289bf3e0809f207442874aaccc
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 6d8cb22e5887b3289bf3e0809f207442874aaccc
  s3tests:
    branch: master
  workunit:
    sha1: 6d8cb22e5887b3289bf3e0809f207442874aaccc
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
  - client.0
- - mon.b
  - mds.a
  - osd.3
  - osd.4
  - osd.5
  - client.1
targets:
  ubuntu@plana12.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC3NLjRpCwPwUlRPzeJSE20kVoPse4PCP6mTARZ9wGc6XkZhxrYqLU3OIqhefd2wh0tz61JtclB4aVI6fCD/cx5QRxahyCd9m/BhDnpUWuLmrV5Mqg4lhlpTmeKDiCcgEuxu9A6UY6ch50ME6P08hVVZhRokjJ/2X72yD/KbaZdHx0WMHrKVHmvq0aIJTPqlRlsMvvydQ3BBb1yxORUimMVSt5yDwgovGn0ojSGv5SUrN8opB5qP/3kEX4H1+Eel+FCX53lQmGwNfuidzcZQPLlYvL/Ztp3stmC9N+x2kwepaU8GtgWPKZrlGLCBi3T29bpi8r/m57H4jTdObn3j/gl
  ubuntu@plana29.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6yHeLkt88spLGm1otJ4MM2SDrBJesgz1uoFxqD5WX+t9P7ge/qXAPvn/ySslAtaseWnsEdCbSXGN2GHplEgL0FB3qlb948qz1F5YM++QCMsXV9qql8dDPgW3P3S6AeHNyn+jS37TUDDJuxLO9i9C1A59/Edx6pSSQLQ+OlodC64LD00RQY2Qs/PE+q64NtJnwYbdre01b6MPWFzh68cm3c1p47rK7VDeEuermEM8D5gHuOeudW5TzzzE2YzFiyF9P9kfQSLZlMEEF++yqHXuUtWgQEN8YdGDfH/xhEi2Uj/sVIG6kqTJi/cPhHyXA9VuVExARJuXRM2gZSYeaimEh
tasks:
- internal.lock_machines:
  - 2
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install: null
- ceph: null
- rgw:
  - client.0
- workunit:
    clients:
      client.0:
      - rgw/s3_user_quota.pl
teuthology_branch: firefly
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.3324
description: rgw/multifs/{clusters/fixed-2.yaml fs/xfs.yaml tasks/rgw_user_quota.yaml}
duration: 282.6667058467865
failure_reason: 'Command failed on 10.214.131.11 with status 1: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
  && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
  CEPH_REF=6d8cb22e5887b3289bf3e0809f207442874aaccc TESTDIR="/home/ubuntu/cephtest" 
  CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage
  /home/ubuntu/cephtest/workunit.client.0/rgw/s3_user_quota.pl'''
flavor: basic
owner: scheduled_teuthology@teuthology
sentry_event: http://sentry.ceph.com/inktank/teuthology/search?q=d529000d2ee64292ad1d3dbffee5d119
success: false
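For comparison against the logs, the failing command line can be reconstructed from the job parameters above (TESTDIR, CEPH_ID, CEPH_REF, and the workunit script). A hypothetical Python helper, shown only to illustrate how the pieces of the logged command fit together; this is not how teuthology itself builds the command:

# Hypothetical helper that reassembles the failing command line from the
# values recorded in this job's YAML. Illustration only.

def workunit_command(testdir, client_id, ceph_ref, script):
    mnt = '{testdir}/mnt.{id}/client.{id}/tmp'.format(
        testdir=testdir, id=client_id)
    return (
        'mkdir -p -- {mnt} && cd -- {mnt} && '
        'CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF={ref} '
        'TESTDIR="{testdir}" CEPH_ID="{id}" '
        'adjust-ulimits ceph-coverage {testdir}/archive/coverage '
        '{testdir}/workunit.client.{id}/{script}'
    ).format(mnt=mnt, ref=ceph_ref, testdir=testdir, id=client_id,
             script=script)

print(workunit_command('/home/ubuntu/cephtest', 0,
                       '6d8cb22e5887b3289bf3e0809f207442874aaccc',
                       'rgw/s3_user_quota.pl'))

Running the printed command by hand on the target node (plana12/plana29 in this job) would exercise the same rgw/s3_user_quota.pl workunit that failed with status 1.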
#1

Updated by Anonymous about 10 years ago

  • Assignee set to Anonymous

I think that this may be related to issue #7375.

#2

Updated by Anonymous about 10 years ago

A fix for this has been pushed to https://github.com/ceph/ceph/pull/1333

The wip branch is wip-s3radoscheck-wusui.

#3

Updated by Anonymous about 10 years ago

  • Status changed from New to Fix Under Review
  • Assignee changed from Anonymous to Yehuda Sadeh
#4

Updated by Anonymous about 10 years ago

  • Status changed from Fix Under Review to Resolved
  • Assignee changed from Yehuda Sadeh to Anonymous