Bug #8715 (closed)

"ceph_test_librbd_fsx: invalid option -- 'h'" error in teuthology-2014-06-30_19:02:27-rbd-dumpling-testing-basic-plana suite

Added by Yuri Weinstein almost 10 years ago. Updated over 9 years ago.

Status: Can't reproduce
Priority: Urgent
Assignee: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-06-30_19:02:27-rbd-dumpling-testing-basic-plana/335280/

2014-07-01T07:19:44.065 INFO:teuthology.orchestra.run.plana78.stderr:ceph_test_librbd_fsx: invalid option -- 'h'
2014-07-01T07:19:44.066 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/teuthology-master/teuthology/task/rbd_fsx.py", line 82, in _run_one_client
    remote.run(args=args)
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/remote.py", line 114, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 401, in run
    r.wait()
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 102, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on plana78 with status 89: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_librbd_fsx -d -W -R -p 100 -P /home/ubuntu/cephtest/archive -r 1 -w 1 -t 1 -h 1 -l 250000000 -S 0 -N 2000 pool_client.0 image_client.0'
2014-07-01T07:19:44.067 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-master/teuthology/run_tasks.py", line 45, in run_tasks
    manager.__enter__()
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/teuthology-master/teuthology/task/rbd_fsx.py", line 40, in task
    p.spawn(_run_one_client, ctx, config, role)
  File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/teuthology-master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/teuthology-master/teuthology/task/rbd_fsx.py", line 82, in _run_one_client
    remote.run(args=args)
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/remote.py", line 114, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 401, in run
    r.wait()
  File "/home/teuthworker/teuthology-master/teuthology/orchestra/run.py", line 102, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on plana78 with status 89: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_librbd_fsx -d -W -R -p 100 -P /home/ubuntu/cephtest/archive -r 1 -w 1 -t 1 -h 1 -l 250000000 -S 0 -N 2000 pool_client.0 image_client.0'
2014-07-01T07:19:44.068 INFO:teuthology.task.thrashosds:joining thrashosds
2014-07-01T07:19:44.320 INFO:teuthology.orchestra.run.plana87.stderr:reweighted osd.0 to 1 (8655362)
2014-07-01T07:19:44.333 INFO:teuthology.orchestra.run.plana87:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd reweight 1 1'
2014-07-01T07:19:45.323 INFO:teuthology.orchestra.run.plana87.stderr:reweighted osd.1 to 1 (8655362)
2014-07-01T07:19:45.336 INFO:teuthology.orchestra.run.plana87:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd reweight 2 1'
2014-07-01T07:19:46.376 INFO:teuthology.orchestra.run.plana87.stderr:reweighted osd.2 to 1 (8655362)
2014-07-01T07:19:46.389 INFO:teuthology.orchestra.run.plana87:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd reweight 3 1'
2014-07-01T07:19:47.340 INFO:teuthology.orchestra.run.plana87.stderr:reweighted osd.3 to 1 (8655362)
2014-07-01T07:19:47.353 INFO:teuthology.orchestra.run.plana87:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd reweight 4 1'
2014-07-01T07:19:48.561 INFO:teuthology.orchestra.run.plana87.stderr:reweighted osd.4 to 1 (8655362)
2014-07-01T07:19:48.575 INFO:teuthology.orchestra.run.plana87:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd reweight 5 1'
2014-07-01T07:19:49.610 INFO:teuthology.orchestra.run.plana87.stderr:reweighted osd.5 to 1 (8655362)
2014-07-01T07:19:49.623 INFO:teuthology.task.thrashosds.ceph_manager:waiting for recovery to complete
2014-07-01T07:19:49.623 INFO:teuthology.orchestra.run.plana87:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format=json'
2014-07-01T07:19:49.920 INFO:teuthology.orchestra.run.plana87.stderr:dumped all in format json
2014-07-01T07:19:49.934 INFO:teuthology.orchestra.run.plana87:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format=json'
2014-07-01T07:19:50.229 INFO:teuthology.orchestra.run.plana87.stderr:dumped all in format json
2014-07-01T07:19:50.245 INFO:teuthology.orchestra.run.plana87:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph status --format=json-pretty'
archive_path: /var/lib/teuthworker/archive/teuthology-2014-06-30_19:02:27-rbd-dumpling-testing-basic-plana/335280
branch: dumpling
description: rbd/thrash/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml
  thrashers/default.yaml workloads/rbd_fsx_cache_writeback.yaml}
email: null
job_id: '335280'
kernel: &id001
  kdb: true
  sha1: 2172e939f9256947064274f6f5e02aebfc0a2a9b
last_in_suite: false
machine_type: plana
name: teuthology-2014-06-30_19:02:27-rbd-dumpling-testing-basic-plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: dumpling
  ceph:
    conf:
      client:
        rbd cache: true
      global:
        ms inject socket failures: 5000
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
        osd op thread timeout: 60
    fs: btrfs
    log-whitelist:
    - slow request
    sha1: 583e6e3ef7f28bf34fe038e8a2391f9325a69adf
  ceph-deploy:
    branch:
      dev: dumpling
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 583e6e3ef7f28bf34fe038e8a2391f9325a69adf
  s3tests:
    branch: dumpling
  workunit:
    sha1: 583e6e3ef7f28bf34fe038e8a2391f9325a69adf
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mds.a
  - osd.3
  - osd.4
  - osd.5
  - client.0
suite: rbd
targets:
  ubuntu@plana78.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8m+86JHGSyRkSWj9p/K6JUbRcPjB7TtLZ9OBudXAGZNgReiOJoCU5kkpwejl0uXXCOHe/DB/bH81JCQbqY3XCJjU5JZ1wBsL/owaErPSfbbaouNV2k1FQjiSXYtPzx+qwEOeOZtEBPQ4p04npai6NzPLX43OGx/UiAwpyEGfVxZedmci0VBtC7QdCQkP3sNJqSxFYdoVGjU5jv6BarPqV8LM4v00f8TmD1GdP51bfLGSKii6UU1IKXXR78ifb+9QUX4p/Clkl6Qgz8CJ70Iu+mcBZclJaGoAyuoKBhXE2oi2W1cQVquPqloxbN+VbbjoOL5OHbGg2euxyohZhgJaF
  ubuntu@plana87.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDhqfyCPf8ZRUGHmR9+On1Q68qh6Ieckq0ZVGPWu++MNGmHM9Ed18yHcV0Ahx70UiXObav91bmFynnI6eQc3PMn8yR6cn2nBloczLD1NzNvTC/nKPXaXq4KJQP/HRjYnTa997pQq3J+4+Cktxxy1ookAoAlMsHhKlCZFodaz/a/CmdfWPEYVId4dVFY1oWqIY5BTgIJduf+NMA0XRlIq8Pm7m16mB7wIWtX3zyZiE/urS9OYN/8YwPR4huwH/DBoVxiZOa5gTEJPdgappmEmNLtz0TCro4Q3Gt0q5uiu9KiWVwsJQjDskCbciGWubmZK20DP3U72NPawXj24YvEyIgp
tasks:
- internal.lock_machines:
  - 2
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install: null
- ceph:
    log-whitelist:
    - wrongly marked me down
    - objects unfound and apparently lost
- thrashosds:
    timeout: 1200
- rbd_fsx:
    clients:
    - client.0
    ops: 2000
teuthology_branch: master
tube: plana
verbose: false
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.10776
description: rbd/thrash/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml
  thrashers/default.yaml workloads/rbd_fsx_cache_writeback.yaml}
duration: 321.27611207962036
failure_reason: 'Command failed on plana78 with status 89: ''adjust-ulimits ceph-coverage
  /home/ubuntu/cephtest/archive/coverage ceph_test_librbd_fsx -d -W -R -p 100 -P /home/ubuntu/cephtest/archive
  -r 1 -w 1 -t 1 -h 1 -l 250000000 -S 0 -N 2000 pool_client.0 image_client.0'''
flavor: basic
mon.a-kernel-sha1: 2172e939f9256947064274f6f5e02aebfc0a2a9b
mon.b-kernel-sha1: 2172e939f9256947064274f6f5e02aebfc0a2a9b
owner: scheduled_teuthology@teuthology
success: false
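
Note for triage: the task invokes ceph_test_librbd_fsx with -h 1 (presumably the fsx hole-alignment knob that rbd_fsx.py passes), and the dumpling-era binary evidently does not recognize -h, so option parsing aborts and the command exits with status 89 before any I/O runs. One way a wrapper could avoid this class of failure is to probe the installed binary's usage text and only append optional flags it advertises. The sketch below is illustrative only, assuming the binary prints its option list when run without arguments; probe_fsx_flags and build_args are hypothetical helpers, not part of the real teuthology rbd_fsx task.

import subprocess

def probe_fsx_flags(binary="ceph_test_librbd_fsx"):
    """Return the single-letter options mentioned in the binary's usage text.

    Hypothetical helper: assumes running the binary with no arguments makes
    it print a usage message (to stdout or stderr) and exit non-zero.
    """
    try:
        proc = subprocess.run(
            [binary],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            text=True,
        )
    except FileNotFoundError:
        return set()
    flags = set()
    for token in proc.stdout.split():
        # Collect tokens that look like short options, e.g. "-h" or "-h:".
        if len(token) >= 2 and token[0] == "-" and token[1].isalpha():
            flags.add(token[1])
    return flags

def build_args(binary, supported, numops=2000):
    """Build an fsx command line, adding -h 1 only if the binary advertises -h."""
    args = [binary, "-d", "-W", "-R", "-p", "100", "-N", str(numops)]
    if "h" in supported:  # dumpling-era builds reject -h with status 89
        args += ["-h", "1"]
    return args

if __name__ == "__main__":
    print(build_args("ceph_test_librbd_fsx", probe_fsx_flags()))

In practice the simpler fixes are either to drop -h from the dumpling rbd_fsx job or to keep the suite branch matched to the binary under test; the probe above just shows how the mismatch could be detected up front.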
Actions #1

Updated by Sage Weil almost 10 years ago

  • Project changed from Ceph to rbd
  • Priority changed from Normal to Urgent
Actions #2

Updated by Sage Weil over 9 years ago

  • Status changed from New to Can't reproduce