Bug #9169

100-continue broken for centos/rhel

Added by Yuri Weinstein over 9 years ago. Updated about 9 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
dumpling
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-08-18_16:07:27-upgrade:dumpling-firefly-x-firefly-distro-basic-vps/432668/

2014-08-18T21:19:03.988 INFO:teuthology.orchestra.run.vpm030.stderr:======================================================================
2014-08-18T21:19:03.989 INFO:teuthology.orchestra.run.vpm030.stderr:ERROR: s3tests.functional.test_s3.test_atomic_read_1mb
2014-08-18T21:19:03.989 INFO:teuthology.orchestra.run.vpm030.stderr:----------------------------------------------------------------------
2014-08-18T21:19:03.989 INFO:teuthology.orchestra.run.vpm030.stderr:Traceback (most recent call last):
2014-08-18T21:19:03.989 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/nose/case.py", line 197, in runTest
2014-08-18T21:19:03.989 INFO:teuthology.orchestra.run.vpm030.stderr:    self.test(*self.arg)
2014-08-18T21:19:03.989 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/s3tests/functional/test_s3.py", line 4551, in test_atomic_read_1mb
2014-08-18T21:19:03.989 INFO:teuthology.orchestra.run.vpm030.stderr:    _test_atomic_read(1024*1024)
2014-08-18T21:19:03.989 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/s3tests/functional/test_s3.py", line 4541, in _test_atomic_read
2014-08-18T21:19:03.990 INFO:teuthology.orchestra.run.vpm030.stderr:    read_key.get_contents_to_file(fp_a2)
2014-08-18T21:19:03.990 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/s3/key.py", line 1643, in get_contents_to_file
2014-08-18T21:19:03.990 INFO:teuthology.orchestra.run.vpm030.stderr:    response_headers=response_headers)
2014-08-18T21:19:03.990 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/s3/key.py", line 1475, in get_file
2014-08-18T21:19:03.990 INFO:teuthology.orchestra.run.vpm030.stderr:    query_args=None)
2014-08-18T21:19:03.990 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/s3/key.py", line 1529, in _get_file_internal
2014-08-18T21:19:03.990 INFO:teuthology.orchestra.run.vpm030.stderr:    fp.write(bytes)
2014-08-18T21:19:03.991 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/s3tests/functional/test_s3.py", line 4481, in write
2014-08-18T21:19:03.991 INFO:teuthology.orchestra.run.vpm030.stderr:    self.interrupt()
2014-08-18T21:19:03.991 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/s3tests/functional/test_s3.py", line 4537, in <lambda>
2014-08-18T21:19:03.991 INFO:teuthology.orchestra.run.vpm030.stderr:    lambda: key.set_contents_from_file(fp_b)
2014-08-18T21:19:03.991 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/s3/key.py", line 1286, in set_contents_from_file
2014-08-18T21:19:03.991 INFO:teuthology.orchestra.run.vpm030.stderr:    chunked_transfer=chunked_transfer, size=size)
2014-08-18T21:19:03.991 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/s3/key.py", line 746, in send_file
2014-08-18T21:19:03.991 INFO:teuthology.orchestra.run.vpm030.stderr:    chunked_transfer=chunked_transfer, size=size)
2014-08-18T21:19:03.992 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/s3/key.py", line 944, in _send_file_internal
2014-08-18T21:19:03.992 INFO:teuthology.orchestra.run.vpm030.stderr:    query_args=query_args
2014-08-18T21:19:03.992 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/s3/connection.py", line 664, in make_request
2014-08-18T21:19:03.992 INFO:teuthology.orchestra.run.vpm030.stderr:    retry_handler=retry_handler
2014-08-18T21:19:03.992 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/connection.py", line 1053, in make_request
2014-08-18T21:19:03.992 INFO:teuthology.orchestra.run.vpm030.stderr:    retry_handler=retry_handler)
2014-08-18T21:19:03.992 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/connection.py", line 923, in _mexe
2014-08-18T21:19:03.992 INFO:teuthology.orchestra.run.vpm030.stderr:    request.body, request.headers)
2014-08-18T21:19:03.993 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/lib/python2.6/site-packages/boto/s3/key.py", line 840, in sender
2014-08-18T21:19:03.993 INFO:teuthology.orchestra.run.vpm030.stderr:    http_conn.send(chunk)
2014-08-18T21:19:03.993 INFO:teuthology.orchestra.run.vpm030.stderr:  File "/usr/lib64/python2.6/httplib.py", line 759, in send
2014-08-18T21:19:03.993 INFO:teuthology.orchestra.run.vpm030.stderr:    self.sock.sendall(str)
2014-08-18T21:19:03.993 INFO:teuthology.orchestra.run.vpm030.stderr:  File "<string>", line 1, in sendall
2014-08-18T21:19:03.993 INFO:teuthology.orchestra.run.vpm030.stderr:timeout: timed out
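The stall named in the title can be sketched in miniature. The snippet below is a hypothetical illustration, not radosgw or boto code: a client sends `Expect: 100-continue` and blocks waiting for the interim `HTTP/1.1 100 Continue` line, while the server never sends it. In the actual traceback above the timeout fires inside `sendall` while boto streams a chunk, but the underlying problem is the same class of 100-continue stall.

```python
import socket
import threading
import time

def silent_server(srv):
    # Accept one connection and read the request headers, but never send the
    # interim "HTTP/1.1 100 Continue" response a correct server would send.
    conn, _ = srv.accept()
    conn.recv(65536)
    time.sleep(2)        # stay silent while the client waits
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=silent_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port), timeout=0.5)
cli.sendall(b"PUT /bucket/key HTTP/1.1\r\n"
            b"Host: 127.0.0.1\r\n"
            b"Content-Length: 5\r\n"
            b"Expect: 100-continue\r\n"
            b"\r\n")
try:
    cli.recv(4096)       # block waiting for "100 Continue"
    timed_out = False
except socket.timeout:
    timed_out = True     # mirrors the "timeout: timed out" in the traceback
cli.close()
print("timed out waiting for 100 Continue:", timed_out)
```

Running it prints that the client timed out, which is the behavior s3tests surfaces as `timeout: timed out` when the gateway front end mishandles 100-continue.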
archive_path: /var/lib/teuthworker/archive/teuthology-2014-08-18_16:07:27-upgrade:dumpling-firefly-x-firefly-distro-basic-vps/432668
branch: firefly
description: upgrade:dumpling-firefly-x/parallel/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml}
  3-firefly-upgrade/firefly.yaml 4-workload/{rados_api.yaml rados_loadgenbig.yaml
  test_rbd_api.yaml test_rbd_python.yaml} 5-upgrade-sequence/upgrade-by-daemon.yaml
  6-final-workload/{ec-readwrite.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml
  rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml rgw_swift.yaml}
  distros/centos_6.5.yaml}
email: ceph-qa@ceph.com
job_id: '432668'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: vps
name: teuthology-2014-08-18_16:07:27-upgrade:dumpling-firefly-x-firefly-distro-basic-vps
nuke-on-error: true
os_type: centos
os_version: '6.5'
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      global:
        osd heartbeat grace: 100
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - scrub mismatch
    - ScrubResult
    sha1: ae787cfa88dfd0f5add5932b297258c46af4e333
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: ae787cfa88dfd0f5add5932b297258c46af4e333
  rgw:
    default_idle_timeout: 1200
  s3tests:
    branch: firefly
  workunit:
    sha1: ae787cfa88dfd0f5add5932b297258c46af4e333
owner: scheduled_yuriw_no_ec
priority: 1000
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
  - client.1
suite: upgrade:dumpling-firefly-x
suite_branch: firefly
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_firefly
targets:
  ubuntu@vpm027.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAx4XUejHF6ZPVQwQHnOc4nk9BipStKwb8XsEjP6iwXgNmaNlqN+qq2CyVRDIHWIVOm2R2XePms7cXneH9nGZoOqf0+awncWSu/YP67tjpBMi4tIbJO5X+qqcHqALdAh5QrNLQfcBEoPPHGk9WumKzhs2t+0dpuzE0RcLtGMV1hlJ8VZa7wYICvWGIdiZDqUWXZtSHdnVKd94LsFYTG+JtXaNGlc8UAsCqYPoupga8aZDMXDnwuFafGOk8an6g1Wvu0aiHP2ZcGJ/EUdKyRjBpH87w0N4hIZ9IljFJ6la4Qv3WErh36KU6yo4xhGHioI34dqpOL0I+d0RCViWKac+HBQ==
  ubuntu@vpm030.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqEX3BP+EJRqdEgWbhe3THdSvsQRc31RXBnHkYvBP0SQfipYvfXkspRUWifIzbagfzT2Qym8wloFESusK2SZs3n6Qr+1WG6CWbshpFPCt8ZL19nnAn+rWyA2/C4iwcmeaLrampJzQ8uKddDqRjUvQejIytQYqQuTfZld5kKt4RBPsRu0W1RO5pkwWmZHfNV+faV7jToVHRSbrrixFxtAWIp/FYYcvT3dp/rhg9NnUsp03QEaXup41JydJMkuV50dou7pZ4OLT9TzB1L2UyCR//rMpA7S2RKv7LzvXjDsTCQ5gK/V5wLgtMk0w1VDVUbXiegiKQgunqlMnElrgZhYKdw==
  ubuntu@vpm042.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvkNA77XmPmFL/hzkcIVL5L+hwOKb3Gc+s8ahWI2bt3NxK2bSnHZlxyZo3NZGcOlJbiuinMTCKueEwq89zgeaHzvpZKurDkBRjkRRXbgyeXhzTZz+vleG8PzpBUdoj8JpRpMDZKD/eP2opMOxE8C3QAaNXIUbJMPfDVnrMg3SKW+my52QQLSpgrpPDnvTpie4dTZnmAcjIklxQfbLZ4SP+NBN26DrvYgSYiLSOj2K59tnQugkZlkczq4NGATjjdIxoe54zN33NJ6BmKrVjOXclPmKSr1YhsWxPE0doTvrWLvViqdrxv53at6Ze0xAKwTcwxiKZx5HmjJO+l4sr7Q3sQ==
tasks:
- internal.lock_machines:
  - 3
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- print: '**** done dumpling install'
- ceph:
    fs: xfs
- parallel:
  - workload
- print: '**** done parallel'
- install.upgrade:
    client.0:
      branch: firefly
    mon.a:
      branch: firefly
    mon.b:
      branch: firefly
- print: '**** done install.upgrade'
- ceph.restart: null
- print: '**** done restart'
- parallel:
  - workload2
  - upgrade-sequence
- print: '**** done parallel'
- install.upgrade:
    client.0: null
- print: '**** done install.upgrade client.0 to the version from teuthology-suite
    arg'
- rados:
    clients:
    - client.1
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
- workunit:
    clients:
      client.1:
      - rados/load-gen-mix.sh
- mon_thrash:
    revive_delay: 20
    thrash_delay: 1
- workunit:
    clients:
      client.1:
      - rados/test.sh
- workunit:
    clients:
      client.1:
      - cls/test_cls_rbd.sh
- workunit:
    clients:
      client.1:
      - rbd/import_export.sh
    env:
      RBD_CREATE_ARGS: --new-format
- rgw:
  - client.1
- s3tests:
    client.1:
      rgw_server: client.1
- swift:
    client.1:
      rgw_server: client.1
teuthology_branch: master
tube: vps
upgrade-sequence:
  sequential:
  - install.upgrade:
      mon.a: null
  - print: '**** done install.upgrade mon.a to the version from teuthology-suite arg'
  - install.upgrade:
      mon.b: null
  - print: '**** done install.upgrade mon.b to the version from teuthology-suite arg'
  - ceph.restart:
      daemons:
      - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
      daemons:
      - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
  - exec:
      mon.a:
      - ceph osd crush tunables firefly
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.4660
workload:
  sequential:
  - workunit:
      branch: dumpling
      clients:
        client.0:
        - rados/test.sh
        - cls
  - print: '**** done rados/test.sh &  cls'
  - workunit:
      branch: dumpling
      clients:
        client.0:
        - rados/load-gen-big.sh
  - print: '**** done rados/load-gen-big.sh'
  - workunit:
      branch: dumpling
      clients:
        client.0:
        - rbd/test_librbd.sh
  - print: '**** done rbd/test_librbd.sh'
  - workunit:
      branch: dumpling
      clients:
        client.0:
        - rbd/test_librbd_python.sh
  - print: '**** done rbd/test_librbd_python.sh'
workload2:
  sequential:
  - workunit:
      branch: firefly
      clients:
        client.0:
        - rados/test.sh
        - cls
  - print: '**** done #rados/test.sh and cls 2'
  - workunit:
      branch: firefly
      clients:
        client.0:
        - rados/load-gen-big.sh
  - print: '**** done rados/load-gen-big.sh 2'
  - workunit:
      branch: firefly
      clients:
        client.0:
        - rbd/test_librbd.sh
  - print: '**** done rbd/test_librbd.sh 2'
  - workunit:
      branch: firefly
      clients:
        client.0:
        - rbd/test_librbd_python.sh
  - print: '**** done rbd/test_librbd_python.sh 2'
description: upgrade:dumpling-firefly-x/parallel/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml}
  3-firefly-upgrade/firefly.yaml 4-workload/{rados_api.yaml rados_loadgenbig.yaml
  test_rbd_api.yaml test_rbd_python.yaml} 5-upgrade-sequence/upgrade-by-daemon.yaml
  6-final-workload/{ec-readwrite.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml
  rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml rgw_swift.yaml}
  distros/centos_6.5.yaml}
duration: 18391.802810907364
failure_reason: 'Command failed on vpm030 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.1.conf
  /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests
  -v -a ''!fails_on_rgw''"'
flavor: basic
owner: scheduled_yuriw_no_ec
success: false
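For context (an assumption on my part, not confirmed as the fix for this ticket): radosgw in this era depended on the web front end to emit the interim 100 Continue response, and the Ceph documentation describes a `rgw print continue` option for front ends that lack 100-continue support, as stock Apache on CentOS/RHEL did. A ceph.conf fragment along those lines, with the section name purely illustrative:

```
[client.radosgw.gateway]
    ; documented knob for front ends without 100-continue support;
    ; whether it is the resolution of this ticket is an assumption
    rgw print continue = false
```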

Related issues: 1 (0 open, 1 closed)

Has duplicate: rgw - Bug #9825: s3tests failing on rhel 6.4 and 6.5 in upgrade:dumpling-firefly-x:parallel-giant-distro-basic-vps run (status: Duplicate, Yehuda Sadeh, 10/19/2014)
