Bug #9148
closed
rgw: multiregion tests failing, s3tests.functional.test_s3.test_region_copy_object
Added by Sage Weil over 9 years ago.
Updated over 9 years ago.
Description
ubuntu@teuthology:/a/sage-2014-08-16_16:59:28-rgw-master-testing-basic-multi$ teuthology-ls . | grep FAIL
428745 FAIL scheduled_sage@flab rgw/singleton/{all/radosgw-admin-data-sync.yaml rgw_pool_type/ec-cache.yaml} 581s
428746 FAIL scheduled_sage@flab rgw/singleton/{all/radosgw-admin-data-sync.yaml rgw_pool_type/ec.yaml} 650s
428747 FAIL scheduled_sage@flab rgw/singleton/{all/radosgw-admin-data-sync.yaml rgw_pool_type/replicated.yaml} 1418s
428759 FAIL scheduled_sage@flab rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests_multiregion.yaml validater/lockdep.yaml} 1580s
428760 FAIL scheduled_sage@flab rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests_multiregion.yaml validater/valgrind.yaml} 2023s
428765 FAIL scheduled_sage@flab rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec.yaml tasks/rgw_s3tests_multiregion.yaml validater/lockdep.yaml} 1372s
428766 FAIL scheduled_sage@flab rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec.yaml tasks/rgw_s3tests_multiregion.yaml validater/valgrind.yaml} 3071s
428771 FAIL scheduled_sage@flab rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/replicated.yaml tasks/rgw_s3tests_multiregion.yaml validater/lockdep.yaml} 908s
428772 FAIL scheduled_sage@flab rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/replicated.yaml tasks/rgw_s3tests_multiregion.yaml validater/valgrind.yaml} 1590s
- Subject changed from rgw: multiregion tests failing to rgw: multiregion tests failing, s3tests.functional.test_s3.test_region_copy_object
- Assignee set to Yehuda Sadeh
- Status changed from New to Fix Under Review
- Status changed from Fix Under Review to Pending Backport
- Status changed from Pending Backport to Resolved
- Status changed from Resolved to 12
teuthology-2014-10-24_23:02:01-rgw-giant-distro-basic-multi/570701 fails with slow_backend: true on giant.
archive_path: /var/lib/teuthworker/archive/teuthology-2014-10-24_23:02:01-rgw-giant-distro-basic-multi/570701
branch: giant
description: rgw/verify/{clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml
  msgr-failures/few.yaml rgw_pool_type/ec.yaml tasks/rgw_s3tests_multiregion.yaml
  validater/valgrind.yaml}
email: ceph-qa@ceph.com
job_id: '570701'
kernel:
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: plana,burnupi,mira
name: teuthology-2014-10-24_23:02:01-rgw-giant-distro-basic-multi
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: giant
  ceph:
    conf:
      global:
        ms inject socket failures: 5000
        osd heartbeat grace: 40
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
        osd op thread timeout: 60
        osd sloppy crc: true
    fs: btrfs
    log-whitelist:
    - slow request
    sha1: b05efddb77290b86eb5c150776c761ab84f66f37
    valgrind:
      mds:
      - --tool=memcheck
      mon:
      - --tool=memcheck
      - --leak-check=full
      - --show-reachable=yes
      osd:
      - --tool=memcheck
  ceph-deploy:
    branch:
      dev: giant
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      flavor: notcmalloc
      sha1: b05efddb77290b86eb5c150776c761ab84f66f37
  rgw:
    ec-data-pool: true
    frontend: apache
  s3tests:
    branch: giant
    slow_backend: true
  workunit:
    sha1: b05efddb77290b86eb5c150776c761ab84f66f37
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
  - client.0
- - mon.b
  - osd.3
  - osd.4
  - osd.5
  - client.1
suite: rgw
suite_branch: giant
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_giant
tasks:
- chef: null
- clock.check: null
- install:
    flavor: notcmalloc
- ceph:
    conf:
      client.0:
        rgw gc pool: .rgw.gc.0
        rgw log data: true
        rgw log meta: true
        rgw region: zero
        rgw region root pool: .rgw.region.0
        rgw user keys pool: .users.0
        rgw user uid pool: .users.uid.0
        rgw zone: r0z1
        rgw zone root pool: .rgw.zone.0
      client.1:
        rgw gc pool: .rgw.gc.1
        rgw log data: false
        rgw log meta: false
        rgw region: one
        rgw region root pool: .rgw.region.1
        rgw user keys pool: .users.1
        rgw user uid pool: .users.uid.1
        rgw zone: r1z1
        rgw zone root pool: .rgw.zone.1
- rgw:
    client.0:
      system user:
        access key: 1te6NH5mcdcq0Tc5i8i2
        name: client0-system-user
        secret key: 1y4IOauQoL18Gp2zM7lC1vLmoawgqcYPbYGcWfXv
      valgrind:
      - --tool=memcheck
    client.1:
      system user:
        access key: 0te6NH5mcdcq0Tc5i8i2
        name: client1-system-user
        secret key: Oy4IOauQoL18Gp2zM7lC1vLmoawgqcYPbYGcWfXv
      valgrind:
      - --tool=memcheck
    default_idle_timeout: 300
    regions:
      one:
        api name: api1
        is master: false
        master zone: r1z1
        zones:
        - r1z1
      zero:
        api name: api1
        is master: true
        master zone: r0z1
        zones:
        - r0z1
- radosgw-agent:
    client.0:
      dest: client.1
      metadata-only: true
      src: client.0
- s3tests:
    client.0:
      idle_timeout: 300
      rgw_server: client.0
teuthology_branch: master
tube: multi
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.multi.3190
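Note the `slow_backend: true` override under `s3tests` in the job YAML above. The intended effect is roughly the following (a minimal sketch; the function name, the default of 30, and the 1200-second value are illustrative assumptions, not the actual teuthology/s3tests code):

```python
def effective_idle_timeout(config):
    """Illustrative only: pick an idle timeout for s3tests from a config
    dict shaped like the job YAML overrides, e.g.
    {'idle_timeout': 300, 'slow_backend': True}.
    The defaults (30) and the raised value (1200) are hypothetical."""
    if config.get('slow_backend'):
        # Slow backends (EC pools, cache tiering, valgrind) need a much
        # longer timeout so large copies such as the 100M object finish.
        return max(config.get('idle_timeout', 30), 1200)
    return config.get('idle_timeout', 30)

print(effective_idle_timeout({'slow_backend': True}))   # 1200
print(effective_idle_timeout({'idle_timeout': 300}))    # 300
```

The bug below is consistent with this override never reaching the giant branch of s3tests, so the short default timeout stays in effect.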
Also failing:
ubuntu@teuthology:/a/teuthology-2014-10-24_23:02:01-rgw-giant-distro-basic-multi/570719
ubuntu@teuthology:/a/teuthology-2014-10-24_23:02:01-rgw-giant-distro-basic-multi/570695
ubuntu@teuthology:/a/teuthology-2014-10-24_23:02:01-rgw-giant-distro-basic-multi/570713
In the latest run, it is still trying to copy the 100M object:
2014-10-25T22:51:31.611 INFO:teuthology.orchestra.run.mira056.stderr:boto: DEBUG: Final headers: {'Content-Length': '104857600', 'Content-MD5': 'WTf7FMpnjt1H/Kisvw8S0A==', 'Expect': '100-Continue', 'Date': 'Sun, 26 Oct 2014 05:41:14 GMT', 'User-Agent': 'Boto/2.33.0 Python/2.7.6 Linux/3.13.0-37-generic', 'Content-Type': 'application/octet-stream', 'Authorization': u'AWS ZHXXQEJRPISQBZZACDZH:qLb1dJ7UXvXOaN+8UpmmC6EUWxM='}
The HTTP timeout, however, is 30 seconds:
2014-10-25T22:51:31.614 INFO:teuthology.orchestra.run.mira056.stderr:boto: DEBUG: establishing HTTP connection: kwargs={'port': 7281, 'timeout': 30}
It seems that the slow_backend parameter has not been applied on the s3tests giant branch.
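The failure mode follows from simple arithmetic (the throughput figure below is an assumed example for illustration, not a measured value):

```python
# Why a fixed 30 s HTTP timeout cannot survive the 100M copy on a slow
# (EC / cache-tiered, valgrind-instrumented) backend.
object_size_mb = 100            # Content-Length: 104857600 in the boto log
http_timeout_s = 30             # kwargs={'port': 7281, 'timeout': 30}
assumed_throughput_mb_s = 2.0   # hypothetical slow-backend throughput

transfer_time_s = object_size_mb / assumed_throughput_mb_s
print(transfer_time_s)                    # 50.0
print(transfer_time_s > http_timeout_s)   # True: the socket times out first
```

At any throughput below object_size / timeout (here about 3.3 MB/s), the connection times out before the copy completes, which is exactly what slow_backend is meant to prevent by raising the timeout.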
- Status changed from 12 to Resolved