
Bug #9069

rgw tests reported as failed in teuthology-2014-08-11_10:35:04-upgrade:dumpling:rgw-dumpling---basic-vps suite

Added by Yuri Weinstein over 9 years ago. Updated over 9 years ago.

Status: Resolved
Priority: Urgent
Assignee: Sage Weil
Category: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-08-11_12:05:02-upgrade:dumpling-dumpling---basic-vps/417055/teuthology.log

Tests actually passed, but the run then failed (guessing) during the cleanup stage, at "Cleaning up apache directories...".

Could be addressed by increasing the timeouts - https://github.com/ceph/ceph-qa-suite/pull/79/files
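
Per the traceback below, the failure comes out of stop_daemons_of_type() -> daemon.stop() -> run.wait(), which polls for the rgw process to exit a bounded number of times before raising MaxWhileTries. A minimal sketch of that pattern (names and the 6-second poll interval are hypothetical, not teuthology's actual contextutil API; 50 tries x 6s matches the 300 seconds in the error), showing why raising the tries/sleep budget, as the PR above does, helps on slow VPS nodes:

import time

class MaxWhileTries(Exception):
    pass

def wait_for(check, tries=50, sleep=6):
    # Poll `check` until it returns True; with these defaults the loop
    # gives up after tries * sleep = 300 seconds, matching the
    # "reached maximum tries (50) after waiting for 300 seconds" error
    # in the log below. A larger `tries` gives the daemon more time to stop.
    for _ in range(tries):
        if check():
            return
        time.sleep(sleep)
    raise MaxWhileTries(
        'reached maximum tries ({t}) after waiting for {s} seconds'
        .format(t=tries, s=tries * sleep))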

2014-08-11T13:50:29.223 INFO:teuthology.orchestra.run.vpm031:Running: 'rm -f /home/ubuntu/cephtest/apache/apache.client.0.conf && rm -f /home/ubuntu/cephtest/apache/htdocs.client.0/rgw.fcgi'
2014-08-11T13:50:29.232 INFO:tasks.rgw:Cleaning up apache directories...
2014-08-11T13:50:29.233 INFO:teuthology.orchestra.run.vpm031:Running: 'rm -rf /home/ubuntu/cephtest/apache/tmp.client.0 && rmdir /home/ubuntu/cephtest/apache/htdocs.client.0'
2014-08-11T13:50:29.310 INFO:teuthology.orchestra.run.vpm031:Running: 'rmdir /home/ubuntu/cephtest/apache'
2014-08-11T13:50:29.382 ERROR:teuthology.parallel:Exception in parallel execution
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 82, in __exit__
    for result in self:
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/task/parallel.py", line 50, in _run_spawned
    mgr = run_tasks.run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 39, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/task/sequential.py", line 55, in task
    mgr.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_dumpling/tasks/rgw.py", line 601, in task
    yield
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/src/teuthology_master/teuthology/contextutil.py", line 37, in nested
    if exit(*exc):
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_dumpling/tasks/rgw.py", line 199, in start_rgw
    teuthology.stop_daemons_of_type(ctx, 'rgw')
  File "/home/teuthworker/src/teuthology_master/teuthology/misc.py", line 1044, in stop_daemons_of_type
    daemon.stop()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/daemon.py", line 45, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 418, in wait
    check_time()
  File "/home/teuthworker/src/teuthology_master/teuthology/contextutil.py", line 127, in __call__
    raise MaxWhileTries(error_msg)
MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
2014-08-11T13:50:29.383 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 51, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 39, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/task/parallel.py", line 43, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 89, in __exit__
    raise
MaxWhileTries: reached maximum tries (50) after waiting for 300 seconds
archive_path: /var/lib/teuthworker/archive/teuthology-2014-08-11_12:05:02-upgrade:dumpling-dumpling---basic-vps/417055
branch: dumpling
description: upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/cuttlefish.v0.67.1.yaml
  2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-mds-mon-osd.yaml 4-final/osdthrash.yaml}
email: ceph-qa@ceph.com
job_id: '417055'
last_in_suite: false
machine_type: vps
name: teuthology-2014-08-11_12:05:02-upgrade:dumpling-dumpling---basic-vps
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: dumpling
  ceph:
    conf:
      global:
        osd heartbeat grace: 40
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: xfs
    log-whitelist:
    - slow request
    - scrub
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: cff90b6310c3845218c130e269d92de492384291
  ceph-deploy:
    branch:
      dev: dumpling
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: cff90b6310c3845218c130e269d92de492384291
  s3tests:
    branch: dumpling
  workunit:
    sha1: cff90b6310c3845218c130e269d92de492384291
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mon.c
  - osd.3
  - osd.4
  - osd.5
  - client.0
suite: upgrade:dumpling
suite_branch: dumpling
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_dumpling
targets:
  ubuntu@vpm031.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCitNKQxkuD8yGAGapr3uZBZRcRc2bzAGZhz6O8ik7YCeT6iN9mLr4cGg6KFuydIV8QuE+QIcmYgfvaVMJjDZLjfBag5xSPa0v6EovIeVAbs4kJ0o9Vb5rvMlVaXkpWjisYpLz7KqzC8moYKgY0DUF/renEB4xxJKnfxxX4tmWE2vbnU4AJP2WDzaqJ50atNwdkHGCs77mCGveEawDTR6Y51Nuq95gkkPaeserWuex8OGCMSdDE+unRiLUrFkAWUTMebAEwc3KOkhU7rcNGvTfVfBx3ezA5RWq23hum+OeFrzYPbA45Ed6NzasnsjCldQtifzXfe8z+g4Upcw6NTCj
  ubuntu@vpm042.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDDtnTmLeUg+T+21ogsbu7NKXl4gqIQ/49iycgrT3WTn8DHO/Bql4wBDtq9ZCRPTYXxrRAZcVwT1LQ+/u/8swrv4wzrpl6xY8s7pci/IyMbSMbeiNfY0x9174OYIgcNmWa202DB3o+RXucnKkBf6+agi8w7PiHvaCkmoyeXuStSobWUmYHTrnAfUq/QCAnIsjEjl7yJnJhQQb9FjM4UZrRDOYRDjNo+bEFmSNAuoP4vZWJVLFd1gjV29L9nwGyLjdW8oRBDtSMYhHDs1jUgGjSzQ1HLZMW5KkbqjCMMu0hn1pzyGVA8EHsFIaDWTG+9cH4G7wX+DtlPC4nvHFfAq/L9
tasks:
- internal.lock_machines:
  - 2
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: cuttlefish
- ceph: null
- install.upgrade:
    all:
      tag: v0.67.1
- ceph.restart: null
- install.upgrade:
    all:
      branch: dumpling
- parallel:
  - workload
  - upgrade-sequence
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    thrash_primary_affinity: false
    timeout: 1200
- swift:
    client.0:
      rgw_server: client.0
teuthology_branch: master
tube: vps
upgrade-sequence:
  sequential:
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - rgw.client.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
  - sleep:
      duration: 30
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.19954
workload:
  sequential:
  - rgw:
    - client.0
  - s3tests:
      client.0:
        rgw_server: client.0
description: upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/cuttlefish.v0.67.1.yaml
  2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-mds-mon-osd.yaml 4-final/osdthrash.yaml}
duration: 1254.0433940887451
failure_reason: reached maximum tries (50) after waiting for 300 seconds
flavor: basic
owner: scheduled_teuthology@teuthology
success: false

Related issues

Related to Ceph - Bug #9040: clients can SEGV during package upgrade (Won't Fix)

History

#1 Updated by Sage Weil over 9 years ago

  • Status changed from New to 12
  • Assignee set to Sage Weil
  • Priority changed from Normal to Urgent

7585 ? Sl 0:05 radosgw -n client.0 -k /etc/ceph/ceph.client.0.keyring --rgw-socket-path /home/ubuntu/cephtest/apache/tmp.client.0/fastcgi_sock/rgw_sock --log-file /var/log/ceph/rgw.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.client.0.sock /home/ubuntu/cephtest/apache/apache.client.0.conf --foreground

ubuntu@burnupi17:/var/log/ceph$ less /var/log/ceph/rgw.client.0.log
/var/log/ceph/rgw.client.0.log: No such file or directory

radosgw is not opening a log file?

#2 Updated by Sage Weil over 9 years ago

oh.. it's not running as root.. or with daemon-helper.
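
For context, teuthology normally wraps test daemons in sudo and daemon-helper so they run as root (and can write under /var/log/ceph) and get terminated cleanly on shutdown. A rough sketch of that launch pattern, with an approximate argument list rather than the exact rgw.py code:

testdir = '/home/ubuntu/cephtest'   # hypothetical test dir
client = 'client.0'

# sudo: radosgw runs as root, so it can create /var/log/ceph/rgw.client.0.log.
# daemon-helper: kills the wrapped daemon when its stdin closes, which is what
# lets stop_daemons_of_type() actually terminate it within the timeout.
cmd = [
    'sudo',
    '{tdir}/daemon-helper'.format(tdir=testdir), 'term',
    'radosgw', '-n', client,
    '--log-file', '/var/log/ceph/rgw.{client}.log'.format(client=client),
    '--foreground',
]
print(' '.join(cmd))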

#3 Updated by Yuri Weinstein over 9 years ago

  • Source changed from other to Q/A

#4 Updated by Sage Weil over 9 years ago

  • Status changed from 12 to Resolved
