Bug #9899

Error "coverage ceph osd pool get '' pg_num" in upgrade:dumpling-dumpling-distro-basic-multi run

Added by Yuri Weinstein over 9 years ago. Updated over 9 years ago.

Status:
Resolved
Priority:
High
Assignee:
Target version:
-
% Done:

0%

Source:
Q/A
Severity:
3 - minor

Description

This seems related to rgw and the 3-upgrade-sequence/upgrade-osd-mon-mds.yaml configuration.

teuthology@teuthology:~$ teuthology-ls /a/teuthology-2014-10-26_10:00:03-upgrade:dumpling-dumpling-distro-basic-multi | grep FAIL | grep rgw
571865 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.1.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 971s
571868 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.3.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 997s
571869 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.5.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 1189s
571870 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.7.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 1013s
571871 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.9.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 1213s
Failure: Command failed on mira094 with status 22: "adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get '' pg_num" 

Jobs: ['571865', '571868', '571869', '571870', '571871']
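Status 22 in the failure above is the exit status the command returned. A quick check (a general Python lookup, not tied to this run) confirms what that number means:

```python
import errno
import os

# Exit status 22 corresponds to EINVAL ("Invalid argument"), which is
# what `ceph osd pool get` returns when handed an empty pool name.
print(errno.errorcode[22], os.strerror(22))
```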

One example:

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-10-26_10:00:03-upgrade:dumpling-dumpling-distro-basic-multi/571865/

archive_path: /var/lib/teuthworker/archive/teuthology-2014-10-26_10:00:03-upgrade:dumpling-dumpling-distro-basic-multi/571865
branch: dumpling
description: upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.1.yaml
  2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml}
email: ceph-qa@ceph.com
job_id: '571865'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: plana,mira,burnupi
name: teuthology-2014-10-26_10:00:03-upgrade:dumpling-dumpling-distro-basic-multi
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: dumpling
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: xfs
    log-whitelist:
    - slow request
    - scrub
    sha1: 6a90775dfecd6cb05486c49716afbd9c98c28446
  ceph-deploy:
    branch:
      dev: dumpling
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 6a90775dfecd6cb05486c49716afbd9c98c28446
  s3tests:
    branch: dumpling
  workunit:
    sha1: 6a90775dfecd6cb05486c49716afbd9c98c28446
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mon.c
  - osd.3
  - osd.4
  - osd.5
- - client.0
suite: upgrade:dumpling
suite_branch: dumpling
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_dumpling
targets:
  ubuntu@burnupi61.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5CaIa5kft0KDWomzDS5eW2im5dSyK/Re8W1M3u/Y0/1T02C7Fbcv2wAi4TgCjmnMLdCv43TVSGpHkkWAsZH0T1OTywV9npNpqyf0qFq73zzepI3yR5QwHT0jmMd4DFQPgMtLXoK1SK8AZmfkJ5QzLPQcp0CMJTvhnSJkyOcVqd7rLI7cQyZOHcbrqdbQ2pKp2Yg2eunWpUsjr1VVu/YNfg787cCAJMnpzy+QbCGaF7K1+UdQEhZWs89Hr/HrFTBHuf0dYGyne+F5d3XFxyRWUnZujf3jKlK1FQJ/Jtq59DkVrh/HGXe46mUlQUTeBUv31HwsgSyi2PZgYWQixHY0J
  ubuntu@mira094.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC4Lh3Tj3b+Ece4WHL9w4WIJuUmaRhcMZtiLVbttvXUHBRer9GcCr/BUbwOlBtMCkfeq45oPOyAyjOxhLGv3gDAziKw75yNKRhkGTn0rYYFiqonhh8PQCv4xe9ZfDejblLp15Bftf3Iakyh6GEdbGGTVaNY3Usrzlz2gaXdna6ldji25+0W6YqVWRPTNmBYtxgoCmk4W60yDMpvmEpDvrbIx660BzmbpvZ30zJwVE0rdev93dbiWl13KhOwXUwYfGgBm16om+g0dsrXK4mymIVuoNod/9ChsPA4QdhrXm0MsSqdTzvt2MCxs05UrF1GzCywcCLNNh8s8NwILfbtDzAn
  ubuntu@plana59.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDaHG83zRXo6ydv6IGWDFTf6YNjWG9M5LRbbYIpPXKOqCg9zfI/4ZjymLpznESFIACVrqe06jqD7uvsQPOlbcm3W/H44su70C21KrzMs77IpskMT7tYgCzY75uxbwg949qYIRf1SEY2RW0Bf2zldbOeKAY/TcnGIkLtc4NCIDPfCxMG0rAJJgUAwbvbKVUqLKe/jcyu3RiiAxV3TGjTAzTz+XHwT46gDXB5Fxt49Sfx+AgpILHk7DvN/HILtU3gRT9ac0D2WlQi1sJLDgjeTAZxyfpRR5iZH4tWYBFIS7C4ugHYye95zUYTc/3Jt364Jl/giUherGjE5od7p65VjxRJ
tasks:
- internal.lock_machines:
  - 3
  - plana,mira,burnupi
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.push_inventory: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    tag: v0.67.1
- ceph: null
- install.upgrade:
    mon.a:
      branch: dumpling
    mon.b:
      branch: dumpling
- rgw:
  - client.0
- parallel:
  - workload
  - upgrade-sequence
- mon_thrash:
    revive_delay: 20
    thrash_delay: 1
- install.upgrade:
    client.0:
      branch: dumpling
- ceph.restart:
  - rgw.client.0
- sleep:
    duration: 60
- swift:
    client.0:
      rgw_server: client.0
teuthology_branch: master
tube: multi
upgrade-sequence:
  sequential:
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.multi.3184
workload:
  sequential:
  - s3tests:
      client.0:
        rgw_server: client.0
description: upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.1.yaml
  2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml}
duration: 971.2150490283966
failure_reason: 'Command failed on mira094 with status 22: "adjust-ulimits ceph-coverage
  /home/ubuntu/cephtest/archive/coverage ceph osd pool get '''' pg_num"'
flavor: basic
owner: scheduled_teuthology@teuthology
status: fail
success: false

Related issues (1 total: 0 open, 1 closed)

Related to rgw - Bug #8311: No pool name error in ubuntu-2014-05-06_21:02:54-upgrade:dumpling-dumpling-testing-basic-vps (Resolved, Yuri Weinstein, 05/08/2014)

Actions #2

Updated by Samuel Just over 9 years ago

It does appear to be trying to get pg_num for the empty name pool. Is that deliberate?

Actions #3

Updated by Samuel Just over 9 years ago

Is this that bug where radosgw can create a pool with an empty name?

Actions #4

Updated by Yuri Weinstein over 9 years ago

Yes, Tamil said we have such a case with empty pool name.
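If the test iterates over the pool listing, one defensive option (a hypothetical sketch, not the actual teuthology code; `pools_to_check` is an invented name) is to skip empty pool names before running `ceph osd pool get <pool> pg_num`:

```python
def pools_to_check(pool_names):
    """Filter out empty pool names before querying pg_num.

    A dumpling-era radosgw bug could leave a pool with an empty name in
    the listing, and `ceph osd pool get '' pg_num` then fails with
    EINVAL (exit status 22).
    """
    return [name for name in pool_names if name]


# Example: the empty name from the buggy listing is dropped.
print(pools_to_check(['rbd', '', '.rgw.buckets']))
```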

Actions #5

Updated by Yuri Weinstein over 9 years ago

It's probably a duplicate of #8311, but it affects other releases as well:

teuthology@teuthology:~$ teuthology-ls /a/teuthology-2014-11-02_10:00:02-upgrade:dumpling-dumpling-distro-basic-multi | grep testrgw | grep FAIL
582949 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.1.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 1077s
582952 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.3.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 972s
582953 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.5.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 1133s
582954 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.7.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 1107s
582955 FAIL scheduled_teuthology@teuthology upgrade:dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/v0.67.9.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/monthrash.yaml} 1115s

Actions #6

Updated by Yuri Weinstein over 9 years ago

  • Project changed from Ceph to rgw
Actions #7

Updated by Yuri Weinstein over 9 years ago

  • Assignee set to Yehuda Sadeh
  • Priority changed from Urgent to High

Yehuda, can you take a look, please?

Actions #9

Updated by Yuri Weinstein over 9 years ago

  • Assignee deleted (Yehuda Sadeh)
Actions #10

Updated by Sage Weil over 9 years ago

This bug was fixed in 0.80.3 or 0.80.4. I think we need to make the 'older' tests skip the mon_thrash tests.

Actions #11

Updated by Yuri Weinstein over 9 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Sage Weil

Per Sage, removed the mon_thrash tests from the rgw/ section: https://github.com/ceph/ceph-qa-suite/pull/230
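The effect of that change can be pictured as dropping mon_thrash from the job's task list (a hypothetical helper for illustration only; the actual PR edits the suite's YAML files, and `drop_task` is an invented name):

```python
def drop_task(tasks, name):
    """Remove every entry whose key matches `name` from a
    teuthology-style task list (a list of single-key dicts)."""
    return [t for t in tasks if name not in t]


tasks = [{'rgw': ['client.0']},
         {'mon_thrash': {'revive_delay': 20, 'thrash_delay': 1}},
         {'swift': {'client.0': {'rgw_server': 'client.0'}}}]
print(drop_task(tasks, 'mon_thrash'))
```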

Actions #12

Updated by Sage Weil over 9 years ago

  • Status changed from Fix Under Review to Resolved
