Bug #7397


All tests failed for upgrades:dumpling-next; Python error "ValueError: too many values to unpack"

Added by Yuri Weinstein about 10 years ago. Updated about 10 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: -
% Done: 0%
Source: other
Tags:
Backport:
Regression:
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

Here is one out of 600 jobs:

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-02-10_11:55:57-upgrades:dumpling-next-next-testing-basic-vps/74542/teuthology.log

2014-02-10T12:16:19.489 DEBUG:teuthology.misc:Ceph health: HEALTH_OK
2014-02-10T12:16:19.489 INFO:teuthology.run_tasks:Running task parallel...
2014-02-10T12:16:19.490 INFO:teuthology.task.parallel:starting parallel...
2014-02-10T12:16:19.490 INFO:teuthology.task.parallel:In parallel, running task sequential...
2014-02-10T12:16:19.490 INFO:teuthology.task.parallel:In parallel, running task sequential...
2014-02-10T12:16:19.491 INFO:teuthology.task.sequential:In sequential, running task install.upgrade...
2014-02-10T12:16:19.491 INFO:teuthology.task.install:project ceph config {'all': {'branch': 'next'}} overrides {'sha1': '39b393d7fa0e66c6e7a73ce6b5b45bbf3bb6980a'}
2014-02-10T12:16:19.491 INFO:teuthology.task.install:extra packages: []
2014-02-10T12:16:19.491 INFO:teuthology.task.install:config contains sha1|tag|branch, removing those keys from override
2014-02-10T12:16:19.492 INFO:teuthology.task.install:remote ubuntu@vpm003.front.sepia.ceph.com config {'branch': 'next'}
2014-02-10T12:16:19.492 DEBUG:teuthology.orchestra.run:Running [10.214.140.223]: 'sudo lsb_release -is'
2014-02-10T12:16:19.493 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-next/teuthology/run_tasks.py", line 31, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/teuthology-next/teuthology/run_tasks.py", line 19, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/teuthology-next/teuthology/task/parallel.py", line 43, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/teuthology-next/teuthology/parallel.py", line 88, in __exit__
    raise
ValueError: too many values to unpack
2014-02-10T12:16:19.753 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/inktank/teuthology/search?q=8da6426fb447451b9a62ee6226f0cb07
ValueError: too many values to unpack
2014-02-10T12:16:19.753 DEBUG:teuthology.run_tasks:Unwinding manager <contextlib.GeneratorContextManager object at 0x2d96b90>
2014-02-10T12:16:19.753 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-next/teuthology/contextutil.py", line 27, in nested
    yield vars
  File "/home/teuthworker/teuthology-next/teuthology/task/ceph.py", line 1363, in task
    yield
ValueError: too many values to unpack
archive_path: /var/lib/teuthworker/archive/teuthology-2014-02-10_11:55:57-upgrades:dumpling-next-next-testing-basic-vps/74542
description: upgrades/dumpling-next/parallel/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/rados-snaps-few-objects.yaml
  distros/centos_6.4.yaml}
email: null
job_id: '74542'
kernel:
  kdb: true
  sha1: 85c8a654d88cc1ce33eb1f3e1a2325b983a906be
last_in_suite: false
machine_type: vps
name: teuthology-2014-02-10_11:55:57-upgrades:dumpling-next-next-testing-basic-vps
nuke-on-error: true
os_type: centos
os_version: '6.4'
overrides:
  admin_socket:
    branch: next
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 5
    log-whitelist:
    - slow request
    sha1: 39b393d7fa0e66c6e7a73ce6b5b45bbf3bb6980a
  ceph-deploy:
    branch:
      dev: next
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 39b393d7fa0e66c6e7a73ce6b5b45bbf3bb6980a
  s3tests:
    branch: next
  workunit:
    sha1: 39b393d7fa0e66c6e7a73ce6b5b45bbf3bb6980a
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
  - client.1
targets:
  ubuntu@vpm003.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA04XFI+xxnmzW2gIXiKa+dxZXaOh6tS9RN8a/hZgB8p9HtcPZbxTtoQHL5e7QoH3Q4Jp9ArjuZYr9/mBv3ynohUfYFM0NGM9swCJy1Evj284/+cjG4nVjcEr0tOuzJ9lB+Xx7KG1mXKaBUdLjl6g2+lybS46tZ4nrRKlDP4j9UePo6OF4YcOMYT9ulY8B0XUS+y5rLeBALEu8P8PIdOd52pG05Xn4OlJPjrRhoR+bpd3P8yP8hrxAjbUXzyh3m86hckcsDU10f0VZdk/Q5gQSYjhO15Mn4aPWCs39NJ7Mak+o7J60Nn/xFbUqYyTxHLFcC4EHXfVjDUTi8RImrie1pQ==
  ubuntu@vpm013.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAn31EKuapFNANafg7rn0FrTniXCI50z0kgFlwL3xyc9+DRz1s5Z9CSuzC8KepXb1CqKVdDoFeY0FPsb7FY4SPWj5GCZ0vZWvQNDJyNm9+FhFMC3GPUCP02XvIal4m2XY+G4o7eV9jPVth4YPTd2OtlBSLHyKu4p9NM1GoEDYDI/vMx4FcJWRM9IWFe/zKaFjTDMZK1XKCvbt4xYfHxhDYTgnCGtq3wPQG09kUV/Z+J5Vjji6WsAXUQzUJCKKLOEUALn6rkIqkOt382MGVoyYfk8OIawPrSP2KNb/n0nP1EXMs6d1Fec1drteCWXIUWuYdaAFn7va0Ke2Kz6rhiZRCEQ==
  ubuntu@vpm014.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArGMR5NsSiOqJGay3b3R6MlhZ3dR1TtLZEF88LgWt77vMShJGZnEHY24GuGMxUlTB/YJJ+VqPNFpfE5lVjxbOuJPcegdAtUSKm+rUQYXbKbbx8WJ6LXyfMgF7bHmCv1xwXHUSc+onjjSGNjthv/E7QMqkYAR36fKWnm3Gkq5/OWifENhsLzgpR1TaZyXHr/5trxmqIxXasVEXNMj3qCbo8+uNXhfMDiLBKnmkwu7jXwqYkYN4sS44z3xzq+tx/fPK6iz1Wyig7CbeEEBE8gLfiis7cJvPmMT2HckRyhuwPUishVVcTwp690dQk4KcU9+38Cy9bpbNRLdPRTGzTFUdhQ==
tasks:
- internal.lock_machines:
  - 3
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- parallel:
  - workload
  - upgrade-sequence
- rados:
    clients:
    - client.1
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: next
upgrade-sequence:
  sequential:
  - install.upgrade:
      all:
        branch: next
  - ceph.restart:
    - mon.a
    - mon.b
    - mon.c
    - mds.a
    - osd.0
    - osd.1
    - osd.2
    - osd.3
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.23145
workload:
  sequential:
  - branch: dumpling
    clients:
      client.0:
      - rados/test.sh
      - cls
    workunit: null
description: upgrades/dumpling-next/parallel/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/rados-snaps-few-objects.yaml
  distros/centos_6.4.yaml}
duration: 477.1553189754486
failure_reason: too many values to unpack
flavor: basic
owner: scheduled_teuthology@teuthology
sentry_event: http://sentry.ceph.com/inktank/teuthology/search?q=8da6426fb447451b9a62ee6226f0cb07
success: false
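For context, "ValueError: too many values to unpack" is Python's generic error for assigning an iterable with more elements than there are target names. A minimal illustration (not teuthology code; the function name is made up for the example):

```python
def unpack_role(entry):
    """Hypothetical helper: expects an iterable of exactly two elements."""
    # Raises ValueError if `entry` has more (or fewer) than two elements.
    name, config = entry
    return name, config

# Two elements: unpacks cleanly.
print(unpack_role(("workload", {"branch": "dumpling"})))

# Three elements: raises the error seen in the traceback above.
try:
    unpack_role(("workload", {"branch": "dumpling"}, "extra"))
except ValueError as e:
    print(e)  # "too many values to unpack" (Python 3 appends "(expected 2)")
```

In the traceback this happens somewhere under `parallel.task`, so the config entry being unpacked presumably had a different shape than the task code expected.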
Actions #1

Updated by Yuri Weinstein about 10 years ago

I looked into this issue a bit; here is an update.
I added

interactive-on-error: true

to the yaml config, looked at the ctx stack, and after exiting the prompt the test finished with success status (though the next time I ran it, it failed)!
I hope this helps debug and fix the root issue.

Actions #2

Updated by Anonymous about 10 years ago

I think the pass after interactive-on-error occurring is spurious. I suspect that when the problem occurred, teuthology had not yet detected a failure. Afterwards it cleaned things up and exited, and since no failure had been recorded, teuthology reported that everything passed.

Mind you, this comment is based on things I have seen in the past, not on looking at this specific problem. If anything, I think we need to file another bug about how teuthology gives 'false okays' when this happens.

Actions #3

Updated by Zack Cerza about 10 years ago

I'm not sure why success would revert to True after interactive-on-error, but by the very definition of interactive-on-error, hitting it means the test has failed.

Actions #4

Updated by Zack Cerza about 10 years ago

Ignore this; I commented on the wrong ticket.

Actions #5

Updated by Anonymous about 10 years ago

  • Assignee set to Anonymous

I think that I have found this problem.

Actions #6

Updated by Yuri Weinstein about 10 years ago

  • Severity changed from 3 - minor to 2 - major

Similar issue #7225
Also, we ran the newly merged tests for upgrade/dumpling-x, and they failed with the same error.

Actions #7

Updated by Anonymous about 10 years ago

  • Assignee changed from Anonymous to Ian Colle

The parallel code used to set an empty dictionary value and then reference it; now it only references the dictionary entry if it exists.
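A sketch of the kind of change described above; the function and parameter names here are illustrative, not the actual teuthology code:

```python
def lookup_entry_old(ctx_config, name):
    # Old behavior: unconditionally create an empty entry, then reference it.
    # Downstream code that later unpacks this entry can blow up on the
    # unexpected empty/placeholder value.
    ctx_config.setdefault(name, {})
    return ctx_config[name]

def lookup_entry_new(ctx_config, name):
    # New behavior: only reference the entry if it already exists;
    # otherwise leave the config untouched and return nothing.
    if name in ctx_config:
        return ctx_config[name]
    return None
```

The point of the change is that a missing entry is now handled explicitly instead of being silently created, so nothing downstream tries to unpack a value that was never really configured.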

Actions #8

Updated by Ian Colle about 10 years ago

  • Assignee changed from Ian Colle to Anonymous
Actions #9

Updated by Anonymous about 10 years ago

  • Status changed from New to Fix Under Review
  • Assignee changed from Anonymous to Zack Cerza

I need to get this reviewed.

Actions #10

Updated by Ian Colle about 10 years ago

Please include a link to the PR or wip branch.

Actions #11

Updated by Anonymous about 10 years ago

The pull request is #204 for branch wip-fix7397-wusui

Actions #12

Updated by Zack Cerza about 10 years ago

Warren, this is a link to the PR...
https://github.com/ceph/teuthology/pull/204

Actions #13

Updated by Zack Cerza about 10 years ago

  • Status changed from Fix Under Review to Resolved

Forgot to close this 24 days ago...
