Bug #13656 (closed)

Confusing behavior with volume detachment and instance deletion

Added by Zack Cerza over 8 years ago. Updated over 8 years ago.

Status: Resolved
Priority: Normal
Assignee: Loïc Dachary
% Done: 0%
Source: other
Regression: No
Severity: 3 - minor

Description

http://teuthology.ovh.sepia.ceph.com/teuthology/teuthology-2015-10-29_21:00:02-rados-hammer-distro-basic-openstack/10706/teuthology.log
In the first traceback, we detach the volumes, delete the instance, and then delete the volumes. The volume deletes fail because the volumes are reported as still attached - presumably openstack server remove volume returns before the detach has actually completed. /facepalm

So we retry, beginning with volume detachment - which fails in turn, because the instance is now in task_state deleting.

Uh?

2015-10-29T22:31:58.500 DEBUG:teuthology.provision:ProvisionOpenStack:destroy target088034
2015-10-29T22:31:58.500 DEBUG:teuthology.misc:openstack server show -f json target088034
2015-10-29T22:32:05.211 DEBUG:teuthology.misc:openstack server show -f json target088034 output [{"Field": "OS-DCF:diskConfig", "Value": "MANUAL"}, {"Field": "OS-EXT-AZ:availability_zone", "Value": "nova"}, {"Field": "OS-EXT... (truncated to the first 128 characters)
2015-10-29T22:32:05.212 DEBUG:teuthology.misc:openstack server remove volume target088034 84a02288-7619-41f7-a59c-72244b8b28a8
2015-10-29T22:32:10.519 DEBUG:teuthology.misc:openstack server remove volume target088034 a90401ac-739c-4876-9177-688b1e638e03
2015-10-29T22:32:16.375 DEBUG:teuthology.misc:openstack server remove volume target088034 76da253a-7ee7-46c7-ac86-9214b3da945b
2015-10-29T22:32:22.013 DEBUG:teuthology.misc:openstack server delete target088034
2015-10-29T22:32:25.892 DEBUG:teuthology.misc:openstack volume delete 84a02288-7619-41f7-a59c-72244b8b28a8
2015-10-29T22:32:28.766 DEBUG:teuthology.misc:openstack volume delete 84a02288-7619-41f7-a59c-72244b8b28a8 failed with Volume 84a02288-7619-41f7-a59c-72244b8b28a8 is still attached, detach volume first. (HTTP 400) (Request-ID: req-8bfcde87-c2ef-4506-a85f-f64817b0fff7)

2015-10-29T22:32:28.767 ERROR:teuthology.run_tasks:Manager failed: internal.lock_machines
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 125, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/src/teuthology_master/teuthology/task/internal.py", line 191, in lock_machines
    lock.unlock_one(ctx, machine, ctx.owner, ctx.archive)
  File "/home/teuthworker/src/teuthology_master/teuthology/lock.py", line 512, in unlock_one
    if not provision.destroy_if_vm(ctx, name, user, description):
  File "/home/teuthworker/src/teuthology_master/teuthology/provision.py", line 436, in destroy_if_vm
    decanonicalize_hostname(machine_name))
  File "/home/teuthworker/src/teuthology_master/teuthology/provision.py", line 376, in destroy
    misc.sh("openstack volume delete " + volume)
  File "/home/teuthworker/src/teuthology_master/teuthology/misc.py", line 1329, in sh
    output=output,
CalledProcessError: Command 'openstack volume delete 84a02288-7619-41f7-a59c-72244b8b28a8' returned non-zero exit status 1
2015-10-29T22:32:28.771 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CalledProcessError: Command 'openstack volume delete 84a02288-7619-41f7-a59c-72244b8b28a8' returned non-zero exit status 1
2015-10-29T22:32:28.773 INFO:teuthology.nuke:Checking targets against current locks
2015-10-29T22:32:30.611 DEBUG:teuthology.provision:ProvisionOpenStack: {'subnet': '149.202.160.0/19', 'user-data': '/home/teuthworker/src/teuthology_master/teuthology/openstack/openstack-{os_type}-{os_version}-user-data.txt', 'ip': '149.202.184.32', 'clone': 'git clone http://github.com/ceph/teuthology', 'machine': {'disk': 20, 'ram': 8000, 'cpus': 1}, 'volumes': {'count': 0, 'size': 1}, 'flavor-select-regexp': '^(vps|eg)-', 'nameserver': '149.202.184.32'}
2015-10-29T22:32:30.612 DEBUG:teuthology.provision:ProvisionOpenStack:destroy target088034
2015-10-29T22:32:30.612 DEBUG:teuthology.misc:openstack server show -f json target088034
2015-10-29T22:32:36.176 DEBUG:teuthology.misc:openstack server show -f json target088034 output [{"Field": "OS-DCF:diskConfig", "Value": "MANUAL"}, {"Field": "OS-EXT-AZ:availability_zone", "Value": "nova"}, {"Field": "OS-EXT... (truncated to the first 128 characters)
2015-10-29T22:32:36.177 DEBUG:teuthology.misc:openstack server remove volume target088034 76da253a-7ee7-46c7-ac86-9214b3da945b
2015-10-29T22:32:41.295 DEBUG:teuthology.misc:openstack server remove volume target088034 76da253a-7ee7-46c7-ac86-9214b3da945b failed with Cannot 'detach_volume' while instance is in task_state deleting (HTTP 409) (Request-ID: req-36d3d2d6-f156-4955-9d41-4e6fac6963f2)

2015-10-29T22:32:41.301 CRITICAL:root:  File "/home/teuthworker/src/teuthology_master/virtualenv/bin/teuthology", line 9, in <module>
    load_entry_point('teuthology==0.1.0', 'console_scripts', 'teuthology')()
  File "/home/teuthworker/src/teuthology_master/scripts/run.py", line 34, in main
    teuthology.run.main(args)
  File "/home/teuthworker/src/teuthology_master/teuthology/run.py", line 354, in main
    report_outcome(config, archive, fake_ctx.summary, fake_ctx)
  File "/home/teuthworker/src/teuthology_master/teuthology/run.py", line 225, in report_outcome
    nuke(fake_ctx, fake_ctx.lock)
  File "/home/teuthworker/src/teuthology_master/teuthology/nuke.py", line 451, in nuke
    for unnuked in p:
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 101, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/teuthology_master/teuthology/parallel.py", line 19, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/nuke.py", line 483, in nuke_one
    unlock_one(ctx, target.keys()[0], ctx.owner)
  File "/home/teuthworker/src/teuthology_master/teuthology/lock.py", line 512, in unlock_one
    if not provision.destroy_if_vm(ctx, name, user, description):
  File "/home/teuthworker/src/teuthology_master/teuthology/provision.py", line 436, in destroy_if_vm
    decanonicalize_hostname(machine_name))
  File "/home/teuthworker/src/teuthology_master/teuthology/provision.py", line 373, in destroy
    (name_or_id, volume))
  File "/home/teuthworker/src/teuthology_master/teuthology/misc.py", line 1329, in sh
    output=output,

2015-10-29T22:32:41.302 CRITICAL:root:CalledProcessError
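
For what it's worth, one way to close the race would be to wait, after each detach, for Cinder to report the volume as available before deleting anything. A minimal sketch in Python - the sh helper (in the spirit of teuthology.misc.sh), the wait_for_volume_status name, and the timeout values are all hypothetical:

    import subprocess
    import time

    def sh(cmd):
        # Hypothetical shell helper in the spirit of teuthology.misc.sh.
        return subprocess.check_output(cmd, shell=True)

    def wait_for_volume_status(volume_id, expected='available',
                               timeout=300, interval=5):
        # Hypothetical helper: poll Cinder until the volume reaches the
        # expected status, since 'openstack server remove volume' returns
        # before the detach has actually completed.
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = sh("openstack volume show -f value -c status "
                        + volume_id).decode().strip()
            if status == expected:
                return
            time.sleep(interval)
        raise RuntimeError("volume %s never reached status %s"
                           % (volume_id, expected))

    def destroy(name, volume_ids):
        # Detach each volume and wait for the detach to finish before
        # deleting the instance or the volumes themselves.
        for volume_id in volume_ids:
            sh("openstack server remove volume %s %s" % (name, volume_id))
            wait_for_volume_status(volume_id)
        sh("openstack server delete " + name)
        for volume_id in volume_ids:
            sh("openstack volume delete " + volume_id)
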
Actions #1

Updated by Zack Cerza over 8 years ago

And, of course, openstack server remove volume doesn't take a --wait flag.
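
One workaround is to poll openstack volume show until the volume reports available, as sketched in the description above. Another would be to skip the explicit detach entirely - a sketch, assuming this version of openstackclient supports server delete --wait and that Nova detaches the volumes as part of instance deletion:

    import subprocess

    def destroy_delete_first(name, volume_ids):
        # Hypothetical alternative: delete the server first and block until
        # it is gone; the volumes are detached as a side effect of the
        # deletion, so the volume deletes below no longer race the detach.
        subprocess.check_call("openstack server delete --wait " + name,
                              shell=True)
        for volume_id in volume_ids:
            subprocess.check_call("openstack volume delete " + volume_id,
                                  shell=True)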

Actions #2

Updated by Zack Cerza over 8 years ago

  • Assignee set to Loïc Dachary

Loic, I'm lost here; any sense of what we might do to solve this?

Actions #3

Updated by Loïc Dachary over 8 years ago

  • Status changed from New to In Progress
Actions #4

Updated by Loïc Dachary over 8 years ago

  • Status changed from In Progress to Resolved