Bug #14514


vps leaked after teuthology run

Added by Orit Wasserman over 8 years ago. Updated about 8 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

The VPS nodes failed to unlock long after the test suite completed.
When unlocking them manually, it asked for 's password,
resulting in a destroy failure.
In spite of this error, the machines were unlocked.


Files

leaked_vps_description.json (41.1 KB) — teuthology-lock --list output for the leaked nodes. Josh Durgin, 01/26/2016 04:09 PM
Actions #1

Updated by Josh Durgin over 8 years ago

The leaked VPSes were spread across a variety of hosts:

$ grep vm_host leaked_vps_description.json | sort | uniq -c
      1         "vm_host": "mira001.front.sepia.ceph.com", 
      4         "vm_host": "mira003.front.sepia.ceph.com", 
      1         "vm_host": "mira006.front.sepia.ceph.com", 
      3         "vm_host": "mira007.front.sepia.ceph.com", 
      2         "vm_host": "mira008.front.sepia.ceph.com", 
      3         "vm_host": "mira009.front.sepia.ceph.com", 
      1         "vm_host": "mira010.front.sepia.ceph.com", 
      4         "vm_host": "mira013.front.sepia.ceph.com", 
      2         "vm_host": "mira014.front.sepia.ceph.com", 
      5         "vm_host": "mira017.front.sepia.ceph.com", 
      2         "vm_host": "mira020.front.sepia.ceph.com", 
      1         "vm_host": "mira024.front.sepia.ceph.com", 
      1         "vm_host": "mira036.front.sepia.ceph.com", 
      6         "vm_host": "mira043.front.sepia.ceph.com", 
      3         "vm_host": "mira044.front.sepia.ceph.com", 
      1         "vm_host": "mira079.front.sepia.ceph.com", 
      2         "vm_host": "mira098.front.sepia.ceph.com",

Attached are the descriptions from before they were unlocked.
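The same per-host tally can also be computed by parsing the attachment rather than grepping it. A minimal Python sketch, assuming leaked_vps_description.json is a JSON array of node records each carrying a "vm_host" field (the inline sample below is illustrative, not taken from the attachment):

```python
import json
from collections import Counter

# Illustrative sample mimicking the assumed structure of
# leaked_vps_description.json: a list of node records with a "vm_host" key.
records = json.loads("""
[
  {"name": "vpm001.front.sepia.ceph.com", "vm_host": "mira003.front.sepia.ceph.com"},
  {"name": "vpm002.front.sepia.ceph.com", "vm_host": "mira003.front.sepia.ceph.com"},
  {"name": "vpm003.front.sepia.ceph.com", "vm_host": "mira017.front.sepia.ceph.com"}
]
""")

# Count how many leaked VPSes each VM host was running.
counts = Counter(rec["vm_host"] for rec in records)
for host, n in counts.most_common():
    print(n, host)
```

In practice the records would be loaded from the file with json.load(open("leaked_vps_description.json")); the output matches the grep | sort | uniq -c tally above.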

Actions #2

Updated by Dan Mick about 8 years ago

  • Status changed from New to Closed

This was probably a collision between VM host maintenance and old stale jobs. If it happens again, reopen or re-report.
