Cleanup #16825
vpm061 has odd downburst error
Status: Closed
Description
./virtualenv/bin/teuthology-lock --lock-many 1 --os-type ubuntu --os-version 14.04 --machine-type vps

2016-07-26 16:42:59,036.036 INFO:teuthology.provision.downburst:Provisioning a ubuntu 14.04 vps
2016-07-26 16:42:59,824.824 INFO:teuthology.provision.downburst:Downburst failed on ubuntu@vpm061.front.sepia.ceph.com:
/home/wusui/downburst/virtualenv/local/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.
  SNIMissingWarning
/home/wusui/downburst/virtualenv/local/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
libvirt: Storage Driver error : cannot stat file '/srv/libvirtpool/vpm061/ubuntu-14.04-20140226.1-cloudimg-amd64.img': No such file or directory
Traceback (most recent call last):
  File "/home/wusui/downburst/virtualenv/bin/downburst", line 9, in <module>
    load_entry_point('downburst==0.0.1', 'console_scripts', 'downburst')()
  File "/home/wusui/downburst/downburst/cli.py", line 64, in main
    return args.func(args)
  File "/home/wusui/downburst/downburst/create.py", line 97, in create
    vol, raw = image.ensure_cloud_image(pool=pool, distro=distro, distroversion=distroversion, arch=arch, forcenew=args.forcenew)
  File "/home/wusui/downburst/downburst/image.py", line 113, in ensure_cloud_image
    name = find_cloud_image(pool=pool, distro=distro, distroversion=distroversion, arch=arch)
  File "/home/wusui/downburst/downburst/image.py", line 73, in find_cloud_image
    names = list(names)
  File "/home/wusui/downburst/downburst/image.py", line 57, in list_cloud_images
    if not remove_image_if_corrupt(pool, name):
  File "/home/wusui/downburst/downburst/image.py", line 22, in remove_image_if_corrupt
    size = vol.info()[1]
  File "/home/wusui/downburst/virtualenv/local/lib/python2.7/site-packages/libvirt.py", line 3039, in info
    if ret is None: raise libvirtError ('virStorageVolGetInfo() failed', vol=self)
libvirt.libvirtError: cannot stat file '/srv/libvirtpool/vpm061/ubuntu-14.04-20140226.1-cloudimg-amd64.img': No such file or directory
2016-07-26 16:42:59,825.825 ERROR:teuthology.lock:Unable to create virtual machine: ubuntu@vpm061.front.sepia.ceph.com
2016-07-26 16:43:00,525.525 ERROR:teuthology.provision.downburst:Error destroying vpm061.front.sepia.ceph.com: libvirt: QEMU Driver error : Domain not found: no domain with matching name 'vpm061'
2016-07-26 16:43:00,526.526 ERROR:teuthology.lock:destroy failed for vpm061.front.sepia.ceph.com
2016-07-26 16:43:00,567.567 INFO:teuthology.lock:unlocked vpm061.front.sepia.ceph.com
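The traceback shows remove_image_if_corrupt calling vol.info() on a volume that libvirt still lists but whose backing file is gone, so the libvirtError propagates instead of the volume being treated as corrupt. A minimal sketch of a more defensive size check, assuming a helper like this (hypothetical; the real downburst code differs, and LibvirtError below is a self-contained stand-in for libvirt.libvirtError):

```python
# Stand-in for libvirt.libvirtError so this sketch runs without the
# libvirt bindings installed; real code would catch libvirt.libvirtError.
class LibvirtError(Exception):
    pass


def volume_size_or_none(vol):
    """Return the volume's capacity in bytes, or None if the volume's
    backing file has vanished and vol.info() raises.  A None result lets
    the caller treat the volume as stale/corrupt instead of crashing the
    whole image listing, as happened on vpm061."""
    try:
        # libvirt's virStorageVol.info() returns (type, capacity, allocation)
        return vol.info()[1]
    except LibvirtError:
        return None
```

With this shape, list_cloud_images could skip (or delete) volumes whose size comes back as None rather than aborting the provisioning run.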
I left it marked down because a scheduled job grabbed it before I could look into it. A preliminary look showed one ubuntu image in the pool, but not the requested one.
Updated by Dan Mick almost 8 years ago
- Category set to Test Node
- Source changed from other to Q/A
Updated by David Galloway almost 8 years ago
I think I remember seeing this error during my mira drive party last week. It may be lab-wide and not just limited to a single VPS host.
Updated by Yuri Weinstein almost 8 years ago
See more in http://pulpito.ceph.com/teuthology-2016-07-26_08:18:30-smoke-master-distro-basic-vps/
334828, 334829, 334830
Updated by Dan Mick over 7 years ago
The problem was a corrupted storage pool for vpm061: libvirt was reporting that it contained many images, but it only contained one. A manual "virsh pool-refresh vpm061" on the vmhost (mira010) resolved the issue.
mira010 has errors on some of its drives. I don't know how libvirt got out of sync.
Updated by Dan Mick over 7 years ago
Yuri Weinstein wrote:
See more in http://pulpito.ceph.com/teuthology-2016-07-26_08:18:30-smoke-master-distro-basic-vps/
334828, 334829, 334830
All those jobs passed?
Updated by Dan Mick over 7 years ago
Running a loop to visit all 25 vmhosts and refresh all defined pools.
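A sketch of what such a refresh loop could look like with the libvirt Python bindings (an assumption: the actual loop was likely shell over ssh running "virsh pool-refresh"; the function name and qemu+ssh URI below are illustrative, not from the ticket):

```python
def refresh_all_pools(conn):
    """Refresh every storage pool visible on a libvirt connection and
    return the names refreshed.  `conn` is anything shaped like a
    libvirt.virConnect: listAllStoragePools(flags) yields pool objects
    with .name() and .refresh(flags)."""
    refreshed = []
    for pool in conn.listAllStoragePools(0):
        pool.refresh(0)  # same effect as `virsh pool-refresh <name>`
        refreshed.append(pool.name())
    return refreshed


# Against the real vmhosts it might be driven like this (assumes
# qemu+ssh access to each host; not runnable outside the lab):
#
#   import libvirt
#   for host in ('mira010', ...):
#       conn = libvirt.open('qemu+ssh://%s/system' % host)
#       refresh_all_pools(conn)
#       conn.close()
```

Keeping the iteration generic over the connection object also makes the loop easy to exercise against fakes before pointing it at 25 production vmhosts.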
Updated by Dan Mick over 7 years ago
- Tracker changed from Bug to Cleanup
- Status changed from New to Resolved