Bug #17943: OVH nodes are coming up in Error state
Status: Closed
% Done: 0%
Regression: No
Severity: 3 - minor
Description
Error: Failed to launch instance "ceph-docker-registry": Please try again later [Error: No valid host was found. ].
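Errors like "No valid host was found" and "Please try again later" are transient scheduler failures, so one mitigation is to retry the launch call with backoff instead of failing immediately. A minimal generic sketch (the `launch` callable, its parameters, and the retry policy are hypothetical illustrations, not teuthology's or OpenStack's actual API):

```python
import time


def retry_launch(launch, attempts=5, delay=1.0, backoff=2.0):
    """Retry a flaky instance-launch call with exponential backoff.

    `launch` is any zero-argument callable that raises on transient
    scheduler errors (e.g. "No valid host was found") and returns an
    instance handle on success.
    """
    last_exc = None
    for i in range(attempts):
        try:
            return launch()
        except Exception as exc:  # real code would catch the client's specific error type
            last_exc = exc
            # Sleep 1s, 2s, 4s, ... between attempts by default.
            time.sleep(delay * (backoff ** i))
    # All attempts failed; surface the last error to the caller.
    raise last_exc
```

This doesn't help when the region is genuinely out of capacity (as here), but it smooths over the intermittent "try again later" responses.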
Updated by Alfredo Deza over 7 years ago
November 21st: Still an issue.
Flavors: hg-30-ssd
Region: SBG1
Updated by Nathan Cutler over 7 years ago
Related to http://tracker.ceph.com/issues/17952
Updated by Alfredo Deza over 7 years ago
Update(s) from OVH at: http://travaux.ovh.com/?do=details&id=20859
From OVH directly:
We plan to build over 1,300 hosts in the coming weeks to catch up on our stock-provisioning backlog (all regions included).
Updated by Alfredo Deza over 7 years ago
We had about a week's worth of OK nodes. We are now backed up again.
Updated by Andrew Schoen over 7 years ago
We're getting a new error back now from the UI.
Error: Failed to launch instance "xenial_trusty_pbuilder_huge__25899bcc-8886-4e30-80db-39fd4cb4ec2b": Please try again later [Error: Timed out waiting for a reply to message ID 93c617ce09944343959ea68996415388]
Updated by Andrew Schoen over 7 years ago
We're having trouble again today in the SBG1 region: nodes requiring an SSD are coming up in an Error state.
Error: Failed to launch instance "centos7_small__6f8f0e19-ec02-4bb5-a2a8-fb09e67602eb": Please try again later [Error: Timed out waiting for a reply to message ID 4eb00edb3fae46d99a9074317cb7afa9].
Updated by Andrew Schoen about 7 years ago
This is happening again today with the hg-30-flex node type in the SBG1 region.
Updated by David Galloway almost 6 years ago
- Status changed from New to Closed
- Assignee set to David Galloway
This hasn't been much of an issue lately.