Bug #25149 (Closed): support for "ovh" machine type
% Done: 0%
Regression: No
Severity: 3 - minor
Description
When using --machine-type ovh, all RHEL 7.5 jobs fail because they cannot download an RPM from satellite.front.sepia.ceph.com:
os_type: rhel
os_version: '7.5'
...
2018-07-29T17:55:57.200 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
    manager.__enter__()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
    self.begin()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 426, in begin
    super(CephLab, self).begin()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 268, in begin
    self.execute_playbook()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 295, in execute_playbook
    self._handle_failure(command, status)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 319, in _handle_failure
    raise AnsibleFailedError(failures)
AnsibleFailedError: {'ovh068.front.sepia.ceph.com': {'_ansible_parsed': True, 'invocation': {'module_args': {'allow_downgrade': False, 'name': ['http://satellite.front.sepia.ceph.com/pub/katello-ca-consumer-latest.noarch.rpm'], 'exclude': None, 'list': None, 'disable_gpg_check': False, 'conf_file': None, 'install_repoquery': True, 'state': 'present', 'disablerepo': None, 'update_cache': False, 'enablerepo': None, 'skip_broken': False, 'security': False, 'validate_certs': False, 'installroot': '/'}}, 'changed': False, '_ansible_no_log': False, 'msg': 'Failure downloading http://satellite.front.sepia.ceph.com/pub/katello-ca-consumer-latest.noarch.rpm, Request failed: <urlopen error timed out>'}}
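The yum module's "urlopen error timed out" just means the OVH node cannot reach satellite over HTTP. A minimal sketch of that reachability check, independent of Ansible (the helper name and timeout are illustrative, not part of teuthology):

```python
import socket
import urllib.request
import urllib.error

RPM_URL = "http://satellite.front.sepia.ceph.com/pub/katello-ca-consumer-latest.noarch.rpm"


def can_download(url, timeout=10):
    """Return True if the URL answers with HTTP 200 within `timeout` seconds,
    False on timeout or any network error (the failure mode seen on OVH)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, socket.timeout):
        return False
```

Run from an OVH node this would return False, matching the Ansible failure; satellite is only reachable from inside the Sepia lab network.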
This can be worked around by adding the following yaml to the job:
overrides:
  ansible.cephlab:
    skip_tags: entitlements,packages,repos
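The overrides section works because teuthology deep-merges it into the matching task's config before the task runs. A simplified sketch of that merge (the helper below is illustrative, not teuthology's actual implementation):

```python
def deep_merge(base, overrides):
    """Recursively merge `overrides` into `base`: nested dicts are merged
    key by key, anything else in `overrides` wins outright."""
    if isinstance(base, dict) and isinstance(overrides, dict):
        merged = dict(base)
        for key, value in overrides.items():
            merged[key] = deep_merge(base[key], value) if key in base else value
        return merged
    return overrides


# Hypothetical job config plus the workaround overrides from above:
job = {"ansible.cephlab": {"playbook": "cephlab.yml"}}
overrides = {"ansible.cephlab": {"skip_tags": "entitlements,packages,repos"}}
merged = deep_merge(job, overrides)
# merged keeps the original playbook and gains skip_tags
```

The merged skip_tags value is then passed to ansible-playbook, so the entitlements, packages, and repos plays are never executed on the OVH nodes.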
However, this workaround breaks Ubuntu 16.04 on OVH:
overrides:
  ansible.cephlab:
    skip_tags: entitlements,packages,repos
...
os_type: ubuntu
os_version: '16.04'
...
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
    manager.__enter__()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
    self.begin()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 426, in begin
    super(CephLab, self).begin()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 268, in begin
    self.execute_playbook()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 295, in execute_playbook
    self._handle_failure(command, status)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 319, in _handle_failure
    raise AnsibleFailedError(failures)
AnsibleFailedError: {'ovh002.front.sepia.ceph.com': {'_ansible_parsed': True, 'invocation': {'module_args': {'comment': None, 'ssh_key_bits': 0, 'update_password': 'always', 'non_unique': False, 'force': False, 'skeleton': None, 'expires': None, 'ssh_key_passphrase': None, 'groups': ['fuse', 'kvm', 'disk'], 'createhome': True, 'home': None, 'move_home': False, 'password': None, 'generate_ssh_key': None, 'append': True, 'uid': None, 'ssh_key_comment': 'ansible-generated on ovh002', 'group': None, 'name': 'ubuntu', 'local': None, 'seuser': None, 'system': False, 'remove': False, 'state': 'present', 'ssh_key_file': None, 'login_class': None, 'shell': None, 'ssh_key_type': 'rsa'}}, 'changed': False, '_ansible_no_log': False, 'msg': 'Group kvm does not exist'}, 'ovh068.front.sepia.ceph.com': {'_ansible_parsed': True, 'invocation': {'module_args': {'comment': None, 'ssh_key_bits': 0, 'update_password': 'always', 'non_unique': False, 'force': False, 'skeleton': None, 'expires': None, 'ssh_key_passphrase': None, 'groups': ['fuse', 'kvm', 'disk'], 'createhome': True, 'home': None, 'move_home': False, 'password': None, 'generate_ssh_key': None, 'append': True, 'uid': None, 'ssh_key_comment': 'ansible-generated on ovh068', 'group': None, 'name': 'ubuntu', 'local': None, 'seuser': None, 'system': False, 'remove': False, 'state': 'present', 'ssh_key_file': None, 'login_class': None, 'shell': None, 'ssh_key_type': 'rsa'}}, 'changed': False, '_ansible_no_log': False, 'msg': 'Group kvm does not exist'}}
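The "Group kvm does not exist" failure is plausibly a side effect of the workaround itself: on Ubuntu the kvm group is normally created by package installation (e.g. qemu-kvm), which the skipped packages tag would have done, so the later user task cannot add ubuntu to it. A sketch of the group lookup the user module effectively performs before modifying the account (helper name hypothetical; assumes a Unix host):

```python
import grp  # Unix group database access


def missing_groups(required):
    """Return the subset of `required` group names absent from /etc/group,
    in order -- the same precondition Ansible's user module enforces."""
    existing = {g.gr_name for g in grp.getgrall()}
    return [name for name in required if name not in existing]


# The groups the cephlab playbook asks for on test nodes:
missing_groups(["fuse", "kvm", "disk"])
# Any name returned here would trigger the "Group ... does not exist" error.
```

A fix along these lines would be to either create the required groups explicitly outside the skipped tags, or skip only the entitlements/repos tags on OVH.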
Updated by Nathan Cutler almost 6 years ago
- Priority changed from Normal to Urgent
Raising priority because this makes it difficult to test Ceph in OVH.
Updated by David Galloway almost 6 years ago
- Status changed from New to Resolved
- Assignee set to David Galloway
Updated by Nathan Cutler almost 6 years ago
- Related to Bug #25005: AnsibleFailedError in ceph-disk added