Bug #13679 (Closed)

ceph-cm-ansible: correct 'cloud.front' hostnames

Added by Zack Cerza over 8 years ago. Updated over 8 years ago.

Status: Resolved
Priority: Normal
Assignee: Zack Cerza
Category: -
Target version: -
% Done: 0%
Source: other
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

See #13365

Instead of the "Correct hostname if it is 'localhost'" step in the testnodes role, we should set the hostname we know a node should have.
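
Concretely, something like this in the testnodes role (a minimal sketch, assuming the node's inventory short name is the desired hostname; not the exact PR contents):

    - name: Set the hostname the node should have
      hostname:
        name: "{{ inventory_hostname_short }}"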


Related issues (1 total: 0 open, 1 closed)

Has duplicate: devops - Bug #13463: "Failure: ceph-deploy was not able to gatherkeys" on centos 7.0 (Duplicate, 10/12/2015)

Actions #1

Updated by Zack Cerza over 8 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Zack Cerza
Actions #2

Updated by Zack Cerza over 8 years ago

A bit of background:

When I investigated what was happening in #13365, I initially wasn't reproducing the problem. CentOS 7 instances created with downburst were correctly getting their real hostname. Then I noticed the logs showed us using an upstream kernel, which would require a reboot after installation. I began to suspect cloud-init.

Sure enough, after rebooting my downburst instances, the hostname was set to 'cloud.front.sepia.ceph.com'. It turns out cloud-init wasn't writing the real hostname to /etc/hostname; it was only setting it with /bin/hostname, which is transient, so a reboot brought back the stale value baked into the image.
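
One way to see the mismatch on a live node (a hypothetical diagnostic, not part of the ticket; task names are made up):

    - name: Show the transient (kernel) hostname
      command: hostname
      register: live_hostname

    - name: Show what is persisted in /etc/hostname
      command: cat /etc/hostname
      register: etc_hostname

    - debug:
        msg: "live={{ live_hostname.stdout }} persisted={{ etc_hostname.stdout }}"

If the two disagree, the live value is lost on the next reboot.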

The PR drops the fancy logic and uses the Ansible hostname module unconditionally. That module is idempotent, and it sets the hostname persistently rather than just for the running system.
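
For illustration, the old conditional guard looked roughly like this (a sketch; the actual task in ceph-cm-ansible may differ), and it could never match a node that had come up as 'cloud.front.sepia.ceph.com':

    - name: Correct hostname if it is 'localhost'
      hostname:
        name: "{{ inventory_hostname_short }}"
      when: ansible_hostname == 'localhost'

Dropping the 'when:' makes the task assert the desired name on every run, which is safe precisely because the module is idempotent.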

Actions #3

Updated by Yuri Weinstein over 8 years ago

  • Has duplicate Bug #13463: "Failure: ceph-deploy was not able to gatherkeys" on centos 7.0 added