Bug #52761

OSDs announcing incorrect front_addr after upgrade to 16.2.6

Added by Javier Cacheiro over 2 years ago. Updated over 2 years ago.

Severity: 2 - major


Ceph cluster configured with a public and cluster network:

ceph config dump | grep network

global advanced cluster_network *
mon advanced public_network *
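
For context, this layout corresponds to something like the following ceph.conf fragment. This is only a sketch: the actual values are redacted above, and the /16 masks are assumptions (the ticket only mentions the 10.113 public and 10.114 cluster prefixes later on):

```ini
[global]
# assumption: /16 masks; only the 10.113/10.114 prefixes appear in this ticket
public_network = 10.113.0.0/16
cluster_network = 10.114.0.0/16
```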

Upgraded from 16.2.4 to 16.2.6 and all nodes rebooted after the upgrade.

While investigating an issue with clients not being able to connect, I found that the problem is that clients are directed to the cluster_network address of some OSDs.

Looking at the OSD metadata, I see that in most OSDs the front addresses are correctly configured on the 10.113 public network, like this one:

"back_addr": "[v2:,v1:]",
"front_addr": "[v2:,v1:]",
"hb_back_addr": "[v2:,v1:]",
"hb_front_addr": "[v2:,v1:]",

But there are also many OSDs where the configuration is incorrect, and this happens in different ways.

For example, in some OSDs the error is only in the front_addr, while the hb_front_addr is fine:

"back_addr": "[v2:,v1:]",
"front_addr": "[v2:,v1:]",
"hb_back_addr": "[v2:,v1:]",
"hb_front_addr": "[v2:,v1:]",

In others the error is in the hb_front_addr:

"back_addr": "[v2:,v1:]",
"front_addr": "[v2:,v1:]",
"hb_back_addr": "[v2:,v1:]",
"hb_front_addr": "[v2:,v1:]",

And in others both are wrong:

"back_addr": "[v2:,v1:]",
"front_addr": "[v2:,v1:]",
"hb_back_addr": "[v2:,v1:]",
"hb_front_addr": "[v2:,v1:]",

This happens only for the front address assignment; the back_addr of all OSDs is correctly in the cluster network (10.114).

On the same node there can be OSDs that have the right configuration and OSDs that are announcing wrong front addresses.
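
The pattern above can be checked programmatically against each entry of `ceph osd metadata`. A minimal sketch, not from the ticket: it assumes /16 masks for the 10.113 public and 10.114 cluster networks (the exact masks are redacted here) and the `[v2:ip:port/nonce,v1:ip:port/nonce]` address-vector format:

```python
import ipaddress
import re

# Assumption: /16 masks; the ticket only gives the 10.113/10.114 prefixes.
PUBLIC = ipaddress.ip_network("10.113.0.0/16")
CLUSTER = ipaddress.ip_network("10.114.0.0/16")

def addr_ips(addr_vector):
    """Pull the IPs out of a '[v2:ip:port/nonce,v1:ip:port/nonce]' field."""
    return [ipaddress.ip_address(ip)
            for ip in re.findall(r"v[12]:([0-9.]+):", addr_vector)]

def misconfigured(osd_metadata):
    """True if any front-facing address landed in the cluster network."""
    return any(ip in CLUSTER
               for field in ("front_addr", "hb_front_addr")
               for ip in addr_ips(osd_metadata[field]))
```

Running this over the full metadata dump (the no-argument form of `ceph osd metadata` returns a JSON list of per-OSD dicts) makes it easy to tally the affected OSDs.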


#1 Updated by Javier Cacheiro over 2 years ago

Just for statistics, there are currently:

- 51 cases with an error in the front_addr or hb_front_addr configuration.
- 333 cases where the configuration is correct.

#2 Updated by Javier Cacheiro over 2 years ago

Restarting the daemons seems to restore the correct configuration, but it is unclear why this did not happen when all nodes were rebooted after the upgrade.

#3 Updated by Javier Cacheiro over 2 years ago

In some cases several daemon restarts are required before the OSD picks up the right configuration.

I don't know whether the wrong configuration might simply occur at random, with a low probability, at each daemon start.
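
The restart-until-correct workaround could be automated roughly as follows. This is an untested sketch, not from the ticket: it assumes a cephadm-managed cluster (hence `ceph orch daemon restart osd.N`), JSON output from `ceph osd metadata <id>`, and the 10.114 cluster-network prefix.

```python
import json
import subprocess
import time

def front_addrs_ok(meta):
    """True when neither front-facing address vector leaked onto 10.114.*."""
    return all("10.114." not in meta[f]
               for f in ("front_addr", "hb_front_addr"))

def restart_until_ok(osd_id, attempts=5, wait=30):
    """Restart osd.<id> until it announces public-network front addresses."""
    for _ in range(attempts):
        meta = json.loads(subprocess.check_output(
            ["ceph", "osd", "metadata", str(osd_id)]))
        if front_addrs_ok(meta):
            return True
        subprocess.check_call(
            ["ceph", "orch", "daemon", "restart", f"osd.{osd_id}"])
        time.sleep(wait)  # let the daemon come back up and re-announce
    return False
```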

#4 Updated by Javier Cacheiro over 2 years ago

Upgraded from v16.2.6 to v16.2.6-20210927 to apply the remoto bug fix.

After the upgrade (no reboot of the nodes, but the daemons were restarted by the upgrade process), 6 OSDs were still announcing an incorrect front_addr and hb_front_addr (in this case all 6 OSDs announced both front_addr and hb_front_addr on the cluster_network).

#5 Updated by Javier Cacheiro over 2 years ago

I have kept restarting the incorrectly configured OSD daemons until they got the right front_addr. In some cases it required several restarts.

Now all OSDs are announcing the correct front_addr and hb_front_addr.

#6 Updated by Greg Farnum over 2 years ago

  • Project changed from Ceph to RADOS

#7 Updated by Neha Ojha over 2 years ago

The docs suggest setting public_network in the global section, not just for the mons. Can you give this a try and see if it helps?

#8 Updated by Javier Cacheiro over 2 years ago

Yes, I tried that, but it does not change the behavior:

ceph config set global public_network

and then ran the daemon reconfig.

Same behavior.

As a further comment, the config with the setting only for the mon comes directly from cephadm, from when I bootstrapped the cluster with:

cephadm bootstrap --mon-ip --cluster-network

It ran with no issues until the upgrade.
