Bug #62195: cephadm ignores IPv6 addresses on localhost

Added by Stefan A. 9 months ago.

Status: New
Priority: Normal
Assignee: -
Category: cephadm
Target version: -
% Done: 0%
Regression: No
Severity: 3 - minor

Description

For our cluster we use a setup with redundant network interfaces and BGP. The IPs assigned to our hosts are therefore placed on the "lo" interface.

cephadm, however, does not detect these:

root@ceph02.cloud.example.com: ~ # cephadm list-networks
{
    "172.17.0.0/16": {
        "docker0": [
            "172.17.0.1" 
        ]
    },
    "fe80::/64": {
        "enp129s0f1np1": [
            "fe80::1270:fdff:fecf:77ed" 
        ],
        "enp129s0f0np0": [
            "fe80::1270:fdff:fecf:77ec" 
        ]
    }
}

Here is the output of ip -6 a and ip -6 r:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 2001:db8:3000:1010::2002/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
5: enp129s0f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 state UP qlen 1000
    inet6 fe80::1270:fdff:fecf:77ec/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
6: enp129s0f1np1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 state UP qlen 1000
    inet6 fe80::1270:fdff:fecf:77ed/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

::1 dev lo proto kernel metric 256 pref medium
[...]
2001:db8:3000:1010::2001 nhid 294 proto bgp metric 20 pref medium
    nexthop via fe80::9a19:2cff:fee5:b280 dev enp129s0f1np1 weight 1 
    nexthop via fe80::9a19:2cff:fee5:a480 dev enp129s0f0np0 weight 1 
2001:db8:3000:1010::2002 dev lo proto kernel metric 256 pref medium
2001:db8:3000:1010::2003 nhid 294 proto bgp metric 20 pref medium
    nexthop via fe80::9a19:2cff:fee5:b280 dev enp129s0f1np1 weight 1 
    nexthop via fe80::9a19:2cff:fee5:a480 dev enp129s0f0np0 weight 1 
[...]
fe80::/64 dev enp129s0f1np1 proto kernel metric 1024 pref medium
fe80::/64 dev enp129s0f0np0 proto kernel metric 1024 pref medium

Further context

We believe this is the root cause of the following error message, and of the fact that ceph orch did not automatically start additional MONs:

cephadm 2023-07-13T13:24:55.811849+0000 mgr.ceph01.tfthdu (mgr.14189) 379 : cephadm [INF] Filtered out host ceph01.cloud.example.com: does not belong to mon public_network(s):  2001:db8:3000:1010::2000/116, host network(s): 172.17.0.0/16,fe80::/64
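
The failed check can be reproduced with Python's ipaddress module. This is only a sketch of the subnet comparison the log message implies, not cephadm's actual filtering code; the network values are copied from the log line above.

import ipaddress

# Networks from the log line: the mon public_network and the networks
# cephadm detected on the host (note: no 2001:db8:3000:1010:: entry).
public_nets = [ipaddress.ip_network('2001:db8:3000:1010::2000/116')]
host_nets = [ipaddress.ip_network('172.17.0.0/16'),
             ipaddress.ip_network('fe80::/64')]

# The lo-hosted address would have matched the public_network:
lo_addr = ipaddress.ip_address('2001:db8:3000:1010::2002')
print(any(lo_addr in p for p in public_nets))  # True

# But since cephadm never reported a network containing it, no detected
# host network overlaps the public_network, and the host gets filtered:
print(any(h.version == p.version and h.overlaps(p)
          for p in public_nets for h in host_nets))  # False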

Incidentally, cephadm also ignored the IPv4 address we had on localhost, but our cluster setup was IPv6-only this time.

The responsible code is here:
https://github.com/ceph/ceph/blob/d3f9d140227cd8a0a8b4de654da4558c7808c45b/src/cephadm/cephadm.py#L6957-L6958
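
For illustration, here is a minimal, runnable sketch of that filter. The regex and variable names only approximate the linked cephadm.py code; the route lines are copied from the ip -6 r output above.

import re

# Approximate rendering of the route-parsing filter in cephadm.py; the
# regex and names follow the linked source only loosely.
route_p = re.compile(r'^(\S+) dev (\S+) proto (\S+) metric (\S+) .*pref (\S+)$')

routes = '''\
2001:db8:3000:1010::2002 dev lo proto kernel metric 256 pref medium
fe80::/64 dev enp129s0f0np0 proto kernel metric 1024 pref medium
'''

nets = {}
for line in routes.splitlines():
    m = route_p.findall(line)
    if not m or m[0][0].lower() == 'default':
        continue
    net, iface = m[0][0], m[0][1]
    if '/' not in net:  # host routes like our /128 carry no mask here...
        continue
    if iface == 'lo':   # ...and routes on lo are skipped outright anyway
        continue
    nets.setdefault(net, set()).add(iface)

print(nets)  # {'fe80::/64': {'enp129s0f0np0'}} -- the lo-hosted address is lost

In this sketch both checks independently drop the lo-hosted prefix, which matches the list-networks output above.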
