Bug #51311


Failed to apply ingress.rgw: IndexError: list index out of range

Added by Sigurd Brinch almost 3 years ago. Updated over 2 years ago.

Status: Resolved
Priority: High
Category: cephadm
Target version: -
% Done: 0%
Regression: No
Severity: 3 - minor

Description

Following the docs at https://docs.ceph.com/en/latest/cephadm/rgw/, I set up RGW with:

# ceph orch apply rgw ikt --placement="3" 

Then to add the ingress I used:

# ceph orch apply -i rgw-ingress.yaml

where rgw-ingress.yaml contains:

service_type: ingress
service_id: rgw.ikt
placement:
  count: 3
spec:
  backend_service: rgw.ikt
  virtual_ip: 192.168.20.63/24
  frontend_port: 9000
  monitor_port: 1967
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    blabla
    -----END CERTIFICATE-----
    -----BEGIN PRIVATE KEY-----
    blabla
    -----END PRIVATE KEY-----

This gives the output:

Scheduled ingress.rgw.ikt update...

But in the MGR log it gives:
debug 2021-06-22T10:04:50.073+0000 7f30ebd57700  0 log_channel(cephadm) log [INF] : Saving service ingress.rgw.ikt spec with placement count:3
debug 2021-06-22T10:04:50.129+0000 7f31ce804700  0 log_channel(cephadm) log [INF] : 192.16.20.63 is 192.16.20.0/24 on bamboo-grm5 interface bond0
debug 2021-06-22T10:04:50.133+0000 7f31ce804700  0 [cephadm ERROR cephadm.serve] Failed to apply ingress.rgw.ikt spec IngressSpec({'placement': PlacementSpec(count=3), 'service_type': 'ingress', 'service_id': 'rgw.ikt', 'unmanaged': False, 'preview_only': False, 'networks': [], 'config': None, 'backend_service': 'rgw.ikt', 'frontend_port': 9000, 'ssl_cert': None, 'ssl_dh_param': None, 'ssl_ciphers': None, 'ssl_options': None, 'monitor_port': 1967, 'monitor_user': None, 'monitor_password': None, 'keepalived_password': None, 'virtual_ip': '192.16.20.63/24', 'virtual_interface_networks': [], 'haproxy_container_image': None, 'keepalived_container_image': None}): list index out of range
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 466, in _apply_all_services
    if self._apply_service(spec):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 625, in _apply_service
    daemon_spec = svc.prepare_create(daemon_spec)
  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 31, in prepare_create
    return self.keepalived_prepare_create(daemon_spec)
  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 127, in keepalived_prepare_create
    daemon_spec.final_config, daemon_spec.deps = self.keepalived_generate_config(daemon_spec)
  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 192, in keepalived_generate_config
    if hosts[0] == host:
IndexError: list index out of range
(The same spec failure and traceback are then repeated at log level ERR.)
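
The traceback points at an unguarded list index: keepalived_generate_config collects the hostnames of the backend haproxy daemons and reads hosts[0] without first checking that any daemons were found. A minimal standalone sketch of that pattern (hypothetical names, not the actual cephadm code):

def keepalived_state(daemon_hostnames, host):
    # Mirror the pattern in ingress.py: de-duplicate and sort the
    # hostnames of the discovered backend daemons.
    hosts = sorted(set(str(h) for h in daemon_hostnames))
    # If the backend service resolved to zero daemons, hosts is []
    # and hosts[0] raises IndexError: list index out of range.
    if hosts[0] == host:
        return "MASTER"
    return "BACKUP"

keepalived_state([], "bamboo-grm5")  # raises IndexError, as in the log above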

Files

ceph_orch_ps.yaml (45.3 KB): output of ceph orch ps --format yaml (Sigurd Brinch, 06/22/2021 10:29 AM)

Related issues: 1 (0 open, 1 closed)

Has duplicate: Orchestrator - Bug #51713: Cephadm: Timeout waiting for ingress.nfs.foo to start (Duplicate)

#1

Updated by Sebastian Wagner almost 3 years ago

  • Description updated (diff)
  • Priority changed from Normal to High
#2

Updated by Sebastian Wagner almost 3 years ago

  • Subject changed from "MGR log reports 'IndexError: list index out of range' when trying to apply an rgw ingress to pacific 16.2.4" to "Failed to apply ingress.rgw: IndexError: list index out of range"
#4

Updated by Asbjørn Sannes almost 3 years ago

What does

ceph orch ls

give you?

I had that error when I put the wrong backend service name there (or when the backend daemons were not actually running).

It did complain loudly about the missing haproxy daemons, with something like:

log_channel(cephadm) log [ERR] : Failed while placing haproxy.rgw.test.test-1.host1.kcwssk on host1: ingress.rgw.test.test-1 backend service rgw.test.test-1 does not exist

in the logs.
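
A quick way to check both conditions is to confirm that the backend_service named in the ingress spec (rgw.ikt in this report) shows up in ceph orch ls and that its daemons are actually running (the --service_name filter is assumed to be available in this release; a plain ceph orch ps works too):

# ceph orch ls rgw
# ceph orch ps --service_name rgw.ikt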

It could probably behave a bit better with:

diff --git a/src/pybind/mgr/cephadm/services/ingress.py b/src/pybind/mgr/cephadm/services/ingress.py
index f78f558a2d8..34a69d3dede 100644
--- a/src/pybind/mgr/cephadm/services/ingress.py
+++ b/src/pybind/mgr/cephadm/services/ingress.py
@@ -187,6 +187,9 @@ class IngressService(CephService):

         host = daemon_spec.host
         hosts = sorted(list(set([str(d.hostname) for d in daemons])))
+        if len(hosts) == 0:
+            raise OrchestratorError(
+                f"Unable to find any haproxy daemons for service {spec.service_name()}, not starting keepalived.")
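
As a standalone version of that guard (again hypothetical, following the sketch in the description; OrchestratorError stands in for cephadm's orchestrator.OrchestratorError), the keepalived role selection would fail with an actionable message instead of a bare IndexError:

class OrchestratorError(Exception):
    """Stand-in for cephadm's orchestrator.OrchestratorError."""

def keepalived_state(daemon_hostnames, host, service_name):
    hosts = sorted(set(str(h) for h in daemon_hostnames))
    if len(hosts) == 0:
        # No haproxy daemons found for the backend service: report it
        # clearly rather than crashing on hosts[0].
        raise OrchestratorError(
            f"Unable to find any haproxy daemons for service "
            f"{service_name}, not starting keepalived.")
    return "MASTER" if hosts[0] == host else "BACKUP"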

#5

Updated by Sebastian Wagner almost 3 years ago

  • Has duplicate Bug #51713: Cephadm: Timeout waiting for ingress.nfs.foo to start added
#6

Updated by Sebastian Wagner almost 3 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Sebastian Wagner
  • Pull request ID set to 42433
#7

Updated by Sigurd Brinch almost 3 years ago

# ceph orch ls
NAME                       PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager                      1/1      6m ago     8w   count:1
crash                             12/12    6m ago     8w   *
grafana                           1/1      6m ago     8w   count:1
mds.csi-cephfs                    2/2      6m ago     8w   count:2
mgr                               2/2      6m ago     8w   count:2
mon                               5/5      6m ago     8w   count:5
node-exporter                     12/12    6m ago     8w   *
osd.all-available-devices         85/97    6m ago     8w   *
prometheus                        1/1      6m ago     8w   count:1
rgw.ikt                    ?:80   3/3      6m ago     4w   count:3
#8

Updated by Sebastian Wagner over 2 years ago

  • Status changed from Fix Under Review to Resolved