Bug #45197

cephadm: rgw: failed to bind address 0.0.0.0:80

Added by Sebastian Wagner almost 4 years ago. Updated almost 4 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: cephadm
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Despite running as root, RGW still cannot bind to port 80.
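
The log below shows radosgw dropping privileges ("set uid:gid to 167:167 (ceph:ceph)") before the beast frontend binds, which would explain the EACCES even inside a root container: a plain setuid() clears the effective capability set, including CAP_NET_BIND_SERVICE. A minimal sketch of that failure mode on any Linux host (the setpriv invocation is illustrative and must be run as root; uid/gid 167 is ceph:ceph as in the log):

# Drop to uid:gid 167:167 the way radosgw does, then try to bind port 80.
# Without CAP_NET_BIND_SERVICE this fails with EACCES, matching
# "failed to bind address 0.0.0.0:80: Permission denied" below.
setpriv --reuid 167 --regid 167 --clear-groups \
    python3 -c 'import socket; socket.socket().bind(("0.0.0.0", 80))'
# -> PermissionError: [Errno 13] Permission denied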

Apr 22 22:36:04 node02 systemd[1]: Starting Ceph rgw.myorg.us-east-1.node02.wzdozc for xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx...
Apr 22 22:36:04 node02 podman[3306]: Error: no container with name or ID ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx-rgw.myorg.us-east-1.node02.wzdozc found: no such container
Apr 22 22:36:04 node02 systemd[1]: Started Ceph rgw.myorg.us-east-1.node02.wzdozc for xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx.
Apr 22 22:36:04 node02 podman[3316]: 2020-04-22 22:36:04.900573616 +0300 +03 m=+0.195480600 container create 2df051c99b6a1a2b069576796b7a283d195da289ebace699674e60e5671beb52 (image=docker.io/ceph/ceph:v15, name=ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx-rgw.myorg.us-east-1.node02.wzdozc)
Apr 22 22:36:04 node02 systemd[1]: Started libpod-conmon-2df051c99b6a1a2b069576796b7a283d195da289ebace699674e60e5671beb52.scope.
Apr 22 22:36:05 node02 systemd[1]: Started libcontainer container 2df051c99b6a1a2b069576796b7a283d195da289ebace699674e60e5671beb52.
Apr 22 22:36:05 node02 bash[1571]: debug 2020-04-22T19:36:05.146+0000 7fee9ce70700  1 mon.node02@1(peon).osd e55 e55: 4 total, 4 up, 4 in
Apr 22 22:36:05 node02 bash[1576]: debug 2020-04-22T19:36:05.389+0000 7f135643a700  0 <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/15.2.1/rpm/el8/BUILD/ceph-15.2.1/src/cls/queue/cls_queue_src.cc:54: ERROR: queue_read_head: failed to decode queue start: buffer::end_of_buffer
Apr 22 22:36:05 node02 bash[1561]: debug 2020-04-22T19:36:05.462+0000 7f10ef94a700  0 log_channel(cluster) log [DBG] : pgmap v616: 129 pgs: 1 creating+activating, 12 creating+peering, 7 unknown, 109 active+clean; 1.9 KiB data, 29 MiB used, 20 GiB / 24 GiB avail
Apr 22 22:36:05 node02 bash[1576]: debug 2020-04-22T19:36:05.521+0000 7f135643a700  0 <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/15.2.1/rpm/el8/BUILD/ceph-15.2.1/src/cls/queue/cls_queue_src.cc:54: ERROR: queue_read_head: failed to decode queue start: buffer::end_of_buffer
Apr 22 22:36:05 node02 bash[1576]: debug 2020-04-22T19:36:05.541+0000 7f135643a700  0 <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/15.2.1/rpm/el8/BUILD/ceph-15.2.1/src/cls/queue/cls_queue_src.cc:54: ERROR: queue_read_head: failed to decode queue start: buffer::end_of_buffer
Apr 22 22:36:05 node02 bash[1576]: debug 2020-04-22T19:36:05.626+0000 7f135643a700  0 <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/15.2.1/rpm/el8/BUILD/ceph-15.2.1/src/cls/queue/cls_queue_src.cc:54: ERROR: queue_read_head: failed to decode queue start: buffer::end_of_buffer
[.....]
Apr 22 22:36:06 node02 bash[1571]: audit 2020-04-22T19:36:05.125075+0000 mon.node01 (mon.0) 82 : audit [INF] from='client.? 192.168.100.101:0/3331055036' entity='client.rgw.myorg.us-east-1.node01.dgfdkv' cmd='[{"prefix": "osd pool set", "pool": "us-east-1.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished
Apr 22 22:36:06 node02 bash[1571]: cluster 2020-04-22T19:36:05.125132+0000 mon.node01 (mon.0) 83 : cluster [DBG] osdmap e55: 4 total, 4 up, 4 in
Apr 22 22:36:06 node02 bash[1571]: cluster 2020-04-22T19:36:05.463711+0000 mgr.node02.qbhwjb (mgr.44101) 604 : cluster [DBG] pgmap v616: 129 pgs: 1 creating+activating, 12 creating+peering, 7 unknown, 109 active+clean; 1.9 KiB data, 29 MiB used, 20 GiB / 24 GiB avail
Apr 22 22:36:06 node02 bash[3314]: debug 2020-04-22T19:36:06.337+0000 7fb93538f240  0 set uid:gid to 167:167 (ceph:ceph)
Apr 22 22:36:06 node02 bash[3314]: debug 2020-04-22T19:36:06.337+0000 7fb93538f240  0 ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable), process radosgw, pid 1
Apr 22 22:36:06 node02 bash[3314]: debug 2020-04-22T19:36:06.337+0000 7fb93538f240  0 framework: beast
Apr 22 22:36:06 node02 bash[3314]: debug 2020-04-22T19:36:06.337+0000 7fb93538f240  0 framework conf key: port, val: 80
Apr 22 22:36:06 node02 bash[3314]: debug 2020-04-22T19:36:06.337+0000 7fb93538f240  1 radosgw_Main not setting numa affinity
Apr 22 22:36:06 node02 bash[3314]: debug 2020-04-22T19:36:06.819+0000 7fb93538f240  0 framework: beast
Apr 22 22:36:06 node02 bash[3314]: debug 2020-04-22T19:36:06.819+0000 7fb93538f240  0 framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Apr 22 22:36:06 node02 bash[3314]: debug 2020-04-22T19:36:06.819+0000 7fb93538f240  0 framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Apr 22 22:36:06 node02 bash[3314]: debug 2020-04-22T19:36:06.819+0000 7fb93538f240  0 starting handler: beast
Apr 22 22:36:06 node02 podman[3316]: 2020-04-22 22:36:06.890791974 +0300 +03 m=+2.185699058 container died 2df051c99b6a1a2b069576796b7a283d195da289ebace699674e60e5671beb52 (image=docker.io/ceph/ceph:v15, name=ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx-rgw.myorg.us-east-1.node02.wzdozc)
Apr 22 22:36:06 node02 systemd[1]: libpod-2df051c99b6a1a2b069576796b7a283d195da289ebace699674e60e5671beb52.scope: Consumed 512ms CPU time
Apr 22 22:36:07 node02 podman[3316]: 2020-04-22 22:36:07.033069665 +0300 +03 m=+2.327976749 container remove 2df051c99b6a1a2b069576796b7a283d195da289ebace699674e60e5671beb52 (image=docker.io/ceph/ceph:v15, name=ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx-rgw.myorg.us-east-1.node02.wzdozc)
Apr 22 22:36:07 node02 systemd[1]: ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx@rgw.myorg.us-east-1.node02.wzdozc.service: Main process exited, code=exited, status=13/n/a
Apr 22 22:36:07 node02 systemd[1]: ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx@rgw.myorg.us-east-1.node02.wzdozc.service: Failed with result 'exit-code'.
Apr 22 22:36:07 node02 bash[1571]: debug 2020-04-22T19:36:07.341+0000 7fee9ce70700  0 mon.node02@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "us-east-1.rgw.meta", "var": "pg_num", "val": "8"} v 0) v1
Apr 22 22:36:07 node02 bash[1571]: debug 2020-04-22T19:36:07.341+0000 7fee9ce70700  0 log_channel(audit) log [INF] : from='mgr.44101 192.168.100.102:0/2789319733' entity='mgr.node02.qbhwjb' cmd=[{"prefix": "osd pool set", "pool": "us-east-1.rgw.meta", "var": "pg_num", "val": "8"}]: dispatch
Apr 22 22:36:07 node02 bash[1561]: debug 2020-04-22T19:36:07.464+0000 7f10ef94a700  0 log_channel(cluster) log [DBG] : pgmap v617: 129 pgs: 1 creating+activating, 12 creating+peering, 116 active+clean; 2.2 KiB data, 30 MiB used, 20 GiB / 24 GiB avail; 0 B/s rd, 381 B/s wr, 1 op/s
Apr 22 22:36:07 node02 firewalld[956]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Please consider disabling it now.
[....]
 cmd=[{"prefix": "osd pool set", "pool": "us-east-1.rgw.meta", "var": "pg_num_actual", "val": "30"}]: dispatch
Apr 22 22:36:19 node02 bash[3695]: debug 2020-04-22T19:36:19.533+0000 7f7e36959240  0 framework: beast
Apr 22 22:36:19 node02 bash[3695]: debug 2020-04-22T19:36:19.533+0000 7f7e36959240  0 framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Apr 22 22:36:19 node02 bash[3695]: debug 2020-04-22T19:36:19.533+0000 7f7e36959240  0 framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Apr 22 22:36:19 node02 bash[3695]: debug 2020-04-22T19:36:19.533+0000 7f7e36959240  0 starting handler: beast
Apr 22 22:36:19 node02 bash[3695]: debug 2020-04-22T19:36:19.552+0000 7f7e36959240 -1 failed to bind address 0.0.0.0:80: Permission denied
Apr 22 22:36:19 node02 bash[3695]: debug 2020-04-22T19:36:19.553+0000 7f7e36959240 -1 ERROR: failed initializing frontend
Apr 22 22:36:19 node02 systemd[1]: libpod-473588aa4eb799f976544fdbee5ca1068346f7c69c802cc037b3d7e933cee1f9.scope: Consumed 522ms CPU time
Apr 22 22:36:19 node02 podman[3697]: 2020-04-22 22:36:19.61003749 +0300 +03 m=+2.235166237 container died 473588aa4eb799f976544fdbee5ca1068346f7c69c802cc037b3d7e933cee1f9 (image=docker.io/ceph/ceph:v15, name=ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx-rgw.myorg.us-east-1.node02.wzdozc)
Apr 22 22:36:19 node02 podman[3697]: 2020-04-22 22:36:19.841846867 +0300 +03 m=+2.466975714 container remove 473588aa4eb799f976544fdbee5ca1068346f7c69c802cc037b3d7e933cee1f9 (image=docker.io/ceph/ceph:v15, name=ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx-rgw.myorg.us-east-1.node02.wzdozc)
Apr 22 22:36:19 node02 systemd[1]: ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx@rgw.myorg.us-east-1.node02.wzdozc.service: Main process exited, code=exited, status=13/n/a
Apr 22 22:36:19 node02 systemd[1]: ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx@rgw.myorg.us-east-1.node02.wzdozc.service: Failed with result 'exit-code'.

This might be related to https://github.com/rook/rook/issues/5106
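
If the capability drop is indeed the cause, a few generic ways around the reserved-port restriction come to mind (sketches only, none verified against this deployment):

# (a) lower the privileged-port floor in the container's network namespace
#     (net.ipv4.ip_unprivileged_port_start is namespaced on recent kernels)
podman run --sysctl net.ipv4.ip_unprivileged_port_start=0 ...
# (b) grant the file capability inside the image so it survives the
#     setuid to ceph:ceph
setcap cap_net_bind_service=+ep /usr/bin/radosgw
# (c) sidestep the issue by running rgw on an unprivileged port
#     (port 8080 is an arbitrary example)
ceph config set client.rgw rgw_frontends 'beast port=8080'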


Related issues (1): 0 open, 1 closed

Related to rgw - Bug #44661: radosgw can't bind to reserved port (443) (Resolved, Casey Bodley)

#1

Updated by Sebastian Wagner almost 4 years ago

[root@node02 ~]# ./cephadm run --name rgw.myorg.us-east-1.node02.wzdozc --fsid xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
debug 2020-04-23T09:14:57.961+0000 7fab91f64240  0 set uid:gid to 167:167 (ceph:ceph)
debug 2020-04-23T09:14:57.961+0000 7fab91f64240  0 ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable), process radosgw, pid 1
debug 2020-04-23T09:14:57.961+0000 7fab91f64240  0 framework: beast
debug 2020-04-23T09:14:57.961+0000 7fab91f64240  0 framework conf key: port, val: 80
debug 2020-04-23T09:14:57.961+0000 7fab91f64240  1 radosgw_Main not setting numa affinity
debug 2020-04-23T09:14:59.154+0000 7fab91f64240  0 framework: beast
debug 2020-04-23T09:14:59.154+0000 7fab91f64240  0 framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
debug 2020-04-23T09:14:59.154+0000 7fab91f64240  0 framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
debug 2020-04-23T09:14:59.154+0000 7fab91f64240  0 starting handler: beast
debug 2020-04-23T09:14:59.164+0000 7fab91f64240 -1 failed to bind address 0.0.0.0:80: Permission denied
debug 2020-04-23T09:14:59.164+0000 7fab91f64240 -1 ERROR: failed initializing frontend
#2

Updated by Sebastian Wagner almost 4 years ago

  • Related to Bug #44661: radosgw can't bind to reserved port (443) added
#3

Updated by Sebastian Wagner almost 4 years ago

  • Status changed from New to Duplicate

Closing as a duplicate.
