Support #24602

bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:

Added by taehoon kim almost 6 years ago. Updated over 5 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
OSD
Target version:
% Done:

0%

Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

How can I fix this error?

On [osd2-node]:

-----------------message (/var/log/ceph/ceph-osd.3.log)----------------------------------
2018-06-20 08:56:26.777238 7fefccc84d00 1 Processor - bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300: (99) Cannot assign requested address
2018-06-20 08:56:26.777254 7fefccc84d00 1 Processor - bind was unable to bind after 3 attempts: (99) Cannot assign requested address
2018-06-20 08:56:46.984919 7feea6f44d00 0 set uid:gid to 167:167 (ceph:ceph)
2018-06-20 08:56:46.984942 7feea6f44d00 0 ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable), process (unknown), pid 8434
2018-06-20 08:56:46.999362 7feea6f44d00 1 Processor - bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300: (99) Cannot assign requested address
2018-06-20 08:56:46.999399 7feea6f44d00 1 Processor - bind was unable to bind. Trying again in 5 seconds
2018-06-20 08:56:52.012371 7feea6f44d00 1 Processor - bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300: (99) Cannot assign requested address
2018-06-20 08:56:52.012396 7feea6f44d00 1 Processor - bind was unable to bind. Trying again in 5 seconds
2018-06-20 08:56:57.025301 7feea6f44d00 1 Processor - bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300: (99) Cannot assign requested address
2018-06-20 08:56:57.025315 7feea6f44d00 1 Processor - bind was unable to bind after 3 attempts: (99) Cannot assign requested address

I have 25 OSD daemons:
osd1-node :: 192.168.5.74
osd2-node :: 192.168.5.75
osd3-node :: 192.168.5.76
osd4-node :: 192.168.5.77
osd5-node :: 192.168.5.78
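The error code at the end of each bind line, (99) Cannot assign requested address (EADDRNOTAVAIL), means the kernel refused the bind because the requested address is not configured on that machine. A quick first check on the node that logged the error (a minimal sketch, assuming the iproute2 tools are installed):

# is 192.168.5.77 actually configured on this node?
ip -o addr | grep 192.168.5.77

If this prints nothing, the daemon is being pointed at an address the node does not own; note that 192.168.5.77 is listed above as osd4-node, while the failing log is from osd2-node.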

History

#1 Updated by Greg Farnum almost 6 years ago

  • Tracker changed from Bug to Support

This may mean that the system doesn't have the IP the daemon is asking for (if you explicitly configured one), or that for some reason there's no available port — they're all eaten up by other processes. This sometimes happens if the OSD is crashing and restarting repeatedly and runs through the whole range before any of them are released for use again.
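To tell the two cases apart, the port side can be checked as well: roughly 500 listeners in the 6800-7300 range would mean the range is exhausted (a sketch, assuming ss from iproute2 is available):

# count listening TCP sockets in the OSD port range
ss -ltn 'sport >= :6800 and sport <= :7300' | tail -n +2 | wc -l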

#2 Updated by taehoon kim almost 6 years ago

Greg Farnum wrote:

This may mean that the system doesn't have the IP the daemon is asking for (if you explicitly configured one), or that for some reason there's no available port — they're all eaten up by other processes. This sometimes happens if the OSD is crashing and restarting repeatedly and runs through the whole range before any of them are released for use again.

Hi Greg Farnum,
Thank you for your reply.

How can I fix it?
I tried to reproduce the error on the OSD five to eight times, re-created the OSD, rebooted the OSD node, and so on, but the status did not change.

#3 Updated by taehoon kim almost 6 years ago

taehoon kim wrote:

Greg Farnum wrote:

This may mean that the system doesn't have the IP the daemon is asking for (if you explicitly configured one), or that for some reason there's no available port — they're all eaten up by other processes. This sometimes happens if the OSD is crashing and restarting repeatedly and runs through the whole range before any of them are released for use again.

Hi Greg Farnum,
Thank you for your reply.

How can I fix it?
I tried to reproduce the error on the OSD five to eight times, re-created the OSD, rebooted the OSD node, and so on, but the status did not change.
Is it not a bug?

#4 Updated by taehoon kim almost 6 years ago

taehoon kim wrote:

taehoon kim wrote:

Greg Farnum wrote:

This may mean that the system doesn't have the IP the daemon is asking for (if you explicitly configured one), or that for some reason there's no available port — they're all eaten up by other processes. This sometimes happens if the OSD is crashing and restarting repeatedly and runs through the whole range before any of them are released for use again.

Hi Greg Farnum,
Thank you for your reply.

How can I fix it?
I tried to reproduce the error on the OSD five to eight times, re-created the OSD, rebooted the OSD node, and so on, but it is still not fixed.
Is it not a bug?

#5 Updated by Greg Farnum almost 6 years ago

  • Status changed from New to Closed

It sounds like you're having more general problems with cluster setup than just an IP on this OSD. You should go to the users mailing list or irc. :)

#6 Updated by taehoon kim almost 6 years ago

Greg Farnum wrote:

It sounds like you're having more general problems with cluster setup than just an IP on this OSD. You should go to the users mailing list or irc. :)

Dear Greg Farnum,
Thank you for your reply.

Before this error appeared, the Ceph storage had been operating fine. As an OSD troubleshooting test, I forcibly pulled the disk behind ceph-osd@3.

Then I followed the procedure below:
1. ceph osd crush out osd.3
2. systemctl stop ceph-osd@3
3. ceph osd crush remove osd.3
4. ceph auth del osd.3
5. ceph osd rm osd.3
6. RAID config via MegaCli (RAID 0) -> I have no spare SATA cable, and there are not enough SATA ports on the server mainboard.
7. (on the ceph-deploy node) ceph-deploy disk list ceph-osd2
8. (on the ceph-deploy node) ceph-deploy disk zap ceph-osd2 /dev/sdc
9. (on the ceph-deploy node) ceph-deploy osd create --filestore --data /dev/sdc --journal /dev/sdg2 (see the note after this list)
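Step 9 as shown omits the target hostname; with ceph-deploy 2.x the filestore create call normally names the host at the end (a sketch, reusing the device names above; ceph-osd2 is assumed to be the target host):

# run on the ceph-deploy node; creates a filestore OSD on ceph-osd2
ceph-deploy osd create --filestore --data /dev/sdc --journal /dev/sdg2 ceph-osd2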

.
.
osd out osd.3
osd down osd.3
.
.
So I checked the OSD CRUSH map:
ID  CLASS WEIGHT   TYPE NAME          STATUS REWEIGHT PRI-AFF
-1        21.80859 root default
-3         4.54346     host ceph-osd1
 0    hdd  0.90869         osd.0          up  1.00000 1.00000
 6    hdd  0.90869         osd.6          up  1.00000 1.00000
 9    hdd  0.90869         osd.9          up  1.00000 1.00000
12    hdd  0.90869         osd.12         up  1.00000 1.00000
25    hdd  0.90869         osd.25         up  1.00000 1.00000
-5         3.63477     host ceph-osd2
 1    hdd  0.90869         osd.1          up  1.00000 1.00000
 7    hdd  0.90869         osd.7          up  1.00000 1.00000
10    hdd  0.90869         osd.10         up  1.00000 1.00000
13    hdd  0.90869         osd.13         up  1.00000 1.00000
-7         4.54346     host ceph-osd3
 2    hdd  0.90869         osd.2          up  1.00000 1.00000
 5    hdd  0.90869         osd.5          up  1.00000 1.00000
 8    hdd  0.90869         osd.8          up  1.00000 1.00000
11    hdd  0.90869         osd.11         up  1.00000 1.00000
14    hdd  0.90869         osd.14         up  1.00000 1.00000
-9         4.54346     host ceph-osd4
15    hdd  0.90869         osd.15         up  1.00000 1.00000
16    hdd  0.90869         osd.16         up  1.00000 1.00000
17    hdd  0.90869         osd.17         up  1.00000 1.00000
18    hdd  0.90869         osd.18         up  1.00000 1.00000
19    hdd  0.90869         osd.19         up  1.00000 1.00000
-11        4.54346     host ceph-osd5
20    hdd  0.90869         osd.20         up  1.00000 1.00000
21    hdd  0.90869         osd.21         up  1.00000 1.00000
22    hdd  0.90869         osd.22         up  1.00000 1.00000
23    hdd  0.90869         osd.23         up  1.00000 1.00000
24    hdd  0.90869         osd.24         up  1.00000 1.00000
 3          0           osd.3          down        0 1.00000  -----> not placed under the ceph-osd2 host

Did I do something wrong?
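For reference, the tree above shows osd.3 re-created at the root with weight 0 rather than under host ceph-osd2. Once the daemon can actually start, one way to put it back is the CRUSH set command (a sketch; the 0.90869 weight is an assumption matching the peer OSDs):

# place osd.3 under the ceph-osd2 host bucket with the same weight as its peers
ceph osd crush set osd.3 0.90869 host=ceph-osd2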

#7 Updated by taehoon kim over 5 years ago

taehoon kim wrote:

Greg Farnum wrote:

It sounds like you're having more general problems with cluster setup than just an IP on this OSD. You should go to the users mailing list or irc. :)

Dear Greg Farnum,
Thank you for your reply.

Before this error appeared, the Ceph storage had been operating fine. As an OSD troubleshooting test, I forcibly pulled the disk behind ceph-osd@3.

Then I followed the procedure below:
1. ceph osd crush out osd.3
2. systemctl stop ceph-osd@3
3. ceph osd crush remove osd.3
4. ceph auth del osd.3
5. ceph osd rm osd.3
6. RAID config via MegaCli (RAID 0) -> I have no spare SATA cable, and there are not enough SATA ports on the server mainboard.
7. (on the ceph-deploy node) ceph-deploy disk list ceph-osd2
8. (on the ceph-deploy node) ceph-deploy disk zap ceph-osd2 /dev/sdc
9. (on the ceph-deploy node) ceph-deploy osd create --filestore --data /dev/sdc --journal /dev/sdg2

.
.
osd out osd.3
osd down osd.3
.
.
So I checked the OSD CRUSH map:
ID  CLASS WEIGHT   TYPE NAME          STATUS REWEIGHT PRI-AFF
-1        21.80859 root default
-3         4.54346     host ceph-osd1
 0    hdd  0.90869         osd.0          up  1.00000 1.00000
 6    hdd  0.90869         osd.6          up  1.00000 1.00000
 9    hdd  0.90869         osd.9          up  1.00000 1.00000
12    hdd  0.90869         osd.12         up  1.00000 1.00000
25    hdd  0.90869         osd.25         up  1.00000 1.00000
-5         3.63477     host ceph-osd2
 1    hdd  0.90869         osd.1          up  1.00000 1.00000
 7    hdd  0.90869         osd.7          up  1.00000 1.00000
10    hdd  0.90869         osd.10         up  1.00000 1.00000
13    hdd  0.90869         osd.13         up  1.00000 1.00000
-7         4.54346     host ceph-osd3
 2    hdd  0.90869         osd.2          up  1.00000 1.00000
 5    hdd  0.90869         osd.5          up  1.00000 1.00000
 8    hdd  0.90869         osd.8          up  1.00000 1.00000
11    hdd  0.90869         osd.11         up  1.00000 1.00000
14    hdd  0.90869         osd.14         up  1.00000 1.00000
-9         4.54346     host ceph-osd4
15    hdd  0.90869         osd.15         up  1.00000 1.00000
16    hdd  0.90869         osd.16         up  1.00000 1.00000
17    hdd  0.90869         osd.17         up  1.00000 1.00000
18    hdd  0.90869         osd.18         up  1.00000 1.00000
19    hdd  0.90869         osd.19         up  1.00000 1.00000
-11        4.54346     host ceph-osd5
20    hdd  0.90869         osd.20         up  1.00000 1.00000
21    hdd  0.90869         osd.21         up  1.00000 1.00000
22    hdd  0.90869         osd.22         up  1.00000 1.00000
23    hdd  0.90869         osd.23         up  1.00000 1.00000
24    hdd  0.90869         osd.24         up  1.00000 1.00000
 3          0           osd.3          down        0 1.00000  -----> not placed under the ceph-osd2 host

Did I do something wrong?

Regarding the monmap options: in /var/run/ceph I ran

ceph --admin-daemon ceph-mon.ceph-mon.asok config show | grep public
.
.
"public_bind_addr": "-"
.
.

What do you think about this option?
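If the OSD on osd2-node is somehow resolving its public address to 192.168.5.77, one experiment is to pin the address for that daemon in ceph.conf and restart it (a sketch; the [osd.3] section and the 192.168.5.75 address are assumptions taken from the node list above):

# hypothetical ceph.conf fragment on osd2-node
[osd.3]
        public addr = 192.168.5.75

# restart and re-check what the daemon actually bound to
systemctl restart ceph-osd@3
ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok config show | grep addr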
