https://tracker.ceph.com/https://tracker.ceph.com/favicon.ico2018-06-25T17:19:46ZCeph Ceph - Support #24602: bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:https://tracker.ceph.com/issues/24602?journal_id=1156612018-06-25T17:19:46ZGreg Farnumgfarnum@redhat.com
<ul><li><strong>Tracker</strong> changed from <i>Bug</i> to <i>Support</i></li></ul><p>This may mean that the system doesn't have the IP the daemon is asking for (if you explicitly configured one), or that for some reason there's no available port — they're all eaten up by other processes. This sometimes happens if the OSD is crashing and restarting repeatedly and runs through the whole range before any of them are released for use again.</p> Ceph - Support #24602: bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:https://tracker.ceph.com/issues/24602?journal_id=1156852018-06-26T01:11:23Ztaehoon kim
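<p>Both possibilities above can be checked directly on the OSD node. A sketch (editor's addition; assumes iproute2's <code>ip</code> and <code>ss</code> are available, and uses the 192.168.5.77 address from the error message):</p>

```shell
# Does this node actually own the address the OSD tried to bind?
ip -o addr show 2>/dev/null | grep -F '192.168.5.77' || echo "address not present on this node"

# Which listeners already occupy the OSD port range 6800-7300?
# (ss filter syntax; ports in filters are written with a leading colon)
ss -tln '( sport >= :6800 and sport <= :7300 )' 2>/dev/null || true

# The range holds only 501 ports, so a crash-looping daemon can walk
# through all of them before the kernel releases the old sockets.
echo $((7300 - 6800 + 1))
```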
<ul></ul><p>Greg Farnum wrote:</p>
<blockquote>
<p>This may mean that the system doesn't have the IP the daemon is asking for (if you explicitly configured one), or that for some reason there's no available port — they're all eaten up by other processes. This sometimes happens if the OSD is crashing and restarting repeatedly and runs through the whole range before any of them are released for use again.</p>
</blockquote>
<p>Hi Greg Farnum,<br />Thank you for your reply.</p>
<p>How can I fix this?<br />I have tried re-creating the OSD 5 to 8 times, and also re-created the OSD and rebooted the OSD node, but the status has not changed.</p> Ceph - Support #24602: bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:https://tracker.ceph.com/issues/24602?journal_id=1156862018-06-26T02:18:44Ztaehoon kim
<ul></ul><p>taehoon kim wrote:</p>
<blockquote>
<p>Greg Farnum wrote:</p>
<blockquote>
<p>This may mean that the system doesn't have the IP the daemon is asking for (if you explicitly configured one), or that for some reason there's no available port — they're all eaten up by other processes. This sometimes happens if the OSD is crashing and restarting repeatedly and runs through the whole range before any of them are released for use again.</p>
</blockquote>
<p>Hi Greg Farnum,<br />Thank you for your reply.</p>
<p>How can I fix this?<br />I have tried re-creating the OSD 5 to 8 times, and also re-created the OSD and rebooted the OSD node, but the status has not changed.<br />Is this not a bug?</p>
</blockquote> Ceph - Support #24602: bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:https://tracker.ceph.com/issues/24602?journal_id=1156902018-06-26T04:12:06Ztaehoon kim
<ul></ul><p>taehoon kim wrote:</p>
<blockquote>
<p>taehoon kim wrote:</p>
<blockquote>
<p>Greg Farnum wrote:</p>
<blockquote>
<p>This may mean that the system doesn't have the IP the daemon is asking for (if you explicitly configured one), or that for some reason there's no available port — they're all eaten up by other processes. This sometimes happens if the OSD is crashing and restarting repeatedly and runs through the whole range before any of them are released for use again.</p>
</blockquote>
<p>Hi Greg Farnum,<br />Thank you for your reply.</p>
<p>How can I fix this?<br />I have tried re-creating the OSD 5 to 8 times, and also re-created the OSD and rebooted the OSD node, but this did not fix it.<br />Is this not a bug?</p>
</blockquote></blockquote> Ceph - Support #24602: bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:https://tracker.ceph.com/issues/24602?journal_id=1157142018-06-26T18:03:17ZGreg Farnumgfarnum@redhat.com
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Closed</i></li></ul><p>It sounds like you're having more general problems with cluster setup than just an IP on this OSD. You should go to the users mailing list or irc. :)</p> Ceph - Support #24602: bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:https://tracker.ceph.com/issues/24602?journal_id=1157512018-06-27T08:31:06Ztaehoon kim
<ul></ul><p>Greg Farnum wrote:</p>
<blockquote>
<p>It sounds like you're having more general problems with cluster setup than just an IP on this OSD. You should go to the users mailing list or irc. :)</p>
</blockquote>
<p>Dear Greg Farnum,<br />Thank you for your opinions,</p>
<p>before this error rised, i operate ceph storage very well.<br />For OSD Troubleshooting test, i force export Disk(ceph-osd@3).</p>
<p>And i work below manual.<br />1. ceph osd cruch out osd.3<br />2. systemctl stop ceph-osd@3<br />3. ceph osd crush remove osd.3<br />4. ceph auth del osd.3<br />5. ceph osd rm osd.3<br />6. Raid config on Megacli (Raid0) -> i have not SATA cable. and not enough SATA Port on server Mainboard.<br />7. (on ceph-deploy node) ceph-deploy disk list ceph-osd2<br />8. (on ceph-deploy node) ceph-deploy disk zap ceph-osd2 /dev/sdc<br />9. (on ceph-deploy node) ceph-deploy create --filestore --data /dev/sdc --journal /dev/sdg2</p>
<p>...<br />osd out osd.3<br />osd down osd.3<br />...</p>
<p>So I checked the OSD CRUSH map:</p>
<pre>
ID  CLASS WEIGHT   TYPE NAME           STATUS REWEIGHT PRI-AFF
 -1       21.80859 root default
 -3        4.54346     host ceph-osd1
  0   hdd  0.90869         osd.0           up  1.00000 1.00000
  6   hdd  0.90869         osd.6           up  1.00000 1.00000
  9   hdd  0.90869         osd.9           up  1.00000 1.00000
 12   hdd  0.90869         osd.12          up  1.00000 1.00000
 25   hdd  0.90869         osd.25          up  1.00000 1.00000
 -5        3.63477     host ceph-osd2
  1   hdd  0.90869         osd.1           up  1.00000 1.00000
  7   hdd  0.90869         osd.7           up  1.00000 1.00000
 10   hdd  0.90869         osd.10          up  1.00000 1.00000
 13   hdd  0.90869         osd.13          up  1.00000 1.00000
 -7        4.54346     host ceph-osd3
  2   hdd  0.90869         osd.2           up  1.00000 1.00000
  5   hdd  0.90869         osd.5           up  1.00000 1.00000
  8   hdd  0.90869         osd.8           up  1.00000 1.00000
 11   hdd  0.90869         osd.11          up  1.00000 1.00000
 14   hdd  0.90869         osd.14          up  1.00000 1.00000
 -9        4.54346     host ceph-osd4
 15   hdd  0.90869         osd.15          up  1.00000 1.00000
 16   hdd  0.90869         osd.16          up  1.00000 1.00000
 17   hdd  0.90869         osd.17          up  1.00000 1.00000
 18   hdd  0.90869         osd.18          up  1.00000 1.00000
 19   hdd  0.90869         osd.19          up  1.00000 1.00000
-11        4.54346     host ceph-osd5
 20   hdd  0.90869         osd.20          up  1.00000 1.00000
 21   hdd  0.90869         osd.21          up  1.00000 1.00000
 22   hdd  0.90869         osd.22          up  1.00000 1.00000
 23   hdd  0.90869         osd.23          up  1.00000 1.00000
 24   hdd  0.90869         osd.24          up  1.00000 1.00000
  3        0               osd.3         down        0 1.00000  -----> not under the ceph-osd2 host bucket
</pre>
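<p>The stray entry at the bottom of the tree (osd.3 down, weight 0, outside every host bucket) usually means the old id was never fully purged, or that a crash-looping daemon re-registered it. A possible cleanup sequence, sketched under that assumption (run on a node with an admin keyring):</p>
<pre>
# Stop any leftover daemon first so a crash loop cannot re-create the entry
systemctl stop ceph-osd@3 || true
ceph osd crush remove osd.3   # drop it from the CRUSH hierarchy
ceph auth del osd.3           # delete its cephx key
ceph osd rm osd.3             # remove the id from the OSD map
ceph osd tree                 # osd.3 should no longer appear
</pre>
<p>Only after the id is fully gone should the disk be re-deployed; otherwise the new daemon can inherit the half-removed entry.</p>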
<p>Did I do something wrong?</p> Ceph - Support #24602: bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:https://tracker.ceph.com/issues/24602?journal_id=1159702018-07-02T09:45:17Ztaehoon kim
<ul></ul><p>taehoon kim wrote:</p>
<blockquote>
<p>[...]</p>
</blockquote>
<p>Regarding the monmap options, from /var/run/ceph:</p>
<pre>
ceph --admin-daemon ceph-mon.ceph-mon.asok config show | grep public
.
.
"public_bind_addr": "-"
.
.
</pre>
<p>What do you think about this option?</p>
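<p>As far as I can tell (editor's note, not from the thread), <code>public_bind_addr</code> reported as "-" simply means the option is unset, which is the default; it matters mainly for monitors behind NAT that must bind to a different address than they advertise. The addresses the daemons actually use can be inspected with, for example:</p>
<pre>
ceph mon dump        # addresses the monitors advertise to clients
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok \
    config show | grep public_addr     # address the local mon resolved
</pre>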