Bug #3829
new osd added to the cluster is not receiving data
Status: Closed
Description
ceph version: 0.56.1 (e4a541624df62ef353e754391cbbb707f54b16f7)
1. Initially had a cluster [burnupi21, burnupi22, burnupi23, burnupi24] running on v0.56.1.
2. While running a bonnie workload on the cluster from a client, uninstalled ceph on burnupi24, installed argonaut v0.48.3, and then upgraded it back to v0.56.1.
3. In order to bring a new osd up on burnupi24, executed the following commands on burnupi24:
ceph osd create [this created osd.5]
ceph-osd -i 5 --mkfs --mkkey
ceph auth add osd.5 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-5/keyring
ceph osd crush set osd.5 1.0 root=default
4. Started ceph after adding an entry for osd.5 to ceph.conf on all the hosts.
5. Now the ceph osd tree looks like:
ubuntu@burnupi21:/etc/ceph$ sudo ceph osd tree
# id    weight  type name               up/down reweight
-1      4       pool default
-3      4           rack unknownrack
-2      1               host burnupi21
1       1                   osd.1       up      0
-4      1               host burnupi22
2       1                   osd.2       up      1
-5      1               host burnupi23
3       1                   osd.3       up      0
-6      1               host burnupi24
4       1                   osd.4       down    0
0       0                   osd.0       down    0
5       0                   osd.5       up      1
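Note that in the tree output above osd.5 (like osd.0) shows a CRUSH weight of 0 even though `ceph osd crush set osd.5 1.0 root=default` was run; CRUSH never selects a zero-weight OSD, which would be consistent with osd.5 receiving no data. A minimal sketch that filters the pasted tree output for zero-weight OSD entries (the inlined sample text and the column positions are assumptions based on the output format shown above, not a general parser for every ceph version):

```shell
# Hypothetical filter: find OSD rows with a zero CRUSH weight in
# "ceph osd tree" output. Columns assumed: id, weight, name, up/down, reweight.
tree='1 1 osd.1 up 0
2 1 osd.2 up 1
3 1 osd.3 up 0
4 1 osd.4 down 0
0 0 osd.0 down 0
5 0 osd.5 up 1'

# Print name, weight, and state for every osd.* row whose weight column is 0.
echo "$tree" | awk '$3 ~ /^osd\./ && $2 == 0 { print $3, "weight", $2, $4 }'
```

On the sample above this flags osd.0 and osd.5, matching the OSDs that hold no data.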
6. The osd.5 that is up and running on burnupi24 doesn't seem to receive any I/O from the client.
ubuntu@burnupi24:/etc/ceph$ sudo cat ceph.conf
[global]
auth client required = cephx
auth service required = cephx
auth cluster required = cephx
debug ms = 1
[client]
log file = /var/log/ceph/client.admin.log
debug client = 20
[osd]
osd journal size = 1000
filestore xattr use omap = true
debug osd = 20
[osd.1]
host = burnupi21
[osd.2]
host = burnupi22
[osd.3]
host = burnupi23
[osd.4]
host = burnupi24
osd min pg log entries = 10
[osd.5]
host = burnupi24
[mon.a]
host = burnupi21
mon addr = 10.214.134.10:6789
[mon.b]
host = burnupi22
mon addr = 10.214.134.8:6789
[mon.c]
host = burnupi23
mon addr = 10.214.134.6:6789
[mds.a]
host = burnupi21
Leaving the cluster in its current state for reference.
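For reference, CRUSH distributes data roughly in proportion to CRUSH weight, so an OSD's expected share of the data is its weight divided by the total weight of the OSDs eligible for placement. A proportionality sketch using the weights of the up OSDs from the tree output above (this is not a real CRUSH placement computation, just the weight ratio):

```shell
# Rough expected data share per OSD = weight / total weight.
# Weights copied from the "ceph osd tree" output above (up OSDs only);
# with weight 0, osd.5's expected share comes out as 0.00.
weights='osd.1 1
osd.2 1
osd.3 1
osd.5 0'

echo "$weights" | awk '
  { w[$1] = $2; total += $2 }
  END { for (o in w) printf "%s %.2f\n", o, total ? w[o] / total : 0 }'
```

A zero share here is exactly the symptom reported: the cluster is healthy enough for osd.5 to be up, but no PGs ever map to it.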