Bug #12877
Updated by Loïc Dachary over 8 years ago
h3. Steps to reproduce

* ceph-disk prepare /dev/sdb
* ceph-disk activate /dev/sdb1

h3. Original description

Ceph deploy version: 1.5.28

The test fails because it is unable to unmount the OSDs. The recent changes to ceph-disk may have fixed a previous issue: 'ceph-deploy osd prepare' alone used to be sufficient for the OSDs to come 'up' and 'in', but now 'osd activate' may also be required. This needs to be checked.

<pre>
2015-08-31T05:13:09.146 INFO:teuthology.orchestra.run.burnupi19:Running: 'sudo ceph -s'
2015-08-31T05:13:09.394 INFO:teuthology.orchestra.run.burnupi19.stdout:    cluster f63e827f-a9e3-43cb-bea9-5d301d657009
2015-08-31T05:13:09.395 INFO:teuthology.orchestra.run.burnupi19.stdout:     health HEALTH_WARN
2015-08-31T05:13:09.395 INFO:teuthology.orchestra.run.burnupi19.stdout:            64 pgs stuck inactive
2015-08-31T05:13:09.395 INFO:teuthology.orchestra.run.burnupi19.stdout:            64 pgs stuck unclean
2015-08-31T05:13:09.396 INFO:teuthology.orchestra.run.burnupi19.stdout:     monmap e1: 1 mons at {burnupi19=10.214.134.14:6789/0}
2015-08-31T05:13:09.396 INFO:teuthology.orchestra.run.burnupi19.stdout:            election epoch 2, quorum 0 burnupi19
2015-08-31T05:13:09.396 INFO:teuthology.orchestra.run.burnupi19.stdout:     osdmap e4: 3 osds: 0 up, 0 in
2015-08-31T05:13:09.396 INFO:teuthology.orchestra.run.burnupi19.stdout:            flags sortbitwise
2015-08-31T05:13:09.397 INFO:teuthology.orchestra.run.burnupi19.stdout:      pgmap v5: 64 pgs, 1 pools, 0 bytes data, 0 objects
2015-08-31T05:13:09.397 INFO:teuthology.orchestra.run.burnupi19.stdout:            0 kB used, 0 kB / 0 kB avail
2015-08-31T05:13:09.397 INFO:teuthology.orchestra.run.burnupi19.stdout:                  64 creating
2015-08-31T05:13:09.416 INFO:teuthology.orchestra.run.burnupi19:Running: 'sudo ceph health'
</pre>

more logs: http://qa-proxy.ceph.com/teuthology/teuthology-2015-08-31_05:00:06-smoke-master-distro-basic-multi/1039702/teuthology.log
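A minimal sketch of the suspected full bring-up sequence, assuming the hypothesis above is right (prepare alone no longer brings OSDs 'up'/'in', so an explicit activate is needed). The device names are the example devices from the steps to reproduce, not verified against the failing run; the commands are echoed as a dry run rather than executed, since they require a live cluster and root privileges.

<pre>
#!/bin/sh
# Dry run of the suspected required sequence: prepare the raw device,
# then explicitly activate the resulting data partition, then check
# that the OSDs report "up" and "in". Device names are placeholders.
for cmd in \
    'ceph-disk prepare /dev/sdb' \
    'ceph-disk activate /dev/sdb1' \
    'sudo ceph -s'
do
    echo "+ $cmd"    # replace echo with eval to actually run the step
done
</pre>

If activate turns out to be mandatory, the teuthology task (or ceph-deploy) would need to run it after every prepare before polling cluster health.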