
Bug #13988

Updated by Loïc Dachary over 8 years ago

Steps to reproduce 

 <pre> 
 teuthology-openstack --verbose --key-filename ~/Downloads/myself --key-name loic --teuthology-git-url http://github.com/dachary/teuthology --teuthology-branch wip-suite --ceph-qa-suite-git-url http://github.com/dachary/ceph-qa-suite --suite-branch wip-ceph-disk --ceph-git-url http://github.com/dachary/ceph --ceph master --suite ceph-disk --filter ubuntu_14.04 
 </pre> 

It will sleep forever with two targets provisioned and ready to be used.

 * ssh to the target that runs the monitor
 * git clone http://github.com/ceph/ceph 
 * cd ceph/qa/workunits/ceph-disk
 * sudo bash 
 * bash ceph-disk.sh  
 * Control-C when it starts to run the tests (a consolidated sketch of these steps follows the list)
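
 For reference, the manual steps above boil down to a session like the following. This is a sketch, not a verbatim transcript: the hostname target167114226249 is taken from the output below, and the login user depends on how teuthology provisioned the target.

 <pre>
 # from the machine holding the teuthology SSH key; hostname is illustrative
 ssh target167114226249          # the target running the monitor
 # then, on the target:
 git clone http://github.com/ceph/ceph
 cd ceph/qa/workunits/ceph-disk
 sudo bash
 bash ceph-disk.sh               # Control-C once the tests start running
 </pre>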

 Although the problem shows up when running the tests, it is easier to reproduce as follows:

 ceph version 10.0.0-855-g15a81bb (15a81bb7121799ba1b71b88b356998ebc8effec9) 

 <pre> 
 [root@target167114226249 ceph-disk]# uuid=$(uuidgen) ; ceph-disk prepare --osd-uuid $uuid /dev/vdd 
 [root@target167114226249 ceph-disk]# id=$(ceph osd create $uuid) 
 [root@target167114226249 ceph-disk]# echo $id 
 4 
 [root@target167114226249 ceph-disk]# ceph osd tree 
 ID WEIGHT    TYPE NAME                     UP/DOWN REWEIGHT PRIMARY-AFFINITY 
 ... 
  4 0.00969           osd.4                      up    1.00000            1.00000 
 [root@target167114226249 ceph-disk]# ceph-disk deactivate --deactivate-by-id $id ; ceph-disk destroy --zap --destroy-by-id $id 
 [root@target167114226249 ceph-disk]# ceph osd tree 
 ID WEIGHT    TYPE NAME                     UP/DOWN REWEIGHT PRIMARY-AFFINITY 
 -1 0.01938 root default 
 -3         0       rack localrack 
 -2         0           host localhost 
 -4 0.01938       host target167114226249 
  2 0.00969           osd.2                    down    1.00000            1.00000 
  3 0.00969           osd.3                    down    1.00000            1.00000 
 [root@target167114226249 ceph-disk]# ceph-disk list /dev/vdd 
 /dev/vdd other, unknown 
 [root@target167114226249 ceph-disk]# ceph-disk prepare --osd-uuid $uuid /dev/vdd 
 [root@target167114226249 ceph-disk]# sleep 300 ; ceph osd tree 
 ... 
  4         0 osd.4                            down    1.00000            1.00000 
 [root@target167114226249 ceph-disk]#  
 </pre> 
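
 The sleep 300 above is just a blind wait; a minimal polling sketch (same $id as above) makes it easier to check whether the re-prepared OSD ever comes up:

 <pre>
 # poll "ceph osd tree" for up to ~5 minutes instead of a fixed sleep
 for i in $(seq 1 60) ; do
     ceph osd tree | grep "osd\.$id " | grep -q ' up ' && break
     sleep 5
 done
 ceph osd tree | grep "osd\.$id "   # still down here, which is the bug
 </pre>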
