
Bug #45980

Updated by Nathan Cutler almost 4 years ago

This one is easy to reproduce. 

 Ask cephadm to create FileStore OSDs: 

 <pre> 
 master:~ # cat service_spec_osd.yml  
 service_type: osd 
 placement: 
     hosts: 
         - 'master' 
 service_id: generic_osd_deployment 
 data_devices: 
     all: true 
 objectstore: filestore 
 master:~ # ceph orch device ls --refresh 
 master:~ # ceph orch apply osd -i service_spec_osd.yml 
 </pre> 
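For reference, the spec above can be sanity-checked before applying it. This is an illustrative sketch only, with the field names taken from the YAML above; the check logic is not cephadm's actual spec validator:

```python
# Hypothetical sanity check for the OSD service spec shown above.
# The dict mirrors service_spec_osd.yml; the validation logic is an
# illustration, not cephadm's real code.
spec = {
    "service_type": "osd",
    "service_id": "generic_osd_deployment",
    "placement": {"hosts": ["master"]},
    "data_devices": {"all": True},
    "objectstore": "filestore",
}

def validate_osd_spec(spec):
    """Return a list of problems found in the spec (illustrative only)."""
    problems = []
    if spec.get("service_type") != "osd":
        problems.append("service_type must be 'osd'")
    if not spec.get("placement", {}).get("hosts"):
        problems.append("placement.hosts must list at least one host")
    if spec.get("objectstore") not in ("bluestore", "filestore"):
        problems.append("objectstore must be 'bluestore' or 'filestore'")
    return problems

print(validate_osd_spec(spec))  # -> []
```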

Wait a while. Note that the OSDs do not come up. Investigate. The ceph-volume.log contains a Python traceback: 

 <pre> 
 Traceback (most recent call last): 
   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc 
     return f(*a, **kw) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 150, in main 
     terminal.dispatch(self.mapper, subcommand_args) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch 
     instance.main() 
   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main 
     terminal.dispatch(self.mapper, self.argv) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch 
     instance.main() 
   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 341, in main 
     self.activate(args) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root 
     return func(*a, **kw) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 265, in activate 
     activate_bluestore(lvs, no_systemd=args.no_systemd) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 128, in activate_bluestore 
     raise RuntimeError('could not find a bluestore OSD to activate') 
 RuntimeError: could not find a bluestore OSD to activate 
 </pre> 

 Hmm, I can't think of any good reason why ceph-volume should be saying that in this scenario. 
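For what it's worth, the traceback shows activate() going straight into activate_bluestore() even though the spec requested filestore. ceph-volume records the objectstore role in LV tags, so a dispatch along these lines would produce exactly this error whenever no LV carries bluestore tags. The following is an illustrative sketch under that assumption, not ceph-volume's actual code:

```python
# Illustrative sketch of tag-based activation dispatch. FakeLV and the
# dispatch logic are assumptions for demonstration, not ceph-volume's
# real implementation.
class FakeLV:
    def __init__(self, tags):
        self.tags = tags

def activate(lvs):
    # A bluestore OSD's data LV is conventionally tagged ceph.type=block.
    bluestore_lvs = [lv for lv in lvs if lv.tags.get("ceph.type") == "block"]
    if not bluestore_lvs:
        # This is the error seen in ceph-volume.log above.
        raise RuntimeError("could not find a bluestore OSD to activate")
    return bluestore_lvs

# A filestore OSD would carry data/journal LVs instead of a 'block' LV,
# so an unconditional bluestore lookup finds nothing:
filestore_lvs = [FakeLV({"ceph.type": "data"}), FakeLV({"ceph.type": "journal"})]
try:
    activate(filestore_lvs)
except RuntimeError as e:
    print(e)  # could not find a bluestore OSD to activate
```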

The underlying physical disks I am using are 32 GB in size, yet "ceph orch device ls" complains they are too small (?): 

 <pre> 
 master:~ # ceph orch device ls 
 HOST      PATH        TYPE     SIZE    DEVICE    AVAIL    REJECT REASONS                                           
 master    /dev/vda    hdd     42.0G            False    locked                                                   
 master    /dev/vdb    hdd     32.0G    615389    False    LVM detected, locked, Insufficient space (<5GB) on vgs   
 master    /dev/vdc    hdd     32.0G    916054    False    LVM detected, locked, Insufficient space (<5GB) on vgs   
 master    /dev/vdd    hdd     32.0G    389182    False    LVM detected, locked, Insufficient space (<5GB) on vgs   
 master    /dev/vde    hdd     32.0G    389751    False    LVM detected, locked, Insufficient space (<5GB) on vgs   
 </pre>
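The "Insufficient space (&lt;5GB) on vgs" rejection presumably comes from a free-space threshold applied to the volume groups found on each device, not to the raw disk size. A minimal sketch of such a filter (the threshold constant, field names, and check logic are assumptions, not ceph's actual drive-selection code):

```python
# Hypothetical device filter mimicking the rejection reasons in the
# 'ceph orch device ls' output above; not ceph's actual code.
MIN_VG_FREE_BYTES = 5 * 1024**3  # 5 GB threshold, as the message suggests

def reject_reasons(device):
    reasons = []
    if device.get("has_lvm"):
        reasons.append("LVM detected")
    if device.get("locked"):
        reasons.append("locked")
    if device.get("vg_free_bytes", 0) < MIN_VG_FREE_BYTES:
        reasons.append("Insufficient space (<5GB) on vgs")
    return reasons

# A 32 GB disk whose VG is fully allocated still gets rejected, because
# the check looks at *free* VG space, not total disk size:
vdb = {"path": "/dev/vdb", "size_bytes": 32 * 1024**3,
       "has_lvm": True, "locked": True, "vg_free_bytes": 0}
print(reject_reasons(vdb))
```

Under this reading, the message is not about the 32 GB disks being small; it is about the VGs already on them having less than 5 GB free.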
