Bug #45604

Updated by Volker Theile almost 4 years ago

Creating an OSD using the following commands fails. 

 <pre> 
 $ ceph --version 
 ceph version 15.2.1-277-g17d346932e (17d346932e584056578f1b6c4c49b43ac2a712a2) octopus (stable) 

 $ ceph orch device ls 
 HOST    PATH      TYPE  SIZE   DEVICE  AVAIL  REJECT REASONS 
 master  /dev/vda  hdd   42.0G          False  locked 
 node1   /dev/vda  hdd   42.0G          False  locked 
 node1   /dev/vdb  hdd   8192M  999162  False  Insufficient space (<5GB) on vgs, locked, LVM detected 
 node1   /dev/vdc  hdd   8192M  230824  False  Insufficient space (<5GB) on vgs, locked, LVM detected 
 node2   /dev/vdb  hdd   8192M  282151  True 
 node2   /dev/vda  hdd   42.0G          False  locked 
 node2   /dev/vdc  hdd   8192M  271426  False  LVM detected 
 node3   /dev/vdb  hdd   8192M  577378  True 
 node3   /dev/vdc  hdd   8192M  727088  True 
 node3   /dev/vda  hdd   42.0G          False  locked 

 $ cat <<EOF > /tmp/cephadm-apply.yml 
 service_type: osd 
 service_id: osd.testing_dg_node3 
 placement: 
     host_pattern: node3 
 data_devices:  
     all: true 
 EOF 
 $ ceph orch apply -i /tmp/cephadm-apply.yml 
 </pre> 
 or 
 <pre> 
 $ echo "{\"service_type\": \"osd\", \"placement\": {\"host_pattern\": \"node3\"}, \"service_id\": \"osd.testing_dg_node3\", \"data_devices\": {\"all\": True}}" | ceph orch apply osd -i - 
 </pre> 
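Note that the inline spec above uses Python-style <code>True</code> rather than JSON's lowercase <code>true</code>; the spec parser happens to accept it because the input is read as YAML, where <code>True</code> is a valid boolean. A sketch of a less error-prone way to produce the same document (the script name in the usage line is hypothetical):

<pre>
import json

# Same OSD service spec as in the echo command above.
spec = {
    "service_type": "osd",
    "service_id": "osd.testing_dg_node3",
    "placement": {"host_pattern": "node3"},
    "data_devices": {"all": True},
}

# json.dumps serializes the Python bool as lowercase 'true',
# which both JSON and YAML parsers accept.
print(json.dumps(spec))
</pre>

The output can then be piped in the same way, e.g. <code>python3 gen_spec.py | ceph orch apply osd -i -</code>.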

 The attached log from node3 contains, for example: 
 <pre> 
 [2020-05-19 09:31:54,556][ceph_volume][ERROR ] exception caught by decorator 
 Traceback (most recent call last): 
   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc 
     return f(*a, **kw) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 150, in main 
     terminal.dispatch(self.mapper, subcommand_args) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch 
     instance.main() 
   File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 38, in main 
     self.format_report(Devices()) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/inventory/main.py", line 42, in format_report 
     print(json.dumps(inventory.json_report())) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 51, in json_report 
     output.append(device.json_report()) 
   File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 202, in json_report 
     output['lvs'] = [lv.report() for lv in self.lvs] 
   File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 202, in <listcomp> 
     output['lvs'] = [lv.report() for lv in self.lvs] 
   File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 945, in report 
     'cluster_name': self.tags['ceph.cluster_name'], 
 KeyError: 'ceph.cluster_name' 
 </pre>
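The KeyError is raised because <code>report()</code> in <code>ceph_volume/api/lvm.py</code> indexes <code>self.tags['ceph.cluster_name']</code> directly, which fails for LVs that carry no Ceph tags (e.g. the foreign LVM volumes that <code>ceph orch device ls</code> flags with "LVM detected"). A minimal standalone sketch of a defensive lookup, not the actual ceph-volume patch:

<pre>
# Sketch: reproduce and guard the KeyError from the traceback.
# 'tags' stands in for an LV's tag dict; a non-Ceph LV has no 'ceph.*' keys.
def report(tags):
    # Direct indexing (tags['ceph.cluster_name']) raises KeyError on a
    # non-Ceph LV; dict.get() with a fallback keeps the inventory
    # JSON report from aborting.
    return {
        'cluster_name': tags.get('ceph.cluster_name', ''),
        'osd_id': tags.get('ceph.osd_id', ''),
    }

print(report({}))                             # foreign (non-Ceph) LV
print(report({'ceph.cluster_name': 'ceph'}))  # tagged Ceph LV
</pre>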
