
Bug #55382

Updated by Radoslaw Zarzynski about 2 years ago

OSDs are created on only one node in the cluster, even though all nodes have identical hardware configurations.

*Nodes List:*

 <pre> 
 [root@bruuni006 examples]# kubectl get node 
 NAME          STATUS     ROLES                    AGE       VERSION 
 bruuni006     Ready      control-plane,master     4h37m     v1.23.5 
 bruuni007     Ready      <none>                   4h36m     v1.23.5 
 bruuni008     Ready      <none>                   4h35m     v1.23.5 
 bruuni010     Ready      <none>                   4h35m     v1.23.5 
 [root@bruuni006 examples]#
 </pre>

 *Node Configurations:*

 <pre>
 [root@bruuni006 examples]# lsblk
 NAME      MAJ:MIN RM     SIZE RO TYPE MOUNTPOINT 
 sda         8:0      0 447.1G    0 disk  
 └─sda1      8:1      0 447.1G    0 part / 
 sdb         8:16     0 894.3G    0 disk  
 sdc         8:32     0 894.3G    0 disk  
 sdd         8:48     0 894.3G    0 disk  
 sde         8:64     0     1.8T    0 disk  
 sdf         8:80     0     1.8T    0 disk  
 nvme0n1 259:0      0     1.5T    0 disk  
 [root@bruuni006 examples]#
 [root@bruuni007 ubuntu]# lsblk
 NAME      MAJ:MIN RM     SIZE RO TYPE MOUNTPOINT
 sda         8:0      0 447.1G    0 disk  
 └─sda1      8:1      0 447.1G    0 part / 
 sdb         8:16     0 894.3G    0 disk  
 sdc         8:32     0 894.3G    0 disk  
 sdd         8:48     0 894.3G    0 disk  
 sde         8:64     0     1.8T    0 disk  
 sdf         8:80     0     1.8T    0 disk  
 nvme0n1 259:0      0     1.5T    0 disk  
 [root@bruuni007 ubuntu]# 
 [root@bruuni010 ubuntu]# lsblk 
 NAME      MAJ:MIN RM     SIZE RO TYPE MOUNTPOINT 
 sda         8:0      0 447.1G    0 disk  
 └─sda1      8:1      0 447.1G    0 part / 
 sdb         8:16     0 894.3G    0 disk  
 sdc         8:32     0 894.3G    0 disk  
 sdd         8:48     0 894.3G    0 disk  
 sde         8:64     0     1.8T    0 disk  
 sdf         8:80     0     1.8T    0 disk  
 nvme0n1 259:0      0     1.5T    0 disk  
 [root@bruuni010 ubuntu]#
 </pre>

 *CEPH Details:*

 <pre>
 [rook@rook-ceph-tools-d6d7c985c-ksjcs /]$ ceph -v
 ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable) 
 [rook@rook-ceph-tools-d6d7c985c-ksjcs /]$ 

 [rook@rook-ceph-tools-d6d7c985c-ksjcs /]$ ceph osd tree 
 ID    CLASS    WEIGHT     TYPE NAME             STATUS    REWEIGHT    PRI-AFF 
 -1           6.84087    root default                                  
 -2           6.84087        host bruuni010                            
  0           1.81940            osd.0             up     1.00000    1.00000 
  1           1.81940            osd.1             up     1.00000    1.00000 
  2           1.45549            osd.2             up     1.00000    1.00000 
  3           0.87329            osd.3             up     1.00000    1.00000 
  4           0.87329            osd.4             up     1.00000    1.00000 
 [rook@rook-ceph-tools-d6d7c985c-ksjcs /]$ 
 </pre> 
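The imbalance in the tree above can also be confirmed programmatically by grouping the JSON form of the tree (`ceph osd tree -f json`) by host. A minimal sketch follows; the payload is a hand-written sample mirroring the tree above, not captured cluster output:

```python
import json
from collections import defaultdict

# Illustrative payload mimicking `ceph osd tree -f json`: a flat "nodes"
# list where each host entry references its child OSD ids.
sample = json.loads("""
{
  "nodes": [
    {"id": -1, "name": "default", "type": "root", "children": [-2]},
    {"id": -2, "name": "bruuni010", "type": "host", "children": [0, 1, 2, 3, 4]},
    {"id": 0, "name": "osd.0", "type": "osd", "status": "up"},
    {"id": 1, "name": "osd.1", "type": "osd", "status": "up"},
    {"id": 2, "name": "osd.2", "type": "osd", "status": "up"},
    {"id": 3, "name": "osd.3", "type": "osd", "status": "up"},
    {"id": 4, "name": "osd.4", "type": "osd", "status": "up"}
  ]
}
""")

def osds_per_host(tree):
    """Map host name -> number of OSDs parented under it."""
    counts = defaultdict(int)
    for node in tree["nodes"]:
        if node["type"] == "host":
            counts[node["name"]] = len(node.get("children", []))
    return dict(counts)

counts = osds_per_host(sample)
print(counts)  # a healthy 3-node layout would show three hosts, not one
```

With the tree from this report, the result is a single host (`bruuni010`) owning all five OSDs, matching what `ceph osd tree` shows above.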

 *cluster.yaml parameters:*

 <pre>
 storage:
   useAllNodes: true
   useAllDevices: true
   #deviceFilter:
 healthCheck:
 </pre>
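With `useAllNodes: true` and `useAllDevices: true`, the operator is expected to prepare OSDs on every schedulable node. As a diagnostic workaround, nodes and devices can be listed explicitly instead; the fragment below is a hedged sketch only (node and device names are taken from the listings above, and the exact devices to use are an assumption):

 <pre>
 storage:
   useAllNodes: false
   useAllDevices: false
   nodes:
     - name: "bruuni007"
       devices:
         - name: "sdb"
     - name: "bruuni008"
       devices:
         - name: "sdb"
     - name: "bruuni010"
       devices:
         - name: "sdb"
 </pre>

If OSDs then appear on the explicitly listed nodes, the problem likely lies in node discovery or scheduling rather than in device detection.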
