
Feature #20611

Updated by Patrick Donnelly over 5 years ago

Here's what you see currently: 

<pre>
$ ceph fs set a down true
a marked down.
$ ceph mds fail 1:1 # rank 1 of 2
$ ceph mds fail 1:0 # rank 0 of 2
$ ceph status
  cluster:
    id:     e3d43918-f643-442b-bacc-5a1c1d9a8a7a
    health: HEALTH_ERR
            1 filesystem is offline
            1 filesystem is degraded

  services:
    mon: 3 daemons, quorum a,b,c (age 100s)
    mgr: x(active, since 96s)
    mds: a-0/0/0 up , 3 up:standby
    osd: 3 osds: 3 up (since 64s), 3 in (since 64s)

  data:
    pools:   2 pools, 16 pgs
    objects: 22 objects, 2.2 KiB
    usage:   3.2 GiB used, 27 GiB / 30 GiB avail
    pgs:     16 active+clean
</pre>
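
For reference, a minimal sketch of undoing the above, assuming the filesystem is still named "a" as in the transcript (`down false` is the documented counterpart of `down true`):

<pre>
# Assumes the same filesystem name ("a") as in the transcript above.
$ ceph fs set a down false    # clear the down flag so standbys can claim the failed ranks
$ ceph status                 # health should return to HEALTH_OK once the ranks are active again
</pre>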
