Bug #49654 » ceph-s.txt
ceph status - Frank Holtz, 03/08/2021 02:33 PM
  cluster:
    id:     UUID
    health: HEALTH_WARN
            9 failed cephadm daemon(s)
            10 stray daemon(s) not managed by cephadm

  services:
    mon:         5 daemons, quorum s105,s106,s104,s103,vs036 (age 2h)
    mgr:         vs036.gdjgpg(active, since 3d), standbys: s102.cwxqgb, s105.laxklh, s106.vifupj, s101.ycvekp, s103.evfdyh, s104.qwtlbv
    mds:         cephfs01:1 {0=cephfs01.s104.ijghfv=up:active} 5 up:standby
    osd:         36 osds: 36 up (since 2h), 36 in (since 2w)
    tcmu-runner: 10 daemons active (s101:iscsi-data/XenDesktop_Server_1_DS1, s101:iscsi-data/XenDesktop_Server_1_DS2, s101:iscsi-data/oVirt-Services-CL11_1, s101:iscsi-data/oVirt-Services-CL21_1, s101:iscsi-data/oVrt-Admin-glusterfs-s079, s102:iscsi-data/XenDesktop_Server_1_DS1, s102:iscsi-data/XenDesktop_Server_1_DS2, s102:iscsi-data/oVirt-Services-CL11_1, s102:iscsi-data/oVirt-Services-CL21_1, s102:iscsi-data/oVrt-Admin-glusterfs-s079)

  task status:

  data:
    pools:   12 pools, 961 pgs
    objects: 11.01M objects, 25 TiB
    usage:   79 TiB used, 106 TiB / 185 TiB avail
    pgs:     961 active+clean

  io:
    client: 1.9 MiB/s rd, 913 KiB/s wr, 17 op/s rd, 154 op/s wr