Bug #58707 » ceph-status.txt
ceph status output - Roel van Meer, 02/13/2023 01:50 PM
  cluster:
    id:     e562eff3-379d-4c99-abbc-cf75bad28851
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum ceph01-ssd-dlf,ceph03-ssd-dlf,ceph01-hdd-dlf,ceph03-hdd-dlf,ceph05-hdd-dlf (age 2w)
    mgr: ceph01-hdd-dlf(active, since 2w), standbys: ceph02-hdd-dlf, ceph03-hdd-dlf, ceph01-ssd-dlf, ceph02-ssd-dlf, ceph03-ssd-dlf, ceph04-hdd-dlf, ceph04-ssd-dlf, ceph05-hdd-dlf, ceph05-ssd-dlf, ceph06-ssd-dlf, ceph07-ssd-dlf, ceph08-ssd-dlf
    mds: 1/1 daemons up, 12 standby
    osd: 140 osds: 140 up (since 2w), 140 in (since 3w)

  data:
    volumes: 1/1 healthy
    pools:   5 pools, 3329 pgs
    objects: 61.18M objects, 219 TiB
    usage:   622 TiB used, 393 TiB / 1015 TiB avail
    pgs:     3316 active+clean
             13   active+clean+scrubbing+deep

  io:
    client: 121 MiB/s rd, 93 MiB/s wr, 2.05k op/s rd, 6.48k op/s wr