Bug #22232
Updated by Jan Fajerski over 6 years ago
Consider this output:

<pre><code class="text">
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    30.2G     14.3G        15.9G         52.78
POOLS:
    NAME                  ID     USED      %USED     MAX AVAIL     OBJECTS
    cephfs_data_a          1         0         0         4.65G           0
    cephfs_metadata_a      2     2.19K         0         4.65G          21
    foo                    3     4.25G     23.33         4.65G        1088
</code></pre>

The cluster is ~50% full, yet pool foo reports only ~23% of its max capacity used. All pools have 3 replicas, so foo's 4.25G of stored data accounts for roughly 12.75G of the 15.9G raw used; the 23.33% figure understates how full the pool actually is.

To reproduce:

<pre><code class="text">
../src/vstart.sh -n -s -d
bin/ceph osd pool create foo 8 8
bin/rados bench -p foo 60 write --max-object-size 4K --no-cleanup
bin/ceph df detail
</code></pre>
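A rough arithmetic check on the numbers above (a sketch only; the formulas are guesses at how %USED might be derived, not the actual Ceph implementation):

```python
# Figures copied from the `ceph df` output in this report (GiB).
replicas = 3
stored = 4.25        # pool foo USED
max_avail = 4.65     # pool foo MAX AVAIL (already divided by replica count)
raw_used = 15.9      # cluster RAW USED

# Raw space consumed by foo's data alone: with 3x replication this is
# ~12.75 GiB, i.e. most of the 15.9 GiB the cluster reports as used.
foo_raw = stored * replicas

# What one might expect %USED to be: stored data vs. stored plus what
# the pool can still store -- roughly 47.8%, in line with the cluster
# being about half full.
expected_pct = 100 * stored / (stored + max_avail)

# The reported 23.33% instead looks close to comparing stored bytes
# against replica-multiplied (raw) free space -- roughly 23.35%.
reported_like_pct = 100 * stored / (stored + max_avail * replicas)

print(foo_raw, expected_pct, reported_like_pct)
```

If that reading is right, the pool-level %USED is mixing stored (logical) bytes with raw (replicated) available bytes, which would explain the mismatch with %RAW USED.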