Bug #42084
ceph df output difference if 8 OSD cluster has 5+3 shared EC pool vs larger cluster
Status:
New
Priority:
Normal
Assignee:
David Zafman
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
I created an 8 OSD cluster with one 5+3 EC pool and got this ceph df detail output:
$ ceph df detail
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       8.0 TiB     8.0 TiB     8.1 GiB       16 GiB          0.20
    TOTAL     8.0 TiB     8.0 TiB     8.1 GiB       16 GiB          0.20

POOLS:
    POOL       ID     STORED      (DATA)      (OMAP)     OBJECTS     USED       (DATA)     (OMAP)     %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY     USED COMPR     UNDER COMPR
    ecpool      1     9.8 MiB     9.8 MiB        0 B         100     50 MiB     50 MiB        0 B         0       4.9 TiB     N/A               N/A               100            0 B             0 B
Using a 9 OSD cluster instead, with one 5+3 EC pool, the ceph df detail output is:
$ ceph df detail
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       9.0 TiB     9.0 TiB     9.1 GiB       18 GiB          0.20
    TOTAL     9.0 TiB     9.0 TiB     9.1 GiB       18 GiB          0.20

POOLS:
    POOL       ID     STORED      (DATA)      (OMAP)     OBJECTS     USED       (DATA)     (OMAP)     %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY     USED COMPR     UNDER COMPR
    ecpool      1     9.8 MiB     9.8 MiB        0 B         100     9.8 MiB    9.8 MiB       0 B         0       5.6 TiB     N/A               N/A               100            0 B             0 B
I had thought that num_bytes_hit_set_archive might be non-zero, but the discrepancy is more likely the difference between allocated and num_bytes.
bool use_per_pool_stats() const {
  return osd_sum.num_osds == osd_sum.num_per_pool_osds;
}

uint64_t get_allocated_data_bytes(bool per_pool) const {
  if (per_pool) {
    return store_stats.allocated;
  } else {
    // legacy mode, use numbers from 'stats'
    return stats.sum.num_bytes + stats.sum.num_bytes_hit_set_archive;
  }
}