Bug #19960 (closed): overflow in client_io_rate in ceph osd pool stats

Added by Aleksei Gutikov almost 7 years ago. Updated almost 7 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
rgw
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

luminous branch, v12.0.2

The output of ceph osd pool stats -f json contains overflowed values in the client_io_rate section.

$ ceph osd pool stats -f json
[{"pool_name":"rbd","pool_id":0,"recovery":{},"recovery_rate":{},"client_io_rate":{}},{"pool_name":"cephfs_data_a","pool_id":1,"recovery":{},"recovery_rate":{},"client_io_rate":{}},{"pool_name":"cephfs_metadata_a","pool_id":2,"recovery":{},"recovery_rate":{},"client_io_rate":{}},{"pool_name":".rgw.root","pool_id":3,"recovery":{},"recovery_rate":{},"client_io_rate":{}},{"pool_name":"default.rgw.control","pool_id":4,"recovery":{},"recovery_rate":{},"client_io_rate":{}},{"pool_name":"default.rgw.meta","pool_id":5,"recovery":{},"recovery_rate":{},"client_io_rate":{}},{"pool_name":"default.rgw.log","pool_id":6,"recovery":{},"recovery_rate":{},"client_io_rate":{"read_bytes_sec":-9223372036854775808,"write_bytes_sec":-9223372036854775808,"read_op_per_sec":-9223372036854775808,"write_op_per_sec":-9223372036854775808}},{"pool_name":"default.rgw.buckets.index","pool_id":7,"recovery":{},"recovery_rate":{},"client_io_rate":{}},{"pool_name":"default.rgw.buckets.data","pool_id":8,"recovery":{},"recovery_rate":{},"client_io_rate":{}}]

#1

Updated by Aleksei Gutikov almost 7 years ago

fixed in master

#2

Updated by Nathan Cutler almost 7 years ago

Aleksei Gutikov wrote:

fixed in master

By which commit/PR?

#3

Updated by Aleksei Gutikov almost 7 years ago

Nathan Cutler wrote:

By which commit/PR?

554cf8394a9ac4f845c1fce03dd1a7f551a414a9
Merge pull request #15073 from liewegas/wip-mgr-stats

#4

Updated by Matt Benjamin almost 7 years ago

  • Project changed from rgw to RADOS
  • Status changed from New to Pending Backport
#5

Updated by Nathan Cutler almost 7 years ago

Aleksei: Please be more specific. PR #15073 has 131 commits; see https://github.com/ceph/ceph/pull/15073/commits

#6

Updated by Nathan Cutler almost 7 years ago

  • Subject changed from luminous: rgw: overflow in client_io_rate in ceph osd pool stats to overflow in client_io_rate in ceph osd pool stats
#7

Updated by Nathan Cutler almost 7 years ago

  • Status changed from Pending Backport to Resolved

If it's just one or two commits, we could backport (please fill in the Backport field in that case). But 131 commits?

#8

Updated by Aleksei Gutikov almost 7 years ago

The bug is not reproducible after this commit (though I'm not sure it is the only commit containing the fix):

commit d6d1db62edeb4c40a774fcb56e7619b2997ff2d0
Author: Sage Weil <>
Date:   Tue May 23 09:49:16 2017 -0400

    mgr: apply PGMap incremental at same interval as reports

    We were doing an incremental per osd stat report; this screws up the
    delta stats updates when there are more than a handful of OSDs. Instead,
    do it with the same period as the mgr->mon reports.

    Signed-off-by: Sage Weil <>
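As the commit message describes, the root cause was the cadence of delta-stat updates rather than the rate arithmetic itself: applying a PGMap incremental per OSD stat report means that, with many OSDs, successive samples can be fractions of a second apart and mutually inconsistent. Here is a simplified sketch of a delta-based rate computation with a guard against such samples (hypothetical names, not the actual PGMap code):

#include <cstdint>

// Simplified model of delta-based pool rate stats (hypothetical types,
// not Ceph's PGMap). A rate is the change in a cumulative counter
// divided by the elapsed time over which the change was observed.
struct PoolStatSample {
    uint64_t read_bytes;  // cumulative counter aggregated from OSD reports
    double stamp_sec;     // when the sample was taken
};

int64_t read_bytes_per_sec(const PoolStatSample& prev,
                           const PoolStatSample& cur) {
    double span = cur.stamp_sec - prev.stamp_sec;
    // Guard: a near-zero window or a counter that appears to go backwards
    // indicates inconsistent samples; report no rate rather than garbage.
    if (span <= 0.0 || cur.read_bytes < prev.read_bytes)
        return 0;
    return static_cast<int64_t>((cur.read_bytes - prev.read_bytes) / span);
}

Applying the incremental at the same period as the mgr->mon reports guarantees that prev and cur are a full reporting window apart, so the denominator stays meaningful.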