Bug #22337

Prometheus exporter in the MGR daemon crashes when PGs are in recovery_wait state

Added by Subhachandra Chandra over 6 years ago. Updated over 6 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

When the cluster has one or more PGs in the "recovery_wait" state, the Prometheus exporter in the MGR daemon returns the following error and backtrace.

500 Internal Server Error

The server encountered an unexpected condition which prevented it from fulfilling the request.

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cprequest.py", line 670, in respond
    response.body = self.handler()
  File "/usr/lib/python2.7/dist-packages/cherrypy/lib/encoding.py", line 217, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cpdispatch.py", line 61, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/lib/ceph/mgr/prometheus/module.py", line 386, in metrics
    metrics = global_instance().collect()
  File "/usr/lib/ceph/mgr/prometheus/module.py", line 324, in collect
    self.get_pg_status()
  File "/usr/lib/ceph/mgr/prometheus/module.py", line 266, in get_pg_status
    self.metrics[path].set(value)
KeyError: 'pg_recovery_wait'
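
The pattern behind the crash: the module builds its metrics dict once, with one entry per PG state it knows about, and get_pg_status() then indexes that dict with whatever states the cluster actually reports, so any unregistered state (here "recovery_wait") raises a KeyError. A minimal sketch of the failure, using stand-in names rather than the module's real definitions:

# Minimal reproduction of the KeyError (stand-in names, not the
# prometheus module's actual code).
KNOWN_PG_STATES = ['active', 'clean', 'scrubbing']

class Metric(object):
    def __init__(self, name):
        self.name = name
        self.value = 0

    def set(self, value):
        self.value = value

# The metrics dict is built once, keyed by 'pg_' + state.
metrics = {'pg_' + s: Metric('pg_' + s) for s in KNOWN_PG_STATES}

# The cluster then reports a state that was never registered.
reported = {'active': 4094, 'recovery_wait': 2}
for state, count in reported.items():
    path = 'pg_' + state
    metrics[path].set(count)   # raises KeyError: 'pg_recovery_wait'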


Related issues 1 (0 open, 1 closed)

Is duplicate of mgr - Bug #22116: prometheus module 500 if 'deep' in pg states (Resolved, 11/13/2017)

Actions #1

Updated by Subhachandra Chandra over 6 years ago

This is definitely not a "bluestore" bug. Not sure how it was classified under it when I hit the "New Issue" link.

Actions #2

Updated by Subhachandra Chandra over 6 years ago

Found another backtrace with a PG in the 'deep scrubbing' state:

  data:
    pools:   1 pools, 4096 pgs
    objects: 464k objects, 59393 GB
    usage:   89488 GB used, 240 TB / 327 TB avail
    pgs:     4094 active+clean
             1    active+clean+scrubbing
             1    active+clean+scrubbing+deep

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cprequest.py", line 670, in respond
    response.body = self.handler()
  File "/usr/lib/python2.7/dist-packages/cherrypy/lib/encoding.py", line 217, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cpdispatch.py", line 61, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/lib/ceph/mgr/prometheus/module.py", line 386, in metrics
    metrics = global_instance().collect()
  File "/usr/lib/ceph/mgr/prometheus/module.py", line 324, in collect
    self.get_pg_status()
  File "/usr/lib/ceph/mgr/prometheus/module.py", line 266, in get_pg_status
    self.metrics[path].set(value)
KeyError: 'pg_deep'

Actions #3

Updated by Shinobu Kinjo over 6 years ago

  • Project changed from bluestore to mgr
Actions #4

Updated by John Spray over 6 years ago

  • Is duplicate of Bug #22116: prometheus module 500 if 'deep' in pg states added
Actions #5

Updated by John Spray over 6 years ago

  • Status changed from New to Duplicate

Duplicate of http://tracker.ceph.com/issues/22116, which is currently fixed in master and pending backport to luminous.
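
A common guard against this class of failure is to either pre-register a metric for every state the cluster can report, or to handle unknown states explicitly. A sketch of the latter, continuing the stand-in names from the sketch in the description (illustrative only, not the actual #22116 patch):

for state, count in reported.items():
    path = 'pg_' + state
    if path not in metrics:
        # Unknown PG state: create the metric lazily instead of crashing.
        metrics[path] = Metric(path)
    metrics[path].set(count)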
