Bug #50838
No cached data available yet in ceph mgr prometheus module
Status: Closed
Description
When I visit http://192.168.1.182:9283/metrics in a browser, the following error is returned:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cprequest.py", line 670, in respond
    response.body = self.handler()
  File "/usr/lib/python2.7/dist-packages/cherrypy/lib/encoding.py", line 220, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cpdispatch.py", line 60, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/share/ceph/mgr/prometheus/module.py", line 1116, in metrics
    return self._metrics(_global_instance)
  File "/usr/share/ceph/mgr/prometheus/module.py", line 1122, in _metrics
    raise cherrypy.HTTPError(503, 'No cached data available yet')
HTTPError: (503, 'No cached data available yet')
Updated by Jiang Yu almost 3 years ago
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: [05/Jun/2021:12:23:05] ENGINE Bus STARTING
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: CherryPy Checker:
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: The Application mounted at '' has an empty config.
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: [05/Jun/2021:12:23:05] ENGINE Started monitor thread '_TimeoutMonitor'.
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: Exception in thread Thread-1:
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: Traceback (most recent call last):
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:   File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:     self.run()
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:   File "/usr/lib/python2.7/threading.py", line 754, in run
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:     self.__target(*self.__args, **self.__kwargs)
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:   File "/usr/share/ceph/mgr/prometheus/module.py", line 191, in collect
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:     data = inst.collect()
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:   File "/usr/share/ceph/mgr/prometheus/module.py", line 982, in collect
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:     self.get_mgr_status()
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:   File "/usr/share/ceph/mgr/prometheus/module.py", line 528, in get_mgr_status
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]:     ceph_release = host_version[1].split()[-2]  # e.g. nautilus
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: IndexError: list index out of range
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: 2021-06-05 12:23:05.092 7f1ec4eb3700 -1 client.0 error registering admin socket command: (17) File exists
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: message repeated 4 times: [ 2021-06-05 12:23:05.092 7f1ec4eb3700 -1 client.0 error registering admin socket command: (17) File exists]
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: [05/Jun/2021:12:23:05] ENGINE Serving on http://0.0.0.0:9283
Jun 5 12:23:05 yujiang-performance-test ceph-mgr[698756]: [05/Jun/2021:12:23:05] ENGINE Bus STARTED
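For context, the IndexError in the log comes from the release-name parsing in get_mgr_status. A minimal sketch of that parsing, where the banner string is an illustrative example (not taken from this cluster):

```python
# get_mgr_status (module.py:528) extracts the release name from a version
# banner such as "ceph version 14.2.16 (abc123) nautilus (stable)":
# split() yields [..., "nautilus", "(stable)"], so [-2] is the release.
banner = "ceph version 14.2.16 (abc123) nautilus (stable)"
release = banner.split()[-2]
print(release)  # -> nautilus

# If a daemon has not yet reported a full banner, split() returns too few
# tokens and [-2] raises IndexError -- the crash seen in the log above.
try:
    "".split()[-2]
except IndexError as e:
    print("IndexError:", e)
```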
Updated by Neha Ojha almost 3 years ago
- Project changed from Ceph to mgr
- Category set to prometheus module
Updated by Jiang Yu over 2 years ago
Is it that the exception is not handled in the single collector thread, causing the thread to crash?
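That would explain the 503: if the collector thread dies before populating the cache, every later request to /metrics sees "no cached data". A minimal sketch of that failure mode, with a hypothetical cache and handler standing in for the module's internals:

```python
import threading

cache = {"data": None}  # stands in for the module's metrics cache

def collect():
    # Any unhandled exception here kills the thread; the traceback goes
    # to stderr and the cache is never filled.
    raise IndexError("list index out of range")
    cache["data"] = "metrics"  # never reached

t = threading.Thread(target=collect)
t.start()
t.join()  # thread has died; cache["data"] is still None

def metrics():
    # Mirrors the check in _metrics: no cached data means a 503.
    if cache["data"] is None:
        return (503, "No cached data available yet")
    return (200, cache["data"])

print(metrics())  # -> (503, 'No cached data available yet')
```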
Updated by Patrick Seidensal over 2 years ago
- Status changed from New to Rejected
Hello Jiang Yu,
It looks like this issue is no longer present in v14.2.22. I looked into it and could not find the code from the traceback you provided. I think updating Ceph will resolve the issue.
Here's how I checked, including results:
for tag in $(git tag | grep v14 | sort -V); do
    echo $tag
    curl -fsSl https://raw.githubusercontent.com/ceph/ceph/${tag}/src/pybind/mgr/prometheus/module.py | grep -n 'ceph_release = host_version'
done

v14.0.0
v14.0.1
v14.1.0
v14.1.1
v14.2.0
v14.2.1
v14.2.2
v14.2.3
479:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.4
479:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.5
480:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.6
480:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.7
480:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.8
482:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.9
482:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.10
482:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.11
527:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.12
527:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.13
527:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.14
528:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.15
528:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.16
528:            ceph_release = host_version[1].split()[-2] # e.g. nautilus
v14.2.17
v14.2.18
v14.2.19
v14.2.20
v14.2.21
v14.2.22
Can you please check and let me know whether the issue is resolved in Ceph v14.2.17 and onwards? I'll close the issue, but if you can verify that the problem still occurs on newer Ceph versions, I'll be glad to re-open it.