Bug #23967
ceph fs status and Dashboard fail with Python stack trace
Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Using Ceph v12.2.4, running `ceph fs status` I get:

# ceph fs status
Error EINVAL: Traceback (most recent call last):
  File "/usr/lib/ceph/mgr/status/module.py", line 310, in handle_command
    return self.handle_fs_status(cmd)
  File "/usr/lib/ceph/mgr/status/module.py", line 235, in handle_fs_status
    mds_versions[metadata.get('ceph_version', "unknown")].append(standby['name'])
AttributeError: 'NoneType' object has no attribute 'get'
On the Ceph Dashboard at http://localhost:7000/filesystem/1/ I get:

500 Internal Server Error
The server encountered an unexpected condition which prevented it from fulfilling the request.
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cprequest.py", line 670, in respond
    response.body = self.handler()
  File "/usr/lib/python2.7/dist-packages/cherrypy/lib/encoding.py", line 217, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cpdispatch.py", line 61, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/lib/ceph/mgr/dashboard/module.py", line 548, in filesystem
    "fs_status": global_instance().fs_status(int(fs_id))
  File "/usr/lib/ceph/mgr/dashboard/module.py", line 431, in fs_status
    mds_versions[metadata.get('ceph_version', 'unknown')].append(standby['name'])
AttributeError: 'NoneType' object has no attribute 'get'
Powered by CherryPy 3.5.0
Updated by Niklas Hambuechen about 6 years ago
In the relevant code

for standby in fsmap['standbys']:
    metadata = self.get_metadata('mds', standby['name'])
    mds_versions[metadata.get('ceph_version', "unknown")].append(standby['name'])
    standby_table.add_row([standby['name']])
I inserted

with open('/tmp/niklas-ceph-log', 'a') as f:
    f.write("standby: " + str(standby) + '\n')
    f.write("standby['name']: " + str(standby['name']) + '\n')
    f.write("self.get_metadata('mds', standby['name']): " + str(self.get_metadata('mds', standby['name'])) + '\n')
and found:
standby: {'standby_for_rank': -1L, 'addr': '1.2.3.4:6804/2234944343', 'export_targets': [], 'name': 'ceph2', 'incarnation': 0L, 'state': 'up:standby', 'state_seq': 2L, 'standby_for_fscid': -1L, 'epoch': 202L, 'gid': 774424L, 'features': 2305244844532236283L, 'rank': -1L, 'standby_for_name': '', 'standby_replay': False}
standby['name']: ceph2
self.get_metadata('mds', standby['name']): None
So the standby's name is correct, but `self.get_metadata('mds', standby['name'])` returns `None`, and the subsequent `metadata.get(...)` call then raises the `AttributeError`.
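A minimal sketch of the kind of defensive guard that would avoid the crash: fall back to "unknown" when the mgr has no metadata for the daemon. This is an illustration, not the upstream fix; `get_metadata` here is a hypothetical stand-in for the `MgrModule` method, hard-coded to return `None` to reproduce the failing case.

```python
from collections import defaultdict

def get_metadata(svc_type, svc_id):
    # Hypothetical stand-in for MgrModule.get_metadata(); in this bug it
    # returns None because the mgr holds no metadata for the standby MDS.
    return None

fsmap = {'standbys': [{'name': 'ceph2'}]}
mds_versions = defaultdict(list)

for standby in fsmap['standbys']:
    metadata = get_metadata('mds', standby['name'])
    # Guard against None instead of calling .get() on it unconditionally.
    version = metadata.get('ceph_version', 'unknown') if metadata else 'unknown'
    mds_versions[version].append(standby['name'])

print(dict(mds_versions))  # {'unknown': ['ceph2']}
```

With the guard, a standby whose metadata is missing is simply reported under the "unknown" version bucket instead of crashing `ceph fs status`.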
Updated by Nathan Cutler about 6 years ago
- Project changed from Ceph to mgr
- Category deleted (ceph cli)
Updated by Janek Bevendorff over 4 years ago
This just happened to me in Nautilus 14.2.2.
I failed an MDS, so the standby took over. Then I started deleting a large directory with many files, and this happened. From what I can tell, everything else is still working and the FS is operational, but `ceph fs status` only prints this stack trace.
Updated by Janek Bevendorff over 4 years ago
Update: after rotating through the other standby MDSs by repeatedly failing the currently active MDS, I got it working again.
Updated by Volker Theile over 4 years ago
- Related to Bug #41525: mgr/dashboard: Missing service metadata is not handled correctly added
Updated by Volker Theile over 4 years ago
- Related to Bug #41538: mds: wrong compat can cause MDS to be added daemon registry on mgr but not the fsmap added
Updated by Patrick Donnelly over 4 years ago
- Status changed from New to Duplicate