Bug #24181

closed

luminous dashboard: CephFS info generates 500 Internal error

Added by Марк Коренберг almost 6 years ago. Updated about 3 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
General
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
fs
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

This worked OK in 12.2.4 (or 12.2.2?).

500 Internal Server Error

The server encountered an unexpected condition which prevented it from fulfilling the request.

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cprequest.py", line 670, in respond
    response.body = self.handler()
  File "/usr/lib/python2.7/dist-packages/cherrypy/lib/encoding.py", line 217, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/cherrypy/_cpdispatch.py", line 61, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/lib/ceph/mgr/dashboard/module.py", line 548, in filesystem
    "fs_status": global_instance().fs_status(int(fs_id))
  File "/usr/lib/ceph/mgr/dashboard/module.py", line 362, in fs_status
    mds_versions[metadata.get('ceph_version', 'unknown')].append(info['name'])
AttributeError: 'NoneType' object has no attribute 'get'
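The failing line calls `.get()` on the per-daemon metadata without checking for `None`, which is exactly what happens when the mgr has no metadata for an MDS (as with node4 below). A minimal sketch of a defensive variant of that loop; the function and parameter names here are hypothetical, not the dashboard module's actual API, and `mds_versions` is assumed to be a `defaultdict(list)` as in the traceback:

```python
from collections import defaultdict

def collect_mds_versions(mdsmap_info, metadata_by_name):
    """Group MDS daemon names by their reported ceph_version.

    metadata_by_name maps daemon name -> metadata dict, or None when the
    mgr holds no metadata for that daemon (the situation in this bug).
    """
    mds_versions = defaultdict(list)
    for info in mdsmap_info:
        metadata = metadata_by_name.get(info['name'])
        # Guard against missing metadata instead of calling .get() on None.
        version = (metadata or {}).get('ceph_version', 'unknown')
        mds_versions[version].append(info['name'])
    return mds_versions
```

With this guard, a daemon lacking metadata is simply reported under "unknown" instead of raising an `AttributeError` and turning the whole page into a 500.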

Actions #2

Updated by John Spray almost 6 years ago

  • Project changed from Ceph to mgr
  • Subject changed from Dashboard: CephFS info generates 500 Internal error to luminous dashboard: CephFS info generates 500 Internal error
  • Category set to 132

This indicates that for some reason the mgr doesn't have metadata about the MDS. Can you see the metadata from "ceph mds metadata"? Does the issue clear when you restart ceph-mgr?

Actions #3

Updated by Марк Коренберг almost 6 years ago

Yes, the bug is 100% reproducible.

$ ceph mds metadata
[
  {
    "name": "node2",
    "addr": "10.80.20.100:6800/1179518924",
    "arch": "x86_64",
    "ceph_version": "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)",
    "cpu": "Intel(R) Xeon(R) CPU E5-2407 0 @ 2.20GHz",
    "distro": "debian",
    "distro_description": "Debian GNU/Linux 9 (stretch)",
    "distro_version": "9",
    "hostname": "node2",
    "kernel_description": "#1 SMP Debian 4.15.11-1~bpo9+1 (2018-04-07)",
    "kernel_version": "4.15.0-0.bpo.2-amd64",
    "mem_swap_kb": "5242876",
    "mem_total_kb": "8125904",
    "os": "Linux"
  },
  {
    "name": "node4"
  },
  {
    "name": "node1",
    "addr": "10.80.20.99:6800/2386014006",
    "arch": "x86_64",
    "ceph_version": "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)",
    "cpu": "Intel(R) Xeon(R) CPU E5-2407 0 @ 2.20GHz",
    "distro": "debian",
    "distro_description": "Debian GNU/Linux 9 (stretch)",
    "distro_version": "9",
    "hostname": "node1",
    "kernel_description": "#1 SMP Debian 4.15.11-1~bpo9+1 (2018-04-07)",
    "kernel_version": "4.15.0-0.bpo.2-amd64",
    "mem_swap_kb": "3903484",
    "mem_total_kb": "8125904",
    "os": "Linux"
  }
]

I have not tried restarting the MDS. I will if you ask me about this again.

Actions #4

Updated by Марк Коренберг almost 6 years ago

As you can see, node4 has no information:

{
  "name": "node4"
}

And I don't know why.

$ ceph -s
  cluster:
    id:     56ed206b-67cf-42a6-be65-9baf32334fc9
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node2(active), standbys: node1
    mds: mainfs-1/1/1 up {0=node4=up:active}, 2 up:standby
    osd: 16 osds: 13 up, 13 in

  data:
    pools:   5 pools, 704 pgs
    objects: 1055k objects, 2349 GB
    usage:   7221 GB used, 4416 GB / 11638 GB avail
    pgs:     704 active+clean

  io:
    client: 1013 B/s rd, 1129 kB/s wr, 0 op/s rd, 61 op/s wr

Actions #5

Updated by Марк Коренберг over 5 years ago

Cannot reproduce. Please close.

Actions #6

Updated by Lenz Grimmer over 5 years ago

  • Status changed from New to Can't reproduce

Thanks for the feedback!

Actions #7

Updated by Ernesto Puerta about 3 years ago

  • Project changed from mgr to Dashboard
  • Category changed from 132 to General