Bug #36519
simple tests failing (ceph-disk related)
Status: Closed
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Tests are failing with:
=================================== FAILURES ===================================
_________ TestOsdsFromMons.test_all_osds_are_up_and_in[ansible://mon0] _________
[gw2] linux2 -- Python 2.7.5 /tmp/tox.JyczuJpnUl/centos7-filestore-activate/bin/python

self = <tests.mon.test_osds_from_mons.TestOsdsFromMons object at 0x7f03af232490>
node = {'address': '192.168.1.10', 'ceph_release_num': {'dev': 99, 'jewel': 10, 'kraken': 11, 'luminous': 12, ...}, 'ceph_stable_release': 'luminous', 'cluster_address': '', ...}
host = <testinfra.host.Host object at 0x7f03b6356610>

    @pytest.mark.no_docker
    def test_all_osds_are_up_and_in(self, node, host):
        cmd = "sudo ceph --cluster={cluster} --connect-timeout 5 osd tree -f json".format(cluster=node["cluster_name"])
        output = json.loads(host.check_output(cmd))
        nb_osd_up = self._get_nb_osd_up(output)
>       assert int(node["num_osds"]) == int(nb_osd_up)
E       assert 0 == 3
E        +  where 0 = int(0)
E        +  and   3 = int(3)
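The helper _get_nb_osd_up is not shown in the traceback. For reference, here is a minimal sketch of what such a helper would have to do with the Luminous-style "osd tree" JSON; the body below is hypothetical, reconstructed only from the test code above:

    # Hypothetical sketch, not the actual helper from the test suite:
    # in the Luminous "osd tree -f json" format, OSDs appear as entries
    # of the top-level "nodes" list with "type": "osd" and a "status"
    # of either "up" or "down".
    def _get_nb_osd_up(self, output):
        osds = [n for n in output.get("nodes", []) if n.get("type") == "osd"]
        return len([o for o in osds if o.get("status") == "up"])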
Inspecting this case, we can see that the OSDs are in fact up:
[vagrant@mon0 ~]$ sudo ceph --cluster=test --connect-timeout 5 osd tree -f json-pretty

{
    "nodes": [
        {
            "id": -1,
            "name": "default",
            "type": "root",
            "type_id": 10,
            "children": [
                -3,
                -5
            ]
        },
        {
            "id": -5,
            "name": "osd0",
            "type": "host",
            "type_id": 1,
            "pool_weights": {},
            "children": [
                2
            ]
        },
        {
            "id": 2,
            "device_class": "hdd",
            "name": "osd.2",
            "type": "osd",
            "type_id": 0,
            "crush_weight": 0.011688,
            "depth": 2,
            "pool_weights": {},
            "exists": 1,
            "status": "up",
            "reweight": 1.000000,
            "primary_affinity": 1.000000
        },
        {
            "id": -3,
            "name": "osd1",
            "type": "host",
            "type_id": 1,
            "pool_weights": {},
            "children": [
                1,
                0
            ]
        },
        {
            "id": 0,
            "device_class": "hdd",
            "name": "osd.0",
            "type": "osd",
            "type_id": 0,
            "crush_weight": 0.011597,
            "depth": 2,
            "pool_weights": {},
            "exists": 1,
            "status": "up",
            "reweight": 1.000000,
            "primary_affinity": 1.000000
        },
        {
            "id": 1,
            "device_class": "hdd",
            "name": "osd.1",
            "type": "osd",
            "type_id": 0,
            "crush_weight": 0.011597,
            "depth": 2,
            "pool_weights": {},
            "exists": 1,
            "status": "up",
            "reweight": 1.000000,
            "primary_affinity": 1.000000
        }
    ],
    "stray": []
}
Either these OSDs are taking longer to show up than the test allows for, or the test has been updated to use something incompatible with the Luminous JSON reporting.
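The hypothesis is easy to sanity-check by running the same count by hand on the monitor node. Against the output above it prints 3, which points at the parsing side rather than the cluster. This is a standalone sketch, not part of the test suite:

    # Standalone sanity check: count "up" OSDs in the "osd tree" JSON.
    # Cluster name and command match the shell session above.
    import json
    import subprocess

    out = subprocess.check_output(
        ["sudo", "ceph", "--cluster=test", "--connect-timeout", "5",
         "osd", "tree", "-f", "json"])
    tree = json.loads(out)
    up = [n["name"] for n in tree["nodes"]
          if n.get("type") == "osd" and n.get("status") == "up"]
    print(len(up), up)  # expected against the output above: 3 ['osd.2', 'osd.0', 'osd.1']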
Updated by Alfredo Deza over 5 years ago
- Status changed from New to Closed
This was fixed in ceph-ansible when the commit was reverted.