Backport #18719

tests: lfn-upgrade-hammer: cluster stuck in HEALTH_WARN after last upgraded node reboots

Added by Nathan Cutler about 7 years ago. Updated almost 7 years ago.

Status: Resolved
Priority: Normal
Assignee:
Target version: v10.2.6
Release: jewel
Crash signature (v1):
Crash signature (v2):

History

#1 Updated by Nathan Cutler about 7 years ago

  • Description updated (diff)

Description:

Scenario: jewel 10.2.6 integration testing

Symptom: "rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml}" fails because, after the last upgraded node is rebooted, the cluster gets stuck in "HEALTH_WARN all OSDs are running jewel or later but the 'require_jewel_osds' osdmap flag is not set"

Root cause: the test does not issue the "ceph osd set require_jewel_osds" command after rebooting the last upgraded node.
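
For illustration only, a fix along these lines would add a teuthology exec step that sets the flag once the last upgraded node is back up. This is a minimal sketch; the mon.a role name and the exact placement within lfn-upgrade-hammer.yaml are assumptions, not taken from this ticket:

    tasks:
    # ... existing hammer-to-jewel upgrade and reboot steps ...
    - exec:
        mon.a:
          # assumed placement: run after the last upgraded node has rebooted,
          # so all OSDs are running jewel and the flag can be set
          - ceph osd set require_jewel_osds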

Example failure: http://pulpito.ceph.com/loic-2017-01-26_22:01:29-rados-wip-jewel-backports-distro-basic-smithi/753628/

#3 Updated by Nathan Cutler about 7 years ago

I have high hopes for a successful test run at http://tracker.ceph.com/issues/17851#note-45 (see the very end of that comment).

#4 Updated by Nathan Cutler almost 7 years ago

  • Status changed from Fix Under Review to Resolved
  • Target version set to v10.2.6