Bug #22659

During cache tiering configuration, the ceph-mon daemon crashes after a negative value is set for "hit_set_count"

Added by Debashis Mondal over 6 years ago. Updated over 6 years ago.

Status:
In Progress
Priority:
Normal
Assignee:
-
Category:
Correctness/Safety
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
jewel luminous
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Monitor
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Observation:
--------------
Before setting the value of "hit_set_count", Ceph health was OK. After configuring a negative value for hit_set_count, the ceph-mon daemon crashed and ceph started hunting for a new mon in the cluster.

Execution Steps:
------------------
  1. ceph osd pool create cold-storage 16 16
  2. ceph osd pool create hot-storage 16 16
  3. ceph osd tier add cold-storage hot-storage
  4. ceph osd tier cache-mode hot-storage writeback
  5. ceph osd tier set-overlay cold-storage hot-storage
  6. ceph health
  7. ceph osd pool set hot-storage hit_set_count -1
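
For context, the crash pattern suggests missing input validation: the monitor appears to accept the -1 and the daemon then aborts once the stored value is used. Below is a minimal C++ sketch of the kind of range check an "osd pool set ... hit_set_count <n>" handler could apply before committing the new option; the function name, the max_allowed parameter, and the error-reporting style are illustrative assumptions, not the actual Ceph code.

    // Hypothetical sketch only -- not the actual Ceph patch. It illustrates the
    // kind of bounds check that would reject a negative hit_set_count instead of
    // letting it be stored. validate_hit_set_count and max_allowed are
    // illustrative names, not real Ceph identifiers.
    #include <cstdint>
    #include <sstream>
    #include <string>

    // Returns true if 'requested' is usable; otherwise fills 'err' with a
    // message suitable for returning to the CLI and leaves the map untouched.
    bool validate_hit_set_count(int64_t requested, uint32_t max_allowed,
                                std::string *err) {
      if (requested < 0) {
        std::ostringstream ss;
        ss << "hit_set_count must be >= 0, got " << requested;
        *err = ss.str();
        return false;
      }
      if (static_cast<uint64_t>(requested) > max_allowed) {
        std::ostringstream ss;
        ss << "hit_set_count " << requested << " exceeds maximum " << max_allowed;
        *err = ss.str();
        return false;
      }
      return true;
    }

With a guard of this kind in place, the last step above would return an error at the command line instead of taking down the monitor.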

Files

ceph_mon_daemon_crash_console_log.txt (3.17 KB) - Console log showing the ceph-mon daemon has failed and ceph is hunting for a new mon (Debashis Mondal, 01/11/2018 07:29 AM)
ceph_mon_log_when_mon_restarted.txt (97.3 KB) - Status of the failed ceph-mon daemon after restart (Debashis Mondal, 01/11/2018 07:30 AM)
Actions #1

Updated by Greg Farnum over 6 years ago

  • Project changed from Ceph to RADOS
  • Category changed from ceph cli to Tiering
  • Component(RADOS) Monitor added
Actions #3

Updated by Joao Eduardo Luis over 6 years ago

  • Category changed from Tiering to Correctness/Safety
  • Status changed from New to In Progress

This will need to be backported to luminous and jewel once merged.

Actions #4

Updated by Nathan Cutler over 6 years ago

  • Backport set to jewel luminous