
Bug #22659

During cache tiering configuration, the ceph-mon daemon crashes after setting a negative value for "hit_set_count"

Added by Debashis Mondal 6 months ago. Updated 6 months ago.

Status:
In Progress
Priority:
Normal
Assignee:
-
Category:
Correctness/Safety
Target version:
-
Start date:
01/11/2018
Due date:
% Done:
0%

Source:
other
Tags:
Backport:
jewel luminous
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Monitor

Description

Observation:
--------------
Before setting the value of "hit_set_count", Ceph health was OK; after setting hit_set_count to a negative value, the ceph-mon daemon crashed and Ceph began hunting for a new mon in the cluster.

Execution Steps:
------------------
  1. ceph osd pool create cold-storage 16 16
  2. ceph osd pool create hot-storage 16 16
  3. ceph osd tier add cold-storage hot-storage
  4. ceph osd tier cache-mode hot-storage writeback
  5. ceph osd tier set-overlay cold-storage hot-storage
  6. ceph health
  7. ceph osd pool set hot-storage hit_set_count -1
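
Step 7 is the trigger: the monitor accepts -1 and the daemon then crashes, which suggests the negative value is committed without a range check and trips something later. The following is a minimal sketch of the kind of guard that would reject the value at set time, written in C++ since that is Ceph's implementation language; the function and names are illustrative assumptions, not the actual OSDMonitor code:

    #include <cstdint>
    #include <sstream>
    #include <string>

    // Hypothetical guard for a pool-option update; illustrative only.
    // The real fix would live in the monitor's "osd pool set" handling.
    bool validate_hit_set_count(int64_t requested, std::string *errmsg) {
      // hit_set_count is the number of HitSet periods to retain; a
      // negative count has no meaning, so refuse it here rather than
      // commit it to the OSDMap where it can crash the daemon later.
      if (requested < 0) {
        std::ostringstream ss;
        ss << "hit_set_count must be >= 0 (got " << requested << ")";
        *errmsg = ss.str();
        return false;  // caller would return -EINVAL to the client
      }
      return true;
    }

With a check like this, step 7 would fail cleanly with EINVAL instead of taking down the monitor.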

ceph_mon_daemon_crash_console_log.txt - Console log showing the ceph-mon daemon failing and Ceph hunting for a new mon (3.17 KB) Debashis Mondal, 01/11/2018 07:29 AM

ceph_mon_log_when_mon_restarted.txt - Status of the failed ceph-mon daemon after restart (97.3 KB) Debashis Mondal, 01/11/2018 07:30 AM

History

#1 Updated by Greg Farnum 6 months ago

  • Project changed from Ceph to RADOS
  • Category changed from ceph cli to Tiering
  • Component(RADOS) Monitor added

#3 Updated by Joao Luis 6 months ago

  • Category changed from Tiering to Correctness/Safety
  • Status changed from New to In Progress

This will need to be backported to luminous and jewel once merged.

#4 Updated by Nathan Cutler 6 months ago

  • Backport set to jewel luminous
