Bug #22659
During cache tiering configuration, the ceph-mon daemon crashes after setting a negative value for "hit_set_count"
Status:
In Progress
Priority:
Normal
Assignee:
-
Category:
Correctness/Safety
Target version:
-
% Done:
0%
Source:
other
Tags:
Backport:
jewel luminous
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Monitor
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Observation:
--------------
Before setting the value of "hit_set_count", Ceph health was OK, but after configuring a negative value for hit_set_count, the ceph-mon daemon crashed and Ceph started hunting for a new mon in the cluster.
------------------
- ceph osd pool create cold-storage 16 16
- ceph osd pool create hot-storage 16 16
- ceph osd tier add cold-storage hot-storage
- ceph osd tier cache-mode hot-storage writeback
- ceph osd tier set-overlay cold-storage hot-storage
- ceph health
- ceph osd pool set hot-storage hit_set_count -1
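The crash suggests the monitor accepts the negative value and only fails later (e.g. on an assertion), rather than rejecting it at the command boundary. A minimal sketch of the kind of input guard the fix would need, using a hypothetical `validate_hit_set_count` helper (illustrative names only, not the actual OSDMonitor code path):

```cpp
#include <cstdint>
#include <string>

// Hypothetical validation helper: reject out-of-range hit_set_count
// at the command boundary so the monitor returns -EINVAL to the client
// instead of crashing on the bad value later.
bool validate_hit_set_count(int64_t n, std::string* err) {
  if (n < 0) {
    *err = "hit_set_count must be >= 0";  // surfaced to the CLI user
    return false;
  }
  return true;
}
```

With such a guard, `ceph osd pool set hot-storage hit_set_count -1` would fail with an error message rather than taking down the monitor.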
Files
Updated by Greg Farnum over 6 years ago
- Project changed from Ceph to RADOS
- Category changed from ceph cli to Tiering
- Component(RADOS) Monitor added
Updated by Joao Eduardo Luis over 6 years ago
- Category changed from Tiering to Correctness/Safety
- Status changed from New to In Progress
This will need to be backported to luminous and jewel once merged.