Bug #48026
Mon crashes when adding 4th OSD
Description
Context: I'm running Ceph Octopus 15.2.5 (the latest as of this bug) using Rook on a toy Kubernetes cluster of two nodes. I've got a single Ceph mon running perfectly with 3 OSDs. There are two pools, which were created as part of a CephFS install.
Problem: when I try to add my 4th OSD, the Ceph mon starts crashing in the OSDMonitor::build_incremental function. I've searched the mailing lists and more generally, and the last instance of this issue seems to have been 7 years ago, so I'm probably not hitting the same thing!
Question: does anyone have ideas on what I might be doing wrong? I'm very new to Ceph, so my suspicion is that it's something to do with my configuration, but given that I'm literally just adding an OSD and everything is fine otherwise, I'm not sure what my mistake might be.
Please find attached a log file of the mon with log level 20 (the maximum, I think?).
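For reference, the mon log above was captured with the monitor debug subsystem turned up to its maximum. A minimal sketch of the config fragment that produces this level of logging (assuming it is set cluster-wide via ceph.conf rather than injected at runtime):

```
# ceph.conf fragment: raise mon logging to the most verbose level (20).
# The "20/20" form sets both the log level and the in-memory log level.
[mon]
    debug mon = 20/20
    debug paxos = 20/20
    debug ms = 1/20
```

The same effect can be had at runtime with `ceph tell mon.* config set debug_mon 20/20`, which avoids a restart.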
History
#1 Updated by Sage Weil almost 3 years ago
- Project changed from Ceph to RADOS