Feature #2394
Updated by Anonymous almost 12 years ago
After "ceph osd out 123", when is it safe to kill the ceph-osd daemon? Assume a busy cluster where other failures are happening all the time, so 100% active+clean PGs will never be reached. This question needs to be answered in terms of osd.123, not global cluster health.

"ceph pg dump" probably contains this information; if so, it just needs to be made more accessible. Documenting what the output *means* would also be useful.

Background: http://thread.gmane.org/gmane.comp.file-systems.ceph.devel/6217

Unhelpful but existing docs:
http://ceph.com/docs/master/control/#pg-subsystem
http://ceph.com/docs/master/man/8/ceph/#examples
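One way to make this per-OSD check more accessible would be a small script over the JSON form of "ceph pg dump". Below is a minimal sketch of the idea: scan each PG's up/acting sets for the OSD in question. The field names ("pg_stats", "pgid", "up", "acting") are assumptions based on typical pg dump output and should be checked against the actual Ceph version; whether an empty result is truly sufficient to call the daemon safe to kill is exactly the open question in this ticket.

```python
import json

def pgs_referencing_osd(pg_dump_json, osd_id):
    """Return pgids whose up or acting set still includes osd_id.

    pg_dump_json: the output of "ceph pg dump --format=json" (field
    names here are assumptions, not verified against every release).
    """
    dump = json.loads(pg_dump_json)
    return [
        pg["pgid"]
        for pg in dump.get("pg_stats", [])
        if osd_id in pg.get("up", []) or osd_id in pg.get("acting", [])
    ]

# Toy sample standing in for real pg dump output: one PG has already
# remapped away from osd 123, the other still lists it as acting.
sample = json.dumps({"pg_stats": [
    {"pgid": "0.1", "up": [4, 7], "acting": [4, 7]},
    {"pgid": "0.2", "up": [4, 9], "acting": [123, 9]},
]})

print(pgs_referencing_osd(sample, 123))  # PGs still depending on osd.123
```

In real use the JSON would come from the CLI, e.g. `ceph pg dump --format=json | script`, and the check would be repeated until the list for osd.123 is empty.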