Bug #42592
Status: Closed
ceph-mon/mgr PGstat Segmentation Fault
Description
Ceph version: nautilus 14.2.4
A 3-node cluster is used for CephFS file system storage.
When I run the script below, one mon goes down, and the crash details appear in /var/log/ceph/ceph-mon.node1.log.
cephfsreset.sh:
#!/bin/bash
cd /etc/ceph/
echo "rm io500fs now:"
# take the filesystem offline, remove it, then delete its pools
ceph fs fail io500fs
ceph fs rm io500fs --yes-i-really-mean-it
ceph osd pool rm fs_data fs_data --yes-i-really-really-mean-it
ceph osd pool rm fs_metadata fs_metadata --yes-i-really-really-mean-it
echo "done"
echo "create pool:"
# recreate both pools with 2048 PGs and a replica size of 2
ceph osd pool create fs_data 2048 2048
ceph osd pool set fs_data size 2
ceph osd pool create fs_metadata 2048 2048
ceph osd pool set fs_metadata size 2
echo "wait for pg create..."
ceph -s
echo "create fs"
# recreate the filesystem on the new pools and raise max_mds
ceph fs new io500fs fs_metadata fs_data
echo "set max_mds"
ceph fs set io500fs max_mds 30
echo "done"
echo "check fs status"
ceph fs status
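Note that the "wait for pg create..." step above only prints "ceph -s" once and does not actually wait. A minimal sketch of a real wait, assuming it is enough to poll "ceph pg stat" until no PGs are still creating or peering, could look like this (not part of the original script):
# hypothetical wait loop: poll until no PGs remain in a creating/peering/unknown state
while ceph pg stat | grep -Eq 'creating|peering|unknown'; do
    sleep 5
done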
Maybe this bug is triggered because I don't add any sleep time between removing the pools and recreating them.
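If the missing delay really is the trigger, a hedged workaround sketch (assuming it is sufficient to poll the pool list) would be to wait after the two "ceph osd pool rm" commands until the old pools are actually gone, rather than relying on a fixed sleep:
# hypothetical workaround: block until the deleted pools no longer appear before recreating them
while ceph osd pool ls | grep -Eq '^(fs_data|fs_metadata)$'; do
    sleep 5
done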
Updated by Neha Ojha over 4 years ago
- Related to Bug #40011: ceph -s shows wrong number of pools when pool was deleted added
Updated by Kefu Chai over 4 years ago
- Related to deleted (Bug #40011: ceph -s shows wrong number of pools when pool was deleted)
Updated by Kefu Chai over 4 years ago
- Is duplicate of Bug #40011: ceph -s shows wrong number of pools when pool was deleted added