Bug #19320
Inconsistent PG makes ceph osd go down
Description
Hi all.
I am running a Ceph cluster.
There is an inconsistent PG:
pg 3.aff is active+recovery_wait+degraded+inconsistent, acting [267,463,157]
When I start osd.267, it does not complete recovery; the OSD then goes down.
The OSD log is in the attached file.
ceph version
ceph 0.94.7-1trusty amd64 distributed storage and file system
dmesg log:
[Tue Mar 21 16:34:02 2017] init: ceph-osd (ceph/267) respawning too fast, stopped
[Tue Mar 21 18:57:20 2017] init: ceph-osd (ceph/267) main process (3423088) killed by SEGV signal
[Tue Mar 21 18:57:20 2017] init: ceph-osd (ceph/267) main process ended, respawning
[Tue Mar 21 19:09:59 2017] init: ceph-osd (ceph/267) main process (3452503) killed by SEGV signal
[Tue Mar 21 19:09:59 2017] init: ceph-osd (ceph/267) main process ended, respawning
[Tue Mar 21 19:10:57 2017] init: ceph-osd (ceph/267) main process (3482095) killed by SEGV signal
[Tue Mar 21 19:10:57 2017] init: ceph-osd (ceph/267) main process ended, respawning
[Tue Mar 21 19:11:12 2017] init: ceph-osd (ceph/267) main process (3486136) killed by SEGV signal
[Tue Mar 21 19:11:12 2017] init: ceph-osd (ceph/267) respawning too fast, stopped
Thanks all.
Hoan