Bug #5959 (closed)
Quorum is crashing on 'osd pool mksnap'
Description
Full backtrace is attached. The crashed mons were running 0.61.7. The crash is triggered by:
ceph osd pool mksnap --keyfile admin dev-rack0 snap2
2013-08-14 15:00:30.651544 7f493e265700 0 monclient: hunting for new mon
2013-08-14 15:00:42.473016 7f493c160700 0 -- 10.6.0.1:0/13014 >> 10.6.0.1:6789/0 pipe(0x7f4928002590 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
Updated by Andrey Korolyov over 10 years ago
An update:
- if the pool contains volumes with their own snapshots, it's more likely for the entire pool to die,
- if the pool contains volumes with recently deleted snapshots, or recently deleted volumes with snapshots whose trim had not yet finished when the command above was issued, only the mon leader dies,
- if the pool does not contain any volume with a snapshot, everything works fine.
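The conditions above suggest a reproduction sketch along the following lines. This is a hedged guess, not a confirmed repro: the pool, image, and snapshot names are illustrative, and it assumes a disposable test cluster with the rbd tool available.

```shell
#!/bin/sh
# Hypothetical reproduction sketch for the crash conditions described above.
# All names (dev-rack0, vol1, s1, snap2) are illustrative; run only on a
# throwaway test cluster running 0.61.7.

# 1. Create a pool and an RBD volume inside it.
ceph osd pool create dev-rack0 64
rbd create dev-rack0/vol1 --size 1024

# 2. Give the volume its own (self-managed) snapshot -- per the report,
#    a pool holding volumes with their own snapshots is the state most
#    likely to bring the quorum down.
rbd snap create dev-rack0/vol1@s1

# 3. Issue a pool-level snapshot on the same pool; per the report this
#    is the command that crashes the mons.
ceph osd pool mksnap dev-rack0 snap2
```

One plausible reading is that the crash involves mixing self-managed (RBD) snapshots with pool snapshots on the same pool, which Ceph does not support; the report itself does not confirm this, so treat it as a hypothesis.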
Updated by Andrey Korolyov over 10 years ago
s/entire pool/entire quorum/g in prev comment
Updated by Joao Eduardo Luis over 10 years ago
- Status changed from New to In Progress
I haven't been able to reproduce this, either on 0.61.7 or on later versions.
I was, however, able to trigger another crash on 0.61.7 that is fairly easy to reproduce (it doesn't trigger on next): see #5986
Updated by Joao Eduardo Luis over 10 years ago
- Status changed from In Progress to Pending Backport
- Backport set to cuttlefish
I'm pretty sure this is fixed by d1501938f5d07c067d908501fc5cfe3c857d7281 on next.
Updated by Sage Weil over 10 years ago
- Status changed from Pending Backport to Resolved
backported by 64bef4ae4bab28b0b82a1481381b0c68a22fe1a4