Bug #5959
closed
Quorum is crashing on 'osd pool mksnap'
Added by Andrey Korolyov over 10 years ago.
Updated over 10 years ago.
Description
Full backtrace is attached. The crashed mons were running 0.61.7. The crash is triggered by:
ceph osd pool mksnap --keyfile admin dev-rack0 snap2
2013-08-14 15:00:30.651544 7f493e265700 0 monclient: hunting for new mon
2013-08-14 15:00:42.473016 7f493c160700 0 -- 10.6.0.1:0/13014 >> 10.6.0.1:6789/0 pipe(0x7f4928002590 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
An update:
- if the pool contains volumes with their own snapshots, it is more likely that the entire quorum dies;
- if the pool contains volumes with recently deleted snapshots, or recently deleted volumes with snapshots whose trim had not yet finished when the command above was issued, only the mon leader dies;
- if the pool contains no volumes with snapshots, everything works fine.
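The first condition above can be reproduced roughly as follows. This is a sketch against a running test cluster, not a verified recipe: the pool name `dev-rack0` comes from the report, while the image and snapshot names (`vol1`, `s1`) are illustrative.

```shell
# Create an RBD image in the affected pool and give it a
# self-managed (per-image) snapshot -- the condition under which
# the whole quorum is most likely to crash, per the notes above.
rbd create vol1 -p dev-rack0 --size 1024
rbd snap create dev-rack0/vol1@s1

# Issuing a pool-level snapshot now brings down the mons:
ceph osd pool mksnap --keyfile admin dev-rack0 snap2
```

Note the mix of per-image (self-managed) snapshots and a pool-level snapshot on the same pool; later Ceph releases refuse this combination outright.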
- Assignee set to Joao Eduardo Luis
- Status changed from New to In Progress
I haven't been able to reproduce this on 0.61.7 or on more recent versions.
I was, however, able to trigger yet another crash on 0.61.7, fairly easy to reproduce (it doesn't trigger on next): see #5986
- Status changed from In Progress to Pending Backport
- Backport set to cuttlefish
I'm pretty sure this is fixed by d1501938f5d07c067d908501fc5cfe3c857d7281 on next.
- Status changed from Pending Backport to Resolved