Bug #7210 (closed)

mon: does not validate snapshot removal commands

Added by Greg Farnum about 10 years ago. Updated almost 10 years ago.

Status: Resolved
Priority: High
Assignee: Joao Eduardo Luis
Category: Monitor
% Done: 0%
Source: Community (user)
Severity: 2 - major

Description

See thread "[ceph-users] failed to open snapshot after 'rados cppool '" at e.g. http://www.spinics.net/lists/ceph-users/msg07357.html

After doing that cppool, an attempt to delete no-longer-existing rbd snapshots crashes the monitor with

osd/osd_types.cc: 799: FAILED assert(is_unmanaged_snaps_mode())

 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
 1: /usr/bin/ceph-mon() [0x6c96e9]
 2: (OSDMonitor::prepare_pool_op(MPoolOp*)+0x970) [0x5c3ad0]
 3: (OSDMonitor::prepare_update(PaxosServiceMessage*)+0x1ab) [0x5c3d8b]
 4: (PaxosService::dispatch(PaxosServiceMessage*)+0xa1a) [0x5940ea]
 5: (Context::complete(int)+0x9) [0x565499]
 6: (finish_contexts(CephContext*, std::list<Context*, std::allocator<Context*> >&, int)+0x95) [0x5698b5]
 7: (Paxos::handle_last(MMonPaxos*)+0xb6e) [0x58cdde]
 8: (Paxos::dispatch(PaxosServiceMessage*)+0x2ab) [0x58d68b]
 9: (Monitor::dispatch(MonSession*, Message*, bool)+0x4f0) [0x563620]
 10: (Monitor::_ms_dispatch(Message*)+0x1fb) [0x5639fb]
 11: (Monitor::ms_dispatch(Message*)+0x32) [0x57f212]
 12: (DispatchQueue::entry()+0x582) [0x7de6c2]
 13: (DispatchQueue::DispatchThread::entry()+0xd) [0x7d994d]
 14: (()+0x79d1) [0x7f186b9b59d1]
 15: (clone()+0x6d) [0x7f186a6ecb6d]
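
The failure mode, in outline: the snapshot-removal request from the client reaches OSDMonitor::prepare_pool_op, which calls into pg_pool_t (osd/osd_types.cc), and that code asserts the pool is in unmanaged-snaps mode instead of validating the request first. Below is a minimal C++ sketch of the pattern and of what validation looks like; Pool and prepare_unmanaged_snap_removal are hypothetical simplifications, not the actual Ceph code.

#include <cassert>
#include <cerrno>
#include <cstdint>
#include <set>

// Hypothetical, simplified stand-in for pg_pool_t (osd/osd_types.cc).
struct Pool {
  bool unmanaged_snaps = false;     // set once self-managed (rbd) snaps exist
  std::set<uint64_t> removed_snaps;

  bool is_unmanaged_snaps_mode() const { return unmanaged_snaps; }

  void remove_unmanaged_snap(uint64_t snapid) {
    // The assert from the backtrace: it fires when a removal request
    // arrives for a pool that was never put into unmanaged-snaps mode,
    // e.g. the fresh pool left behind by cppool + delete + rename.
    assert(is_unmanaged_snaps_mode());
    removed_snaps.insert(snapid);
  }
};

// What "validate snapshot removal commands" amounts to: check the pool's
// mode first and turn an impossible request into an error reply to the
// client, instead of letting the assert abort the monitor.
int prepare_unmanaged_snap_removal(Pool& p, uint64_t snapid) {
  if (!p.is_unmanaged_snaps_mode())
    return -EINVAL;                 // monitor survives; client sees an error
  p.remove_unmanaged_snap(snapid);
  return 0;
}

The point is that client-supplied pool ops need a validity check before they reach internal invariants; an assert should only guard states the monitor itself could have created.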

#1

Updated by Joao Eduardo Luis about 10 years ago

  • Priority changed from Normal to Urgent
  • Target version changed from v0.76b to 0.78
  • Severity changed from 3 - minor to 2 - major
#2

Updated by Wai Peng Yip about 10 years ago

I got bitten by this. I have an OpenStack Glance pool, and it contains images and snapshots. I did the following:

rados cppool glance glance-new
ceph osd pool delete glance
ceph osd pool rename glance-new glance

When a user deletes an image that has a snapshot, the mons crash.

Temporary workaround: kill the OpenStack Glance process that is trying to delete the snapshot, and start the mon. Then start Glance again.
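
For anyone hitting the same thing, the root cause of the sequence above: pool snapshot state (the snap sequence number and the snap table) lives in the cluster map's per-pool metadata, not in the objects themselves, and 'rados cppool' copies only objects. A toy C++ model, with all names hypothetical:

#include <cstdint>
#include <map>
#include <string>

// Toy model of why cppool + delete + rename loses snapshot state.
struct PoolMeta {
  uint64_t snap_seq = 0;                  // highest snapshot id ever issued
  std::map<uint64_t, std::string> snaps;  // snap id -> name
};

struct Cluster {
  std::map<std::string, PoolMeta> meta;   // per-pool metadata (osdmap-like)
  std::map<std::string, std::map<std::string, std::string>> objects;

  // cppool-like copy: object data is copied, pool metadata is not.
  void cppool(const std::string& src, const std::string& dst) {
    meta[dst] = PoolMeta{};               // brand-new pool, empty snap history
    objects[dst] = objects[src];
  }

  // Renaming carries the new pool's (empty) snapshot history along.
  void rename(const std::string& from, const std::string& to) {
    meta[to] = meta[from];
    objects[to] = objects[from];
    meta.erase(from);
    objects.erase(from);
  }
};

After the rename, the pool called glance has its object data back but an empty snapshot history, so a later snapshot delete refers to state the pool never had; that is exactly the request the assert in the description chokes on.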

#3

Updated by Joao Eduardo Luis about 10 years ago

  • Status changed from New to 12

Finally got to reproduce this. I was missing the part where we actually needed to remove the image's snapshot using 'rbd -p POOL snap rm img@snap'.
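
For reference, the same monitor path can be reached straight from librados, since rbd snapshots are self-managed snaps from the pool's point of view. A minimal sketch against a hypothetical pool named 'glance' (error checking omitted; on a fixed monitor the removal comes back as an error instead of crashing the mon):

#include <rados/librados.h>
#include <cstdio>

int main() {
  rados_t cluster;
  rados_create(&cluster, "admin");          // connect as client.admin
  rados_conf_read_file(cluster, nullptr);   // default ceph.conf search path
  rados_connect(cluster);

  rados_ioctx_t io;
  rados_ioctx_create(cluster, "glance", &io);

  // Removing a self-managed snapshot id the pool has never issued is the
  // kind of request that, before the fix, tripped the monitor assert.
  int r = rados_ioctx_selfmanaged_snap_remove(io, 12345 /* bogus snap id */);
  printf("selfmanaged_snap_remove returned %d\n", r);

  rados_ioctx_destroy(io);
  rados_shutdown(cluster);
  return 0;
}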

#4

Updated by Sage Weil about 10 years ago

  • Status changed from 12 to Resolved
#5

Updated by Sage Weil about 10 years ago

  • Status changed from Resolved to Pending Backport
  • Backport set to emperor, dumpling
#6

Updated by Sage Weil about 10 years ago

  • Priority changed from Urgent to High

I would like to see this in the wild before doing the backport.

#7

Updated by Sage Weil almost 10 years ago

  • Status changed from Pending Backport to Resolved
  • Backport deleted (emperor, dumpling)

No backport.
