Bug #15404 (closed)

KVM: terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'

Added by Konrad Gutkowski about 8 years ago. Updated almost 7 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

KVM instances throw "terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'" after an overlay is added to a storage pool while the ceph client user is not allowed to access the cache tier (through misconfiguration or for any other reason).
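
A minimal illustration of the misconfiguration described above, as a sketch rather than output from this cluster; the client name client.volumes is assumed, not taken from the report. Caps that only name the base pool keep working until the cache pool becomes the overlay:

  # Inspect the key the VMs use (hypothetical name client.volumes):
  ceph auth get client.volumes
  #   caps mon = "allow r"
  #   caps osd = "allow rwx pool=volumes"     <- no access to cache-ssd
  # Extending the caps to cover the cache pool as well avoids the loss of access:
  ceph auth caps client.volumes mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=cache-ssd'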

Command flow which led to above situation:
  1. ceph osd pool create cache-ssd 256 256 replicated ssd
  2. ceph osd pool set cache-ssd size 2
  3. ceph osd pool set cache-ssd min_size 1
  4. ceph osd pool set-quota cache-ssd max_bytes 2048G
  5. ceph osd tier add volumes cache-ssd
  6. ceph osd tier cache-mode cache-ssd writeback
  7. ceph osd pool set cache-ssd target_max_bytes 2199023255552
  8. ceph osd tier set-overlay volumes cache-ssd
    < at this point the VMs crashed or logged filesystem errors; see the sketch below >
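
As a rough sketch (not part of the original report), one way to check whether the key the VMs use can reach the cache pool, and to back the change out while the caps are being fixed; the id "volumes" is assumed:

  # Try to list the cache pool with the VM client's id; a permission error
  # here means its caps do not cover cache-ssd yet:
  rados --id volumes -p cache-ssd ls
  # Removing the overlay restores direct access to the base pool:
  ceph osd tier remove-overlay volumes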

Obviously the access problem itself is understood, but it could be handled better. Access at the device level could be blocked (as in other situations where the cluster becomes unavailable) until access is established, or the ceph tool could compare access rights on both pools and require a force parameter to bypass the check, or both; a rough sketch of the latter idea follows.
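
As an illustration of that second idea, a minimal sketch of the comparison an admin could run by hand today; the check is hypothetical, not an existing ceph option, and the pool names are the ones from this report:

  # Flag clients whose osd caps name the base pool but not the cache pool;
  # such clients would lose access once the overlay is set (clients with
  # blanket caps like "allow *" are not affected and are ignored here):
  ceph auth list 2>/dev/null | awk '
    /^client\./     { entity = $1 }
    /caps: \[osd\]/ {
      if (/pool=volumes/ && !/pool=cache-ssd/)
        print entity " can reach pool volumes but not cache-ssd"
    }'

A hypothetical force parameter on "ceph osd tier set-overlay" could then be required whenever such a mismatch is detected.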

client:
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)

osd, mon:
ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd)

Actions #1

Updated by Greg Farnum about 7 years ago

  • Project changed from Ceph to rbd
Actions #2

Updated by Jason Dillaman about 7 years ago

@Konrad: are you still able to reproduce this issue?

Actions #3

Updated by Jason Dillaman about 7 years ago

  • Status changed from New to Need More Info
Actions #4

Updated by Konrad Gutkowski about 7 years ago

Unfortunately no. I don't have any ceph clusters running currently.

Actions #5

Updated by Jason Dillaman almost 7 years ago

  • Status changed from Need More Info to Can't reproduce