Bug #3971

Can't attach rbd image volume to instance

Added by Khanh Nguyen Dang Quoc about 11 years ago. Updated about 11 years ago.

Status: Rejected
Priority: High
Category: openstack
% Done: 0%
Source: Development
Regression: No
Severity: 3 - minor

Description

My environment:
libvirt-bin: 0.9.13-0ubuntu12.1~cloud0
ceph: 0.56.1

+ I tried disabling the AppArmor module on the system.
+ After this, I tried to attach an rbd volume to an existing instance.

-> The attach failed with this error: "qemuMonitorIOProcess:369 : QEMU_MONITOR_IO_PROCESS: mon=0x7f28e00f30d0 buf={"return": "error reading header from volume-5529a8cd-28db-4a72-a0f0-f7b2a221cf8d\r\ncould not open disk image rbd:volumes/volume-5529a8cd-28db-4a72-a0f0-f7b2a221cf8d: No such file or directory\r\n", "id": "libvirt-13"}"

Please help me resolve this.
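For context, attaching an rbd volume through libvirt boils down to a network-disk definition along the lines of the sketch below. The volume name is the one from the error above; the monitor address and target device are placeholders, not taken from this report:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <!-- volume name from the error above; monitor address is a placeholder -->
      <source protocol='rbd' name='volumes/volume-5529a8cd-28db-4a72-a0f0-f7b2a221cf8d'>
        <host name='203.0.113.10' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>

qemu then opens rbd:volumes/volume-5529a8cd-28db-4a72-a0f0-f7b2a221cf8d through librbd, which is where the "could not open disk image" error above is produced.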

rbd.log (3.95 KB) Khanh Nguyen Dang Quoc, 02/05/2013 07:48 PM

History

#1 Updated by Ian Colle about 11 years ago

  • Assignee set to Josh Durgin

#2 Updated by Josh Durgin about 11 years ago

Does 'rbd ls volumes' show volume-5529a8cd-28db-4a72-a0f0-f7b2a221cf8d?

If so, could you provide a few more details about your setup?

What does 'dpkg -l | grep librbd' show?
Are your OSDs running 0.56.1 as well? (Check 'ceph-osd -v' on each node to be sure.)

Are you using cephx authentication? If so, what does the 'caps' line in 'ceph auth list' show for the client nova is using?
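For comparison, a commonly recommended cephx setup for the client that nova/cinder uses produces 'ceph auth list' output along these lines (the client name 'client.volumes' is an example, not taken from this report):

    client.volumes
        key: <redacted>
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes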

#3 Updated by Khanh Nguyen Dang Quoc about 11 years ago

+ Here is all the information needed to verify:

root@master:~# dpkg -l | grep librbd
ii librbd1 0.56.1-1precise RADOS block device client library
root@master:~# ceph-osd -v
ceph version 0.56.1 (e4a541624df62ef353e754391cbbb707f54b16f7)
+ More information: I have now set authentication to none and removed all entries from 'ceph auth list'. Before that, authentication was set to cephx, and I got this error.

+ You can contact me on Skype (khanhnguyen0209) for more details.
Thanks.

#4 Updated by Khanh Nguyen Dang Quoc about 11 years ago

Does 'rbd ls volumes' show volume-5529a8cd-28db-4a72-a0f0-f7b2a221cf8d?
-> Yes, I can see it.

#5 Updated by Josh Durgin about 11 years ago

Did you restart the monitors and osds after you set auth supported = none in the global section of every /etc/ceph/ceph.conf?
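For reference, that setting is a single line in the global section of each ceph.conf; a sketch:

    [global]
        # disables cephx authentication cluster-wide; daemons pick it up after a restart
        auth supported = none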

If you did, could you add this to the ceph.conf on the compute node and try attaching, then post the log from /tmp/rbd.log here?

[client]
    log file = /tmp/rbd.log
    debug ms = 1
    debug rbd = 20

#6 Updated by Khanh Nguyen Dang Quoc about 11 years ago

Yes, they were all restarted.
Please refer to the attached file for more detail.
Thanks.

#7 Updated by Dan Mick about 11 years ago

1) The log shows an attempt to open volume-ade3b6fb-2386-4d10-9472-16cd4f955faa; this isn't the same volume you show above. Did you expect it to change?

2) Is pool 3 the 'volumes' pool? ('ceph osd dump' will confirm.)

3) What happens when you do 'rados -p volumes ls'?
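For anyone following along, those checks run as shown below (the grep is just a convenience for picking out the pool lines):

    ceph osd dump | grep '^pool'   # confirm which pool id maps to 'volumes'
    rados -p volumes ls            # list the raw objects in the pool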

#8 Updated by Josh Durgin about 11 years ago

The log shows it trying to access an rbd_header.volume-ade3b6fb-2386-4d10-9472-16cd4f955faa object without looking at an rbd_id.volume-ade3b6fb-2386-4d10-9472-16cd4f955faa object to find the id of the image. This rbd_id object was added in 0.50, before format 2 was fully supported.

This suggests that you've got an old version of librbd (before 0.50) on that box, with partial format 2 support. Could you double check the installed version of librbd, and make sure the right one is being loaded by qemu (you can strace qemu-img info to check)?
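A minimal sketch of that double check (the image spec uses the volume name from the attached log; the exact strace flags are one reasonable choice, not prescribed here):

    # installed package version
    dpkg -l | grep librbd
    # watch which librbd shared object qemu-img actually opens
    strace -f -e trace=open qemu-img info \
        rbd:volumes/volume-ade3b6fb-2386-4d10-9472-16cd4f955faa 2>&1 | grep librbd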

#9 Updated by Khanh Nguyen Dang Quoc about 11 years ago

1) The log shows an attempt to open volume-ade3b6fb-2386-4d10-9472-16cd4f955faa; this isn't the same volume you show above. Did you expect it to change?

I created a new volume and attached it to the VM, so it isn't the same one.

2) Is pool 3 the 'volumes' pool? ('ceph osd dump' will confirm.)
Yes, pool 3 is 'volumes':

dumped osdmap epoch 91
epoch 91
fsid c897b254-aaa3-4a6a-91d2-f6a77a7c05c7
created 2013-01-21 04:08:41.730827
modified 2013-02-06 04:10:53.353826
flags

pool 0 'data' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 3 'volumes' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 72 owner 18446744073709551615
removed_snaps [1~b]
pool 4 'images' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 81 owner 18446744073709551615
removed_snaps [1~1,3~2]

max_osd 1
osd.0 up in weight 1 up_from 87 up_thru 87 down_at 86 last_clean_interval [83,85) 120.138.73.44:6801/1361 120.138.73.44:6802/1361 120.138.73.44:6803/1361 exists,up 3ffb0573-160d-4b77-bac5-e503e85a5c29

3) What happens when you do 'rados -p volumes ls'?

Running 'rados -p volumes ls' gives:

rbd_id.volume-ade3b6fb-2386-4d10-9472-16cd4f955faa

and 'rbd ls volumes' gives:

volume-ade3b6fb-2386-4d10-9472-16cd4f955faa

#10 Updated by Khanh Nguyen Dang Quoc about 11 years ago

Thanks, Josh Durgin.

I found that one of the compute nodes (Ubuntu 12.04) had an old version of librbd installed, while the cinder service was on another node (Ubuntu 12.10). So I upgraded the compute node to Ubuntu 12.10.

This problem is resolved.
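(For anyone hitting the same symptom, the version check that would have caught this uses commands already shown in this thread, run on every compute node as well as the cluster nodes:

    dpkg -l | grep librbd    # client library version on this node
    ceph-osd -v              # cluster version

The client librbd must be new enough for the image format in use; per comment #8 above, the rbd_id objects these images rely on were added in 0.50.)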

#11 Updated by Josh Durgin about 11 years ago

  • Status changed from New to Rejected

Not a bug, just an old package.
