Bug #4632


ceph-deploy: osd create command prepares disk but does not activate in centos

Added by Tamilarasi muthamizhan about 11 years ago. Updated almost 11 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
-
Category:
ceph-deploy
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

ceph branch: wip-4532

tamil@ubuntu:~/ceph-dep-centos/ceph-deploy$ ./ceph-deploy osd create burnupi05:sdd --zap-disk
2013-04-02 18:03:00,856 ceph_deploy.osd DEBUG Preparing cluster ceph disks burnupi05:/dev/sdd:
2013-04-02 18:03:41,254 ceph_deploy.osd DEBUG Deploying osd to burnupi05
2013-04-02 18:03:41,322 ceph_deploy.osd DEBUG Host burnupi05 is now ready for osd use.
2013-04-02 18:03:41,323 ceph_deploy.osd DEBUG Preparing host burnupi05 disk /dev/sdd journal None activate True

tamil@ubuntu:~/ceph-dep-centos/ceph-deploy$ ./ceph-deploy disk list burnupi05
/dev/sda :
/dev/sda1 other, ext4, mounted on /
/dev/sdb :
/dev/sdb1 ceph data, prepared, unknown cluster 20556f76-5b44-4b1c-a7bb-f28b566459e9, journal /dev/sdb2
/dev/sdb2 ceph journal, for /dev/sdb1
/dev/sdc :
/dev/sdc1 ceph data, prepared, unknown cluster 20556f76-5b44-4b1c-a7bb-f28b566459e9, journal /dev/sdc2
/dev/sdc2 ceph journal, for /dev/sdc1
/dev/sdd :
/dev/sdd1 ceph data, prepared, cluster ceph, journal /dev/sdd2
/dev/sdd2 ceph journal, for /dev/sdd1
/dev/sde :
/dev/sde1 ceph data, unprepared
/dev/sdf :
/dev/sdf1 ceph data, unprepared
/dev/sdg :
/dev/sdg1 ceph data, unprepared
/dev/sdh :
/dev/sdh1 ceph data, unprepared
/dev/sdi :
/dev/sdi1 other, btrfs

On burnupi05, the disk is not mounted; disk list shows the disk as prepared rather than 'active'.
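The stuck partitions in a listing like the one above can be picked out mechanically. A minimal sketch, using listing text copied from this report; the manual activation command at the end is a hypothetical workaround, not something run in this ticket:

```shell
#!/bin/sh
# Filter a `ceph-deploy disk list`-style listing for ceph data
# partitions still stuck in the "prepared" state.
list_prepared() {
    awk '/ceph data, prepared/ { print $1 }'
}

# Sample listing taken from the report above (abridged).
listing='/dev/sdd1 ceph data, prepared, cluster ceph, journal /dev/sdd2
/dev/sdd2 ceph journal, for /dev/sdd1'

printf '%s\n' "$listing" | list_prepared
# prints: /dev/sdd1

# Each printed device could then be activated by hand, e.g.:
#   ceph-disk-activate --mount /dev/sdd1
```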

Actions #1

Updated by Anonymous about 11 years ago

  • Status changed from New to In Progress

It looks like the CentOS udev subsystem does not support the ID_PART_ENTRY_TYPE* environment variables used to trigger the udev rule that activates the disk. An example of the rule is:

    # activate ceph-tagged partitions
    ACTION=="add", SUBSYSTEM=="block", \
      ENV{DEVTYPE}=="partition", \
      ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
      RUN+="/usr/sbin/ceph-disk-activate --mount /dev/$name"

On Debian Precise we get:

glowell@gary-ubuntu-01:~/test2/ceph-deploy$ udevadm info --query=env --name=/dev/sda1 | grep ID_PART
ID_PART_ENTRY_DISK=8:0
ID_PART_ENTRY_NAME=ceph\x20data
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=gpt
ID_PART_ENTRY_SIZE=20969439
ID_PART_ENTRY_TYPE=4fbd7e29-9d25-41b8-afd0-062c0ceff05d
ID_PART_ENTRY_UUID=78ef22ed-f7d8-4c31-8e3b-c9ac514208a3
ID_PART_TABLE_TYPE=gpt

On CentOS 6.3:

[ubuntu@gary-centos-01 ceph-deploy]$ udevadm info --query=env --name=/dev/sda1 | grep ID_PART
ID_PART_TABLE_TYPE=gpt

So the udev rule is never triggered and ceph-disk-activate is not called.
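The difference between the two env dumps can be checked without a real device. A minimal sketch, feeding the captured output above through a hypothetical helper `has_part_entry_type`:

```shell
#!/bin/sh
# Check whether a udev env dump exposes the ID_PART_ENTRY_TYPE
# variable that the activation rule matches on.
has_part_entry_type() {
    printf '%s\n' "$1" | grep -q '^ID_PART_ENTRY_TYPE='
}

# Env dumps captured above (abridged).
precise_env='ID_PART_ENTRY_TYPE=4fbd7e29-9d25-41b8-afd0-062c0ceff05d
ID_PART_TABLE_TYPE=gpt'
centos_env='ID_PART_TABLE_TYPE=gpt'

has_part_entry_type "$precise_env" && echo "precise: rule would fire"
has_part_entry_type "$centos_env" || echo "centos: rule never fires"
```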

Actions #2

Updated by Anonymous about 11 years ago

Debian Precise has version:

glowell@gary-ubuntu-01:~/test2/ceph-deploy$ udevadm --version
175

CentOS has version:

[ubuntu@gary-centos-01 ceph-deploy]$ udevadm --version
147
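A deploy tool could use this version number as a heuristic for whether to trust the udev rule at all. A minimal sketch; the threshold of 175 is simply the known-good Precise version reported above, not an established minimum, and `udev_ok` is a hypothetical helper:

```shell
#!/bin/sh
# Heuristic: assume udev can export ID_PART_ENTRY_* if its version is
# at least the known-good one from Debian Precise (175). The true
# minimum version is not established in this ticket.
udev_ok() {
    [ "$1" -ge 175 ]
}

for ver in 175 147; do
    if udev_ok "$ver"; then
        echo "udev $ver: rely on the activation rule"
    else
        echo "udev $ver: fall back to calling ceph-disk-activate directly"
    fi
done
```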

Actions #3

Updated by Tamilarasi muthamizhan about 11 years ago

  • Priority changed from High to Urgent

Actions #4

Updated by Sage Weil almost 11 years ago

  • Status changed from In Progress to Resolved

commit:7ad63d23d74e5bc45c44a0192ab1f49ceb68ffa7
