Bug #18681
ceph-disk prepare/activate misses steps and fails on [Bluestore] (Closed)
Description
After preparing a disk for BlueStore, ceph-disk did not chown the DB and WAL partitions, and the activate step failed.
Debian 8.7 (Linux 4.4.35)
ceph version 11.2.0
root@xxx:~# parted /dev/sdb print
Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sdb: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name      Flags
 1      1049kB  2128MB  2127MB               osd3.db
 2      2128MB  10.6GB  8506MB               osd3.wal

root@xxx:~# ls -l /dev/sdb*
brw-rw---- 1 root disk 8, 16 Jan 26 12:19 /dev/sdb
brw-rw---- 1 root disk 8, 17 Jan 26 12:19 /dev/sdb1
brw-rw---- 1 root disk 8, 18 Jan 26 12:19 /dev/sdb2
root@n004:~# ls -l /dev/sdc*
brw-rw---- 1 root disk 8, 32 Jan 26 12:17 /dev/sdc
After preparing the disk, we find the following:
root@xxx:~# ceph-disk prepare --zap-disk --cluster ceph --cluster-uuid c49a2428-6a97-4bac-b497-fa6824113fc4 --bluestore /dev/sdc --block.wal /dev/disk/by-partlabel/osd3.wal --block.db /dev/disk/by-partlabel/osd3.db
...
The operation has completed successfully.
root@xxx:~# ls -l /dev/sdb*
brw-rw---- 1 root disk 8, 16 Jan 26 12:19 /dev/sdb
brw-rw---- 1 root disk 8, 17 Jan 26 12:19 /dev/sdb1
brw-rw---- 1 root disk 8, 18 Jan 26 12:19 /dev/sdb2
root@xxx:~# ls -l /dev/sdc*
brw-rw---- 1 root disk 8, 32 Jan 26 12:29 /dev/sdc
brw-rw---- 1 ceph ceph 8, 33 Jan 26 12:29 /dev/sdc1
brw-rw---- 1 ceph ceph 8, 34 Jan 26 12:29 /dev/sdc2
root@xxx:~# ceph-disk -v activate /dev/sdc1
...
2017-01-26 12:32:58.985407 7fc53692e940 -1 bluestore(/var/lib/ceph/tmp/mnt.5EOgmD) _setup_block_symlink_or_file failed to create block symlink to /dev/sdc2: (17) File exists
2017-01-26 12:32:58.985441 7fc53692e940 -1 OSD::mkfs: ObjectStore::mkfs failed with error -17
2017-01-26 12:32:58.985693 7fc53692e940 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.5EOgmD: (17) File exists
...
But after running the following commands:
root@xxx:~# chown ceph:ceph /dev/sdb1
root@xxx:~# chown ceph:ceph /dev/sdb2
root@xxx:~# ls -l /dev/sdb*
brw-rw---- 1 root disk 8, 16 Jan 26 12:37 /dev/sdb
brw-rw---- 1 ceph ceph 8, 17 Jan 26 12:42 /dev/sdb1
brw-rw---- 1 ceph ceph 8, 18 Jan 26 12:42 /dev/sdb2
root@xxx:~# ceph-disk activate /dev/sdc1
got monmap epoch 4
added key for osd.3
And all is well.
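(Note: a bare chown does not survive a reboot, since udev recreates the device nodes as root:disk. As a stopgap, assuming the partition labels from the parted listing above, a local udev rule along these lines should persist the ownership; the rule file name is hypothetical, not something ceph-disk ships:)

# /etc/udev/rules.d/99-osd3-perms.rules (hypothetical local workaround)
# Match the DB/WAL partitions by their GPT partition label and hand the
# device nodes to ceph:ceph on every add event.
SUBSYSTEM=="block", ENV{ID_PART_ENTRY_NAME}=="osd3.db",  OWNER="ceph", GROUP="ceph", MODE="0660"
SUBSYSTEM=="block", ENV{ID_PART_ENTRY_NAME}=="osd3.wal", OWNER="ceph", GROUP="ceph", MODE="0660"

Reload with "udevadm control --reload-rules" and re-run "udevadm trigger" to test. The proper fix, per the comments below, is to set the GPT partition type GUIDs so that ceph-disk's own udev rule applies.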
Updated by Wido den Hollander about 7 years ago
I see you split WAL and RocksDB out to different disks. If you try without that, does that work?
I tried with the same versions on Ubuntu 16.04 and that worked out fine when using a single disk. Trying to see if the problem pops up when using different disks for WAL/RocksDB.
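(For reference, the single-disk variant, where ceph-disk creates and labels every partition itself, looks roughly like this; the device path is just an example:)

root@xxx:~# ceph-disk prepare --zap-disk --cluster ceph --cluster-uuid c49a2428-6a97-4bac-b497-fa6824113fc4 --bluestore /dev/sdc

With the --block.db/--block.wal flags omitted, ceph-disk creates just the data and block partitions on /dev/sdc, with the DB and WAL living inside block, so its udev rule matches everything it created.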
Updated by Leonid Prytuliak about 7 years ago
Wido den Hollander wrote:
I see you split WAL and RocksDB out to different disks. If you try without that, does that work?
I tried with the same versions on Ubuntu 16.04 and that worked out fine when using a single disk. Trying to see if the problem pops up when using different disks for WAL/RocksDB.
Thx. The point is that I want to split the DB/WAL onto separate partitions, and in the future use NVMe for that purpose.
Updated by Greg Farnum almost 7 years ago
- Priority changed from Normal to Urgent
Updated by Greg Farnum almost 7 years ago
- Project changed from Ceph to RADOS
- Subject changed from ceph-disk prepare/activate [Bluestore] to ceph-disk prepare/activate misses steps and fails on [Bluestore]
- Category changed from 107 to Administration/Usability
- Component(RADOS) BlueStore added
Moving this to the RADOS bluestore tracker since it's probably owned by that team.
Updated by Sage Weil almost 7 years ago
If you don't use the GPT partition labels/types that ceph-disk uses, then the device ownership won't be changed to ceph:ceph and activation will fail. (The udev rule won't pick up your random devices.)
If you let ceph-disk create the partitions for you, it will label them properly. Or you can set the GPT UUID type manually so that the udev rule kicks in (you're going behind ceph-disk's back a bit by setting up the partitions yourself).
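A sketch of that manual route, assuming the BlueStore block.db/block.wal type GUIDs from ceph-disk's PTYPE table of this era (verify them against your ceph-disk source before running; partition numbers follow the parted listing above):

root@xxx:~# sgdisk --typecode=1:30cd0809-c2b2-499c-8879-2d6b78529876 /dev/sdb   # osd3.db  -> block.db type
root@xxx:~# sgdisk --typecode=2:5ce17fce-4087-4169-b7ff-056cc58473f9 /dev/sdb   # osd3.wal -> block.wal type
root@xxx:~# partprobe /dev/sdb                                                  # re-read the table so udev re-processes the partitions

Once the type GUIDs match, the ceph udev rule chowns the nodes to ceph:ceph on every add event, so the ownership survives reboots, unlike the manual chown above.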