Bug #5499
Closed
ceph-deploy --cluster clustername osd prepare fails
Description
ceph-deploy --cluster clustername osd prepare node:/path/to/mountpoint fails with
ceph-disk-prepare -- /path/to/mountpoint returned 1
2013-07-03 22:28:46.062241 7f1d8173f780 -1 did not load config file, using default settings.
ceph-disk: Error: getting cluster uuid from configuration failed
ceph-deploy: Failed to create 1 OSDs
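The failing lookup can be reproduced on the OSD node itself; a minimal sketch, assuming the usual /etc/ceph/<cluster>.conf layout:
# Without --cluster, ceph-conf assumes the default cluster name "ceph" and
# reads /etc/ceph/ceph.conf, which is typically absent for a renamed cluster,
# so the fsid lookup fails:
ceph-conf --lookup fsid
# With the cluster name, the lookup reads /etc/ceph/clustername.conf instead:
ceph-conf --cluster clustername --lookup fsid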
Files
Updated by Robert Sander almost 11 years ago
ssh root@node "ceph-disk-prepare --cluster office -- /path/to/mountpoint"
succeeds
Updated by Abdi Ibrahim almost 11 years ago
Robert Sander wrote:
ssh root@node "ceph-disk-prepare --cluster office -- /path/to/mountpoint"
succeeds
I can confirm that this issue exists on my 3-node Ubuntu 12.04 configuration. I have to run the above command on the actual node itself, which is painful when you have a large number of OSDs. When ceph-deploy --cluster clustername osd prepare is run on the admin node, the error occurs:
root@cephadmin01:/home/user1/ceph/clusters/openstack# ceph-deploy --cluster openstack osd prepare ceph01:sdb
ceph-disk-prepare -- /dev/sdb returned 1
2013-07-22 16:55:11.107910 7fe3a0a45780 -1 did not load config file, using default settings.
ceph-disk: Error: getting cluster uuid from configuration failed
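The workaround for now, as above, is to run the prepare step on the node itself; a sketch adapted from Robert's command, using the cluster and disk from this run:
ssh root@ceph01 "ceph-disk-prepare --cluster openstack -- /dev/sdb"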
Updated by Alfredo Deza over 10 years ago
- Status changed from New to Need More Info
Can you confirm the actual output in the node that is failing when you run:
ceph-conf --cluster office --name=osd --lookup fsid
Replace 'office' with the name of the cluster (I am just re-using what you reported).
Updated by Robert Sander over 10 years ago
Alfredo Deza wrote:
Can you confirm the actual output in the node that is failing
I am sorry but we just rebuilt the cluster with the default name 'ceph'.
I am no longer able to reproduce the issue.
Updated by Amit Vijairania over 10 years ago
Hello! I'm running into a similar issue.
Thanks!
Amit
Updated by Amit Vijairania over 10 years ago
Amit Vijairania wrote:
Hello! I'm running into a similar issue.
***
root@svl-swift-1:/home/ceph# /usr/local/bin/ceph-deploy --cluster alln01ceph -v osd prepare svl-swift-1:sdk --zap-disk
Preparing cluster alln01ceph disks svl-swift-1:/dev/sdk:
Deploying osd to svl-swift-1
Host svl-swift-1 is now ready for osd use.
Preparing host svl-swift-1 disk /dev/sdk journal None activate False
ceph-disk-prepare --zap-disk -- /dev/sdk returned 1
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
2013-08-06 00:08:07.118720 7f0d16598780 -1 did not load config file, using default settings.
ceph-disk: Error: getting cluster uuid from configuration failed
ceph-deploy: Failed to create 1 OSDs
***
root@svl-swift-1:/home/ceph# /usr/bin/ceph-conf --cluster alln01ceph osd --lookup fsid
4e79141a-05f2-4e6a-a470-ce54e6236d8e
***
root@svl-swift-1:/home/ceph# cat alln01ceph.conf
[global]
fsid = 4e79141a-05f2-4e6a-a470-ce54e6236d8e
mon initial members = svl-swift-1, svl-swift-2, svl-swift-3
mon host = 10.29.125.4, 10.29.125.5, 10.29.125.6
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd cache size = 32 MiB
rbd cache max dirty = 24 MiB
rbd cache target dirty = 16 MiB
rbd cache max dirty age = 1
[mon]
osd pool default size = 3
[osd]
osd journal size = 10240
filestore xattr use omap = false
osd mkfs type = xfs
osd mkfs options xfs = "-f -i size=2048 -n size=65536 -l size=65536"
osd mount options xfs = "rw,noatime,logbsize=262144,logbufs=8"
filestore flusher = false
osd op threads = 4
osd disk threads = 4
journal aio = true
Thanks!
Amit
Updated by Samir Ibradzic over 10 years ago
This should fix it:
--- /usr/share/pyshared/ceph_deploy/osd.py
+++ /usr/share/pyshared/ceph_deploy/osd.py
@@ -113,6 +113,8 @@
             args.append('--dmcrypt-key-dir')
             args.append(dmcrypt_dir)
     args.extend([
+        '--cluster',
+        cluster,
         '--',
         disk,
         ])
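With the cluster name forwarded, the invocation ceph-deploy runs on the remote node ends up equivalent to the manual command already shown to work, e.g.:
ceph-disk-prepare --cluster office -- /path/to/mountpoint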
Updated by Samir Ibradzic over 10 years ago
Bah, redmine formatting sucks... Patch attached.
Updated by Alfredo Deza over 10 years ago
- Status changed from Need More Info to 12
Thanks for the update; this should get fixed today, with a release before the end of the week.
Updated by Alfredo Deza over 10 years ago
- Status changed from 12 to In Progress
Updated by Alfredo Deza over 10 years ago
- Status changed from In Progress to Resolved
Merged into ceph-deploy master branch: 9605cefd71770118097a11f99a9fc27c1e30b1f5