[ajdev0@osd1 ~]$ sudo ceph-disk zap /dev/sdc /dev/sdd /dev/sde
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
#############################################################################################################################
[ajdev0@osd1 ~]$ sudo ceph-disk list
/dev/sda :
 /dev/sda1 other, ext3, mounted on /
 /dev/sda2 other, 0x0
 /dev/sda3 other, swap
 /dev/sda4 other, 0x0
/dev/sdb other, unknown
/dev/sdc other, unknown
/dev/sdd other, unknown
/dev/sde other, unknown
#############################################################################################################################
[ajdev0@osd1 ~]$ sudo rm -f /tmp/ramdisk/journal-sdc.journal
#############################################################################################################################
[ajdev0@osd1 ~]$ sudo ceph-disk --verbose prepare /dev/sdc /tmp/ramdisk/journal-sdc.journal
command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_type
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
prepare_file: Creating journal file /tmp/ramdisk/journal-sdc.journal with size 0 (ceph-osd will resize and allocate)
command: Running command: /sbin/restorecon -R /tmp/ramdisk/journal-sdc.journal
command: Running command: /usr/bin/chown -R ceph:ceph /tmp/ramdisk/journal-sdc.journal
prepare_file: Journal is file /tmp/ramdisk/journal-sdc.journal
prepare_file: OSD will not be hot-swappable if journal is not the same device as the osd data
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
set_data_partition: Creating osd partition on /dev/sdc
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
ptype_tobe_for_name: name = data
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
create_partition: Creating data partition num 1 size 0 on /dev/sdc
command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:70b7ab91-566c-4885-a5cd-796f139a0948 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdc
The operation has completed successfully.
update_partition: Calling partprobe on created device /dev/sdc
command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
command: Running command: /usr/bin/flock -s /dev/sdc /sbin/partprobe /dev/sdc
command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
get_dm_uuid: get_dm_uuid /dev/sdc1 uuid path is /sys/dev/block/8:33/dm/uuid
populate_data_path_device: Creating xfs fs on /dev/sdc1
command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdc1
specified blocksize 4096 is less than device physical sector size 8192
switching to logical sector size 512
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=7324095 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=29296379, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=14304, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
mount: Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.5iZ6qk with options noatime,inode64
command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.5iZ6qk
command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.5iZ6qk
populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.5iZ6qk
command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.5iZ6qk/ceph_fsid.67481.tmp
command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5iZ6qk/ceph_fsid.67481.tmp
command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.5iZ6qk/fsid.67481.tmp
command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5iZ6qk/fsid.67481.tmp
command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.5iZ6qk/magic.67481.tmp
command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5iZ6qk/magic.67481.tmp
command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.5iZ6qk/journal_uuid.67481.tmp
command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5iZ6qk/journal_uuid.67481.tmp
adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.5iZ6qk/journal -> /tmp/ramdisk/journal-sdc.journal
command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.5iZ6qk
command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5iZ6qk
unmount: Unmounting /var/lib/ceph/tmp/mnt.5iZ6qk
command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.5iZ6qk
get_dm_uuid: get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdc
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
update_partition: Calling partprobe on prepared device /dev/sdc
command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
command: Running command: /usr/bin/flock -s /dev/sdc /sbin/partprobe /dev/sdc
command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdc1
#########################################################################################################################################
[ajdev0@osd1 ~]$ sleep 10
#########################################################################################################################################
[ajdev0@osd1 ~]$ sudo ceph-disk list
/dev/sda :
 /dev/sda1 other, ext3, mounted on /
 /dev/sda2 other, 0x0
 /dev/sda3 other, swap
 /dev/sda4 other, 0x0
/dev/sdb other, unknown
/dev/sdc :
 /dev/sdc1 ceph data, prepared, cluster ceph, osd.3
/dev/sdd other, unknown
/dev/sde other, unknown
#########################################################################################################################################
[ajdev0@osd1 ~]$ sudo ceph-disk --verbose activate /dev/sdc1
main_activate: path = /dev/sdc1
get_dm_uuid: get_dm_uuid /dev/sdc1 uuid path is /sys/dev/block/8:33/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/sdc1
command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdc1
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
mount: Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.gO6lst with options noatime,inode64
command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.gO6lst
command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.gO6lst
activate: Cluster uuid is 066f558c-6789-4a93-aaf1-5af1ba01a3ad
command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
activate: Cluster name is ceph
activate: OSD uuid is 70b7ab91-566c-4885-a5cd-796f139a0948
activate: OSD id is 3
activate: Initializing OSD...
command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.gO6lst/activate.monmap
got monmap epoch 1
command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 3 --monmap /var/lib/ceph/tmp/mnt.gO6lst/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.gO6lst --osd-journal /var/lib/ceph/tmp/mnt.gO6lst/journal --osd-uuid 70b7ab91-566c-4885-a5cd-796f139a0948 --keyring /var/lib/ceph/tmp/mnt.gO6lst/keyring --setuser ceph --setgroup ceph
mount_activate: Failed to activate
unmount: Unmounting /var/lib/ceph/tmp/mnt.gO6lst
command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.gO6lst
Traceback (most recent call last):
  File "/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5142, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5093, in main
    args.func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3446, in main_activate
    reactivate=args.reactivate,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3203, in mount_activate
    (osd_id, cluster) = activate(path, activate_key_template, init)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3379, in activate
    keyring=keyring,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2851, in mkfs
    '--setgroup', get_ceph_group(),
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2798, in ceph_osd_mkfs
    raise Error('%s failed : %s' % (str(arguments), error))
ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'3', '--monmap', '/var/lib/ceph/tmp/mnt.gO6lst/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.gO6lst', '--osd-journal', '/var/lib/ceph/tmp/mnt.gO6lst/journal', '--osd-uuid', u'70b7ab91-566c-4885-a5cd-796f139a0948', '--keyring', '/var/lib/ceph/tmp/mnt.gO6lst/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2016-10-31 08:48:28.956391 7fae83b40800 -1 filestore(/var/lib/ceph/tmp/mnt.gO6lst) WARNING: max attr value size (1024) is smaller than osd_max_object_name_len (2048). Your backend filesystem appears to not support attrs large enough to handle the configured max rados name size. You may get unexpected ENAMETOOLONG errors on rados operations or buggy behavior
2016-10-31 08:48:28.981665 7fae83b40800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2016-10-31 08:48:28.981688 7fae83b40800 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 70b7ab91-566c-4885-a5cd-796f139a0948, invalid (someone else's?) journal
2016-10-31 08:48:28.981724 7fae83b40800 -1 filestore(/var/lib/ceph/tmp/mnt.gO6lst) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.gO6lst/journal: (22) Invalid argument
2016-10-31 08:48:28.981753 7fae83b40800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -22
2016-10-31 08:48:28.981864 7fae83b40800 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.gO6lst: (22) Invalid argument
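The log shows the journal is a plain file on /tmp/ramdisk, that FileJournal disabled aio for a non-block journal, and that mkjournal then failed with (22) Invalid argument. One plausible explanation (an assumption, not something the log proves) is that the ramdisk is tmpfs-backed, and tmpfs rejects O_DIRECT opens, which the filestore journal uses by default. A minimal probe for that hypothesis, using only `dd`; the path is taken from this session:

```shell
# Sketch, assuming /tmp/ramdisk is the journal location from the session above.
# tmpfs typically rejects O_DIRECT with EINVAL, which would match the
# "(22) Invalid argument" from mkjournal.
dir="${1:-/tmp/ramdisk}"
mkdir -p "$dir"
probe="$dir/odirect-probe.$$"
if dd if=/dev/zero of="$probe" bs=4096 count=1 oflag=direct 2>/dev/null; then
    echo "O_DIRECT accepted on $dir"
else
    echo "O_DIRECT rejected on $dir -- a file journal with dio enabled would fail here"
fi
rm -f "$probe"      # clean up the probe file either way
```

If the probe is rejected, moving the journal file off tmpfs onto a real filesystem (or, strictly for testing the hypothesis, setting `journal dio = false` and `journal aio = false` for that OSD in ceph.conf) should let `ceph-disk activate` get past mkjournal.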