Feature #6258 (closed)

ceph-disk: zap should wipefs

Added by Sage Weil over 10 years ago. Updated about 5 years ago.

Status:
Rejected
Priority:
High
Assignee:
-
Category:
ceph cli
Target version:
-
% Done:
0%

Source:
Community (user)
Tags:
Backport:
Reviewed:
Affected Versions:
Pull request ID:

#1

Updated by Ian Colle over 10 years ago

  • Status changed from New to 4

Waiting for feedback from list.

#2

Updated by Ian Colle over 10 years ago

  • Assignee set to Alfredo Deza
#3

Updated by Ian Colle about 10 years ago

  • Tracker changed from Bug to Feature
  • Status changed from 4 to New
#4

Updated by Alfredo Deza over 9 years ago

A user in the #ceph-devel channel ran into exactly this: zapping the disk made no difference, the old filesystem was still there, and in this case `btrfs` refused to continue during an OSD prepare:

[root@server0 ceph]#  for i in 0 1 ; do for x in b c d ; do ceph-deploy osd create --zap-disk --fs-type btrfs node${i}:sd${x}:/dev/sd${x}2 ;done ; done
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.18): /usr/bin/ceph-deploy osd create --zap-disk --fs-type btrfs node0:sdb:/dev/sdb2
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node0:/dev/sdb:/dev/sdb2
[node0][DEBUG ] connected to host: node0 
[node0][DEBUG ] detect platform information from remote host
[node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Fedora 20 Heisenbug
[ceph_deploy.osd][DEBUG ] Deploying osd to node0
[node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node0][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node0 disk /dev/sdb journal /dev/sdb2 activate True
[node0][INFO  ] Running command: ceph-disk -v prepare --zap-disk --fs-type btrfs --cluster ceph -- /dev/sdb /dev/sdb2
[node0][DEBUG ] ****************************************************************************
[node0][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[node0][DEBUG ] verification and recovery are STRONGLY recommended.
[node0][DEBUG ] ****************************************************************************
[node0][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node0][DEBUG ] other utilities.
[node0][DEBUG ] The operation has completed successfully.
[node0][DEBUG ] Setting name!
[node0][DEBUG ] partNum is 0
[node0][DEBUG ] REALLY setting name!
[node0][DEBUG ] The operation has completed successfully.
[node0][WARNIN] DEBUG:ceph-disk:Zapping partition table on /dev/sdb
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --zap-all --clear --mbrtogpt -- /dev/sdb
[node0][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[node0][WARNIN] backup header from main header.
[node0][WARNIN] 
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_btrfs
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_btrfs
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_btrfs
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_btrfs
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node0][WARNIN] DEBUG:ceph-disk:Creating journal file /dev/sdb2 with size 0 (ceph-osd will resize and allocate)
[node0][WARNIN] DEBUG:ceph-disk:Journal is file /dev/sdb2
[node0][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[node0][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:e10007c3-dd2e-432c-8628-0d3207297aae --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partprobe /dev/sdb
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[node0][WARNIN] DEBUG:ceph-disk:Creating btrfs fs on /dev/sdb1
[node0][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t btrfs -m single -l 32768 -n 32768 -- /dev/sdb1
[node0][WARNIN] /dev/sdb1 appears to contain an existing filesystem (btrfs).
[node0][WARNIN] Error: Use the -f option to force overwrite.
[node0][WARNIN] ceph-disk: Error: Command '['/usr/sbin/mkfs', '-t', 'btrfs', '-m', 'single', '-l', '32768', '-n', '32768', '--', '/dev/sdb1']' returned non-zero exit status 1
[node0][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --zap-disk --fs-type btrfs --cluster ceph -- /dev/sdb /dev/sdb2
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

[The same failure repeats for node0:sdc, node0:sdd, node1:sdb, node1:sdc, and node1:sdd, identical apart from device names and partition GUIDs.]
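The root cause is that `sgdisk --zap-all` destroys only the GPT/MBR structures, not the filesystem superblocks inside the old partitions; when the new partition lands on the same sectors, `mkfs.btrfs` finds the stale signature and refuses. Running `wipefs` with no options is a read-only way to confirm this (a quick sketch, using the first device from the log above):

wipefs /dev/sdb1           # lists any detected signatures; erases nothing
wipefs --all /dev/sdb1     # actually erases every detected signature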

The following workaround did work in his case, which is telling for this ticket, since it calls `wipefs` explicitly:

#!/bin/bash
# Workaround: scrub filesystem signatures and the leading blocks of each
# partition and of the whole disk before handing it back to ceph-disk.

for i in b c d; do
  for n in 1 2 ; do
    # Erase all detected filesystem signatures (note: without --all,
    # wipefs only lists signatures and wipes nothing).
    wipefs --all /dev/sd${i}${n}
    # Zero the start of the partition for good measure.
    dd if=/dev/zero of=/dev/sd${i}${n} bs=4k count=10000
    # Destroy GPT/MBR structures on the whole disk.
    sgdisk --zap-all --clear -g /dev/sd${i}
#    parted /dev/sd${i} rm ${n}
    # Drop stale device-mapper partition mappings and re-read the table.
    kpartx -dug /dev/sd${i}
    partprobe /dev/sd${i}
    # Zero the start of the disk itself, then let ceph-disk zap it.
    dd if=/dev/zero of=/dev/sd${i} bs=4k count=10000
    ceph-disk zap /dev/sd${i}
  done
done

/usr/bin/udevadm settle
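
Stripped to its essentials, the request is for `ceph-disk zap` to perform the signature wipe itself so the workaround above becomes unnecessary. A minimal sketch of the intended command sequence (not the actual ceph-disk implementation; /dev/sdX is a placeholder):

#!/bin/bash
# Sketch of the requested zap behavior: erase filesystem signatures
# before rewriting the partition table, so a later mkfs never trips
# over stale metadata.
DEV=/dev/sdX

# Wipe signatures from each existing partition, then the whole disk.
for part in "${DEV}"?*; do
    [ -b "${part}" ] && wipefs --all "${part}"
done
wipefs --all "${DEV}"

# Destroy GPT and MBR structures, as ceph-disk zap already does today.
sgdisk --zap-all --clear --mbrtogpt -- "${DEV}"

# Let the kernel and udev settle on the now-empty partition table.
partprobe "${DEV}"
udevadm settle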
#5

Updated by Ben Hines almost 9 years ago

+1 for this, would be nice.

#6

Updated by Sage Weil over 8 years ago

  • Project changed from devops to Ceph
  • Category deleted (ceph-disk)
#7

Updated by Alfredo Deza about 7 years ago

  • Category set to ceph cli
  • Assignee deleted (Alfredo Deza)
#8

Updated by Patrick Donnelly about 5 years ago

  • Status changed from New to Rejected

ceph-disk is dead; long live ceph-volume!
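
For anyone landing here now: the successor tool does what this ticket asked for. `ceph-volume lvm zap` wipes filesystem signatures when zapping a device, and `--destroy` additionally tears down LVM and partition structures (usage below; exact behavior may vary by release):

ceph-volume lvm zap /dev/sdX
ceph-volume lvm zap --destroy /dev/sdX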
