Bug #16074

`ceph-deploy osd prepare` failed but daemon is running

Added by Shinobu Kinjo almost 8 years ago. Updated about 6 years ago.

Status: Closed
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: ceph-deploy
Crash signature (v1):
Crash signature (v2):

Description

[ceph@octopus conf]$ ceph-deploy osd prepare octopus:sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy osd prepare octopus:sdc
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('octopus', '/dev/sdc', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0xafe5a8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0xaf0b18>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks octopus:/dev/sdc:
[octopus][DEBUG ] connection detected need for sudo
[octopus][DEBUG ] connected to host: octopus
[octopus][DEBUG ] detect platform information from remote host
[octopus][DEBUG ] detect machine type
[octopus][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to octopus
[octopus][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[octopus][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host octopus disk /dev/sdc journal None activate False
[octopus][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdc
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdc
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdc
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:cd23feb0-b10e-4c89-859c-7e4b55362487 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdc
[octopus][DEBUG ] The operation has completed successfully.
[octopus][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdc
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdc
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[octopus][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/cd23feb0-b10e-4c89-859c-7e4b55362487
[octopus][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/cd23feb0-b10e-4c89-859c-7e4b55362487
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdc
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:d9781520-f165-42bc-b0b6-02b60d5d1414 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdc
[octopus][DEBUG ] The operation has completed successfully.
[octopus][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sdc
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdc
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdc1
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdc1
[octopus][DEBUG ] meta-data=/dev/sdc1 isize=2048 agcount=4, agsize=17948607 blks
[octopus][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[octopus][DEBUG ] = crc=0 finobt=0
[octopus][DEBUG ] data = bsize=4096 blocks=71794427, imaxpct=25
[octopus][DEBUG ] = sunit=0 swidth=0 blks
[octopus][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0
[octopus][DEBUG ] log =internal log bsize=4096 blocks=35055, version=2
[octopus][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[octopus][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[octopus][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.Jqtovf with options noatime,inode64
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.Jqtovf
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.Jqtovf
[octopus][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.Jqtovf
[octopus][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.Jqtovf/journal -> /dev/disk/by-partuuid/cd23feb0-b10e-4c89-859c-7e4b55362487
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.Jqtovf/ceph_fsid.8224.tmp
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Jqtovf/ceph_fsid.8224.tmp
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.Jqtovf/fsid.8224.tmp
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Jqtovf/fsid.8224.tmp
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.Jqtovf/journal_uuid.8224.tmp
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Jqtovf/journal_uuid.8224.tmp
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.Jqtovf/magic.8224.tmp
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Jqtovf/magic.8224.tmp
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.Jqtovf
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Jqtovf
[octopus][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.Jqtovf
[octopus][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Jqtovf
[octopus][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uuid
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdc
[octopus][DEBUG ] The operation has completed successfully.
[octopus][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdc
[octopus][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[octopus][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdc
[octopus][WARNIN] Error: Error informing the kernel about modifications to partition /dev/sdc1 -- Device or resource busy. This means Linux won't know about any changes you made to /dev/sdc1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
[octopus][WARNIN] Error: Failed to add partition 1 (Device or resource busy)
[octopus][WARNIN] Traceback (most recent call last):
[octopus][WARNIN] File "/sbin/ceph-disk", line 4036, in <module>
[octopus][WARNIN] main(sys.argv[1:])
[octopus][WARNIN] File "/sbin/ceph-disk", line 3990, in main
[octopus][WARNIN] args.func(args)
[octopus][WARNIN] File "/sbin/ceph-disk", line 1919, in main_prepare
[octopus][WARNIN] luks=luks
[octopus][WARNIN] File "/sbin/ceph-disk", line 1693, in prepare_dev
[octopus][WARNIN] update_partition(data, 'prepared')
[octopus][WARNIN] File "/sbin/ceph-disk", line 1247, in update_partition
[octopus][WARNIN] command_check_call(['partprobe', dev])
[octopus][WARNIN] File "/sbin/ceph-disk", line 375, in command_check_call
[octopus][WARNIN] return subprocess.check_call(arguments)
[octopus][WARNIN] File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[octopus][WARNIN] raise CalledProcessError(retcode, cmd)
[octopus][WARNIN] subprocess.CalledProcessError: Command '['/sbin/partprobe', '/dev/sdc']' returned non-zero exit status 1
[octopus][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdc
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
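
For reference, the traceback above shows that ceph-disk's command_check_call is a thin wrapper around subprocess.check_call, so the single non-zero exit from /sbin/partprobe (the kernel reports /dev/sdc1 as busy) aborts the whole prepare run even though the partitions, filesystem and data directory had already been created. A minimal sketch of that propagation, with an illustrative retry-plus-settle loop around partprobe (the retry policy and the device path are assumptions for illustration, not ceph-disk's actual code):

import subprocess
import time

def command_check_call(arguments):
    # As in the traceback: any non-zero exit status raises
    # CalledProcessError, which bubbles up and fails the whole prepare.
    return subprocess.check_call(arguments)

def update_partition_with_retry(dev, attempts=5, delay=1.0):
    # Illustrative workaround only: let udev finish processing events,
    # then retry partprobe a few times before giving up, so a partition
    # that is only transiently busy does not abort the run.
    for attempt in range(1, attempts + 1):
        subprocess.call(['/usr/bin/udevadm', 'settle'])
        try:
            command_check_call(['/sbin/partprobe', dev])
            return
        except subprocess.CalledProcessError:
            if attempt == attempts:
                raise
            time.sleep(delay)

if __name__ == '__main__':
    # Hypothetical invocation; only run against a disk you can safely probe.
    update_partition_with_retry('/dev/sdc')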

[ceph@octopus conf]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
[snip]
/dev/sdc1 287037488 33700 287003788 1% /var/lib/ceph/osd/ceph-2

[ceph@octopus conf]$ sudo systemctl status ceph-osd@2
● ceph-osd@2.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2016-05-30 22:50:23 EDT; 1min 17s ago
    Main PID: 9174 (ceph-osd)
    CGroup: /system.slice/system-ceph\
    └─9174 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph

May 30 22:50:23 octopus.fullstack.go ceph-osd[9174]: starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
May 30 22:50:23 octopus.fullstack.go ceph-osd[9174]: 2016-05-30 22:50:23.843530 7f49f92af7c0 -1 osd.2 0 log_to_monitors {default=true}
May 30 22:50:23 octopus.fullstack.go ceph-osd[9174]: sh: lsb_release: command not found
May 30 22:50:23 octopus.fullstack.go ceph-osd[9174]: 2016-05-30 22:50:23.870360 7f49de2e8700 -1 lsb_release_parse - pclose failed: (13) Permission denied
May 30 22:50:24 octopus.fullstack.go systemd[1]: Started Ceph object storage daemon.
May 30 22:50:24 octopus.fullstack.go systemd[1]: Started Ceph object storage daemon.
May 30 22:50:25 octopus.fullstack.go systemd[1]: Started Ceph object storage daemon.
May 30 22:50:25 octopus.fullstack.go systemd[1]: Started Ceph object storage daemon.
May 30 22:50:26 octopus.fullstack.go systemd[1]: Started Ceph object storage daemon.
May 30 22:50:27 octopus.fullstack.go systemd[1]: Started Ceph object storage daemon.

Mmm...
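
So despite the GenericError, the end state is the one a successful prepare would have produced: /dev/sdc1 is mounted at /var/lib/ceph/osd/ceph-2 and ceph-osd@2 is active, presumably because the udev-triggered activation raced with (and caused) the final partprobe failure. A small sketch of how that state can be checked from a script; the paths and unit name simply follow the defaults shown above:

import os
import subprocess

def osd_state(osd_id):
    # Report whether the OSD data directory is mounted and whether the
    # corresponding systemd unit is active.
    data_dir = '/var/lib/ceph/osd/ceph-%d' % osd_id
    mounted = os.path.ismount(data_dir)
    # 'systemctl is-active --quiet' exits 0 only when the unit is active.
    active = subprocess.call(
        ['systemctl', 'is-active', '--quiet', 'ceph-osd@%d' % osd_id]) == 0
    return mounted, active

if __name__ == '__main__':
    mounted, active = osd_state(2)
    print('osd.2: data dir mounted=%s, daemon active=%s' % (mounted, active))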

History

#1 Updated by Josh Durgin almost 7 years ago

  • Project changed from Ceph to Ceph-deploy

#2 Updated by Alfredo Deza about 6 years ago

  • Status changed from New to Closed

This ticket is really ceph-disk failing. ceph-deploy no longer uses ceph-disk as a backend, so closing this.
