Bug #17849 » ceph.log

Joshua Schmid, 11/09/2016 05:26 PM
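
Console capture from salt-master: two ceph-deploy osd prepare runs with --dmcrypt against salt-minion-2:/dev/vdb. The first run aborts on a ceph.conf content mismatch; the second creates the lockbox partition and filesystem, then fails when ceph-disk cannot authenticate to the cluster.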

 
salt-master :: ~ » ceph-deploy -v osd prepare --dmcrypt salt-minion-2:/dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy -v osd prepare --dmcrypt salt-minion-2:/dev/vdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username                  : None
[ceph_deploy.cli][INFO ] disk                      : [('salt-minion-2', '/dev/vdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt                   : True
[ceph_deploy.cli][INFO ] verbose                   : True
[ceph_deploy.cli][INFO ] bluestore                 : None
[ceph_deploy.cli][INFO ] overwrite_conf            : False
[ceph_deploy.cli][INFO ] subcommand                : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir           : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet                     : False
[ceph_deploy.cli][INFO ] cd_conf                   : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f050c081248>
[ceph_deploy.cli][INFO ] cluster                   : ceph
[ceph_deploy.cli][INFO ] fs_type                   : xfs
[ceph_deploy.cli][INFO ] func                      : <function osd at 0x7f050c4d59b0>
[ceph_deploy.cli][INFO ] ceph_conf                 : None
[ceph_deploy.cli][INFO ] default_release           : False
[ceph_deploy.cli][INFO ] zap_disk                  : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks salt-minion-2:/dev/vdb:
[salt-minion-2][DEBUG ] connected to host: salt-minion-2
[salt-minion-2][DEBUG ] detect platform information from remote host
[salt-minion-2][DEBUG ] detect machine type
[salt-minion-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: SUSE Linux Enterprise Server 12 x86_64
[ceph_deploy.osd][DEBUG ] Deploying osd to salt-minion-2
[salt-minion-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
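
The failure above is the expected guard: salt-minion-2 already has an /etc/ceph/ceph.conf whose content differs from the one ceph-deploy wants to push, so nothing is written. The retry below adds --overwrite-conf, as the error message suggests.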

salt-master :: ~ » ceph-deploy --overwrite-conf -v osd prepare --dmcrypt salt-minion-2:/dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy --overwrite-conf -v osd prepare --dmcrypt salt-minion-2:/dev/vdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username                  : None
[ceph_deploy.cli][INFO ] disk                      : [('salt-minion-2', '/dev/vdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt                   : True
[ceph_deploy.cli][INFO ] verbose                   : True
[ceph_deploy.cli][INFO ] bluestore                 : None
[ceph_deploy.cli][INFO ] overwrite_conf            : True
[ceph_deploy.cli][INFO ] subcommand                : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir           : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet                     : False
[ceph_deploy.cli][INFO ] cd_conf                   : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1843361248>
[ceph_deploy.cli][INFO ] cluster                   : ceph
[ceph_deploy.cli][INFO ] fs_type                   : xfs
[ceph_deploy.cli][INFO ] func                      : <function osd at 0x7f18437b59b0>
[ceph_deploy.cli][INFO ] ceph_conf                 : None
[ceph_deploy.cli][INFO ] default_release           : False
[ceph_deploy.cli][INFO ] zap_disk                  : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks salt-minion-2:/dev/vdb:
[salt-minion-2][DEBUG ] connected to host: salt-minion-2
[salt-minion-2][DEBUG ] detect platform information from remote host
[salt-minion-2][DEBUG ] detect machine type
[salt-minion-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: SUSE Linux Enterprise Server 12 x86_64
[ceph_deploy.osd][DEBUG ] Deploying osd to salt-minion-2
[salt-minion-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host salt-minion-2 disk /dev/vdb journal None activate False
[salt-minion-2][DEBUG ] find the location of an executable
[salt-minion-2][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph --fs-type xfs -- /dev/vdb
[salt-minion-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[salt-minion-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[salt-minion-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[salt-minion-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
[salt-minion-2][WARNIN] set_type: Will colocate journal with data on /dev/vdb
[salt-minion-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
[salt-minion-2][WARNIN] set_or_create_partition: Creating osd partition on /dev/vdb
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
[salt-minion-2][WARNIN] ptype_tobe_for_name: name = lockbox
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
[salt-minion-2][WARNIN] create_partition: Creating lockbox partition num 3 size 10 on /dev/vdb
[salt-minion-2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=3:0:+10M --change-name=3:ceph lockbox --partition-guid=3:None --typecode=3:fb3aabf9-d25f-47cc-bf5e-721d181642be --mbrtogpt -- /dev/vdb
[salt-minion-2][DEBUG ] Creating new GPT entries.
[salt-minion-2][DEBUG ] The operation has completed successfully.
[salt-minion-2][WARNIN] update_partition: Calling partprobe on created device /dev/vdb
[salt-minion-2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[salt-minion-2][WARNIN] command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb
[salt-minion-2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/254:16/dm/uuid
[salt-minion-2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb3 uuid path is /sys/dev/block/254:19/dm/uuid
[salt-minion-2][WARNIN] populate: Creating lockbox fs on %s: mkfs -t ext4 /dev/vdb3
[salt-minion-2][WARNIN] command_check_call: Running command: /sbin/mkfs -t ext4 /dev/vdb3
[salt-minion-2][WARNIN] mke2fs 1.42.11 (09-Jul-2014)
[salt-minion-2][DEBUG ] Creating filesystem with 10240 1k blocks and 2560 inodes
[salt-minion-2][DEBUG ] Filesystem UUID: d1e214d4-9e52-4dbe-af30-d1684c468f63
[salt-minion-2][DEBUG ] Superblock backups stored on blocks:
[salt-minion-2][DEBUG ] 8193
[salt-minion-2][DEBUG ]
[salt-minion-2][DEBUG ] Allocating group tables: done
[salt-minion-2][DEBUG ] Writing inode tables: done
[salt-minion-2][DEBUG ] Creating journal (1024 blocks): done
[salt-minion-2][DEBUG ] Writing superblocks and filesystem accounting information: done
[salt-minion-2][DEBUG ]
[salt-minion-2][WARNIN] populate: Mounting lockbox mount -t ext4 /dev/vdb3 /var/lib/ceph/osd-lockbox/ce816935-fb5e-414b-8df8-e0c8973f7fb0
[salt-minion-2][WARNIN] command_check_call: Running command: /usr/bin/mount -t ext4 /dev/vdb3 /var/lib/ceph/osd-lockbox/ce816935-fb5e-414b-8df8-e0c8973f7fb0
[salt-minion-2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd-lockbox/ce816935-fb5e-414b-8df8-e0c8973f7fb0/osd-uuid.1855.tmp
[salt-minion-2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[salt-minion-2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[salt-minion-2][WARNIN] command_check_call: Running command: /usr/bin/ceph config-key put dm-crypt/osd/ce816935-fb5e-414b-8df8-e0c8973f7fb0/luks ZVRXLA0ZU4NsU2oixPPLG9wYsC2jlnztzv3BcXA4jIhSU9L63g/Sjo3RGRfvwHEc3BcowLs2m05MkKds4T0cORw8s8dLYpHnbz3H+Em67oQbj/l6JwSt/ErHJK6fWtVy/TwMxyvfME/ZMy4p/CtKM1t7nlxsUTR+DsojZaSlSpo=
[salt-minion-2][WARNIN] 2016-11-09 12:25:56.386822 7f1e6bb85700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
[salt-minion-2][WARNIN] 2016-11-09 12:25:56.387059 7f1e6bb85700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[salt-minion-2][WARNIN] 2016-11-09 12:25:56.387153 7f1e6bb85700 0 librados: client.admin initialization error (2) No such file or directory
[salt-minion-2][WARNIN] Error connecting to cluster: ObjectNotFound
[salt-minion-2][WARNIN] Traceback (most recent call last):
[salt-minion-2][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[salt-minion-2][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[salt-minion-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5009, in run
[salt-minion-2][WARNIN]     main(sys.argv[1:])
[salt-minion-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4960, in main
[salt-minion-2][WARNIN]     args.func(args)
[salt-minion-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1793, in main
[salt-minion-2][WARNIN]     Prepare.factory(args).prepare()
[salt-minion-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1782, in prepare
[salt-minion-2][WARNIN]     self.prepare_locked()
[salt-minion-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1812, in prepare_locked
[salt-minion-2][WARNIN]     self.lockbox.prepare()
[salt-minion-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2382, in prepare
[salt-minion-2][WARNIN]     self.populate()
[salt-minion-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2327, in populate
[salt-minion-2][WARNIN]     self.create_key()
[salt-minion-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2286, in create_key
[salt-minion-2][WARNIN]     base64_key,
[salt-minion-2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 440, in command_check_call
[salt-minion-2][WARNIN]     return subprocess.check_call(arguments)
[salt-minion-2][WARNIN]   File "/usr/lib64/python2.7/subprocess.py", line 540, in check_call
[salt-minion-2][WARNIN]     raise CalledProcessError(retcode, cmd)
[salt-minion-2][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph', 'config-key', 'put', 'dm-crypt/osd/ce816935-fb5e-414b-8df8-e0c8973f7fb0/luks', 'ZVRXLA0ZU4NsU2oixPPLG9wYsC2jlnztzv3BcXA4jIhSU9L63g/Sjo3RGRfvwHEc3BcowLs2m05MkKds4T0cORw8s8dLYpHnbz3H+Em67oQbj/l6JwSt/ErHJK6fWtVy/TwMxyvfME/ZMy4p/CtKM1t7nlxsUTR+DsojZaSlSpo=']' returned non-zero exit status 1
[salt-minion-2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph --fs-type xfs -- /dev/vdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
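
Root cause, per the WARNIN lines above: with --dmcrypt, ceph-disk stores the generated LUKS key in the monitors' config-key store (/usr/bin/ceph config-key put dm-crypt/osd/<uuid>/luks ...), and that command runs on salt-minion-2. The minion has no keyring at /etc/ceph/ceph.client.admin.keyring (or any of the other default paths), so cephx authentication fails, librados cannot connect (Error connecting to cluster: ObjectNotFound), and the prepare aborts after the lockbox filesystem is already created.

A minimal remediation sketch, assuming the monitors are reachable and salt-master holds the client.admin keyring (ceph-deploy admin is the stock subcommand that copies ceph.conf and the admin keyring into /etc/ceph on the target; hostnames taken from this log):

# push ceph.conf plus the client.admin keyring to the OSD node, so that
# ceph-disk can authenticate when it runs `ceph config-key put`
ceph-deploy admin salt-minion-2

# then retry the prepare
ceph-deploy --overwrite-conf -v osd prepare --dmcrypt salt-minion-2:/dev/vdb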
