Bug #17821 (closed)

ceph-disk and dmcrypt does not support cluster names different than 'ceph'

Added by Sébastien Han over 7 years ago. Updated almost 7 years ago.

Status: Resolved
Priority: Immediate
Assignee: Loïc Dachary
Category: -
Target version: -
% Done: 0%
Source: other
Tags: -
Backport: jewel,kraken
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

Command:

ceph-disk -v prepare --cluster ceph --dmcrypt /dev/vdd

Error:

...
...
command_check_call: Running command: /usr/bin/ceph config-key put dm-crypt/osd/2db5b2a2-c3c2-41bd-9b56-9cd3bda07dfd/luks JwGM8VRBtW8QIGBzcWyEERVI/Ta/2VVNRTouwBZbfwdrGljXwRIzFYdoYyNtAkh9LpTzhC8lOpQ7aOKZ8QtRHLhflAi+DAqqlxN+Gnee3duT0nj9iv90pYgXV+LADzCaIsIwHfwonWW0DqxYww600EdATLIbrZ9BVqoiaSoUI3s=
2016-11-08 13:32:04.704310 7fbc2deb9700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2016-11-08 13:32:04.710265 7fbc2deb9700 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2016-11-08 13:32:04.710270 7fbc2deb9700  0 librados: client.admin authentication error (95) Operation not supported
Error connecting to cluster: Error
Traceback (most recent call last):
  File "/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5011, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4962, in main
    args.func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1791, in main
    Prepare.factory(args).prepare()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1779, in prepare
    self.prepare_locked()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1810, in prepare_locked
    self.lockbox.prepare()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2377, in prepare
    self.populate()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2322, in populate
    self.create_key()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2281, in create_key
    base64_key,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 439, in command_check_call
    return subprocess.check_call(arguments)
  File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/ceph', 'config-key', 'put', 'dm-crypt/osd/2db5b2a2-c3c2-41bd-9b56-9cd3bda07dfd/luks', 'JwGM8VRBtW8QIGBzcWyEERVI/Ta/2VVNRTouwBZbfwdrGljXwRIzFYdoYyNtAkh9LpTzhC8lOpQ7aOKZ8QtRHLhflAi+DAqqlxN+Gnee3duT0nj9iv90pYgXV+LADzCaIsIwHfwonWW0DqxYww600EdATLIbrZ9BVqoiaSoUI3s=']' returned non-zero exit status 1
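
The traceback shows the dm-crypt key being stored with a bare /usr/bin/ceph config-key put invocation, so librados probes only the default /etc/ceph/ceph.* keyring paths no matter which cluster name was given to ceph-disk. A minimal sketch of a cluster-aware call (illustrative only, not the actual patch; --cluster is a standard flag of the ceph CLI):

import subprocess

def config_key_put(cluster, key, base64_key):
    # Pass the cluster name through to the ceph CLI so it reads
    # /etc/ceph/<cluster>.conf and <cluster>.client.admin.keyring
    # instead of assuming the 'ceph' defaults.
    subprocess.check_call([
        '/usr/bin/ceph',
        '--cluster', cluster,
        'config-key', 'put',
        key,          # e.g. 'dm-crypt/osd/<osd-uuid>/luks'
        base64_key,   # base64-encoded LUKS passphrase
    ])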


Related issues 2 (0 open, 2 closed)

Copied to Ceph - Backport #18972: jewel: ceph-disk does not support cluster names different than 'ceph' (Resolved, Nathan Cutler)
Copied to Ceph - Backport #18973: kraken: ceph-disk does not support cluster names different than 'ceph' (Resolved, Shinobu Kinjo)
#1

Updated by Ken Dreyer over 7 years ago

  • Status changed from New to Fix Under Review
  • Backport set to jewel
#2

Updated by Vikhyat Umrao about 7 years ago

  • Status changed from Fix Under Review to Pending Backport
#3

Updated by Loïc Dachary about 7 years ago

  • Backport changed from jewel to jewel,kraken
#4

Updated by Loïc Dachary about 7 years ago

  • Copied to Backport #18972: jewel: ceph-disk does not support cluster names different than 'ceph' added
#5

Updated by Loïc Dachary about 7 years ago

  • Copied to Backport #18973: kraken: ceph-disk does not support cluster names different than 'ceph' added
#6

Updated by Ganesh Mahalingam about 7 years ago

I recently ran into issues deploying Ceph with ceph-deploy after this change. I believe the two changes below should fix them, and I believe I am not tripping on someone else's changes:

https://github.com/ceph/ceph/pull/13527
https://github.com/ceph/ceph-deploy/pull/430

The error I ran into:

ganeshma@otccldstore04:~$ ceph-deploy osd activate otccldstore04:/dev/sdb1:/dev/nvme0n1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ganeshma/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.37): /home/ganeshma/ceph-deploy/virtualenv/bin/ceph-deploy osd activate otccldstore04:/dev/sdb1:/dev/nvme0n1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7c91f3fd88>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f7c9226a6e0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('otccldstore04', '/dev/sdb1', '/dev/nvme0n1')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks otccldstore04:/dev/sdb1:/dev/nvme0n1
[otccldstore04][DEBUG ] connection detected need for sudo
[otccldstore04][DEBUG ] connected to host: otccldstore04
[otccldstore04][DEBUG ] detect platform information from remote host
[otccldstore04][DEBUG ] detect machine type
[otccldstore04][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.osd][DEBUG ] activating host otccldstore04 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[otccldstore04][DEBUG ] find the location of an executable
[otccldstore04][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --cluster ceph --mount /dev/sdb1
[otccldstore04][WARNIN] usage: ceph-disk [-h] [-v] [--log-stdout] [--prepend-to-path PATH]
[otccldstore04][WARNIN] [--statedir PATH] [--sysconfdir PATH] [--setuser USER]
[otccldstore04][WARNIN] [--setgroup GROUP]
[otccldstore04][WARNIN] {prepare,activate,activate-lockbox,activate-block,activate-journal,activate-all,list,suppress-activate,unsuppress-activate,deactivate,destroy,zap,trigger}
[otccldstore04][WARNIN] ...
[otccldstore04][WARNIN] ceph-disk: error: unrecognized arguments: --cluster /dev/sdb1
[otccldstore04][ERROR ] RuntimeError: command returned non-zero exit status: 2
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --cluster ceph --mount /dev/sdb1

#7

Updated by Ganesh Mahalingam about 7 years ago

Apologies, I pasted the wrong error message above. The actual error:

ganeshma@otccldstore04:~$ ceph-deploy osd activate otccldstore04:/dev/sdb1:/dev/nvme0n1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ganeshma/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.37): /home/ganeshma/ceph-deploy/virtualenv/bin/ceph-deploy osd activate otccldstore04:/dev/sdb1:/dev/nvme0n1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0878a85d88>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f0878db06e0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('otccldstore04', '/dev/sdb1', '/dev/nvme0n1')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks otccldstore04:/dev/sdb1:/dev/nvme0n1
[otccldstore04][DEBUG ] connection detected need for sudo
[otccldstore04][DEBUG ] connected to host: otccldstore04
[otccldstore04][DEBUG ] detect platform information from remote host
[otccldstore04][DEBUG ] detect machine type
[otccldstore04][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.osd][DEBUG ] activating host otccldstore04 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[otccldstore04][DEBUG ] find the location of an executable
[otccldstore04][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
[otccldstore04][WARNIN] main_activate: path = /dev/sdb1
[otccldstore04][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[otccldstore04][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sdb1
[otccldstore04][WARNIN] Traceback (most recent call last):
[otccldstore04][WARNIN] File "/usr/sbin/ceph-disk", line 9, in <module>
[otccldstore04][WARNIN] load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[otccldstore04][WARNIN] File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5298, in run
[otccldstore04][WARNIN] main(sys.argv[1:])
[otccldstore04][WARNIN] File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5249, in main
[otccldstore04][WARNIN] args.func(args)
[otccldstore04][WARNIN] File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3594, in main_activate
[otccldstore04][WARNIN] cluster=args.cluster,
[otccldstore04][WARNIN] AttributeError: 'Namespace' object has no attribute 'cluster'
[otccldstore04][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
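
(For reference, this AttributeError is the standard argparse failure mode: main_activate dereferences args.cluster, but the activate subparser no longer defines a --cluster option. A self-contained reproduction, with hypothetical option names:)

import argparse

parser = argparse.ArgumentParser(prog='ceph-disk')
sub = parser.add_subparsers(dest='command')
activate = sub.add_parser('activate')
activate.add_argument('path')
# The subparser defines no --cluster option, yet the handler still
# dereferences it, exactly as main_activate does above:
args = parser.parse_args(['activate', '/dev/sdb1'])
cluster = args.cluster  # AttributeError: 'Namespace' object has no attribute 'cluster'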

#8

Updated by Loïc Dachary about 7 years ago

  • Status changed from Pending Backport to In Progress
  • Assignee set to Loïc Dachary
  • Priority changed from Normal to Immediate

Regression: activate must not take a --cluster argument; working on it.
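
Without the flag, activate has to discover the cluster name on its own; ceph-disk derives it by matching the OSD's fsid against the conf files on disk. A simplified sketch of that lookup (the real find_cluster_by_uuid in ceph_disk/main.py differs in its details):

import glob
import os

try:
    from configparser import ConfigParser  # Python 3
except ImportError:
    from ConfigParser import SafeConfigParser as ConfigParser  # Python 2

def find_cluster_by_uuid(fsid):
    # Whichever /etc/ceph/<cluster>.conf declares the same fsid as the
    # OSD's data directory names the cluster.
    for path in glob.glob('/etc/ceph/*.conf'):
        cluster = os.path.splitext(os.path.basename(path))[0]
        cp = ConfigParser()
        cp.read(path)
        if cp.has_option('global', 'fsid') and cp.get('global', 'fsid') == fsid:
            return cluster
    return None  # caller falls back to the 'ceph' default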

#9

Updated by Loïc Dachary about 7 years ago

  • Status changed from In Progress to Fix Under Review
#11

Updated by Loïc Dachary about 7 years ago

  • Subject changed from ceph-disk does not support cluster names different than 'ceph' to ceph-disk and dmcrypt does not support cluster names different than 'ceph'
#12

Updated by Loïc Dachary about 7 years ago

  • Description updated (diff)

For the purpose of backporting, only https://github.com/ceph/ceph/pull/13573/commits/7f66672b675abbc0262769d32a38112c781fefac should be cherry-picked.

#13

Updated by Loïc Dachary about 7 years ago

  • Status changed from Fix Under Review to Pending Backport
#14

Updated by Nathan Cutler almost 7 years ago

  • Status changed from Pending Backport to Resolved