Bug #44028 (closed)

cephadm: usability: failing to add an osd, useless message

Added by Yehuda Sadeh about 4 years ago. Updated about 4 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

[ceph: root@mira118 /]# ceph orchestrator device ls
HOST                         PATH     TYPE  SIZE DEVICE                                 AVAIL REJECT REASONS 
mira111.front.sepia.ceph.com /dev/sdb hdd   931G Hitachi_HUA722010CLA330_JPW9K0N2160WME True                 
mira111.front.sepia.ceph.com /dev/sdc hdd   931G Seagate_ST31000528AS_5VP1J2MM          True                 
mira111.front.sepia.ceph.com /dev/sdd hdd   931G Hitachi_HUA722010CLA330_JPW9J0N214N4VC True                 
mira111.front.sepia.ceph.com /dev/sde hdd   931G Seagate_ST31000528AS_9VP5H1XS          True                 
mira111.front.sepia.ceph.com /dev/sdf hdd   931G Hitachi_HUA722010CLA330_JPW9K0N2162SWE True                 
mira111.front.sepia.ceph.com /dev/sdg hdd   931G Seagate_ST31000528AS_5VP7GGAN          True                 
mira111.front.sepia.ceph.com /dev/sdh hdd   931G Hitachi_HDS721010CLA330_JPS930N122PR9L True                 
mira111.front.sepia.ceph.com /dev/sda hdd   931G Hitachi_HUA722010CLA330_JPW9J0N214VNDC False locked         
mira115.front.sepia.ceph.com /dev/sdh hdd   931G Hitachi_HUA722010CLA330_JPW9K0HD2LMN8L True                 
mira115.front.sepia.ceph.com /dev/sda hdd   931G Hitachi_HUA722010CLA330_JPW9K0HD2NYR8L False locked         
mira115.front.sepia.ceph.com /dev/sdb hdd   931G Hitachi_HUA722010CLA330_JPW9K0HD2GWRLL False locked         
mira115.front.sepia.ceph.com /dev/sdc hdd   931G Hitachi_HUA722010CLA330_JPW9K0HD2NYN0L False locked         
mira115.front.sepia.ceph.com /dev/sdd hdd   931G Hitachi_HUA722010CLA330_JPW9J0HD2G1HYC False locked         
mira115.front.sepia.ceph.com /dev/sde hdd   931G Hitachi_HUA722010CLA330_JPW9K0HD2B325L False locked         
mira115.front.sepia.ceph.com /dev/sdf hdd   931G Hitachi_HUA722010CLA330_JPW9K0HZ22V3VL False locked         
mira115.front.sepia.ceph.com /dev/sdg hdd   931G Hitachi_HUA722010CLA330_JPW9K0N12X9ZDL False locked         
mira118.front.sepia.ceph.com /dev/sdd hdd   931G Hitachi_HUA722010CLA330_JPW9J0N2118HDC True                 
mira118.front.sepia.ceph.com /dev/sde hdd   931G Hitachi_HDS721010CLA330_JPS930N122PBZL True                 
mira118.front.sepia.ceph.com /dev/sdf hdd   931G Hitachi_HUA722010CLA330_JPW9K0HD2LP5LL True                 
mira118.front.sepia.ceph.com /dev/sdg hdd   931G Hitachi_HUA722010CLA330_JPW9J0N211GDKC True                 
mira118.front.sepia.ceph.com /dev/sdh hdd   931G Hitachi_HUA722010CLA330_JPW9J0N211GEWC True                 
mira118.front.sepia.ceph.com /dev/sda hdd   931G Hitachi_HDS721010CLA330_JPS930N1227KEL False locked         
mira118.front.sepia.ceph.com /dev/sdb hdd   931G NA_HUA721010KLA330_PAJ3H6PF            False locked         
mira118.front.sepia.ceph.com /dev/sdc hdd   931G Hitachi_HDS721010CLA330_JPS930N122MMML False locked         
[ceph: root@mira118 /]# ceph orchestrator osd create mira111.front.sepia.ceph.com:/dev/sdc
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1069, in _handle_command
    return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 309, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator.py", line 140, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator_cli/module.py", line 373, in _create_osd
    orchestrator.raise_if_exception(completion)
  File "/usr/share/ceph/mgr/orchestrator.py", line 655, in raise_if_exception
    raise e
  File "/lib64/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/lib64/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 139, in do_work
    res = self._on_complete_(*args, **kwargs)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 206, in call_self
    return f(self, *inner_args)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1492, in _create_osd
    stdin=j)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1106, in _run_cephadm
    code, '\n'.join(err)))
RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:/usr/bin/docker:stderr  stderr: unable to read label for /dev/sdc: (2) No such file or directory
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6 
INFO:cephadm:/usr/bin/docker:stderr  stderr: Volume group name  has invalid characters
INFO:cephadm:/usr/bin/docker:stderr   Run `lvcreate --help' for more information.
INFO:cephadm:/usr/bin/docker:stderr --> Was unable to complete a new OSD, will rollback changes
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
INFO:cephadm:/usr/bin/docker:stderr  stderr: purged osd.0
INFO:cephadm:/usr/bin/docker:stderr -->  RuntimeError: command returned non-zero exit status: 3
Traceback (most recent call last):
  File "<stdin>", line 2813, in <module>
  File "<stdin>", line 663, in _infer_fsid
  File "<stdin>", line 2116, in command_ceph_volume
  File "<stdin>", line 493, in call_throws
RuntimeError: Failed command: /usr/bin/docker run --rm --net=host --privileged -e CONTAINER_IMAGE=ceph/daemon-base:latest-master-devel -e NODE_NAME=mira111.front.sepia.ceph.com -v /var/run/ceph/58b27a20-4943-11ea-9cf6-00259009bf56:/var/run/ceph:z -v /var/log/ceph/58b27a20-4943-11ea-9cf6-00259009bf56:/var/log/ceph:z -v /var/lib/ceph/58b27a20-4943-11ea-9cf6-00259009bf56/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmpi772gm8_:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpv8bhvkyu:/var/lib/ceph/bootstrap-osd/ceph.keyring:z --entrypoint /usr/sbin/ceph-volume ceph/daemon-base:latest-master-devel lvm prepare --bluestore --data /dev/sdc --no-systemd
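
The failing step is the lvcreate call near the top of the stderr: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-<uuid> ends without a volume group name (note the trailing space in the logged command and the doubled space in "Volume group name  has invalid characters"). A minimal sketch, with hypothetical names rather than the real ceph-volume code, of how an empty VG string produces exactly that command line:

    # Hypothetical sketch (not the actual ceph-volume code): shows how an empty
    # volume-group name yields the truncated lvcreate call seen in the log above.
    def build_lvcreate(vg_name, lv_name):
        # '-l 100%FREE' asks for all free extents of the target VG; the VG name
        # is expected as the final positional argument.
        return ['/usr/sbin/lvcreate', '--yes', '-l', '100%FREE', '-n', lv_name, vg_name]

    cmd = build_lvcreate('', 'osd-block-11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6')
    # With vg_name == '' the rendered command ends in a trailing space, and LVM
    # rejects the empty name ("Volume group name  has invalid characters").
    print(' '.join(cmd))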

Related issues: 1 (0 open, 1 closed)

Related to ceph-volume - Bug #44096: lvm prepare doesn't create vg and thus does not pass vg name to lvcreate (Resolved, Yehuda Sadeh)

#1

Updated by Sebastian Wagner about 4 years ago

The error originated from ceph-volume:

Volume group name  has invalid characters
Run `lvcreate --help' for more information.

The question is: where does "100%FREE" come from?
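
"100%FREE" itself is just lvcreate's extent specification (use all of the free space in the target volume group); the suspicious part is that no VG name follows it in the logged command. The LVM error can presumably be reproduced in isolation by handing lvcreate an empty VG argument, roughly like this (hypothetical reproduction, needs lvm2 installed and root):

    # Hypothetical standalone reproduction, outside of cephadm/ceph-volume:
    # passing an empty volume-group argument should trigger the same complaint
    # as in the log (requires lvm2 and root; purely illustrative).
    import subprocess

    proc = subprocess.run(
        ['/usr/sbin/lvcreate', '--yes', '-l', '100%FREE', '-n', 'osd-block-test', ''],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
    print(proc.returncode)   # non-zero; lvcreate refuses the empty VG name
    print(proc.stderr)       # expected: "Volume group name  has invalid characters"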

#2

Updated by Sebastian Wagner about 4 years ago

ceph-ansible! See https://github.com/search?q=ceph+100%25FREE&type=Code

Looks like a regression in ceph-volume to me!

#3

Updated by Sebastian Wagner about 4 years ago

  • Related to Bug #44096 (lvm prepare doesn't create vg and thus does not pass vg name to lvcreate) added
#4

Updated by Sebastian Wagner about 4 years ago

  • Status changed from New to Resolved

I don't think we can make the error message any more useful. IMO we have to fix bugs in ceph-volume (c-v) and cephadm as they appear.
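
For reference, the reason the full ceph-volume/LVM stderr lands inside the "Error EINVAL" output is visible in the traceback above: _run_cephadm raises a RuntimeError built from the remote exit code plus all captured stderr lines. A rough paraphrase of that behaviour (illustrative names, not the actual cephadm source):

    # Rough paraphrase of what the traceback suggests _run_cephadm does when the
    # remote cephadm binary exits non-zero; names are illustrative only.
    def check_cephadm_result(code, err_lines):
        if code != 0:
            raise RuntimeError(
                'cephadm exited with an error code: {}, stderr:{}'.format(
                    code, '\n'.join(err_lines)))

    # The mgr turns that RuntimeError into the 'Error EINVAL: <traceback>' seen at
    # the top of the description, which is why the message is long but not very
    # actionable for the user.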

