Feature #49159 (closed)

"cephadm ceph-volume activate" does not support cephadm

Added by Kenneth Waegeman about 3 years ago. Updated over 1 year ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: cephadm
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport: pacific,octopus
Reviewed:
Affected Versions:
Pull request ID:

Description

On 15.2.8, when running `cephadm ceph-volume -- lvm activate --all`, I get an error related to dmcrypt:


 [root@osd2803 ~]# cephadm ceph-volume -- lvm activate --all
 Using recent ceph image docker.io/ceph/ceph:v15
 /usr/bin/podman:stderr --> Activating OSD ID 0 FSID 697698fd-3fa0-480f-807b-68492bd292bf
 /usr/bin/podman:stderr Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
 /usr/bin/podman:stderr Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/lockbox.keyring --create-keyring --name client.osd-lockbox.697698fd-3fa0-480f-807b-68492bd292bf --add-key AQAy7Bdg0jQsBhAAj0gcteTEbcpwNNvMGZqTTg==
 /usr/bin/podman:stderr  stdout: creating /var/lib/ceph/osd/ceph-0/lockbox.keyring
 /usr/bin/podman:stderr added entity client.osd-lockbox.697698fd-3fa0-480f-807b-68492bd292bf auth(key=AQAy7Bdg0jQsBhAAj0gcteTEbcpwNNvMGZqTTg==)
 /usr/bin/podman:stderr Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/lockbox.keyring
 /usr/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.osd-lockbox.697698fd-3fa0-480f-807b-68492bd292bf --keyring  /var/lib/ceph/osd/ceph-0/lockbox.keyring config-key get dm-crypt/osd/697698fd-3fa0-480f-807b-68492bd292bf/luks
 /usr/bin/podman:stderr  stderr: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
 /usr/bin/podman:stderr -->  RuntimeError: Unable to retrieve dmcrypt secret
 Traceback (most recent call last):
   File "/usr/sbin/cephadm", line 6111, in <module>
     r = args.func()
   File "/usr/sbin/cephadm", line 1322, in _infer_fsid
     return func()
   File "/usr/sbin/cephadm", line 1381, in _infer_image
     return func()
   File "/usr/sbin/cephadm", line 3611, in command_ceph_volume
     out, err, code = call_throws(c.run_cmd(), verbose=True)
   File "/usr/sbin/cephadm", line 1060, in call_throws
     raise RuntimeError('Failed command: %s' % ' '.join(command))
 RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=osd2803.banette.os -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm docker.io/ceph/ceph:v15 lvm activate --all

The OSDs are indeed encrypted. `cephadm ceph-volume lvm list` and `cephadm shell ceph -s` run just fine, and if I run ceph-volume directly the same command works, but then the daemons are of course started in the legacy way again, not in containers.

Thanks!!

Kenneth


Related issues 2 (1 open, 1 closed)

Related to Orchestrator - Documentation #46691: Document manual deployment of OSDs (New)
Has duplicate Orchestrator - Feature #49249: cephadm: Automatically create OSDs after reinstalling base os (Duplicate)
Actions #1

Updated by Sebastian Wagner about 3 years ago

There should not be such a big difference between running ceph-volume natively vs. in a container.

Please have a look at the OSDs' unit.run files (https://docs.ceph.com/en/latest/cephadm/troubleshooting/#manually-running-containers); this is how cephadm runs ceph-volume activate.

Did you verify that the files within the daemon directory are pointing to the correct cluster?
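
For what it's worth, a minimal sketch of how to inspect such a file, assuming the default cephadm layout under /var/lib/ceph/<cluster-fsid>/ (the fsid below is taken from this report and is only a placeholder):

  # Show how cephadm launches a given OSD; substitute your own cluster fsid and OSD id.
  FSID=d3253507-6fa6-42f1-8b22-59f761633f4c
  cat /var/lib/ceph/${FSID}/osd.0/unit.run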

Actions #2

Updated by Kenneth Waegeman about 3 years ago

Hi,

Thanks for looking into this!
Well, it's a chicken-and-egg problem:
This is a node that I just reinstalled with a new OS version. It has keyrings and ceph.conf in /etc/ceph, and ceph/cephadm installed. Normally I would just run `ceph-volume lvm activate --all`;
this scans all existing disks for OSDs, creates all the needed files including the systemd services, and starts the OSDs. I can still do that, but then they don't run in containers. I can run `cephadm adopt` afterwards and then all is fine, but I thought it would be possible to do this in one step by running `cephadm ceph-volume`.

Now I looked into those run files and I see the environment is different. Indeed, this is the command that is run when I do `cephadm ceph-volume lvm activate --all`:

/usr/bin/podman run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15.2.8 -e NODE_NAME=osd2802.banette.os -v /var/run/ceph/d3253507-6fa6-42f1-8b22-59f761633f4c:/var/run/ceph:z -v /var/log/ceph/d3253507-6fa6-42f1-8b22-59f761633f4c:/var/log/ceph:z -v /var/lib/ceph/d3253507-6fa6-42f1-8b22-59f761633f4c/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm docker.io/ceph/ceph:v15.2.8 lvm activate --all

and if I add a `-v /etc/ceph:/etc/ceph` mount, it actually no longer fails on the activation itself, only at the systemd part:

[root@osd2802 ~]# /usr/bin/podman run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15.2.8 -e NODE_NAME=osd2802.banette.os -v /var/run/ceph/d3253507-6fa6-42f1-8b22-59f761633f4c:/var/run/ceph:z -v /var/log/ceph/d3253507-6fa6-42f1-8b22-59f761633f4c:/var/log/ceph:z -v /var/lib/ceph/d3253507-6fa6-42f1-8b22-59f761633f4c/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /etc/ceph:/etc/ceph -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm docker.io/ceph/ceph:v15.2.8 lvm activate --all
--> Activating OSD ID 7 FSID 3cb0a90b-61e5-4756-9a37-7b70016bec83
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-7
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-7/lockbox.keyring --create-keyring --name client.osd-lockbox.3cb0a90b-61e5-4756-9a37-7b70016bec83 --add-key AQDikllbmrv4ERAA7NvMnZe0Mvgh/BExseQ5gA==
 stdout: creating /var/lib/ceph/osd/ceph-7/lockbox.keyring
 stdout: added entity client.osd-lockbox.3cb0a90b-61e5-4756-9a37-7b70016bec83 auth(key=AQDikllbmrv4ERAA7NvMnZe0Mvgh/BExseQ5gA==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7/lockbox.keyring
Running command: /usr/bin/ceph --cluster ceph --name client.osd-lockbox.3cb0a90b-61e5-4756-9a37-7b70016bec83 --keyring /var/lib/ceph/osd/ceph-7/lockbox.keyring config-key get dm-crypt/osd/3cb0a90b-61e5-4756-9a37-7b70016bec83/luks
Running command: /usr/sbin/cryptsetup --key-file - --allow-discards luksOpen /dev/ceph-461e04ab-5c63-4241-8f41-0920f14d8cf4/osd-block-3cb0a90b-61e5-4756-9a37-7b70016bec83 ti0m2t-Po6E-HPEe-wkFx-ldcW-fzg0-6EFJ23
 stderr: Device ti0m2t-Po6E-HPEe-wkFx-ldcW-fzg0-6EFJ23 already exists.
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/mapper/ti0m2t-Po6E-HPEe-wkFx-ldcW-fzg0-6EFJ23 --path /var/lib/ceph/osd/ceph-7 --no-mon-config
Running command: /usr/bin/ln -snf /dev/mapper/ti0m2t-Po6E-HPEe-wkFx-ldcW-fzg0-6EFJ23 /var/lib/ceph/osd/ceph-7/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-7/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-5
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
Running command: /usr/bin/systemctl enable ceph-volume@lvm-7-3cb0a90b-61e5-4756-9a37-7b70016bec83
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-7-3cb0a90b-61e5-4756-9a37-7b70016bec83.service \u2192 /usr/lib/systemd/system/ceph-volume@.service.
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 996, in emit
    stream.write(msg)
UnicodeEncodeError: 'ascii' codec can't encode character '\u2192' in position 185: ordinal not in range(128)
Call stack:
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 368, in main
    self.activate_all(args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 254, in activate_all
    self.activate(args, osd_id=osd_id, osd_fsid=osd_fsid)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 294, in activate
    activate_bluestore(lvs, args.no_systemd)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 214, in activate_bluestore
    systemctl.enable_volume(osd_id, osd_fsid, 'lvm')
  File "/usr/lib/python3.6/site-packages/ceph_volume/systemd/systemctl.py", line 82, in enable_volume
    return enable(volume_unit % (device_type, id_, fsid))
  File "/usr/lib/python3.6/site-packages/ceph_volume/systemd/systemctl.py", line 22, in enable
    process.run(['systemctl', 'enable', unit])
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 137, in run
    log_descriptors(reads, process, terminal_logging)
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 59, in log_descriptors
    log_output(descriptor_name, message, terminal_logging, True)
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 34, in log_output
    logger.info(line)
Message: 'stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-7-3cb0a90b-61e5-4756-9a37-7b70016bec83.service \u2192 /usr/lib/systemd/system/ceph-volume@.service.'
Arguments: ()
Running command: /usr/bin/systemctl enable --runtime ceph-osd@7
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@7.service \u2192 /usr/lib/systemd/system/ceph-osd@.service.
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 996, in emit
    stream.write(msg)
UnicodeEncodeError: 'ascii' codec can't encode character '\u2192' in position 139: ordinal not in range(128)
Call stack:
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 368, in main
    self.activate_all(args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 254, in activate_all
    self.activate(args, osd_id=osd_id, osd_fsid=osd_fsid)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 294, in activate
    activate_bluestore(lvs, args.no_systemd)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 217, in activate_bluestore
    systemctl.enable_osd(osd_id)
  File "/usr/lib/python3.6/site-packages/ceph_volume/systemd/systemctl.py", line 70, in enable_osd
    return enable(osd_unit % id_, runtime=True)
  File "/usr/lib/python3.6/site-packages/ceph_volume/systemd/systemctl.py", line 20, in enable
    process.run(['systemctl', 'enable', '--runtime', unit])
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 137, in run
    log_descriptors(reads, process, terminal_logging)
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 59, in log_descriptors
    log_output(descriptor_name, message, terminal_logging, True)
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 34, in log_output
    logger.info(line)
Message: 'stderr Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@7.service \u2192 /usr/lib/systemd/system/ceph-osd@.service.'
Arguments: ()
Running command: /usr/bin/systemctl start ceph-osd@7
 stderr: Failed to connect to bus: No such file or directory
-->  RuntimeError: command returned non-zero exit status: 1


I guess the systemd part is not correct for containers anyway, and if I run it with `--no-systemd`, it doesn't fail:
[root@osd2803 ~]# /usr/bin/podman run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15.2.8 -e NODE_NAME=osd2803.banette.os -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /etc/ceph:/etc/ceph -v /run/lock/lvm:/run/lock/lvm docker.io/ceph/ceph:v15.2.8 lvm activate --all --no-system
--> Activating OSD ID 0 FSID 697698fd-3fa0-480f-807b-68492bd292bf
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/lockbox.keyring --create-keyring --name client.osd-lockbox.697698fd-3fa0-480f-807b-68492bd292bf --add-key AQAy7Bdg0jQsBhAAj0gcteTEbcpwNNvMGZqTTg==
 stdout: creating /var/lib/ceph/osd/ceph-0/lockbox.keyring
 stdout: added entity client.osd-lockbox.697698fd-3fa0-480f-807b-68492bd292bf auth(key=AQAy7Bdg0jQsBhAAj0gcteTEbcpwNNvMGZqTTg==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/lockbox.keyring
Running command: /usr/bin/ceph --cluster ceph --name client.osd-lockbox.697698fd-3fa0-480f-807b-68492bd292bf --keyring /var/lib/ceph/osd/ceph-0/lockbox.keyring config-key get dm-crypt/osd/697698fd-3fa0-480f-807b-68492bd292bf/luks
Running command: /usr/sbin/cryptsetup --key-file - --allow-discards luksOpen /dev/ceph-4d11990e-bf76-47de-a429-e29381e1ea84/osd-block-697698fd-3fa0-480f-807b-68492bd292bf xyVeIj-F0ci-7nGG-2xXs-nqiI-emWx-Oq14g5
 stderr: Device xyVeIj-F0ci-7nGG-2xXs-nqiI-emWx-Oq14g5 already exists.
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/mapper/xyVeIj-F0ci-7nGG-2xXs-nqiI-emWx-Oq14g5 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/mapper/xyVeIj-F0ci-7nGG-2xXs-nqiI-emWx-Oq14g5 /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
--> ceph-volume lvm activate successful for osd ID: 0
--> Activating OSD ID 3 FSID 5f2a8a34-e509-4397-baa8-4620b6f6a082
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-3/lockbox.keyring --create-keyring --name client.osd-lockbox.5f2a8a34-e509-4397-baa8-4620b6f6a082 --add-key AQDvDRhgVfKiGRAAlbUAOkebVTtZ4x16h+IjKA==
 stdout: creating /var/lib/ceph/osd/ceph-3/lockbox.keyring
added entity client.osd-lockbox.5f2a8a34-e509-4397-baa8-4620b6f6a082 auth(key=AQDvDRhgVfKiGRAAlbUAOkebVTtZ4x16h+IjKA==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/lockbox.keyring
Running command: /usr/bin/ceph --cluster ceph --name client.osd-lockbox.5f2a8a34-e509-4397-baa8-4620b6f6a082 --keyring /var/lib/ceph/osd/ceph-3/lockbox.keyring config-key get dm-crypt/osd/5f2a8a34-e509-4397-baa8-4620b6f6a082/luks
Running command: /usr/sbin/cryptsetup --key-file - --allow-discards luksOpen /dev/ceph-e6a7fa5c-4e54-406b-a0ca-b64a2c93155b/osd-block-5f2a8a34-e509-4397-baa8-4620b6f6a082 v5B6My-2hfA-lr1F-My5r-9AYi-eH1Q-4r5Xtp
 stderr: Device v5B6My-2hfA-lr1F-My5r-9AYi-eH1Q-4r5Xtp already exists.
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/mapper/v5B6My-2hfA-lr1F-My5r-9AYi-eH1Q-4r5Xtp --path /var/lib/ceph/osd/ceph-3 --no-mon-config
Running command: /usr/bin/ln -snf /dev/mapper/v5B6My-2hfA-lr1F-My5r-9AYi-eH1Q-4r5Xtp /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
--> ceph-volume lvm activate successful for osd ID: 3


So the OSDs are correctly scanned, but nothing is created under /var/lib/ceph, no processes are started, and of course no systemd config is generated.

Actions #3

Updated by Sebastian Wagner about 3 years ago

Actions #5

Updated by Kenneth Waegeman about 3 years ago

Hi Sebastian,

Since I would need to do this for 500+ OSDs, doing this manually is not really an option.
I guess https://tracker.ceph.com/issues/49249 could be a solution.

I can see now that the subject of my ticket is wrong; it actually breaks down into two parts:
- The ceph-volume documentation (https://docs.ceph.com/en/latest/ceph-volume/lvm/activate/#activate) notes that activate means:
'This activation process enables a systemd unit that persists the OSD ID and its UUID (also called fsid in Ceph CLI tools), so that at boot time it can understand what OSD is enabled and needs to be mounted.'
-> This is not true for / does not work with cephadm: ceph-volume cannot (yet) create the OSD directories/files such as unit.run for OSDs that should run under cephadm.

- There is not yet a (documented) way for existing OSD disks to be discovered by cephadm/ceph orch after reinstalling a node, as used to be possible by running `ceph-volume activate --all`. The workaround I see for now is running (a runnable sketch follows below):

ceph-volume activate --all
for id in `ls -1 /var/lib/ceph/osd`; do echo cephadm adopt --style legacy --name ${id/ceph-/osd.}; done

This removes the ceph-volume units again and creates the cephadm ones :)
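
For completeness, a runnable version of that loop (a sketch, assuming the legacy daemon directories show up under /var/lib/ceph/osd/; note that the `echo` above only prints the adopt commands instead of executing them):

  # Activate all legacy LVM OSDs found on this node.
  ceph-volume lvm activate --all
  # Adopt each activated OSD into a cephadm-managed container.
  for dir in /var/lib/ceph/osd/ceph-*; do
      id="${dir##*/ceph-}"    # e.g. 7 for /var/lib/ceph/osd/ceph-7
      cephadm adopt --style legacy --name "osd.${id}"
  done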

Thanks!
Kenneth

Actions #6

Updated by Sebastian Wagner about 3 years ago

  • Subject changed from cephadm ceph-volume activate does not work with dmcrypt to "cephadm ceph-volume activate" does not support cephadm
Actions #7

Updated by Sebastian Wagner about 3 years ago

Great! Please verify that the container image used is consistent across the cluster after running the adoption process.
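
One way to check this (just a sketch; it assumes the orchestrator is reachable via cephadm shell) is to list the daemons together with their container images:

  # The image name/id columns should be identical across hosts and daemons once adoption is complete.
  cephadm shell -- ceph orch ps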

Actions #8

Updated by Sebastian Wagner about 3 years ago

  • Has duplicate Feature #49249: cephadm: Automatically create OSDs after reinstalling base os added
Actions #9

Updated by Sebastian Wagner about 3 years ago

  • Tracker changed from Bug to Feature
Actions #10

Updated by Sebastian Wagner about 3 years ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 39639
Actions #11

Updated by Sebastian Wagner about 3 years ago

  • Backport set to pacific,octopus
Actions #12

Updated by Sebastian Wagner about 3 years ago

  • Status changed from Fix Under Review to Pending Backport
Actions #13

Updated by Sage Weil about 3 years ago

pacific backport merged in https://github.com/ceph/ceph/pull/40135

Actions #14

Updated by Sebastian Wagner about 3 years ago

  • Status changed from Pending Backport to Resolved
Actions #15

Updated by Michel Jouvin over 1 year ago

Hi,

I ran into the same problem as Kenneth after reinstalling the OS on a machine running OSDs to upgrade from EL7 to EL8, running Octopus (15.2.16). I didn't find any way other than the one proposed by Kenneth, which worked just fine. A minor detail is that I had to use the command:

ceph-volume lvm activate --all

Also, a trivial point: as I didn't have the ceph-osd RPM installed on the machine (it is not needed with cephadm), I had to reinstall it to do the reactivation (and remove it afterwards).

If another approach is possible with Octopus (the version that requires the EL7 -> EL8 upgrade to move forward, if I'm right), it would be good to have it better documented; I was not able to find anything related to it despite a lot of Googling. Only a Red Hat page mentioned `cephadm osd activate`, but it doesn't seem to work in Octopus.
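
For reference, a sketch of the full sequence I ended up with (the package commands assume an EL8 host with dnf; the adopt loop mirrors Kenneth's workaround above):

  dnf install -y ceph-osd                      # temporarily reinstalled, as noted above
  ceph-volume lvm activate --all
  for dir in /var/lib/ceph/osd/ceph-*; do      # same adopt loop as in Kenneth's workaround
      cephadm adopt --style legacy --name "osd.${dir##*/ceph-}"
  done
  dnf remove -y ceph-osd                       # removed again once the OSDs run in cephadm containers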

Michel
