Tasks #52575

After reinstalling a Ceph cluster, the old OSDs cannot be recovered

Added by Yuyuko Saigyouji about 1 month ago. Updated about 1 month ago.

Status:
Fix Under Review
Priority:
Normal
Target version:
-
% Done:
0%

Tags:
osd, volume
Reviewed:
Affected Versions:
Pull request ID:
43142

Description

Hi guys,

I reinstalled a Ceph cluster (single node), reusing the old OSDs from the previous cluster.
To recover these OSDs into the new cluster, I tried the following steps (a consolidated sketch follows the list):

1. Activate all OSDs (ceph-volume lvm activate --all --no-systemd)
2. Adopt them into the container style (cephadm adopt --style legacy --skip-pull -n osd.0)
3. ceph osd create
4. ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/$fsid/osd.0/keyring
5. systemctl start
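
For reference, here is the whole sequence as a shell sketch. The OSD id (0) and the keyring path are taken from the report above; the systemd unit name in step 5 is not given in the report, so the cephadm-style unit shown below is an assumption:

```
# 1. Activate all previously prepared LVM OSDs without starting systemd units
ceph-volume lvm activate --all --no-systemd

# 2. Adopt the legacy OSD into cephadm's containerized layout
cephadm adopt --style legacy --skip-pull -n osd.0

# 3. Allocate an OSD id in the cluster map
ceph osd create

# 4. Register the OSD's key with the monitors
ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
  -i /var/lib/ceph/$fsid/osd.0/keyring

# 5. Start the OSD (assumption: cephadm-managed units are named
#    ceph-$fsid@osd.<id>.service after adoption)
systemctl start ceph-$fsid@osd.0.service
```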

The service is running (see osd-status.png), but the OSD remains in the down state (down.png).
Here is the `ceph -w` output (cephw.png).

System: Debian Bullseye

Any help would be really appreciated :)

osd-status.png (84.1 KB) Yuyuko Saigyouji, 09/11/2021 01:47 PM

down.png (55 KB) Yuyuko Saigyouji, 09/11/2021 01:48 PM

cephw.png (34.2 KB) Yuyuko Saigyouji, 09/11/2021 01:49 PM

History

#1 Updated by Sebastian Wagner about 1 month ago

Please use `ceph cephadm osd activate` in order to re-activate cephadm OSDs: https://docs.ceph.com/en/latest/cephadm/osd/#activate-existing-osds
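
The command takes the host to scan as its argument; on a single-node cluster this is the node's own hostname. A minimal usage sketch (the hostname below is a placeholder):

```
# Scan the host for existing OSDs and re-activate them under cephadm
ceph cephadm osd activate myhost
```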

#2 Updated by Sebastian Wagner about 1 month ago

  • Tracker changed from Support to Tasks
  • Project changed from Ceph to ceph-volume
  • Category deleted (OSD)
  • Status changed from New to Fix Under Review
  • Assignee set to Sebastian Wagner
  • Pull request ID set to 43142

#3 Updated by Yuyuko Saigyouji about 1 month ago

Sebastian Wagner wrote:

Please use `ceph cephadm osd activate` in order to re-activate cephadm OSDs: https://docs.ceph.com/en/latest/cephadm/osd/#activate-existing-osds

Thanks for the help! That works well! Sorry for the trouble.

#4 Updated by Loïc Dachary about 1 month ago

  • Target version deleted (v16.2.6)
