Tasks #52575
After reinstalling a Ceph cluster, OSDs cannot be recovered

Added by Yuyuko Saigyouji over 2 years ago. Updated over 2 years ago.

Status: Resolved
Priority: Normal
Target version: -
% Done: 0%
Tags: osd, volume
Reviewed:
Affected Versions:
Pull request ID:

Description

Hi guys,

I reinstalled a Ceph cluster (single node) with the old OSDs from the previous cluster.
I am trying to recover these OSDs into the new cluster, so I did the following (a consolidated sketch appears after the list):

1. Activate all OSDs (ceph-volume lvm activate --all --no-systemd)
2. Adopt each one into the Docker style (cephadm adopt --style legacy --skip-pull -n osd.0)
3. ceph osd create
4. ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/$fsid/osd.0/keyring
5. systemctl start ceph
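
For reference, here is a minimal consolidated sketch of the sequence above as shell commands. It assumes the new cluster's fsid is in $fsid, the OSD id is 0, and jq is installed; the use of "ceph osd new" (in place of the bare "ceph osd create" in step 3) and the fsid-scoped systemd unit name are assumptions based on ceph-volume/cephadm conventions, not taken verbatim from the report:

    # Activate the pre-existing LVM OSDs without letting ceph-volume create
    # systemd units (cephadm manages the units itself after adoption).
    ceph-volume lvm activate --all --no-systemd

    # Adopt the legacy OSD into the containerized (cephadm) deployment.
    cephadm adopt --style legacy --skip-pull --name osd.0

    # Re-register the OSD with the cluster. "ceph osd new" takes the OSD's
    # own fsid (as printed by ceph-volume) and reuses the existing id 0;
    # this replaces the bare "ceph osd create" and is an assumption here.
    OSD_UUID=$(ceph-volume lvm list --format json | jq -r '."0"[0].tags."ceph.osd_fsid"')
    ceph osd new "$OSD_UUID" 0

    # Restore the OSD's cephx key from the on-disk keyring.
    ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/$fsid/osd.0/keyring

    # Start the adopted, fsid-scoped unit rather than a bare "ceph" unit.
    systemctl start ceph-$fsid@osd.0.service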

The service is running (see osd-status.png),
but the OSD stays in the "down" state (down.png).
Here is the "ceph -w" output (cephw.png).
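
In case it helps with diagnosis, these are standard commands for checking why an OSD stays down (the unit name assumes the cephadm naming used in the sketch above):

    # Confirm how the cluster currently sees the OSD.
    ceph osd tree
    ceph health detail

    # Check the adopted OSD daemon's log for why it never reports in.
    journalctl -u ceph-$fsid@osd.0.service --no-pager -n 100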

System: Debian Bullseye

Any help would be really appreciated :)


Files

osd-status.png (84.1 KB), Yuyuko Saigyouji, 09/11/2021 01:47 PM
down.png (55 KB), Yuyuko Saigyouji, 09/11/2021 01:48 PM
cephw.png (34.2 KB), Yuyuko Saigyouji, 09/11/2021 01:49 PM