Tasks #52575
Closed
After reinstalling a Ceph cluster, the OSDs cannot be recovered
Description
Hi guys,
I reinstalled a Ceph cluster (single node) that still has the OSDs from the previous cluster.
To recover these OSDs into the new cluster, I tried:
1. Activate all OSDs: ceph-volume lvm activate --all --no-systemd
2. Adopt them into the containerized (Docker) style: cephadm adopt --style legacy --skip-pull --name osd.0 (this creates the systemd unit ceph-$fsid@osd.0)
3. ceph osd create
4. ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/$fsid/osd.0/keyring
5. systemctl start ceph-$fsid@osd.0
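For readability, the manual steps above can be collected into one shell sketch. This is not a recommended procedure, just the reporter's attempt in runnable form; $FSID and osd.0 are placeholders for the actual cluster fsid and OSD id, and the adopted unit name ceph-$FSID@osd.0 is assumed from cephadm's legacy-adoption naming:

```shell
# Placeholder values: substitute your cluster's fsid and OSD id.
FSID=$(ceph fsid)

# 1. Activate all LVM-based OSDs found on disk, without creating
#    legacy systemd units.
ceph-volume lvm activate --all --no-systemd

# 2. Adopt the legacy OSD into cephadm's containerized style
#    (--skip-pull avoids pulling the container image again).
cephadm adopt --style legacy --skip-pull --name osd.0

# 3-4. Recreate the OSD entry and its auth key in the new cluster.
ceph osd create
ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
    -i "/var/lib/ceph/$FSID/osd.0/keyring"

# 5. Start the adopted systemd unit.
systemctl start "ceph-$FSID@osd.0"
```

These commands require a live cluster and root access, which is why the simpler documented path suggested below in the thread is preferable.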
The service is running (see osd-status.png),
but the OSD stays in the down state (see down.png).
Here is the "ceph -w" output (see cephw.png).
System: Debian Bullseye
Any help would be really appreciated :)
Files
Updated by Sebastian Wagner over 2 years ago
Please use `ceph cephadm osd activate` in order to re-activate cephadm OSDs. https://docs.ceph.com/en/latest/cephadm/osd/#activate-existing-osds
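Per the linked documentation, the command takes the hostname whose existing OSDs should be scanned and re-activated. A minimal usage sketch, where "myhost" is a placeholder hostname:

```shell
# Re-activate existing OSDs on a reinstalled host; "myhost" is a
# placeholder for the actual host name as known to the orchestrator.
ceph cephadm osd activate myhost
```

This replaces the manual activate/adopt/auth steps: cephadm scans the host for existing OSDs and deploys the corresponding daemons itself.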
Updated by Sebastian Wagner over 2 years ago
- Tracker changed from Support to Tasks
- Project changed from Ceph to ceph-volume
- Category deleted (OSD)
- Status changed from New to Fix Under Review
- Assignee set to Sebastian Wagner
- Pull request ID set to 43142
Updated by Yuyuko Saigyouji over 2 years ago
Sebastian Wagner wrote:
Please use `ceph cephadm osd activate` in order to re-activate cephadm OSDs. https://docs.ceph.com/en/latest/cephadm/osd/#activate-existing-osds
Thanks for the help, that works well! Sorry for the disturbance.
Updated by Sebastian Wagner over 2 years ago
- Status changed from Fix Under Review to Resolved