Bug #22704
Status: Closed
ceph-volume still creates osd ids and auth when creating osd fails
Description
I have a broken disk, which causes ceph-volume to fail with an error.
Each time the command runs, though, it generates a new osd id and auth entry:
ceph-volume lvm create --bluestore --data /dev/sdg
stderr: Device /dev/sdg not found (or ignored by filtering).
--> RuntimeError: command returned non-zero exit status: 5
ceph osd tree
.....
ID  WEIGHT  NAME    STATUS  REWEIGHT  PRI-AFF
 4  0       osd.4   down    0         1.00000
 5  0       osd.5   down    0         1.00000
 6  0       osd.6   down    0         1.00000
32  0       osd.32  down    0         1.00000
45  0       osd.45  down    0         1.00000
auth list
...
osd.45
key: AQBV311axK5RLhAASmjRkIm4+TS5HzaYfIZGpQ==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
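The leftover ids and keys have to be cleaned up by hand. A minimal, hypothetical sketch of that cleanup (the `leftover_osds` helper and the sample text are illustrations, not part of the report) that parses `ceph osd tree` output for down, zero-weight OSDs and prints the corresponding `ceph osd purge` commands (available since Luminous; purge removes the CRUSH entry, auth key, and frees the id):

```python
# Hypothetical helper: scan plain `ceph osd tree` output and collect the ids
# of OSDs that are down with zero CRUSH weight -- the leftovers created by
# the failed `ceph-volume lvm create` runs described above.
def leftover_osds(tree_output):
    ids = []
    for line in tree_output.splitlines():
        fields = line.split()
        # Expected columns: ID  WEIGHT  NAME  STATUS  REWEIGHT  PRI-AFF
        if len(fields) >= 4 and fields[2].startswith("osd.") and fields[3] == "down":
            if float(fields[1]) == 0.0:
                ids.append(int(fields[0]))
    return ids

# Sample output modeled on the `ceph osd tree` listing in this report.
sample = """\
 4 0 osd.4  down 0 1.00000
 5 0 osd.5  down 0 1.00000
32 0 osd.32 down 0 1.00000
"""

for osd_id in leftover_osds(sample):
    # `ceph osd purge <id>` deletes the auth entry and CRUSH/osdmap entry
    # in one step, so the id can be reused by the next successful create.
    print(f"ceph osd purge {osd_id} --yes-i-really-mean-it")
```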
Updated by Alfredo Deza over 6 years ago
- Status changed from New to Duplicate
This was fixed by #22281, which is not yet released. 12.2.2 is affected; 12.2.3 shouldn't be.