Bug #48164
Orchestrator: failed deployments leave orphaned auth entries
Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Description
During upgrade tests to cephadm I had trouble deploying mds nodes. In the end I managed to deploy containerized mds daemons, but found many orphaned auth entries:
[...]
mds.cephfs.host6.xabpsr
	key: AQD0m6df3fxUHRAA8Q5+FnK0bMk8SpFxgmH0nw==
	caps: [mds] allow
	caps: [mon] profile mds
	caps: [osd] allow rw tag cephfs *=*
mds.cephfs.host6.xiotap
	key: AQA1uqhfMhVbDhAAJIlV0EBhAk3bd+5jSEcZBg==
	caps: [mds] allow
	caps: [mon] profile mds
	caps: [osd] allow rw tag cephfs *=*
mds.cephfs.host6.xrqumh
	key: AQAb1qdfCulrABAA1cwH+IRwLMQmA36qQ3aFHQ==
	caps: [mds] allow
	caps: [mon] profile mds
	caps: [osd] allow rw tag cephfs *=*
[...]

master:~ # ceph auth ls | grep -c "mds\.cephfs\.host6"
installed auth entries: 128
I did not try to deploy it 128 times.
Several services needed to be redeployed, but I could only find orphans for the mds.host6 service. For some reason there are only two entries for the other MDS (mds.host5), although I tried to deploy the service on both hosts. One of the host5 entries is the currently active MDS daemon.
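One way to enumerate such orphans is to compare the entity names from `ceph auth ls` against the daemons the orchestrator still knows about (e.g. from `ceph orch ps`). The following is a minimal sketch, not part of cephadm; the helper name and the sample entity names are illustrative:

```python
def find_orphaned_auth_entries(auth_ls_output, live_daemons, prefix="mds."):
    """Return auth entity names with the given prefix that no longer
    correspond to a live daemon.

    auth_ls_output: plain-text output of `ceph auth ls`, where entity
    names are flush-left and key/caps lines are indented.
    live_daemons: set of daemon names currently reported by the
    orchestrator (e.g. parsed from `ceph orch ps`).
    """
    # Entity-name lines start at column 0; key/caps lines are indented,
    # so after strip() they begin with "key:" or "caps:" instead.
    entities = [
        line.strip() for line in auth_ls_output.splitlines()
        if line.startswith(prefix)
    ]
    return [e for e in entities if e not in live_daemons]


# Illustrative sample: one orphaned host6 entry, one live host5 entry.
sample = (
    "mds.cephfs.host6.xabpsr\n"
    "\tkey: AQD0...\n"
    "\tcaps: [mon] profile mds\n"
    "mds.cephfs.host5.aaaaaa\n"
    "\tkey: AQA1...\n"
)
live = {"mds.cephfs.host5.aaaaaa"}
print(find_orphaned_auth_entries(sample, live))  # -> ['mds.cephfs.host6.xabpsr']
```

The orphans found this way could then be removed one at a time with `ceph auth rm <entity>`, after double-checking that none of them belongs to a running daemon.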
master:~ # ceph versions
{
    "mon": {
        "ceph version 15.2.5-514-g7a2bcdb091 (7a2bcdb091b497f0269c17e39475981053f93903) octopus (stable)": 3
    },
    "mgr": {
        "ceph version 15.2.5-514-g7a2bcdb091 (7a2bcdb091b497f0269c17e39475981053f93903) octopus (stable)": 3
    },
    "osd": {
        "ceph version 15.2.5-514-g7a2bcdb091 (7a2bcdb091b497f0269c17e39475981053f93903) octopus (stable)": 16
    },
    "mds": {
        "ceph version 15.2.5-514-g7a2bcdb091 (7a2bcdb091b497f0269c17e39475981053f93903) octopus (stable)": 2
    },
    "overall": {
        "ceph version 15.2.5-514-g7a2bcdb091 (7a2bcdb091b497f0269c17e39475981053f93903) octopus (stable)": 24
    }
}
Updated by Sebastian Wagner over 3 years ago
- Project changed from Ceph to Orchestrator
Updated by Sebastian Wagner over 3 years ago
- Related to Bug #44699: cephadm: removing services leaves configs behind added
Updated by Sebastian Wagner about 3 years ago
- Status changed from New to Fix Under Review
- Pull request ID set to 39266
Updated by Sebastian Wagner about 3 years ago
- Status changed from Fix Under Review to Resolved
Updated by Sebastian Wagner about 3 years ago
- Related to Bug #49872: cephadm: Don't remove the daemon keyring, if redeploy fails added