Bug #9489
closed
It's worth noting that the OSD worked fine in the cluster after the initial deployment; the problems do not arise until the node is rebooted. These OSDs also use dmcrypt, though that should not affect this issue.
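For context, the symptom described (old partitions or cluster metadata reappearing after reboot) is commonly worked around by wiping the disk more aggressively than a plain zap. The sketch below is a hypothetical helper, not part of ceph-disk; `DEV` is a placeholder device, and `DRY_RUN=1` (the default) only prints the commands so the script is safe to run as-is.

```shell
#!/bin/sh
# Hypothetical thorough-wipe helper; DEV is a placeholder device.
# With DRY_RUN=1 (default) commands are echoed instead of executed.
DEV=${DEV:-/dev/sdX}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Destroy both GPT and MBR partition structures.
run sgdisk --zap-all "$DEV"
# Remove filesystem, LVM, and dmcrypt signatures that udev might
# otherwise rediscover and act on after a reboot.
run wipefs --all "$DEV"
# Overwrite the first few MiB to clear any leftover metadata,
# e.g. a stale cluster fsid in an old superblock.
run dd if=/dev/zero of="$DEV" bs=1M count=10
# Ask the kernel to re-read the now-empty partition table.
run partprobe "$DEV"
```

Running the script with `DRY_RUN=0` would actually destroy all data on `DEV`, so double-check the device path first.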
- Status changed from Need More Info to 12
- Assignee set to Alfredo Deza
- Status changed from 12 to Need More Info
A bit more context is needed here: how, and in what way, does it not work as expected? Is it possible to reproduce?
When "zap disk" doesn't clear enough, what would "enough" be? (Or: what isn't it clearing currently?)
I believe the original report was likely made in error and is unrelated to ceph-disk. Loic, you had mentioned you might have seen similar behavior in the past?
Unfortunately I have no further information, other than that I may have seen this in the past as well; I have not observed the behavior as described in recent deployments.
- Status changed from Need More Info to Can't reproduce
- Status changed from Can't reproduce to Rejected
[24.09 19:23] <loicd> bandrus: thanks :-) the error was not resurrection of partitions after --zap-disk ?
[24.09 19:25] <bandrus> I don't believe so; the old cluster uid was seemingly being retained from a file that existed outside the ceph-deploy folder, which probably deserves a bug report of its own.
sorry for the noise...