Bug #9489
Closed: --zap-disk does not clear enough
Description
Sometimes the partitions are resurrected.
Updated by Loïc Dachary over 9 years ago
It's worth noting that the OSD worked fine in the cluster after initial deployment; it's not until the node is rebooted that the problems arise. These are also using dmcrypt, though that should not affect this issue.
Updated by Ian Colle over 9 years ago
- Status changed from Need More Info to 12
- Assignee set to Alfredo Deza
Updated by Alfredo Deza over 9 years ago
- Status changed from 12 to Need More Info
A bit more context is needed here: how, or in what way, does it not work as expected? Is it possible to reproduce?
When zap disk doesn't clear enough, what would "enough" be? (Or what isn't being cleared currently?)
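One plausible mechanism for partitions "resurrecting" is that GPT keeps a backup header and partition-entry array at the end of the disk, so a wipe that only clears the start of the device leaves metadata from which tools can restore the table (a full zap, e.g. `sgdisk --zap-all`, destroys both copies). The sketch below is a simplified simulation of that effect, not ceph-disk code; the helper names and the fake-disk layout are hypothetical, and the GPT structures are reduced to just the `EFI PART` signature:

```python
# Simplified simulation (hypothetical helpers, not ceph-disk code):
# GPT stores a primary header at LBA 1 and a backup header at the last
# LBA of the disk, so zeroing only the start of a device leaves the
# backup copy behind, from which the partition table can be restored.

SECTOR = 512

def make_fake_disk(sectors=2048):
    """Build an in-memory 'disk' with a GPT-like signature at LBA 1
    (primary header) and at the last LBA (backup header)."""
    disk = bytearray(sectors * SECTOR)
    header = b"EFI PART" + b"\x00" * (SECTOR - 8)  # reduced GPT signature
    disk[SECTOR:2 * SECTOR] = header               # primary header, LBA 1
    disk[-SECTOR:] = header                        # backup header, last LBA
    return disk

def naive_zap(disk, sectors=34):
    """Zero only the first few sectors -- what a naive 'zap' might do."""
    disk[: sectors * SECTOR] = bytes(sectors * SECTOR)

def has_gpt_signature(region):
    """Check whether a 512-byte region starts with the GPT signature."""
    return region[:8] == b"EFI PART"

disk = make_fake_disk()
naive_zap(disk)
print(has_gpt_signature(disk[SECTOR:2 * SECTOR]))  # primary header: False
print(has_gpt_signature(disk[-SECTOR:]))           # backup header:  True
```

Under this model, "enough" would mean clearing both ends of the device (and any filesystem or dmcrypt signatures in between), not just the leading sectors.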
Updated by Brian Andrus over 9 years ago
I believe the original report was likely caused by an error unrelated to ceph-disk. Loic, you had mentioned you might have seen similar behavior in the past?
Unfortunately I have no further information, other than that I may have seen this in the past as well; I have not observed the behavior as described in recent deployments.
Updated by Alfredo Deza over 9 years ago
- Status changed from Need More Info to Can't reproduce
Updated by Loïc Dachary over 9 years ago
- Status changed from Can't reproduce to Rejected
[24.09 19:23] <loicd> bandrus: thanks :-) the error was not resurrection of partitions after --zap-disk ?
[24.09 19:25] <bandrus> I don't believe so, the old cluster uid was seemingly being retained from a file that existed outside the ceph-deploy folder, which is probably suitable for a bug in itself.
Sorry for the noise...