Bug #44361
ok-to-stop errors during ceph upgrade
Status: Closed
Description
Error EBUSY: 1 PGs are already too degraded, would become too degraded or might become unavailable
retrying after 1m0s, last error: failed to check if we can stop the deployment rook-ceph-osd-27: failed to check if rook-ceph-osd-27 was ok to stop.
deployment rook-ceph-osd-27 cannot be stopped. exit status 16
ceph health detail
HEALTH_OK
ceph pg stat
708 pgs: 708 active+clean; 495 GiB data, 993 GiB used, 2.1 TiB / 3.1 TiB avail; 625 KiB/s wr, 0 op/s
Biggest pool size: 35%.
ceph osd ok-to-stop 27
Error EBUSY: 1 PGs are already too degraded, would become too degraded or might become unavailable
command terminated with exit code 16
We are having issues upgrading Ceph from 14.2.4 to 14.2.7: some OSDs report "EBUSY: 1 PGs are already too degraded" even though all of the health checks come back healthy.
This feels like a bug to me. Are the ok-to-stop checks predictive?
I've enabled PG autoscaling in case it was related to that; the autoscaling process has finished, but there is no change in the OSD errors during the upgrade.
Thanks!
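For what it's worth, the ok-to-stop check is predictive: the mgr simulates stopping the OSD and refuses if any PG would drop below its pool's min_size or become unavailable, regardless of current health. A simplified sketch of that idea (this is an illustration, not the actual mgr implementation, and the PG data is invented):

```python
# Simplified model of the "ok-to-stop" heuristic: an OSD is safe to stop
# only if every PG it serves would still have at least min_size active
# replicas afterwards. Not the real mgr code; names are illustrative.

def ok_to_stop(osd_id, pgs):
    """pgs: list of dicts with each PG's acting OSD set and pool min_size."""
    blocked = []
    for pg in pgs:
        if osd_id not in pg["acting"]:
            continue  # stopping this OSD does not touch the PG
        remaining = len(pg["acting"]) - 1
        if remaining < pg["min_size"]:
            blocked.append(pg["pgid"])
    return (len(blocked) == 0, blocked)

# Invented example: a healthy 3-replica PG tolerates losing osd.27,
# but a PG from a size-1/min_size-1 pool would become unavailable.
pgs = [
    {"pgid": "1.0", "acting": [27, 3, 9], "min_size": 2},
    {"pgid": "2.0", "acting": [27], "min_size": 1},
]
ok, blocked = ok_to_stop(27, pgs)
print(ok, blocked)  # False ['2.0'] -- PG 2.0 would drop to 0 replicas
```

This is why the check can return EBUSY while `ceph health` reports HEALTH_OK: the cluster is fine now, but would not be after the OSD stops.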
Updated by Greg Farnum about 4 years ago
- Project changed from Ceph to mgr
- Category deleted (OSD)
Updated by Chris Danielewski about 4 years ago
Please feel free to close this. It looks like, for a new deployment, we had one pool with min_size 1 instead of 2.
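For anyone hitting the same symptom, a quick way to spot the offending pool is to scan the output of `ceph osd pool ls detail -f json`. A hedged sketch (the field names are assumed from that JSON output, and the sample payload here is invented):

```python
import json

# Sample of what `ceph osd pool ls detail -f json` returns (invented values;
# the real output has many more fields per pool).
sample = json.loads("""
[
  {"pool_name": "rbd", "size": 3, "min_size": 2},
  {"pool_name": "scratch", "size": 1, "min_size": 1}
]
""")

# Flag pools whose min_size is below 2: stopping a single OSD can make
# their PGs unavailable, so `ceph osd ok-to-stop` returns EBUSY even
# when overall cluster health is HEALTH_OK.
risky = [p["pool_name"] for p in sample if p["min_size"] < 2]
print(risky)  # ['scratch']
```

The fix for a pool that is meant to be replicated would then be `ceph osd pool set <pool> min_size 2` (with size of at least 2).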