Bug #44361: ok-to-stop errors during ceph upgrade

Added by Chris Danielewski about 4 years ago. Updated about 4 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version:
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Error EBUSY: 1 PGs are already too degraded, would become too degraded or might become unavailable
retrying after 1m0s, last error: failed to check if we can stop the deployment rook-ceph-osd-27: failed to check if rook-ceph-osd-27 was ok to stop.
deployment rook-ceph-osd-27 cannot be stopped. exit status 16
ceph health detail
HEALTH_OK
ceph pg stat
708 pgs: 708 active+clean; 495 GiB data, 993 GiB used, 2.1 TiB / 3.1 TiB avail; 625 KiB/s wr, 0 op/s

The biggest pool is at 35% usage.

ceph osd ok-to-stop 27
Error EBUSY: 1 PGs are already too degraded, would become too degraded or might become unavailable
command terminated with exit code 16

I'm having issues upgrading Ceph from 14.2.4 to 14.2.7: some OSDs report "EBUSY: 1 PGs are already too degraded" even though all the health checks come back healthy.
This feels like a bug to me. Are ok-to-stop checks predictive?
I've enabled PG autoscaling in case it was related to that; the autoscaling has finished, but the OSD errors during the upgrade are unchanged.
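
For reference, this is roughly what I've been running to dig into it (a minimal diagnostic sketch; osd.27 is just the OSD from the log above):

# Check each pool's size/min_size; ok-to-stop fails when stopping the
# OSD would leave a PG with fewer than min_size replicas.
ceph osd pool ls detail

# List the PGs that have a replica on the OSD being restarted (osd.27 here).
ceph pg ls-by-osd 27

# Re-run the same check the operator runs before restarting the OSD.
ceph osd ok-to-stop 27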

Thanks!
