Bug #40711
Progress Module: Possible cosmetic issue when marking OSDs out
Description
While testing a script, I noticed that after marking OSDs out and back in multiple times, `ceph -s` keeps showing "Rebalancing after osd.x marked out" messages that stay there even after the PGs have recovered, so it looks like a visual bug. I first noticed it on a functional cluster with some images deployed, then reproduced it on a test cluster with just one pool and image.
ceph version 14.2.1-468-g994fd9e0cc (994fd9e0ccc50c2f3a55a3b7a3d4e0ba74786d50) nautilus (stable)
ceph -s
  cluster:
    id:     0db86f44-957c-4b91-b1bc-97a0a0534180
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum target192168000049 (age 3m)
    mgr: target192168000049(active, since 2m)
    osd: 4 osds: 4 up (since 2m), 4 in (since 3m)

  data:
    pools:   1 pools, 133 pgs
    objects: 1 objects, 388 B
    usage:   4.0 GiB used, 32 GiB / 36 GiB avail
    pgs:     133 active+clean

  io:
    client: 1.7 KiB/s rd, 1 op/s rd, 0 op/s wr
ceph osd out 0 1 2
ceph osd in 0 1 2
ceph osd out 0 1 2
ceph osd in 0 1 2
ceph osd out 0 1 2
ceph osd out 0 1 2
ceph osd in 0 1 2
ceph osd out 0 1 2
ceph osd in 0 1 2
ceph osd out 0 1 2
sleep 600
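The out/in toggling above can be sketched as a loop. This is a hypothetical reproducer, not part of the original report, and it assumes a disposable test cluster like the one described (do not run it against production):

```shell
# Toggle osd.0-2 out and back in several times, leave them out once more,
# then wait and check whether stale progress events remain in `ceph -s`.
if command -v ceph >/dev/null 2>&1; then
    for i in 1 2 3 4 5; do
        ceph osd out 0 1 2
        ceph osd in 0 1 2
    done
    ceph osd out 0 1 2   # leave them out one final time, as in the report
    sleep 600            # give the cluster time to settle
    ceph -s              # stale "Rebalancing after osd.x marked out" lines may remain
    ran=1
else
    echo "ceph not found; skipping reproducer"
    ran=0
fi
```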
ceph -s
  cluster:
    id:     0db86f44-957c-4b91-b1bc-97a0a0534180
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum target192168000049 (age 27m)
    mgr: target192168000049(active, since 27m)
    osd: 4 osds: 4 up (since 27m), 4 in (since 11m)

  data:
    pools:   1 pools, 133 pgs
    objects: 1 objects, 388 B
    usage:   4.0 GiB used, 32 GiB / 36 GiB avail
    pgs:     133 active+clean

  io:
    client: 2.5 KiB/s rd, 2 op/s rd, 0 op/s wr

  progress:
    Rebalancing after osd.2 marked out
      [..............................]
    Rebalancing after osd.2 marked out
      [..............................]
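The stuck entries can also be spotted programmatically from the machine-readable status. A minimal sketch, assuming the `progress_events` field and its `message` key as they appear in `ceph -s -f json` output on Nautilus (the field names are an assumption here, not confirmed by the report):

```python
import json

def stale_rebalance_events(status_json, all_pgs_clean=True):
    """Return progress messages that look stale: 'marked out' rebalancing
    events still listed even though every PG is active+clean."""
    events = json.loads(status_json).get("progress_events", {})
    if not all_pgs_clean:
        return []  # rebalancing may legitimately still be in progress
    return [ev["message"] for ev in events.values()
            if "marked out" in ev.get("message", "")]

# Sample shaped like the stuck output in the report above.
sample = json.dumps({
    "progress_events": {
        "e1": {"message": "Rebalancing after osd.2 marked out", "progress": 0.0},
        "e2": {"message": "Rebalancing after osd.2 marked out", "progress": 0.0},
    }
})
print(stale_rebalance_events(sample))
# → ['Rebalancing after osd.2 marked out', 'Rebalancing after osd.2 marked out']
```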
Updated by Greg Farnum almost 5 years ago
- Project changed from Ceph to mgr
- Subject changed from Possible cosmetic issue when marking OSDs out to Progress Module: Possible cosmetic issue when marking OSDs out