Bug #6849
Inconsistent action of ceph osd pool set commands (e.g. volume size)
Status: Closed
Description
Hi,
I am seeing some very weird behaviour with the osd commands.
Running on SL6.4 with Emperor (0.72.1).
Running the same command a couple of times results in different outcomes.
It is not just cosmetic; the value is actually set wrongly in the cluster
(which explains why my cluster was in a degraded state when it shouldn't have been):
[root@han042 ~]# ceph osd pool set volumes2 size 2
set pool 9 size to 2
[root@han042 ~]# ceph osd pool set volumes size 2
set pool 8 size to 8
[root@han042 ~]# ceph osd pool set volumes size 2
set pool 8 size to 2
[root@han042 ~]# ceph osd pool set volumes size 3
set pool 8 size to 8
[root@han042 ~]# ceph osd pool set volumes size 3
set pool 8 size to 8
[root@han042 ~]# ceph osd pool set volumes size 3
set pool 8 size to 3
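Since the confirmation message above is evidently unreliable, one way to see what actually got set is to read the value back explicitly (a hedged sketch, assuming a reachable cluster and the standard ceph CLI; the pool name volumes is taken from the transcript above):

```shell
# Set the replica count, then read it back rather than trusting the
# "set pool N size to X" message, which is sometimes stale here.
ceph osd pool set volumes size 3
ceph osd pool get volumes size
```

If the value returned by `get` disagrees with the `set` confirmation, that would confirm the message is reporting a stale map rather than the committed value.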