Bug #6849
Inconsistent action of ceph osd pool set commands (e.g. volume size)
Status: Closed
Description
Hi,
I am seeing some very weird behaviour with the osd commands.
Running on SL6.4 with Emperor (0.72.1).
Running the same command a couple of times results in different outcomes.
It is not just cosmetic; the value is actually set incorrectly in the cluster
(which explains why my cluster was in a degraded state when it shouldn't have been).
[root@han042 ~]# ceph osd pool set volumes2 size 2
set pool 9 size to 2
[root@han042 ~]# ceph osd pool set volumes size 2
set pool 8 size to 8
[root@han042 ~]# ceph osd pool set volumes size 2
set pool 8 size to 2
[root@han042 ~]# ceph osd pool set volumes size 3
set pool 8 size to 8
[root@han042 ~]# ceph osd pool set volumes size 3
set pool 8 size to 8
[root@han042 ~]# ceph osd pool set volumes size 3
set pool 8 size to 3
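For anyone reproducing this, it helps to read the value back instead of trusting the echoed confirmation. A minimal check, assuming the pool names from the transcript above:

ceph osd pool get volumes size     # report the replication size the monitors have actually stored
ceph osd dump | grep "^pool"       # cross-check the size recorded for every pool in the osdmap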
Updated by Greg Farnum over 10 years ago
Are you sure you're running all emperor monitors? (None of them are on dumpling?)
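One way to check is to compare, on each monitor host, the installed binaries against what the running daemon reports. A quick sketch, assuming the default admin socket path and a hypothetical monitor id of "han042" (substitute your own):

ceph --version                                              # version of the installed ceph binaries on this host
ceph --admin-daemon /var/run/ceph/ceph-mon.han042.asok version   # version the running monitor daemon reports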
Updated by Robert van Leeuwen over 10 years ago
Well spotted; somehow one of the MON nodes did not install Emperor.
Strange, since it was reinstalled with Puppet...
Thanks,
Robert
Updated by Greg Farnum over 10 years ago
- Status changed from New to Duplicate
:)
If it persists after upgrading that node, let us know, but I'm pretty sure it's #6796, which we'll have a point release for shortly.