Bug #6849

closed

Inconsistent action of ceph osd pool set commands (e.g. volume size)

Added by Robert van Leeuwen over 10 years ago. Updated over 10 years ago.

Status: Duplicate
Priority: Normal
Category: ceph cli
Target version: -
% Done: 0%
Source: Community (user)
Severity: 2 - major

Description

Hi,

I am seeing some very weird behaviour with the osd commands.
Running on SL6.4 with Emperor (0.72.1).
Running the same command a couple of times results in different outcomes.
It is not just cosmetic; the value is actually set wrongly in the cluster
(which explains why my cluster was in a degraded state when it shouldn't have been).

[root@han042 ~]# ceph osd pool set volumes2 size 2
set pool 9 size to 2
[root@han042 ~]# ceph osd pool set volumes size 2
set pool 8 size to 8
[root@han042 ~]# ceph osd pool set volumes size 2
set pool 8 size to 2
[root@han042 ~]# ceph osd pool set volumes size 3
set pool 8 size to 8
[root@han042 ~]# ceph osd pool set volumes size 3
set pool 8 size to 8
[root@han042 ~]# ceph osd pool set volumes size 3
set pool 8 size to 3
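
Until the underlying bug is fixed, one workaround is to read the value back with `ceph osd pool get <pool> size` after each set and retry until it sticks. The sketch below is a hypothetical Python wrapper, not part of Ceph: the `run` hook, the retry count, and the `size: N` output parsing are assumptions about the CLI's behaviour.

```python
import subprocess

def set_pool_size(pool, size, run=None, max_retries=5):
    """Set a pool's replica size, then read it back until it sticks.

    `run` is injectable for testing; by default it shells out to the
    `ceph` CLI (hypothetical wrapper -- adjust for your auth/config).
    """
    if run is None:
        def run(args):
            return subprocess.check_output(["ceph"] + args, text=True)
    for _ in range(max_retries):
        run(["osd", "pool", "set", pool, "size", str(size)])
        out = run(["osd", "pool", "get", pool, "size"])
        # `ceph osd pool get <pool> size` prints e.g. "size: 2"
        if out.strip().rsplit(":", 1)[-1].strip() == str(size):
            return True
    return False
```

The injectable `run` also lets you simulate the flaky behaviour shown in the transcript (a set that is silently dropped on the first attempt) without a live cluster.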


Related issues: 1 (0 open, 1 closed)

Is duplicate of Ceph - Bug #6796: ceph mons interpretting pg splits very wrong (Resolved, Joao Eduardo Luis, 11/18/2013)
