Bug #12018

rbd and pool quota do not go well together

Added by chuanhong wang almost 5 years ago. Updated almost 3 years ago.

Status:
Resolved
Priority:
High
Assignee:
Xinxin Shu
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
hammer
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

The pool "rbd" is already full, but I can still flatten a clone successfully. After the flatten, "rbd" has more objects, but the "USED" space does not change, which is weird.

[root@c8 ~]# ceph -s
cluster 927e8bfb-a47a-4538-aaab-b74a8c6163b9
health HEALTH_WARN
pool 'rbd' is full
monmap e1: 1 mons at {c7=10.118.202.97:6789/0}
election epoch 1, quorum 0 c7
osdmap e46: 2 osds: 2 up, 2 in
pgmap v605: 128 pgs, 2 pools, 2048 MB data, 518 objects
4180 MB used, 36759 MB / 40940 MB avail
128 active+clean
[root@c8 ~]# rbd list
im1
im1_clone1
[root@c8 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
40940M 36759M 4180M 10.21
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 2048M 5.00 18379M 518
volums 1 0 0 18379M 0
[root@c8 ~]# rbd flatten im1_clone1
Image flatten: 100% complete...done.
[root@c8 ~]# ceph -s
cluster 927e8bfb-a47a-4538-aaab-b74a8c6163b9
health HEALTH_WARN
pool 'rbd' is full
monmap e1: 1 mons at {c7=10.118.202.97:6789/0}
election epoch 1, quorum 0 c7
osdmap e46: 2 osds: 2 up, 2 in
pgmap v625: 128 pgs, 2 pools, 2048 MB data, 3078 objects
4216 MB used, 36723 MB / 40940 MB avail
128 active+clean
client io 614 B/s rd, 0 B/s wr, 145 op/s
[root@c8 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
40940M 36723M 4216M 10.30
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 2048M 5.00 18360M 3078
volums 1 0 0 18360M 0
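
A minimal sequence that should reproduce this state (the image size, snapshot name, and 2G quota below are assumptions based on the output above, not taken from the original setup):

# create a 2G format-2 image, snapshot it, and clone the snapshot
rbd create im1 --size 2048 --image-format 2
rbd snap create im1@snap1
rbd snap protect im1@snap1
rbd clone im1@snap1 im1_clone1
# cap the rbd pool at 2048MB (2147483648 bytes), then fill im1 to hit the quota
ceph osd pool set-quota rbd max_bytes 2147483648
rbd bench-write im1 --io-total 2147483648
# the pool is now flagged full, yet the flatten below still reports success
rbd flatten im1_clone1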


Related issues

Related to rbd - Bug #12069: ENOSPC hidden by cache not detected by callers of flatten (Resolved, 06/18/2015)
Copied to rbd - Backport #14824: hammer: rbd and pool quota do not go well together (Rejected)

Associated revisions

Revision 67de12bf (diff)
Added by xinxin shu over 4 years ago

Fixes : #12018
osd/OSD.cc : drop write if pool is full

Signed-off-by: xinxin shu <>

Revision dbcf2e40 (diff)
Added by xinxin shu over 4 years ago

Fixes : #12018

resend writes after pool loses full flag

Signed-off-by: xinxin shu <>

History

#1 Updated by Nathan Cutler almost 5 years ago

  • Project changed from Ceph to rbd
  • Affected Versions deleted (v0.94)

#2 Updated by Xinxin Shu almost 5 years ago

Can you run "ceph osd pool get-quota rbd"?

#3 Updated by chuanhong wang almost 5 years ago

The quota of rbd is 2G:
[root@c8 ~]# ceph osd pool get-quota rbd
quotas for pool 'rbd':
max objects: N/A
max bytes : 2048MB

#4 Updated by Josh Durgin almost 5 years ago

  • Priority changed from Normal to High

#5 Updated by Josh Durgin almost 5 years ago

I think this is a general issue with pool quota handling. It should be treated the same as cluster full handling, but currently is not.

The osd replies with ENOSPC for writes with pool quotas, but rbd seems to be masking this error in the flatten call.
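
A quick way to see both behaviors side by side (a rough sketch; the object and file names are made up for illustration):

# a plain write to the full pool surfaces the OSD's ENOSPC to the caller
rados -p rbd put quota-test /etc/hosts
echo $?   # expected to be non-zero once the quota is exceeded
# flatten against the same full pool reports success, which is the bug
rbd flatten im1_clone1
echo $?   # 0 here, even though the underlying writes hit the quota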

#6 Updated by Josh Durgin almost 5 years ago

  • Subject changed from "rbd flatten still can be done when the pool is full" to "rbd and pool quota do not go well together"

#7 Updated by Xinxin Shu almost 5 years ago

  • Status changed from New to 12

#8 Updated by Xinxin Shu almost 5 years ago

  • Assignee set to Xinxin Shu

#9 Updated by Josh Durgin almost 5 years ago

I think we should handle this roughly the same way as we did with the cluster becoming full:

http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/18020

More discussion on the first version of the patches:

http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/17923

Essentially make the osd drop the request, and update the client side (osdc/Objecter.cc in userspace) to resend all requests to a particular pool when that pool goes from FULL -> not FULL according to the osdmap. The main difference from entire cluster full handling is that it only affects requests to particular pools.

See commit:4111729dda7437c23f59e7100b3c4a9ec4101dd0 and commit:e32874fc5aa6f59494766b7bbeb2b6ec3d8f190e and commit:3caf3effcb113f843b54e06099099909eb335453 in userspace.
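
To make the intended behavior concrete, a rough verification sketch (the object name and file are illustrative): with the quota exceeded, a write should block on the client instead of failing, and it should complete only once the pool's full flag clears.

# with the quota exceeded, this write should hang rather than return ENOSPC
rados -p rbd put quota-test /etc/hosts &
# clearing the quota (0 means "no limit") drops the pool's full flag ...
ceph osd pool set-quota rbd max_bytes 0
# ... so the Objecter resends the blocked request and it completes
wait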

#10 Updated by Xinxin Shu almost 5 years ago

Josh, thanks for pointing this out, I will rewrite the PR.

#11 Updated by Josh Durgin over 4 years ago

  • Status changed from 12 to Fix Under Review

#12 Updated by Josh Durgin over 4 years ago

  • Status changed from Fix Under Review to 7

#13 Updated by Josh Durgin over 4 years ago

  • Status changed from 7 to Resolved

osd: commit:67de12bf9b67c29bf613e831f4146ff9809e42f7 client: commit:dbcf2e40d3e8a92f280879c74b7ca954a902b2d1 test: commit:16ead95daa3d1309e8e76e57416b4201e71d0449

#14 Updated by Loic Dachary about 4 years ago

  • Status changed from Resolved to Pending Backport
  • Backport set to hammer

#15 Updated by Loic Dachary about 4 years ago

  • Copied to Backport #14824: hammer: rbd and pool quota do not go well together added

#18 Updated by Nathan Cutler almost 3 years ago

  • Status changed from Pending Backport to Resolved
