Bug #8714

we do not block old clients from breaking cache pools

Added by Greg Farnum about 5 years ago. Updated about 5 years ago.

Status: Resolved
Priority: Urgent
Assignee:
Category: -
Target version: -
Start date: 07/01/2014
Due date:
% Done: 0%
Source: Community (user)
Tags:
Backport: firefly
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:

Description

We got an email on ceph-users which implies that we are letting old kernel clients maul cache pools (by accessing base pools without understanding the caching rules at all).

Good day!
I have a server with Ubuntu 14.04 and Ceph Firefly installed. I configured main_pool (2 OSDs) and ssd_pool (1 SSD OSD). I want to use ssd_pool as a cache pool for main_pool:

  ceph osd tier add main_pool ssd_pool
  ceph osd tier cache-mode ssd_pool writeback
  ceph osd tier set-overlay main_pool ssd_pool

  ceph osd pool set ssd_pool hit_set_type bloom
  ceph osd pool set ssd_pool hit_set_count 1
  ceph osd pool set ssd_pool hit_set_period 600
  ceph osd pool set ssd_pool target_max_bytes 100000000000

 If tgt is used as:
 tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --bstype rbd --backing-store main_pool/store_main --bsopts "conf=/etc/ceph/ceph.conf"
 and an iSCSI initiator then connects to this LUN 1, I see that ssd_pool is used as the cache (visible via iostat -x 1), but the speed is slow.

 If tgt is used as follows (or another target such as scst or iscsitarget):
 tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 -b /dev/rbd1 (where rbd1 = main_pool/store_main)
 and an iSCSI initiator then connects to this LUN 1, I see that ssd_pool is not used; writes go straight through to the 2 OSDs.

 Help me, has anyone gotten iSCSI working with a cache pool?

We should block this. A per-pool mount failure would be what we really want, but failing that we can return error codes if a base pool gets IO from a client which doesn't have the caching feature bits.
Or we could modify rbd so that the rbd mount itself returns an error code! That would be better, if it's possible through the object class interface.

Associated revisions

Revision 0190df53 (diff)
Added by Sage Weil about 5 years ago

osd: prevent old clients from using tiered pools

If the client is old and doesn't understand tiering, don't let them use a
tiered pool. Reply with EOPNOTSUPP.

Fixes: #8714
Backport: firefly
Signed-off-by: Sage Weil <>

Revision e3bc1534 (diff)
Added by Sage Weil about 5 years ago

osd: prevent old clients from using tiered pools

If the client is old and doesn't understand tiering, don't let them use a
tiered pool. Reply with EOPNOTSUPP.

Fixes: #8714
Backport: firefly
Signed-off-by: Sage Weil <>
(cherry picked from commit 0190df53056834f219e33ada2af3a79e8c4dfb77)

History

#1 Updated by Sage Weil about 5 years ago

  • Backport changed from firefly, dumpling to firefly

#2 Updated by Sage Weil about 5 years ago

How about we return EPERM or EOPNOTSUPP on OSD ops from clients without the caching features?

#3 Updated by Sage Weil about 5 years ago

  • Assignee set to Sage Weil
  • Priority changed from High to Urgent

#4 Updated by Sage Weil about 5 years ago

  • Status changed from New to Need Review

#5 Updated by Sage Weil about 5 years ago

  • Status changed from Need Review to Pending Backport

#6 Updated by Sage Weil about 5 years ago

  • Status changed from Pending Backport to Resolved
