Bug #14590 (closed)

cephfs set directory layout data pool to 0 failed

Added by huang jun about 8 years ago. Updated about 8 years ago.

Status: Won't Fix
Priority: Normal
% Done: 0%
Source: other
Regression: No
Severity: 3 - minor

Description

I want to use the rbd pool (pool id 0) as the data pool, via the following steps:
1. mount -t ceph 192.168.3.101:/ /ceph
2. mkdir /ceph/rbd
3. ceph mds add_data_pool 0
4. cephfs /ceph/rbd set_layout -p 0
5. cephfs /ceph/rbd show_layout
layout.data_pool: 1
layout.object_size: 4194304
layout.stripe_unit: 4194304
layout.stripe_count: 1
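
Note that layout.data_pool still reports pool 1: the request to switch to pool 0 was silently ignored rather than rejected. As a side check, the pool id/name mapping can be confirmed up front (pool names other than rbd below are an assumption about a typical setup, not output captured from this cluster):

ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,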

I traced the code and found the following. In ioctl.c:
diff --git a/kc-3.18/ceph/ioctl.c b/kc-3.18/ceph/ioctl.c
index 6f6cfc4..2cdc530 100644
--- a/kc-3.18/ceph/ioctl.c
+++ b/kc-3.18/ceph/ioctl.c
@@ -91,7 +91,7 @@ static long ceph_ioctl_set_layout(struct file *file, void __user *arg)
     nl.object_size = l.object_size;
   else
     nl.object_size = ceph_file_layout_object_size(ci->i_layout);
-  if (l.data_pool)
+  if (l.data_pool >= 0)
     nl.data_pool = l.data_pool;
   else
     nl.data_pool = ceph_file_layout_pg_pool(ci->i_layout);

and in src/mds/Server.cc:

diff --git a/src/mds/Server.cc b/src/mds/Server.cc
index ba4fe15..ff30285 100644
--- a/src/mds/Server.cc
+++ b/src/mds/Server.cc
@@ -3804,7 +3804,7 @@ void Server::handle_client_setlayout(MDRequestRef& mdr)
     layout.fl_cas_hash = req->head.args.setlayout.layout.fl_cas_hash;
   if (req->head.args.setlayout.layout.fl_object_stripe_unit > 0)
     layout.fl_object_stripe_unit = req->head.args.setlayout.layout.fl_object_stripe_unit;
-  if (req->head.args.setlayout.layout.fl_pg_pool > 0) {
+  if (req->head.args.setlayout.layout.fl_pg_pool >= 0) {
     layout.fl_pg_pool = req->head.args.setlayout.layout.fl_pg_pool;

     if (layout.fl_pg_pool != old_pool) {
#1 Updated by huang jun about 8 years ago

diff --git a/src/mds/Server.cc b/src/mds/Server.cc
index ba4fe15..b894233 100644
--- a/src/mds/Server.cc
+++ b/src/mds/Server.cc
@@ -3804,7 +3804,7 @@ void Server::handle_client_setlayout(MDRequestRef& mdr)
     layout.fl_cas_hash = req->head.args.setlayout.layout.fl_cas_hash;
   if (req->head.args.setlayout.layout.fl_object_stripe_unit > 0)
     layout.fl_object_stripe_unit = req->head.args.setlayout.layout.fl_object_stripe_unit;
-  if (req->head.args.setlayout.layout.fl_pg_pool > 0) {
+  if (req->head.args.setlayout.layout.fl_pg_pool >= 0) {
     layout.fl_pg_pool = req->head.args.setlayout.layout.fl_pg_pool;

     if (layout.fl_pg_pool != old_pool) {
@@ -3899,7 +3899,7 @@ void Server::handle_client_setdirlayout(MDRequestRef& mdr)
     layout.fl_cas_hash = req->head.args.setlayout.layout.fl_cas_hash;
   if (req->head.args.setlayout.layout.fl_object_stripe_unit > 0)
     layout.fl_object_stripe_unit = req->head.args.setlayout.layout.fl_object_stripe_unit;
-  if (req->head.args.setlayout.layout.fl_pg_pool > 0) {
+  if (req->head.args.setlayout.layout.fl_pg_pool >= 0) {
     if (req->head.args.setlayout.layout.fl_pg_pool != layout.fl_pg_pool) {
       access |= MAY_SET_POOL;
     }
#2 Updated by Greg Farnum about 8 years ago

I don't think we're concerned about the ioctl since it's deprecated in favor of virtual xattrs. But please make sure those work, Zheng. :)
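
For reference, the virtual-xattr equivalent of step 4 in the description would look roughly like this (a sketch only; it addresses the pool by name rather than id, and "rbd" simply mirrors the reporter's mount at /ceph/rbd):

setfattr -n ceph.dir.layout.pool -v rbd /ceph/rbd
getfattr -n ceph.dir.layout /ceph/rbd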

#3 Updated by Sage Weil about 8 years ago

Why do you want to set the cephfs data pool to rbd? This is generally a bad idea, and it won't work in all circumstances because of the way the old interfaces are designed (pg_pool 0 -> no change).
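
To make "(pg_pool 0 -> no change)" concrete, here is a minimal standalone sketch of that sentinel convention; apply_pool and its arguments are hypothetical names, not actual Ceph code:

#include <cstdint>
#include <cstdio>

// Hypothetical illustration of the legacy sentinel: a requested pool
// id of 0 is read as "keep the current pool", so pool 0 itself can
// never be selected through such an interface.
static uint32_t apply_pool(uint32_t requested, uint32_t current)
{
    return requested ? requested : current;
}

int main()
{
    std::printf("%u\n", apply_pool(0, 1)); // prints 1: pool 0 collapsed to "no change"
    std::printf("%u\n", apply_pool(2, 1)); // prints 2: nonzero ids pass through
    return 0;
}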

#4 Updated by Greg Farnum about 8 years ago

Mixing pools probably isn't great, but we still want pool 0 to be available — not every user will have RBD running.

#5 Updated by Sage Weil about 8 years ago

Greg Farnum wrote:

Mixing pools probably isn't great, but we still want pool 0 to be available — not every user will have RBD running.

It's not possible either currently or after this change. The ioctl interface and the layout argument to create both use 0 to indicate "default" or "no change". This limitation is more explicit in https://github.com/liewegas/ceph/commit/e983b8dfe454d0b9c81ee7d334e8dc87c0db51a9, which has to convert between ceph_file_layout and file_layout_t.

In practice the mon usually creates the rbd pool (id 0); only users who feed a precreated osdmap into ceph-mon --mkfs won't have it.

Hmm, we could make the 'fs new' command reject a data pool with id 0 to prevent any surprises. I'll add a patch that does that.
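
A minimal sketch of what such a guard could look like; check_new_fs_data_pool and the error plumbing are assumptions for illustration, not the actual monitor code:

#include <cerrno>
#include <cstdint>
#include <iostream>
#include <sstream>

// Hypothetical stand-in for the check proposed above: refuse pool id 0
// at 'fs new' time, since legacy layout interfaces cannot address it.
static int check_new_fs_data_pool(int64_t data_pool_id, std::ostream& ss)
{
    if (data_pool_id == 0) {
        ss << "pool id 0 cannot be used as a CephFS data pool: "
              "legacy layout interfaces treat 0 as \"no change\"";
        return -EINVAL;
    }
    return 0;
}

int main()
{
    std::ostringstream ss;
    int r = check_new_fs_data_pool(0, ss);
    std::cout << r << ": " << ss.str() << std::endl; // -22: rejected
    return 0;
}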

#6 Updated by Sage Weil about 8 years ago

  • Status changed from New to Won't Fix