Bug #16588

closed

ceph mds dump show incorrect number of metadata pools.

Added by Rohith Radhakrishnan almost 8 years ago. Updated almost 7 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: Administration/Usability
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): MDSMonitor
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Ceph mds dump shows the metadata_pool id as 0. When no FS is present, the metadata_pool id should be left blank.

ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

ceph mds dump:-
dumped fsmap epoch 10
fs_name cephfs
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
14775: 10.242.42.220:6802/296988 'mds3' mds.0.9 up:active seq 24

Actions #1

Updated by Xiaoxi Chen almost 8 years ago

  • Status changed from New to 12
  • Assignee set to Xiaoxi Chen
Actions #2

Updated by Xiaoxi Chen almost 8 years ago

  • Status changed from 12 to Rejected

This is not a bug.

The numbers following "data_pools" and "metadata_pool" are not counts, but pool IDs.

root@slx03c-5zkd:~# ceph mds dump
dumped fsmap epoch 58866
fs_name testfs
epoch 58863
flags 0
created 2016-06-15 23:29:39.120090
modified 2016-06-27 23:54:26.236539
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 16565
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in 0
up {0=147225}
failed
damaged
stopped
data_pools 2
metadata_pool 3

inline_data disabled
147225: 10.153.11.0:6801/40798 'slx03c-0pkm-standby' mds.0.58860 up:active seq 4

root@slx03c-5zkd:~# ceph fs ls
name: testfs, metadata pool: meta, data pools: [data ]

root@slx03c-5zkd:~# ceph osd pool stats
pool data id 2
nothing is going on

pool meta id 3
nothing is going on

Actions #3

Updated by Rohith Radhakrishnan almost 8 years ago

On what basis is the pool ID generated? There are no existing pools, so shouldn't the numbering start at 0 or 1?

Also, in the output below, metadata_pool is given the ID "0" even though no pools have been created yet. Shouldn't the field be blank, like data_pools?

ceph mds dump
dumped fsmap epoch 1
fs_name cephfs
epoch 1
flags 0
created 0.000000
modified 0.000000
tableserver 0
root 0
session_timeout 0
session_autoclose 0
max_file_size 0
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={}
max_mds 0
in
up {}
failed
damaged
stopped
data_pools
metadata_pool 0

inline_data disabled

ceph fs ls
No filesystems enabled

ceph osd pool stats
pool rbd id 0
nothing is going on

Actions #4

Updated by Rohith Radhakrishnan almost 8 years ago

ceph osd pool stats
there are no pools!

ems@rack2-client-3:~$ ceph mds dump
dumped fsmap epoch 3
fs_name cephfs
epoch 3
flags 0
created 0.000000
modified 0.000000
tableserver 0
root 0
session_timeout 0
session_autoclose 0
max_file_size 0
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={}
max_mds 0
in
up {}
failed
damaged
stopped
data_pools
metadata_pool 0
inline_data disabled
ems@rack2-client-3:~$

Actions #5

Updated by Xiaoxi Chen almost 8 years ago

  • Status changed from Rejected to 12

Hmm, yes, this is because metadata_pool is initialized to 0; this seems worth fixing.

The bug is: when no FS is present, the metadata_pool field should output blank instead of "metadata_pool 0". Is that right?

If yes, would you mind updating the description?
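
For illustration only, a minimal C++ sketch of that behaviour, independent of the actual MDSMonitor/FSMap code (the FsMapSketch type, its fields, and the dump() helper below are hypothetical names): keep the metadata pool unset until a filesystem exists, and print nothing after the field name when it is unset.

// Minimal sketch of the proposed dump behaviour. This is NOT the real Ceph
// MDSMonitor/FSMap code; the type and function names are hypothetical.
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

struct FsMapSketch {
  // Unset until a filesystem (and therefore a metadata pool) exists.
  std::optional<int64_t> metadata_pool;
  std::vector<int64_t> data_pools;
};

void dump(const FsMapSketch& m, std::ostream& out) {
  out << "data_pools";
  for (int64_t p : m.data_pools)
    out << " " << p;
  out << "\n";

  // Print a blank value instead of a default-initialized 0 when no FS exists.
  out << "metadata_pool";
  if (m.metadata_pool)
    out << " " << *m.metadata_pool;
  out << "\n";
}

int main() {
  FsMapSketch empty;        // no filesystem yet -> both fields print blank
  dump(empty, std::cout);

  FsMapSketch fs{3, {2}};   // pool IDs matching the `ceph osd pool stats` output above
  dump(fs, std::cout);
  return 0;
}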

Actions #6

Updated by Rohith Radhakrishnan almost 8 years ago

Rohith Radhakrishnan wrote:

Ceph mds dump shows the metadata_pool id as 0. When no FS is present, the metadata_pool id should be left blank.

ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

ceph mds dump:-
dumped fsmap epoch 10
fs_name cephfs
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
14775: 10.242.42.220:6802/296988 'mds3' mds.0.9 up:active seq 24

Actions #7

Updated by Rohith Radhakrishnan almost 8 years ago

Hi Xiaoxi,

You are right about the bug. The metadata_pool field should be left blank. I have changed the description as required, but I am not sure how to get it to the top...

Actions #8

Updated by Nathan Cutler almost 8 years ago

  • Description updated (diff)

Original description:

Ceph mds dump shows metadata pool count as 2, even though only one metadata pool is present.

ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

ceph mds dump:-
dumped fsmap epoch 10
fs_name cephfs
failed
damaged
stopped
data_pools 1
metadata_pool 2
inline_data disabled
14775: 10.242.42.220:6802/296988 'mds3' mds.0.9 up:active seq 24

Actions #9

Updated by Xiaoxi Chen almost 8 years ago

  • Subject changed from ceph osd dump show incorrect number of metadata pools. to ceph mds dump show incorrect number of metadata pools.
Actions #10

Updated by Xiaoxi Chen almost 8 years ago

  • Status changed from 12 to Fix Under Review
Actions #11

Updated by Greg Farnum almost 7 years ago

  • Project changed from Ceph to CephFS
  • Category set to Administration/Usability
  • Status changed from Fix Under Review to Resolved
  • Component(FS) MDSMonitor added