Bug #16730
closed
"mds dump" display incomplete
Status:
Won't Fix
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
ceph-qa-suite:
fs
Component(FS):
MDSMonitor
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
create "cephfs&&leadorfs2" fs when run "create fs flag set enable_multiple"
ceph fs new cephfs cephfs_metadata cephfs_data ceph fs new leadorfs2 cephfs_metadata2 cephfs_data2
1. use "ceph mds dump"
[root@cephfs102 3336422]# ceph mds dump
dumped fsmap epoch 131
fs_name cephfs
epoch   130
flags   0
created 2016-07-13 12:15:17.968014
modified        2016-07-13 12:15:17.968014
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
last_failure    0
last_failure_osd_epoch  1238
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in      0
up      {0=767974}
failed
damaged
stopped
data_pools      1
metadata_pool   2
inline_data     disabled
767974: 100.100.100.103:6800/15523 'cephfs103' mds.0.127 up:active seq 853
2. use "ceph fs dump"
[root@cephfs102 3336422]# ceph fs dump
dumped fsmap epoch 131
e131
enable_multiple, ever_enabled_multiple: 1,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}

Filesystem 'cephfs' (1)
fs_name cephfs
epoch   130
flags   0
created 2016-07-13 12:15:17.968014
modified        2016-07-13 12:15:17.968014
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
last_failure    0
last_failure_osd_epoch  1238
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in      0
up      {0=767974}
failed
damaged
stopped
data_pools      1
metadata_pool   2
inline_data     disabled
767974: 100.100.100.103:6800/15523 'cephfs103' mds.0.127 up:active seq 853

Filesystem 'leadorfs2' (2)
fs_name leadorfs2
epoch   113
flags   0
created 2016-07-14 19:51:06.262471
modified        2016-07-14 19:51:06.262471
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in      0
up      {0=384086}
failed
damaged
stopped
data_pools      3
metadata_pool   4
inline_data     disabled
384086: 100.100.100.104:6823/24651 'cephfs104' mds.0.112 up:active seq 3024

Standby daemons:
644648: 100.100.100.102:6811/3487948 'cephfs102' mds.-1.0 up:standby seq 1
The "mds dump" output does not display the following 2 items:
(1) the standby daemons
(2) the filesystem whose fs_name is "leadorfs2"
Updated by huanwen ren almost 8 years ago
mds dump" and "fs dump" are repeated,and "mds dump" display incomplete.
so delete "mds dump" I think is the best choice
I submitted a PR
https://github.com/ceph/ceph/pull/10338
Updated by Greg Farnum almost 8 years ago
- Status changed from New to Won't Fix
This is deliberate. "mds dump" dumps a specific filesystem (it defaults to the first one, but a client which is set up to use the non-default will see its own fs); "fs dump" dumps all of them. This is one of the things that will be more useful as we extend the preview-multifs code to a better security model.
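The scoping difference described above can be sketched with the two commands side by side. This is an illustrative transcript, not output from the reporter's cluster; it assumes a pre-Luminous cluster where multiple filesystems have been enabled, and the commands require a live cluster to actually run:

# Prerequisite: allow more than one filesystem in the cluster
ceph fs flag set enable_multiple true --yes-i-really-mean-it

# "fs dump" prints the entire FSMap: every Filesystem section
# (here 'cephfs' and 'leadorfs2') plus the "Standby daemons:" list.
ceph fs dump

# "mds dump" prints the MDSMap of a single filesystem only -- by default
# the first one -- so the second filesystem and the standby daemons
# never appear in its output. That is the behavior reported here,
# and per the comment above it is intentional.
ceph mds dump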