Bug #23429
Closed
File corrupt after writing to cephfs
Description
We found millions of corrupted files on our CephFS cluster, which uses an erasure-coded data pool with overwrites enabled. I selected some of them and used a diff tool to compare them against the originals. Every corrupted file is larger than its original; it looks as if some blocks in these files were written twice.
I think we have hit a bug. The sample files are too big to upload, so I made a screenshot of the diff tool to show you.
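To narrow down where a corrupted copy starts to differ from its original, a simple block-wise comparison works well on large files. The sketch below is illustrative only (the paths and block size are assumptions, not from the report); it returns the byte offset of the first difference, which is useful for checking whether corruption lines up with object or stripe boundaries.

```python
import os


def first_divergence(good_path, bad_path, block_size=4 * 1024 * 1024):
    """Return the byte offset where two files first differ.

    Returns None if the files are identical. If one file is a prefix of
    the other (a pure size mismatch), the returned offset is the length
    of the shorter file.
    """
    offset = 0
    with open(good_path, "rb") as good, open(bad_path, "rb") as bad:
        while True:
            a = good.read(block_size)
            b = bad.read(block_size)
            if a != b:
                # Narrow down to the exact differing byte in this block.
                for i, (x, y) in enumerate(zip(a, b)):
                    if x != y:
                        return offset + i
                # Blocks agree where they overlap, so the lengths differ.
                return offset + min(len(a), len(b))
            if not a:
                # Both files exhausted with no difference found.
                return None
            offset += len(a)
```

Comparing the reported divergence offsets across many corrupted files can reveal whether they cluster at a fixed granularity (e.g. the EC stripe width), which would support the "written twice in some block" observation.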
Updated by Zheng Yan about 6 years ago
Is file size larger than it should be? do you store cephfs metadata on ec pools?
Updated by Patrick Donnelly about 6 years ago
- Status changed from New to Need More Info
- Target version deleted (v12.2.3)
- Tags deleted (CephFS, erasurecode)
Can you paste the output of `ceph fs dump` and `ceph osd dump`?
Updated by lin rj about 6 years ago
Zheng Yan wrote:
Is file size larger than it should be? do you store cephfs metadata on ec pools?
The metadata is in a replicated pool.
Updated by lin rj about 6 years ago
Patrick Donnelly wrote:
Can you paste the output of `ceph fs dump` and `ceph osd dump`?
dumped fsmap epoch 83
e83
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
legacy client fscid: 1
Filesystem 'cephfs' (1)
fs_name cephfs
epoch 83
flags c
created 2018-03-01 14:08:40.207564
modified 2018-03-01 14:08:40.207564
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2}
max_mds 1
in 0
up {0=16149}
failed
damaged
stopped 1
data_pools [7,8]
metadata_pool 6
inline_data disabled
balancer
standby_count_wanted 1
16149: 10.12.10.33:6800/1681698867 '33' mds.0.6 up:active seq 456739
Standby daemons:
15884: 10.12.10.32:6800/106179882 '32' mds.-1.0 up:standby seq 2
27678: 10.12.10.34:6800/1566981326 '34' mds.-1.0 up:standby seq 2
Updated by lin rj about 6 years ago
- File osdmap.tar.gz added
Patrick Donnelly wrote:
Can you paste the output of `ceph fs dump` and `ceph osd dump`?
Updated by Zheng Yan about 6 years ago
When did you create the filesystem? Was the filesystem created before Luminous?
Do you know how these files were created? More specifically, do these files ever get truncated?
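The truncation question matters because a missed or lost truncate is one generic way a file can end up larger than its logical content: rewriting a file in place without truncating leaves stale bytes past the end of the new data. The sketch below is purely illustrative of that local-filesystem behavior (it is not the CephFS code path, and the function name is made up for this example).

```python
import os


def rewrite_in_place(path, data, truncate=True):
    """Overwrite a file from offset 0.

    With truncate=False, any old contents beyond len(data) survive, so
    the file stays larger than the data just written -- the same symptom
    as the report (file larger than the original). Illustrative only.
    """
    with open(path, "r+b") as f:  # "r+b" opens for update without truncating
        f.write(data)
        if truncate:
            f.truncate()  # cut the file at the current position
```

If the affected files were produced by a write pattern that truncates and rewrites, checking whether the extra bytes match an older version of the file would be a useful data point.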
Updated by Patrick Donnelly about 5 years ago
- Status changed from Need More Info to Can't reproduce