Bug #45024
mds: wrong link count under certain circumstance
Status: Closed
Description
I'm simulating a scenario with two active MDSs where, while creating a hard link across the two MDS ranks, both MDSs and the client crash. When all the MDSs and the client are up again, the hard link request should either succeed or be rolled back. However, in my simulation, it is neither. The crash is supposed to happen in a situation like this: the slave MDS has sent slave_prep_ack to the master and trimmed this log segment (this log shouldn't be trimmed, but it did happen), then crashes. At the same time, for the master MDS, the slave_prep_ack message is somehow missing or delayed, and it also crashes. To achieve the message-dropping effect, I modified the source code, making Server::handle_slave_link_prep_ack() return at its beginning.
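The modification is essentially an early return at the top of the handler. A minimal sketch of that change in src/mds/Server.cc (the signature shown is the Luminous-era one and is an assumption on my part; it differs slightly on master):

void Server::handle_slave_link_prep_ack(MDRequestRef& mdr, MMDSSlaveRequest *m)
{
  // Simulate the prep-ack being lost on the wire: drop the message so the
  // master never records the slave's acknowledgement of the link prepare.
  return;

  /* ... original handling of the slave's link-prep ack follows ... */
}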
So with the drop-prep-ack version of the MDS, we can reproduce the bug with the following steps:
1. Set configs so that the journal can be flushed as soon as possible: mds_log_events_per_segment = 1; mds_log_max_events = 1 (see the ceph.conf example after this list).
2. Run `ceph-fuse /mnt` on a separate machine.
3. Pin the directories to different MDS ranks, then run:
mkdir /mnt/mds0
mkdir /mnt/mds1
setfattr -n ceph.dir.pin -v 0 /mnt/mds0
setfattr -n ceph.dir.pin -v 1 /mnt/mds1
touch /mnt/mds0/0
ln /mnt/mds0/0 /mnt/mds1/0-link # this command will hang
4. Wait until rank 0 has flushed all of its logs. This can be checked by grepping for 'try_to_expire success' in the MDS log (see the example command after this list). In my environment, waiting for 1 minute is enough.
5. Reboot the client machine.
6. Restart the two MDSs and wait until both of them are active.
7. Mount again on the client and check the results of `ls -l /mnt/mds0` and `ls -l /mnt/mds1`: /mnt/mds0/0 has a link count of 2, but there is nothing under the /mnt/mds1 directory.
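For step 1, an example of where the options can go (a sketch assuming they are set in the [mds] section of ceph.conf; setting them at runtime should work as well):

[mds]
    mds_log_events_per_segment = 1
    mds_log_max_events = 1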
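For step 4, an example of the check (assuming the default MDS log location and that mds debug logging is verbose enough to include the MDLog expiry messages):

grep 'try_to_expire success' /var/log/ceph/ceph-mds.*.log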
I tried the Luminous and master branches; both of them give the same result as above.