This is a good find. The relevant part of the log:
2024-01-10T20:43:52.618+0000 7f8ea24ef700 10 MDSContext::complete: 12C_MDS_VoidFn
2024-01-10T20:43:52.618+0000 7f8ea24ef700 1 mds.0.280 resolve_done
2024-01-10T20:43:52.618+0000 7f8ea24ef700 3 mds.0.280 request_state up:reconnect
2024-01-10T20:43:52.618+0000 7f8ea24ef700 5 mds.beacon.e set_want_state: up:resolve -> up:reconnect
2024-01-10T20:43:52.618+0000 7f8ea24ef700 5 mds.beacon.e Sending beacon up:reconnect seq 40
2024-01-10T20:43:52.618+0000 7f8ea24ef700 1 -- [v2:172.21.15.47:6832/3562607475,v1:172.21.15.47:6833/3562607475] --> [v2:172.21.15.47:3300/0,v1:172.21.15.47:6789/0] -- mdsbeacon(8022/e up:reconnect seq=40 v281) v8 -- 0x55ce75078b00 con 0x55ce7507ac00
2024-01-10T20:43:52.618+0000 7f8ea24ef700 10 mds.0.snapclient sync
2024-01-10T20:43:52.618+0000 7f8ea24ef700 10 mds.0.snapclient refresh want 1
2024-01-10T20:43:52.618+0000 7f8ea24ef700 1 -- [v2:172.21.15.47:6832/3562607475,v1:172.21.15.47:6833/3562607475] send_to--> mds [v2:172.21.15.47:6832/3562607475,v1:172.21.15.47:6833/3562607475] -- mds_table_request(snaptable query 1 9 bytes) v1 -- ?+0 0x55ce7520e480
2024-01-10T20:43:52.618+0000 7f8ea24ef700 1 -- [v2:172.21.15.47:6832/3562607475,v1:172.21.15.47:6833/3562607475] --> [v2:172.21.15.47:6832/3562607475,v1:172.21.15.47:6833/3562607475] -- mds_table_request(snaptable query 1 9 bytes) v1 -- 0x55ce7520e480 con 0x55ce7507a000
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 10 MDSContext::complete: 25C_IO_MDC_FragmentPurgeOld
2024-01-10T20:43:52.618+0000 7f8ea24ef700 1 -- [v2:172.21.15.47:6832/3562607475,v1:172.21.15.47:6833/3562607475] <== mds.0 v2:172.21.15.47:6832/3562607475 0 ==== mds_table_request(snaptable query 1 9 bytes) v1 ==== 0+0+0 (unknown 0 0 0) 0x55ce7520e480 con 0x55ce7507a000
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 10 mds.0.cache fragment_old_purged 0x10000003d88
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 10 mds.0.log submit_entry also starting new segment: last = 64925/241326736, event seq = 65806
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 7 mds.0.log _prepare_new_segment seq 65807
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 10 mds.0.cache advance_stray to index 1 fragmenting index -1
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 7 mds.0.log _journal_segment_subtree_map
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 10 mds.0.cache create_subtree_map 12 subtrees, 11 fullauth
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 10 mds.0.cache number of subtrees = 12; not printing subtrees
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x1 / [2,head] auth v=826 cv=0/0 dir_auth=0 state=1610612736 f(v0 m2024-01-10T20:03:30.742671+0000 1=0+1) n(v24 rc2024-01-10T20:43:11.194482+0000 b170731713 18232=17060+1172) hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 dirty=1 0x55ce7436ed00]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache subtree bound [dir 0x10000002bea /client.0/tmp/linux-2.6.33/arch/ [2,head] rep@1.0 dir_auth=1 state=134217728 f(v10 m2024-01-10T20:42:28.420395+0000 24=2+22)/f(v10 m2024-01-10T20:42:21.532543+0000 23=2+21) n(v18 rc2024-01-10T20:42:45.387033+0000 b74360890 13407=12530+877)/n(v18 rc2024-01-10T20:42:34.965256+0000 b71425106 12586=11736+850) hs=9+0,ss=0+0 | child=1 subtree=1 dirty=0 0x55ce7947c000]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x100 ~mds0/ [2,head] auth v=21965 cv=0/0 dir_auth=0 state=1610612736 f(v0 10=0+10) n(v68 rc2024-01-10T20:37:59.960150+0000 b5173082 889=524+365) hs=10+0,ss=0+0 dirty=10 | child=1 subtree=1 dirty=1 0x55ce7436f180]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x20000004fb1 /client.0/tmp/linux-2.6.33/arch/m68k/ [2,head] auth v=975 cv=0/0 dir_auth=0 state=1610612737|complete f(v0 m2024-01-10T20:40:16.654210+0000 24=4+20) n(v4 rc2024-01-10T20:40:16.657210+0000 b4928581 473=449+24) hs=24+0,ss=0+0 dirty=24 | child=1 subtree=1 dirty=1 0x55ce7550b600]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x20000006479 /client.0/tmp/linux-2.6.33/arch/um/ [2,head] auth v=761 cv=0/0 dir_auth=0 state=1610612737|complete f(v0 m2024-01-10T20:42:28.385396+0000 24=15+9) n(v5 rc2024-01-10T20:42:28.419395+0000 b854076 371=344+27) hs=24+0,ss=0+0 dirty=24 | child=1 subtree=1 dirty=1 0x55ce762ba480]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x20000006640 /client.0/tmp/linux-2.6.33/arch/x86/include/ [2,head] auth v=1199 cv=0/0 dir_auth=0 state=1610612737|complete f(v0 m2024-01-10T20:42:40.091146+0000 1=0+1) n(v2 rc2024-01-10T20:42:42.666091+0000 b1121623 318=314+4) hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 dirty=1 0x55ce7656b180]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x20000006224 /client.0/tmp/linux-2.6.33/arch/sparc/ [2,head] auth v=1177 cv=0/0 dir_auth=0 state=1610612737|complete f(v0 m2024-01-10T20:42:21.511543+0000 12=3+9) n(v5 rc2024-01-10T20:42:21.532543+0000 b3403846 596=586+10) hs=12+0,ss=0+0 dirty=12 | child=1 subtree=1 dirty=1 0x55ce76ddba80]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x20000004cbc /client.0/tmp/linux-2.6.33/arch/ia64/ [2,head] auth v=1000 cv=0/0 dir_auth=0 state=1610612737|complete f(v0 m2024-01-10T20:40:08.471385+0000 20=5+15) n(v5 rc2024-01-10T20:40:08.485385+0000 b3773572 519=486+33) hs=20+0,ss=0+0 dirty=20 | child=1 subtree=1 dirty=1 0x55ce7734da80]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x200000052d8 /client.0/tmp/linux-2.6.33/arch/mips/ [2,head] auth v=2280 cv=0/0 dir_auth=0 state=1610612737|complete f(v1 m2024-01-10T20:41:27.825690+0000 39=3+36) n(v7 rc2024-01-10T20:41:27.841689+0000 b7399933 1462=1324+138) hs=39+0,ss=0+0 dirty=39 | child=1 subtree=1 dirty=1 0x55ce77624900]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x20000005ed4 /client.0/tmp/linux-2.6.33/arch/sh/ [2,head] auth v=1079 cv=0/0 dir_auth=0 state=1610612737|complete f(v0 m2024-01-10T20:42:11.678753+0000 17=4+13) n(v9 rc2024-01-10T20:42:11.681753+0000 b4445763 847=759+88) hs=17+0,ss=0+0 dirty=17 | child=1 subtree=1 dirty=1 0x55ce786aa400]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x2000000577f /client.0/tmp/linux-2.6.33/arch/powerpc/ [2,head] auth v=2209 cv=0/0 dir_auth=0 state=1610612737|complete f(v0 m2024-01-10T20:41:55.365101+0000 16=4+12) n(v10 rc2024-01-10T20:41:55.403101+0000 b12428948 1437=1389+48) hs=16+0,ss=0+0 dirty=16 | child=1 subtree=1 dirty=1 0x55ce78963b00]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 15 mds.0.cache auth subtree [dir 0x10000002d1c /client.0/tmp/linux-2.6.33/arch/arm/ [2,head] auth v=8180 cv=0/0 dir_auth=0 state=1610612737|complete f(v0 m2024-01-10T20:38:51.494038+0000 86=4+82) n(v7 rc2024-01-10T20:38:51.502038+0000 b18201892 3343=3128+215) hs=86+0,ss=0+0 dirty=86 | child=1 subtree=1 dirty=1 0x55ce795c1180]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 20 mds.0.journal EMetaBlob::add_dir_context final:
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 20 mds.0.journal EMetaBlob::add_dir_context final:
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 20 mds.0.journal EMetaBlob::add_dir_context(0x55ce7436ed00) have lump 0x1
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 20 mds.0.journal EMetaBlob::add_dir_context final: 0x55ce74333900,0x55ce7a47ec80,0x55ce77f09b80,0x55ce7a21aa00
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 -1 mds.0.cache.den(0x1 client.0) newly corrupt dentry to be committed: [dentry #0x1/client.0 [2,head] auth (dversion lock) v=825 ino=0x10000000000 state=1610612736 | inodepin=1 dirty=1 0x55ce74333900]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 10 mds.0.cache.dir(0x1) go_bad_dentry client.0
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 -1 log_channel(cluster) log [ERR] : MDS abort because newly corrupt dentry to be committed: [dentry #0x1/client.0 [2,head] auth (dversion lock) v=825 ino=0x10000000000 state=1610612736 | inodepin=1 dirty=1 0x55ce74333900]
2024-01-10T20:43:52.618+0000 7f8e9c4e3700 1 -- [v2:172.21.15.47:6832/3562607475,v1:172.21.15.47:6833/3562607475] --> [v2:172.21.15.47:3300/0,v1:172.21.15.47:6789/0] -- log(1 entries from seq 1 at 2024-01-10T20:43:52.619991+0000) v1 -- 0x55ce743468c0 con 0x55ce7507ac00
2024-01-10T20:43:52.620+0000 7f8e9c4e3700 -1 /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.2.7-278-gd6f81946/rpm/el8/BUILD/ceph-17.2.7-278-gd6f81946/src/mds/MDSRank.cc: In function 'void MDSRank::abort(std::string_view)' thread 7f8e9c4e3700 time 2024-01-10T20:43:52.620029+0000
/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.2.7-278-gd6f81946/rpm/el8/BUILD/ceph-17.2.7-278-gd6f81946/src/mds/MDSRank.cc: 941: ceph_abort_msg("abort() called")
The issue is that the snapclient has not yet synced even though the snapserver is ready, so the MDS still sees a current snap version of 1; the dentry validity check then misreads a legitimate dentry as corrupt and aborts. I will look into a solution soon.
(No legitimate corruption found.)