From the logs, dir(0x1) submits the 'volumes' dentry to the metadata pool:
1726069 2022-11-24T09:18:10.177+0100 7fe0347eb700 10 mds.0.cache.dir(0x1) commit want 0 on [dir 0x1 / [2,head] auth pv=5067819 v=5067817 cv=0/0 dir_auth=0 ap=2+1 state=1610612800|fetching f(v0 m2022-02-14T14:08:22.630497+0100 1=0+1) n(v203259 rc2022-11-23T17:09:36.965133+0100 b324608642500 rs3 145360=132068+13292) hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=1 0x5569a9e49600]
1726070 2022-11-24T09:18:10.178+0100 7fe0347eb700 10 mds.0.cache.dir(0x1) auth_pin by 0x5569a9e49600 on [dir 0x1 / [2,head] auth pv=5067819 v=5067817 cv=0/0 dir_auth=0 ap=3+1 state=1610612800|fetching f(v0 m2022-02-14T14:08:22.630497+0100 1=0+1) n(v203259 rc2022-11-23T17:09:36.965133+0100 b324608642500 rs3 145360=132068+13292) hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=1 0x5569a9e49600] count now 3
1726071 2022-11-24T09:18:10.178+0100 7fe0347eb700 10 mds.0.cache.dir(0x1) _commit want 5067817 on [dir 0x1 / [2,head] auth pv=5067819 v=5067817 cv=0/0 dir_auth=0 ap=3+1 state=1610612800|fetching f(v0 m2022-02-14T14:08:22.630497+0100 1=0+1) n(v203259 rc2022-11-23T17:09:36.965133+0100 b324608642500 rs3 145360=132068+13292) hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=1 authpin=1 0x5569a9e49600]
1726072 2022-11-24T09:18:10.178+0100 7fe0347eb700 10 mds.0.cache.dir(0x1) marking committing
1726073 2022-11-24T09:18:10.178+0100 7fe0347eb700 20 mds.0.bal hit_dir 4 pop is 1, frag * size 1 [pop IRD:[C 0.00e+00] IWR:[C 0.00e+00] RDR:[C 0.00e+00] FET:[C 9.82e-01] STR:[C 1.00e+00] *LOAD:6.0]
1726074
1726075 2022-11-24T09:18:10.178+0100 7fe0347eb700 10 mds.0.cache.dir(0x1) _omap_commit
1726076 2022-11-24T09:18:10.178+0100 7fe0347eb700 10 mds.0.cache.dir(0x1) set volumes [dentry #0x1/volumes [2,head] auth (dversion lock) pv=5067818 v=5067816 ino=0x10000000000 state=1610612736 | inodepin=1 dirty=1 0x5569b24b2280]
1726077 2022-11-24T09:18:10.178+0100 7fe0347eb700 14 mds.0.cache.dir(0x1) dn 'volumes' inode [inode 0x10000000000 [...c0d9,head] /volumes/ auth v5067816 pv5067818 ap=1 f(v0 m2022-11-23T11:31:31.711158+0100 5235=5229+6) n(v377862 rc2022-11-23T17:09:36.965133+0100 b324608642500 rs3 145360=132068+13292)/n(v377861 rc2022-11-23T17:09:36.965133+0100 b324608642500 rs3 145360=132068+13292) old_inodes=25874 (inest lock w=1 dirty) (iversion lock w=1) | dirtyscattered=1 lock=2 dirfrag=1 dirtyrstat=0 dirty=1 authpin=1 0x5569ae8beb00]
1726078 2022-11-24T09:18:10.178+0100 7fe0347eb700 15 mds.0.journal try_to_expire committing [dir 0x100 ~mds0/ [2,head] auth pv=226292767 v=226292765 cv=0/0 dir_auth=0 ap=1+1 state=1610612737|complete f(v0 10=0+10) n(v497237 rc2022-11-23T17:09:36.965133+0100 b1041 14=3+11) hs=10+0,ss=0+0 dirty=10 | child=1 subtree=1 subtreetemp=0 dirty=1 waiter=0 authpin=1 0x5569a9e49a80]
1726079 2022-11-24T09:18:10.178+0100 7fe02ffe2700 10 mds.0.cache.dir(0x1) _omap_commit_ops
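The split into two requests is consistent with _omap_commit_ops batching the dentry's omap updates into size-limited OSD writes. The sketch below is a hypothetical Python model of such batching (the 10 MB limit, the function name and the missing non-empty guard are assumptions for illustration, not the actual Ceph code); it shows how flushing on the size check before adding an oversized first entry can emit a request with an empty op list, like the first osd_op in the log:

```python
MAX_WRITE_SIZE = 10 * 1024 * 1024  # assumed per-request limit (10 MB)

def batch_omap_updates(entries, max_write_size=MAX_WRITE_SIZE):
    """Split omap (key, encoded_size) updates into size-limited OSD requests.

    Hypothetical model: flushing the current batch whenever the next entry
    would overflow it, WITHOUT checking that the batch is non-empty, emits
    an empty first request when the very first entry alone exceeds the
    limit -- matching the empty-op osd_op seen in the log above.
    """
    requests, current, size = [], [], 0
    for key, value_size in entries:
        entry_size = len(key) + value_size
        if size + entry_size > max_write_size:  # note: no "current and ..." guard
            requests.append(current)            # may append an empty batch
            current, size = [], 0
        current.append(key)
        size += entry_size
    if current:
        requests.append(current)
    return requests

# A 'volumes_head' dentry whose encoded value (~11.9 MB, cf. the
# omap-set-vals in=11928426b below) exceeds the limit by itself:
reqs = batch_omap_updates([("volumes_head", 11928426)])
print([len(r) for r in reqs])  # → [0, 1]: empty first request, dentry in second
```

Under this model, any single dentry value larger than the per-request limit yields one empty request followed by one oversized one.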
But it submitted two OSD requests.
The first one was an invalidate request (note the empty op list []):
1733304 2022-11-24T09:18:10.228+0100 7fe02ffe2700 1 -- [v2:134.158.80.254:6800/3072911038,v1:134.158.80.254:6801/3072911038] --> [v2:134.158.80.251:6820/1129259,v1:134.158.80.251:6821/1129259] -- osd_op(unknown.0.12135:45216 2.1f 2:ff5b34d6:::1.00000000:head [] snapc 0=[] ondisk+write+known_if_redirected+full_force+supports_pool_eio e62331) v8 -- 0x5569b7730800 con 0x5569aab94400
And it gets a -22 (EINVAL) error back from the OSD:
1862249 2022-11-24T09:18:11.481+0100 7fe0397f5700 1 -- [v2:134.158.80.254:6800/3072911038,v1:134.158.80.254:6801/3072911038] <== osd.20 v2:134.158.80.251:6820/1129259 1455 ==== osd_op_reply(45216 1.00000000 [] v0'0 uv0 ondisk = -22 ((22) Invalid argument)) v8 ==== 112+0+0 (crc 0 0 0) 0x5569af77ab40 con 0x5569aab94400
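The -22 in the osd_op_reply is just the negated Linux errno 22, i.e. EINVAL ("Invalid argument"), as a quick check confirms:

```python
import errno
import os

# -22 in the osd_op_reply is the negated Linux errno for EINVAL
assert errno.EINVAL == 22
print(os.strerror(errno.EINVAL))  # → Invalid argument
```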
And this finally forces the CephFS file system read-only:
2175076 2022-11-24T09:18:12.965+0100 7fe02ffe2700 10 MDSContext::complete: 18C_IO_Dir_Committed
2175077 2022-11-24T09:18:12.965+0100 7fe02ffe2700 1 mds.0.cache.dir(0x1) commit error -22 v 5067817
2175078 2022-11-24T09:18:12.965+0100 7fe035fee700 1 -- [v2:134.158.80.254:6800/3072911038,v1:134.158.80.254:6801/3072911038] <== client.4166749 v1:134.158.80.110:0/3015200204 732 ==== client_request(client.4166749:132710 lookup #0x1000002dc6e/1 2022-11-24T09:18:12.403544+0100 caller_uid=0, caller_gid=0{}) v4 ==== 137+0+0 (unknown 2206901006 0 0) 0x5569bce93340 con 0x5569b7730400
2175079 2022-11-24T09:18:12.965+0100 7fe02ffe2700 -1 log_channel(cluster) log [ERR] : failed to commit dir 0x1 object, errno -22
2175080 2022-11-24T09:18:12.965+0100 7fe02ffe2700 -1 mds.0.12135 unhandled write error (22) Invalid argument, force readonly...
2175081 2022-11-24T09:18:12.965+0100 7fe02ffe2700 1 mds.0.cache force file system read-only
2175082 2022-11-24T09:18:12.965+0100 7fe02ffe2700 0 log_channel(cluster) log [WRN] : force file system read-only
2175083 2022-11-24T09:18:12.965+0100 7fe02ffe2700 10 mds.0.server force_clients_readonly
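The sequence above matches the MDS's generic write-error handling: an unexpected commit error is treated as unrecoverable and the whole file system is forced read-only. A rough sketch of that decision (a hypothetical simplification for illustration, not the actual MDS error-handling code):

```python
import errno

def handle_commit_error(err, dir_ino, force_readonly):
    """Simplified model of the MDS reaction to a dirfrag commit error."""
    if err == 0:
        return "ok"
    # The real MDS treats a few specific errors (e.g. ENOSPC) differently;
    # anything unexpected, including EINVAL, takes the fatal path.
    if err == -errno.ENOSPC:
        return "enospc path"
    print(f"failed to commit dir {dir_ino:#x} object, errno {err}")
    force_readonly()  # "unhandled write error ... force readonly..."
    return "readonly"

state = {"readonly": False}
result = handle_commit_error(-22, 0x1, lambda: state.update(readonly=True))
print(result, state["readonly"])  # → readonly True
```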
The second one, which carries the actual omap payload (omap-set-header plus roughly 11.9 MB of omap-set-vals):
1733315 2022-11-24T09:18:10.229+0100 7fe02ffe2700 1 -- [v2:134.158.80.254:6800/3072911038,v1:134.158.80.254:6801/3072911038] --> [v2:134.158.80.251:6820/1129259,v1:134.158.80.251:6821/1129259] -- osd_op(unknown.0.12135:45217 2.1f 2:ff5b34d6:::1.00000000:head [omap-set-header in=274b,omap-set-vals in=11928426b] snapc 0=[] ondisk+write+known_if_redirected+full_force+supports_pool_eio e62331) v8 -- 0x5569b7731400 con 0x5569aab94400
The 'volumes' CInode seems to include a very large number of snapshots ([...c0d9,head], old_inodes=25874):
1726077 2022-11-24T09:18:10.178+0100 7fe0347eb700 14 mds.0.cache.dir(0x1) dn 'volumes' inode [inode 0x10000000000 [...c0d9,head] /volumes/ auth v5067816 pv5067818 ap=1 f(v0 m2022-11-23T11:31:31.711158+0100 5235=5229+6) n(v377862 rc2022-11-23T17:09:36.965133+0100 b324608642500 rs3 145360=132068+13292)/n(v377861 rc2022-11-23T17:09:36.965133+0100 b324608642500 rs3 145360=132068+13292) old_inodes=25874 (inest lock w=1 dirty) (iversion lock w=1) | dirtyscattered=1 lock=2 dirfrag=1 dirtyrstat=0 dirty=1 authpin=1 0x5569ae8beb00]
So the 'volumes' dentry itself can be very large when submitted to the metadata pool.
This looks like a bug in the MDS source code.
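Rough arithmetic from the logged figures supports this. With old_inodes=25874 stale snapshot copies kept on the single 'volumes' dentry, the observed 11928426-byte omap-set-vals payload works out to roughly 460 bytes per old inode, so the retained snapshots alone plausibly account for the oversized value (the per-inode figure is derived from the log, not from the actual on-disk encoding format):

```python
# Figures taken from the log lines above
payload_bytes = 11_928_426   # omap-set-vals in=11928426b
old_inodes = 25_874          # old_inodes=25874 on the 'volumes' inode

per_old_inode = payload_bytes / old_inodes
print(f"~{per_old_inode:.0f} bytes per old_inode")                       # ~461
print(f"total ~{payload_bytes / 1024 / 1024:.1f} MiB in one dentry value")
```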