Ceph : Issues (https://tracker.ceph.com/, feed generated 2024-03-15T06:18:07Z)
CephFS - Backport #64941 (New): quincy: qa: Add multifs root_squash testcase (https://tracker.ceph.com/issues/64941, 2024-03-15T06:18:07Z, Backport Bot)
CephFS - Backport #64940 (New): reef: qa: Add multifs root_squash testcase (https://tracker.ceph.com/issues/64940, 2024-03-15T06:18:00Z, Backport Bot)
CephFS - Backport #64939 (New): squid: qa: Add multifs root_squash testcase (https://tracker.ceph.com/issues/64939, 2024-03-15T06:17:52Z, Backport Bot)
CephFS - Bug #64641 (Pending Backport): qa: Add multifs root_squash testcase (https://tracker.ceph.com/issues/64641, 2024-02-29T07:37:36Z, Kotresh Hiremath Ravishankar)
<p>Multifs root_squash test is missing. Add it.</p>

CephFS - Backport #64583 (In Progress): squid: qa: subvolume_snapshot_rm.sh stalls when waiting f... (https://tracker.ceph.com/issues/64583, 2024-02-27T05:51:16Z, Backport Bot)
<p><a class="external" href="https://github.com/ceph/ceph/pull/55830">https://github.com/ceph/ceph/pull/55830</a></p>

CephFS - Backport #64582 (In Progress): reef: qa: subvolume_snapshot_rm.sh stalls when waiting fo... (https://tracker.ceph.com/issues/64582, 2024-02-27T05:51:08Z, Backport Bot)
<p><a class="external" href="https://github.com/ceph/ceph/pull/55829">https://github.com/ceph/ceph/pull/55829</a></p>

CephFS - Backport #64581 (In Progress): quincy: qa: subvolume_snapshot_rm.sh stalls when waiting ... (https://tracker.ceph.com/issues/64581, 2024-02-27T05:51:01Z, Backport Bot)
<p><a class="external" href="https://github.com/ceph/ceph/pull/55828">https://github.com/ceph/ceph/pull/55828</a></p>

CephFS - Bug #64298 (New): CephFS metadata pool has large OMAP objects corresponding to strays (https://tracker.ceph.com/issues/64298, 2024-02-02T12:37:42Z, Alexander Patrakov)
<p>Hello developers,</p>
<p>A customer has a cluster which currently has 4 large OMAP objects (one old and three new) in its metadata pool. I am aware of <a class="external" href="https://tracker.ceph.com/issues/45333">https://tracker.ceph.com/issues/45333</a>, and in this comment <a class="external" href="https://tracker.ceph.com/issues/45333#note-6">https://tracker.ceph.com/issues/45333#note-6</a> a procedure for triggering directory fragmentation is described: reconstruct the directory path and list that directory to get it fragmented. In our case, however, this procedure is inapplicable.</p>
<pre>
# rados getxattr --pool=mainfs.meta 100290d9cb3.00000000 parent | ceph-dencoder type inode_backtrace_t import - decode dump_json
{
    "ino": 1100200385715,
    "ancestors": [
        {
            "dirino": 1543,
            "dname": "100290d9cb3",
            "version": 318702055
        },
        {
            "dirino": 256,
            "dname": "stray7",
            "version": 1405762425
        }
    ],
    "pool": 2,
    "old_pools": []
}
</pre>
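<p>The path that the list-the-directory workaround would need can be reconstructed mechanically from the decoded backtrace; a minimal sketch (hypothetical helper, operating on the <code>inode_backtrace_t</code> JSON shown above):</p>

```python
def path_from_backtrace(bt: dict) -> str:
    """Rebuild a path by walking the 'ancestors' list of a decoded
    inode_backtrace_t from leaf to root (the list is leaf-first)."""
    parts = [a["dname"] for a in bt["ancestors"]]
    return "/" + "/".join(reversed(parts))

# The backtrace decoded by ceph-dencoder above:
backtrace = {
    "ino": 1100200385715,
    "ancestors": [
        {"dirino": 1543, "dname": "100290d9cb3", "version": 318702055},
        {"dirino": 256, "dname": "stray7", "version": 1405762425},
    ],
    "pool": 2,
    "old_pools": [],
}

print(path_from_backtrace(backtrace))  # /stray7/100290d9cb3
```

<p>The reconstructed path lands under a stray directory rather than under the filesystem root, which is why it cannot simply be listed from a mount.</p>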
<p>As you can see, it is a stray. In fact, all three new large OMAP objects correspond to stray directories, which for this reason cannot be listed. Instructions should be provided on how to deal with this situation.</p>
<p>Regarding possible snapshots: the oldest snapshot of a directory that "officially" should have snapshots is dated January 28, 2024. There might be older snapshots of other directories; I have not searched for them and I don't know whether they exist.</p>
<p>Regarding the contents of one of the stray objects, I did this to get some statistics:</p>
<pre>
# ceph tell mds.0 dump tree "~mdsdir/stray7" > stray7.json
# ls -l stray7.json
-rw-r--r-- 1 root root 710084873 Feb 2 08:25 stray7.json
# wc -l stray7.json
23391176 stray7.json
# grep stray_prior_path stray7.json | wc -l
135172
# grep stray_prior_path stray7.json | grep -v '"stray_prior_path": ""' | wc -l
358
</pre>
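<p>The grep pipeline above can also be expressed as a small script; a sketch (illustrative only, assuming one JSON key per line as in the dump):</p>

```python
def count_stray_prior_paths(lines):
    """Mirror the grep pipeline above: count lines mentioning
    stray_prior_path, and those where it is non-empty."""
    total = nonempty = 0
    for line in lines:
        if "stray_prior_path" not in line:
            continue
        total += 1
        if '"stray_prior_path": ""' not in line:
            nonempty += 1
    return total, nonempty

# Hypothetical sample lines in the same shape as the stray7.json dump:
sample = [
    '"stray_prior_path": ""',
    '"stray_prior_path": "/dir_a/file1"',
    '"stray_prior_path": ""',
    '"stray_prior_path": "/dir_b/file2"',
]
print(count_stray_prior_paths(sample))  # (4, 2)
```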
<p>I can confirm that the entries with a non-empty stray_prior_path are "clustered" in two different directories. I have checked one entry manually: it does not exist as either a file or a directory, but its parent does, and the parent contains a lot of existing subdirectories named in a similar way.</p>

CephFS - Bug #64149 (New): valgrind+mds/client: gracefully shutdown the mds during valgrind tests (https://tracker.ceph.com/issues/64149, 2024-01-24T14:50:38Z, Venky Shankar <vshankar@redhat.com>)
<p>Currently, valgrind tests enable `mds_valgrind_exit` and the MDS just does an `exit(0)`, thereby not giving itself a chance to shut down gracefully (MDSRankDispatcher::shutdown()). AFAICS, there isn't anything stopping the MDS from doing so when running with valgrind - the MDS chooses to exit when asked to respawn, so it may as well clean things up before shutdown to keep valgrind happy.</p>

CephFS - Bug #63471 (New): client: error code inconsistency when accessing a mount of a deleted dir (https://tracker.ceph.com/issues/63471, 2023-11-07T16:51:06Z, Robert Vasek)
<p>Accessing a FUSE mount of a volume that has been deleted in the meantime results in ENOENT, and faulty info when listing the directory:</p>
<pre><code class="text syntaxhl"><span class="CodeRay">root# ls -l
ls: cannot access 'mnt': No such file or directory
total 0
...
d?????????? ? ? ? ? ? mnt
</span></code></pre>
<p>This is inconsistent with the kernel client's error code, EACCES.</p>
<p>ENOENT breaks automation tools (namely ceph-csi), as it leads them to think the mountpoint is already gone, when in fact a user merely deleted the cephfs directory from an external tool before stopping the workload accessing the mount. The automation tools could indeed be fixed to accommodate this case, but it would be nice if the FUSE and kernel clients returned error codes consistently.</p>
<p>Thanks!</p>
<p>Steps to reproduce:<br />1. Create a subvol<br />2. Mount it<br />3. Delete the subvol<br />4. Try to access the mount, observe the errors returned by ceph-fuse and the kernel client</p>
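<p>An automation tool probing the mountpoint can at least make the ambiguity explicit instead of treating ENOENT as proof the mount is gone; a minimal sketch (hypothetical helper, not actual ceph-csi code):</p>

```python
import errno
import os

def probe_mountpoint(path: str) -> str:
    """Classify a stat() on a mountpoint. With ceph-fuse, ENOENT may mean
    either a missing mountpoint or a deleted backing subvolume, so it
    cannot be taken as proof that the mount is already gone."""
    try:
        os.stat(path)
        return "ok"
    except OSError as e:
        if e.errno == errno.ENOENT:
            return "enoent-ambiguous"   # ceph-fuse behavior in this report
        if e.errno == errno.EACCES:
            return "eacces"             # kernel client behavior in this report
        return os.strerror(e.errno)

print(probe_mountpoint("/nonexistent-mountpoint"))  # enoent-ambiguous
```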
<p>Attaching logs:</p>
<p>ceph-fuse<br /><pre><code class="text syntaxhl"><span class="CodeRay">[root@rvasek-1-27-6-2-qqbsjsnaopix-node-0 tmp]# ceph-fuse -d -f -k pvc-2385dfc3-fb5d-4c7a-aebe-d7e9100f8f1a.cephx.keyring --id pvc-2385dfc3-fb5d-4c7a-aebe-d7e9100f8f1a -m 188.185.66.208:6790,188.184.94.56:6790,188.184.86.25:6790 -r /volumes/_nogroup/e78a0069-f781-46f5-b674-cce4ed0c57ef/ffe5424c-8fdb-44b6-8d19-9524c1b6f7be /mnt
2023-11-07T16:43:58.145+0000 7f21b5450700 -1 --2- 188.185.120.133:0/3239993688 >> v2:188.184.94.56:6790/0 conn(0x55ce41c93a00 0x55ce41c988b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner peer v2:188.184.94.56:6790/0 is using msgr V1 protocol
2023-11-07T16:43:58.145+0000 7f21b5c51700 -1 --2- 188.185.120.133:0/3239993688 >> v2:188.184.86.25:6790/0 conn(0x55ce41c930d0 0x55ce41c934c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner peer v2:188.184.86.25:6790/0 is using msgr V1 protocol
2023-11-07T16:43:58.154+0000 7f21be585580 0 ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable), process ceph-fuse, pid 4167065
2023-11-07T16:43:58.156+0000 7f21be585580 -1 init, newargv = 0x55ce41ca0950 newargc=16
ceph-fuse[4167065]: starting ceph client
FUSE library version: 2.9.7
ceph-fuse[4167065]: starting fuse
unique: 2, opcode: INIT (26), nodeid: 0, insize: 104, pid: 0
INIT: 7.38
flags=0x73fffffb
max_readahead=0x00020000
INIT: 7.19
flags=0x0000043b
max_readahead=0x00020000
max_write=0x00020000
max_background=0
congestion_threshold=0
unique: 2, success, outsize: 40
unique: 4, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 4167095
unique: 4, success, outsize: 120
unique: 6, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 4167095
unique: 6, success, outsize: 120
unique: 8, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 4167306
unique: 8, success, outsize: 120
unique: 10, opcode: GETXATTR (22), nodeid: 1, insize: 72, pid: 4167306
unique: 10, error: -95 (Operation not supported), outsize: 16
unique: 12, opcode: GETXATTR (22), nodeid: 1, insize: 64, pid: 4167306
unique: 12, error: -95 (Operation not supported), outsize: 16
unique: 14, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 4167411
unique: 14, success, outsize: 120
unique: 16, opcode: GETXATTR (22), nodeid: 1, insize: 72, pid: 4167411
unique: 16, error: -95 (Operation not supported), outsize: 16
unique: 18, opcode: GETXATTR (22), nodeid: 1, insize: 64, pid: 4167411
unique: 18, error: -95 (Operation not supported), outsize: 16
unique: 20, opcode: OPENDIR (27), nodeid: 1, insize: 48, pid: 4167411
unique: 20, success, outsize: 32
unique: 22, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 4167411
unique: 22, success, outsize: 120
unique: 24, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 4167411
unique: 24, success, outsize: 80
unique: 26, opcode: READDIR (28), nodeid: 1, insize: 80, pid: 4167411
unique: 26, success, outsize: 16
unique: 28, opcode: RELEASEDIR (29), nodeid: 1, insize: 64, pid: 0
unique: 28, success, outsize: 16
<<< DELETING THE SUBVOL NOW >>>
2023-11-07T16:44:52.089+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
unique: 30, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 4167759
2023-11-07T16:45:05.112+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.114+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.116+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.118+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.120+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.122+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.125+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.126+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.128+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.130+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.132+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.135+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.137+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.139+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.141+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.143+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.145+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.147+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.150+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.151+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.154+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.156+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.158+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
2023-11-07T16:45:05.159+0000 7f219d7fa700 0 client.1712074284 ms_handle_remote_reset on v2:188.184.83.152:6800/665755794
unique: 30, error: -2 (No such file or directory), outsize: 16
<<< umount >>>
unique: 30, error: -2 (No such file or directory), outsize: 16
unique: 32, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 4168557
unique: 32, error: -2 (No such file or directory), outsize: 16
ceph-fuse[4167065]: fuse finished with error 0 and tester_r 0
</span></code></pre></p>
<p>Kernel logs:<br /><pre><code class="text syntaxhl"><span class="CodeRay">[root@rvasek-1-27-6-2-qqbsjsnaopix-node-0 core]# dmesg | grep -i ceph
[432918.048321] libceph: mon1 (1)188.184.94.56:6790 session established
[432918.050011] libceph: client1695439645 fsid dd535a7e-4647-4bee-853d-f34112615f81
[433038.193633] libceph: mds0 (1)188.184.83.152:6801 socket closed (con state OPEN)
[433039.056883] libceph: mds0 (1)188.184.83.152:6801 session reset
[433039.056890] ceph: mds0 closed our session
[433039.056891] ceph: mds0 reconnect start
[433039.058421] ceph: mds0 reconnect denied
[433040.732088] libceph: mds0 (1)188.184.83.152:6801 socket closed (con state V1_CONNECT_MSG)
</span></code></pre></p>
<p>Host:<br /><pre><code class="text syntaxhl"><span class="CodeRay"># uname -a
Linux rvasek-1-27-6-2-qqbsjsnaopix-node-0 6.4.15-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Sep 7 00:25:01 UTC 2023 x86_64 GNU/Linux
</span></code></pre></p>
<p>Cheers,<br />Robert Vasek</p>

CephFS - Bug #63132 (Pending Backport): qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_... (https://tracker.ceph.com/issues/63132, 2023-10-07T17:32:32Z, Venky Shankar <vshankar@redhat.com>)
<p>/a/yuriw-2023-10-06_22:26:38-fs-wip-yuri3-testing-2023-10-06-0948-quincy-distro-default-smithi/7415802</p>
<p>Probably not related to cephfs as such, but let's debug and hand it over to the rados folks if nothing stands out.</p>

CephFS - Bug #62484 (Triaged): qa: ffsb.sh test failure (https://tracker.ceph.com/issues/62484, 2023-08-17T17:04:47Z, Patrick Donnelly <pdonnell@redhat.com>)
<pre>
2023-08-09T03:00:49.009 INFO:tasks.workunit.client.0.smithi129.stdout:Wrote -1 instead of 4096 bytes.
2023-08-09T03:00:49.009 INFO:tasks.workunit.client.0.smithi129.stdout:Probably out of disk space
2023-08-09T03:00:49.009 INFO:tasks.workunit.client.0.smithi129.stderr:write: Input/output error
2023-08-09T03:00:49.201 DEBUG:teuthology.orchestra.run:got remote process result: 1
2023-08-09T03:00:49.202 INFO:tasks.workunit:Stopping ['suites/ffsb.sh'] on client.0...
2023-08-09T03:00:49.202 DEBUG:teuthology.orchestra.run.smithi129:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2023-08-09T03:00:50.309 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/run_tasks.py", line 105, in run_tasks
manager = run_one_task(taskname, ctx=ctx, config=config)
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/run_tasks.py", line 83, in run_one_task
return task(**kwargs)
File "/home/teuthworker/src/github.com_ceph_ceph-c_c99915ec04212fb433eef93cd14cd65cd72d46b4/qa/tasks/workunit.py", line 145, in task
_spawn_on_all_clients(ctx, refspec, all_tasks, config.get('env'),
File "/home/teuthworker/src/github.com_ceph_ceph-c_c99915ec04212fb433eef93cd14cd65cd72d46b4/qa/tasks/workunit.py", line 295, in _spawn_on_all_clients
p.spawn(_run_tests, ctx, refspec, role, [unit], env,
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/parallel.py", line 84, in __exit__
for result in self:
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/parallel.py", line 98, in __next__
resurrect_traceback(result)
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/parallel.py", line 30, in resurrect_traceback
raise exc.exc_info[1]
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/parallel.py", line 23, in capture_traceback
return func(*args, **kwargs)
File "/home/teuthworker/src/github.com_ceph_ceph-c_c99915ec04212fb433eef93cd14cd65cd72d46b4/qa/tasks/workunit.py", line 424, in _run_tests
remote.run(
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/orchestra/remote.py", line 522, in run
r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/orchestra/run.py", line 455, in run
r.wait()
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/orchestra/run.py", line 161, in wait
self._raise_for_status()
File "/home/teuthworker/src/git.ceph.com_teuthology_7fda95956ac10132c9b74016ba832db907df09fa/teuthology/orchestra/run.py", line 181, in _raise_for_status
raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (workunit test suites/ffsb.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c99915ec04212fb433eef93cd14cd65cd72d46b4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
</pre>
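<p>Note that ffsb's "Probably out of disk space" message is just a guess printed for any short write; the error actually reported here was EIO (Input/output error), not ENOSPC. A sketch of separating the two cases (illustrative, not ffsb code):</p>

```python
import errno
import os

def classify_short_write(nwritten: int, err: int) -> str:
    """A failed write() returns -1 with errno set; the errno, not the
    short count, tells whether the disk is full (ENOSPC) or the I/O
    path itself failed (EIO), as in this test run."""
    if nwritten >= 0:
        return "short-write"
    if err == errno.ENOSPC:
        return "out-of-space"
    if err == errno.EIO:
        return "io-error"
    return os.strerror(err)

print(classify_short_write(-1, errno.EIO))  # io-error
```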
<p>From: /teuthology/yuriw-2023-08-09_01:12:27-fs-wip-yuri5-testing-2023-08-08-0807-quincy-distro-default-smithi/7363487/teuthology.log</p>
<p>Looks like a new instance of <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: ffsb.sh test failure (Resolved)" href="https://tracker.ceph.com/issues/54461">#54461</a></p>

CephFS - Feature #50150 (Pending Backport): qa: begin grepping kernel logs for kclient warnings/f... (https://tracker.ceph.com/issues/50150, 2021-04-06T01:50:13Z, Patrick Donnelly <pdonnell@redhat.com>)
<p>Right now, TMK, we are not confirming that there are no warnings/errors/lockups in the kclient before passing a test. We normally search for core dumps and errors in the ceph cluster logs, but not much else.</p>
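<p>A sketch of what such a kernel-log check could look like (the pattern list is purely illustrative; deciding what to actually grep for is the open question of this ticket):</p>

```python
import re

# Illustrative patterns only, not an agreed-upon set.
KCLIENT_PATTERNS = [
    r"WARNING:.*ceph",
    r"BUG:.*ceph",
    r"hung task",
    r"ceph: mds\d+ reconnect denied",
]

def scan_kernel_log(lines):
    """Return log lines that match any suspicious kclient pattern."""
    regexes = [re.compile(p) for p in KCLIENT_PATTERNS]
    return [line for line in lines if any(r.search(line) for r in regexes)]

# Hypothetical dmesg excerpt in the shape seen in tracker #63471 above:
log = [
    "[433039.056883] libceph: mds0 (1)188.184.83.152:6801 session reset",
    "[433039.058421] ceph: mds0 reconnect denied",
]
print(scan_kernel_log(log))  # only the 'reconnect denied' line
```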
<p>@Jeff: what would you like to see grepped for?</p>

CephFS - Feature #48577 (In Progress): pybind/mgr/volumes: support snapshots on subvolumegroups (https://tracker.ceph.com/issues/48577, 2020-12-12T03:10:46Z, Patrick Donnelly <pdonnell@redhat.com>)
<p>We removed this recently, but I think it needs to come back based on new developments in Kubernetes with VolumeGroups. The main challenge of this ticket is to confirm that this is safe to do with the new subvolume flag.</p>

CephFS - Bug #41069 (Need More Info): nautilus: test_subvolume_group_create_with_desired_mode fai... (https://tracker.ceph.com/issues/41069, 2019-08-05T10:50:47Z, Venky Shankar <vshankar@redhat.com>)
<p>Seen here in a nautilus run: <a class="external" href="http://qa-proxy.ceph.com/teuthology/yuriw-2019-07-30_20:57:10-fs-wip-yuri-testing-2019-07-30-1543-nautilus-testing-basic-smithi/4165630/teuthology.log">http://qa-proxy.ceph.com/teuthology/yuriw-2019-07-30_20:57:10-fs-wip-yuri-testing-2019-07-30-1543-nautilus-testing-basic-smithi/4165630/teuthology.log</a></p>
<pre>
2019-07-31T01:04:51.292 INFO:tasks.cephfs_test_runner:
2019-07-31T01:04:51.293 INFO:tasks.cephfs_test_runner:======================================================================
2019-07-31T01:04:51.293 INFO:tasks.cephfs_test_runner:FAIL: test_subvolume_group_create_with_desired_mode (tasks.cephfs.test_volumes.TestVolumes)
2019-07-31T01:04:51.293 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2019-07-31T01:04:51.293 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2019-07-31T01:04:51.293 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_wip-yuri-testing-2019-07-30-1543-nautilus/qa/tasks/cephfs/test_volumes.py", line 252, in test_subvolume_group_create_with_desired_mode
2019-07-31T01:04:51.293 INFO:tasks.cephfs_test_runner:    self.assertEqual(actual_mode2, expected_mode2)
2019-07-31T01:04:51.294 INFO:tasks.cephfs_test_runner:AssertionError: '755' != '777'
</pre>
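<p>One plausible mechanism for a '755' != '777' mismatch (an assumption offered for illustration, not a confirmed diagnosis of this failure) is the process umask filtering the mode passed to mkdir:</p>

```python
import os
import stat
import tempfile

# Demonstrates how umask can turn a requested 0o777 into 0o755 on mkdir,
# the same shape as the failing assertion above.
with tempfile.TemporaryDirectory() as tmp:
    old = os.umask(0o022)          # a common default umask
    try:
        d = os.path.join(tmp, "grp")
        os.mkdir(d, 0o777)         # desired mode 777
        mode = stat.S_IMODE(os.stat(d).st_mode)
        print(oct(mode))           # 0o755, not 0o777
    finally:
        os.umask(old)              # restore the previous umask
```

<p>Code that must guarantee a desired mode typically follows mkdir with an explicit chmod, which is not subject to umask.</p>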