Backport #53861

pacific: qa: tasks.cephfs.fuse_mount:mount command failed

Added by Backport Bot about 2 years ago. Updated over 1 year ago.

Status: Resolved
Priority: Normal
Assignee: Xiubo Li
Target version: v16.2.8
Release: pacific
Crash signature (v1):
Crash signature (v2):


Related issues

Copied from CephFS - Bug #51705: qa: tasks.cephfs.fuse_mount:mount command failed (Resolved)

History

#1 Updated by Backport Bot about 2 years ago

  • Copied from Bug #51705: qa: tasks.cephfs.fuse_mount:mount command failed added

#2 Updated by Xiubo Li about 2 years ago

  • Assignee set to Xiubo Li

#3 Updated by Xiubo Li about 2 years ago

  • Description updated (diff)
  • Status changed from New to In Progress

#4 Updated by Yuri Weinstein about 2 years ago

Backport Bot wrote:

https://github.com/ceph/ceph/pull/44621

merged

#5 Updated by Loïc Dachary about 2 years ago

  • Status changed from In Progress to Resolved
  • Target version set to v16.2.8

This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44621
merge commit 88da96ee21f8cf32856e17a6ca9970d1fd859bce (v16.2.7-252-g88da96ee21f)

#7 Updated by Xiubo Li over 1 year ago

Kotresh Hiremath Ravishankar wrote:

Xiubo,

Looks like this is seen again in this pacific run?

https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi/6946337

Seems like a different issue:

2022-07-24T15:47:30.290 DEBUG:teuthology.orchestra.run.smithi137:> sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections
2022-07-24T15:47:30.348 INFO:teuthology.orchestra.run.smithi137.stderr:mount: /sys/fs/fuse/connections: /sys/fs/fuse/connections already mounted or mount point busy.
2022-07-24T15:47:30.349 INFO:tasks.cephfs.fuse_mount.ceph-fuse.vol_data_isolated.smithi137.stderr:2022-07-24 15:47:30.343 7f3e031a7700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
2022-07-24T15:47:30.349 INFO:tasks.cephfs.fuse_mount.ceph-fuse.vol_data_isolated.smithi137.stderr:2022-07-24 15:47:30.343 7f3e029a6700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
2022-07-24T15:47:30.393 DEBUG:teuthology.orchestra.run:got remote process result: 32
2022-07-24T15:47:30.397 INFO:teuthology.orchestra.run:Running command with timeout 900
2022-07-24T15:47:30.398 DEBUG:teuthology.orchestra.run.smithi137:> ls /sys/fs/fuse/connections
2022-07-24T15:47:30.399 INFO:tasks.cephfs.fuse_mount.ceph-fuse.vol_data_isolated.smithi137.stderr:failed to fetch mon config (--no-mon-config to skip)
2022-07-24T15:47:30.530 INFO:tasks.cephfs.fuse_mount.ceph-fuse.vol_data_isolated.smithi137.stderr:daemon-helper: command failed with exit status 1
2022-07-24T15:47:30.549 DEBUG:teuthology.orchestra.run:got remote process result: 1
2022-07-24T15:47:30.550 INFO:tasks.cephfs.fuse_mount:mount command failed.
2022-07-24T15:47:30.551 ERROR:teuthology.run_tasks:Saw exception from tasks.

#8 Updated by Xiubo Li over 1 year ago

Xiubo Li wrote:

Kotresh Hiremath Ravishankar wrote:

Xiubo,

Looks like this is seen again in this pacific run?

https://pulpito.ceph.com/yuriw-2022-07-24_15:34:38-fs-wip-yuri2-testing-2022-07-15-0755-pacific-distro-default-smithi/6946337

Seems like a different issue:

[...]

2022-07-24T15:47:25.443 INFO:tasks.workunit.client.0.smithi137.stderr:+ local T=/tmp/tmp.t0CieZarQn
2022-07-24T15:47:25.444 INFO:tasks.workunit.client.0.smithi137.stderr:+ tee /tmp/tmp.t0CieZarQn
2022-07-24T15:47:25.444 INFO:tasks.workunit.client.0.smithi137.stderr:Traceback (most recent call last):
2022-07-24T15:47:25.444 INFO:tasks.workunit.client.0.smithi137.stderr:  File "<stdin>", line 2, in <module>
2022-07-24T15:47:25.445 INFO:tasks.workunit.client.0.smithi137.stderr:ModuleNotFoundError: No module named 'ceph_volume_client'
2022-07-24T15:47:25.445 INFO:tasks.workunit.client.0.smithi137.stderr:+ sudo touch -- /etc/ceph/ceph.client.vol_data_isolated.keyring
2022-07-24T15:47:25.497 INFO:tasks.workunit.client.0.smithi137.stderr:+ sudo ceph-authtool /etc/ceph/ceph.client.vol_data_isolated.keyring --import-keyring /tmp/tmp.t0CieZarQn

It failed to generate the keyring contents for client.vol_data_isolated, so during authentication the client sends a corrupt payload to the monitor, and the monitor simply rejects it:

2022-07-24 15:47:30.344 7f52d6623700 10 cephx server client.vol_data_isolated: handle_request get_auth_session_key for client.vol_data_isolated
2022-07-24 15:47:30.344 7f52d7625700  1 -- [v2:172.21.15.29:3300/0,v1:172.21.15.29:6789/0] <== client.? v1:172.21.15.137:0/2195577994 1 ==== auth(proto 0 42 bytes epoch 0) v1 ==== 72+0+0 (unknown 479712082 0 0) 0x5568e4736d80 con 0x5568e49be880
2022-07-24 15:47:30.344 7f52d6623700  0 cephx server client.vol_data_isolated: handle_request failed to decode CephXAuthenticate: buffer::end_of_buffer
2022-07-24 15:47:30.344 7f52d7625700 10 mon.a@0(leader) e1 _ms_dispatch new session 0x5568e4738400 MonSession(client.? v1:172.21.15.137:0/2195577994 is open , features 0x3ffddff8ffecffff (luminous)) features 0x3ffddff8ffecffff
2022-07-24 15:47:30.344 7f52d7625700 20 mon.a@0(leader) e1  entity_name  global_id 0 (none) caps
2022-07-24 15:47:30.344 7f52d7625700 10 mon.a@0(leader).paxosservice(auth 1..3) dispatch 0x5568e4736d80 auth(proto 0 42 bytes epoch 0) v1 from client.? v1:172.21.15.137:0/2195577994 con 0x5568e49be880
2022-07-24 15:47:30.344 7f52d6623700  1 --2- [v2:172.21.15.29:3300/0,v1:172.21.15.29:6789/0] >>  conn(0x5568e49bfa80 0x5568e4693100 secure :-1 s=AUTH_ACCEPTING_MORE pgs=0 cs=0 l=1 rev1=1 rx=0 tx=0)._auth_bad_method auth_method 2 r (1) Operation not permitted, allowed_methods [2], allowed_modes [2,1]
2022-07-24 15:47:30.344 7f52d7625700  5 mon.a@0(leader).paxos(paxos active c 1..100) is_readable = 1 - now=2022-07-24 15:47:30.347662 lease_expire=2022-07-24 15:47:34.440663 has v0 lc 100

I will create a new PR to fix this and will add the detailed logs to it.
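
For illustration only (this is not the actual fix, which is tracked separately below): a minimal Python sketch of a pre-mount guard that checks the generated keyring with ceph-authtool before ceph-fuse is started, so an empty keyring (e.g. from the ceph_volume_client import failure above) fails with a clear message instead of surfacing as a monitor-side decode error. The keyring path is taken from the log above; the helper name is hypothetical.

# Hypothetical guard, not the actual fix: verify that the keyring the workunit
# generated actually contains a key before attempting the ceph-fuse mount, so an
# empty file is reported directly instead of as a CephXAuthenticate decode error.
import subprocess
import sys

KEYRING = "/etc/ceph/ceph.client.vol_data_isolated.keyring"  # path from the log above


def keyring_has_key(path: str) -> bool:
    """Return True if ceph-authtool can list at least one key in the keyring."""
    proc = subprocess.run(
        ["ceph-authtool", path, "--list"],
        capture_output=True,
        text=True,
    )
    return proc.returncode == 0 and "key = " in proc.stdout


if not keyring_has_key(KEYRING):
    sys.exit(f"{KEYRING} is empty or unreadable; refusing to mount with a bogus key")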

#9 Updated by Xiubo Li over 1 year ago

Created a new tracker to fix it: https://tracker.ceph.com/issues/57083.
