Bug #12506

closed

"Fuse mount failed to populate" error

Added by Yuri Weinstein over 8 years ago. Updated over 8 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Category:
Testing
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
upgrade/hammer
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Run: http://pulpito.ceph.com/teuthology-2015-07-27_16:05:09-upgrade:hammer-hammer-distro-basic-vps/
Jobs: ['988514', '988563']
Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2015-07-27_16:05:09-upgrade:hammer-hammer-distro-basic-vps/988563/teuthology.log

2015-07-27T17:15:11.129 INFO:teuthology.orchestra.run.vpm197:Running: 'sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections'
2015-07-27T17:15:11.141 INFO:teuthology.orchestra.run.vpm197.stderr:mount: /sys/fs/fuse/connections already mounted or /sys/fs/fuse/connections busy
2015-07-27T17:15:11.141 INFO:teuthology.orchestra.run.vpm197.stderr:mount: according to mtab, none is already mounted on /sys/fs/fuse/connections
2015-07-27T17:15:11.142 INFO:teuthology.orchestra.run.vpm197:Running: 'ls /sys/fs/fuse/connections'
2015-07-27T17:15:11.482 INFO:tasks.mon_thrash.ceph_manager:quorum_status is {"election_epoch":12,"quorum":[0,2],"quorum_names":["a","c"],"quorum_leader_name":"a","monmap":{"epoch":1,"fsid":"f9f9b968-59db-4e09-8499-70f2329637ed","modified":"2015-07-27 23:51:09.257018","created":"2015-07-27 23:51:09.257018","mons":[{"rank":0,"name":"a","addr":"10.214.130.166:6789\/0"},{"rank":1,"name":"b","addr":"10.214.130.197:6789\/0"},{"rank":2,"name":"c","addr":"10.214.130.197:6790\/0"}]}}

2015-07-27T17:15:11.483 INFO:tasks.mon_thrash.ceph_manager:quorum is size 2
2015-07-27T17:15:11.483 INFO:teuthology.orchestra.run.vpm166:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph -m 10.214.130.197:6790 mon_status'
2015-07-27T17:15:11.692 INFO:teuthology.orchestra.run.vpm166:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph -m 10.214.130.166:6789 mon_status'
2015-07-27T17:15:11.929 INFO:tasks.mon_thrash.mon_thrasher:waiting for 20.0 secs before reviving monitors
2015-07-27T17:15:12.213 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 56, in run_tasks
    manager.__enter__()
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_hammer/tasks/ceph_fuse.py", line 114, in task
    mount.mount()
  File "/var/lib/teuthworker/src/ceph-qa-suite_hammer/tasks/cephfs/fuse_mount.py", line 109, in mount
    waited
RuntimeError: Fuse mount failed to populate /sys/ after 31 seconds
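The failure comes from the populate check in fuse_mount.py: the task snapshots the entries under /sys/fs/fuse/connections before starting ceph-fuse, then polls for a new entry to appear (each live FUSE mount shows up there as a numbered directory). A minimal sketch of that loop — simplified, with hypothetical names, not the actual teuthology code:

```python
import time

def wait_for_new_connection(list_connections, pre_mount, timeout=30, interval=1):
    """Poll list_connections() until an entry not in pre_mount appears.

    list_connections is a callable returning the current directory names
    under /sys/fs/fuse/connections; a new entry means the FUSE mount is
    live. Raises RuntimeError on timeout, as in the traceback above.
    """
    waited = 0
    while True:
        new = set(list_connections()) - set(pre_mount)
        if new:
            return new
        time.sleep(interval)
        waited += interval
        if waited > timeout:
            raise RuntimeError(
                "Fuse mount failed to populate /sys/ after "
                "{0} seconds".format(waited))
```

In this bug, no new connection ever appears, so the loop runs out the timeout and raises.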
#1

Updated by Greg Farnum over 8 years ago

  • Project changed from Ceph to CephFS
  • Category set to Testing
#2

Updated by Greg Farnum over 8 years ago

  • Status changed from New to 12
  • Priority changed from Normal to Urgent

This is continuing to cause trouble in the nightlies. http://pulpito.ceph.com/teuthology-2015-09-14_23:04:02-fs-master---basic-multi/1057517 for instance, but it's popped up a lot in other places too!

2015-09-15T19:01:22.837 INFO:teuthology.run_tasks:Running task ceph_fuse...
2015-09-15T19:01:22.839 INFO:tasks.ceph_fuse:Mounting ceph-fuse clients...
2015-09-15T19:01:22.839 INFO:tasks.cephfs.fuse_mount:Client client.0 config is {}
2015-09-15T19:01:22.839 INFO:tasks.cephfs.fuse_mount:Mounting ceph-fuse client.0 at ubuntu@plana18.front.sepia.ceph.com /home/ubuntu/cephtest/mnt.0...
2015-09-15T19:01:22.839 INFO:teuthology.orchestra.run.plana18:Running: 'mkdir -- /home/ubuntu/cephtest/mnt.0'
2015-09-15T19:01:22.845 INFO:teuthology.orchestra.run.plana18:Running: 'sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections'
2015-09-15T19:01:22.920 INFO:teuthology.orchestra.run.plana18.stderr:mount: /sys/fs/fuse/connections already mounted or /sys/fs/fuse/connections busy
2015-09-15T19:01:22.921 INFO:teuthology.orchestra.run.plana18.stderr:mount: according to mtab, none is already mounted on /sys/fs/fuse/connections
2015-09-15T19:01:22.922 INFO:teuthology.orchestra.run.plana18:Running: 'ls /sys/fs/fuse/connections'
2015-09-15T19:01:22.991 INFO:tasks.cephfs.fuse_mount:Pre-mount connections: []
2015-09-15T19:01:22.991 INFO:teuthology.orchestra.run.plana18:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --name client.0 /home/ubuntu/cephtest/mnt.0'
2015-09-15T19:01:23.058 INFO:teuthology.orchestra.run.plana18:Running: 'sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections'
2015-09-15T19:01:23.070 INFO:teuthology.orchestra.run.plana18.stderr:mount: /sys/fs/fuse/connections already mounted or /sys/fs/fuse/connections busy
2015-09-15T19:01:23.070 INFO:teuthology.orchestra.run.plana18.stderr:mount: according to mtab, none is already mounted on /sys/fs/fuse/connections
2015-09-15T19:01:23.071 INFO:teuthology.orchestra.run.plana18:Running: 'ls /sys/fs/fuse/connections'
2015-09-15T19:01:23.129 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.plana18.stdout:ceph-fuse[7758]: starting ceph client
2015-09-15T19:01:23.129 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.plana18.stderr:2015-09-15 19:01:23.128395 7fce8e3e37c0 -1 init, newargv = 0x7fce90fb5140 newargc=11
2015-09-15T19:01:24.143 INFO:teuthology.orchestra.run.plana18:Running: 'sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections'
2015-09-15T19:01:28.618 INFO:teuthology.orchestra.run.plana18.stderr:mount: /sys/fs/fuse/connections already mounted or /sys/fs/fuse/connections busy
2015-09-15T19:01:28.618 INFO:teuthology.orchestra.run.plana18.stderr:mount: according to mtab, none is already mounted on /sys/fs/fuse/connections

...and so on, repeating until it times out.
#3

Updated by Zheng Yan over 8 years ago

  • Assignee set to Zheng Yan

This timeout only happens for jobs that include clusters/standby-replay.yaml. I reproduced the issue locally by setting "mds standby replay = true".
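For reference, that reproduction setting is a ceph.conf fragment along these lines (the section placement shown is an assumption for illustration; the teuthology jobs get it from the standby-replay cluster yaml):

```ini
# Reproduction per the comment above: enable standby-replay on the MDS.
# Placing it under [mds] is an assumption; a global section also works.
[mds]
        mds standby replay = true
```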

#4

Updated by Zheng Yan over 8 years ago

  • Status changed from 12 to Fix Under Review
#5

Updated by Greg Farnum over 8 years ago

  • Status changed from Fix Under Review to In Progress
#6

Updated by Greg Farnum over 8 years ago

  • Status changed from In Progress to Resolved