Bug #22524

NameError: global name 'get_mds_map' is not defined

Added by Ramana Raja over 6 years ago. Updated almost 5 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: -
Target version: -
% Done: 0%
Source: Development
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): VolumeClient
Labels (FS): Manila
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hit an error while running test_volume_client on a dev vstart cluster (Ceph master) using:
  1. LD_LIBRARY_PATH=`pwd`/lib PYTHONPATH=/home/rraja/git/teuthology/:`pwd`/../qa:`pwd`/../src/pybind:`pwd`/lib/cython_modules/lib.2 python `pwd`/../qa/tasks/vstart_runner.py --interactive tasks.cephfs.test_volume_client.TestVolumeClient.test_lifecycle

The traceback:

2017-12-21 17:52:51,976.976 INFO:__main__:Running ['python', '-c', '\nfrom ceph_volume_client import CephFSVolumeClient, VolumePath\nimport logging\nlog = logging.getLogger("ceph_volume_client")\nlog.addHandler(logging.StreamHandler())\nlog.setLevel(logging.DEBUG)\nvc = CephFSVolumeClient("manila", "./ceph.conf", "ceph", "/myprefix", "mynsprefix_")\nvc.connect()\n\nvp = VolumePath("grpid", "volid")\nvc.deauthorize(vp, "guest")\nvc.evict("guest")\n\nvc.disconnect()\n ']
Connecting to RADOS with config ./ceph.conf...
Connection to RADOS complete
Connecting to cephfs...
CephFS initializing...
CephFS mounting...
Connection to cephfs complete
Recovering from partial auth updates (if any)...
Recovered from partial auth updates (if any).
evict clients with auth_name=guest
Traceback (most recent call last):
  File "<string>", line 12, in <module>
  File "/home/rraja/git/ceph/src/pybind/ceph_volume_client.py", line 394, in evict
    mds_map = get_mds_map()
NameError: global name 'get_mds_map' is not defined

The bug was introduced by https://github.com/ceph/ceph/commit/cbbdd0da7d40e4e5def5cc0b9a9250348e71019f#diff-8625369b924524f064e083e735bd34beR394
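For context, the failure is the common Python pitfall of referencing an instance method as a bare name: inside evict(), get_mds_map() is looked up as a module-level global, which does not exist, so the lookup fails only at call time. A minimal sketch of the bug class and the obvious fix, using a hypothetical class rather than the actual CephFSVolumeClient code:

    class VolumeClientSketch(object):
        def get_mds_map(self):
            # Stand-in for the real lookup (the client derives this
            # from a mon command); the return value is hypothetical.
            return {"up": {}}

        def evict(self, auth_name):
            # Buggy form from the commit:
            #   mds_map = get_mds_map()
            # Python resolves the bare name as a global and raises
            # "NameError: global name 'get_mds_map' is not defined".
            # Fixed form: qualify the call with self.
            mds_map = self.get_mds_map()
            return mds_map

    # e.g. VolumeClientSketch().evict("guest") now returns the map
    # instead of raising NameError.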


Related issues

Related to CephFS - Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster (Resolved)

History

#1 Updated by Ramana Raja over 6 years ago

  • Status changed from In Progress to Fix Under Review

#2 Updated by Patrick Donnelly over 6 years ago

  • Status changed from Fix Under Review to Pending Backport
  • Backport set to luminous

#3 Updated by Ramana Raja over 6 years ago

  • Status changed from Pending Backport to Resolved
  • Backport deleted (luminous)

We don't need to backport this fix to luminous. The commit that introduced
this bug, https://github.com/ceph/ceph/commit/cbbdd0da7d40e4e5def5cc0b9a9250348e71019f,
resolves ticket http://tracker.ceph.com/issues/20596, which is not yet slated
for backport.

#4 Updated by Patrick Donnelly about 5 years ago

  • Category deleted (87)
  • Labels (FS) Manila added

#5 Updated by Nathan Cutler almost 5 years ago

Note: the luminous backport is tracked by #40182, where commit cbbdd0da7d40e4e5def5cc0b9a9250348e71019f is also being backported to luminous.

#6 Updated by Nathan Cutler almost 5 years ago

  • Related to Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster added
