Bug #27210

custom cluster names fail on filestore trigger

Added by Alfredo Deza 4 months ago. Updated 2 months ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Target version:
-
Start date:
08/23/2018
Due date:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:

Description

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/main.py", line 153, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 182, in dispatch
    instance.main()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/main.py", line 38, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 182, in dispatch
    instance.main()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/trigger.py", line 70, in main
    Activate(['--auto-detect-objectstore', osd_id, osd_uuid]).main()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/activate.py", line 318, in main
    self.activate(args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/activate.py", line 238, in activate
    return activate_filestore(lvs)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/activate.py", line 63, in activate_filestore
    prepare_utils.mount_osd(source, osd_id, is_vdo=is_vdo)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/util/prepare.py", line 206, in mount_osd
    flags = conf.ceph.get_list(
AttributeError: 'property' object has no attribute 'get_list'

This happens because ceph.conf is never loaded by main.py, so when the code later pokes at conf.ceph it finds a bare property object with nothing behind it.
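A minimal sketch of the failure mode (illustrative only, not the actual ceph-volume code): when `ceph` is defined as a property but is accessed on the class itself rather than on a configured instance, Python hands back the property descriptor object, which has no get_list() method.

```python
class Conf(object):
    @property
    def ceph(self):
        return None  # would hold the parsed ceph.conf once loaded

# Accessing the property on the class yields the descriptor itself:
print(type(Conf.ceph))  # <class 'property'>

try:
    Conf.ceph.get_list('osd', 'osd mount options xfs')
except AttributeError as exc:
    print(exc)  # 'property' object has no attribute 'get_list'
```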

By that point the activate script should be aware of the custom cluster name and should try to load the matching conf file. If the file cannot be located, it should fall back to defaults, so that missing mount flags do not prevent the OSD from starting up.
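The fallback behavior described above could look roughly like the following. This is a hedged sketch, not the ceph-volume implementation: `get_mount_flags` and `DEFAULT_MOUNT_FLAGS` are illustrative names, and the real code reads the flags through its own configuration wrapper rather than configparser directly.

```python
import os

try:
    import configparser              # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2, as in the traceback

# Fallback flags used when the cluster's conf cannot be located/parsed
DEFAULT_MOUNT_FLAGS = 'rw,noatime,inode64'

def get_mount_flags(cluster_name='ceph', conf_dir='/etc/ceph'):
    """Return osd mount flags for the given cluster name, falling back
    to defaults instead of crashing when the conf file is missing."""
    conf_path = os.path.join(conf_dir, '%s.conf' % cluster_name)
    parser = configparser.ConfigParser()
    try:
        if not parser.read(conf_path):
            # conf file for the custom cluster name was not found;
            # do not block OSD activation, use the defaults
            return DEFAULT_MOUNT_FLAGS
        return parser.get('osd', 'osd mount options xfs')
    except Exception:
        return DEFAULT_MOUNT_FLAGS
```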

History

#1 Updated by Andrew Mitroshin 3 months ago

Workaround:

Add the line

Environment=CEPH_CONF=/etc/ceph/yourclustername.conf

to the [Service] section of /usr/lib/systemd/system/ceph-volume@.service,

then run systemctl daemon-reload
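Rather than editing the packaged unit file (which a package upgrade may overwrite), the same workaround can be applied as a systemd drop-in override; the path and "yourclustername" below are placeholders:

```ini
# /etc/systemd/system/ceph-volume@.service.d/cluster-name.conf
[Service]
Environment=CEPH_CONF=/etc/ceph/yourclustername.conf
```

After creating the drop-in, run systemctl daemon-reload so systemd picks it up.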

#2 Updated by Alfredo Deza 3 months ago

  • Status changed from New to Need Review

#3 Updated by Alfredo Deza 2 months ago

  • Status changed from Need Review to Resolved
