Bug #51765 (closed)

executing create_from_spec_one failed / KeyError: 'ceph.cluster_fsid'

Added by Tobias Fischer over 2 years ago. Updated over 2 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Regression: No
Severity: 2 - major

Description

Created the following OSD spec:

service_type: osd
service_id: pod6-sc36-ssd
placement:
  host_pattern: 'pod6-sc36'
data_devices:
  rotational: 0
  size: '400G:'

and applied it with:

ceph orch apply osd -i pod6-ssd.yml

but cephadm crashes:
2021-07-21T12:54:46.366885+0000 mgr.pod2-sc1 (mgr.134852083) 12088 : cephadm [ERR] executing create_from_spec_one(([('pod6-sc36', <ceph.deployment.drive_selection.selector.DriveSelection object at 0x7effd8d96438>)],)) failed.
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/utils.py", line 59, in do_work
    return f(*arg)
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 47, in create_from_spec_one
    host, cmd, replace_osd_ids=osd_id_claims.get(host, []), env_vars=env_vars
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 88, in create_single_host
    if osd['tags']['ceph.cluster_fsid'] != fsid:
KeyError: 'ceph.cluster_fsid'
2021-07-21T12:54:46.367384+0000 mgr.pod2-sc1 (mgr.134852083) 12089 : cephadm [ERR] Failed to apply osd.pod6-sc36-ssd spec DriveGroupSpec(name=pod6-sc36-ssd->placement=PlacementSpec(host_pattern='pod6-sc36'), service_id='pod6-sc36-ssd', service_type='osd', data_devices=DeviceSelection(size='400G:', rotational=0, all=False), osd_id_claims={}, unmanaged=False, filter_logic='AND', preview_only=False): 'ceph.cluster_fsid'
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 412, in _apply_all_services
    if self._apply_service(spec):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 450, in _apply_service
    self.mgr.osd_service.create_from_spec(cast(DriveGroupSpec, spec))
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 51, in create_from_spec
    ret = create_from_spec_one(self.prepare_drivegroup(drive_group))
  File "/usr/share/ceph/mgr/cephadm/utils.py", line 65, in forall_hosts_wrapper
    return CephadmOrchestrator.instance._worker_pool.map(do_work, vals)
  File "/lib64/python3.6/multiprocessing/pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/lib64/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
  File "/lib64/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/lib64/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/usr/share/ceph/mgr/cephadm/utils.py", line 59, in do_work
    return f(*arg)
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 47, in create_from_spec_one
    host, cmd, replace_osd_ids=osd_id_claims.get(host, []), env_vars=env_vars
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 88, in create_single_host
    if osd['tags']['ceph.cluster_fsid'] != fsid:
KeyError: 'ceph.cluster_fsid'
2021-07-21T12:54:48.506596+0000 mgr.pod2-sc1 (mgr.134852083) 12092 : cephadm [INF] Applying drive group pod6-sc37-ssd on host pod6-sc37...
2021-07-21T12:54:55.879225+0000 mgr.pod2-sc1 (mgr.134852083) 12097 : cephadm [INF] Applying drive group pod6-sc36-ssd on host pod6-sc36...
2021-07-21T12:55:01.118781+0000 mgr.pod2-sc1 (mgr.134852083) 12101 : cephadm [ERR] executing create_from_spec_one(([('pod6-sc36', <ceph.deployment.drive_selection.selector.DriveSelection object at 0x7f00178913c8>)],)) failed.
[traceback identical to the first occurrence above]
2021-07-21T12:55:01.119353+0000 mgr.pod2-sc1 (mgr.134852083) 12102 : cephadm [ERR] Failed to apply osd.pod6-sc36-ssd spec DriveGroupSpec(name=pod6-sc36-ssd->placement=PlacementSpec(host_pattern='pod6-sc36'), service_id='pod6-sc36-ssd', service_type='osd', data_devices=DeviceSelection(size='400G:', rotational=0, all=False), osd_id_claims={}, unmanaged=False, filter_logic='AND', preview_only=False): 'ceph.cluster_fsid'
[traceback identical to the first occurrence above]
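
Editorial note: the KeyError comes from the tag lookup at osd.py line 88 shown in the traceback, which assumes every LV reported by ceph-volume carries a 'ceph.cluster_fsid' tag. A minimal sketch of a more defensive check; this is an illustration, not cephadm's actual code, and the helper name belongs_to_cluster is hypothetical:

# Hypothetical helper illustrating the failing check from the traceback.
# osd.py:88 raises KeyError when an LV's tags lack 'ceph.cluster_fsid',
# e.g. for an OSD created by hand or tagged by an older ceph-volume:
#     if osd['tags']['ceph.cluster_fsid'] != fsid:
def belongs_to_cluster(osd: dict, fsid: str) -> bool:
    # .get() with defaults tolerates LVs that carry no cluster tags at all
    return osd.get('tags', {}).get('ceph.cluster_fsid') == fsid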


Related issues 2 (0 open, 2 closed)

Related to Orchestrator - Bug #45587: mgr/cephadm: Failed to create encrypted OSD (Resolved)

Related to ceph-volume - Bug #24796: ceph-volume fails with KeyError: 'ceph.cluster_name' (Resolved; assignee Andrew Schoen; 07/06/2018)

#1

Updated by Sebastian Wagner over 2 years ago

  • Related to Bug #45587: mgr/cephadm: Failed to create encrypted OSD added
#2

Updated by Sebastian Wagner over 2 years ago

  • Status changed from New to Need More Info

Could you run `ceph-volume lvm list` and verify that the OSDs have `ceph.cluster_fsid` values in their tags?
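
Editorial aside: a quick way to spot LVs with missing tags is to parse the JSON output. A minimal sketch, assuming `ceph-volume lvm list --format json` emits a mapping of OSD id to a list of LV entries, each with a 'tags' dict (the output shape and the 'lv_path' key are assumptions about ceph-volume's JSON format, not verified against this cluster's version):

import json
import subprocess

# List all LVs known to ceph-volume and flag any whose tags lack
# the 'ceph.cluster_fsid' key that cephadm's check expects.
out = subprocess.check_output(
    ['ceph-volume', 'lvm', 'list', '--format', 'json'])
for osd_id, lvs in json.loads(out).items():
    for lv in lvs:
        if 'ceph.cluster_fsid' not in lv.get('tags', {}):
            print(f"osd.{osd_id}: {lv.get('lv_path', '?')} "
                  f"is missing the ceph.cluster_fsid tag")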

#3

Updated by Tobias Fischer over 2 years ago

Unfortunately I can't help, because I already created the OSDs by hand. Sorry.

#4

Updated by Sebastian Wagner over 2 years ago

  • Related to Bug #24796: ceph-volume fails with KeyError: 'ceph.cluster_name' added
#5

Updated by Sebastian Wagner over 2 years ago

  • Project changed from Orchestrator to ceph-volume
  • Category deleted (cephadm/osd)
#6

Updated by Sebastian Wagner over 2 years ago

  • Status changed from Need More Info to Duplicate
