Bug #53589


ceph-volume unstable provisioning behaviour

Added by Nigel Williams over 2 years ago. Updated about 2 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
-
Target version:
% Done:
0%

Source:
Community (user)
Tags:
ceph-volume
Backport:
Regression:
No
Severity:
4 - irritation
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When applying a spec.yaml via "ceph orch apply", ceph-volume sometimes fails to deploy OSDs onto one or two devices on the target host and/or fails to calculate the block.db device size correctly.

Across 9 hosts of identical hardware, 5 hosts deployed 100% of their OSDs with the correct block.db device size; on each of the remaining 4 hosts, either one or two devices are not brought up as working OSDs, or some or all block.db partitions are sized incorrectly.

These hosts each have 50 HDDs and two large SSDs for block.db/WAL.
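For context, an OSD service spec of the kind described might look like the sketch below; the service name, host pattern, and comments are assumptions for illustration, not taken from the report:

```yaml
# Hypothetical spec.yaml for a host with rotational data devices
# and SSDs carrying block.db/WAL, applied with:
#   ceph orch apply -i spec.yaml
service_type: osd
service_id: hdd_osds_with_ssd_db   # assumed name
placement:
  host_pattern: 'osd-host-*'       # assumed pattern
spec:
  data_devices:
    rotational: 1    # select the HDDs as OSD data devices
  db_devices:
    rotational: 0    # select the SSDs for block.db/WAL
```

With a spec like this, cephadm drives ceph-volume to carve the SSDs into block.db partitions sized across the matched data devices, which is the step that intermittently miscalculates here.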

Nothing obvious appears in the logs, although it is concerning that the paths recorded in the logfiles are incorrect: they do not include the cluster UUID in the path name.

#2

Updated by Guillaume Abrioux about 2 years ago

  • Status changed from New to Resolved
