Bug #44888 (closed): Drivegroup's :limit: isn't working correctly

Added by Joshua Schmid about 4 years ago. Updated over 3 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
cephadm/osd
Target version:
% Done:

0%

Source:
Development
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Each iteration of the OSD deployment loop deploys OSDs up to the configured :limit:. Since a new iteration runs every $sleep_interval seconds and the limit is applied from scratch each time, all matching devices are eventually deployed, exceeding the limit.

We need to factor in the already-deployed OSDs when applying the :limit: directive.

To reliably determine which OSD was deployed by which drivegroup, we need to address https://tracker.ceph.com/issues/44755 first.
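The fix described above can be sketched as follows. This is an illustrative model only, not the actual cephadm code: the function name `select_devices` and its parameters are hypothetical, but it shows the difference between re-applying the limit each iteration and accounting for OSDs the spec has already created.

```python
def select_devices(candidate_devices, limit, already_deployed):
    """Return the devices to deploy in this iteration.

    candidate_devices: devices matching the spec that carry no OSD yet
    limit:             max OSDs this spec may create per host (0 = no limit)
    already_deployed:  count of OSDs previously created by this same spec
                       (requires the spec<->daemon affinity from #44755)
    """
    if not limit:
        return list(candidate_devices)
    # Buggy behaviour: slicing with `limit` alone lets every iteration
    # deploy `limit` more OSDs until all devices are consumed.
    # Fixed behaviour: the remaining slots shrink as OSDs accumulate.
    remaining = max(limit - already_deployed, 0)
    return list(candidate_devices)[:remaining]
```

Once the limit is reached, subsequent iterations select no further devices, so repeated runs every $sleep_interval seconds become idempotent.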


Related issues 2 (0 open, 2 closed)

Related to Orchestrator - Bug #45861: data_devices: limit 3 deployed 6 osds per node (Resolved, Joshua Schmid)

Blocked by RADOS - Bug #44755: Create stronger affinity between drivegroup specs and osd daemons (Resolved, Joshua Schmid)

Actions #1

Updated by Joshua Schmid about 4 years ago

  • Blocked by Bug #44755: Create stronger affinity between drivegroup specs and osd daemons added
Actions #2

Updated by Sebastian Wagner almost 4 years ago

  • Related to Bug #45861: data_devices: limit 3 deployed 6 osds per node added
Actions #3

Updated by Joshua Schmid almost 4 years ago

  • Pull request ID set to 35945
Actions #4

Updated by Joshua Schmid almost 4 years ago

  • Status changed from New to Fix Under Review
Actions #5

Updated by Sebastian Wagner over 3 years ago

  • Category changed from cephadm to cephadm/osd
Actions #6

Updated by Sebastian Wagner over 3 years ago

  • Status changed from Fix Under Review to Resolved