Bug #57627

ceph-volume activate takes time to complete

Added by Guillaume Abrioux over 1 year ago. Updated about 1 year ago.

Status: Resolved
Priority: Urgent
Target version: -
% Done: 0%
Source:
Tags: backport_processed
Backport: quincy, pacific
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When hosts have a large number of devices, `ceph-volume activate` can take a very long time to complete.

For example:

real 1m33.870s
user 1m4.498s
sys 0m30.849s

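The ticket does not show the exact command behind these timings. A minimal way to reproduce the measurement on an affected host might be the line below; the `lvm` subcommand and the `--all` flag are assumptions, since the ticket only mentions `ceph-volume activate`:

# Hypothetical reproduction: time the activation of every OSD on a host with many devices
time ceph-volume lvm activate --all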

Related issues (2): 0 open, 2 closed

Copied to ceph-volume - Backport #58789: quincy: ceph-volume activate takes time to complete (Resolved, Guillaume Abrioux)
Copied to ceph-volume - Backport #58790: pacific: ceph-volume activate takes time to complete (Resolved, Guillaume Abrioux)
#1

Updated by Guillaume Abrioux over 1 year ago

  • Pull request ID set to 48200
#2

Updated by Guillaume Abrioux over 1 year ago

  • Status changed from In Progress to Fix Under Review
#3

Updated by Guillaume Abrioux over 1 year ago

  • Status changed from Fix Under Review to Resolved
#4

Updated by Guillaume Abrioux about 1 year ago

  • Status changed from Resolved to Pending Backport
#5

Updated by Guillaume Abrioux about 1 year ago

  • Copied to Backport #58789: quincy: ceph-volume activate takes time to complete added
#6

Updated by Guillaume Abrioux about 1 year ago

  • Copied to Backport #58790: pacific: ceph-volume activate takes time to complete added
#7

Updated by Guillaume Abrioux about 1 year ago

  • Tags set to backport_processed
#8

Updated by Adam King about 1 year ago

  • Status changed from Pending Backport to Resolved
#9

Updated by Michel Jouvin about 1 year ago

Hi,

It seems this fix didn't make it into 16.2.11, and the issue is causing problems with cephadm-managed clusters that have several tens or hundreds of OSDs (which is our case!). What is the status of this fix? When can we expect it in an official release?

I guess the recommendation is to stay on 16.2.10 until then if you are running a cephadm cluster with a large number of OSDs, despite the impressive number of fixes in 16.2.11?

Michel
