Bug #61762

PGs are stuck in creating+peering when starting up OSDs

Added by Venky Shankar 11 months ago. Updated 9 months ago.

Status:
New
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/vshankar-2023-06-20_10:07:44-fs-wip-vshankar-testing-20230620.052303-testing-default-smithi/7308858

qa/tasks/cephfs/filesystem.py::create() creates a new Ceph file system and blocks until all PGs are clean. This routine also creates the data and metadata pools with --pg_num_min=64. ceph_manager.py::wait_for_clean() times out waiting for all PGs to be clean.
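
For context, the wait is essentially a poll on the cluster's PG state summary. Below is a minimal, hypothetical sketch of such a loop, not the actual ceph_manager.py implementation; it assumes a ceph binary on the PATH and reads the pgs_by_state summary from `ceph status --format=json`.

    import json
    import subprocess
    import time

    def wait_for_clean(timeout=1200, interval=5):
        """Poll `ceph status` until every PG is active+clean.

        Hypothetical sketch only; not the teuthology ceph_manager.py code.
        """
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(["ceph", "status", "--format=json"])
            pgmap = json.loads(out)["pgmap"]
            total = pgmap["num_pgs"]
            # Count PGs whose full state string is exactly active+clean.
            clean = sum(s["count"]
                        for s in pgmap.get("pgs_by_state", [])
                        if s["state_name"] == "active+clean")
            if total > 0 and clean == total:
                return
            time.sleep(interval)
        raise RuntimeError("wait_for_clean: failed before timeout expired")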

I haven't seen this issue before, so creating a tracker. It looks unrelated to CephFS and might require looking at the OSD logs to infer why the PGs were not clean.

Update:

Looks like the problem is PGs stuck in creating+peering for 20 minutes after the OSDs were started.
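
To narrow down which PGs are stuck and which OSDs they are waiting on, one can dump the per-PG state and acting sets. The sketch below is a hypothetical diagnostic helper, not part of the qa suite; the JSON layout of `ceph pg dump pgs_brief` differs slightly between releases, so both the wrapped and unwrapped forms are handled.

    import json
    import subprocess

    def stuck_creating_peering():
        """Return (pgid, state, acting) for PGs stuck in creating+peering."""
        out = subprocess.check_output(
            ["ceph", "pg", "dump", "pgs_brief", "--format=json"])
        data = json.loads(out)
        # Newer releases wrap the list in a dict; older ones return it directly.
        pgs = data["pg_stats"] if isinstance(data, dict) else data
        return [(pg["pgid"], pg["state"], pg.get("acting", []))
                for pg in pgs
                if "creating" in pg["state"] and "peering" in pg["state"]]

    if __name__ == "__main__":
        for pgid, state, acting in stuck_creating_peering():
            print(pgid, state, "acting OSDs:", acting)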


Related issues (1 open, 0 closed)

Related to RADOS - Bug #59172: test_pool_min_size: AssertionError: wait_for_clean: failed before timeout expired due to down PGs (Pending Backport, assigned to Kamoltat (Junior) Sirivadhna)
