Bug #9120

open

ceph_manager locks are a noop

Added by Loïc Dachary over 9 years ago. Updated over 8 years ago.

Status: Need More Info
Priority: Low
Assignee: -
Category: -
% Done: 0%
Source: other
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

The locks used in ceph_manager are threading.RLock instances, which the Python documentation describes as follows:

Once a thread has acquired a reentrant lock, the same thread may acquire it again without blocking; the thread must release it once for each time it has acquired it.

Since teuthology runs each task in a gevent lightweight pseudo-thread, and gevent's cooperative multitasking means "the greenlets all run in the same OS thread", the lock will never block, even when acquired from two different greenlets.
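
As an illustration of the claim above, here is a minimal standalone sketch (hypothetical code, not taken from teuthology, and assuming gevent's monkey-patching of threading is not in effect): two greenlets acquire the same threading.RLock and neither ever blocks.

    # Hypothetical sketch: threading.RLock offers no mutual exclusion between greenlets.
    import threading

    import gevent

    lock = threading.RLock()

    def worker(name):
        # Non-blocking acquire, so a real conflict would show up as acquired == False.
        acquired = lock.acquire(blocking=False)
        print("%s acquired lock: %s" % (name, acquired))
        gevent.sleep(0)  # yield to the other greenlet while still holding the lock
        if acquired:
            lock.release()

    gevent.joinall([gevent.spawn(worker, "greenlet-1"),
                    gevent.spawn(worker, "greenlet-2")])

Both greenlets print "acquired lock: True": the RLock tracks ownership by OS thread, and all greenlets share the single OS thread, so the second acquire is treated as a reentrant acquire rather than a contended one.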

#1

Updated by Zack Cerza over 9 years ago

  • Priority changed from Normal to Low
#2

Updated by Dan Mick over 8 years ago

  • Status changed from New to Need More Info
  • Regression set to No

If this is true as stated, I'm unclear on why it is not a practical problem. Have we noticed any races that might result from this?

#3

Updated by Loïc Dachary over 8 years ago

I don't remember. A teuthology cluster generates so much noise that it would be surprising if some of that noise does not come from this race condition.
