Bug #43412

cephadm ceph_manager IndexError: list index out of range

Added by Sage Weil about 4 years ago. Updated about 4 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2019-12-23T23:05:44.397 INFO:teuthology.orchestra.run.smithi059:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo ceph osd pool create base 4'
2019-12-23T23:05:44.397 INFO:tasks.thrashosds.thrasher:starting do_thrash
2019-12-23T23:05:44.397 INFO:tasks.thrashosds.thrasher:in_osds:  [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] out_osds:  [] dead_osds:  [] live_osds:  [11, 10, 1, 0, 3, 2, 5, 4, 7, 6, 9, 8]
2019-12-23T23:05:44.398 INFO:tasks.thrashosds.thrasher:choose_action: min_in 4 min_out 0 min_live 2 min_dead 0
2019-12-23T23:05:44.402 INFO:tasks.thrashosds.thrasher:Traceback (most recent call last):
  File "/home/teuthworker/src/github.com_liewegas_ceph_wip-old-clients-cephadm/qa/tasks/ceph_manager.py", line 992, in wrapper
    return func(self)
  File "/home/teuthworker/src/github.com_liewegas_ceph_wip-old-clients-cephadm/qa/tasks/ceph_manager.py", line 1122, in _do_thrash
    self.choose_action()()
  File "/home/teuthworker/src/github.com_liewegas_ceph_wip-old-clients-cephadm/qa/tasks/ceph_manager.py", line 667, in grow_pool
    pool = self.ceph_manager.get_pool()
  File "/home/teuthworker/src/github.com_liewegas_ceph_wip-old-clients-cephadm/qa/tasks/ceph_manager.py", line 1879, in get_pool
    return random.choice(self.pools.keys())
  File "/usr/lib/python2.7/random.py", line 275, in choice
    return seq[int(self.random() * len(seq))]  # raises IndexError if seq is empty
IndexError: list index out of range

2019-12-23T23:05:44.402 ERROR:tasks.thrashosds.thrasher:exception:
Traceback (most recent call last):
  File "/home/teuthworker/src/github.com_liewegas_ceph_wip-old-clients-cephadm/qa/tasks/ceph_manager.py", line 980, in do_thrash
    self._do_thrash()
  File "/home/teuthworker/src/github.com_liewegas_ceph_wip-old-clients-cephadm/qa/tasks/ceph_manager.py", line 992, in wrapper
    return func(self)
  File "/home/teuthworker/src/github.com_liewegas_ceph_wip-old-clients-cephadm/qa/tasks/ceph_manager.py", line 1122, in _do_thrash
    self.choose_action()()
  File "/home/teuthworker/src/github.com_liewegas_ceph_wip-old-clients-cephadm/qa/tasks/ceph_manager.py", line 667, in grow_pool
    pool = self.ceph_manager.get_pool()
  File "/home/teuthworker/src/github.com_liewegas_ceph_wip-old-clients-cephadm/qa/tasks/ceph_manager.py", line 1879, in get_pool
    return random.choice(self.pools.keys())
  File "/usr/lib/python2.7/random.py", line 275, in choice
    return seq[int(self.random() * len(seq))]  # raises IndexError if seq is empty
IndexError: list index out of range

/a/sage-2019-12-23_22:35:19-rados:thrash-old-clients-wip-sage-testing-2019-12-23-1239-distro-basic-smithi/4628965
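For context (not part of the original report): the traceback bottoms out in the stdlib, where `random.choice` indexes into its argument and therefore raises `IndexError` when the sequence is empty. A minimal reproduction:

```python
import random

# random.choice computes an index from len(seq); on an empty
# sequence that index lookup fails with IndexError, which is
# exactly the failure in the traceback above.
try:
    random.choice([])
except IndexError:
    print("IndexError raised on empty sequence")
```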

History

#1 Updated by Matthew Oliver about 4 years ago

I'm guessing it's caused by there being no pools at the time, so the random choice fails. Maybe we need to do something like:

diff --git a/qa/tasks/ceph_manager.py b/qa/tasks/ceph_manager.py
index fb3e1b2114..ab25bf86b4 100644
--- a/qa/tasks/ceph_manager.py
+++ b/qa/tasks/ceph_manager.py
@@ -666,24 +666,30 @@ class OSDThrasher(Thrasher):
         Increase the size of the pool
         """ 
         pool = self.ceph_manager.get_pool()
-        self.log("Growing pool %s" % (pool,))
-        if self.ceph_manager.expand_pool(pool,
-                                         self.config.get('pool_grow_by', 10),
-                                         self.max_pgs):
-            self.pools_to_fix_pgp_num.add(pool)
+        if pool:
+            self.log("Growing pool %s" % (pool,))
+            if self.ceph_manager.expand_pool(pool,
+                                            self.config.get('pool_grow_by', 10),
+                                            self.max_pgs):
+                self.pools_to_fix_pgp_num.add(pool)
+        else:
+            self.log("No pools to grow")

     def shrink_pool(self):
         """ 
         Decrease the size of the pool
         """ 
         pool = self.ceph_manager.get_pool()
-        _ = self.ceph_manager.get_pool_pg_num(pool)
-        self.log("Shrinking pool %s" % (pool,))
-        if self.ceph_manager.contract_pool(
-                pool,
-                self.config.get('pool_shrink_by', 10),
-                self.min_pgs):
-            self.pools_to_fix_pgp_num.add(pool)
+        if pool:
+            _ = self.ceph_manager.get_pool_pg_num(pool)
+            self.log("Shrinking pool %s" % (pool,))
+            if self.ceph_manager.contract_pool(
+                    pool,
+                    self.config.get('pool_shrink_by', 10),
+                    self.min_pgs):
+                self.pools_to_fix_pgp_num.add(pool)
+        else:
+            self.log("No pools to shrink")

     def fix_pgp_num(self, pool=None):
         """ 
@@ -1877,7 +1883,9 @@ class CephManager:
         Pick a random pool
         """ 
         with self.lock:
-            return random.choice(self.pools.keys())
+            if self.pools.keys():
+                return random.choice(self.pools.keys())
+            return None

     def get_pool_pg_num(self, pool_name):
         """ 
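A standalone sketch of the guard proposed in the diff above (the function and names here are illustrative, not the actual tracker patch): return None when no pools exist instead of letting `random.choice` raise. Materializing `keys()` with `list()` also keeps this working under Python 3, where dict views are not indexable.

```python
import random

def get_pool(pools):
    """Pick a random pool name, or None if no pools exist yet."""
    # list() materializes the view so random.choice can index it
    # (dict.keys() is not indexable under Python 3).
    keys = list(pools.keys())
    if not keys:
        return None
    return random.choice(keys)

print(get_pool({}))           # None: no pools created yet
print(get_pool({'base': 4}))  # 'base'
```

Callers such as grow_pool/shrink_pool then check for None and skip the action, as in the diff above.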

#2 Updated by Josh Durgin about 4 years ago

  • Assignee set to Kefu Chai
  • Pull request ID set to 32519

Kefu's got a PR for this

#3 Updated by Kefu Chai about 4 years ago

  • Status changed from New to Fix Under Review

#4 Updated by Kefu Chai about 4 years ago

  • Status changed from Fix Under Review to Resolved
