Bug #47700

during OSD deletion: Module 'cephadm' has failed: Set changed size during iteration

Added by Joshua Schmid over 3 years ago. Updated about 3 years ago.

Status: Resolved
Priority: Normal
Category: cephadm/osd
Target version:
% Done: 0%
Source: Development
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Deleted all OSDs on one node with this command:

for OSD in $(ceph osd ls-tree sesnode2); do ceph orch osd rm $OSD --force; done

--> now ceph -s shows this:

sesnode1:~ # ceph -s
  cluster:
    id:     88c8695a-f29d-11ea-ab5b-40a6b7164708
    health: HEALTH_ERR
            Module 'cephadm' has failed: Set changed size during iteration
            3 slow ops, oldest one blocked for 4358 sec, daemons [mon.sesnode2,mon.sesnode3] have slow ops.

  services:
    mon: 3 daemons, quorum sesnode1,sesnode2,sesnode3 (age 11h)
    mgr: sesnode3.hpyadv(active, since 16m), standbys: sesnode1.zzsnjy, sesnode2.lwcuju
    mds: cephfs:1 {0=cephfs.sesnode6.rjpieg=up:active} 1 up:standby
    osd: 152 osds: 152 up (since 4m), 152 in (since 4m)
    rgw: 3 daemons active (default.default.sesnode4.xqbctf, default.default.sesnode5.qktkld, default.default.sesnode6.ffagxh)

  task status:
    scrub status:
        mds.cephfs.sesnode6.rjpieg: idle

  data:
    pools:   9 pools, 2225 pgs
    objects: 5.44k objects, 20 GiB
    usage:   5.2 TiB used, 371 TiB / 377 TiB avail
    pgs:     2225 active+clean

  io:
    client:   254 B/s rd, 0 op/s rd, 0 op/s wr

sesnode1:~ # ceph health detail
HEALTH_ERR Module 'cephadm' has failed: Set changed size during iteration; 3 slow ops, oldest one blocked for 4478 sec, daemons [mon.sesnode2,mon.sesnode3] have slow ops.
[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: Set changed size during iteration
    Module 'cephadm' has failed: Set changed size during iteration
[WRN] SLOW_OPS: 3 slow ops, oldest one blocked for 4478 sec, daemons [mon.sesnode2,mon.sesnode3] have slow ops.

From the log of the manager:

debug 2020-09-14T09:42:27.590+0000 7f1e38634700  0 log_channel(audit) log [DBG] : from='client.62407 -' entity='client.admin' cmd=[{"prefix": "orch osd rm", "svc_id": ["106"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
debug 2020-09-14T09:42:27.694+0000 7f1e3daff700 -1 log_channel(cluster) log [ERR] : Unhandled exception from module 'cephadm' while running on mgr.sesnode3.hpyadv: Set changed size during iteration
debug 2020-09-14T09:42:27.694+0000 7f1e3daff700 -1 cephadm.serve:
debug 2020-09-14T09:42:27.694+0000 7f1e3daff700 -1 Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/module.py", line 502, in serve
    self.rm_util.process_removal_queue()
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 295, in process_removal_queue
    self.cleanup()
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 352, in cleanup
    not_in_cluster_osds = self.mgr.to_remove_osds.not_in_cluster()
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 671, in not_in_cluster
    return [osd for osd in self if not osd.exists]
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 671, in <listcomp>
    return [osd for osd in self if not osd.exists]
RuntimeError: Set changed size during iteration
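The traceback shows the cephadm serve thread iterating the to_remove_osds set (in not_in_cluster) while the 'ceph orch osd rm' command handler adds entries to the same set from another thread. Below is a minimal, hypothetical Python sketch of that failure mode and of the usual remedy of iterating a snapshot taken under a lock; the class and method names are illustrative only, not the actual cephadm code (the real fix is tracked via the pull requests referenced in the history below).

import threading

# Hypothetical stand-in for the cephadm removal queue; names are illustrative,
# not the actual cephadm classes.
class RemovalQueue:
    def __init__(self):
        self._osds = set()
        self._lock = threading.Lock()

    def enqueue(self, osd_id):
        # Called from the 'ceph orch osd rm' command handler.
        with self._lock:
            self._osds.add(osd_id)

    def not_in_cluster_buggy(self):
        # Mirrors the failing list comprehension: iterates the live set, so a
        # concurrent enqueue() mid-iteration raises
        # "RuntimeError: Set changed size during iteration".
        return [osd for osd in self._osds]

    def not_in_cluster_safe(self):
        # Remedy pattern: snapshot the set under the lock, then iterate the copy.
        with self._lock:
            snapshot = list(self._osds)
        return [osd for osd in snapshot]

# Deterministic, single-threaded demonstration of the same error class:
# mutating a set while a comprehension walks it.
osds = {101, 102, 103}
try:
    [osds.add(osd + 1000) or osd for osd in osds]
except RuntimeError as exc:
    print(exc)  # Set changed size during iteration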

History

#1 Updated by Joshua Schmid over 3 years ago

  • Pull request ID set to 37492

#2 Updated by Joshua Schmid over 3 years ago

  • Status changed from New to Fix Under Review

#3 Updated by Sebastian Wagner over 3 years ago

  • Assignee changed from Joshua Schmid to Sebastian Wagner

#4 Updated by Michael Fritch about 3 years ago

  • Pull request ID changed from 37492 to 38815

#5 Updated by Sebastian Wagner about 3 years ago

  • Status changed from Fix Under Review to Resolved
