Bug #23877: osd/OSDMap.cc: assert(target > 0)
Status: Closed
Description
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 150
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54
# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
# buckets
host huangjun-1 {
id -3		# do not change unnecessarily
# weight 9.000
alg straw2
hash 0 # rjenkins1
item osd.0 weight 1.000
item osd.1 weight 1.000
item osd.7 weight 1.000
}
host huangjun-2 {
id -4		# do not change unnecessarily
# weight 9.000
alg straw2
hash 0 # rjenkins1
item osd.2 weight 1.000
item osd.3 weight 1.000
item osd.6 weight 1.000
}
host huangjun-3 {
id -5		# do not change unnecessarily
# weight 9.000
alg straw2
hash 0 # rjenkins1
item osd.4 weight 1.000
item osd.5 weight 1.000
}
host huangjun {
id -2 # do not change unnecessarily
id -3 class hdd		# do not change unnecessarily
# weight 9.000
alg straw2
hash 0 # rjenkins1
item osd.0 weight 1.000
item osd.1 weight 1.000
item osd.2 weight 1.000
item osd.3 weight 1.000
item osd.4 weight 1.000
item osd.5 weight 1.000
item osd.6 weight 1.000
item osd.7 weight 1.000
item osd.8 weight 1.000
}
root default {
id -1 # do not change unnecessarily
id -4 class hdd		# do not change unnecessarily
# weight 9.000
alg straw2
hash 0 # rjenkins1
item huangjun weight 9.000
}
id -6		# do not change unnecessarily
# weight 9.000
alg straw2
hash 0 # rjenkins1
item huangjun-1 weight 3.000
item huangjun-2 weight 3.000
item huangjun-3 weight 2.000
}
# rules
rule replicated_rule {
id 0
type replicated
min_size 1
max_size 10
step take default
step choose firstn 0 type osd
step emit
}
id 1
type erasure
min_size 1
max_size 10
step take huangjun-1
step chooseleaf indep 2 type osd
step emit
step take huangjun-2
step chooseleaf indep 2 type osd
step emit
step take huangjun-3
step chooseleaf indep 2 type osd
step emit
}
# end crush map
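For context, the second (erasure) rule above takes two OSDs independently from each of the three hosts, so a k+m=6 erasure profile places one shard on each selected OSD. Below is a hypothetical toy model of that selection, just for illustration; real CRUSH uses deterministic straw2 hashing per PG, not random sampling:

```python
import random

# Host -> OSD membership, copied from the bucket definitions above.
HOSTS = {
    "huangjun-1": ["osd.0", "osd.1", "osd.7"],
    "huangjun-2": ["osd.2", "osd.3", "osd.6"],
    "huangjun-3": ["osd.4", "osd.5"],
}

def map_pg(pg_seed: int) -> list:
    """Toy model of: step take <host> / step chooseleaf indep 2 type osd / step emit,
    repeated for each of the three hosts, yielding 6 shard placements."""
    rng = random.Random(pg_seed)
    acting = []
    for host in ("huangjun-1", "huangjun-2", "huangjun-3"):
        acting.extend(rng.sample(HOSTS[host], 2))  # indep 2 type osd
    return acting

print(map_pg(0))  # six distinct OSDs, exactly two per host
```

This makes it visible why the rule fits a k:m=4:2 pool: each PG needs 6 shards, and the three `step take` / `step emit` pairs contribute 2 OSDs each.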
We create an erasure-coded pool (k=4, m=2, crush-failure-domain=osd) with 256 PGs
and run the upmap balancer.
Then we unlink osd.6 from huangjun-2:
./bin/ceph osd crush unlink osd.6 huangjun-2
Running the balancer again, it crashed with:
-18> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.0 weight 0.333333 pgs 170
-17> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.1 weight 0.333333 pgs 170
-16> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.2 weight 0.5 pgs 256
-15> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.3 weight 0.5 pgs 250
-14> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.4 weight 0.5 pgs 256
-13> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.5 weight 0.5 pgs 256
-12> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.7 weight 0.333333 pgs 172
-11> 2018-04-26 01:13:10.885 7f0a2e0c8700 10 osd_weight_total 3
-10> 2018-04-26 01:13:10.885 7f0a2e0c8700 10 pgs_per_weight 512
-9> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.0 pgs 170 target 170.667 deviation -0.666672
-8> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.1 pgs 170 target 170.667 deviation -0.666672
-7> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.2 pgs 256 target 256 deviation 0
-6> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.3 pgs 250 target 256 deviation -6
-5> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.4 pgs 256 target 256 deviation 0
-4> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.5 pgs 256 target 256 deviation 0
-3> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.6 pgs 6 target 0 deviation 6
-2> 2018-04-26 01:13:10.885 7f0a2e0c8700 20 osd.7 pgs 172 target 170.667 deviation 1.33333
-1> 2018-04-26 01:13:10.885 7f0a2e0c8700 10 total_deviation 14.6667 overfull 6,7 underfull [3]
0> 2018-04-26 01:13:10.891 7f0a2e0c8700 -1 /usr/src/ceph-int/src/osd/OSDMap.cc: In function 'int OSDMap::calc_pg_upmaps(CephContext*, float, int, const std::set<long int>&, OSDMap::Incremental*)' thread 7f0a2e0c8700 time 2018-04-26 01:13:10.886506
/usr/src/ceph-int/src/osd/OSDMap.cc: 4124: FAILED assert(target > 0) (null)
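The numbers above show why the assert fires: after the unlink, osd.6 has CRUSH weight 0, so its computed target (weight * pgs_per_weight) is 0, yet it still carries 6 PG shards. A minimal sketch of that arithmetic (not the actual calc_pg_upmaps code, just the computation the log reflects):

```python
# Per-OSD weights and PG shard counts copied from the log above.
# osd.6 was unlinked, so its effective weight is 0.
weights = {0: 1/3, 1: 1/3, 2: 0.5, 3: 0.5, 4: 0.5, 5: 0.5, 6: 0.0, 7: 1/3}
pgs     = {0: 170, 1: 170, 2: 256, 3: 250, 4: 256, 5: 256, 6: 6,   7: 172}

weight_total = sum(weights.values())               # ~3.0 ("osd_weight_total 3")
pgs_per_weight = sum(pgs.values()) / weight_total  # 1536 / 3 = 512

total_deviation = 0.0
for osd, w in weights.items():
    target = w * pgs_per_weight        # e.g. osd.0: 0.333333 * 512 = 170.667
    deviation = pgs[osd] - target
    total_deviation += abs(deviation)
    # The assert(target > 0) in calc_pg_upmaps trips for osd.6:
    # it still holds 6 PGs but its target is exactly 0.
    if target <= 0:
        print(f"osd.{osd}: pgs {pgs[osd]} target {target} -> assert(target > 0) fails")

print(f"total_deviation {total_deviation:.4f}")  # matches the logged 14.6667
```

In other words, the balancer still sees PGs mapped to an OSD that the CRUSH map no longer gives any weight, and it has no code path for a zero target.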
Updated by Kefu Chai about 6 years ago
- Status changed from New to Need More Info
- Assignee set to huang jun
Could you reproduce this issue with "debug-osd=10" and attach the log?
Updated by Kefu Chai about 6 years ago
- Is duplicate of Bug #23878: assert on pg upmap added
Updated by Kefu Chai about 6 years ago
- Status changed from Need More Info to Duplicate