Bug #38282

cephtool/test.sh failure in test_mon_osd_pool_set

Added by Sage Weil 6 months ago. Updated 6 months ago.

Status: Resolved
Priority: Urgent
Assignee: -
Category: -
Target version: -
Start date: 02/12/2019
Due date:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:

Description

2019-02-12T19:42:27.353 INFO:tasks.workunit.client.0.smithi040.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2019: test_mon_osd_pool_set:  ceph osd pool set pool_getset pg_num 10
2019-02-12T19:42:29.380 INFO:tasks.workunit.client.0.smithi040.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2020: test_mon_osd_pool_set:  wait_for_clean
...

then wait_for_clean times out.

/a/sage-2019-02-12_18:52:20-rados-wip-sage-testing-2019-02-12-0933-distro-basic-smithi/3580586

I saw the same failure a day or two ago too.


Related issues

Related to RADOS - Bug #38283: max-pg-per-osd tests failing Resolved 02/12/2019
Related to Messengers - Bug #38330: osd/OSD.cc: 1515: abort() in Service::build_incremental_map_msg Resolved 02/15/2019
Related to RADOS - Bug #38040: osd_map_message_max default is too high? Pending Backport 01/24/2019
Duplicated by RADOS - Bug #38293: qa/standalone/osd/osd-backfill-prio.sh failed Duplicate 02/13/2019

History

#1 Updated by Sage Weil 6 months ago

2019-02-12 19:42:30.223 7f38463b7700 10 osd.1 474 send_incremental_map 472 -> 474 to 0x55787efd5c00 v1:172.21.15.40:6806/33950
2019-02-12 19:42:30.223 7f38463b7700 10 osd.1 474 build_incremental_map_msg oldest map 1 < since 472, starting with full map
2019-02-12 19:42:30.223 7f38473b9700 20 --1- v1:172.21.15.40:6810/33949 >> v1:172.21.15.40:6813/1033948 conn(0x5578816f0000 0x557881286680 :6810 s=OPENED pgs=9 cs=3 l=0).write_message signed m=0x55788123b400): sig = 18430526848664346980
2019-02-12 19:42:30.223 7f38473b9700 20 --1- v1:172.21.15.40:6810/33949 >> v1:172.21.15.40:6813/1033948 conn(0x5578816f0000 0x557881286680 :6810 s=OPENED pgs=9 cs=3 l=0).write_message sending message type=41 src osd.1 front=22669 data=0 off 0
2019-02-12 19:42:30.223 7f38463b7700  1 -- v1:172.21.15.40:6810/33949 --> v1:172.21.15.40:6806/33950 -- osd_map(1..40 src has 1..474) v4 -- 0x557881243b80 con 0x55787efd5c00

Looks like this was broken by https://github.com/ceph/ceph/pull/26340
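
For illustration only, a minimal, self-contained C++ sketch of the failure mode the log above suggests: the "start with full map" decision fires for a peer that is already nearly caught up, and the message is capped at roughly osd_map_message_max epochs, so the peer receives maps 1..40 even though it only needed 473..474. The names (build_map_msg, MapRange, kMaxMapsPerMessage) are hypothetical and are not the actual OSD code.

// Hypothetical model of the log above; NOT the real Ceph implementation.
#include <algorithm>
#include <cstdint>
#include <iostream>

using epoch_t = std::uint32_t;

struct MapRange {
  epoch_t first = 0;  // first epoch carried in the message
  epoch_t last = 0;   // last epoch carried in the message
};

constexpr epoch_t kMaxMapsPerMessage = 40;  // plays the role of osd_map_message_max

// Peer already has every map up to `since`; this OSD stores maps [oldest, newest].
MapRange build_map_msg(epoch_t oldest, epoch_t newest, epoch_t since,
                       bool inverted_check) {
  MapRange m;
  // Correct rule: fall back to a full map only when the peer is older than the
  // oldest map we still have.  The inverted rule (matching the log line
  // "oldest map 1 < since 472, starting with full map") fires the other way.
  bool start_full = inverted_check ? (oldest < since) : (since < oldest);
  m.first = start_full ? oldest : since + 1;
  // Cap how many epochs a single message may carry.
  m.last = std::min<epoch_t>(newest, m.first + kMaxMapsPerMessage - 1);
  return m;
}

int main() {
  // Peer at epoch 472, this OSD has 1..474 (as in "src has 1..474").
  MapRange ok  = build_map_msg(1, 474, 472, /*inverted_check=*/false);
  MapRange bad = build_map_msg(1, 474, 472, /*inverted_check=*/true);
  std::cout << "expected: " << ok.first  << ".." << ok.last  << "\n";  // 473..474
  std::cout << "observed: " << bad.first << ".." << bad.last << "\n";  // 1..40
  return 0;
}

The "observed" range matches the osd_map(1..40 src has 1..474) message in the log; unless the remaining epochs are re-sent on a later pass, the peer never reaches epoch 474, PGs cannot go clean, and wait_for_clean in the test times out.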

#2 Updated by Sage Weil 6 months ago

  • Status changed from Verified to Need Review

#3 Updated by Sage Weil 6 months ago

  • Related to Bug #38283: max-pg-per-osd tests failing added

#4 Updated by Sage Weil 6 months ago

I have a feeling #38283 has the same root cause...

#5 Updated by David Zafman 6 months ago

  • Pull request ID set to 26413

#6 Updated by David Zafman 6 months ago

  • Related to Bug #38293: qa/standalone/osd/osd-backfill-prio.sh failed added

#7 Updated by Kefu Chai 6 months ago

/a/kchai-2019-02-14_06:27:37-rados-wip-kefu2-testing-2019-02-14-1156-distro-basic-smithi/3590390

#8 Updated by David Zafman 6 months ago

  • Duplicated by Bug #38293: qa/standalone/osd/osd-backfill-prio.sh failed added

#9 Updated by David Zafman 6 months ago

  • Related to deleted (Bug #38293: qa/standalone/osd/osd-backfill-prio.sh failed)

#10 Updated by Kefu Chai 6 months ago

  • Status changed from Need Review to Resolved

#11 Updated by Sage Weil 6 months ago

  • Related to Bug #38330: osd/OSD.cc: 1515: abort() in Service::build_incremental_map_msg added

#12 Updated by Sage Weil 6 months ago

  • Related to Bug #38040: osd_map_message_max default is too high? added
