Bug #38282 (closed)
cephtool/test.sh failure in test_mon_osd_pool_set
Status:
Resolved
Priority:
Urgent
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
mimic
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
2019-02-12T19:42:27.353 INFO:tasks.workunit.client.0.smithi040.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2019: test_mon_osd_pool_set: ceph osd pool set pool_getset pg_num 10
2019-02-12T19:42:29.380 INFO:tasks.workunit.client.0.smithi040.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:2020: test_mon_osd_pool_set: wait_for_clean
...
then a timeout
/a/sage-2019-02-12_18:52:20-rados-wip-sage-testing-2019-02-12-0933-distro-basic-smithi/3580586
I saw the same failure a day or two ago too.
Updated by Sage Weil about 5 years ago
2019-02-12 19:42:30.223 7f38463b7700 10 osd.1 474 send_incremental_map 472 -> 474 to 0x55787efd5c00 v1:172.21.15.40:6806/33950
2019-02-12 19:42:30.223 7f38463b7700 10 osd.1 474 build_incremental_map_msg oldest map 1 < since 472, starting with full map
2019-02-12 19:42:30.223 7f38473b9700 20 --1- v1:172.21.15.40:6810/33949 >> v1:172.21.15.40:6813/1033948 conn(0x5578816f0000 0x557881286680 :6810 s=OPENED pgs=9 cs=3 l=0).write_message signed m=0x55788123b400): sig = 18430526848664346980
2019-02-12 19:42:30.223 7f38473b9700 20 --1- v1:172.21.15.40:6810/33949 >> v1:172.21.15.40:6813/1033948 conn(0x5578816f0000 0x557881286680 :6810 s=OPENED pgs=9 cs=3 l=0).write_message sending message type=41 src osd.1 front=22669 data=0 off 0
2019-02-12 19:42:30.223 7f38463b7700 1 -- v1:172.21.15.40:6810/33949 --> v1:172.21.15.40:6806/33950 -- osd_map(1..40 src has 1..474) v4 -- 0x557881243b80 con 0x55787efd5c00
Note the second line: the full-map fallback fires even though the oldest map (1) is not newer than the requested start (472), and the OSD then ships osd_map(1..40) instead of the two incrementals 473..474. Looks like it's broken by https://github.com/ceph/ceph/pull/26340
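For context, the decision the log is exercising can be sketched roughly as follows. This is a hypothetical simplification, not the actual OSD::Service::build_incremental_map_msg code: the names plan_map_msg, MapMsgPlan, and max_maps_per_msg are made up for illustration, and max_maps_per_msg stands in for the osd_map_message_max cap discussed in #38040. The intended logic is to fall back to a full map only when the peer's starting epoch predates the oldest map the OSD still stores:

```cpp
#include <cassert>
#include <cstdint>

// Whether the message leads with a full map or contains only incrementals.
enum class MapSend { Incremental, FullThenIncremental };

struct MapMsgPlan {
  MapSend kind;
  uint32_t first;  // first epoch included in the message
  uint32_t last;   // last epoch included in the message
};

// Hypothetical sketch: plan which map epochs to send to a peer that has
// maps up to `since`, when this OSD stores [oldest_map, newest_map].
MapMsgPlan plan_map_msg(uint32_t oldest_map, uint32_t newest_map,
                        uint32_t since, uint32_t max_maps_per_msg) {
  MapMsgPlan p{};
  if (since < oldest_map) {
    // Peer is too far behind: the incrementals it needs are already
    // trimmed, so restart it with a full map at oldest_map.
    p.kind = MapSend::FullThenIncremental;
    p.first = oldest_map;
  } else {
    // Normal case: send only the incrementals since the peer's epoch.
    p.kind = MapSend::Incremental;
    p.first = since + 1;
  }
  uint32_t n = newest_map - p.first + 1;
  if (n > max_maps_per_msg)
    n = max_maps_per_msg;  // cap maps per message so one msg stays bounded
  p.last = p.first + n - 1;
  return p;
}
```

Under this sketch, the logged situation (oldest 1, newest 474, since 472) should yield incrementals 473..474; the logged osd_map(1..40) corresponds to the fallback branch being taken when it should not be.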
Updated by Sage Weil about 5 years ago
- Status changed from 12 to Fix Under Review
Updated by Sage Weil about 5 years ago
- Related to Bug #38283: max-pg-per-osd tests failing added
Updated by Sage Weil about 5 years ago
I have a feeling #38283 has the same root cause...
Updated by David Zafman about 5 years ago
- Related to Bug #38293: qa/standalone/osd/osd-backfill-prio.sh failed added
Updated by Kefu Chai about 5 years ago
/a/kchai-2019-02-14_06:27:37-rados-wip-kefu2-testing-2019-02-14-1156-distro-basic-smithi/3590390
Updated by David Zafman about 5 years ago
- Has duplicate Bug #38293: qa/standalone/osd/osd-backfill-prio.sh failed added
Updated by David Zafman about 5 years ago
- Related to deleted (Bug #38293: qa/standalone/osd/osd-backfill-prio.sh failed)
Updated by Kefu Chai about 5 years ago
- Status changed from Fix Under Review to Resolved
Updated by Sage Weil about 5 years ago
- Related to Bug #38330: osd/OSD.cc: 1515: abort() in Service::build_incremental_map_msg added
Updated by Sage Weil about 5 years ago
- Related to Bug #38040: osd_map_message_max default is too high? added
Updated by David Zafman over 4 years ago
- Status changed from Resolved to Pending Backport
- Backport set to mimic, nautilus
Updated by David Zafman over 4 years ago
- Backport changed from mimic, nautilus to mimic
Updated by David Zafman over 4 years ago
- Copied to Backport #42558: mimic: cephtool/test.sh failure in test_mon_osd_pool_set added
Updated by Nathan Cutler over 4 years ago
- Status changed from Pending Backport to Resolved
- Backport deleted (mimic)
mimic backport: https://github.com/ceph/ceph/pull/31236
Updated by Nathan Cutler over 4 years ago
- Status changed from Resolved to Pending Backport
- Backport set to mimic
Updated by Nathan Cutler over 4 years ago
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".