Bug #43190

qa/standalone/osd/osd-recovery-prio.sh has a race

Added by David Zafman over 4 years ago.

Status: New
Priority: Normal
Assignee: David Zafman
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

http://pulpito.ceph.com/dzafman-2019-12-08_11:51:45-rados-master-distro-basic-smithi/4582053/

The test expected osd.0 to hold the local (primary) reservation and osd.1 the remote (replica) reservation, but the opposite occurred. It usually works out the way the test expects, so this appears to be a race.

2019-12-08T21:59:23.517 INFO:tasks.workunit.client.0.smithi116.stdout:osd.0
2019-12-08T21:59:23.518 INFO:tasks.workunit.client.0.smithi116.stdout:{
2019-12-08T21:59:23.518 INFO:tasks.workunit.client.0.smithi116.stdout:    "local_reservations": {
2019-12-08T21:59:23.518 INFO:tasks.workunit.client.0.smithi116.stdout:        "max_allowed": 1,
2019-12-08T21:59:23.519 INFO:tasks.workunit.client.0.smithi116.stdout:        "min_priority": 0,
2019-12-08T21:59:23.519 INFO:tasks.workunit.client.0.smithi116.stdout:        "queues": [],
2019-12-08T21:59:23.519 INFO:tasks.workunit.client.0.smithi116.stdout:        "in_progress": []
2019-12-08T21:59:23.519 INFO:tasks.workunit.client.0.smithi116.stdout:    },
2019-12-08T21:59:23.519 INFO:tasks.workunit.client.0.smithi116.stdout:    "remote_reservations": {
2019-12-08T21:59:23.520 INFO:tasks.workunit.client.0.smithi116.stdout:        "max_allowed": 1,
2019-12-08T21:59:23.520 INFO:tasks.workunit.client.0.smithi116.stdout:        "min_priority": 0,
2019-12-08T21:59:23.520 INFO:tasks.workunit.client.0.smithi116.stdout:        "queues": [],
2019-12-08T21:59:23.520 INFO:tasks.workunit.client.0.smithi116.stdout:        "in_progress": [
2019-12-08T21:59:23.520 INFO:tasks.workunit.client.0.smithi116.stdout:            {
2019-12-08T21:59:23.520 INFO:tasks.workunit.client.0.smithi116.stdout:                "item": "1.0",
2019-12-08T21:59:23.521 INFO:tasks.workunit.client.0.smithi116.stdout:                "prio": 191,
2019-12-08T21:59:23.521 INFO:tasks.workunit.client.0.smithi116.stdout:                "can_preempt": true
2019-12-08T21:59:23.521 INFO:tasks.workunit.client.0.smithi116.stdout:            }
2019-12-08T21:59:23.521 INFO:tasks.workunit.client.0.smithi116.stdout:        ]
2019-12-08T21:59:23.521 INFO:tasks.workunit.client.0.smithi116.stdout:    }
2019-12-08T21:59:23.522 INFO:tasks.workunit.client.0.smithi116.stdout:}

2019-12-08T21:59:23.366 INFO:tasks.workunit.client.0.smithi116.stdout:osd.1
2019-12-08T21:59:23.366 INFO:tasks.workunit.client.0.smithi116.stdout:{
2019-12-08T21:59:23.366 INFO:tasks.workunit.client.0.smithi116.stdout:    "local_reservations": {
2019-12-08T21:59:23.366 INFO:tasks.workunit.client.0.smithi116.stdout:        "max_allowed": 1,
2019-12-08T21:59:23.366 INFO:tasks.workunit.client.0.smithi116.stdout:        "min_priority": 0,
2019-12-08T21:59:23.367 INFO:tasks.workunit.client.0.smithi116.stdout:        "queues": [],
2019-12-08T21:59:23.367 INFO:tasks.workunit.client.0.smithi116.stdout:        "in_progress": [
2019-12-08T21:59:23.367 INFO:tasks.workunit.client.0.smithi116.stdout:            {
2019-12-08T21:59:23.367 INFO:tasks.workunit.client.0.smithi116.stdout:                "item": "1.0",
2019-12-08T21:59:23.367 INFO:tasks.workunit.client.0.smithi116.stdout:                "prio": 191,
2019-12-08T21:59:23.367 INFO:tasks.workunit.client.0.smithi116.stdout:                "can_preempt": true
2019-12-08T21:59:23.367 INFO:tasks.workunit.client.0.smithi116.stdout:            }
2019-12-08T21:59:23.368 INFO:tasks.workunit.client.0.smithi116.stdout:        ]
2019-12-08T21:59:23.368 INFO:tasks.workunit.client.0.smithi116.stdout:    },
2019-12-08T21:59:23.368 INFO:tasks.workunit.client.0.smithi116.stdout:    "remote_reservations": {
2019-12-08T21:59:23.368 INFO:tasks.workunit.client.0.smithi116.stdout:        "max_allowed": 1,
2019-12-08T21:59:23.368 INFO:tasks.workunit.client.0.smithi116.stdout:        "min_priority": 0,
2019-12-08T21:59:23.368 INFO:tasks.workunit.client.0.smithi116.stdout:        "queues": [],
2019-12-08T21:59:23.368 INFO:tasks.workunit.client.0.smithi116.stdout:        "in_progress": []
2019-12-08T21:59:23.369 INFO:tasks.workunit.client.0.smithi116.stdout:    }
2019-12-08T21:59:23.369 INFO:tasks.workunit.client.0.smithi116.stdout:}
2019-12-08T21:59:23.549 INFO:tasks.workunit.client.0.smithi116.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:467: TEST_recovery_pool:  eval ITEM=null
2019-12-08T21:59:23.549 INFO:tasks.workunit.client.0.smithi116.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:467: TEST_recovery_pool:  ITEM=null
2019-12-08T21:59:23.549 INFO:tasks.workunit.client.0.smithi116.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:468: TEST_recovery_pool:  '[' null '!=' 2.0 ']'
2019-12-08T21:59:23.549 INFO:tasks.workunit.client.0.smithi116.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:470: TEST_recovery_pool:  echo 'The primary PG for test2 didn'\''t become the in progress item'
2019-12-08T21:59:23.549 INFO:tasks.workunit.client.0.smithi116.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:471: TEST_recovery_pool:  expr 0 + 1
2019-12-08T21:59:23.550 INFO:tasks.workunit.client.0.smithi116.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:471: TEST_recovery_pool:  ERRORS=1
2019-12-08T21:59:23.550 INFO:tasks.workunit.client.0.smithi116.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:482: TEST_recovery_pool:  cat td/osd-recovery-prio/dump.1.out
2019-12-08T21:59:23.550 INFO:tasks.workunit.client.0.smithi116.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:482: TEST_recovery_pool:  jq '.remote_reservations.in_progress[0].item'
2019-12-08T21:59:23.550 INFO:tasks.workunit.client.0.smithi116.stdout:The primary PG for test2 didn't become the in progress item
2019-12-08T21:59:23.553 INFO:tasks.workunit.client.0.smithi116.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:482: TEST_recovery_pool:  eval ITEM=null
2019-12-08T21:59:23.553 INFO:tasks.workunit.client.0.smithi116.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:482: TEST_recovery_pool:  ITEM=null
2019-12-08T21:59:23.553 INFO:tasks.workunit.client.0.smithi116.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:483: TEST_recovery_pool:  '[' null '!=' 2.0 ']'
2019-12-08T21:59:23.553 INFO:tasks.workunit.client.0.smithi116.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:485: TEST_recovery_pool:  echo 'The primary PG 2.0 didn'\''t become the in progress item on remote'
2019-12-08T21:59:23.554 INFO:tasks.workunit.client.0.smithi116.stdout:The primary PG 2.0 didn't become the in progress item on remote
2019-12-08T21:59:23.560 INFO:tasks.workunit.client.0.smithi116.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:486: TEST_recovery_pool:  expr 1 + 1
2019-12-08T21:59:23.561 INFO:tasks.workunit.client.0.smithi116.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-recovery-prio.sh:486: TEST_recovery_pool:  ERRORS=2
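
For reference, the second failing assertion (script lines 482-486 in the xtrace above) boils down to a check of roughly this shape. This is a sketch reconstructed from the trace, not the verbatim test code; the ERRORS initialization and the if/then structure are assumptions, while the jq filter, dump file, expected item, and error message are taken from the log:

ERRORS=0
# The item expected to hold the in-progress remote recovery reservation on
# osd.1 is the primary PG of the second test pool, 2.0.  Here the slot was
# still occupied by 1.0, so the filter below returned null instead.
eval ITEM=$(cat td/osd-recovery-prio/dump.1.out | jq '.remote_reservations.in_progress[0].item')
if [ "$ITEM" != "2.0" ]; then
    echo "The primary PG 2.0 didn't become the in progress item on remote"
    ERRORS=$(expr $ERRORS + 1)
fi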
