Bug #62449

test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure

Added by Laura Flores 9 months ago. Updated 6 months ago.

Status:
Pending Backport
Priority:
Normal
Target version:
-
% Done:
0%

Source:
Tags:
notifications backport_processed
Backport:
reef
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/yuriw-2023-08-11_02:49:40-rados-wip-yuri4-testing-2023-08-10-1739-distro-default-smithi/7367069

2023-08-11T10:27:29.321 INFO:tasks.workunit.client.0.smithi088.stdout:[ RUN      ] TestCls2PCQueue.MultiProducer
2023-08-11T10:27:30.684 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:30 smithi196 ceph-mon[163147]: osdmap e866: 8 total, 8 up, 8 in
2023-08-11T10:27:30.684 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:30 smithi196 ceph-mon[163147]: pgmap v972: 105 pgs: 105 active+clean; 583 KiB data, 2.9 GiB used, 712 GiB / 715 GiB avail
2023-08-11T10:27:30.787 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:30 smithi088 ceph-mon[178644]: osdmap e866: 8 total, 8 up, 8 in
2023-08-11T10:27:30.788 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:30 smithi088 ceph-mon[178644]: pgmap v972: 105 pgs: 105 active+clean; 583 KiB data, 2.9 GiB used, 712 GiB / 715 GiB avail
2023-08-11T10:27:30.788 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:30 smithi088 ceph-mon[182061]: osdmap e866: 8 total, 8 up, 8 in
2023-08-11T10:27:30.789 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:30 smithi088 ceph-mon[182061]: pgmap v972: 105 pgs: 105 active+clean; 583 KiB data, 2.9 GiB used, 712 GiB / 715 GiB avail
2023-08-11T10:27:31.364 INFO:tasks.workunit.client.0.smithi088.stdout:/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.2.6-882-gfd55b450/rpm/el8/BUILD/ceph-17.2.6-882-gfd55b450/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:720: Failure
2023-08-11T10:27:31.364 INFO:tasks.workunit.client.0.smithi088.stdout:Expected equality of these values:
2023-08-11T10:27:31.364 INFO:tasks.workunit.client.0.smithi088.stdout:  0
2023-08-11T10:27:31.364 INFO:tasks.workunit.client.0.smithi088.stdout:  ioctx.operate(queue_name, &op)
2023-08-11T10:27:31.365 INFO:tasks.workunit.client.0.smithi088.stdout:    Which is: -22
2023-08-11T10:27:31.684 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:31 smithi196 ceph-mon[163147]: osdmap e867: 8 total, 8 up, 8 in
2023-08-11T10:27:31.684 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:31 smithi196 ceph-mon[163147]: from='client.? 172.21.15.88:0/2067352807' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2023-08-11T10:27:31.684 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:31 smithi196 ceph-mon[163147]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2023-08-11T10:27:31.786 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:31 smithi088 ceph-mon[178644]: osdmap e867: 8 total, 8 up, 8 in
2023-08-11T10:27:31.786 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:31 smithi088 ceph-mon[178644]: from='client.? 172.21.15.88:0/2067352807' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2023-08-11T10:27:31.787 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:31 smithi088 ceph-mon[178644]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2023-08-11T10:27:31.787 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:31 smithi088 ceph-mon[182061]: osdmap e867: 8 total, 8 up, 8 in
2023-08-11T10:27:31.787 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:31 smithi088 ceph-mon[182061]: from='client.? 172.21.15.88:0/2067352807' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2023-08-11T10:27:31.787 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:31 smithi088 ceph-mon[182061]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2023-08-11T10:27:32.684 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:32 smithi196 ceph-mon[163147]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-17","app": "rados","yes_i_really_mean_it": true}]': finished
2023-08-11T10:27:32.684 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:32 smithi196 ceph-mon[163147]: osdmap e868: 8 total, 8 up, 8 in
2023-08-11T10:27:32.685 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:32 smithi196 ceph-mon[163147]: pgmap v975: 137 pgs: 8 creating+peering, 21 unknown, 108 active+clean; 583 KiB data, 2.9 GiB used, 712 GiB / 715 GiB avail
2023-08-11T10:27:32.786 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:32 smithi088 ceph-mon[178644]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-17","app": "rados","yes_i_really_mean_it": true}]': finished
2023-08-11T10:27:32.786 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:32 smithi088 ceph-mon[178644]: osdmap e868: 8 total, 8 up, 8 in
2023-08-11T10:27:32.786 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:32 smithi088 ceph-mon[178644]: pgmap v975: 137 pgs: 8 creating+peering, 21 unknown, 108 active+clean; 583 KiB data, 2.9 GiB used, 712 GiB / 715 GiB avail
2023-08-11T10:27:32.787 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:32 smithi088 ceph-mon[182061]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-17","app": "rados","yes_i_really_mean_it": true}]': finished
2023-08-11T10:27:32.787 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:32 smithi088 ceph-mon[182061]: osdmap e868: 8 total, 8 up, 8 in
2023-08-11T10:27:32.787 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:32 smithi088 ceph-mon[182061]: pgmap v975: 137 pgs: 8 creating+peering, 21 unknown, 108 active+clean; 583 KiB data, 2.9 GiB used, 712 GiB / 715 GiB avail
2023-08-11T10:27:33.286 INFO:journalctl@ceph.mgr.y.smithi088.stdout:Aug 11 10:27:32 smithi088 conmon[171663]: ::ffff:172.21.15.196 - - [11/Aug/2023:10:27:32] "GET /metrics HTTP/1.1" 200 33816 "" "Prometheus/2.43.0" 
2023-08-11T10:27:33.684 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:33 smithi196 ceph-mon[163147]: osdmap e869: 8 total, 8 up, 8 in
2023-08-11T10:27:33.786 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:33 smithi088 ceph-mon[178644]: osdmap e869: 8 total, 8 up, 8 in
2023-08-11T10:27:33.786 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:33 smithi088 ceph-mon[182061]: osdmap e869: 8 total, 8 up, 8 in
2023-08-11T10:27:34.934 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:34 smithi196 ceph-mon[163147]: pgmap v977: 137 pgs: 8 creating+peering, 7 unknown, 122 active+clean; 802 KiB data, 2.9 GiB used, 712 GiB / 715 GiB avail; 605 KiB/s rd, 1.9 MiB/s wr, 2.05k op/s
2023-08-11T10:27:35.036 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:34 smithi088 ceph-mon[178644]: pgmap v977: 137 pgs: 8 creating+peering, 7 unknown, 122 active+clean; 802 KiB data, 2.9 GiB used, 712 GiB / 715 GiB avail; 605 KiB/s rd, 1.9 MiB/s wr, 2.05k op/s
2023-08-11T10:27:35.037 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:34 smithi088 ceph-mon[182061]: pgmap v977: 137 pgs: 8 creating+peering, 7 unknown, 122 active+clean; 802 KiB data, 2.9 GiB used, 712 GiB / 715 GiB avail; 605 KiB/s rd, 1.9 MiB/s wr, 2.05k op/s
2023-08-11T10:27:37.185 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:36 smithi196 ceph-mon[163147]: pgmap v978: 137 pgs: 3 creating+peering, 134 active+clean; 802 KiB data, 3.0 GiB used, 712 GiB / 715 GiB avail; 457 KiB/s rd, 1.4 MiB/s wr, 1.55k op/s
2023-08-11T10:27:37.286 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:36 smithi088 ceph-mon[182061]: pgmap v978: 137 pgs: 3 creating+peering, 134 active+clean; 802 KiB data, 3.0 GiB used, 712 GiB / 715 GiB avail; 457 KiB/s rd, 1.4 MiB/s wr, 1.55k op/s
2023-08-11T10:27:37.287 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:36 smithi088 ceph-mon[178644]: pgmap v978: 137 pgs: 3 creating+peering, 134 active+clean; 802 KiB data, 3.0 GiB used, 712 GiB / 715 GiB avail; 457 KiB/s rd, 1.4 MiB/s wr, 1.55k op/s
2023-08-11T10:27:39.184 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:38 smithi196 ceph-mon[163147]: pgmap v979: 137 pgs: 137 active+clean; 1.9 MiB data, 3.0 GiB used, 712 GiB / 715 GiB avail; 2.4 MiB/s rd, 7.6 MiB/s wr, 8.37k op/s
2023-08-11T10:27:39.230 INFO:tasks.workunit.client.0.smithi088.stdout:/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.2.6-882-gfd55b450/rpm/el8/BUILD/ceph-17.2.6-882-gfd55b450/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:726: Failure
2023-08-11T10:27:39.230 INFO:tasks.workunit.client.0.smithi088.stdout:Expected equality of these values:
2023-08-11T10:27:39.231 INFO:tasks.workunit.client.0.smithi088.stdout:  consume_count
2023-08-11T10:27:39.231 INFO:tasks.workunit.client.0.smithi088.stdout:    Which is: 0
2023-08-11T10:27:39.231 INFO:tasks.workunit.client.0.smithi088.stdout:  number_of_ops*number_of_elements*max_producer_count
2023-08-11T10:27:39.231 INFO:tasks.workunit.client.0.smithi088.stdout:    Which is: 69000
2023-08-11T10:27:39.286 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:38 smithi088 ceph-mon[178644]: pgmap v979: 137 pgs: 137 active+clean; 1.9 MiB data, 3.0 GiB used, 712 GiB / 715 GiB avail; 2.4 MiB/s rd, 7.6 MiB/s wr, 8.37k op/s
2023-08-11T10:27:39.286 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:38 smithi088 ceph-mon[182061]: pgmap v979: 137 pgs: 137 active+clean; 1.9 MiB data, 3.0 GiB used, 712 GiB / 715 GiB avail; 2.4 MiB/s rd, 7.6 MiB/s wr, 8.37k op/s
2023-08-11T10:27:39.919 INFO:tasks.workunit.client.0.smithi088.stdout:[  FAILED  ] TestCls2PCQueue.MultiProducer (10599 ms)

2023-08-11T10:27:39.919 INFO:tasks.workunit.client.0.smithi088.stdout:[ RUN      ] TestCls2PCQueue.AsyncConsumer
2023-08-11T10:27:41.184 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:40 smithi196 ceph-mon[163147]: pgmap v980: 137 pgs: 137 active+clean; 1.9 MiB data, 3.0 GiB used, 712 GiB / 715 GiB avail; 2.1 MiB/s rd, 6.7 MiB/s wr, 7.40k op/s
2023-08-11T10:27:41.184 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:40 smithi196 ceph-mon[163147]: osdmap e870: 8 total, 8 up, 8 in
2023-08-11T10:27:41.286 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:40 smithi088 ceph-mon[178644]: pgmap v980: 137 pgs: 137 active+clean; 1.9 MiB data, 3.0 GiB used, 712 GiB / 715 GiB avail; 2.1 MiB/s rd, 6.7 MiB/s wr, 7.40k op/s
2023-08-11T10:27:41.287 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:40 smithi088 ceph-mon[178644]: osdmap e870: 8 total, 8 up, 8 in
2023-08-11T10:27:41.287 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:40 smithi088 ceph-mon[182061]: pgmap v980: 137 pgs: 137 active+clean; 1.9 MiB data, 3.0 GiB used, 712 GiB / 715 GiB avail; 2.1 MiB/s rd, 6.7 MiB/s wr, 7.40k op/s
2023-08-11T10:27:41.287 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:40 smithi088 ceph-mon[182061]: osdmap e870: 8 total, 8 up, 8 in
2023-08-11T10:27:42.286 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[178644]: osdmap e871: 8 total, 8 up, 8 in
2023-08-11T10:27:42.287 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[178644]: from='client.? 172.21.15.88:0/2020995373' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2023-08-11T10:27:42.287 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[178644]: from='mgr.34107 172.21.15.88:0/3981614991' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2023-08-11T10:27:42.287 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[178644]: from='client.? 172.21.15.88:0/2020995373' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-18","app": "rados","yes_i_really_mean_it": true}]': finished
2023-08-11T10:27:42.288 INFO:journalctl@ceph.mon.a.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[178644]: osdmap e872: 8 total, 8 up, 8 in
2023-08-11T10:27:42.288 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[182061]: osdmap e871: 8 total, 8 up, 8 in
2023-08-11T10:27:42.288 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[182061]: from='client.? 172.21.15.88:0/2020995373' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2023-08-11T10:27:42.289 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[182061]: from='mgr.34107 172.21.15.88:0/3981614991' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2023-08-11T10:27:42.289 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[182061]: from='client.? 172.21.15.88:0/2020995373' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-18","app": "rados","yes_i_really_mean_it": true}]': finished
2023-08-11T10:27:42.289 INFO:journalctl@ceph.mon.c.smithi088.stdout:Aug 11 10:27:41 smithi088 ceph-mon[182061]: osdmap e872: 8 total, 8 up, 8 in
2023-08-11T10:27:42.435 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:41 smithi196 ceph-mon[163147]: osdmap e871: 8 total, 8 up, 8 in
2023-08-11T10:27:42.435 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:41 smithi196 ceph-mon[163147]: from='client.? 172.21.15.88:0/2020995373' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2023-08-11T10:27:42.435 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:41 smithi196 ceph-mon[163147]: from='mgr.34107 172.21.15.88:0/3981614991' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2023-08-11T10:27:42.435 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:41 smithi196 ceph-mon[163147]: from='client.? 172.21.15.88:0/2020995373' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-smithi088-219235-18","app": "rados","yes_i_really_mean_it": true}]': finished
2023-08-11T10:27:42.435 INFO:journalctl@ceph.mon.b.smithi196.stdout:Aug 11 10:27:41 smithi196 ceph-mon[163147]: osdmap e872: 8 total, 8 up, 8 in
2023-08-11T10:27:42.650 INFO:tasks.workunit.client.0.smithi088.stdout:/home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.2.6-882-gfd55b450/rpm/el8/BUILD/ceph-17.2.6-882-gfd55b450/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:780: Failure
2023-08-11T10:27:42.650 INFO:tasks.workunit.client.0.smithi088.stdout:Expected equality of these values:
2023-08-11T10:27:42.650 INFO:tasks.workunit.client.0.smithi088.stdout:  0
2023-08-11T10:27:42.650 INFO:tasks.workunit.client.0.smithi088.stdout:  ioctx.operate(queue_name, &wop)
2023-08-11T10:27:42.650 INFO:tasks.workunit.client.0.smithi088.stdout:    Which is: -22
2023-08-11T10:27:42.942 INFO:tasks.workunit.client.0.smithi088.stdout:[  FAILED  ] TestCls2PCQueue.AsyncConsumer (3020 ms)
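
The ioctx.operate() assertions that fail above wrap a synchronous write op against the queue object; librados returns negated errno values, so the -22 is -EINVAL. For context, a minimal sketch of the producer-side pattern the test exercises, assuming the cls_2pc_queue client API in src/cls/2pc_queue/cls_2pc_queue_client.h (the constants and the surrounding loop are hypothetical, not the verbatim test body):

#include <cassert>
#include <string>
#include <vector>
#include <rados/librados.hpp>
#include "cls/2pc_queue/cls_2pc_queue_client.h"

// Hedged sketch of one producer batch in the two-phase pattern
// (reserve, then commit); not the verbatim test code.
void produce_one_batch(librados::IoCtx& ioctx, const std::string& queue_name) {
  // Phase 1: reserve room for the batch on the queue object.
  cls_2pc_reservation::id_t res_id = cls_2pc_reservation::NO_ID;
  constexpr uint32_t entries = 10;             // hypothetical batch size
  constexpr uint64_t reservation_size = 4096;  // hypothetical total bytes
  int r = cls_2pc_queue_reserve(ioctx, queue_name, reservation_size, entries, res_id);
  assert(r == 0);

  // Phase 2: commit the reserved entries in a single write op. This
  // ioctx.operate() call is the one returning -22 (EINVAL) in the logs.
  std::vector<ceph::buffer::list> data(entries);
  librados::ObjectWriteOperation op;
  cls_2pc_queue_commit(op, data, res_id);
  r = ioctx.operate(queue_name, &op);
  assert(r == 0);
}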

Subtasks 1 (1 open, 0 closed)

Bug #63355: test/cls_2pc_queue: fails during migration tests (Pending Backport, Ali Masarwa)

Related issues 1 (1 open, 0 closed)

Copied to rgw - Backport #63498: reef: test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure (New, Yuval Lifshitz)
#1

Updated by Laura Flores 9 months ago

/a/yuriw-2023-08-11_02:49:40-rados-wip-yuri4-testing-2023-08-10-1739-distro-default-smithi/7366915

#2

Updated by Casey Bodley 9 months ago

  • Assignee set to Yuval Lifshitz
  • Tags set to notifications

strange that it's returning EINVAL
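
(As a quick sanity check: librados returns negated errno values, and -22 decodes to EINVAL, "Invalid argument".)

#include <cerrno>
#include <cstring>
#include <iostream>

int main() {
  const int ret = -22;                        // value seen in the test logs
  static_assert(EINVAL == 22, "Linux errno"); // EINVAL is 22 on Linux
  std::cout << strerror(-ret) << '\n';        // prints "Invalid argument"
}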

#3

Updated by Laura Flores 9 months ago

/a/yuriw-2023-08-17_21:18:20-rados-wip-yuri11-testing-2023-08-17-0823-distro-default-smithi/7372057

#4

Updated by Matan Breizman 9 months ago

/a/yuriw-2023-08-22_18:16:03-rados-wip-yuri10-testing-2023-08-17-1444-distro-default-smithi/7376758

#5

Updated by Yuval Lifshitz 8 months ago

what is the minimal teuthology test that runs "ceph_test_cls_2pc_queue"?

#6

Updated by Yuval Lifshitz 8 months ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 53265
#7

Updated by Laura Flores 8 months ago

Yuval Lifshitz wrote:

what is the minimal teuthology test that runs "ceph_test_cls_2pc_queue"?

You can use this test, for example, to run ceph_test_cls_2pc_queue: http://pulpito.front.sepia.ceph.com/yuriw-2023-08-22_18:16:03-rados-wip-yuri10-testing-2023-08-17-1444-distro-default-smithi/7376758/

rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_3.0} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}}

It comes from this workunit in the qa directory: qa/workunits/cls/test_cls_2pc_queue.sh

#8

Updated by Laura Flores 8 months ago

/a/yuriw-2023-08-15_18:58:56-rados-wip-yuri3-testing-2023-08-15-0955-distro-default-smithi/7369400

#9

Updated by Ilya Dryomov 8 months ago

This also showed up in upgrade suites:

https://pulpito.ceph.com/dis-2023-09-07_16:36:12-upgrade:pacific-x-main-distro-default-smithi/
https://pulpito.ceph.com/dis-2023-09-07_23:24:04-upgrade:pacific-x-main-distro-default-smithi/

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi120 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer fail on unexpected EINVAL errors, and TestCls2PCQueue.MultiProducerConsumer hangs on top of that (the workunit is eventually killed by the 3-hour timeout wrapper, hence status 124):

2023-09-07T17:22:35.959 INFO:tasks.workunit.client.0.smithi120.stdout:[ RUN      ] TestCls2PCQueue.MultiProducer
2023-09-07T17:22:38.016 INFO:tasks.workunit.client.0.smithi120.stdout:/build/ceph-16.2.14-21-gabd50cb2/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:718: Failure
2023-09-07T17:22:38.016 INFO:tasks.workunit.client.0.smithi120.stdout:Expected equality of these values:
2023-09-07T17:22:38.017 INFO:tasks.workunit.client.0.smithi120.stdout:  0
2023-09-07T17:22:38.017 INFO:tasks.workunit.client.0.smithi120.stdout:  ioctx.operate(queue_name, &op)
2023-09-07T17:22:38.017 INFO:tasks.workunit.client.0.smithi120.stdout:    Which is: -22
2023-09-07T17:22:49.629 INFO:tasks.workunit.client.0.smithi120.stdout:/build/ceph-16.2.14-21-gabd50cb2/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:724: Failure
2023-09-07T17:22:49.629 INFO:tasks.workunit.client.0.smithi120.stdout:Expected equality of these values:
2023-09-07T17:22:49.629 INFO:tasks.workunit.client.0.smithi120.stdout:  consume_count
2023-09-07T17:22:49.629 INFO:tasks.workunit.client.0.smithi120.stdout:    Which is: 0
2023-09-07T17:22:49.630 INFO:tasks.workunit.client.0.smithi120.stdout:  number_of_ops*number_of_elements*max_producer_count
2023-09-07T17:22:49.630 INFO:tasks.workunit.client.0.smithi120.stdout:    Which is: 69000
2023-09-07T17:22:50.510 INFO:tasks.workunit.client.0.smithi120.stdout:[  FAILED  ] TestCls2PCQueue.MultiProducer (14551 ms)
2023-09-07T17:22:50.510 INFO:tasks.workunit.client.0.smithi120.stdout:[ RUN      ] TestCls2PCQueue.AsyncConsumer
2023-09-07T17:22:53.673 INFO:tasks.workunit.client.0.smithi120.stdout:/build/ceph-16.2.14-21-gabd50cb2/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:778: Failure
2023-09-07T17:22:53.673 INFO:tasks.workunit.client.0.smithi120.stdout:Expected equality of these values:
2023-09-07T17:22:53.674 INFO:tasks.workunit.client.0.smithi120.stdout:  0
2023-09-07T17:22:53.674 INFO:tasks.workunit.client.0.smithi120.stdout:  ioctx.operate(queue_name, &wop)
2023-09-07T17:22:53.674 INFO:tasks.workunit.client.0.smithi120.stdout:    Which is: -22
2023-09-07T17:22:54.545 INFO:tasks.workunit.client.0.smithi120.stdout:[  FAILED  ] TestCls2PCQueue.AsyncConsumer (4035 ms)
2023-09-07T17:22:54.545 INFO:tasks.workunit.client.0.smithi120.stdout:[ RUN      ] TestCls2PCQueue.MultiProducerConsumer
2023-09-07T17:22:56.719 INFO:tasks.workunit.client.0.smithi120.stdout:/build/ceph-16.2.14-21-gabd50cb2/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:851: Failure
2023-09-07T17:22:56.719 INFO:tasks.workunit.client.0.smithi120.stdout:Expected equality of these values:
2023-09-07T17:22:56.719 INFO:tasks.workunit.client.0.smithi120.stdout:  0
2023-09-07T17:22:56.719 INFO:tasks.workunit.client.0.smithi120.stdout:  ioctx.operate(queue_name, &op)
2023-09-07T17:22:56.720 INFO:tasks.workunit.client.0.smithi120.stdout:    Which is: -22
2023-09-07T17:22:56.720 INFO:tasks.workunit.client.0.smithi120.stdout:/build/ceph-16.2.14-21-gabd50cb2/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:851: Failure
2023-09-07T17:22:56.721 INFO:tasks.workunit.client.0.smithi120.stdout:Expected equality of these values:
2023-09-07T17:22:56.721 INFO:tasks.workunit.client.0.smithi120.stdout:  0
2023-09-07T17:22:56.721 INFO:tasks.workunit.client.0.smithi120.stdout:  ioctx.operate(queue_name, &op)
2023-09-07T17:22:56.721 INFO:tasks.workunit.client.0.smithi120.stdout:    Which is: -22
2023-09-07T17:22:56.721 INFO:tasks.workunit.client.0.smithi120.stdout:/build/ceph-16.2.14-21-gabd50cb2/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:851: Failure
2023-09-07T17:22:56.722 INFO:tasks.workunit.client.0.smithi120.stdout:Expected equality of these values:
2023-09-07T17:22:56.722 INFO:tasks.workunit.client.0.smithi120.stdout:  0
2023-09-07T17:22:56.722 INFO:tasks.workunit.client.0.smithi120.stdout:  ioctx.operate(queue_name, &op)
2023-09-07T17:22:56.722 INFO:tasks.workunit.client.0.smithi120.stdout:    Which is: -22
2023-09-07T17:22:56.723 INFO:tasks.workunit.client.0.smithi120.stdout:/build/ceph-16.2.14-21-gabd50cb2/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:851: Failure
2023-09-07T17:22:56.723 INFO:tasks.workunit.client.0.smithi120.stdout:Expected equality of these values:
2023-09-07T17:22:56.723 INFO:tasks.workunit.client.0.smithi120.stdout:  0
2023-09-07T17:22:56.723 INFO:tasks.workunit.client.0.smithi120.stdout:  ioctx.operate(queue_name, &op)
2023-09-07T17:22:56.723 INFO:tasks.workunit.client.0.smithi120.stdout:    Which is: -22
2023-09-07T17:22:56.725 INFO:tasks.workunit.client.0.smithi120.stdout:/build/ceph-16.2.14-21-gabd50cb2/src/test/cls_2pc_queue/test_cls_2pc_queue.cc:851: Failure
2023-09-07T17:22:56.725 INFO:tasks.workunit.client.0.smithi120.stdout:Expected equality of these values:
2023-09-07T17:22:56.725 INFO:tasks.workunit.client.0.smithi120.stdout:  0
2023-09-07T17:22:56.725 INFO:tasks.workunit.client.0.smithi120.stdout:  ioctx.operate(queue_name, &op)
2023-09-07T17:22:56.726 INFO:tasks.workunit.client.0.smithi120.stdout:    Which is: -22
[...]
2023-09-07T20:21:47.567 DEBUG:teuthology.orchestra.run:got remote process result: 124
2023-09-07T20:21:47.799 INFO:tasks.workunit:Stopping ['cls'] on client.0...
#10

Updated by Yuval Lifshitz 8 months ago

  • Status changed from Fix Under Review to Resolved
#11

Updated by Aishwarya Mathuria 7 months ago

/a/yuriw-2023-10-05_21:43:37-rados-wip-yuri6-testing-2023-10-04-0901-distro-default-smithi/7412046

#12

Updated by Laura Flores 6 months ago

  • Status changed from Resolved to New

Still seeing this even though the fix has merged:
/a/yuriw-2023-10-24_00:11:54-rados-wip-yuri4-testing-2023-10-23-0903-distro-default-smithi/7435691
/a/yuriw-2023-10-24_00:11:54-rados-wip-yuri4-testing-2023-10-23-0903-distro-default-smithi/7436000

#13

Updated by Yuval Lifshitz 6 months ago

created a new sub-task: https://tracker.ceph.com/issues/63355
to capture the migration test failures, which are unrelated to the test fix: https://github.com/ceph/ceph/pull/53265

#14

Updated by Laura Flores 6 months ago

  • Backport set to reef

/a/yuriw-2023-10-31_14:43:48-rados-wip-yuri4-testing-2023-10-30-1117-distro-default-smithi/7442226

#15

Updated by Laura Flores 6 months ago

/a/yuriw-2023-10-24_00:11:03-rados-wip-yuri2-testing-2023-10-23-0917-distro-default-smithi/7435568

#16

Updated by Yuval Lifshitz 6 months ago

Laura Flores wrote:

/a/yuriw-2023-10-31_14:43:48-rados-wip-yuri4-testing-2023-10-30-1117-distro-default-smithi/7442226

Laura, this looks like an upgrade failure: http://qa-proxy.ceph.com/teuthology/yuriw-2023-10-31_14:43:48-rados-wip-yuri4-testing-2023-10-30-1117-distro-default-smithi/7442072/teuthology.log

this issue is going to be addressed in a separate PR, and tracked here: https://tracker.ceph.com/issues/63355

#17

Updated by Casey Bodley 6 months ago

Yuval, since we merged https://github.com/ceph/ceph/pull/53265 for this, should we go ahead and move this to pending backport?

#18

Updated by Yuval Lifshitz 6 months ago

  • Status changed from New to Pending Backport
#19

Updated by Backport Bot 6 months ago

  • Copied to Backport #63498: reef: test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer failure added
#20

Updated by Backport Bot 6 months ago

  • Tags changed from notifications to notifications backport_processed