https://tracker.ceph.com/
2020-08-31T21:44:42Z
Ceph
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=174131
2020-08-31T21:44:42Z
Neha Ojha
nojha@redhat.com
<ul><li><strong>Priority</strong> changed from <i>Normal</i> to <i>High</i></li></ul><p>A similar failure:</p>
<pre>
2020-08-26T13:11:50.032 INFO:tasks.workunit.client.0.smithi043.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-handle-forward.sh:45: run: grep 'mon_command(.*"POOL1"' td/mon-handle-forward/mon.b.log
2020-08-26T13:11:50.038 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.553+0000 7f3884f62700 5 --2- v2:127.0.0.1:7301/0 >> 127.0.0.1:0/1746676809 conn(0x55ede6dd2400 0x55ede6dcea00 crc :-1 s=READ_MESSAGE_COMPLETE pgs=1 cs=0 l=1 rev1=1 rx=0 tx=0).handle_message received message m=0x55ede5f2dc80 seq=6 from=client.? type=50 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1
2020-08-26T13:11:50.039 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.553+0000 7f3880759700 1 -- v2:127.0.0.1:7301/0 <== client.? 127.0.0.1:0/1746676809 6 ==== mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 ==== 102+0+0 (crc 0 0 0) 0x55ede5f2dc80 con 0x55ede6dd2400
2020-08-26T13:11:50.039 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.553+0000 7f3880759700 0 mon.b@1(peon) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1
2020-08-26T13:11:50.040 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700 10 mon.b@1(peon).paxosservice(osdmap 1..1) dispatch 0x55ede5f2dc80 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809 con 0x55ede6dd2400
2020-08-26T13:11:50.040 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700 10 mon.b@1(peon).osd e1 preprocess_query mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809
2020-08-26T13:11:50.041 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700 10 mon.b@1(peon) e1 forward_request 5 request mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 features 4540138292837744639
2020-08-26T13:11:50.042 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700 1 -- v2:127.0.0.1:7301/0 send_to--> mon v2:127.0.0.1:7300/0 -- forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4 -- ?+0 0x55ede6dc7200
2020-08-26T13:11:50.042 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700 1 -- v2:127.0.0.1:7301/0 --> v2:127.0.0.1:7300/0 -- forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4 -- 0x55ede6dc7200 con 0x55ede5ffa400
2020-08-26T13:11:50.043 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700 5 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x55ede6dc7200 type=46 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.043 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f387f757700 20 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).prepare_send_message m=forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.043 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f387f757700 20 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).prepare_send_message encoding features 4540138292837744639 0x55ede6dc7200 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.043 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f387f757700 5 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).write_message sending message m=0x55ede6dc7200 seq=39 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.044 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f387f757700 10 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).handle_message_ack got ack seq 40 >= 39 on 0x55ede6dc7200 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.044 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.737+0000 7f3884f62700 5 --2- v2:127.0.0.1:7301/0 >> 127.0.0.1:0/1746676809 conn(0x55ede6dd2400 0x55ede6dcea00 crc :-1 s=READ_MESSAGE_COMPLETE pgs=1 cs=0 l=1 rev1=1 rx=0 tx=0).handle_message received message m=0x55ede5f2d800 seq=7 from=client.? type=50 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1
2020-08-26T13:11:50.044 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.753+0000 7f3880759700 1 -- v2:127.0.0.1:7301/0 <== client.? 127.0.0.1:0/1746676809 7 ==== mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 ==== 102+0+0 (crc 0 0 0) 0x55ede5f2d800 con 0x55ede6dd2400
2020-08-26T13:11:50.045 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.753+0000 7f3880759700 0 mon.b@1(peon) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1
2020-08-26T13:11:50.045 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.753+0000 7f3880759700 10 mon.b@1(peon).paxosservice(osdmap 1..2) dispatch 0x55ede5f2d800 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809 con 0x55ede6dd2400
2020-08-26T13:11:50.045 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700 10 mon.b@1(peon).paxosservice(osdmap 1..2) dispatch 0x55ede5f2d800 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809 con 0x55ede6dd2400
2020-08-26T13:11:50.045 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700 10 mon.b@1(peon).osd e2 preprocess_query mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809
2020-08-26T13:11:50.046 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700 10 mon.b@1(peon) e1 forward_request 7 request mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 features 4540138292837744639
2020-08-26T13:11:50.046 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700 1 -- v2:127.0.0.1:7301/0 send_to--> mon v2:127.0.0.1:7300/0 -- forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4 -- ?+0 0x55ede6dc6000
2020-08-26T13:11:50.046 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700 1 -- v2:127.0.0.1:7301/0 --> v2:127.0.0.1:7300/0 -- forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4 -- 0x55ede6dc6000 con 0x55ede5ffa400
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700 5 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x55ede6dc6000 type=46 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f387f757700 20 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).prepare_send_message m=forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f387f757700 20 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).prepare_send_message encoding features 4540138292837744639 0x55ede6dc6000 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f387f757700 5 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).write_message sending message m=0x55ede6dc6000 seq=45 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f387f757700 10 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).handle_message_ack got ack seq 45 >= 45 on 0x55ede6dc6000 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.048 INFO:tasks.workunit.client.0.smithi043.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-handle-forward.sh:45: run: return 1
</pre>
<p>/a/jafaj-2020-08-26_09:07:46-rados-wip-jan-testing-2020-08-26-0905-distro-basic-smithi/5377769</p>
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=174298
2020-09-02T21:03:59Z
Neha Ojha
nojha@redhat.com
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Triaged</i></li><li><strong>Priority</strong> changed from <i>High</i> to <i>Urgent</i></li></ul><p>I am able to reproduce this locally and it fails consistently on master <a class="external" href="https://pulpito.ceph.com/nojha-2020-09-01_20:42:17-rados:standalone-master-distro-basic-smithi/">https://pulpito.ceph.com/nojha-2020-09-01_20:42:17-rados:standalone-master-distro-basic-smithi/</a>.</p>
<p>Note that the same test passes in octopus: <a class="external" href="https://pulpito.ceph.com/nojha-2020-09-02_20:00:43-rados:standalone-octopus-distro-basic-smithi/">https://pulpito.ceph.com/nojha-2020-09-02_20:00:43-rados:standalone-octopus-distro-basic-smithi/</a></p>
<p>It looks like the "osd pool create POOL2 12" command is being sent to mon.a (127.0.0.1:7300) instead of mon.b:</p>
<pre>
nojha@vossi06:~/work/ceph/build$ grep "POOL2" out.log
../qa/standalone/mon/mon-handle-forward.sh:47: run: ceph --mon-host 127.0.0.1:7301 osd pool create POOL2 12
pool 'POOL2' created
../qa/standalone/mon/mon-handle-forward.sh:49: run: grep 'forward_request.*mon_command(.*"POOL2"' td/mon-handle-forward/mon.b.log
2020-09-01T20:39:28.113+0000 7f7137514700 5 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READ_MESSAGE_COMPLETE pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).handle_message received message m=0x5561f24ece80 seq=6 from=client.? type=50 mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1
2020-09-01T20:39:28.113+0000 7f7138516700 1 -- v2:127.0.0.1:7300/0 <== client.? 127.0.0.1:0/334991331 6 ==== mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1 ==== 102+0+0 (crc 0 0 0) 0x5561f24ece80 con 0x5561f3380480
2020-09-01T20:39:28.113+0000 7f7138516700 0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1
2020-09-01T20:39:28.113+0000 7f7138516700 0 log_channel(audit) log [INF] : from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]: dispatch
2020-09-01T20:39:28.113+0000 7f7138516700 10 mon.a@0(leader).paxosservice(osdmap 1..2) dispatch 0x5561f24ece80 mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/334991331 con 0x5561f3380480
2020-09-01T20:39:28.113+0000 7f7138516700 10 mon.a@0(leader).osd e2 preprocess_query mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/334991331
2020-09-01T20:39:28.113+0000 7f7138516700 7 mon.a@0(leader).osd e2 prepare_update mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/334991331
2020-09-01T20:39:28.125+0000 7f7138516700 10 mon.a@0(leader).log v6 logging 2020-09-01T20:39:28.118196+0000 mon.a (mon.0) 16 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]: dispatch
2020-09-01T20:39:28.197+0000 7f7136d13700 0 log_channel(audit) log [INF] : from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]': finished
2020-09-01T20:39:28.197+0000 7f7136d13700 2 mon.a@0(leader) e1 send_reply 0x5561f26912a0 0x5561f38d21a0 mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.197+0000 7f7136d13700 1 -- v2:127.0.0.1:7300/0 --> 127.0.0.1:0/334991331 -- mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1 -- 0x5561f38d21a0 con 0x5561f3380480
2020-09-01T20:39:28.197+0000 7f7136d13700 5 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READY pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x5561f38d21a0 type=51 mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.201+0000 7f7137514700 20 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READY pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).prepare_send_message m=mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.201+0000 7f7137514700 20 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READY pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).prepare_send_message encoding features 4540138292837744639 0x5561f38d21a0 mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.201+0000 7f7137514700 5 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READY pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).write_message sending message m=0x5561f38d21a0 seq=7 mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.205+0000 7f7136d13700 7 mon.a@0(leader).log v7 update_from_paxos applying incremental log 7 2020-09-01T20:39:28.118196+0000 mon.a (mon.0) 16 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]: dispatch
2020-09-01T20:39:28.217+0000 7f7136d13700 10 mon.a@0(leader).log v7 logging 2020-09-01T20:39:28.202589+0000 mon.a (mon.0) 17 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]': finished
2020-09-01T20:39:28.309+0000 7f7136d13700 7 mon.a@0(leader).log v8 update_from_paxos applying incremental log 8 2020-09-01T20:39:28.202589+0000 mon.a (mon.0) 17 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]': finished
2020-09-01T20:39:28.205+0000 7f5ef0f8a700 7 mon.b@1(peon).log v7 update_from_paxos applying incremental log 7 2020-09-01T20:39:28.118196+0000 mon.a (mon.0) 16 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]: dispatch
2020-09-01T20:39:28.313+0000 7f5ef0f8a700 7 mon.b@1(peon).log v8 update_from_paxos applying incremental log 8 2020-09-01T20:39:28.202589+0000 mon.a (mon.0) 17 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]': finished
</pre>
<p>I suspected this PR <a class="external" href="https://github.com/ceph/ceph/pull/36533">https://github.com/ceph/ceph/pull/36533</a>; after reverting it, the test passes locally for me.</p>
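The failing check above boils down to a grep over the peon's log. A minimal self-contained reproduction of that check (the log line and paths are trimmed stand-ins for illustration; the real files come from the test harness):

```shell
dir=$(mktemp -d)
# Stand-in log contents, matching the output above: mon.a (the leader)
# handled the command directly, so only its log mentions POOL2.
echo 'handle_command mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1' > "$dir/mon.a.log"
: > "$dir/mon.b.log"   # the peon never saw the command, so no forward_request

# The test's assertion (mon-handle-forward.sh:49) requires a forward_request
# line in the peon's log; with the behavior above it cannot match.
if grep -q 'forward_request.*mon_command(.*"POOL2"' "$dir/mon.b.log"; then
    result=PASS
else
    result=FAIL
fi
echo "$result"
rm -rf "$dir"
```

With the command handled entirely by the leader, the grep over mon.b's log finds nothing and the test returns 1, as in the output above.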
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=174299
2020-09-02T21:38:10Z
Patrick Donnelly
pdonnell@redhat.com
<ul><li><strong>Assignee</strong> set to <i>Patrick Donnelly</i></li><li><strong>Target version</strong> set to <i>v16.0.0</i></li><li><strong>Source</strong> set to <i>Q/A</i></li><li><strong>Backport</strong> set to <i>octopus,nautilus</i></li></ul><p>The issue is that `mon_host` is now used only for bootstrapping the MonClient. After that, the client talks to whichever monitors are currently in quorum according to the MonMap. We need to modify the test so it no longer assumes that commands go to the specific monitor listed in `mon_host`.</p>
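This behavior can be sketched as a toy shell model (an illustrative assumption-laden sketch, not actual MonClient code): the configured `mon_host` seeds only the first contact, and later commands may reach any monitor currently in quorum.

```shell
seed="mon.b"            # what the test passes via --mon-host
quorum="mon.a mon.b"    # monitors in quorum per the learned MonMap
bootstrapped=0

pick_monitor() {
    if [ "$bootstrapped" -eq 0 ]; then
        echo "$seed"        # bootstrap: use the configured seed
    else
        set -- $quorum      # afterwards: any quorum member may be chosen;
        echo "$1"           # take the first for determinism in this sketch
    fi
}

first=$(pick_monitor)
bootstrapped=1
later=$(pick_monitor)
echo "first=$first later=$later"
```

In this sketch the first contact goes to mon.b but the later command lands on mon.a, which mirrors why the test's assumption that `--mon-host 127.0.0.1:7301` pins commands to mon.b no longer holds.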
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=174300
2020-09-02T21:40:25Z
Patrick Donnelly
pdonnell@redhat.com
<ul><li><strong>Status</strong> changed from <i>Triaged</i> to <i>In Progress</i></li></ul>
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=174302
2020-09-02T21:46:05Z
Patrick Donnelly
pdonnell@redhat.com
<ul><li><strong>Related to</strong> <i><a class="issue tracker-7 status-3 priority-6 priority-high2 closed" href="/issues/46645">Fix #46645</a>: librados|libcephfs: use latest MonMap when creating from CephContext</i> added</li></ul>
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=175250
2020-09-16T21:32:21Z
Patrick Donnelly
pdonnell@redhat.com
<ul><li><strong>Status</strong> changed from <i>In Progress</i> to <i>Fix Under Review</i></li><li><strong>Pull request ID</strong> set to <i>37202</i></li></ul>
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=175413
2020-09-19T01:58:07Z
Patrick Donnelly
pdonnell@redhat.com
<ul><li><strong>Status</strong> changed from <i>Fix Under Review</i> to <i>Pending Backport</i></li></ul>
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=175711
2020-09-23T09:08:26Z
Nathan Cutler
ncutler@suse.cz
<ul><li><strong>Copied to</strong> <i><a class="issue tracker-9 status-3 priority-4 priority-default closed" href="/issues/47599">Backport #47599</a>: octopus: qa/standalone/mon/mon-handle-forward.sh failure</i> added</li></ul>
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=175713
2020-09-23T09:08:34Z
Nathan Cutler
ncutler@suse.cz
<ul><li><strong>Copied to</strong> <i><a class="issue tracker-9 status-3 priority-4 priority-default closed" href="/issues/47600">Backport #47600</a>: nautilus: qa/standalone/mon/mon-handle-forward.sh failure</i> added</li></ul>
RADOS - Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
https://tracker.ceph.com/issues/47180?journal_id=176368
2020-09-30T15:39:54Z
Nathan Cutler
ncutler@suse.cz
<ul><li><strong>Status</strong> changed from <i>Pending Backport</i> to <i>Resolved</i></li></ul><p>While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".</p>