Bug #47180 (closed): qa/standalone/mon/mon-handle-forward.sh failure

Added by Neha Ojha over 3 years ago. Updated over 3 years ago.

Status:
Resolved
Priority:
Urgent
Category:
-
Target version:
% Done:

0%

Source:
Q/A
Tags:
Backport:
octopus,nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2020-08-27T18:53:31.770 INFO:tasks.workunit.client.0.smithi050.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-handle-forward.sh:48: run:  ceph --admin-daemon /tmp/ceph-asok.24595/ceph-mon.b.asok log flush
2020-08-27T18:53:31.854 INFO:tasks.workunit.client.0.smithi050.stdout:{}
2020-08-27T18:53:31.862 INFO:tasks.workunit.client.0.smithi050.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-handle-forward.sh:49: run:  grep 'forward_request.*mon_command(.*"POOL2"' td/mon-handle-forward/mon.b.log
2020-08-27T18:53:31.868 INFO:tasks.workunit.client.0.smithi050.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-handle-forward.sh:49: run:  return 1
2020-08-27T18:53:31.868 INFO:tasks.workunit.client.0.smithi050.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2153: main:  code=1
2020-08-27T18:53:31.868 INFO:tasks.workunit.client.0.smithi050.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2155: main:  teardown td/mon-handle-forward 1

/a/teuthology-2020-08-26_07:01:02-rados-master-distro-basic-smithi/5377298
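The failing assertion (mon-handle-forward.sh line 49) greps mon.b's log for evidence that the peon forwarded the pool-create command to the leader. A minimal, self-contained illustration of that check — the sample log line below is modeled on the forward_request entries shown in the comments, not taken from this run:

```shell
# Hypothetical stand-in for a mon.b.log forward_request entry:
sample='mon.b@1(peon) e1 forward_request 5 request mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1'

# The same grep pattern the test uses against td/mon-handle-forward/mon.b.log:
if printf '%s\n' "$sample" | grep -q 'forward_request.*mon_command(.*"POOL2"' ; then
    echo "forwarded"
else
    echo "not forwarded"   # the failure mode in this ticket: no forward seen on mon.b
fi
```

In the failing run this grep returns 1 because mon.b's log contains no matching forward_request line at all.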


Related issues (3 total: 0 open, 3 closed)

Related to CephFS - Fix #46645: librados|libcephfs: use latest MonMap when creating from CephContext (Resolved) - Shyamsundar Ranganathan
Copied to RADOS - Backport #47599: octopus: qa/standalone/mon/mon-handle-forward.sh failure (Resolved)
Copied to RADOS - Backport #47600: nautilus: qa/standalone/mon/mon-handle-forward.sh failure (Resolved)
#1

Updated by Neha Ojha over 3 years ago

  • Priority changed from Normal to High

A similar failure:

2020-08-26T13:11:50.032 INFO:tasks.workunit.client.0.smithi043.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-handle-forward.sh:45: run:  grep 'mon_command(.*"POOL1"' td/mon-handle-forward/mon.b.log
2020-08-26T13:11:50.038 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.553+0000 7f3884f62700  5 --2- v2:127.0.0.1:7301/0 >> 127.0.0.1:0/1746676809 conn(0x55ede6dd2400 0x55ede6dcea00 crc :-1 s=READ_MESSAGE_COMPLETE pgs=1 cs=0 l=1 rev1=1 rx=0 tx=0).handle_message received message m=0x55ede5f2dc80 seq=6 from=client.? type=50 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1
2020-08-26T13:11:50.039 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.553+0000 7f3880759700  1 -- v2:127.0.0.1:7301/0 <== client.? 127.0.0.1:0/1746676809 6 ==== mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 ==== 102+0+0 (crc 0 0 0) 0x55ede5f2dc80 con 0x55ede6dd2400
2020-08-26T13:11:50.039 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.553+0000 7f3880759700  0 mon.b@1(peon) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1
2020-08-26T13:11:50.040 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700 10 mon.b@1(peon).paxosservice(osdmap 1..1) dispatch 0x55ede5f2dc80 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809 con 0x55ede6dd2400
2020-08-26T13:11:50.040 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700 10 mon.b@1(peon).osd e1 preprocess_query mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809
2020-08-26T13:11:50.041 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700 10 mon.b@1(peon) e1 forward_request 5 request mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 features 4540138292837744639
2020-08-26T13:11:50.042 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700  1 -- v2:127.0.0.1:7301/0 send_to--> mon v2:127.0.0.1:7300/0 -- forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4 -- ?+0 0x55ede6dc7200
2020-08-26T13:11:50.042 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700  1 -- v2:127.0.0.1:7301/0 --> v2:127.0.0.1:7300/0 -- forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4 -- 0x55ede6dc7200 con 0x55ede5ffa400
2020-08-26T13:11:50.043 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f3880759700  5 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x55ede6dc7200 type=46 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.043 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f387f757700 20 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).prepare_send_message m=forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.043 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f387f757700 20 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).prepare_send_message encoding features 4540138292837744639 0x55ede6dc7200 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.043 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f387f757700  5 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).write_message sending message m=0x55ede6dc7200 seq=39 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.044 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.557+0000 7f387f757700 10 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).handle_message_ack got ack seq 40 >= 39 on 0x55ede6dc7200 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 5 con_features 4540138292837744639) v4
2020-08-26T13:11:50.044 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.737+0000 7f3884f62700  5 --2- v2:127.0.0.1:7301/0 >> 127.0.0.1:0/1746676809 conn(0x55ede6dd2400 0x55ede6dcea00 crc :-1 s=READ_MESSAGE_COMPLETE pgs=1 cs=0 l=1 rev1=1 rx=0 tx=0).handle_message received message m=0x55ede5f2d800 seq=7 from=client.? type=50 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1
2020-08-26T13:11:50.044 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.753+0000 7f3880759700  1 -- v2:127.0.0.1:7301/0 <== client.? 127.0.0.1:0/1746676809 7 ==== mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 ==== 102+0+0 (crc 0 0 0) 0x55ede5f2d800 con 0x55ede6dd2400
2020-08-26T13:11:50.045 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.753+0000 7f3880759700  0 mon.b@1(peon) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1
2020-08-26T13:11:50.045 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.753+0000 7f3880759700 10 mon.b@1(peon).paxosservice(osdmap 1..2) dispatch 0x55ede5f2d800 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809 con 0x55ede6dd2400
2020-08-26T13:11:50.045 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700 10 mon.b@1(peon).paxosservice(osdmap 1..2) dispatch 0x55ede5f2d800 mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809 con 0x55ede6dd2400
2020-08-26T13:11:50.045 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700 10 mon.b@1(peon).osd e2 preprocess_query mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/1746676809
2020-08-26T13:11:50.046 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700 10 mon.b@1(peon) e1 forward_request 7 request mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 features 4540138292837744639
2020-08-26T13:11:50.046 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700  1 -- v2:127.0.0.1:7301/0 send_to--> mon v2:127.0.0.1:7300/0 -- forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4 -- ?+0 0x55ede6dc6000
2020-08-26T13:11:50.046 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700  1 -- v2:127.0.0.1:7301/0 --> v2:127.0.0.1:7300/0 -- forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4 -- 0x55ede6dc6000 con 0x55ede5ffa400
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f3880759700  5 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x55ede6dc6000 type=46 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f387f757700 20 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).prepare_send_message m=forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f387f757700 20 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).prepare_send_message encoding features 4540138292837744639 0x55ede6dc6000 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f387f757700  5 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).write_message sending message m=0x55ede6dc6000 seq=45 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.047 INFO:tasks.workunit.client.0.smithi043.stdout:2020-08-26T13:11:49.805+0000 7f387f757700 10 --2- v2:127.0.0.1:7301/0 >> v2:127.0.0.1:7300/0 conn(0x55ede5ffa400 0x55ede5f3c300 unknown :-1 s=READY pgs=10 cs=0 l=0 rev1=1 rx=0 tx=0).handle_message_ack got ack seq 45 >= 45 on 0x55ede6dc6000 forward(mon_command({"prefix": "osd pool create", "pool": "POOL1", "pg_num": 12} v 0) v1 caps allow * tid 7 con_features 4540138292837744639) v4
2020-08-26T13:11:50.048 INFO:tasks.workunit.client.0.smithi043.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-handle-forward.sh:45: run:  return 1

/a/jafaj-2020-08-26_09:07:46-rados-wip-jan-testing-2020-08-26-0905-distro-basic-smithi/5377769

#2

Updated by Neha Ojha over 3 years ago

  • Status changed from New to Triaged
  • Priority changed from High to Urgent

I am able to reproduce this locally, and it fails consistently on master: https://pulpito.ceph.com/nojha-2020-09-01_20:42:17-rados:standalone-master-distro-basic-smithi/

Note that the same test passes on octopus: https://pulpito.ceph.com/nojha-2020-09-02_20:00:43-rados:standalone-octopus-distro-basic-smithi/

It looks like the "osd pool create POOL2 12" command is being sent to MONA (127.0.0.1:7300) instead of MONB:

nojha@vossi06:~/work/ceph/build$ grep "POOL2" out.log 
../qa/standalone/mon/mon-handle-forward.sh:47: run:  ceph --mon-host 127.0.0.1:7301 osd pool create POOL2 12
pool 'POOL2' created
../qa/standalone/mon/mon-handle-forward.sh:49: run:  grep 'forward_request.*mon_command(.*"POOL2"' td/mon-handle-forward/mon.b.log
2020-09-01T20:39:28.113+0000 7f7137514700  5 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READ_MESSAGE_COMPLETE pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).handle_message received message m=0x5561f24ece80 seq=6 from=client.? type=50 mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1
2020-09-01T20:39:28.113+0000 7f7138516700  1 -- v2:127.0.0.1:7300/0 <== client.? 127.0.0.1:0/334991331 6 ==== mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1 ==== 102+0+0 (crc 0 0 0) 0x5561f24ece80 con 0x5561f3380480
2020-09-01T20:39:28.113+0000 7f7138516700  0 mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1
2020-09-01T20:39:28.113+0000 7f7138516700  0 log_channel(audit) log [INF] : from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]: dispatch
2020-09-01T20:39:28.113+0000 7f7138516700 10 mon.a@0(leader).paxosservice(osdmap 1..2) dispatch 0x5561f24ece80 mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/334991331 con 0x5561f3380480
2020-09-01T20:39:28.113+0000 7f7138516700 10 mon.a@0(leader).osd e2 preprocess_query mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/334991331
2020-09-01T20:39:28.113+0000 7f7138516700  7 mon.a@0(leader).osd e2 prepare_update mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1 from client.? 127.0.0.1:0/334991331
2020-09-01T20:39:28.125+0000 7f7138516700 10 mon.a@0(leader).log v6  logging 2020-09-01T20:39:28.118196+0000 mon.a (mon.0) 16 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]: dispatch
2020-09-01T20:39:28.197+0000 7f7136d13700  0 log_channel(audit) log [INF] : from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]': finished
2020-09-01T20:39:28.197+0000 7f7136d13700  2 mon.a@0(leader) e1 send_reply 0x5561f26912a0 0x5561f38d21a0 mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.197+0000 7f7136d13700  1 -- v2:127.0.0.1:7300/0 --> 127.0.0.1:0/334991331 -- mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1 -- 0x5561f38d21a0 con 0x5561f3380480
2020-09-01T20:39:28.197+0000 7f7136d13700  5 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READY pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).send_message enqueueing message m=0x5561f38d21a0 type=51 mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.201+0000 7f7137514700 20 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READY pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).prepare_send_message m=mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.201+0000 7f7137514700 20 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READY pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).prepare_send_message encoding features 4540138292837744639 0x5561f38d21a0 mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.201+0000 7f7137514700  5 --2- v2:127.0.0.1:7300/0 >> 127.0.0.1:0/334991331 conn(0x5561f3380480 0x5561f33a0000 crc :-1 s=READY pgs=2 cs=0 l=1 rev1=1 rx=0 tx=0).write_message sending message m=0x5561f38d21a0 seq=7 mon_command_ack([{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]=0 pool 'POOL2' created v3) v1
2020-09-01T20:39:28.205+0000 7f7136d13700  7 mon.a@0(leader).log v7 update_from_paxos applying incremental log 7 2020-09-01T20:39:28.118196+0000 mon.a (mon.0) 16 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]: dispatch
2020-09-01T20:39:28.217+0000 7f7136d13700 10 mon.a@0(leader).log v7  logging 2020-09-01T20:39:28.202589+0000 mon.a (mon.0) 17 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]': finished
2020-09-01T20:39:28.309+0000 7f7136d13700  7 mon.a@0(leader).log v8 update_from_paxos applying incremental log 8 2020-09-01T20:39:28.202589+0000 mon.a (mon.0) 17 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]': finished
2020-09-01T20:39:28.205+0000 7f5ef0f8a700  7 mon.b@1(peon).log v7 update_from_paxos applying incremental log 7 2020-09-01T20:39:28.118196+0000 mon.a (mon.0) 16 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]: dispatch
2020-09-01T20:39:28.313+0000 7f5ef0f8a700  7 mon.b@1(peon).log v8 update_from_paxos applying incremental log 8 2020-09-01T20:39:28.202589+0000 mon.a (mon.0) 17 : audit [INF] from='client.? 127.0.0.1:0/334991331' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12}]': finished

I suspected this PR https://github.com/ceph/ceph/pull/36533, so I reverted it; with the revert, the test passes fine locally for me.

#3

Updated by Patrick Donnelly over 3 years ago

  • Assignee set to Patrick Donnelly
  • Target version set to v16.0.0
  • Source set to Q/A
  • Backport set to octopus,nautilus

The issue is that `mon_host` is now only used for bootstrapping the MonClient. After bootstrap, the client uses whichever monitors are currently in quorum according to the MonMap. We need to modify the test so it no longer depends on `mon_host` restricting the client to a subset of the monitors actually in quorum.
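This explains why the old assertion is now fragile: since the client may learn the full MonMap after bootstrap and contact the leader (mon.a) directly, mon.b's log can legitimately contain no forward_request line even though the command succeeds. A sketch of the two outcomes, using hypothetical sample lines modeled on the logs above (this is illustrative only, not the change made in PR 37202):

```shell
# Sample log lines standing in for the two monitors' logs after the pool create.
# With the new MonClient behavior, the leader may handle the command directly:
mona='mon.a@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "POOL2", "pg_num": 12} v 0) v1'
monb=''   # mon.b never saw the command, so there is nothing to forward

# The test's original assertion fails, even though nothing is broken:
printf '%s\n' "$monb" | grep -q 'forward_request.*mon_command(.*"POOL2"' \
    || echo "old check fails: no forward seen on mon.b"

# ...because the leader processed the command without any forwarding:
printf '%s\n' "$mona" | grep -q 'handle_command.*"POOL2"' \
    && echo "leader handled the command directly"
```

So the test's assumption — that pointing `--mon-host` at mon.b guarantees mon.b receives and forwards the command — no longer holds.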

#4

Updated by Patrick Donnelly over 3 years ago

  • Status changed from Triaged to In Progress
#5

Updated by Patrick Donnelly over 3 years ago

  • Related to Fix #46645: librados|libcephfs: use latest MonMap when creating from CephContext added
#6

Updated by Patrick Donnelly over 3 years ago

  • Status changed from In Progress to Fix Under Review
  • Pull request ID set to 37202
#7

Updated by Patrick Donnelly over 3 years ago

  • Status changed from Fix Under Review to Pending Backport
#8

Updated by Nathan Cutler over 3 years ago

  • Copied to Backport #47599: octopus: qa/standalone/mon/mon-handle-forward.sh failure added
#9

Updated by Nathan Cutler over 3 years ago

  • Copied to Backport #47600: nautilus: qa/standalone/mon/mon-handle-forward.sh failure added
#10

Updated by Nathan Cutler over 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
