Bug #36540

msg: messages are queued but not sent

Added by Patrick Donnelly about 2 months ago. Updated about 2 months ago.

Status:
New
Priority:
Urgent
Assignee:
-
Category:
-
Target version:
Start date:
Due date:
% Done:

0%

Source:
Q/A
Tags:
Backport:
mimic,luminous
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:

Description

In CephFS testing, we've observed transient failures caused by what
appears to be messages being dropped [1,2]. These appear to have been
caused by the recent refactor PRs [3,4], but I have no evidence other
than the problems appearing in testing after [4] was merged.

I'm running tests [5] to see if I can get more debugging (debug ms =
20), but I wanted to canvass for ideas/advice before I get much deeper.
Has anyone else seen transient failures with messages getting dropped?
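
For reference, the verbosity used in the logs below can be enabled in
ceph.conf; a minimal snippet for the MDS side (the same option applies
to other daemons):

```ini
[mds]
debug ms = 20
```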

[1] http://tracker.ceph.com/issues/36389
[2] http://tracker.ceph.com/issues/36349
[3] https://github.com/ceph/ceph/pull/23415
[4] https://github.com/ceph/ceph/pull/24305
[5] http://pulpito.ceph.com/?branch=wip-pdonnell-testing-20181011.152759

See also thread on ceph-devel "msgr bug in master caused by recent protocol refactor (?)".

Example with "debug ms = 20":

2018-10-16 19:57:47.356 1b1c9700 5 mds.beacon.e Sending beacon
up:active seq 2214
2018-10-16 19:57:47.356 1b1c9700 1 -- 172.21.15.179:6816/2028340967
--> 172.21.15.179:6789/0 -- mdsbeacon(4307/e up:active seq 2214 v283)
v7 -- 0x209f2f40 con 0
2018-10-16 19:57:47.356 1b1c9700 20 -- 172.21.15.179:6816/2028340967 >>
172.21.15.179:6789/0 conn(0x1f544e20 legacy :-1 s=OPENED pgs=143 cs=1
l=1).prepare_send_message m mdsbeacon(4307/e up:active seq 2214
v283) v7
2018-10-16 19:57:47.357 1b1c9700 20 -- 172.21.15.179:6816/2028340967 >>
172.21.15.179:6789/0 conn(0x1f544e20 legacy :-1 s=OPENED pgs=143 cs=1
l=1).prepare_send_message encoding features 4611087854031142911
0x209f2f40 mdsbeacon(4307/e up:active seq 2214 v283) v7
2018-10-16 19:57:47.357 1b1c9700 15 -- 172.21.15.179:6816/2028340967 >>
172.21.15.179:6789/0 conn(0x1f544e20 legacy :-1 s=OPENED pgs=143 cs=1
l=1).send_message inline write is denied, reschedule m=0x209f2f40

From: /ceph/teuthology-archive/pdonnell-2018-10-16_16:46:31-multimds-wip-pdonnell-testing-20181011.152759-distro-basic-smithi/3148285/remote/smithi179/log/ceph-mds.e.log.gz

It looks like the messenger just never sent the message. FWIW, the mds
and mon in this particular case are on the same host. I looked around
at other beacon sends (grep "beacon.e") and the actual send by the
msgr happens promptly afterwards. For some reason, that's not the case
for seq=2214 ... but I'm not that familiar with msgr debugging.
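
The "inline write is denied, reschedule" line suggests send_message only
queued the message and scheduled a write event for the event loop, and
that event was then never dispatched. A toy model of that pattern (all
names here are illustrative, not Ceph's actual API):

```python
import collections

class Connection:
    """Toy model of a connection that defers writes to an event loop,
    loosely mirroring the queue-then-reschedule behavior in the log.
    Hypothetical sketch only; not the real AsyncMessenger code."""

    def __init__(self):
        self.out_q = collections.deque()  # queued, not yet on the wire
        self.sent = []                    # actually written
        self.pending_events = []          # scheduled write events

    def send_message(self, m):
        # Inline write is denied: queue the message and schedule a
        # write event instead of writing it on the caller's thread.
        self.out_q.append(m)
        self.pending_events.append(self.write_event)

    def write_event(self):
        # When the event fires, the event loop drains the whole queue.
        while self.out_q:
            self.sent.append(self.out_q.popleft())

    def run_events(self):
        events, self.pending_events = self.pending_events, []
        for ev in events:
            ev()

conn = Connection()
conn.send_message("mdsbeacon seq 2214")
# If the scheduled write event is lost or never dispatched, the message
# sits in out_q indefinitely -- the symptom this ticket describes.
conn.pending_events.clear()   # simulate the lost wakeup
conn.run_events()
print(conn.sent)              # []
print(list(conn.out_q))       # ['mdsbeacon seq 2214']
```

Under this model, the bug would be in whichever step loses the
scheduled write event, not in the queuing itself.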

And another instance, where the connection recovered: a job with the
same test configuration (with valgrind), but different in that the
connection eventually sent the beacons without requiring a reconnect:

2018-10-16 18:32:55.287 1b1c9700 5 mds.beacon.c Sending beacon
up:active seq 634
2018-10-16 18:32:55.287 1b1c9700 1 -- 172.21.15.156:6819/417220422
--> 172.21.15.156:6789/0 -- mdsbeacon(4315/c up:active seq 634 v331)
v7 -- 0x21aabd20 con 0
2018-10-16 18:32:55.287 1b1c9700 20 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1 s=OPENED pgs=78 cs=1
l=1).prepare_send_message m mdsbeacon(4315/c up:active seq 634 v331)
v7
2018-10-16 18:32:55.287 1b1c9700 20 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1 s=OPENED pgs=78 cs=1
l=1).prepare_send_message encoding features 4611087854031142911
0x21aabd20 mdsbeacon(4315/c up:active seq 634 v331) v7
2018-10-16 18:32:55.287 1b1c9700 15 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1 s=OPENED pgs=78 cs=1
l=1).send_message inline write is denied, reschedule m=0x21aabd20
...
2018-10-16 18:33:07.993 159be700 4 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1
s=STATE_CONNECTION_ESTABLISHED l=1).handle_write
2018-10-16 18:33:07.993 159be700 10 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1 s=OPENED pgs=78 cs=1
l=1).write_event
2018-10-16 18:33:07.993 159be700 10 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1 s=OPENED pgs=78 cs=1
l=1).append_keepalive_or_ack
2018-10-16 18:33:07.993 159be700 20 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1 s=OPENED pgs=78 cs=1
l=1).write_message no session security
2018-10-16 18:33:07.993 159be700 20 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1 s=OPENED pgs=78 cs=1
l=1).write_message sending message type=100 src mds.2 front=416 data=0
off 0
2018-10-16 18:33:07.993 159be700 20 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1 s=OPENED pgs=78 cs=1
l=1).write_message sending 721 0x21aabd20
2018-10-16 18:33:07.994 159be700 10 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1
s=STATE_CONNECTION_ESTABLISHED l=1)._try_send sent bytes 500 remaining
bytes 0

From: /ceph/teuthology-archive/pdonnell-2018-10-16_16:46:31-multimds-wip-pdonnell-testing-20181011.152759-distro-basic-smithi/3148290/remote/smithi156/log/ceph-mds.c.log.gz

So 12 seconds inexplicably elapsed between queuing the beacon and the
actual send on the socket.
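
Notably, in this log the flush happens inside a write_event that also
calls append_keepalive_or_ack, which is consistent with the beacon's own
write event having been lost and an unrelated keepalive-driven write
event draining the queue 12 seconds later. A minimal model of that
recovery path (names are illustrative, not Ceph's API):

```python
import collections

class QueuedConnection:
    """Hypothetical sketch: a message whose own write event was lost is
    eventually flushed by the next write event from any source."""

    def __init__(self):
        self.out_q = collections.deque()
        self.sent = []

    def send_message(self, m):
        # The write event that should accompany this send is assumed
        # lost (not modeled), so the message just sits in the queue.
        self.out_q.append(m)

    def write_event(self):
        # Any later write event -- e.g. one triggered by a keepalive
        # timer -- drains everything queued so far, which would explain
        # the 12-second gap ending at append_keepalive_or_ack.
        while self.out_q:
            self.sent.append(self.out_q.popleft())

conn = QueuedConnection()
conn.send_message("mdsbeacon seq 634")  # 18:32:55, queued only
# ... 12 seconds pass with no write event ...
conn.write_event()                      # 18:33:07, keepalive fires
print(conn.sent)                        # ['mdsbeacon seq 634']
```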

Notice it was also slow to "receive" three mdsmaps in the same time period:

2018-10-16 18:33:07.477 159be700 5 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1
s=READ_FOOTER_AND_DISPATCH pgs=78 cs=1 l=1). rx mon.1 seq 998
0x12348cd0 mdsmap(e 331) v1
...
2018-10-16 18:33:07.481 159be700 5 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1
s=READ_FOOTER_AND_DISPATCH pgs=78 cs=1 l=1). rx mon.1 seq 999
0x12350210 mdsmap(e 332) v1
...
2018-10-16 18:33:07.486 159be700 5 -- 172.21.15.156:6819/417220422 >>
172.21.15.156:6789/0 conn(0x1487fcd0 legacy :-1
s=READ_FOOTER_AND_DISPATCH pgs=78 cs=1 l=1). rx mon.1 seq 1000
0x123652c0 mdsmap(e 333) v1

All three rapidly received in sequence.

The mons sent the above mdsmaps in a timely fashion, e.g. e331:

2018-10-16 18:32:43.343 1d101700 20 -- 172.21.15.156:6789/0 done
calling dispatch on 0x22151430
2018-10-16 18:32:43.344 1f105700 4 -- 172.21.15.156:6789/0 >>
172.21.15.156:6819/417220422 conn(0x217e96e0 legacy :6789
s=STATE_CONNECTION_ESTABLISHED l=1).handle_write
2018-10-16 18:32:43.344 1f105700 10 -- 172.21.15.156:6789/0 >>
172.21.15.156:6819/417220422 conn(0x217e96e0 legacy :6789 s=OPENED
pgs=1 cs=1 l=1).write_event
2018-10-16 18:32:43.344 1f105700 20 -- 172.21.15.156:6789/0 >>
172.21.15.156:6819/417220422 conn(0x217e96e0 legacy :6789 s=OPENED
pgs=1 cs=1 l=1).prepare_send_message m mdsmap(e 330) v1
2018-10-16 18:32:43.344 1f105700 20 -- 172.21.15.156:6789/0 >>
172.21.15.156:6819/417220422 conn(0x217e96e0 legacy :6789 s=OPENED
pgs=1 cs=1 l=1).prepare_send_message encoding features
4611087854031142911 0x224309d0 mdsmap(e 330) v1
2018-10-16 18:32:43.345 1f105700 20 -- 172.21.15.156:6789/0 >>
172.21.15.156:6819/417220422 conn(0x217e96e0 legacy :6789 s=OPENED
pgs=1 cs=1 l=1).write_message signed m=0x224309d0): sig = 0
2018-10-16 18:32:43.345 1f105700 20 -- 172.21.15.156:6789/0 >>
172.21.15.156:6819/417220422 conn(0x217e96e0 legacy :6789 s=OPENED
pgs=1 cs=1 l=1).write_message sending message type=21 src mon.1
front=1905 data=0 off 0
2018-10-16 18:32:43.345 1f105700 20 -- 172.21.15.156:6789/0 >>
172.21.15.156:6819/417220422 conn(0x217e96e0 legacy :6789 s=OPENED
pgs=1 cs=1 l=1).write_message sending 995 0x224309d0
2018-10-16 18:32:43.345 1f105700 10 -- 172.21.15.156:6789/0 >>
172.21.15.156:6819/417220422 conn(0x217e96e0 legacy :6789
s=STATE_CONNECTION_ESTABLISHED l=1)._try_send sent bytes 1980
remaining bytes 0
2018-10-16 18:32:43.345 1f105700 10 -- 172.21.15.156:6789/0 >>
172.21.15.156:6819/417220422 conn(0x217e96e0 legacy :6789 s=OPENED
pgs=1 cs=1 l=1).write_message sending 0x224309d0 done.

From: /ceph/teuthology-archive/pdonnell-2018-10-16_16:46:31-multimds-wip-pdonnell-testing-20181011.152759-distro-basic-smithi/3148290/remote/smithi156/log/ceph-mon.a.log.gz


Related issues

Related to fs - Bug #36389: untar encounters unexpected EPERM on kclient/multimds cluster with thrashing New 10/10/2018
Related to fs - Bug #36349: mds: src/mds/MDCache.cc: 1637: FAILED ceph_assert(follows >= realm->get_newest_seq()) New 10/08/2018
Related to Ceph - Bug #36666: msg: rejoin message queued but not sent New

History

#1 Updated by Patrick Donnelly about 2 months ago

  • Related to Bug #36389: untar encounters unexpected EPERM on kclient/multimds cluster with thrashing added

#2 Updated by Patrick Donnelly about 2 months ago

  • Related to Bug #36349: mds: src/mds/MDCache.cc: 1637: FAILED ceph_assert(follows >= realm->get_newest_seq()) added

#3 Updated by Patrick Donnelly about 2 months ago

  • Subject changed from msg: messages are not queued but not sent to msg: messages are queued but not sent

#4 Updated by Patrick Donnelly about 2 months ago

  • Related to Bug #36666: msg: rejoin message queued but not sent added
