Bug #8404 (closed)

OSD: stopped doing useful work until scrub timeouts nudged PGs

Added by Greg Farnum almost 10 years ago. Updated almost 10 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

http://qa-proxy.ceph.com/teuthology/teuthology-2014-05-18_23:02:25-multimds-master-testing-basic-plana/262217/

I did some quick grepping through the MDS logs, and it's definitely some kind of hang, not a performance issue. Forward progress appears to have stalled at about 12:34:41, at which point there is exporting activity. Maybe the client didn't resend to the new auth MDS correctly or something?


Related issues: 1 (0 open, 1 closed)

Is duplicate of Ceph - Bug #8338: OSD: no longer checking that ops on older maps are correctly targeted (Resolved, Samuel Just, 05/12/2014)

Actions #1

Updated by Greg Farnum almost 10 years ago

teuthology-2014-05-18_23:02:25-multimds-master-testing-basic-plana/262204/

Actions #2

Updated by Zheng Yan almost 10 years ago

The client is waiting for the session close message from mds.8.

2014-05-19 15:33:27.831973 7f38df361700  0 -- 10.214.131.11:6815/12204 >> 10.214.131.21:0/2397708351 pipe(0x2e81a00 sd=25 :6815 s=2 pgs=21 cs=23 l=0 c=0x2fcb6e0).fault with nothing to send, going to standby
2014-05-19 15:33:31.126319 7f38e0c6c700 10 mds.8.1 beacon_send up:active seq 2720 (currently up:active)
2014-05-19 15:33:31.126354 7f38e0c6c700  1 -- 10.214.131.11:6815/12204 --> 10.214.131.10:6789/0 -- mdsbeacon(4098/e up:active seq 2720 v8) v2 -- ?+0 0x322cb00 con 0x2e5f9a0
2014-05-19 15:33:31.127380 7f38e3673700  1 -- 10.214.131.11:6815/12204 <== mon.0 10.214.131.10:6789/0 2818 ==== mdsbeacon(4098/e up:active seq 2720 v8) v2 ==== 103+0+0 (425398902 0 0) 0x32348c0 con 0x2e5f9a0
2014-05-19 15:33:31.127415 7f38e3673700 10 mds.8.1 handle_mds_beacon up:active seq 2720 rtt 0.001081
2014-05-19 15:33:32.051766 7f38e3673700  1 -- 10.214.131.11:6815/12204 <== osd.2 10.214.131.10:6805/10269 1 ==== osd_op_reply(41 208.00000001 [write 3628~198] v5'1367 uv1367 ondisk = 0) v6 ==== 179+0+0 (1233982911 0 0) 0x2fadc80 con 0x2ffd9a0
2014-05-19 15:33:32.051824 7f38e3673700 10 mds.8.server _session_logged client.4182 10.214.131.21:0/2397708351 state_seq 3 close 17
2014-05-19 15:33:32.051837 7f38e3673700  1 -- 10.214.131.11:6815/12204 mark_disposable 0x2fcb6e0 -- 0x2e81a00
2014-05-19 15:33:32.051843 7f38e3673700 10 mds.8.1 send_message_client client.4182 10.214.131.21:0/2397708351 client_session(close) v1
2014-05-19 15:33:32.051851 7f38e3673700  1 -- 10.214.131.11:6815/12204 --> 10.214.131.21:0/2397708351 -- client_session(close) v1 -- ?+0 0x31ffc40 con 0x2fcb6e0
2014-05-19 15:33:32.051894 7f38e3673700  1 -- 10.214.131.11:6815/12204 <== osd.2 10.214.131.10:6805/10269 2 ==== osd_op_reply(43 208.00000001 [write 3826~198] v5'1369 uv1369 ondisk = 0) v6 ==== 179+0+0 (109633772 0 0) 0x2f55280 con 0x2ffd9a0

The session close message was dropped because the MDS first marked the connection disposable and then sent the session close message, while the connection was already in standby state.

Server::_session_logged() has the following comment:

    if (session->is_closing()) {
      // mark con disposable.  if there is a fault, we will get a
      // reset and clean it up.  if the client hasn't received the
      // CLOSE message yet, they will reconnect and get an
      // ms_handle_remote_reset() and realize they had in fact closed.
      // do this *before* sending the message to avoid a possible
      // race.
      mds->messenger->mark_disposable(session->connection.get());

      // reset session
      mds->send_message_client(new MClientSession(CEPH_SESSION_CLOSE), session);
      mds->sessionmap.set_state(session, Session::STATE_CLOSED);
      session->clear();
    } 

No idea what the race is.
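To make the ordering problem concrete, here is a minimal sketch of the behaviour described above. This is hypothetical illustration code, not Ceph's actual SimpleMessenger: it only models the idea that a message queued on a connection that is already in standby and already marked disposable is never delivered, because a disposable connection is not reconnected.

// Hypothetical sketch (not Ceph's SimpleMessenger) of the drop described above:
// the connection faults into standby, is then marked disposable, and only after
// that is the session close message queued -- so it is never delivered.
#include <iostream>
#include <queue>
#include <string>

enum class ConnState { Open, Standby };

struct Connection {
    ConnState state = ConnState::Open;
    bool disposable = false;
    std::queue<std::string> outq;

    // "fault with nothing to send, going to standby"
    void fault_with_nothing_to_send() { state = ConnState::Standby; }

    void send(const std::string& msg) {
        if (state == ConnState::Standby) {
            if (disposable) {
                // A disposable connection is never reconnected, so the
                // queued message is silently dropped.
                std::cout << "dropped: " << msg << "\n";
                return;
            }
            // A non-disposable standby connection would be reconnected
            // and the message delivered.
            state = ConnState::Open;
        }
        outq.push(msg);
        std::cout << "delivered: " << msg << "\n";
    }
};

int main() {
    Connection con;
    con.fault_with_nothing_to_send();   // 15:33:27 pipe goes to standby
    con.disposable = true;              // mark_disposable() in _session_logged()
    con.send("client_session(close)");  // dropped: the client never sees the close
}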

Actions #3

Updated by Zheng Yan almost 10 years ago

  • Status changed from New to 12
Actions #4

Updated by Greg Farnum almost 10 years ago

  • Project changed from CephFS to Ceph
  • Subject changed from multimds: LibCephFS.DirLs test failure to OSD: stopped doing useful work until scrub timeouts nudged PGs
  • Category deleted (47)
  • Status changed from 12 to New

Wow, this is crazy:
The MDS is trying to log the ESession(close) event to the OSD, but the OSD connection isn't working. It finally gets a response back at 2014-05-19 15:33:32.051766 (after initially submitting the request at 2014-05-19 12:34:42.031387).

It looks like by the time this finally happens, the client has actually been killed.

The OSD sees all of the connect events from the MDS, but it doesn't see any of the Messages it tried to submit.

It looks like the OSD just got horrifically backed up for some reason; it has incoming OSDOps that don't reach a flag point for hours (meaning they're throttled before actually starting).

So, this is an OSD problem. I'm trying to figure it out because Sam is trying to pin it on my fast dispatch changes.
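For context on the throttling mentioned above, here is a minimal sketch of ops blocking at admission until earlier ops release their budget. The names and structure are hypothetical; this is not Ceph's actual Throttle or OpTracker code, just an illustration of why a backed-up OSD shows incoming ops that never reach a "started" flag point.

// Hypothetical admission throttle (not Ceph's Throttle/OpTracker): new ops
// block in get() until earlier ops call put(), so a stalled OSD holds
// incoming ops before they ever start executing.
#include <condition_variable>
#include <cstdint>
#include <mutex>

class SimpleThrottle {
    std::mutex m;
    std::condition_variable cv;
    uint64_t cur = 0;
    const uint64_t max;
public:
    explicit SimpleThrottle(uint64_t max_cost) : max(max_cost) {}

    // Blocks until the op's cost fits under the limit; this is where a
    // backed-up OSD would hold an incoming op for hours.
    void get(uint64_t cost) {
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [&] { return cur + cost <= max; });
        cur += cost;
    }

    // Called when an op completes, letting the next waiter start.
    void put(uint64_t cost) {
        std::lock_guard<std::mutex> l(m);
        cur -= cost;
        cv.notify_all();
    }
};

int main() {
    SimpleThrottle t(100);
    t.get(60);   // first op admitted
    t.put(60);   // completes, freeing budget
    t.get(80);   // next op can now start
}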

Actions #5

Updated by Greg Farnum almost 10 years ago

  • Status changed from New to Duplicate

This is a duplicate of #8338. That took too long to figure out. :(
