Bug #38040

osd_map_message_max default is too high?

Added by Ilya Dryomov about 1 year ago. Updated 4 months ago.

Status:
Resolved
Priority:
High
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
luminous,mimic
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature:

Description

In a thread on ceph-users [1], three different users with fairly large clusters (~600 OSDs, ~3500 OSDs) reported running into a kernel client limit on the size of the front section of the message:

Dec 26 19:28:53 mon5 kernel: libceph: mon0 10.128.150.10:6789 io error
Dec 26 19:28:53 mon5 kernel: libceph: mon0 10.128.150.10:6789 session lost, hunting for new mon
Dec 26 19:28:53 mon5 kernel: libceph: mon2 10.128.150.12:6789 session established
Dec 26 19:28:58 mon5 kernel: libceph: mon2 10.128.150.12:6789 io error
Dec 26 19:28:58 mon5 kernel: libceph: mon2 10.128.150.12:6789 session lost, hunting for new mon
Dec 26 19:28:58 mon5 kernel: libceph: mon1 10.128.150.11:6789 session established

#define CEPH_MSG_MAX_FRONT_LEN    (16*1024*1024)

The default for osd_map_message_max was reduced to 40 in luminous, but that still appears to be too high. While CEPH_MSG_MAX_FRONT_LEN is just an arbitrary constant and I can certainly bump it, I'm not sure that's the right thing to do.
Should osd_map_message_max be further reduced to 20 or 10, or, better yet, expressed in bytes?

[1] https://www.mail-archive.com/ceph-users@lists.ceph.com/msg51522.html
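For illustration only, here is a minimal C++ sketch of the byte-based cap suggested above: pack osdmaps into a message until either a count limit or a byte budget is hit. This is not the actual Ceph code; the function name `maps_to_send` and its parameters are made up for this sketch (the real change, tracked below as PR 26340, introduced a byte-based limit alongside the map count).

```cpp
#include <cstddef>
#include <vector>

// Sketch: given the encoded size of each pending osdmap epoch, return how
// many maps fit in one message under both a count limit (osd_map_message_max)
// and a byte budget chosen to stay under the kernel client's
// CEPH_MSG_MAX_FRONT_LEN (16 MB).
size_t maps_to_send(const std::vector<size_t>& encoded_sizes,
                    size_t max_count, size_t max_bytes) {
  size_t n = 0, bytes = 0;
  for (size_t sz : encoded_sizes) {
    if (n >= max_count || bytes + sz > max_bytes)
      break;
    bytes += sz;
    ++n;
  }
  // Always send at least one map so the client can make progress,
  // even if a single epoch exceeds the byte budget on its own.
  if (n == 0 && !encoded_sizes.empty())
    n = 1;
  return n;
}
```

The point of the byte budget is that a fixed count like 40 bounds nothing: with 3500 OSDs a single full map can be hundreds of kilobytes, so 40 of them can blow past 16 MB.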


Related issues

Related to RADOS - Bug #38282: cephtool/test.sh failure in test_mon_osd_pool_set Resolved 02/12/2019
Related to RADOS - Bug #38330: osd/OSD.cc: 1515: abort() in Service::build_incremental_map_msg Resolved
Duplicated by Ceph - Bug #38031: Monitor sent <16MB MOSDMap message cause kernel client instability. Duplicate 01/24/2019
Copied to RADOS - Backport #38276: luminous: osd_map_message_max default is too high? Resolved
Copied to RADOS - Backport #38277: mimic: osd_map_message_max default is too high? Resolved

History

#1 Updated by Ilya Dryomov about 1 year ago

  • Assignee set to Sage Weil

Assigning Sage, as the author of commit 855955e58e63 ("osd: reduce size of osdmap cache, messages").

#2 Updated by Josh Durgin about 1 year ago

  • Priority changed from High to Urgent

#3 Updated by Sage Weil about 1 year ago

  • Status changed from New to Fix Under Review
  • Backport set to luminous,mimic

#4 Updated by Sage Weil about 1 year ago

  • Status changed from Fix Under Review to Pending Backport
  • Priority changed from Urgent to High

#5 Updated by Nathan Cutler about 1 year ago

  • Copied to Backport #38276: luminous: osd_map_message_max default is too high? added

#6 Updated by Nathan Cutler about 1 year ago

  • Copied to Backport #38277: mimic: osd_map_message_max default is too high? added

#7 Updated by Sage Weil about 1 year ago

  • Related to Bug #38282: cephtool/test.sh failure in test_mon_osd_pool_set added

#8 Updated by Sage Weil about 1 year ago

  • Related to Bug #38330: osd/OSD.cc: 1515: abort() in Service::build_incremental_map_msg added

#9 Updated by Xiaoxi Chen about 1 year ago

  • Related to Bug #38031: Monitor sent <16MB MOSDMap message cause kernel client instability. added

#10 Updated by Ilya Dryomov about 1 year ago

  • Related to deleted (Bug #38031: Monitor sent <16MB MOSDMap message cause kernel client instability.)

#11 Updated by Ilya Dryomov about 1 year ago

  • Duplicated by Bug #38031: Monitor sent <16MB MOSDMap message cause kernel client instability. added

#12 Updated by Nathan Cutler 8 months ago

  • Pull request ID set to 26340

#13 Updated by Nathan Cutler 6 months ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved".

#14 Updated by Nathan Cutler 4 months ago

Luminous backport analysis:

Based on the description of #43106, I'm guessing that https://github.com/ceph/ceph/pull/26448 is only needed if https://github.com/ceph/ceph/pull/26413 is being backported, which it isn't for luminous.

So luminous should be OK, but it would be nice to get confirmation from a core dev - @Neha?
