Bug #47273

ceph report missing osdmap_clean_epochs if answered by peon

Added by Dan van der Ster over 3 years ago. Updated 9 months ago.

Status: Resolved
Priority: Normal
Category: -
Target version: -
% Done: 100%
Source:
Tags: backport_processed
Backport: quincy,pacific,octopus
Regression: No
Severity: 3 - minor
Reviewed:
ceph-qa-suite:
Component(RADOS):
Pull request ID: 45042
Crash signature (v1):
Crash signature (v2):

Description

The `osdmap_clean_epochs` section of `ceph report` is empty when the command is answered by a peon rather than the leader.

E.g., a good report from the leader:

2020-09-02 15:22:07.884 7f2444ff9700  1 -- 137.138.152.57:0/1376076529 <== mon.0 v2:137.138.152.57:3300/0 7 ==== mon_command_ack([{"prefix": "report"}]=0 report 1659462795 v0) v1 ==== 71+0+9992003 (secure 0 0 0) 0x7f2440000d30 con 0x7f242800e010
report 1659462795
{
  "min_last_epoch_clean": 3389484,
  "last_epoch_clean": {
    "per_pool": [
      {
        "poolid": 4,
        "floor": 3389484
      },
      {
        "poolid": 5,
...

From a peon:

2020-09-02 15:23:27.876 7f9f8f7fe700  1 -- 137.138.152.57:0/4020987241 <== mon.3 v2:188.184.36.206:3300/0 7 ==== mon_command_ack([{"prefix": "report"}]=0 report 726012183 v0) v1 ==== 70+0+9875972 (secure 0 0 0) 0x7f9f8403bf50 con 0x7f9f74006740
report 726012183
{
  "min_last_epoch_clean": 0,
  "last_epoch_clean": {
    "per_pool": []
  },
  "osd_epochs": []
}
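A quick way to reproduce is to run `ceph report` against each mon in turn and compare the `osdmap_clean_epochs` sections. Below is a minimal sketch, not part of the original report: it assumes the `ceph` CLI is on PATH, that `-m <addr>` pins the session to the given mon, and that the section layout matches the excerpts above (the addresses are the example mons from the logs).

#!/usr/bin/env python3
# Illustrative reproducer: query `ceph report` from each mon directly and
# flag responses whose osdmap_clean_epochs section came back empty, i.e.
# the peon symptom shown above. Addresses are the examples from the logs.
import json
import subprocess

MON_ADDRS = ["137.138.152.57:3300", "188.184.36.206:3300"]

for addr in MON_ADDRS:
    out = subprocess.check_output(["ceph", "-m", addr, "report"])
    clean = json.loads(out).get("osdmap_clean_epochs", {})
    per_pool = clean.get("last_epoch_clean", {}).get("per_pool", [])
    if not per_pool and not clean.get("osd_epochs"):
        print(f"{addr}: osdmap_clean_epochs is empty (answered by a peon?)")
    else:
        print(f"{addr}: min_last_epoch_clean = {clean.get('min_last_epoch_clean')}")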


Related issues

Copied to RADOS - Backport #56602: quincy: ceph report missing osdmap_clean_epochs if answered by peon (Resolved)
Copied to RADOS - Backport #56603: octopus: ceph report missing osdmap_clean_epochs if answered by peon (Rejected)
Copied to RADOS - Backport #56604: pacific: ceph report missing osdmap_clean_epochs if answered by peon (Resolved)

History

#1 Updated by Neha Ojha over 3 years ago

  • Project changed from Ceph to RADOS

#2 Updated by Dan van der Ster over 2 years ago

  • Affected Versions v14.2.21 added
  • Affected Versions deleted (v14.2.11)

#3 Updated by Dan van der Ster over 2 years ago

  • Affected Versions v14.2.11, v14.2.12, v14.2.13, v14.2.14, v14.2.15, v14.2.16, v14.2.17, v14.2.18, v14.2.19, v14.2.2, v14.2.20 added

#4 Updated by Steve Taylor about 2 years ago

I am also seeing this behavior on the latest Octopus and Pacific releases.

The reason I'm looking is that I'm seeing osdmap trimming get stuck after PG merging on 15.2.8 patched with the fix for https://tracker.ceph.com/issues/48212. Before that patch this happened very frequently; it still happens after the patch, though much less often.

Is it possible that this is related?

#5 Updated by Dan van der Ster about 2 years ago

> Is it possible that this is related?

I'm not sure, but I'd guess not. This bug seems rather to be about the peon not forwarding the command to the leader.

For your trimming issue -- does it persist after restarting all the mons?
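For reference, a low-tech way to watch for the stuck-trimming symptom discussed in the two comments above is to track the committed-osdmap range reported by the mons. A minimal sketch, assuming `ceph report` exposes `osdmap_first_committed` and `osdmap_last_committed` as in recent releases; the threshold is illustrative only:

#!/usr/bin/env python3
# Illustrative check for stuck osdmap trimming: a persistently large gap
# between the first and last committed osdmap epochs suggests the mons are
# not trimming. Field names assumed from recent `ceph report` output.
import json
import subprocess

report = json.loads(subprocess.check_output(["ceph", "report"]))
first = report.get("osdmap_first_committed", 0)
last = report.get("osdmap_last_committed", 0)
print(f"osdmap epochs retained: {last - first} ({first}..{last})")
# Healthy mons trim back toward mon_min_osdmap_epochs (500 by default),
# so a gap that keeps growing well past that is worth investigating.
if last - first > 1000:
    print("osdmap trimming may be stuck")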

#6 Updated by Neha Ojha about 2 years ago

  • Priority changed from High to Normal

#7 Updated by Dan van der Ster about 2 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Dan van der Ster
  • Pull request ID set to 45042

#8 Updated by Neha Ojha about 2 years ago

  • Backport set to quincy,pacific,octopus

#9 Updated by Dan van der Ster over 1 year ago

  • Status changed from Fix Under Review to Pending Backport

#10 Updated by Backport Bot over 1 year ago

  • Copied to Backport #56602: quincy: ceph report missing osdmap_clean_epochs if answered by peon added

#11 Updated by Backport Bot over 1 year ago

  • Copied to Backport #56603: octopus: ceph report missing osdmap_clean_epochs if answered by peon added

#12 Updated by Backport Bot over 1 year ago

  • Copied to Backport #56604: pacific: ceph report missing osdmap_clean_epochs if answered by peon added

#13 Updated by Backport Bot over 1 year ago

  • Tags set to backport_processed

#14 Updated by Konstantin Shalygin 9 months ago

  • Status changed from Pending Backport to Resolved
  • % Done changed from 0 to 100
