Bug #41177

ceph-objectstore-tool: update-mon-db return EINVAL with missed inc_osdmap

Added by huang jun over 4 years ago. Updated over 4 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
nautilus, mimic
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

We have a cluster and were testing rebuilding the mon DB from the OSDs. The tool returned EINVAL when operating on osd.4, as the command output below shows: osd.4 does not have inc_osdmap.252, because osd.4 had not yet been created at epoch 252.

ceph-objectstore-tool --id=4 --data-path=/var/lib/ceph/osd/ceph-4 --op=update-mon-db --mon-store-path=/tmp/sds-tools/rebuild-mon/agentmondbUxYBJS_2019-08-08T18:44:52
ignoring keyring (/var/lib/ceph/osd/ceph-4/keyring): can't open /var/lib/ceph/osd/ceph-4/keyring: (2) No such file or directory 
missing #-1:42a33e63:::inc_osdmap.252:0#
[root@node1 ~]# ceph-objectstore-tool --id=4 --data-path=/var/lib/ceph/osd/ceph-4 --op get-osdmap --epoch 252 --file xx
osdmap#252 exported.
[root@node1 ~]# osdmaptool --print xx
osdmaptool: osdmap file 'xx'
epoch 252
fsid bf47aab9-51bf-4a5e-bc8a-15e508ca79ed
created 2019-08-01 16:13:31.208891
modified 2019-08-01 17:08:22.430813
flags nodeep-scrub,sortbitwise,recovery_deletes,purged_snapdirs
crush_version 43
full_ratio 0.9
backfillfull_ratio 0.85
nearfull_ratio 0.8
omap_full_ratio 0.9
omap_backfillfull_ratio 0.85
omap_nearfull_ratio 0.8
require_min_compat_client luminous
min_compat_client jewel
require_osd_release luminous
pool 19 'pool-3f06526dc4614faca74209340740f81f' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 256 pgp_num 256 last_change 215 flags hashpspool stripe_width 0 async_recovery_max_updates 60 osd_backfillfull_ratio 0.85 osd_full_ratio 0.9 osd_nearfull_ratio 0.8 osd_omap_backfillfull_ratio 0.85 osd_omap_full_ratio 0.9 osd_omap_nearfull_ratio 0.8
pool 20 'pool-827473ed435f4330b7ea94c08c02e221' replicated size 1 min_size 1 crush_rule 2 object_hash rjenkins pg_num 256 pgp_num 256 last_change 230 flags hashpspool stripe_width 0 async_recovery_max_updates 60 osd_backfillfull_ratio 0.85 osd_full_ratio 0.9 osd_nearfull_ratio 0.8 osd_omap_backfillfull_ratio 0.85 osd_omap_nearfull_ratio 0.8
pool 21 '.sds.rgw' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 237 flags hashpspool stripe_width 0 async_recovery_max_updates 200 osd_backfillfull_ratio 0.85 osd_full_ratio 0.9 osd_nearfull_ratio 0.8 osd_omap_backfillfull_ratio 0.85 osd_omap_nearfull_ratio 0.8
pool 22 '.sds.rgw.root' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 243 flags hashpspool stripe_width 0 async_recovery_max_updates 200 osd_backfillfull_ratio 0.85 osd_full_ratio 0.9 osd_nearfull_ratio 0.8 osd_omap_backfillfull_ratio 0.85 osd_omap_nearfull_ratio 0.8
pool 23 '.sds.rgw.control' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 250 flags hashpspool stripe_width 0 async_recovery_max_updates 200 osd_backfillfull_ratio 0.85 osd_full_ratio 0.9 osd_nearfull_ratio 0.8 osd_omap_backfillfull_ratio 0.85 osd_omap_nearfull_ratio 0.8
pool 24 '.sds.rgw.gc' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 251 flags hashpspool stripe_width 0 async_recovery_max_updates 200 osd_backfillfull_ratio 0.85 osd_full_ratio 0.9 osd_nearfull_ratio 0.8 osd_omap_backfillfull_ratio 0.85 osd_omap_nearfull_ratio 0.8
max_osd 4
osd.0 up in weight 1 up_from 203 up_thru 251 down_at 0 last_clean_interval [0,0) 10.255.101.30:6800/32069 10.255.101.30:6801/32069 10.255.101.30:6803/32069 10.255.101.30:6804/32069 10.255.101.30:6802/32069 exists,up aee33031-c36f-42a1-9fb9-4910a4781343
osd.1 up in weight 1 up_from 217 up_thru 226 down_at 0 last_clean_interval [0,0) 10.255.101.31:6800/29399 10.255.101.31:6801/29399 10.255.101.31:6803/29399 10.255.101.31:6804/29399 10.255.101.31:6802/29399 exists,up d86480ae-34d3-4d41-9e49-9b4f9e03dc9e
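For illustration only, a minimal C++ sketch of the step that fails. This is not the actual rebuild_mondb.cc code; the helper names, the buffer type, and the meta_store map are hypothetical stand-ins. The point is that update-mon-db walks every epoch and reads an "inc_osdmap.<epoch>" object from the OSD's meta collection, so an OSD created after epoch 252 has a gap that turns into EINVAL.

// Hypothetical sketch of the strict (pre-fix) behaviour: a missing
// incremental map aborts the whole mon-db rebuild with EINVAL.
#include <cerrno>
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using buffer = std::vector<std::uint8_t>;      // stand-in for ceph::bufferlist

// Stand-in for the OSD's meta collection: object name -> bytes.
std::map<std::string, buffer> meta_store;

// Hypothetical helper: read inc_osdmap.<epoch>; -ENOENT if this OSD never stored it.
int get_inc_osdmap(unsigned epoch, buffer* out) {
  auto it = meta_store.find("inc_osdmap." + std::to_string(epoch));
  if (it == meta_store.end())
    return -ENOENT;
  *out = it->second;
  return 0;
}

int update_osdmap_strict(unsigned first, unsigned last) {
  for (unsigned e = first; e <= last; ++e) {
    buffer bl;
    if (get_inc_osdmap(e, &bl) < 0) {
      std::cerr << "missing inc_osdmap." << e << "\n";
      return -EINVAL;                          // what the transcript above hits
    }
    // ... decode the increment and feed it into the rebuilt mon store ...
  }
  return 0;
}

int main() {
  meta_store["inc_osdmap.253"] = buffer{};     // epoch 252 intentionally absent
  return update_osdmap_strict(252, 253) < 0 ? 1 : 0;
}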

Is it OK to just ignore the missing inc_osdmap in this case?
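One way "ignoring" could look, again only as a hedged sketch reusing the hypothetical buffer/get_inc_osdmap helpers from the sketch above: skip an epoch whose incremental map is absent, ideally after confirming the full osdmap.<epoch> object is readable instead, rather than aborting the rebuild. This illustrates the idea only, not necessarily what the fix in PR 29571 does.

// Hypothetical tolerant variant of update_osdmap_strict(): an absent
// inc_osdmap.<epoch> is logged and skipped instead of being fatal.
int update_osdmap_tolerant(unsigned first, unsigned last) {
  for (unsigned e = first; e <= last; ++e) {
    buffer bl;
    if (get_inc_osdmap(e, &bl) < 0) {
      std::cerr << "inc_osdmap." << e
                << " not found on this OSD, skipping\n";
      continue;                                // tolerate the gap instead of -EINVAL
    }
    // ... decode the increment and feed it into the rebuilt mon store ...
  }
  return 0;
}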

Related issues

Related to RADOS - Bug #42824: mimic: rebuild_mondb.cc: FAILED assert(0) in update_osdmap() Resolved 11/14/2019
Copied to Ceph - Backport #41463: nautilus: ceph-objectstore-tool: update-mon-db return EINVAL with missed inc_osdmap Resolved
Copied to Ceph - Backport #41464: mimic: ceph-objectstore-tool: update-mon-db return EINVAL with missed inc_osdmap Resolved

History

#1 Updated by Kefu Chai over 4 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Kefu Chai
  • Backport set to nautilus, mimic
  • Pull request ID set to 29571

#2 Updated by Kefu Chai over 4 years ago

  • Status changed from Fix Under Review to Pending Backport

#5 Updated by Kefu Chai over 4 years ago

  • Status changed from Pending Backport to 12

#7 Updated by Kefu Chai over 4 years ago

  • Status changed from 12 to Pending Backport

#8 Updated by Nathan Cutler over 4 years ago

  • Copied to Backport #41463: nautilus: ceph-objectstore-tool: update-mon-db return EINVAL with missed inc_osdmap added

#9 Updated by Nathan Cutler over 4 years ago

  • Copied to Backport #41464: mimic: ceph-objectstore-tool: update-mon-db return EINVAL with missed inc_osdmap added

#10 Updated by Brad Hubbard over 4 years ago

  • Related to Bug #42824: mimic: rebuild_mondb.cc: FAILED assert(0) in update_osdmap() added

#11 Updated by Nathan Cutler over 4 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
