Bug #46225 (closed)

Health check failed: 1 osds down (OSD_DOWN)

Added by Sridhar Seshasayee almost 4 years ago. Updated almost 4 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/sseshasa-2020-06-24_17:46:09-rados-wip-sseshasa-testing-2020-06-24-1858-distro-basic-smithi/5176410

2020-06-24T22:59:36.227+0000 1cfeb700 10 mon.b@0(leader).osd e20 MOSDMarkMeDown for: osd.0 v2:172.21.15.47:6804/22376
2020-06-24T22:59:36.229+0000 1cfeb700 7 mon.b@0(leader).osd e20 prepare_update MOSDMarkMeDown(request_ack=1, osd.0, v2:172.21.15.47:6804/22376, fsid=071fdd9e-d30a-4616-97d9-7e087c7a2c7f) v3 from osd.0 v2:172.21.15.47:6804/22376
2020-06-24T22:59:36.231+0000 1cfeb700 0 log_channel(cluster) log [INF] : osd.0 marked itself down
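
For anyone reproducing this, one way to confirm which OSDs the monitors currently consider down (the source of the OSD_DOWN warning) is to parse the osdmap. This is only an illustrative sketch that shells out to the ceph CLI and assumes the usual 'ceph osd dump --format json' layout; it is not part of the qa suite:

import json
import subprocess

def down_osds():
    # Illustrative helper (not from the test code): return the ids of OSDs
    # whose osdmap entry reports up == 0, i.e. the OSDs behind OSD_DOWN.
    out = subprocess.check_output(['ceph', 'osd', 'dump', '--format', 'json'])
    osdmap = json.loads(out)
    return [o['osd'] for o in osdmap['osds'] if o['up'] == 0]

print(down_osds())  # in this run, expected to print [0], since osd.0 marked itself down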


Related issues 1 (0 open, 1 closed)

Is duplicate of RADOS - Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean. (Resolved, assigned to Ilya Dryomov)

Actions #1

Updated by Neha Ojha almost 4 years ago

Also related to https://tracker.ceph.com/issues/46180

2020-06-24T22:45:29.312 INFO:tasks.ceph.ceph_manager.ceph:PG 1.0 is not active+clean
2020-06-24T22:45:29.312 INFO:tasks.ceph.ceph_manager.ceph:{'pgid': '1.0', 'version': "0'0", 'reported_seq': '5', 'reported_epoch': '19', 'state': 'creating+peering', 'last_fresh': '2020-06-24T22:25:17.395121+0000', 'last_change': '2020-06-24T22:25:13.287762+0000', 'last_active': '2020-06-24T22:24:37.316394+0000', 'last_peered': '2020-06-24T22:24:37.316394+0000', 'last_clean': '2020-06-24T22:24:37.316394+0000', 'last_became_active': '0.000000', 'last_became_peered': '0.000000', 'last_unstale': '2020-06-24T22:25:17.395121+0000', 'last_undegraded': '2020-06-24T22:25:17.395121+0000', 'last_fullsized': '2020-06-24T22:25:17.395121+0000', 'mapping_epoch': 15, 'log_start': "0'0", 'ondisk_log_start': "0'0", 'created': 4, 'last_epoch_clean': 0, 'parent': '0.0', 'parent_split_bits': 0, 'last_scrub': "0'0", 'last_scrub_stamp': '2020-06-24T22:24:37.316394+0000', 'last_deep_scrub': "0'0", 'last_deep_scrub_stamp': '2020-06-24T22:24:37.316394+0000', 'last_clean_scrub_stamp': '2020-06-24T22:24:37.316394+0000', 'log_size': 0, 'ondisk_log_size': 0, 'stats_invalid': False, 'dirty_stats_invalid': False, 'omap_stats_invalid': False, 'hitset_stats_invalid': False, 'hitset_bytes_stats_invalid': False, 'pin_stats_invalid': False, 'manifest_stats_invalid': False, 'snaptrimq_len': 0, 'stat_sum': {'num_bytes': 0, 'num_objects': 0, 'num_object_clones': 0, 'num_object_copies': 0, 'num_objects_missing_on_primary': 0, 'num_objects_missing': 0, 'num_objects_degraded': 0, 'num_objects_misplaced': 0, 'num_objects_unfound': 0, 'num_objects_dirty': 0, 'num_whiteouts': 0, 'num_read': 0, 'num_read_kb': 0, 'num_write': 0, 'num_write_kb': 0, 'num_scrub_errors': 0, 'num_shallow_scrub_errors': 0, 'num_deep_scrub_errors': 0, 'num_objects_recovered': 0, 'num_bytes_recovered': 0, 'num_keys_recovered': 0, 'num_objects_omap': 0, 'num_objects_hit_set_archive': 0, 'num_bytes_hit_set_archive': 0, 'num_flush': 0, 'num_flush_kb': 0, 'num_evict': 0, 'num_evict_kb': 0, 'num_promote': 0, 'num_flush_mode_high': 0, 'num_flush_mode_low': 0, 'num_evict_mode_some': 0, 'num_evict_mode_full': 0, 'num_objects_pinned': 0, 'num_legacy_snapsets': 0, 'num_large_omap_objects': 0, 'num_objects_manifest': 0, 'num_omap_bytes': 0, 'num_omap_keys': 0, 'num_objects_repaired': 0}, 'up': [1, 5], 'acting': [1, 5], 'avail_no_missing': [], 'object_location_counts': [], 'blocked_by': [7], 'up_primary': 1, 'acting_primary': 1, 'purged_snaps': []}

2020-06-24T22:59:36.199 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sseshasa-testing-2020-06-24-1858/qa/tasks/ceph.py", line 1829, in task
    healthy(ctx=ctx, config=dict(cluster=config['cluster']))
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sseshasa-testing-2020-06-24-1858/qa/tasks/ceph.py", line 1419, in healthy
    manager.wait_for_clean()
  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-sseshasa-testing-2020-06-24-1858/qa/tasks/ceph_manager.py", line 2516, in wait_for_clean
    'wait_for_clean: failed before timeout expired'
AssertionError: wait_for_clean: failed before timeout expired
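
For context, manager.wait_for_clean() polls PG stats until every PG reports active+clean and raises the assertion above once its timeout expires. A minimal sketch of that polling pattern (with a hypothetical get_pg_stats() callable, not the actual ceph_manager implementation):

import time

def wait_for_clean(get_pg_stats, timeout=1200, poll_interval=3):
    # get_pg_stats is assumed to return a list of PG dicts with a 'state'
    # key, e.g. the parsed output of 'ceph pg dump --format json'.
    start = time.time()
    while True:
        states = [pg['state'] for pg in get_pg_stats()]
        if all('active' in s and 'clean' in s for s in states):
            return
        if time.time() - start > timeout:
            # Mirrors the failure seen above: the cluster never went clean.
            raise AssertionError('wait_for_clean: failed before timeout expired')
        time.sleep(poll_interval)
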
Actions #2

Updated by Neha Ojha almost 4 years ago

  • Related to Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean. added
Actions #3

Updated by Neha Ojha almost 4 years ago

  • Status changed from New to Triaged
Actions #4

Updated by Neha Ojha almost 4 years ago

  • Related to deleted (Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean.)
Actions #5

Updated by Neha Ojha almost 4 years ago

  • Is duplicate of Bug #46180: qa: Scrubbing terminated -- not all pgs were active and clean. added
Actions #6

Updated by Neha Ojha almost 4 years ago

  • Status changed from Triaged to Duplicate