Support #39319

Every 15 min - Monitor daemon marked osd.x down, but it is still running

Added by Vladimir Savinov about 5 years ago. Updated about 5 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Tags:
Reviewed:
Affected Versions:
Component(RADOS):
Pull request ID:

Description

1. Installed Ceph (ceph version 13.2.5 mimic (stable)) on 4 nodes (CentOS 7, in a test environment on VMware ESXi 5.5).

firewalld is stopped and disabled on all nodes. All nodes run on the same ESXi server, in one network port group, and the network has no problems. The nodes are (a rough deployment sketch follows the list):
- nodeadm (for deploy)
- node1 (osd, mon, mgr)
- node2 (osd, mon, mgr)
- node3 (osd, mon, mgr)
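
For context, a minimal sketch of how such a layout is usually brought up with ceph-deploy from the admin node (the hostnames match the list above; the /dev/sdb data device is a placeholder, not taken from this report):

#ceph-deploy new node1 node2 node3
#ceph-deploy install --release mimic node1 node2 node3
#ceph-deploy mon create-initial
#ceph-deploy admin node1 node2 node3
#ceph-deploy mgr create node1 node2 node3
#ceph-deploy osd create --data /dev/sdb node1
#ceph-deploy osd create --data /dev/sdb node2
#ceph-deploy osd create --data /dev/sdb node3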

2. Every 15 minutes the monitor daemon marks osd.x down, even though it is still running. This happens on all nodes.

On node x:

#ceph -s

cluster:
id: 943c0ac1-f168-4129-984b-84cb65846a95
health: HEALTH_OK

services:
mon: 3 daemons, quorum greend02-n01ceph02,greend02-n02ceph02,greend02-n03ceph02
mgr: greend02-n03ceph02(active), standbys: greend02-n01ceph02, greend02-n02ceph02
osd: 3 osds: 3 up, 3 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.2 GiB used, 147 GiB / 150 GiB avail
pgs:

#ceph -w

....
2019-04-16 12:03:07.731048 osd.2 [WRN] Monitor daemon marked osd.2 down, but it is still running
2019-04-16 12:06:17.673535 mon.greend02-n01ceph02 [INF] osd.0 marked down after no beacon for 300.090443 seconds
2019-04-16 12:06:17.673797 mon.greend02-n01ceph02 [WRN] Health check failed: 1 osds down (OSD_DOWN)
2019-04-16 12:06:17.673846 mon.greend02-n01ceph02 [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2019-04-16 12:06:18.729747 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2019-04-16 12:06:18.729812 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (1 osds) down)
2019-04-16 12:06:18.729845 mon.greend02-n01ceph02 [INF] Cluster is now healthy
2019-04-16 12:06:18.734399 mon.greend02-n01ceph02 [INF] osd.0 192.168.118.17:6800/5561 boot
2019-04-16 12:06:17.735692 osd.0 [WRN] Monitor daemon marked osd.0 down, but it is still running
2019-04-16 12:11:07.696520 mon.greend02-n01ceph02 [INF] osd.1 marked down after no beacon for 300.198038 seconds
2019-04-16 12:11:07.698011 mon.greend02-n01ceph02 [WRN] Health check failed: 1 osds down (OSD_DOWN)
2019-04-16 12:11:07.698060 mon.greend02-n01ceph02 [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2019-04-16 12:11:11.137107 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2019-04-16 12:11:11.137152 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (1 osds) down)
2019-04-16 12:11:11.137167 mon.greend02-n01ceph02 [INF] Cluster is now healthy
.....
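
For reference, the "marked down after no beacon for 300.x seconds" lines refer to the OSD beacon mechanism: each OSD periodically sends a beacon to the monitors, and a monitor marks an OSD down if no beacon arrives within the report timeout. A rough way to inspect the settings involved on an OSD node (option names as in the mimic docs; osd.1 is just an example) is shown below; comparing these values with the ~300 seconds in the log may show which timeout is firing:

#ceph daemon osd.1 config show | grep -E 'osd_beacon_report_interval|mon_osd_report_timeout'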

3. cat ceph-osd.1.log | grep says (the same appears on all nodes; this excerpt is from node2):

2019-04-16 11:10:49.626 7fa4cdaf2700 10 osd.1 3912 handle_osd_ping osd.2 192.168.118.19:6803/5271 says i am down in 3913
2019-04-16 11:10:49.626 7fa4cd2f1700 10 osd.1 3912 handle_osd_ping osd.0 192.168.118.17:6803/4005561 says i am down in 3913
2019-04-16 11:10:49.626 7fa4cdaf2700 10 osd.1 3912 handle_osd_ping osd.0 192.168.118.17:6802/4005561 says i am down in 3913
2019-04-16 11:10:49.626 7fa4cd2f1700 10 osd.1 3912 handle_osd_ping osd.2 192.168.118.19:6804/5271 says i am down in 3913
2019-04-16 11:25:53.834 7fa4ccaf0700 10 osd.1 3918 handle_osd_ping osd.0 192.168.118.17:6806/5005561 says i am down in 3919
2019-04-16 11:25:53.834 7fa4cd2f1700 10 osd.1 3918 handle_osd_ping osd.0 192.168.118.17:6805/5005561 says i am down in 3919
2019-04-16 11:25:53.835 7fa4ccaf0700 10 osd.1 3918 handle_osd_ping osd.2 192.168.118.19:6806/1005271 says i am down in 3919
2019-04-16 11:25:53.835 7fa4cdaf2700 10 osd.1 3918 handle_osd_ping osd.2 192.168.118.19:6807/1005271 says i am down in 3919

#systemctl status ceph-osd@x shows Active: active (running), and the daemon has been active for a long time (it was never restarted).
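
A generic way to compare the monitor's view with the daemon's own state while this happens (osd.1 used as an example; the second command queries the local admin socket on the node running that OSD):

#ceph osd tree
#ceph daemon osd.1 status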


- Why do node1, node2, node3 log "... says i am down ..."?
- Why do node1, node2, node3 log "... Monitor daemon marked osd.x down, but it is still running ..."?
- How can this problem be solved?


Files

ceph.conf (1.27 KB) - Vladimir Savinov, 04/16/2019 10:11 AM
#1

Updated by Brad Hubbard about 5 years ago

Turn up debug_ms to 5 maybe. It's very likely you need to look more closely at your network.
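
For example, debug_ms can be raised at runtime with injectargs (standard syntax, not specific to this cluster), or persistently with a "debug ms = 5" line in ceph.conf:

#ceph tell osd.* injectargs '--debug_ms 5'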

#2

Updated by Greg Farnum about 5 years ago

  • Project changed from Ceph to RADOS
#3

Updated by Vladimir Savinov about 5 years ago

I added a "debug ms = 1" line to the [osd] section and viewed the monitor log in /var/log/ceph:

...
mon.greend02-n02ceph02@1(peon) e3 ms_verify_authorizer bad authorizer from mon 192.168.118.19:6789/0
...

What does "ms_verify_authorizer bad authorizer from mon" mean? I copied all the keys to node1...3 (I followed the "install" guide at http://docs.ceph.com/docs/mimic).

I disabled auth in ceph.conf (set it to none) and rebooted all nodes ... no effect.
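
For reference, "disabled auth" presumably means the cephx settings; the usual way to turn them off in ceph.conf (a sketch of what was likely changed, not confirmed from the attached file) is:

[global]
auth cluster required = none
auth service required = none
auth client required = none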

#4

Updated by Vladimir Savinov about 5 years ago

2019-04-23 13:36:20.668791 osd.2 [WRN] Monitor daemon marked osd.2 down, but it is still running
2019-04-23 13:40:36.220357 mon.greend02-n01ceph02 [INF] osd.0 marked down after no beacon for 301.408772 seconds
2019-04-23 13:40:36.220598 mon.greend02-n01ceph02 [WRN] Health check failed: 1 osds down (OSD_DOWN)
2019-04-23 13:40:36.220624 mon.greend02-n01ceph02 [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2019-04-23 13:40:38.280754 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2019-04-23 13:40:38.280801 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (1 osds) down)
2019-04-23 13:40:38.280817 mon.greend02-n01ceph02 [INF] Cluster is now healthy
2019-04-23 13:40:38.286185 mon.greend02-n01ceph02 [INF] osd.0 192.168.118.17:6800/4870 boot
2019-04-23 13:40:37.306443 osd.0 [WRN] Monitor daemon marked osd.0 down, but it is still running
2019-04-23 13:40:56.223104 mon.greend02-n01ceph02 [INF] osd.1 marked down after no beacon for 301.551311 seconds
2019-04-23 13:40:56.223361 mon.greend02-n01ceph02 [WRN] Health check failed: 1 osds down (OSD_DOWN)
2019-04-23 13:40:56.223401 mon.greend02-n01ceph02 [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2019-04-23 13:40:58.527924 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2019-04-23 13:40:58.527968 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (1 osds) down)
2019-04-23 13:40:58.527984 mon.greend02-n01ceph02 [INF] Cluster is now healthy
2019-04-23 13:40:58.533013 mon.greend02-n01ceph02 [INF] osd.1 192.168.118.18:6800/4971 boot
2019-04-23 13:40:58.353101 osd.1 [WRN] Monitor daemon marked osd.1 down, but it is still running
2019-04-23 13:51:21.274008 mon.greend02-n01ceph02 [INF] osd.2 marked down after no beacon for 300.880526 seconds
2019-04-23 13:51:21.274623 mon.greend02-n01ceph02 [WRN] Health check failed: 1 osds down (OSD_DOWN)
2019-04-23 13:51:21.274653 mon.greend02-n01ceph02 [WRN] Health check failed: 1 host (1 osds) down (OSD_HOST_DOWN)
2019-04-23 13:51:22.676542 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2019-04-23 13:51:22.676583 mon.greend02-n01ceph02 [INF] Health check cleared: OSD_HOST_DOWN (was: 1 host (1 osds) down)
