Bug #54132
ssh errors too verbose
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Description
Related to https://tracker.ceph.com/issues/53358
14837 lines in the health detail output on the gibba cluster:
HEALTH_WARN 1 hosts fail cephadm check
[WRN] CEPHADM_HOST_CHECK_FAILED: 1 hosts fail cephadm check
    host gibba031 (172.21.2.131) failed check: Can't communicate with remote host `172.21.2.131`, possibly because python3 is not installed there. [Errno 113] Connect call failed ('172.21.2.131', 22)
Log: Opening SSH connection to 172.21.2.131, port 22
[conn=19, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
  00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80  [...............
  00000010: 00                                               .
[conn=19, chan=448] Initial send window 0, packet size 32768
[conn=20, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
  00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80  [...............
  00000010: 00                                               .
[conn=20, chan=448] Initial send window 0, packet size 32768
[conn=21, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
  00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80  [...............
  00000010: 00                                               .
[conn=21, chan=448] Initial send window 0, packet size 32768
[conn=22, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
  00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80  [...............
  00000010: 00                                               .
[conn=22, chan=448] Initial send window 0, packet size 32768
[conn=23, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
  00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80  [...............
  00000010: 00                                               .
[conn=23, chan=448] Initial send window 0, packet size 32768
[conn=24, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
  00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80  [...............
...
[conn=36, chan=447] Received 10977 data bytes
[conn=36, chan=447, pktid=3453] Received MSG_CHANNEL_DATA (94), 10 bytes
  00000000: 5e 00 00 01 bf 00 00 00 01 0a                    ^.........
[conn=36, chan=447] Received 1 data byte
[conn=36, chan=447, pktid=3454] Received MSG_CHANNEL_REQUEST (98), 25 bytes
  00000000: 62 00 00 01 bf 00 00 00 0b 65 78 69 74 2d 73 74  b........exit-st
  00000010: 61 74 75 73 00 00 00 00 00                       atus.....
[conn=36, chan=447] Received exit status 0
[conn=36, chan=447, pktid=3455] Received MSG_CHANNEL_EOF (96), 5 bytes
  00000000: 60 00 00 01 bf                                   `....
[conn=36, chan=447] Received EOF
[conn=36, chan=447, pktid=3456] Received MSG_CHANNEL_CLOSE (97), 5 bytes
  00000000: 61 00 00 01 bf                                   a....
[conn=36, chan=447] Received channel close
[conn=36, pktid=2715] Sent MSG_IGNORE (2), 5 bytes
  00000000: 02 00 00 00 00                                   .....
[conn=36, chan=447, pktid=2716] Sent MSG_CHANNEL_CLOSE (97), 5 bytes
  00000000: 61 00 00 00 00                                   a....
[conn=34, chan=447] Channel closed
[conn=36, chan=447] Channel closed
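For scale, the line count quoted above can be reproduced directly on an affected cluster; this is just a quick measurement of what the health check emits, not part of any fix:

ceph health detail | wc -l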
History
#1 Updated by Melissa Li almost 2 years ago
- Status changed from New to In Progress
- Assignee set to Melissa Li
- Pull request ID set to 45132
#2 Updated by Laura Flores over 1 year ago
- Status changed from In Progress to Resolved
#3 Updated by Vikhyat Umrao over 1 year ago
Quincy backport: https://github.com/ceph/ceph/pull/46055
#4 Updated by Vikhyat Umrao over 1 year ago
Workaround:
Set option mon_health_detail_to_clog to false.
If MONs are in quorum:
ceph config set mon mon_health_detail_to_clog false
If MONs are not in quorum:
The option needs to be set in /var/lib/ceph/$fsid/mon.$hostname/config on each monitor host, followed by a restart of the monitor service, as sketched below.
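A minimal sketch of that manual path, assuming the standard cephadm layout and systemd unit naming (ceph-$fsid@mon.$hostname); $fsid and $hostname are placeholders for the cluster fsid and the monitor's hostname, not values taken from this report:

# Append the option to the monitor's minimal config file
cat >> /var/lib/ceph/$fsid/mon.$hostname/config <<'EOF'
[mon]
        mon_health_detail_to_clog = false
EOF

# Restart the monitor through its cephadm-managed systemd unit
systemctl restart ceph-$fsid@mon.$hostname.service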