Bug #3569 (closed)

Monitor & OSD failures when an OSD clock is wrong

Added by Greg Farnum over 11 years ago. Updated about 10 years ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression:
Severity:
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

I know that ceph requires time-synced servers, but I think a sane
failure mode, like a message in the logs instead of uncontrollably
growing memory usage, would be a good idea.

I had the NTP process die on me tonight on an OSD (for unknown reasons
so far ...) and the clock went 3000s out of sync, and the OSD memory
just kept growing, and so did the master mon's memory.
(Which has the nice effect of the master mon getting OOM-killed,
then one of the backups takes the master role, grows as well, gets
killed, and so on and so forth until there is no quorum anymore.)

It happened very reliably at each attempt to restart the OSD and
stopped right when I fixed the clock.
To reproduce: take a working cluster, take an OSD out, let it rebalance,
set the clock of one of the OSD hosts 50 min too fast, and restart that OSD.

I had it occur twice with the same clock sync problems (once in a
test cluster with just 2 OSDs, IIRC, and once in the prod cluster).

I don't hit it anymore because I patched the underlying problem that
was causing the clock to jump forward 50 min.

If you can't reproduce it locally, I can try to reproduce it again on
the test cluster tomorrow.

My best guess was that the messages carry a timestamp, the receiver
refuses to process messages too far in the future, and maybe it just
queues them while waiting (but 50 min worth of messages is a lot of
memory). But that's really a wild guess :p

It's not queuing up messages until their timestamp comes due, but my best guess is that it might be trying to get new cephx keys?
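
Purely for illustration (this is not how Ceph's message path actually works; the struct, class, and 30-second tolerance below are invented for the sketch): a minimal C++ example of the "sane failure mode" being asked for above, where a message stamped too far in the future is logged and dropped rather than queued without bound.

```cpp
// Minimal sketch only (not Ceph code): log and drop a message stamped
// too far in the future instead of queueing it indefinitely.
// The Message struct and the 30s tolerance are made up for the example.
#include <chrono>
#include <deque>
#include <iostream>
#include <string>

struct Message {
    std::chrono::system_clock::time_point stamp;  // sender's clock
    std::string payload;
};

class Receiver {
    std::deque<Message> queue_;
public:
    // Hypothetical tolerance for future-dated messages.
    static constexpr std::chrono::seconds kMaxFutureSkew{30};

    void handle(const Message& m) {
        auto now = std::chrono::system_clock::now();
        if (m.stamp > now + kMaxFutureSkew) {
            auto skew = std::chrono::duration_cast<std::chrono::seconds>(
                            m.stamp - now).count();
            // One log line, message dropped, no unbounded memory growth.
            std::cerr << "dropping message stamped " << skew
                      << "s in the future (clock skew?)\n";
            return;
        }
        queue_.push_back(m);  // normal processing path
    }

    std::size_t pending() const { return queue_.size(); }
};

int main() {
    Receiver r;
    // A message from a host whose clock is 50 minutes fast.
    r.handle({std::chrono::system_clock::now() + std::chrono::minutes(50),
              "osd ping"});                                   // logged, dropped
    r.handle({std::chrono::system_clock::now(), "osd ping"}); // accepted
    std::cout << "queued: " << r.pending() << "\n";
    return 0;
}
```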


Related issues: 1 (0 open, 1 closed)

Related to Ceph - Bug #3609: mon: track down the Monitor's memory consumption sources (Resolved, Joao Eduardo Luis, 12/12/2012)

Actions #1

Updated by Joao Eduardo Luis over 11 years ago

Might not be related, but when I triggered bug #3587 earlier today I noticed that the monitor started consuming over 1.3GB of memory. My guess is that the memory consumption gets out of hand when messages are queued. These are small messages, but they tend to arrive at a huge rate, and given that this particular monitor was out of the quorum, the messages weren't being handled at all.

I also noticed that many of these messages were auth related, though, so you might be right and it might have something to do with cephx. I'll try to reproduce the previous behavior and track down the causes.

Actions #2

Updated by Joao Eduardo Luis over 11 years ago

Oh... scratch that. Only after a closer look, once I had reproduced the bug, did I notice that what I saw was VIRT; RSS in this case is pretty much stable, slowly growing but still around 13 MB.
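
Since VIRT and RSS are easy to mix up in top, here is a small standalone sketch (not part of Ceph; it only reads standard Linux procfs fields) that prints VmSize and VmRSS for a given pid, which correspond to the VIRT and RES/RSS numbers being compared above.

```cpp
// Standalone Linux-only helper (not Ceph code): print VmSize (the VIRT
// column in top) and VmRSS (the RES column) for a pid, so the two are
// not confused when watching a ceph-mon or ceph-osd process.
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    std::string pid = (argc > 1) ? argv[1] : "self";
    std::ifstream status("/proc/" + pid + "/status");
    if (!status) {
        std::cerr << "cannot open /proc/" << pid << "/status\n";
        return 1;
    }
    std::string line;
    while (std::getline(status, line)) {
        if (line.rfind("VmSize:", 0) == 0 ||   // virtual memory (VIRT)
            line.rfind("VmRSS:", 0) == 0) {    // resident memory (RSS)
            std::cout << line << "\n";
        }
    }
    return 0;
}
```

Run it with the pid of the monitor or OSD process as the only argument; with no argument it inspects itself.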

Actions #3

Updated by Dzianis Huznou over 10 years ago

I wasn't able to reproduce this bug on Ceph v0.72.1. The memory increase was insignificant and stopped after a while.

Actions #4

Updated by Samuel Just about 10 years ago

  • Status changed from New to Can't reproduce