Bug #2233

closed

Throttle when there are lots of large concurrent IOs

Added by Mark Nelson about 12 years ago. Updated over 11 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
-
Category:
librados
Target version:
-
% Done:

0%

Source:
Q/A

Description

When sending large amounts of data via a single client (i.e. 256 concurrent 64 MB IOs), we can hit a bad_alloc on the client. This appears to take the entire node down.

INFO:teuthology.task.radosbench.radosbench.0.out:Maintaining 256 concurrent writes of 67108864 bytes for at least 60 seconds.
INFO:teuthology.task.radosbench.radosbench.0.err:terminate called after throwing an instance of 'ceph::buffer::bad_alloc'
INFO:teuthology.task.radosbench.radosbench.0.err: what(): buffer::bad_alloc
INFO:teuthology.task.radosbench.radosbench.0.err:*** Caught signal (Aborted) **
INFO:teuthology.task.radosbench.radosbench.0.err: in thread 7f174fa237a0
INFO:teuthology.task.radosbench.radosbench.0.err: ceph version 0.44.1-124-gf8a5386 (f8a53869f6db4c76516ee525f00f87f930920692)
INFO:teuthology.task.radosbench.radosbench.0.err: 1: /tmp/cephtest/binary/usr/local/bin/rados() [0x443ef6]
INFO:teuthology.task.radosbench.radosbench.0.err: 2: (()+0x10060) [0x7f174f0b7060]
INFO:teuthology.task.radosbench.radosbench.0.err: 3: (gsignal()+0x35) [0x7f174dcaa3a5]
INFO:teuthology.task.radosbench.radosbench.0.err: 4: (abort()+0x17b) [0x7f174dcadb0b]
INFO:teuthology.task.radosbench.radosbench.0.err: 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7f174e568d7d]
INFO:teuthology.task.radosbench.radosbench.0.err: 6: (()+0xb9f26) [0x7f174e566f26]
INFO:teuthology.task.radosbench.radosbench.0.err: 7: (()+0xb9f53) [0x7f174e566f53]
INFO:teuthology.task.radosbench.radosbench.0.err: 8: (()+0xba04e) [0x7f174e56704e]
INFO:teuthology.task.radosbench.radosbench.0.err: 9: (ceph::buffer::create_page_aligned(unsigned int)+0x95) [0x45a055]
INFO:teuthology.task.radosbench.radosbench.0.err: 10: (ceph::buffer::list::append(char const*, unsigned int)+0x49) [0x45b4b9]
INFO:teuthology.task.radosbench.radosbench.0.err: 11: (write_bench(librados::Rados&, librados::IoCtx&, int, int, bench_data*)+0x2b1) [0x42e5d1]
INFO:teuthology.task.radosbench.radosbench.0.err: 12: (aio_bench(librados::Rados&, librados::IoCtx&, int, int, int, int)+0x399) [0x430149]
INFO:teuthology.task.radosbench.radosbench.0.err: 13: (main()+0x4796) [0x42a1d6]
INFO:teuthology.task.radosbench.radosbench.0.err: 14: (__libc_start_main()+0xed) [0x7f174dc9530d]
INFO:teuthology.task.radosbench.radosbench.0.err: 15: /tmp/cephtest/binary/usr/local/bin/rados() [0x42b4f9]
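For scale, a quick back-of-the-envelope check of the client-side buffering implied by the log above (numbers taken from the radosbench output, not from the ticket):

```python
# Sanity check: how much data does rados bench hold in flight with these settings?
concurrent_ops = 256
op_size_bytes = 67108864  # 64 MiB per write, from the log line above

total = concurrent_ops * op_size_bytes
print(total, "bytes =", total // 2**30, "GiB")  # 17179869184 bytes = 16 GiB
```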

Actions #1

Updated by Greg Farnum about 12 years ago

That is 16GB of RAM being allocated and used — I don't remember what hardware these are running on and have no idea what their swap configs are, but keep that in mind!

Actions #2

Updated by Mark Nelson about 12 years ago

Aha! The plana nodes appear to only have 8 GB of RAM and 8 GB of swap. Is the allocation of that memory part of librados? What will happen (even, say, with smaller message sizes) if an application on the client is using up the majority of the available memory?

Actions #3

Updated by Greg Farnum about 12 years ago

Just the rados bench tool itself is allocating 16GB to feed into librados.

Now that you mention it, librados might be duplicating that again — I don't remember for sure; it depends on the interface being used.

Anyway, data that isn't yet committed to disk needs to be kept in-memory on the client, no way around that. You're trying to overcommit the hardware, and maybe it could fail more gracefully but it does have to fail. :)

Actions #4

Updated by Mark Nelson about 12 years ago

Yeah, it's the failing gracefully bit that I'm interested in. :)
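One way an application could degrade gracefully here is to bound the total bytes in flight before submitting more AIO, blocking (or erroring) instead of allocating without limit. A minimal sketch of such a throttle; the `ByteThrottle` class and its method names are illustrative, not part of the librados API:

```python
import threading

class ByteThrottle:
    """Bound the total bytes of in-flight writes.

    Callers block in acquire() until enough completions have
    released their bytes to make room, so client memory use
    stays under max_in_flight_bytes instead of growing unbounded.
    """

    def __init__(self, max_in_flight_bytes):
        self.max_bytes = max_in_flight_bytes
        self.in_flight = 0
        self.cond = threading.Condition()

    def acquire(self, nbytes):
        # Block until this write fits under the cap, then reserve its bytes.
        with self.cond:
            while self.in_flight + nbytes > self.max_bytes:
                self.cond.wait()
            self.in_flight += nbytes

    def release(self, nbytes):
        # Called from the write-completion path; wakes blocked submitters.
        with self.cond:
            self.in_flight -= nbytes
            self.cond.notify_all()
```

A benchmark loop would call `acquire(len(buf))` before each async write and `release(len(buf))` in the completion callback, so submission pauses instead of the allocator throwing bad_alloc.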

Actions #5

Updated by Sage Weil over 11 years ago

  • Status changed from New to Won't Fix