Bug #19511

closed

bluestore overwhelms aio queue

Added by Dmitry Smirnov about 7 years ago. Updated over 6 years ago.

Status:
Resolved
Priority:
High
Assignee:
Sage Weil
Target version:
v12.2.0
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
1 - critical
Reviewed:
Affected Versions:
v12.2.0
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

In a Ceph cluster running v12.0.1 with the BlueStore backend enabled, some OSDs (currently 4 of 30, on different hosts) started being marked DOWN, and the OSD processes fail with core dumps on restart attempts.
This seems to have started after the upgrade from v12.0.0 to v12.0.1. Ceph is deployed on six Red Hat 7.3 servers with 10 Gb/s interconnect.

Error message:
-------------------------------------------------------------------------------------------------------------------------------------------------
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:12.512466 7f5498612b00 1 WARNING: the following dangerous and experimental features are enabled: bluestore
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:12.512748 7f5498612b00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:12.512890 7f5498612b00 -1 WARNING: experimental feature 'bluestore' is enabled
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: Please be aware that this feature is experimental, untested,
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: unsupported, and may result in data corruption, data loss,
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: and/or irreparable damage to your cluster. Do not use
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: feature with important data.
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: starting osd.18 at - osd_data /var/lib/ceph/osd/ceph-18 /var/lib/ceph/osd/ceph-18/journal
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:12.535362 7f5498612b00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:12.632827 7f5498612b00 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:12.633139 7f5498612b00 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:12.633374 7f5498612b00 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:12.633709 7f5498612b00 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:12 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:12.633977 7f5498612b00 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:14 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:14.484098 7f5498612b00 -1 osd.18 4529 log_to_monitors {default=true}
Apr 06 12:52:15 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:15.711108 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:15 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:15.849766 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:15 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:15.850305 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:15 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:15.850851 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:15 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:15.959358 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:15 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:15.960130 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:15 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:15.960468 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:16 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:16.124253 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:24.317687 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 16
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:24.317693 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio submit got (11) Resource temporarily unavailable
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: ceph version 12.0.1 (5456408827a1a31690514342624a4ff9b66be1d5)
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x7f5498fe4470]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2: (KernelDevice::aio_submit(IOContext*)+0x893) [0x7f5498f6d723]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 3: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x72) [0x7f5498e46072]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 4: (BlueStore::_deferred_try_submit(BlueStore::OpSequencer*)+0xf44) [0x7f5498e92904]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 5: (BlueStore::_deferred_try_submit()+0xaa) [0x7f5498e9300a]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 6: (BlueStore::_kv_sync_thread()+0x1cde) [0x7f5498e9946e]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 7: (BlueStore::KVSyncThread::entry()+0xd) [0x7f5498ec6eed]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 8: (()+0x7dc5) [0x7f5495e80dc5]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 9: (clone()+0x6d) [0x7f5494f7573d]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:24.320362 7f5485251700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACH
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: ceph version 12.0.1 (5456408827a1a31690514342624a4ff9b66be1d5)
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x7f5498fe4470]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2: (KernelDevice::aio_submit(IOContext*)+0x893) [0x7f5498f6d723]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 3: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x72) [0x7f5498e46072]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 4: (BlueStore::_deferred_try_submit(BlueStore::OpSequencer*)+0xf44) [0x7f5498e92904]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 5: (BlueStore::_deferred_try_submit()+0xaa) [0x7f5498e9300a]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 6: (BlueStore::_kv_sync_thread()+0x1cde) [0x7f5498e9946e]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 7: (BlueStore::KVSyncThread::entry()+0xd) [0x7f5498ec6eed]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 8: (()+0x7dc5) [0x7f5495e80dc5]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 9: (clone()+0x6d) [0x7f5494f7573d]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -9902> 2017-04-06 12:52:14.484098 7f5498612b00 -1 osd.18 4529 log_to_monitors {default=true}
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -971> 2017-04-06 12:52:15.711108 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -928> 2017-04-06 12:52:15.849766 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -917> 2017-04-06 12:52:15.850305 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -906> 2017-04-06 12:52:15.850851 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -881> 2017-04-06 12:52:15.959358 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -880> 2017-04-06 12:52:15.960130 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -879> 2017-04-06 12:52:15.960468 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -837> 2017-04-06 12:52:16.124253 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 1
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -2> 2017-04-06 12:52:24.317687 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio_submit retries 16
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: -1> 2017-04-06 12:52:24.317693 7f5485251700 -1 bdev(0x7f54a2ca3a00 /var/lib/ceph/osd/ceph-18/block) aio submit got (11) Resource temporarily unavailable
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 0> 2017-04-06 12:52:24.320362 7f5485251700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/M
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: ceph version 12.0.1 (5456408827a1a31690514342624a4ff9b66be1d5)
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x7f5498fe4470]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2: (KernelDevice::aio_submit(IOContext*)+0x893) [0x7f5498f6d723]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 3: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x72) [0x7f5498e46072]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 4: (BlueStore::_deferred_try_submit(BlueStore::OpSequencer*)+0xf44) [0x7f5498e92904]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 5: (BlueStore::_deferred_try_submit()+0xaa) [0x7f5498e9300a]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 6: (BlueStore::_kv_sync_thread()+0x1cde) [0x7f5498e9946e]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 7: (BlueStore::KVSyncThread::entry()+0xd) [0x7f5498ec6eed]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 8: (()+0x7dc5) [0x7f5495e80dc5]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 9: (clone()+0x6d) [0x7f5494f7573d]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: *** Caught signal (Aborted) ***
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: in thread 7f5485251700 thread_name:bstore_kv_sync
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: ceph version 12.0.1 (5456408827a1a31690514342624a4ff9b66be1d5)
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 1: (()+0x95091f) [0x7f5498f8991f]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2: (()+0xf370) [0x7f5495e88370]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 3: (gsignal()+0x37) [0x7f5494eb31d7]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 4: (abort()+0x148) [0x7f5494eb48c8]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x284) [0x7f5498fe45e4]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 6: (KernelDevice::aio_submit(IOContext*)+0x893) [0x7f5498f6d723]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 7: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x72) [0x7f5498e46072]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 8: (BlueStore::_deferred_try_submit(BlueStore::OpSequencer*)+0xf44) [0x7f5498e92904]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 9: (BlueStore::_deferred_try_submit()+0xaa) [0x7f5498e9300a]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 10: (BlueStore::_kv_sync_thread()+0x1cde) [0x7f5498e9946e]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 11: (BlueStore::KVSyncThread::entry()+0xd) [0x7f5498ec6eed]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 12: (()+0x7dc5) [0x7f5495e80dc5]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 13: (clone()+0x6d) [0x7f5494f7573d]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2017-04-06 12:52:24.344606 7f5485251700 -1 *** Caught signal (Aborted) ***
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: in thread 7f5485251700 thread_name:bstore_kv_sync
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: ceph version 12.0.1 (5456408827a1a31690514342624a4ff9b66be1d5)
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 1: (()+0x95091f) [0x7f5498f8991f]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2: (()+0xf370) [0x7f5495e88370]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 3: (gsignal()+0x37) [0x7f5494eb31d7]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 4: (abort()+0x148) [0x7f5494eb48c8]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x284) [0x7f5498fe45e4]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 6: (KernelDevice::aio_submit(IOContext*)+0x893) [0x7f5498f6d723]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 7: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x72) [0x7f5498e46072]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 8: (BlueStore::_deferred_try_submit(BlueStore::OpSequencer*)+0xf44) [0x7f5498e92904]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 9: (BlueStore::_deferred_try_submit()+0xaa) [0x7f5498e9300a]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 10: (BlueStore::_kv_sync_thread()+0x1cde) [0x7f5498e9946e]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 11: (BlueStore::KVSyncThread::entry()+0xd) [0x7f5498ec6eed]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 12: (()+0x7dc5) [0x7f5495e80dc5]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 13: (clone()+0x6d) [0x7f5494f7573d]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 0> 2017-04-06 12:52:24.344606 7f5485251700 -1 *** Caught signal (Aborted) ***
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: in thread 7f5485251700 thread_name:bstore_kv_sync
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: ceph version 12.0.1 (5456408827a1a31690514342624a4ff9b66be1d5)
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 1: (()+0x95091f) [0x7f5498f8991f]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 2: (()+0xf370) [0x7f5495e88370]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 3: (gsignal()+0x37) [0x7f5494eb31d7]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 4: (abort()+0x148) [0x7f5494eb48c8]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x284) [0x7f5498fe45e4]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 6: (KernelDevice::aio_submit(IOContext*)+0x893) [0x7f5498f6d723]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 7: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x72) [0x7f5498e46072]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 8: (BlueStore::_deferred_try_submit(BlueStore::OpSequencer*)+0xf44) [0x7f5498e92904]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 9: (BlueStore::_deferred_try_submit()+0xaa) [0x7f5498e9300a]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 10: (BlueStore::_kv_sync_thread()+0x1cde) [0x7f5498e9946e]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 11: (BlueStore::KVSyncThread::entry()+0xd) [0x7f5498ec6eed]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 12: (()+0x7dc5) [0x7f5495e80dc5]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: 13: (clone()+0x6d) [0x7f5494f7573d]
Apr 06 12:52:24 rdocloudnode5.kc.lv ceph-osd95063: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 06 12:52:24 rdocloudnode5.kc.lv abrt-hook-ccpp95209: Process 95063 (ceph-osd) of user 167 killed by SIGABRT - dumping core
Apr 06 12:52:25 rdocloudnode5.kc.lv abrt-hook-ccpp95209: Failed to create core_backtrace: waitpid failed: No child processes
Apr 06 12:52:25 rdocloudnode5.kc.lv systemd1: : main process exited, code=killed, status=6/ABRT
Apr 06 12:52:25 rdocloudnode5.kc.lv systemd1: Unit entered failed state.
Apr 06 12:52:25 rdocloudnode5.kc.lv systemd1: failed.
Apr 06 12:52:25 rdocloudnode5.kc.lv abrt-server95222: Package 'ceph-osd' isn't signed with proper key
Apr 06 12:52:25 rdocloudnode5.kc.lv abrt-server95222: 'post-create' on '/var/spool/abrt/ccpp-2017-04-06-12:52:24-95063' exited with 1
Apr 06 12:52:25 rdocloudnode5.kc.lv abrt-server95222: Deleting problem directory '/var/spool/abrt/ccpp-2017-04-06-12:52:24-95063'
------------------------------------------------------------------------------------------------------------------------------------------------

  # ceph-disk list
    /dev/dm-0 other, xfs, mounted on /
    /dev/dm-1 swap, swap
    /dev/nvme0n1 :
    /dev/nvme0n1p1 ceph block.wal, for /dev/sdb1
    /dev/nvme0n1p2 ceph block.wal, for /dev/sdc1
    /dev/nvme0n1p3 ceph block.wal, for /dev/sdd1
    /dev/nvme0n1p4 ceph block.wal, for /dev/sde1
    /dev/nvme0n1p5 ceph block.wal, for /dev/sdf1
    /dev/sda :
    /dev/sda1 other, vfat, mounted on /boot/efi
    /dev/sda2 other, xfs, mounted on /boot
    /dev/sda3 other, LVM2_member
    /dev/sdb :
    /dev/sdb1 ceph data, active, cluster ceph, osd.16, block /dev/sdb2, block.wal /dev/nvme0n1p1
    /dev/sdb2 ceph block, for /dev/sdb1
    /dev/sdc :
    /dev/sdc1 ceph data, active, cluster ceph, osd.17, block /dev/sdc2, block.wal /dev/nvme0n1p2
    /dev/sdc2 ceph block, for /dev/sdc1
    /dev/sdd :
    /dev/sdd1 ceph data, active, cluster ceph, osd.18, block /dev/sdd2, block.wal /dev/nvme0n1p3
    /dev/sdd2 ceph block, for /dev/sdd1
    /dev/sde :
    /dev/sde1 ceph data, active, cluster ceph, osd.19, block /dev/sde2, block.wal /dev/nvme0n1p4
    /dev/sde2 ceph block, for /dev/sde1
    /dev/sdf :
    /dev/sdf1 ceph data, active, cluster ceph, osd.28, block /dev/sdf2, block.wal /dev/nvme0n1p5
    /dev/sdf2 ceph block, for /dev/sdf1

Related issues: 1 (0 open, 1 closed)

Related to RADOS - Bug #21171: bluestore: aio submission deadlock (Resolved, Sage Weil, 08/29/2017)

Actions #1

Updated by Dmitry Smirnov about 7 years ago

The problem occurs under high write load, for example when changing the size of an existing pool from 2 to 3. It seems the issue is not specific to the BlueStore back-end, as I implied in the ticket's subject, but is generic.
The entire cluster becomes unstable and mostly unusable, and many OSDs flap for hours.

The same effect can be reproduced with FIO write tests whenever the iodepth parameter * replica count > OSD count in the pool being tested. For example, with a Ceph pool built on top of 24 OSDs and pool size = 3, everything works fine with iodepth = 8, but with iodepth = 9 or more we hit the issue. With a replica count of 2 we hit the same issue at iodepth = 13 or more.
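
For illustration, a minimal fio job along these lines reproduces the trigger; the file name, sizes, and exact iodepth are assumptions for the 24-OSD, size = 3 case above, not taken verbatim from my tests:

    [aio-overload]
    ioengine=libaio
    direct=1
    rw=randwrite
    bs=4k
    size=1g
    # 9 in-flight writes * 3 replicas = 27, which exceeds the 24 OSDs in the pool
    iodepth=9
    filename=/mnt/rbd-test/fio.dat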

I tried changing and tuning the following Linux and Ceph settings (a sketch of where these are typically applied follows the list):
kernel.pid_max set to 4194303
hard nofile set to 65535
osd_op_threads for all OSDs increased from its default of 2 to 8
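
A rough sketch, assuming the usual locations (the file paths are illustrative; the values are the ones listed above):

    # /etc/sysctl.d/99-ceph.conf
    kernel.pid_max = 4194303

    # /etc/security/limits.d/ceph.conf
    *    hard    nofile    65535

    # /etc/ceph/ceph.conf
    [osd]
    osd_op_threads = 8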

Actions #2

Updated by Dmitry Smirnov about 7 years ago

I managed to catch a specific OSD crash using the 'gdb' debugger.

--------------------------------------------------------------------------------------------------------------------
(gdb)

2017-04-07 13:47:31.936839 7fffe4c17700 -1 bdev(0x55555f1c3a00 /var/lib/ceph/osd/ceph-16/block) aio_submit retries 1
2017-04-07 13:47:31.937347 7fffe4c17700 -1 bdev(0x55555f1c3a00 /var/lib/ceph/osd/ceph-16/block) aio_submit retries 2
2017-04-07 13:47:31.942187 7fffe4c17700 -1 bdev(0x55555f1c3a00 /var/lib/ceph/osd/ceph-16/block) aio_submit retries 5
[New Thread 0x7fffd1bf1700 (LWP 123601)]
[New Thread 0x7fffd13f0700 (LWP 123602)]
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/ceph-12.0.1/src/os/bluestore/KernelDevice.cc: In function 'virtual void KernelDevice::aio_submit(IOContext*)' thread 7fffd73fc700 time 2017-04-07 13:49:17.179409
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/ceph-12.0.1/src/os/bluestore/KernelDevice.cc: 446: FAILED assert(r == 0)
2017-04-07 13:49:17.179370 7fffd73fc700 -1 bdev(0x55555f1c3a00 /var/lib/ceph/osd/ceph-16/block) aio_submit retries 16
2017-04-07 13:49:17.179383 7fffd73fc700 -1 bdev(0x55555f1c3a00 /var/lib/ceph/osd/ceph-16/block) aio submit got (11) Resource temporarily unavailable
ceph version 12.0.1 (5456408827a1a31690514342624a4ff9b66be1d5)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x555555eff470]
2: (KernelDevice::aio_submit(IOContext*)+0x893) [0x555555e88723]
3: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x72) [0x555555d61072]
4: (BlueStore::_deferred_try_submit(BlueStore::OpSequencer*)+0xf44) [0x555555dad904]
5: (BlueStore::_deferred_try_submit()+0xaa) [0x555555dae00a]
6: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0xd9e) [0x555555dcfd7e]
7: (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x65) [0x555555b6c855]
8: (ReplicatedBackend::sub_op_modify(boost::intrusive_ptr<OpRequest>)+0xbf3) [0x555555c4b4d3]
9: (ReplicatedBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x364) [0x555555c50ce4]
10: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x485) [0x555555b100b5]
11: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x225) [0x5555559bd9c5]
12: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x57) [0x5555559bde27]
13: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xfce) [0x5555559e766e]
14: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x945) [0x555555f050d5]
15: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x555555f07230]
16: (()+0x7dc5) [0x7ffff5846dc5]
17: (clone()+0x6d) [0x7ffff493b73d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2017-04-07 13:49:17.183821 7fffd73fc700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/ceph-12.0.1/src/os/bluestore/KernelDevice.cc: In function 'virtual void KernelDevice::aio_submit(IOContext*)' thread 7fffd73fc700 time 2017-04-07 13:49:17.179409
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/ceph-12.0.1/src/os/bluestore/KernelDevice.cc: 446: FAILED assert(r == 0)

ceph version 12.0.1 (5456408827a1a31690514342624a4ff9b66be1d5)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x555555eff470]
2: (KernelDevice::aio_submit(IOContext*)+0x893) [0x555555e88723]
3: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x72) [0x555555d61072]
4: (BlueStore::_deferred_try_submit(BlueStore::OpSequencer*)+0xf44) [0x555555dad904]
5: (BlueStore::_deferred_try_submit()+0xaa) [0x555555dae00a]
6: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0xd9e) [0x555555dcfd7e]
7: (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x65) [0x555555b6c855]
8: (ReplicatedBackend::sub_op_modify(boost::intrusive_ptr<OpRequest>)+0xbf3) [0x555555c4b4d3]
9: (ReplicatedBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x364) [0x555555c50ce4]
10: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x485) [0x555555b100b5]
11: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x225) [0x5555559bd9c5]
12: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x57) [0x5555559bde27]
13: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xfce) [0x5555559e766e]
14: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x945) [0x555555f050d5]
15: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x555555f07230]
16: (()+0x7dc5) [0x7ffff5846dc5]
17: (clone()+0x6d) [0x7ffff493b73d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
-2> 2017-04-07 13:49:17.179370 7fffd73fc700 -1 bdev(0x55555f1c3a00 /var/lib/ceph/osd/ceph-16/block) aio_submit retries 16
-1> 2017-04-07 13:49:17.179383 7fffd73fc700 -1 bdev(0x55555f1c3a00 /var/lib/ceph/osd/ceph-16/block) aio submit got (11) Resource temporarily unavailable
0> 2017-04-07 13:49:17.183821 7fffd73fc700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/ceph-12.0.1/src/os/bluestore/KernelDevice.cc: In function 'virtual void KernelDevice::aio_submit(IOContext*)' thread 7fffd73fc700 time 2017-04-07 13:49:17.179409
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.0.1/rpm/el7/BUILD/ceph-12.0.1/src/os/bluestore/KernelDevice.cc: 446: FAILED assert(r == 0)
ceph version 12.0.1 (5456408827a1a31690514342624a4ff9b66be1d5)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x555555eff470]
2: (KernelDevice::aio_submit(IOContext*)+0x893) [0x555555e88723]
3: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x72) [0x555555d61072]
4: (BlueStore::_deferred_try_submit(BlueStore::OpSequencer*)+0xf44) [0x555555dad904]
5: (BlueStore::_deferred_try_submit()+0xaa) [0x555555dae00a]
6: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0xd9e) [0x555555dcfd7e]
7: (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x65) [0x555555b6c855]
8: (ReplicatedBackend::sub_op_modify(boost::intrusive_ptr<OpRequest>)+0xbf3) [0x555555c4b4d3]
9: (ReplicatedBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x364) [0x555555c50ce4]
10: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x485) [0x555555b100b5]
11: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x225) [0x5555559bd9c5]
12: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x57) [0x5555559bde27]
13: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xfce) [0x5555559e766e]
14: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x945) [0x555555f050d5]
15: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x555555f07230]
16: (()+0x7dc5) [0x7ffff5846dc5]
17: (clone()+0x6d) [0x7ffff493b73d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffd73fc700 (LWP 122824)]
0x00007ffff48791d7 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install fuse-libs-2.9.2-7.el7.x86_64 glibc-2.17-157.el7_3.1.x86_64 gperftools-libs-2.4.91-1.el7.x86_64 leveldb-1.12.0-11.el7.x86_64 libaio-0.3.109-13.el7.x86_64 libblkid-2.23.2-33.el7.x86_64 libgcc-4.8.5-11.el7.x86_64 libibverbs-1.2.1-1.el7.x86_64 libnl3-3.2.28-2.el7.x86_64 libstdc++-4.8.5-11.el7.x86_64 libunwind-1.1-5.el7_2.2.x86_64 libuuid-2.23.2-33.el7.x86_64 lttng-ust-2.4.1-4.el7.x86_64 nspr-4.11.0-1.el7_2.x86_64 nss-3.21.3-2.el7_3.x86_64 nss-softokn-3.16.2.3-14.4.el7.x86_64 nss-softokn-freebl-3.16.2.3-14.4.el7.x86_64 nss-util-3.21.3-1.1.el7_3.x86_64 snappy-1.1.0-3.el7.x86_64 sqlite-3.7.17-8.el7.x86_64 userspace-rcu-0.7.16-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb)
--------------------------------------------------------------------------------------------------------------------

Actions #3

Updated by Sage Weil about 7 years ago

  • Subject changed from "BlueStore OSDs goes down with core dumps" to "bluestore overwhelms aio queue"
  • Status changed from New to Need More Info
  • Assignee set to Sage Weil

The crash is due to overwhelming the kernel's aio queue. You can adjust this up with bdev_aio_max_queue_depth = 1024 or similar; that should make this go away?
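
For example, a sketch of one way to apply it (the option goes in the [osd] section; since the aio queue is sized when the device is opened, it likely needs an OSD restart to take effect):

    # /etc/ceph/ceph.conf
    [osd]
    bdev_aio_max_queue_depth = 1024

    # then restart the affected OSDs, e.g.
    systemctl restart ceph-osd@18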

Although this is theoretically possible, I haven't seen this crash in practice in any of my environments. Is this an HDD or SSD or mix? What's the workload?

Actions #4

Updated by Dmitry Smirnov about 7 years ago

Sage,

I just came back to report that the problem was solved by increasing 'bdev_aio_max_queue_depth' for all OSDs from its default value of 32 to a larger one,
and then saw your note here with the same suggestion :)
We use 10K SAS HDDs with extremely fast Flashtec NVMe cards for the BlueStore WAL journals. FIO write tests with a quite moderate iodepth were causing this issue. So, at the moment a single cloud user can bring down a whole Ceph cluster by such simple means.

Please evaluate the possibility of increasing the 'bdev_aio_max_queue_depth' default to a larger value.

Thanks!

Actions #5

Updated by Igor Fedotov almost 7 years ago

Sage Weil wrote:

The crash is due to overwhelming the kernel's aio queue. You can adjust this up with bdev_aio_max_queue_depth = 1024 or similar; that should make this go away?

Although this is theoretically possible, I haven't seen this crash in practice in any of my environments. Is this an HDD or SSD or mix? What's the workload?

Yeah, I saw the same issues with the previous default depth value. SSD case...

Actions #6

Updated by Igor Fedotov almost 7 years ago

And Ubuntu 16.04

Actions #7

Updated by Sage Weil almost 7 years ago

The default has been increased to 1024... have you seen it since then? I have a feeling this is "fixed" now.

Actions #8

Updated by Sage Weil almost 7 years ago

  • Status changed from Need More Info to Resolved
Actions #9

Updated by Luis Henriques almost 7 years ago

This may be a different issue, but I'm seeing a similar trace with current master branch:

 ceph version 12.0.3-1919-g782b63ae9c (782b63ae9c1eba1d0eb61a1bed1a8874329944ca) luminous (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0xf5) [0x5578a35f76a5]
 2: (KernelDevice::aio_submit(IOContext*)+0xb10) [0x5578a3592e30]
 3: (BlueStore::_deferred_submit(BlueStore::OpSequencer*)+0x713) [0x5578a3455a03]
 4: (BlueStore::_deferred_try_submit()+0x1c6) [0x5578a3456356]
 5: (BlueStore::_txc_finish(BlueStore::TransContext*)+0x9c7) [0x5578a346adf7]
 6: (BlueStore::_txc_state_proc(BlueStore::TransContext*)+0xba) [0x5578a347b66a]
 7: (BlueStore::_kv_finalize_thread()+0xa0c) [0x5578a347d1ec]
 8: (BlueStore::KVFinalizeThread::entry()+0xd) [0x5578a34d266d]
 9: (()+0x74e7) [0x7f93b7d974e7]
 10: (clone()+0x3f) [0x7f93b6e0aa2f]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

This is using a vstart.sh cluster (with the '-b' switch), running fio on a CephFS mount with a simple job file:
   [random-writers]
   rw=randrw
   size=32m
   numjobs=8

Actions #10

Updated by Jan Fajerski over 6 years ago

We are still able to trigger this behaviour. On a cluster where 3 OSDs with block storage on SSDs share an NVMe for WAL/DB (times 10), a moderately sized fio workload from multiple clients brings OSDs down. Increasing bdev_aio_max_queue_depth from the 1024 default to 8192 or higher has so far mitigated this.

Not sure whether this warrants re-opening this bug, or whether this particular setup simply needs a higher bdev_aio_max_queue_depth default.

edit: This is on ceph version 12.2.0 (originally stated as 12.1.4).

Actions #11

Updated by Jan Fajerski over 6 years ago

  • Status changed from Resolved to 12
  • Target version changed from v12.0.0 to v12.2.0
  • Affected Versions v12.2.0 added
  • Affected Versions deleted (v12.0.0)
Actions #12

Updated by Sage Weil over 6 years ago

  • Related to Bug #21171: bluestore: aio submission deadlock added
Actions #13

Updated by Dmitry Smirnov over 6 years ago

Yes, I saw similar behaviour again yesterday after upgrading to version 12.2.0.
I haven't collected enough information yet, but I saw OSDs fluctuating (up/down) under a high random I/O load caused by a simple 'rbd bench' command:
  # rbd bench --io-type write --io-pattern rand --io-size 4K --io-total 1G basetier/performance-test

My current bdev_aio_max_queue_depth setting is 1024; BlueStore back-end, EC pool with overwrites.
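
For reference, the kind of setup this runs against looks roughly like the following; the base pool and image names match the command above and the data pool name matches the image info further below, but the PG counts, EC profile, and image size are illustrative:

    # replicated base pool for image metadata, EC pool (with overwrites enabled) for data
    ceph osd pool create basetier 64 64 replicated
    ceph osd pool create spacetier 64 64 erasure
    ceph osd pool set spacetier allow_ec_overwrites true
    rbd create --size 100G --data-pool spacetier basetier/performance-test
    # then the benchmark from above
    rbd bench --io-type write --io-pattern rand --io-size 4K --io-total 1G basetier/performance-test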

Actions #14

Updated by Jan Fajerski over 6 years ago

Just re-ran the same fio experiment (multiple clients hitting an SSD/NVMe-backed cluster via CephFS) on ceph version 12.2.0-398-g1264bae1a8. This should contain the fix for #21171 if I'm not mistaken.

The workload produces blocked requests, though no downed OSDs. Increasing bdev_aio_max_queue_depth has so far not helped (currently at 2048).

Actions #15

Updated by Sage Weil over 6 years ago

Jan - I can't find that commit (1264bae1a8) in the repo. Can you verify which version you're running? And/or try with the latest luminous branch? (e.g., https://shaman.ceph.com/builds/ceph/luminous/c3f1c891ef19db8c534045102b2778d9a0335ffe/)

Actions #16

Updated by Sage Weil over 6 years ago

  • Status changed from 12 to Need More Info
Actions #17

Updated by Dmitry Smirnov over 6 years ago

To provide more information about what is happening while performing the test:

  # rbd bench --io-type write --io-pattern rand --io-size 4K --io-total 1G basetier/performance-test

=======================================================================================================
2017-09-22 19:03:27.507865 [WRN] slow request 30.773820 seconds old, received at 2017-09-22 19:02:56.733958: osd_op(client.3517534.0:92219 13.3fas0 13.7249f7fa (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:27.507860 [WRN] slow request 31.338635 seconds old, received at 2017-09-22 19:02:56.169142: osd_op(client.3517534.0:91538 13.4es0 13.6ca77c4e (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:27.507857 [WRN] slow request 30.766031 seconds old, received at 2017-09-22 19:02:56.741747: osd_op(client.3517534.0:92229 13.5ds0 13.751f645d (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:27.507854 [WRN] slow request 30.679609 seconds old, received at 2017-09-22 19:02:56.828169: osd_op(client.3517534.0:92362 13.62s0 13.1c123062 (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:27.507847 [WRN] slow request 30.480250 seconds old, received at 2017-09-22 19:02:57.027528: osd_op(client.3517534.0:92626 13.4es0 13.fbdcf84e (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:27.507842 [WRN] 20 slow requests, 5 included below; oldest blocked for > 31.581467 secs
2017-09-22 19:03:26.507660 [WRN] slow request 30.269051 seconds old, received at 2017-09-22 19:02:56.238533: osd_op(client.3517534.0:91640 13.39bs0 13.afacdb9b (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:26.507658 [WRN] slow request 30.326777 seconds old, received at 2017-09-22 19:02:56.180807: osd_op(client.3517534.0:91561 13.17fs0 13.5ea3057f (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:26.507656 [WRN] slow request 30.534470 seconds old, received at 2017-09-22 19:02:55.973113: osd_op(client.3517534.0:91288 13.ees0 13.1f4e5cee (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:26.507647 [WRN] slow request 30.581273 seconds old, received at 2017-09-22 19:02:55.926310: osd_op(client.3517534.0:91220 13.ees0 13.e80064ee (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:26.507645 [WRN] slow request 30.084138 seconds old, received at 2017-09-22 19:02:56.423446: osd_op(client.3517534.0:91854 13.39bs0 13.10443b9b (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:26.507641 [WRN] 12 slow requests, 5 included below; oldest blocked for > 30.761040 secs
2017-09-22 19:03:25.507479 [WRN] slow request 30.005265 seconds old, received at 2017-09-22 19:02:55.502163: osd_op(client.3517534.0:90649 13.4es0 13.8193a44e (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:25.507475 [WRN] slow request 30.095750 seconds old, received at 2017-09-22 19:02:55.411678: osd_op(client.3517534.0:90523 13.251s0 13.5b71b651 (undecoded) ondisk+write+known_if_redirected e15243) currently queued_for_pg
2017-09-22 19:03:25.507472 [WRN] 2 slow requests, 2 included below; oldest blocked for > 30.095750 secs
2017-09-22 19:03:30.756299 [WRN] Health check update: 236 slow requests are blocked > 32 sec (REQUEST_SLOW)
2017-09-22 19:03:26.750736 [WRN] Health check update: 175 slow requests are blocked > 32 sec (REQUEST_SLOW)
2017-09-22 19:03:19.506586 [WRN] slow request 30.032715 seconds old, received at 2017-09-22 19:02:49.473762: osd_op(client.3517534.0:83174 13.4es0 13:720a7d5c:::rbd_data.14.3360de2ae8944a.0000000000000b55:head [write 253952~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently started
2017-09-22 19:03:19.506570 [WRN] slow request 31.251089 seconds old, received at 2017-09-22 19:02:48.255387: osd_op(client.3517534.0:81667 13.3fas0 13:5fca708b:::rbd_data.14.3360de2ae8944a.0000000000003ad9:head [write 430080~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently sub_op_applied
2017-09-22 19:03:19.506568 [WRN] slow request 30.463655 seconds old, received at 2017-09-22 19:02:49.042822: osd_op(client.3517534.0:82648 13.19ds0 13:b9bcb107:::rbd_data.14.3360de2ae8944a.0000000000003a1a:head [write 2400256~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently sub_op_applied
2017-09-22 19:03:19.506565 [WRN] slow request 30.754045 seconds old, received at 2017-09-22 19:02:48.752431: osd_op(client.3517534.0:82261 13.3fas0 13:5fd9b118:::rbd_data.14.3360de2ae8944a.0000000000001136:head [write 3796992~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently sub_op_applied
2017-09-22 19:03:19.506562 [WRN] slow request 30.900303 seconds old, received at 2017-09-22 19:02:48.606174: osd_op(client.3517534.0:82079 13.17fs0 13:feb0f085:::rbd_data.14.3360de2ae8944a.0000000000005197:head [write 2723840~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently sub_op_applied
2017-09-22 19:03:19.506553 [WRN] 22 slow requests, 5 included below; oldest blocked for > 33.417991 secs
2017-09-22 19:03:18.506409 [WRN] slow request 31.991547 seconds old, received at 2017-09-22 19:02:46.514728: osd_op(client.3517534.0:79398 13.3fas0 13:5fc0d66d:::rbd_data.14.3360de2ae8944a.0000000000002bc4:head [write 3493888~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently sub_op_applied
2017-09-22 19:03:18.506401 [WRN] slow request 30.140398 seconds old, received at 2017-09-22 19:02:48.365878: osd_op(client.3517534.0:81793 13.39bs0 13:d9d4a7d4:::rbd_data.14.3360de2ae8944a.00000000000022ef:head [write 2756608~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently started
2017-09-22 19:03:18.506398 [WRN] slow request 32.806549 seconds old, received at 2017-09-22 19:02:45.699726: osd_op(client.3517534.0:78420 13.4es0 13:721d4b02:::rbd_data.14.3360de2ae8944a.00000000000017cb:head [write 696320~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently sub_op_applied
2017-09-22 19:03:18.506387 [WRN] slow request 30.006363 seconds old, received at 2017-09-22 19:02:48.499912: osd_op(client.3517534.0:81955 13.387s0 13:e1ea834c:::rbd_data.14.3360de2ae8944a.0000000000000e4a:head [write 499712~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently started
2017-09-22 19:03:18.506384 [WRN] slow request 30.530085 seconds old, received at 2017-09-22 19:02:47.976190: osd_op(client.3517534.0:81301 13.19ds0 13:b980299d:::rbd_data.14.3360de2ae8944a.0000000000003c30:head [write 2838528~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently sub_op_applied
2017-09-22 19:03:18.506365 [WRN] 25 slow requests, 5 included below; oldest blocked for > 32.857609 secs
2017-09-22 19:03:17.506207 [WRN] slow request 31.857451 seconds old, received at 2017-09-22 19:02:45.648666: osd_op(client.3517534.0:78345 13.4es0 13:722b7425:::rbd_data.14.3360de2ae8944a.0000000000002bb4:head [write 1986560~4096] snapc 0=[] ondisk+write+known_if_redirected e15243) currently started

=======================================================================================================

RBD image parameters

rbd image 'performance-test':
size 102400 MB in 25600 objects
order 22 (4096 kB objects)
data_pool: spacetier
block_name_prefix: rbd_data.14.3360de2ae8944a
format: 2
features: layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool
flags:
create_timestamp: Sun Sep 10 23:19:22 2017
stripe unit: 4096 kB
stripe count: 4

Actions #18

Updated by Sage Weil over 6 years ago

  • Project changed from Ceph to bluestore
Actions #19

Updated by Sage Weil over 6 years ago

  • Status changed from Need More Info to Resolved

This is fixed in 12.2.1 or 12.2.2 (varying degrees of fixes).
