Bug #46525: osd crash (open)

Added by 伟杰 谭 almost 4 years ago. Updated almost 3 years ago.

Status: Need More Info
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

My environment:
ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable)

CentOS Linux release 7.7.1908 (AltArch)

The node has 132 GB of memory.

OSD crash info:

-2> 2020-07-14 11:10:48.804 ffff7fa9ced0  5 osd.87 656 heartbeat osd_stat(store_statfs(0x7444f430000/0x40000000/0x74702400000, data 0x2630fecbd/0x272fc0000, compress 0x0/0x0/0x0, omap 0x7b251, meta 0x3ff84daf), peers [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,40,41,42,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,70,71,72,86,88] op hist [0,0,0,0,0,0,0,0,0,0,3])
-1> 2020-07-14 11:10:48.904 ffff90cbced0 5 prioritycache tune_memory target: 4294967296 mapped: 1946525696 unmapped: 114688 heap: 1946640384 old mem: 2845415832 new mem: 2845415832
0> 2020-07-14 11:10:49.214 ffff822eced0 -1 *** Caught signal (Aborted) **
in thread ffff822eced0 thread_name:tp_osd_tp
ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable)
1: [0xffff9de0066c]
2: (gsignal()+0x4c) [0xffff9ced5238]
3: (abort()+0x11c) [0xffff9ced68b0]
4: (__gnu_cxx::__verbose_terminate_handler()+0x188) [0xffff9d1d0608]
5: (()+0x9e14c) [0xffff9d1ce14c]
6: (()+0x9e1b0) [0xffff9d1ce1b0]
7: (__cxa_rethrow()+0) [0xffff9d1ce4a0]
8: (ceph::buffer::v14_2_0::create_aligned_in_mempool(unsigned int, unsigned int, int)+0x208) [0xaaaabad152b0]
9: (ceph::buffer::v14_2_0::create_aligned(unsigned int, unsigned int)+0x2c) [0xaaaabad153bc]
10: (ceph::buffer::v14_2_0::create_page_aligned(unsigned int)+0x34) [0xaaaabad158e4]
11: (ceph::buffer::v14_2_0::list::reserve(unsigned long)+0x6c) [0xaaaabad15cdc]
12: (BlueStore::ExtentMap::ExtentMap(BlueStore::Onode*)+0xd4) [0xaaaaba9dc374]
13: (BlueStore::Collection::get_onode(ghobject_t const&, bool)+0x6b0) [0xaaaaba9ec3b0]
14: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x18a8) [0xaaaabaa3c428]
15: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x1ec) [0xaaaabaa4fd9c]
16: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x74) [0xaaaaba7cb754]
17: (ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)+0xb68) [0xaaaaba8bb128]
18: (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x250) [0xaaaaba8c80c0]
19: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x64) [0xaaaaba7e1184]
20: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x4c4) [0xaaaaba79136c]
21: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x360) [0xaaaaba5e0788]
22: (PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x84) [0xaaaaba85bf14]
23: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x880) [0xaaaaba5fa008]
24: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x308) [0xaaaabab84370]
25: (ShardedThreadPool::WorkThreadSharded::entry()+0x18) [0xaaaabab86cf8]
26: (()+0x7d38) [0xffff9d3e7d38]
27: (()+0xdf5f0) [0xffff9cf7f5f0]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
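
Frames 2-8 of the backtrace suggest the abort comes from the C++ runtime terminating the tp_osd_tp worker: an allocation failure (std::bad_alloc, or a similar exception rethrown out of ceph::buffer::v14_2_0::create_aligned_in_mempool() via __cxa_rethrow) escaped the thread and reached __gnu_cxx::__verbose_terminate_handler(), which calls abort(). The following is a minimal, hypothetical C++ sketch of that failure mode, not Ceph code; allocate_page_aligned() and the deliberately oversized request are illustrative assumptions.

// Minimal sketch (not Ceph code): an aligned allocation inside a worker thread
// fails and throws std::bad_alloc; nothing catches it before the thread's start
// function unwinds, so the runtime calls std::terminate(), which runs
// __gnu_cxx::__verbose_terminate_handler() and then abort(), the same chain as
// frames 2-7 of the backtrace above. Build with: g++ -std=c++11 -pthread sketch.cc
#include <cstddef>    // std::size_t
#include <cstdlib>    // std::free
#include <new>        // std::bad_alloc
#include <stdlib.h>   // posix_memalign (POSIX)
#include <thread>

// Hypothetical stand-in for the page-aligned buffer allocation done by
// ceph::buffer::v14_2_0::create_page_aligned() / create_aligned_in_mempool().
static char* allocate_page_aligned(std::size_t len) {
    void* p = nullptr;
    if (posix_memalign(&p, 4096, len) != 0)
        throw std::bad_alloc();   // allocation failed (e.g. ENOMEM)
    return static_cast<char*>(p);
}

int main() {
    std::thread worker([] {
        // Deliberately absurd request (64-bit build assumed) to force the
        // failure path; the resulting std::bad_alloc is never caught here.
        char* buf = allocate_page_aligned(std::size_t(1) << 62);
        buf[0] = 0;
        std::free(buf);
    });
    // The escaped exception aborts the whole process from inside the worker,
    // so this join() never returns normally.
    worker.join();
    return 0;
}

If that is the failure mode here, the open question is why an allocation failed on a 132 GB node, for example whether the OSD process was constrained by a cgroup or ulimit, or whether the node was under heavy memory pressure at the time of the crash.
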
#1 - Updated by Greg Farnum almost 3 years ago

  • Project changed from Ceph to bluestore
#2 - Updated by Igor Fedotov almost 3 years ago

  • Status changed from New to Need More Info