Bug #40485
Multisite synchronization init - Segmentation fault
Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Target version:
-
% Done:
0%
Source:
Community (user)
Tags:
multisite, segmentation fault
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
rgw
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
During the initialization of rgw replication, a segmentation fault occurs.
The crash report is attached below.
ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)
1: (()+0xf5d0) [0x7f51f75425d0]
2: (RGWCoroutine::set_sleeping(bool)+0xc) [0x558854248e0c]
3: (RGWOmapAppend::flush_pending()+0x2d) [0x55885424de1d]
4: (RGWOmapAppend::finish()+0x10) [0x55885424df00]
5: (RGWDataSyncShardCR::stop_spawned_services()+0x30) [0x55885448d320]
6: (RGWDataSyncShardCR::incremental_sync()+0x4c6) [0x5588544a6736]
7: (RGWDataSyncShardCR::operate()+0x75) [0x5588544a80e5]
8: (RGWCoroutinesStack::operate(RGWCoroutinesEnv*)+0x46) [0x558854246566]
9: (RGWCoroutinesManager::run(fe2c32370e::list<RGWCoroutinesStack*, fe2c32370e::allocator<RGWCoroutinesStack*> >&)+0x293) [0x558854249233]
10: (RGWCoroutinesManager::run(RGWCoroutine*)+0x78) [0x55885424a108]
11: (RGWRemoteDataLog::run_sync(int)+0x1e7) [0x55885447fd37]
12: (RGWDataSyncProcessorThread::process()+0x46) [0x558854303cb6]
13: (RGWRadosThread::Worker::entry()+0x22b) [0x5588542a54cb]
14: (()+0x7dd5) [0x7f51f753add5]
15: (clone()+0x6d) [0x7f51eba2cead]
I've noticed that the rgw segfaults when replicating the dc2.rgw.buckets.index pool.
After this segfault, replication switches back to writing to the default.rgw.buckets.* pools instead of the zone-specific ones, e.g. dc2.rgw.buckets.*
Updated by Piotr O almost 5 years ago
It was caused by "rgw_num_rados_handles = 8".
After changing rgw_num_rados_handles to 1, the problems disappeared.
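For reference, the workaround above corresponds to a ceph.conf change like the following sketch. The section name `[client.rgw.dc2]` is an assumption for illustration; use whatever rgw section name your deployment actually defines (the option can also go in a shared `[client]` or `[global]` section).

```ini
; Hypothetical rgw section name; adjust to match your deployment.
[client.rgw.dc2]
; Default was raised to 8 in this cluster, which triggered the segfault
; in the multisite data-sync coroutines. Reverting to a single RADOS
; handle avoided the crash.
rgw_num_rados_handles = 1
```

The rgw daemon must be restarted for the change to take effect. Note that rgw_num_rados_handles was later deprecated and removed in newer Ceph releases, so this tuning only applies to older versions such as mimic.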
Updated by Abhishek Lekshmanan over 4 years ago
- Status changed from New to Can't reproduce