Bug #17661 (closed)

'radosgw-admin bucket sync init' crashes

Added by Casey Bodley over 7 years ago. Updated about 7 years ago.

Status: Resolved
Priority: High
Assignee:
Target version: -
% Done: 0%
Source: other
Tags:
Backport: jewel
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

RGWInitBucketShardSyncStatusCoroutine was recently changed to store its sync_status by reference. This works fine when it is called via RGWRunBucketShardSyncCoroutine, but not from RGWBucketSyncStatusManager::init_sync_status(), which is used by radosgw-admin: that path returns a coroutine holding a reference to a temporary local variable, leading to this crash (a simplified sketch of the pattern follows the backtrace):

2016-10-20 23:26:51.874229 7f4720705a80 -1 *** Caught signal (Segmentation fault) **
 in thread 7f4720705a80 thread_name:radosgw-admin

 ceph version Development (no_version)
 1: (ceph::BackTrace::BackTrace(int)+0x2d) [0x55e6f5a9317d]
 2: (()+0xc7927c) [0x55e6f5a9227c]
 3: (()+0x10a00) [0x7f4714c44a00]
 4: (()+0x148628) [0x7f4713473628]
 5: (ceph::buffer::ptr::append(char const*, unsigned int)+0xd5) [0x7f47178f88db]
 6: (ceph::buffer::list::append(char const*, unsigned int)+0x6d) [0x7f47178fae29]
 7: (encode(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ceph::buffer::list&, unsigned long)+0x60) [0x55e6f57d8b47]
 8: (rgw_obj_key::encode(ceph::buffer::list&) const+0x10f) [0x55e6f586a9bb]
 9: (encode(rgw_obj_key const&, ceph::buffer::list&, unsigned long)+0x27) [0x55e6f586aa70]
 10: (rgw_bucket_shard_full_sync_marker::encode(ceph::buffer::list&) const+0x10f) [0x55e6f58c3d45]
 11: (encode(rgw_bucket_shard_full_sync_marker const&, ceph::buffer::list&, unsigned long)+0x27) [0x55e6f58c41a9]
 12: (rgw_bucket_shard_sync_info::encode(ceph::buffer::list&) const+0x136) [0x55e6f58c4888]
 13: (encode(rgw_bucket_shard_sync_info const&, ceph::buffer::list&, unsigned long)+0x27) [0x55e6f58c49ad]
 14: (RGWSimpleRadosWriteCR<rgw_bucket_shard_sync_info>::RGWSimpleRadosWriteCR(RGWAsyncRadosProcessor*, RGWRados*, rgw_bucket const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rgw_bucket_shard_sync_info const&)+0xe3) [0x55e6f58da123]
 15: (RGWInitBucketShardSyncStatusCoroutine::operate()+0x4e2) [0x55e6f58d466e]
 16: (RGWCoroutinesStack::operate(RGWCoroutinesEnv*)+0x191) [0x55e6f590710b]
 17: (RGWCoroutinesManager::run(std::__cxx11::list<RGWCoroutinesStack*, std::allocator<RGWCoroutinesStack*> >&)+0x287) [0x55e6f5908ccd]
 18: (RGWBucketSyncStatusManager::init_sync_status()+0x11c) [0x55e6f58be4d0]
 19: (main()+0x13a2c) [0x55e6f579029d]
 20: (__libc_start_main()+0xf0) [0x7f471334b580]
 21: (_start()+0x29) [0x55e6f5770db9]

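The crash is an ordinary C++ lifetime bug rather than anything RADOS-specific: the coroutine keeps a reference to a status object that the caller has already destroyed by the time the coroutine manager runs it. Below is a minimal standalone sketch of that pattern, not the actual Ceph code; SyncStatus, InitStatusOp and the queue/run helpers are made-up stand-ins for rgw_bucket_shard_sync_info, RGWInitBucketShardSyncStatusCoroutine and the coroutine manager.

// Minimal standalone sketch (not the real Ceph code) of the lifetime bug
// described above. All names here are illustrative only.
#include <iostream>
#include <string>
#include <vector>

struct SyncStatus {
  std::string state = "init";   // stands in for rgw_bucket_shard_sync_info
};

// Stands in for RGWInitBucketShardSyncStatusCoroutine: it holds a *reference*
// to the caller's status object and only touches it when it is run later.
class InitStatusOp {
 public:
  explicit InitStatusOp(SyncStatus& status) : status_(status) {}
  void operate() { status_.state = "full-sync"; }  // writes through the reference
 private:
  SyncStatus& status_;
};

// Stands in for the coroutine manager: ops are queued first and only run
// after the function that created them has already returned.
std::vector<InitStatusOp> queue;

void run_queued_ops() {
  for (auto& op : queue) op.operate();
  queue.clear();
}

// Pattern A (like the RGWRunBucketShardSyncCoroutine path): the status object
// outlives the queued op, so the stored reference is still valid when it runs.
SyncStatus long_lived_status;
void queue_with_long_lived_status() {
  queue.emplace_back(long_lived_status);
}

// Pattern B (like RGWBucketSyncStatusManager::init_sync_status() before the
// fix): the op keeps a reference to a local that is destroyed when this
// function returns, so running it later is undefined behaviour and can fault
// exactly the way the radosgw-admin backtrace above does.
void queue_with_temporary_status() {
  SyncStatus status;              // destroyed at the end of this scope
  queue.emplace_back(status);     // dangling reference once we return
}

int main() {
  queue_with_long_lived_status();
  queue_with_temporary_status();
  run_queued_ops();               // pattern B dereferences a dead object here
  std::cout << "done\n";
  return 0;
}

In the sketch, pattern A mirrors the caller that keeps the status object alive for the whole run, while pattern B mirrors init_sync_status() handing the coroutine a reference to a stack local; the write through that dangling reference is what segfaults in the encode() path shown in the backtrace.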
Related issues 2 (0 open, 2 closed)

Related to rgw - Bug #17624: multisite: after finishing full sync on a bucket, incremental sync starts over from the beginning (Resolved, Casey Bodley, 10/19/2016)

Copied to rgw - Backport #17708: jewel: 'radosgw-admin bucket sync init' crashes (Resolved, Loïc Dachary)
#1 - Updated by Casey Bodley over 7 years ago

  • Related to Bug #17624: multisite: after finishing full sync on a bucket, incremental sync starts over from the beginning added
#2 - Updated by Casey Bodley over 7 years ago

  • Status changed from New to Fix Under Review
#3 - Updated by Yehuda Sadeh over 7 years ago

  • Status changed from Fix Under Review to Pending Backport
#4 - Updated by Loïc Dachary over 7 years ago

  • Copied to Backport #17708: jewel: 'radosgw-admin bucket sync init' crashes added
#5 - Updated by Yehuda Sadeh over 7 years ago

  • Priority changed from Normal to High
#6 - Updated by Nathan Cutler about 7 years ago

  • Status changed from Pending Backport to Resolved