Bug #50218
Closed: Segmentation fault in radosgw_Main()
Description
2021-04-07T23:10:18.014 INFO:tasks.rgw.c1.client.0.smithi081.stdout:*** Caught signal (Segmentation fault) **
2021-04-07T23:10:18.091 INFO:tasks.rgw.c1.client.0.smithi081.stdout: in thread 7fbf609352c0 thread_name:radosgw
2021-04-07T23:10:18.093 INFO:tasks.rgw.c2.client.0.smithi152.stdout:*** Caught signal (Segmentation fault) **
2021-04-07T23:10:18.093 INFO:tasks.rgw.c2.client.0.smithi152.stdout: in thread 7fe1bf5cb2c0 thread_name:radosgw
2021-04-07T23:10:18.093 INFO:tasks.rgw.c2.client.0.smithi152.stdout: ceph version 16.2.0-63-g0af03c54 (0af03c54623fd04658877fdb3265ff54fb242c3c) pacific (stable)
2021-04-07T23:10:18.093 INFO:tasks.rgw.c2.client.0.smithi152.stdout: 1: /lib/x86_64-linux-gnu/libc.so.6(+0x3f040) [0x7fe1bdb9d040]
2021-04-07T23:10:18.094 INFO:tasks.rgw.c2.client.0.smithi152.stdout: 2: (std::locale::operator=(std::locale const&)+0x28) [0x56259360ed48]
2021-04-07T23:10:18.094 INFO:tasks.rgw.c2.client.0.smithi152.stdout: 3: (std::ios_base::imbue(std::locale const&)+0x2e) [0x5625936c15de]
2021-04-07T23:10:18.094 INFO:tasks.rgw.c2.client.0.smithi152.stdout: 4: (std::basic_ios<char, std::char_traits<char> >::imbue(std::locale const&)+0x44) [0x562593656b14]
2021-04-07T23:10:18.094 INFO:tasks.rgw.c2.client.0.smithi152.stdout: 5: (std::basic_ostream<char, std::char_traits<char> >& boost::asio::ip::operator<< <char, std::char_traits<char>, boost::asio::ip::tcp>(std::basic_ostream<char, std::char_traits<char> >&, boost::asio::ip::basic_endpoint<boost::asio::ip::tcp> const&)+0x8f) [0x7fe1be67af6f]
2021-04-07T23:10:18.095 INFO:tasks.rgw.c2.client.0.smithi152.stdout: 6: /usr/lib/libradosgw.so.2(+0x49f858) [0x7fe1be65e858]
2021-04-07T23:10:18.095 INFO:tasks.rgw.c2.client.0.smithi152.stdout: 7: (radosgw_Main(int, char const**)+0x362c) [0x7fe1be7e1abc]
2021-04-07T23:10:18.095 INFO:tasks.rgw.c2.client.0.smithi152.stdout: 8: __libc_start_main()
2021-04-07T23:10:18.095 INFO:tasks.rgw.c2.client.0.smithi152.stdout: 9: _start()
rados/upgrade/pacific-x/rgw-multisite/{clusters frontend overrides realm tasks upgrade/primary}
/a/nojha-2021-04-07_22:43:15-rados-wip-default-mclock-distro-basic-smithi/6027634
Found another report in https://github.com/ceph/ceph/pull/40639#issuecomment-814945648
Updated by Casey Bodley about 3 years ago
- Related to Bug #50139: RadosGW can't start when upgrade to Pacific (16.2.0) from Octopus (15.2.10) added
Updated by Mark Kogan about 3 years ago
Pasting the rgw log from start until the segfault:
2021-04-07T23:10:17.851+0000 7fbf609352c0 0 deferred set uid:gid to 64045:64045 (ceph:ceph)
2021-04-07T23:10:17.851+0000 7fbf609352c0 0 ceph version 16.2.0-63-g0af03c54 (0af03c54623fd04658877fdb3265ff54fb242c3c) pacific (stable), process radosgw, pid 18211
2021-04-07T23:10:17.851+0000 7fbf609352c0 0 framework: beast
2021-04-07T23:10:17.851+0000 7fbf609352c0 0 framework conf key: port, val: 8000
2021-04-07T23:10:17.851+0000 7fbf609352c0 1 radosgw_Main not setting numa affinity
2021-04-07T23:10:17.851+0000 7fbf45f76700 20 reqs_thread_entry: start
2021-04-07T23:10:17.851+0000 7fbf45775700 10 entry start
2021-04-07T23:10:17.859+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.859+0000 7fbf609352c0 20 rados_obj.operate() r=-2 bl.length=0
2021-04-07T23:10:17.859+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.859+0000 7fbf609352c0 20 rados_obj.operate() r=-2 bl.length=0
2021-04-07T23:10:17.859+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.859+0000 7fbf609352c0 20 rados_obj.operate() r=-2 bl.length=0
2021-04-07T23:10:17.859+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.863+0000 7fbf609352c0 20 rados_obj.operate() r=0 bl.length=46
2021-04-07T23:10:17.875+0000 7fbf609352c0 20 RGWRados::pool_iterate: got zonegroup_info.bcb436cc-16f5-4046-a493-e59cf3d8458f
2021-04-07T23:10:17.875+0000 7fbf609352c0 20 RGWRados::pool_iterate: got zone_info.abb6f73d-5af8-4416-afb8-63fd2d1de295
2021-04-07T23:10:17.875+0000 7fbf609352c0 20 RGWRados::pool_iterate: got zone_names.default
2021-04-07T23:10:17.875+0000 7fbf609352c0 20 RGWRados::pool_iterate: got zonegroups_names.default
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados_obj.operate() r=0 bl.length=46
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados_obj.operate() r=0 bl.length=889
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados_obj.operate() r=0 bl.length=46
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados_obj.operate() r=0 bl.length=358
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados_obj.operate() r=-2 bl.length=0
2021-04-07T23:10:17.879+0000 7fbf609352c0 10 cannot find current period zonegroup using local zonegroup
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados_obj.operate() r=-2 bl.length=0
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados_obj.operate() r=0 bl.length=46
2021-04-07T23:10:17.879+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados_obj.operate() r=0 bl.length=358
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 zonegroup default
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados_obj.operate() r=-2 bl.length=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 10 Cannot find current period zone using local zone
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados_obj.operate() r=-2 bl.length=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados_obj.operate() r=0 bl.length=46
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados_obj.operate() r=0 bl.length=889
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 zone default found
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados_obj.operate() r=-2 bl.length=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados->read ofs=0 len=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 rados_obj.operate() r=-2 bl.length=0
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 started sync module instance, tier type =
2021-04-07T23:10:17.883+0000 7fbf609352c0 20 started zone id=abb6f73d-5af8-4416-afb8-63fd2d1de295 (name=default) with tier type =
2021-04-07T23:10:17.915+0000 7fbf609352c0 20 add_watcher() i=0
2021-04-07T23:10:17.915+0000 7fbf609352c0 20 add_watcher() i=1
2021-04-07T23:10:17.915+0000 7fbf609352c0 20 add_watcher() i=2
2021-04-07T23:10:17.915+0000 7fbf609352c0 20 add_watcher() i=3
2021-04-07T23:10:17.915+0000 7fbf609352c0 20 add_watcher() i=4
2021-04-07T23:10:17.915+0000 7fbf609352c0 20 add_watcher() i=5
2021-04-07T23:10:17.915+0000 7fbf609352c0 20 add_watcher() i=6
2021-04-07T23:10:17.919+0000 7fbf609352c0 20 add_watcher() i=7
2021-04-07T23:10:17.919+0000 7fbf609352c0 2 all 8 watchers are set, enabling cache
2021-04-07T23:10:17.923+0000 7fbf3074b700 2 RGWDataChangesLog::ChangesRenewThread: start
2021-04-07T23:10:17.923+0000 7fbf609352c0 20 check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1]
2021-04-07T23:10:17.923+0000 7fbf609352c0 20 check_secure_mon_conn(): mode 1 is insecure
2021-04-07T23:10:18.007+0000 7fbf2c743700 2 garbage collection: garbage collection: start
2021-04-07T23:10:18.007+0000 7fbf2c743700 20 garbage collection: RGWGC::process entered with GC index_shard=13, max_secs=3600, expired_only=1
2021-04-07T23:10:18.007+0000 7fbf2bf42700 2 object expiration: start
2021-04-07T23:10:18.007+0000 7fbf2bf42700 20 processing shard = obj_delete_at_hint.0000000000
2021-04-07T23:10:18.007+0000 7fbf2af40700 20 reqs_thread_entry: start
2021-04-07T23:10:18.007+0000 7fbf28f3c700 20 reqs_thread_entry: start
2021-04-07T23:10:18.007+0000 7fbf26737700 5 lifecycle: schedule life cycle next start time: Thu Apr 8 00:00:00 2021
2021-04-07T23:10:18.007+0000 7fbf24733700 5 lifecycle: schedule life cycle next start time: Thu Apr 8 00:00:00 2021
2021-04-07T23:10:18.007+0000 7fbf2272f700 5 lifecycle: schedule life cycle next start time: Thu Apr 8 00:00:00 2021
2021-04-07T23:10:18.007+0000 7fbf21f2e700 20 BucketsSyncThread: start
2021-04-07T23:10:18.007+0000 7fbf609352c0 20 init_complete bucket index max shards: 11
2021-04-07T23:10:18.007+0000 7fbf2172d700 20 UserSyncThread: start
2021-04-07T23:10:18.007+0000 7fbf609352c0 10 Started notification manager with: 1 workers
2021-04-07T23:10:18.007+0000 7fbf1ff2a700 20 INFO: next queues processing will happen at: Wed Apr 7 23:10:48 2021
2021-04-07T23:10:18.007+0000 7fbf609352c0 20 RGW hostnames:
2021-04-07T23:10:18.007+0000 7fbf609352c0 20 RGW S3website hostnames:
2021-04-07T23:10:18.007+0000 7fbf20f2c700 20 processing logshard = reshard.0000000000
2021-04-07T23:10:18.011+0000 7fbf2c743700 20 garbage collection: RGWGC::process cls_rgw_gc_list returned with returned:0, entries.size=0, truncated=0, next_marker=''
2021-04-07T23:10:18.011+0000 7fbf609352c0 0 framework: beast
2021-04-07T23:10:18.011+0000 7fbf609352c0 0 framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
2021-04-07T23:10:18.011+0000 7fbf609352c0 0 framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
2021-04-07T23:10:18.011+0000 7fbf609352c0 0 starting handler: beast
2021-04-07T23:10:18.011+0000 7fbf2c743700 20 garbage collection: RGWGC::process cls_rgw_gc_list returned NO non expired entries, so setting cache entry to TRUE
2021-04-07T23:10:18.011+0000 7fbf2c743700 20 garbage collection: RGWGC::process cls_rgw_gc_queue_list_entries returned with return value:0, entries.size=0, truncated=0, next_marker=''
2021-04-07T23:10:18.011+0000 7fbf609352c0 -1 *** Caught signal (Segmentation fault) **
 in thread 7fbf609352c0 thread_name:radosgw
 ceph version 16.2.0-63-g0af03c54 (0af03c54623fd04658877fdb3265ff54fb242c3c) pacific (stable)
 1: /lib/x86_64-linux-gnu/libc.so.6(+0x3f040) [0x7fbf5ef07040]
 2: (std::locale::operator=(std::locale const&)+0x28) [0x55eaa400dd48]
 3: (std::ios_base::imbue(std::locale const&)+0x2e) [0x55eaa40c05de]
 4: (std::basic_ios<char, std::char_traits<char> >::imbue(std::locale const&)+0x44) [0x55eaa4055b14]
 5: (std::basic_ostream<char, std::char_traits<char> >& boost::asio::ip::operator<< <char, std::char_traits<char>, boost::asio::ip::tcp>(std::basic_ostream<char, std::char_traits<char> >&, boost::asio::ip::basic_endpoint<boost::asio::ip::tcp> const&)+0x8f) [0x7fbf5f9e4f6f]
 6: /usr/lib/libradosgw.so.2(+0x49f858) [0x7fbf5f9c8858]
 7: (radosgw_Main(int, char const**)+0x362c) [0x7fbf5fb4babc]
 8: __libc_start_main()
 9: _start()
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Updated by Casey Bodley about 3 years ago
Updated by Casey Bodley about 3 years ago
crashes seem to be specific to ubuntu? same with https://tracker.ceph.com/issues/49955 and https://tracker.ceph.com/issues/50269
Updated by Neha Ojha about 3 years ago
A couple of things I noticed regarding this particular failure:
1. The same tests were passing a few days ago (sha1: d2c0f56b9235507e96124d903af6148f1edc34af)
https://pulpito.ceph.com/nojha-2021-04-05_22:22:02-rados-master-distro-basic-smithi/6023041/
https://pulpito.ceph.com/nojha-2021-04-05_22:22:02-rados-master-distro-basic-smithi/6023171/
2. These upgrade tests do not specify any particular distro, so they always get pinned to ubuntu 18.04 (I think)
/a/nojha-2021-04-07_22:43:15-rados-wip-default-mclock-distro-basic-smithi/6027634 - failed
2021-04-07T22:53:32.183 INFO:teuthology.task.internal:Checking packages skipped, missing os_type 'None' or ceph hash 'dda3ece3c83703f2c390a51819350b94ddd2814a'
2021-04-07T22:53:32.184 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2021-04-07T22:53:32.193 INFO:teuthology.task.internal:no buildpackages task found
2021-04-07T22:53:32.194 INFO:teuthology.run_tasks:Running task internal.save_config...
2021-04-07T22:53:32.252 INFO:teuthology.task.internal:Saving configuration
2021-04-07T22:53:32.267 INFO:teuthology.run_tasks:Running task internal.check_lock...
2021-04-07T22:53:32.285 INFO:teuthology.task.internal.check_lock:Checking locks...
2021-04-07T22:53:32.304 DEBUG:teuthology.task.internal.check_lock:machine status is {'is_vm': False, 'locked': True, 'locked_since': '2021-04-07 22:45:59.646653', 'description': '/home/teuthworker/archive/nojha-2021-04-07_22:43:15-rados-wip-default-mclock-distro-basic-smithi/6027634', 'locked_by': 'scheduled_nojha@teuthology', 'up': True, 'ssh_pub_key': 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICsWz3DHHEXISeAW/LbMEFvXnDpCq50DBSyAFs6rR1tP', 'os_version': '18.04', 'machine_type': 'smithi', 'vm_host': None, 'os_type': 'ubuntu', 'arch': 'x86_64', 'mac_address': None, 'name': 'smithi081.front.sepia.ceph.com'}
2021-04-07T22:53:32.325 DEBUG:teuthology.task.internal.check_lock:machine status is {'is_vm': False, 'locked': True, 'locked_since': '2021-04-07 22:45:59.646675', 'description': '/home/teuthworker/archive/nojha-2021-04-07_22:43:15-rados-wip-default-mclock-distro-basic-smithi/6027634', 'locked_by': 'scheduled_nojha@teuthology', 'up': True, 'ssh_pub_key': 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDsHs0fWJHfPfae+WiaSGLtglyU6XkMzS5gXVCJEqJAv', 'os_version': '18.04', 'machine_type': 'smithi', 'vm_host': None, 'os_type': 'ubuntu', 'arch': 'x86_64', 'mac_address': None, 'name': 'smithi152.front.sepia.ceph.com'}
/a/nojha-2021-04-05_22:22:02-rados-master-distro-basic-smithi/6023041/ - passed
2021-04-05T23:04:38.470 INFO:teuthology.task.internal:Checking packages skipped, missing os_type 'None' or ceph hash 'd2c0f56b9235507e96124d903af6148f1edc34af'
2021-04-05T23:04:38.470 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2021-04-05T23:04:38.482 INFO:teuthology.task.internal:no buildpackages task found
2021-04-05T23:04:38.482 INFO:teuthology.run_tasks:Running task internal.save_config...
2021-04-05T23:04:38.519 INFO:teuthology.task.internal:Saving configuration
2021-04-05T23:04:38.536 INFO:teuthology.run_tasks:Running task internal.check_lock...
2021-04-05T23:04:38.601 INFO:teuthology.task.internal.check_lock:Checking locks...
2021-04-05T23:04:38.625 DEBUG:teuthology.task.internal.check_lock:machine status is {'is_vm': False, 'locked': True, 'locked_since': '2021-04-05 22:59:07.866583', 'description': '/home/teuthworker/archive/nojha-2021-04-05_22:22:02-rados-master-distro-basic-smithi/6023041', 'locked_by': 'scheduled_nojha@teuthology', 'up': True, 'ssh_pub_key': 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGME7diZoaKKtM/QLcnQxs92b5PeiYwpGOqIvLQGkS5M', 'os_version': '18.04', 'machine_type': 'smithi', 'vm_host': None, 'os_type': 'ubuntu', 'arch': 'x86_64', 'mac_address': None, 'name': 'smithi065.front.sepia.ceph.com'}
2021-04-05T23:04:38.649 DEBUG:teuthology.task.internal.check_lock:machine status is {'is_vm': False, 'locked': True, 'locked_since': '2021-04-05 22:59:07.866560', 'description': '/home/teuthworker/archive/nojha-2021-04-05_22:22:02-rados-master-distro-basic-smithi/6023041', 'locked_by': 'scheduled_nojha@teuthology', 'up': True, 'ssh_pub_key': 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILZmz4BFYjLKfFSvggruEGS9rrsSO6ktJ7b1+OA4UwNV', 'os_version': '18.04', 'machine_type': 'smithi', 'vm_host': None, 'os_type': 'ubuntu', 'arch': 'x86_64', 'mac_address': None, 'name': 'smithi081.front.sepia.ceph.com'}
Updated by Mark Kogan about 3 years ago
Adding addr2line output for the crash location:
ceph version 16.2.0-63-g0af03c54 (0af03c54623fd04658877fdb3265ff54fb242c3c) pacific (stable)
1: /lib/x86_64-linux-gnu/libc.so.6(+0x3f040) [0x7fbf5ef07040]
2: (std::locale::operator=(std::locale const&)+0x28) [0x55eaa400dd48]
3: (std::ios_base::imbue(std::locale const&)+0x2e) [0x55eaa40c05de]
4: (std::basic_ios<char, std::char_traits<char> >::imbue(std::locale const&)+0x44) [0x55eaa4055b14]
5: (std::basic_ostream<char, std::char_traits<char> >& boost::asio::ip::operator<< <char, std::char_traits<char>, boost::asio::ip::tcp>(std::basic_ostream<char, std::char_traits<char> >&, boost::asio::ip::basic_endpoint<boost::asio::ip::tcp> const&)+0x8f) [0x7fbf5f9e4f6f]
6: /usr/lib/libradosgw.so.2(+0x49f858) [0x7fbf5f9c8858]
   ^^^^^^
7: (radosgw_Main(int, char const**)+0x362c) [0x7fbf5fb4babc]
8: __libc_start_main()
9: _start()
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
eu-addr2line --debuginfo-path=$(realpath ./usr/lib/debug/) --pretty-print -a -f -i -C -e ./usr/lib/libradosgw.so.2 0x49f858
0x000000000049f858: init at ./src/rgw/rgw_asio_frontend.cc:613:54

vim ./ceph-16.2.0-63-g0af03c54/src/rgw/rgw_asio_frontend.cc +613
...
507 int AsioFrontend::init()
508 {
509   boost::system::error_code ec;
510   auto& config = conf->get_config_map();
...
607     l.acceptor.listen(max_connection_backlog);
608     l.acceptor.async_accept(l.socket,
609                             [this, &l] (boost::system::error_code ec) {
610                               accept(l, ec);
611                             });
612
->613   ldout(ctx(), 4) << "frontend listening on " << l.endpoint << dendl;
614     socket_bound = true;
615   }
...
Updated by Mark Kogan about 3 years ago
Analyzing the core found at:
http://qa-proxy.ceph.com/teuthology/nojha-2021-04-07_22:43:15-rados-wip-default-mclock-distro-basic-smithi/6027634/remote/smithi081/coredump/1617837018.18211.core.gz
podman run -it --rm --net=host -v ./:/mnt ubuntu:18.04
apt update
apt install gdb
cd /mnt
gdb -ex "set debug-file-directory $(realpath ./deb/usr/lib/debug)" -ex "set solib-search-path $(realpath ./deb/usr/lib)" ./deb/usr/bin/radosgw ./1617837018.18211.core
...
Core was generated by `radosgw --rgw-frontends beast port=8000 -n client.0 --cluster c1 -k /etc/ceph/c'.
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Program terminated with signal SIGSEGV, Segmentation fault.
#0  __GI_raise (sig=<optimized out>) at ../sysdeps/unix/sysv/linux/raise.c:51
51      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
[Current thread is 1 (Thread 0x7fbf609352c0 (LWP 18211))]
Reading symbols from /mnt/deb/usr/lib/libradosgw.so.2.0.0...Reading symbols from /mnt/deb/usr/lib/debug/.build-id/e4/dfdb23462ccef8c2a115d84dc14b9260947a21.debug...done.
done.
(gdb) bt
#0  __GI_raise (sig=<optimized out>) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x00007fbf6025d9e3 in reraise_fatal (signum=11) at ./src/global/signal_handler.cc:332
#2  handle_fatal_signal(int) () at ./src/global/signal_handler.cc:332
#3  <signal handler called>
#4  0x000055eaa400dd48 in std::locale::operator=(std::locale const&) ()
#5  0x000055eaa40c05de in std::ios_base::imbue(std::locale const&) ()
#6  0x000055eaa4055b14 in std::basic_ios<char, std::char_traits<char> >::imbue(std::locale const&) ()
#7  0x00007fbf5f9e4f6f in boost::asio::ip::detail::endpoint::to_string[abi:cxx11]() const (this=0x7fff14d07720) at ./obj-x86_64-linux-gnu/boost/include/boost/asio/ip/detail/impl/endpoint.ipp:183
#8  boost::asio::ip::operator<< <char, std::char_traits<char>, boost::asio::ip::tcp> (os=..., endpoint=...) at ./obj-x86_64-linux-gnu/boost/include/boost/asio/ip/impl/basic_endpoint.hpp:34
#9  0x00007fbf5f9c8858 in (anonymous namespace)::AsioFrontend::init() () at /usr/include/c++/9/bits/char_traits.h:335
#10 0x00007fbf5fb4babc in radosgw_Main(int, char const**) () at ./src/rgw/rgw_main.cc:608
#11 0x00007fbf5eee9bf7 in __libc_start_main (main=0x55eaa40063b0 <main>, argc=14, argv=0x7fff14d084c8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fff14d084b8) at ../csu/libc-start.c:310
#12 0x000055eaa400672a in _start ()
(gdb) f 9
(gdb) set print pretty on
(gdb) p l.endpoint
$7 = {
  impl_ = {
    data_ = {
      base = {
        sa_family = 2,
        sa_data = "\037@", '\000' <repeats 11 times>
      },
      v4 = {
        sin_family = 2,
        sin_port = 16415,
        sin_addr = {
          s_addr = 0
        },
        sin_zero = "\000\000\000\000\000\000\000"
      },
      v6 = {
        sin6_family = 2,
        sin6_port = 16415,
        sin6_flowinfo = 0,
        sin6_addr = {
          __in6_u = {
            __u6_addr8 = '\000' <repeats 15 times>,
            __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0},
            __u6_addr32 = {0, 0, 0, 0}
          }
        },
        sin6_scope_id = 0
      }
    }
  }
}
(gdb) f 7
(gdb) p *this
$10 = {
  data_ = {
    base = {
      sa_family = 2,
      sa_data = "\037@", '\000' <repeats 11 times>
    },
    v4 = {
      sin_family = 2,
      sin_port = 16415,
      sin_addr = {
        s_addr = 0
      },
      sin_zero = "\000\000\000\000\000\000\000"
    },
    v6 = {
      sin6_family = 2,
      sin6_port = 16415,
      sin6_flowinfo = 0,
      sin6_addr = {
        __in6_u = {
          __u6_addr8 = '\000' <repeats 15 times>,
          __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0},
          __u6_addr32 = {0, 0, 0, 0}
        }
      },
      sin6_scope_id = 0
    }
  }
}

find . -name endpoint.ipp -exec ls -l {} \;
-rw-r--r-- 1 mkogan mkogan 3499 Apr 22  2020 ./deb/ceph-16.2.0-63-g0af03c54/src/boost/boost/asio/local/detail/impl/endpoint.ipp
-rw-r--r-- 1 mkogan mkogan 2846 Apr 22  2020 ./deb/ceph-16.2.0-63-g0af03c54/src/boost/boost/asio/generic/detail/impl/endpoint.ipp
-rw-r--r-- 1 mkogan mkogan 5850 Apr 22  2020 ./deb/ceph-16.2.0-63-g0af03c54/src/boost/boost/asio/ip/detail/impl/endpoint.ipp

vim ./deb/ceph-16.2.0-63-g0af03c54/src/boost/boost/asio/ip/detail/impl/endpoint.ipp +183
179 #if !defined(BOOST_ASIO_NO_IOSTREAM)
180 std::string endpoint::to_string() const
181 {
182   std::ostringstream tmp_os;
->183   tmp_os.imbue(std::locale::classic());
184   if (is_v4())
185     tmp_os << address();
186   else
187     tmp_os << '[' << address() << ']';
188   tmp_os << ':' << port();
189
190   return tmp_os.str();
191 }
192 #endif // !defined(BOOST_ASIO_NO_IOSTREAM)
Updated by Kefu Chai about 3 years ago
rados/upgrade/pacific-x/rgw-multisite/{clusters frontend overrides realm tasks upgrade/primary}
/a/kchai-2021-04-12_02:29:32-rados-wip-kefu-testing-2021-04-12-0057-distro-basic-smithi/6037153
crashed on smithi165, which was ubuntu bionic.
16.2.0-90-g50f1821b-1bionic
...
ceph version 16.2.0-90-g50f1821b (50f1821b4caa22e3a2ca880c14f02061498aa47a) pacific (stable)
The pacific build on bionic is built with a PPA repo, and we always pass "WITH_STATIC_LIBSTDCXX=ON" when the GCC from a PPA repo is used, so the built executable does not depend on the C++ runtime from the updated GCC?
++ use_ppa=true
++ true
++ hookdir=/home/jenkins-build/.pbuilder/hook.d
++ rm -rf /home/jenkins-build/.pbuilder/hook.d
++ mkdir -p /home/jenkins-build/.pbuilder/hook.d
++ echo /home/jenkins-build/.pbuilder/hook.d
+ hookdir=/home/jenkins-build/.pbuilder/hook.d
+ setup_updates_repo /home/jenkins-build/.pbuilder/hook.d
+ local hookdir=/home/jenkins-build/.pbuilder/hook.d
+ [[ ubuntu != ubuntu ]]
+ [[ x86_64 == \x\8\6\_\6\4 ]]
+ cat
+ chmod +x /home/jenkins-build/.pbuilder/hook.d/D04install-updates-repo
+ setup_pbuilder_for_ppa /home/jenkins-build/.pbuilder/hook.d
+ local hookdir=/home/jenkins-build/.pbuilder/hook.d
+ use_ppa
+ case $vers in
+ case $DIST in
+ use_ppa=true
+ true
+ local gcc_ver=7
+ '[' bionic = bionic ']'
+ gcc_ver=9
+ setup_pbuilder_for_new_gcc /home/jenkins-build/.pbuilder/hook.d 9
...
-- The CXX compiler identification is GNU 9.3.0
-- The C compiler identification is GNU 9.3.0
probably we could try to test on focal so no 3rd party GCC is involved?
https://github.com/ceph/ceph/pull/40803 was created in the hope of addressing (working around) this issue.
Updated by Casey Bodley about 3 years ago
- Related to Bug #50269: s3select crash in std::locale added
Updated by Casey Bodley almost 3 years ago
- Has duplicate Bug #50742: crash: std::locale::operator=(std::locale const&) added
Updated by Casey Bodley almost 3 years ago
- Has duplicate Bug #50741: crash: librados::v14_2_0::IoCtx::operate(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, librados::v14_2_0::ObjectWriteOperation*) added
Updated by Yaarit Hatuka almost 3 years ago
- Status changed from Resolved to Closed
Not a Ceph bug, changing to Closed.