Backport #39660

closed

nautilus: rgw: Segfault during request processing

Added by Benjamin Cherian almost 5 years ago. Updated over 4 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
-
Target version:
v14.2.5
Release:
nautilus
Pull request ID:
30746
Crash signature (v1):

deef9bb5ae652e197bdd1d574e72ce1b13610ac0b7e6886d7273d3fe5d3bbb4a
c083ce297aedc92fef4aa6ec4c427a0d1a8d615cbc0ad87aeddc6b15d93b66ab
37ef0617bce5a8714a7327431e1fcf77ba16d83d19721c12fd04700611a449aa
efe1291e222af22d17e9f2cd8525bbdf4d52d6f934963798271bbc8903890e0e
7bea2b9c3e7c411daaa0a16d8fd9bbcdc34649097e804bf96afedd8e11a3827c

Crash signature (v2):

Description

https://github.com/ceph/ceph/pull/30746


Segfault in RadosGW with the Beast frontend, occurring every few hours under a constant mixed workload. The RGW log with the crash traceback is attached.

Output of ceph crash info is below:

{
"crash_id": "2019-05-09_22:53:06.531229Z_2b2c29f8-489e-411b-9428-efaa0e875006",
"timestamp": "2019-05-09 22:53:06.531229Z",
"process_name": "radosgw",
"entity_name": "client.rgw.cis01.rgw0",
"ceph_version": "14.2.0",
"utsname_hostname": "cis01",
"utsname_sysname": "Linux",
"utsname_release": "4.15.0-46-generic",
"utsname_version": "#49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019",
"utsname_machine": "x86_64",
"os_name": "Ubuntu",
"os_id": "ubuntu",
"os_version_id": "18.04",
"os_version": "18.04.2 LTS (Bionic Beaver)",
"backtrace": [
"(()+0x12890) [0x7fcc0ef88890]",
"(void boost::intrusive::list_impl<boost::intrusive::bhtraits<rgw::AioResultEntry, boost::intrusive::list_node_traits<void*>, (boost::intrusive::link_mode_type)1, boost::intrusive::dft_tag, 1u>, unsigned long, true, void>::sort<get_obj_data::flush(rgw::OwningList<rgw::AioResultEntry>&&)::{lambda(auto:1 const&, auto:2 const&)#1}>(get_obj_data::flush(rgw::OwningList<rgw::AioResultEntry>&&)::{lambda(auto:1 const&, auto:2 const&)#1})+0x55e) [0x557a70739e6e]",
"(get_obj_data::flush(rgw::OwningList<rgw::AioResultEntry>&&)+0x68) [0x557a7073a268]",
"(RGWRados::get_obj_iterate_cb(rgw_raw_obj const&, long, long, long, bool, RGWObjState*, void*)+0x1bb) [0x557a7070761b]",
"(()+0x5e3a74) [0x557a70707a74]",
"(RGWRados::iterate_obj(RGWObjectCtx&, RGWBucketInfo const&, rgw_obj const&, long, long, unsigned long, int ()(rgw_raw_obj const&, long, long, long, bool, RGWObjState, void*), void*)+0x44e) [0x557a7071de5e]",
"(RGWRados::Object::Read::iterate(long, long, RGWGetDataCB*)+0x17e) [0x557a70721c1e]",
"(RGWGetObj::execute()+0xfb7) [0x557a706a2627]",
"(rgw_process_authenticated(RGWHandler_REST*, RGWOp*&, RGWRequest*, req_state*, bool)+0x710) [0x557a70455f40]",
"(process_request(RGWRados*, RGWREST*, RGWRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rgw::auth::StrategyRegistry const&, RGWRestfulIO*, OpsLogSocket*, optional_yield, rgw::dmclock::Scheduler*, int*)+0x1d04) [0x557a70458834]",
"(()+0x29d5fa) [0x557a703c15fa]",
"(()+0x29e1cd) [0x557a703c21cd]",
"(make_fcontext()+0x2f) [0x557a708ad6af]"
]
}


Files

ceph-rgw-cis01.rgw0.log.gz (206 KB) ceph-rgw-cis01.rgw0.log.gz RGW Log with segfault Benjamin Cherian, 05/10/2019 12:58 AM
rgw-segfault.log (1.94 KB) rgw-segfault.log Michael Bisig, 07/01/2019 08:10 AM
loadgraph.png (46.8 KB) loadgraph.png Roland Sommer, 08/23/2019 06:29 AM

Related issues 2 (0 open, 2 closed)

Has duplicate: rgw - Bug #41511: civetweb threads using 100% of CPU (Duplicate, 08/26/2019)

Has duplicate: rgw - Bug #43135: rgw: core dump in RadosWriter (Duplicate)
Actions #1

Updated by Abhishek Lekshmanan almost 5 years ago

  • Priority changed from Normal to High
Actions #2

Updated by Thomas Kriechbaumer almost 5 years ago

I'm seeing the same crash - multiple times in the last week:

$ ceph crash ls
2019-05-28_13:59:44.820948Z_16d7cfdd-afa4-4580-a6c1-4351ab6fc5a8 client.rgw.s1
2019-05-28_21:37:42.684672Z_5a593ce3-c4da-4051-a79d-3e307ed33795 client.rgw.s3
2019-05-29_08:35:58.759415Z_a33193ba-0a23-41dc-9076-ff79633e54bd client.rgw.s3
2019-05-29_15:43:09.297051Z_cb937742-ebfd-4cff-9c13-8238b8060856 client.rgw.s0
2019-05-29_18:33:05.316942Z_a8fbd7be-5952-4ae2-aa05-91054495501e client.rgw.s4
2019-05-30_01:01:41.917151Z_858b20b8-6fd0-4c2c-a564-43dff7011aeb client.rgw.s4
2019-06-04_09:24:53.252360Z_5559f4b2-ea29-4fa8-acf1-d4cdc800ac4b client.rgw.s2
2019-06-04_09:32:05.195274Z_6e25d4e7-0688-4e2a-bee2-f667ada1193c client.rgw.s1
2019-06-04_10:23:13.216684Z_85ee8e56-03a8-426b-a163-fa067e116ce4 client.rgw.s3
2019-06-04_13:53:40.675588Z_a0acb6f9-04a5-461d-b90e-38d0ce57d787 client.rgw.s2
2019-06-04_14:43:06.324040Z_04e147b8-68de-46ab-847d-2e5255bb603d client.rgw.s0

$ ceph crash info 2019-06-04_14:43:06.324040Z_04e147b8-68de-46ab-847d-2e5255bb603d
{
"crash_id": "2019-06-04_14:43:06.324040Z_04e147b8-68de-46ab-847d-2e5255bb603d",
"timestamp": "2019-06-04 14:43:06.324040Z",
"process_name": "radosgw",
"entity_name": "client.rgw.s0",
"ceph_version": "14.2.1",
"utsname_hostname": "s0",
"utsname_sysname": "Linux",
"utsname_release": "4.15.0-48-generic",
"utsname_version": "#51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019",
"utsname_machine": "x86_64",
"os_name": "Ubuntu",
"os_id": "ubuntu",
"os_version_id": "18.04",
"os_version": "18.04.2 LTS (Bionic Beaver)",
"backtrace": [
"(()+0x12890) [0x7f2a9ebbe890]",
"(boost::intrusive::circular_list_algorithms<boost::intrusive::list_node_traits<void*> >::swap_nodes(boost::intrusive::list_node<void*>* const&, boost::intrusive::list_node<void*>* const&)+0x52) [0x55affba5b0a2]",
"(void boost::intrusive::list_impl<boost::intrusive::bhtraits<rgw::AioResultEntry, boost::intrusive::list_node_traits<void*>, (boost::intrusive::link_mode_type)1, boost::intrusive::dft_tag, 1u>, unsigned long, true, void>::sort<get_obj_data::flush(rgw::OwningList<rgw::AioResultEntry>&&)::{lambda(auto:1 const&, auto:2 const&)#1}>(get_obj_data::flush(rgw::OwningList<rgw::AioResultEntry>&&)::{lambda(auto:1 const&, auto:2 const&)#1})+0x2ab) [0x55affba5b3fb]",
"(get_obj_data::flush(rgw::OwningList<rgw::AioResultEntry>&&)+0x7d) [0x55affba5b65d]",
"(RGWRados::get_obj_iterate_cb(rgw_raw_obj const&, long, long, long, bool, RGWObjState*, void*)+0x32e) [0x55affba2a4be]",
"(()+0x6196f4) [0x55affba2a6f4]",
"(RGWRados::iterate_obj(RGWObjectCtx&, RGWBucketInfo const&, rgw_obj const&, long, long, unsigned long, int ()(rgw_raw_obj const&, long, long, long, bool, RGWObjState, void*), void*)+0x44e) [0x55affba3e58e]",
"(RGWRados::Object::Read::iterate(long, long, RGWGetDataCB*)+0x178) [0x55affba422b8]",
"(RGWGetObj::execute()+0x8fb) [0x55affb9c9e0b]",
"(rgw_process_authenticated(RGWHandler_REST*, RGWOp*&, RGWRequest*, req_state*, bool)+0x915) [0x55affb797df5]",
"(process_request(RGWRados*, RGWREST*, RGWRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rgw::auth::StrategyRegistry const&, RGWRestfulIO*, OpsLogSocket*, optional_yield, rgw::dmclock::Scheduler*, int*)+0x253c) [0x55affb79a9dc]",
"(()+0x2f06b8) [0x55affb7016b8]",
"(()+0x2f114a) [0x55affb70214a]",
"(make_fcontext()+0x2f) [0x55affbba310f]"
]
}

Actions #3

Updated by Valery Tschopp almost 5 years ago

Our production cluster at SWITCH.ch has the same issue with the radosgw.

Since we deployed radosgw 14.2.1 (Nautilus) last week, the radosgw processes crash under heavy load with the same stack trace.

Actions #4

Updated by Casey Bodley almost 5 years ago

  • Status changed from New to 12
Actions #5

Updated by Casey Bodley almost 5 years ago

  • Priority changed from High to Urgent
Actions #6

Updated by Gal Salomon almost 5 years ago

Can you attach a description of your test run, or the script itself, that produces this crash?

Actions #7

Updated by Michael Bisig almost 5 years ago

Hi
We found the error on our production cluster running Ceph 14.2.1.

The crash appears roughly every 30 hours and requires a restart of the rgw. We do not have a test script; it happens under normal load (with both the civetweb and beast frontends).

I attached a file with our segmentation fault, but it is equivalent to the crash reports above. Unfortunately, we do not have much more information.

Actions #8

Updated by Gal Salomon almost 5 years ago

Thanks for the additional information.
What about the core file? Can you upload it?

Actions #9

Updated by Gal Salomon almost 5 years ago

Hi,
Can you tell us what the memory consumption of the rgw process was before it crashed?
Mainly virtual memory vs. resident memory.
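
For reference, one way to record those numbers ahead of the next crash (a minimal sketch using standard procps tools; assumes the process name radosgw as in the crash reports above):

# sample virtual (VSZ) and resident (RSS) memory plus thread count (NLWP) once a minute
$ while true; do date; ps -C radosgw -o pid,vsz,rss,nlwp,cmd; sleep 60; done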

Actions #10

Updated by Roland Sommer over 4 years ago

We encounter the exact same problem using Ceph 14.2.2 on Ubuntu 18.04. The load on the rgw nodes rises constantly, and more and more threads seem to be spawned. I can observe 100% CPU usage on every core, with 10 GB virtual and 1.8 GB resident memory on one node that is currently in the state it usually reaches shortly before segfaulting again.
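
For reference, a quick sketch of how the per-thread CPU usage can be observed on an affected node (assumes a single radosgw instance per host, so pidof returns one PID):

# per-thread CPU view of the radosgw process (press 'q' to quit)
$ top -H -p "$(pidof radosgw)"

# one-shot snapshot of thread count, memory and CPU
$ ps -p "$(pidof radosgw)" -o pid,nlwp,vsz,rss,%cpu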

Actions #11

Updated by Gal Salomon over 4 years ago

Thanks a lot.
Can you send the ceph.conf?
Did you run with civetweb or beast (frontend)?
Do you know which allocator? tcmalloc or libc?

Actions #12

Updated by Roland Sommer over 4 years ago

We are using the beast frontend and have not configured anything special regarding memory allocation, so this is tcmalloc under Ubuntu with TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728.
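
In case it helps others checking the same thing, a sketch of how to confirm which allocator the gateway is linked against and where that thread-cache value usually comes from on Ubuntu (the upstream Debian/Ubuntu packages set it in /etc/default/ceph, which the systemd units read; adjust paths if your packaging differs):

# which malloc implementation is radosgw linked against?
$ ldd "$(command -v radosgw)" | grep -E 'tcmalloc|jemalloc' || echo "glibc malloc"

# the environment file read by the ceph systemd units
$ grep TCMALLOC /etc/default/ceph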

Actions #13

Updated by Thomas Kriechbaumer over 4 years ago

We are running Ubuntu 18.04, regular kernel, no special allocator.
Apart from that the only non-default settings we have in our ceph.conf are:

Frontend setting:
rgw_frontends = beast endpoint=127.0.0.1:4444

rgw thread pool size = 512
rgw cache enabled = true
rgw s3 auth use ldap = true
... (and a few more LDAP related settings)

Actions #14

Updated by Roland Sommer over 4 years ago

On our side, the non-default settings (regarding frontend) are:

rgw_cache_lru_size = 100000
rgw_override_bucket_index_max_shards = 16
rgw_enable_static_website = true
rgw_gc_max_objs = 997
debug rgw = 1
rgw resolve cname = true
rgw_enable_usage_log = true
rgw_cache_enabled = true
rgw_num_rados_handles = 32
rgw_thread_pool_size = 512
rgw_frontends = "beast port=80 ssl_port=443 ssl_certificate=/etc/ssl/wildcard.pem" 
Actions #15

Updated by Roland Sommer over 4 years ago

I attached a sample graph from one of our rgw nodes (they all look similar) where you can see the load pattern / crash cycle since the update to 14.2.2. While running 13.2.x, everything was fine.

Actions #16

Updated by Casey Bodley over 4 years ago

Could you please reduce num rados handles to 1? Anything higher is known to cause problems, and the option itself is being removed in nautilus 14.2.3. Please let us know whether the crash still reproduces with a single handle.
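
A sketch of one way to apply that (the entity and unit names below, e.g. client.rgw.s0, are examples taken from this thread; alternatively edit the rgw section of ceph.conf). The option is not picked up at runtime, so the gateways need a restart afterwards:

# override the setting for all rgw clients via the centralized config database ...
$ ceph config set client rgw_num_rados_handles 1

# ... or set "rgw_num_rados_handles = 1" in the [client.rgw.<name>] section of ceph.conf,
# then restart each gateway
$ systemctl restart ceph-radosgw@rgw.s0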

Actions #17

Updated by Thomas Kriechbaumer over 4 years ago

We are not using "num rados handles" and currently see RGW crashing multiple times per hour!

2019-08-29_00:03:09.533585Z_bd4155a9-641a-4237-ac92-fa19d3d316d8 client.rgw.s0
2019-08-29_01:06:20.637864Z_41c84cdd-400c-4adc-9344-7ec406609ed4 client.rgw.s4
2019-08-29_01:08:37.980497Z_f0527403-87e5-4c77-b464-b20a3bfbb380 client.rgw.s3
2019-08-29_01:12:41.319268Z_451a532d-d3d0-4957-a48a-da0b0cab41ce client.rgw.s0
2019-08-29_01:58:28.440703Z_17281f4b-948a-4d61-a594-6eccbeab63c8 client.rgw.s3
2019-08-29_02:36:09.323537Z_cb50cffd-c52a-4492-9e37-b50fd816fb04 client.rgw.s3
2019-08-29_03:30:50.428450Z_7d89288f-0336-4da4-8294-685ac7e7f33b client.rgw.s1
2019-08-29_03:32:06.523949Z_6939ccfc-d332-4313-9aea-e43e71c86fb1 client.rgw.s3
2019-08-29_03:52:03.471603Z_e94417c6-dddc-421a-beec-8a53eaa0d866 client.rgw.s3
2019-08-29_04:08:25.456541Z_04f9b4aa-1efd-46f0-96e1-e3319b726a79 client.rgw.s0
2019-08-29_04:10:01.423254Z_b49e2a1c-058c-42da-b3fc-46a6e51946e1 client.rgw.s1
2019-08-29_05:08:42.441626Z_3f1b8479-e0dd-433f-b1e7-df8ac7f04e5f client.rgw.s4
2019-08-29_05:32:24.489384Z_da93676f-b14d-4361-a447-c4490b5d07fa client.rgw.s2
2019-08-29_05:47:03.566947Z_20eef172-9ad3-4aa5-b9b9-2de4f1934436 client.rgw.s3
2019-08-29_05:58:03.979830Z_30c70eb4-50ea-46de-b950-65c674892a44 client.rgw.s3
2019-08-29_05:59:34.363028Z_ce72a01a-97c3-46de-a628-1077e439da81 client.rgw.s1
2019-08-29_06:43:49.681058Z_eeacb43b-8a04-4350-9ad9-eb079a6082be client.rgw.s4
2019-08-29_06:46:25.499259Z_84d562bf-8f89-4d70-9ddd-da3db35f3dab client.rgw.s4
2019-08-29_06:57:57.403025Z_32066843-63b2-4add-a074-b09ff75714ae client.rgw.s3
2019-08-29_07:49:09.194436Z_1c220f53-7bcb-4f98-bb62-efa14a91c34d client.rgw.s8
2019-08-29_10:40:17.426785Z_5cc1e21f-7291-44ad-923a-f7aac8a58c6b client.rgw.s8
2019-08-29_14:50:41.143234Z_248a5617-bcd3-4f59-aa78-4d80381f79be client.rgw.s2
2019-08-29_15:04:51.091926Z_73781253-c349-4f5e-84d4-c6e58b24a204 client.rgw.s4
2019-08-29_15:11:21.090542Z_dde4f2c3-1f14-4cf7-b106-eb20f77cd729 client.rgw.s2
2019-08-29_15:22:57.224304Z_f9f7afe3-3f88-4a80-8e20-8ecab51af63d client.rgw.s2
2019-08-29_15:22:57.783620Z_8ef1f65f-9b0c-4214-81c9-ffb63c4136b6 client.rgw.s0
2019-08-29_15:44:46.256449Z_f7191eca-3974-440d-8f9e-70133433d96c client.rgw.s2
2019-08-29_15:56:34.430647Z_c70a9497-dc80-4004-a6b8-cf396f4f3ca0 client.rgw.s4
2019-08-29_15:57:28.351862Z_6771086b-ed5e-4669-8e50-f4f604007ca7 client.rgw.s4
2019-08-29_16:04:37.004699Z_d1c355e1-7ef5-42ed-9535-b1ba201e763c client.rgw.s0
2019-08-29_16:04:54.491304Z_a8319afb-81cb-4ecb-8fe3-73ebd12fe6e3 client.rgw.s3
2019-08-29_16:06:17.980549Z_b93acbd5-ce41-4d97-b508-4b9ad1b8c030 client.rgw.s0
2019-08-29_16:13:25.268542Z_8f8afdba-e10e-4a7b-8b68-8a7f0cd7b46d client.rgw.s1
2019-08-29_16:13:28.689191Z_eef5657e-a679-4730-a4dc-e52440bcca6f client.rgw.s3
2019-08-29_16:24:12.076799Z_b337198a-9602-46a4-a9db-41f57181cb19 client.rgw.s3
2019-08-29_16:31:22.739961Z_b3367c30-a7e6-4ca7-98d5-947a6a40174c client.rgw.s3
2019-08-29_16:34:39.124604Z_47298ea2-5e72-4398-ab41-920f88378f58 client.rgw.s0
2019-08-29_16:39:22.410032Z_93d33e53-74f2-4638-b8bc-b2f7372d39ba client.rgw.s3
2019-08-29_17:15:42.567189Z_fc708126-491a-4687-8a0c-79565d8995df client.rgw.s0
2019-08-29_17:20:08.814902Z_8663799b-8596-4e59-a989-f15a8e5d7fb1 client.rgw.s0
2019-08-29_17:40:32.771758Z_8d7c2059-5097-4c6b-9a5f-aa85e22787ca client.rgw.s3
2019-08-29_18:13:30.199420Z_f799df90-041b-4f5d-9a55-396288382b82 client.rgw.s1

This is just from today - it has been going on since the end of May, when we upgraded to Nautilus; at least, that is when "ceph crash ls" started logging these segfaults!

We are currently on 14.2.2 with this config:

[global]
fsid = <snip>
mon initial members =  <snip>
mon host =  <snip>

cluster network =  <snip>
public network =  <snip>

auth cluster required = cephx
auth service required = cephx
auth client required = cephx

osd pool default pg num = 16
osd pool default pgp num = 16

osd heartbeat grace = 60

rgw thread pool size = 512
rgw cache enabled = true

rgw s3 auth use ldap = true
rgw ldap uri =  <snip>
rgw ldap binddn =  <snip>
rgw ldap secret =  <snip>
rgw ldap searchdn =  <snip>
rgw ldap searchfilter =  <snip>

[client.rgw.s0]
rgw_frontends = beast endpoint=127.0.0.1:4444

(repeated rgw config for 5 other nodes running behind haproxy to do TLS and load balancing)
Actions #18

Updated by Roland Sommer over 4 years ago

After resetting num_rados_handles back to 1, the load and segfault behaviour seems to have stopped.

Actions #19

Updated by Benjamin Cherian over 4 years ago

Just FYI, on my cluster where I reported (and still see) this issue, I had never changed num_rados_handles, so I don't think that alone is the problem.

Actions #20

Updated by Roland Sommer over 4 years ago

After running smoothly for over two weeks, the load began to rise slightly again and another segfault happened.

Actions #21

Updated by Ilsoo Byun over 4 years ago

There seems to be a race condition. The location where the error occurred changed each time, but there was one thing in common: they were all related to AioResultList. I therefore protected the list with a lock, and the phenomenon disappeared. I created a related pull request: https://github.com/ceph/ceph/pull/30746

Actions #22

Updated by Sage Weil over 4 years ago

This is being reported by telemetry for 1 cluster, ~20 crashes/day

Actions #23

Updated by Casey Bodley over 4 years ago

  • Status changed from 12 to In Progress
  • Pull request ID set to 30746
Actions #24

Updated by Casey Bodley over 4 years ago

  • Has duplicate Bug #41511: civetweb threads using 100% of CPU added
Actions #25

Updated by Casey Bodley over 4 years ago

  • Status changed from In Progress to 7
Actions #26

Updated by Casey Bodley over 4 years ago

  • Tracker changed from Bug to Backport

changed from Bug to Backport since this targets nautilus

Actions #27

Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)
  • Status changed from 7 to In Progress
  • Release set to nautilus
Actions #28

Updated by Nathan Cutler over 4 years ago

  • Subject changed from RadosGW Segfault during request processing to nautilus: rgw: Segfault during request processing
Actions #29

Updated by Sage Weil over 4 years ago

  • Crash signature (v1) updated (diff)
Actions #30

Updated by Sage Weil over 4 years ago

  • Crash signature (v1) updated (diff)
Actions #31

Updated by Yuri Weinstein over 4 years ago

Benjamin Cherian wrote:

(quote of the issue description above omitted)

merged

Actions #32

Updated by Nathan Cutler over 4 years ago

  • Status changed from In Progress to Resolved
  • Target version set to v14.2.5

This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30746
merge commit 59c0b7aa67b0bf89ebb822cc622849c742da1f69 (v14.2.4-532-g59c0b7aa67)
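
For anyone tracking whether a given deployment already contains this backport, a quick sketch (the fix first ships in the v14.2.5 nautilus point release, per the target version above; the package name radosgw assumes the Debian/Ubuntu packages used in this thread):

# version of the installed gateway binary on each rgw host
$ radosgw --version

# or the installed package version
$ dpkg -s radosgw | grep Version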

Actions #33

Updated by Casey Bodley over 4 years ago

  • Has duplicate Bug #43135: rgw: core dump in RadosWriter added