Ceph : Issues (https://tracker.ceph.com/, 2024-03-27T22:30:11Z)
Redmine rgw - Bug #65188 (Fix Under Review): rgwlc: Executing radosgw-admin lc process --bucket <bkt-name... (https://tracker.ceph.com/issues/65188, 2024-03-27T22:30:11Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>[LC-Process]: Executing radosgw-admin lc process --bucket <bkt-name> without setting lc rule results in Segmentation fault</p>
<p>Description of problem:<br />[LC-Process]: Executing radosgw-admin lc process --bucket <bkt-name> without setting lc rule results in Segmentation fault</p>
<p>Version-Release number of selected component (if applicable):<br />ceph version 18.2.1-73.el9cp</p>
<p>How reproducible:<br />3/3</p>
<p>Steps to Reproduce:<br />1. Deploy cluster with: ceph version 18.2.1-73.el9cp<br />2. Create a bucket: <bkt_name><br />3. Upload object to the bucket<br />4. Perform: radosgw-admin lc process --bucket <bkt_name></p>
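<p>For context, a minimal bucket lifecycle configuration of the kind whose absence triggers the crash (rule ID hypothetical) could be applied before step 4 with <code>aws s3api put-bucket-lifecycle-configuration</code>:</p>

```json
{
  "Rules": [
    {
      "ID": "expire-after-1-day",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 1}
    }
  ]
}
```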
<p>Actual results:<br />Throwing error:</p>
<pre>
*** Caught signal (Segmentation fault) **
 in thread 7f74eed29800 thread_name:radosgw-admin
 ceph version 18.2.1-73.el9cp (16d1bc4bed21ede5993c301b4626fa21cbe97cff) reef (stable)
 1: /lib64/libc.so.6(+0x54db0) [0x7f74ef254db0]
 2: (RGWLC::process_bucket(int, int, RGWLC::LCWorker*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)+0x2b6) [0x556e82e69626]
 3: (RGWLC::process(RGWLC::LCWorker*, std::unique_ptr<rgw::sal::Bucket, std::default_delete<rgw::sal::Bucket> > const&, bool)+0xb7) [0x556e82e6d1a7]
 4: (RGWRados::process_lc(std::unique_ptr<rgw::sal::Bucket, std::default_delete<rgw::sal::Bucket> > const&)+0xdd) [0x556e831d409d]
 5: main()
 6: /lib64/libc.so.6(+0x3feb0) [0x7f74ef23feb0]
 7: __libc_start_main()
 8: _start()
2024-03-20T02:17:03.968-0400 7f74eed29800 -1 *** Caught signal (Segmentation fault) **
 in thread 7f74eed29800 thread_name:radosgw-admin
</pre> rgw - Bug #64950 (Pending Backport): rgw-nfs: various file mv (rename) operations fail (https://tracker.ceph.com/issues/64950, 2024-03-15T13:56:58Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>This is a regression, probably dating back to Quincy. The initial regression was caused by zipper integration.</p> rgw - Bug #64676 (Pending Backport): rgw: awssigv4: new trailer boundary case (https://tracker.ceph.com/issues/64676, 2024-03-02T19:34:30Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>I observed an environment in which the Maven test suite, using HTTP, generated a trailer chunk<br />boundary of "0;" rather than the expected "\r\n0;".</p>
<p>This only arose during upload of a 0-length object.</p>
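<p>To illustrate the boundary case, a tolerant sketch (hypothetical helper, not the actual rgw sigv4 parser) that locates the terminal chunk of an aws-chunked body whether or not a CRLF precedes "0;":</p>

```python
def find_final_chunk(body: bytes) -> int:
    """Return the offset of the terminal zero-length chunk header.

    Accepts both the usual b"\r\n0;" boundary and the bare b"0;" form
    observed for 0-length uploads. Raises ValueError if absent.
    Illustrative only; not rgw's implementation.
    """
    idx = body.find(b"\r\n0;")
    if idx != -1:
        return idx + 2          # skip the CRLF, point at "0;"
    if body.startswith(b"0;"):  # 0-length object: no preceding chunk data
        return 0
    raise ValueError("terminal chunk boundary not found")

# Normal upload: one data chunk, then the terminal chunk with a trailer.
normal = (b"5;chunk-signature=abc\r\nhello\r\n"
          b"0;chunk-signature=def\r\nx-amz-checksum-crc32:AAAAAA==\r\n\r\n")
# 0-length upload as observed in the report: the body begins with "0;".
empty = b"0;chunk-signature=def\r\nx-amz-checksum-crc32:AAAAAA==\r\n\r\n"
```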
<p>The failing test has evidently run successfully at and since merge; this was seen only when the current sigv4 parsing logic was taken into our downstream 7.0z1.</p> rgw - Bug #64109 (In Progress): rgw: implement GetObjAttributes (https://tracker.ceph.com/issues/64109, 2024-01-20T22:40:57Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>As described in [1], GetObjectAttributes "combines the functionality of HeadObject and ListParts. All of the data returned with each of those individual calls can be returned with a single call to GetObjectAttributes."</p>
<p>Among other things, the operation can return the S3 checksums of the component parts of multipart-uploaded objects.</p>
<p>AwsCli Example:<br /><pre>
aws --endpoint-url=http://fedora.private:8000 s3api get-object-attributes --bucket sheik --key yerbouti --object-attributes "ETag" "ObjectSize"
{
"LastModified": "2024-01-19T18:05:58+00:00",
"ObjectSize": 3340
}
</pre></p>
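<p>As background on the part-level data involved (an illustration of the AWS composite scheme, not rgw code): for multipart uploads the object-level ETag is the MD5 of the concatenated binary part MD5s, suffixed with the part count, which a GetObjectAttributes-style part listing makes verifiable client-side:</p>

```python
import hashlib

def multipart_etag(parts: list[bytes]) -> str:
    """Composite ETag for a multipart upload: MD5 over the concatenated
    binary MD5 digests of each part, then "-<part count>".
    Illustrative sketch of the AWS scheme, not rgw's implementation."""
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return hashlib.md5(digests).hexdigest() + f"-{len(parts)}"

# Two hypothetical parts (real uploads use >= 5 MiB parts; tiny here)
print(multipart_etag([b"part-one", b"part-two"]))
```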
<p>[1] <a class="external" href="https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html">https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html</a></p> rgw - Bug #63962 (Triaged): rgw-file: FLAG_SYMBOLIC_LINK decl aliases other flags (https://tracker.ceph.com/issues/63962, 2024-01-08T18:19:50Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>I don't think this bug currently has side effects, but it should be fixed (to claim a unique bit).</p> rgw - Bug #63951 (Fix Under Review): rgw: implement S3 additional checksums (https://tracker.ceph.com/issues/63951, 2024-01-07T21:54:37Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>Since ~2022, AWS SDKs can be configured to send an additional content checksum, using one of CRC32, CRC32c, SHA1, or SHA256-format digests.</p>
<p>The new content checksums are sent as the value of one of the new content checksum headers, x-amz-checksum-{crc32, crc32c, sha1, sha256}.<br />Depending on the SDK and encoding choices, the checksum header may be sent as a traditional header or as an aws-chunked trailing header.</p>
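<p>As an illustration of the header values involved (a sketch, not rgw's implementation, which is the subject of this tracker): the x-amz-checksum-crc32 value is the base64 encoding of the big-endian 4-byte CRC32 of the payload:</p>

```python
import base64
import struct
import zlib

def crc32_checksum_header(payload: bytes) -> str:
    """Value for x-amz-checksum-crc32: base64 of the big-endian CRC32."""
    return base64.b64encode(struct.pack(">I", zlib.crc32(payload))).decode()

print(crc32_checksum_header(b""))   # CRC32 of empty input is 0 -> "AAAAAA=="
print(crc32_checksum_header(b"hello"))
```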
<p>Content checksums are stored with S3 objects and can be recovered via S3 metadata requests (e.g., HeadObject, GetObjectAttributes).</p> rgw - Bug #63546 (Resolved): rgwlc: even current object versions have a unique instance (https://tracker.ceph.com/issues/63546, 2023-11-15T18:38:50Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>Lifecycle processing was clearing the list prior to sending the info to rgw_notify.</p> rgw - Bug #63467 (Triaged): rgwlc: require tests that verify creation of delete markers in curre... (https://tracker.ceph.com/issues/63467, 2023-11-07T13:11:56Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>For example, such tests should detect any regression in <a class="external" href="https://tracker.ceph.com/issues/63458">https://tracker.ceph.com/issues/63458</a></p> rgw - Bug #63458 (Resolved): rgwlc: currentversion expiration incorrectly removes objects (rather... (https://tracker.ceph.com/issues/63458, 2023-11-06T20:10:17Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>This regression is a partial side effect of an attempted optimization to re-use a sal object handle, introduced in <a class="external" href="https://github.com/ceph/ceph/pull/50680">https://github.com/ceph/ceph/pull/50680</a> (merged 11/5).</p> rgw - Bug #63394 (Pending Backport): rgw: link only radosgw with ALLOC_LIBS (https://tracker.ceph.com/issues/63394, 2023-11-01T19:55:32Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>In particular, do not link intermediate dependencies nor librgw.so.2<br />with a custom allocator (normally tcmalloc).</p>
<p>This prevents illegal behavior due to mismatched allocators when run<br />under nfs-ganesha or other consumers.</p>
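<p>A hedged sketch of the intended linkage (target names illustrative, not the actual PR): link the allocator only into the final executable, never into shared libraries that a foreign host may dlopen:</p>

```cmake
# Link tcmalloc (ALLOC_LIBS) only into the radosgw executable target.
target_link_libraries(radosgw PRIVATE ${ALLOC_LIBS})

# Intermediate libraries and librgw.so keep the default allocator, so a
# host such as nfs-ganesha that dlopen()s them never mixes tcmalloc and
# glibc malloc/free. (I.e., no ${ALLOC_LIBS} on the library targets.)
```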
<p>Example gdb session:</p>
<p>"""</p>
<pre>
(gdb) down
#10 0x00007ffff5dc7850 in Option::Option (this=0x4786d0) at /home/mbenjamin/dev/ceph-cp/src/common/options.h:14
14 struct Option {
(gdb) w
Missing arguments.
(gdb) where
#0  _int_malloc (av=av@entry=0x7ffff7ac7c80 <main_arena>, bytes=44) at malloc.c:4026
#1  0x00007ffff798fd72 in __GI___libc_malloc (bytes=<optimized out>) at malloc.c:3297
#2  0x00007ffff46b570c in operator new (sz=44) at ../../../../libstdc++-v3/libsupc++/new_op.cc:50
#3  0x00007ffff5bcabf5 in std::__new_allocator<char>::allocate (this=0x4786f8, __n=44)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/new_allocator.h:147
#4  0x00007ffff5bcab71 in std::allocator<char>::allocate (this=0x4786f8, __n=44)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/allocator.h:198
#5  std::allocator_traits<std::allocator<char> >::allocate (__a=..., __n=44)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/alloc_traits.h:482
#6  std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_S_allocate (__a=..., __n=44)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/basic_string.h:126
#7  0x00007ffff5bca998 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_create (this=0x4786f8, __capacity=@0x7ffffffe59f8: 43, __old_capacity=0)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/basic_string.tcc:155
#8  0x00007ffff5bcce42 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_construct<char*> (this=0x4786f8, __beg=0x45c7e0 "time in seconds for detecting a hung thread", __end=0x45c80b "")
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/basic_string.tcc:225
#9  0x00007ffff5bccd81 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string (this=0x4786f8, __str="time in seconds for detecting a hung thread")
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/basic_string.h:541
#10 0x00007ffff5dc7850 in Option::Option (this=0x4786d0) at /home/mbenjamin/dev/ceph-cp/src/common/options.h:14
#11 0x00007ffff5dc77bd in std::_Construct<Option, Option const&> (__p=0x4786d0, __args=...)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/stl_construct.h:119
#12 0x00007ffff5dc7707 in std::__do_uninit_copy<Option const*, Option*> (__first=0x7fffffff2b58, __last=0x7fffffffc848, __result=0x478010)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/stl_uninitialized.h:120
#13 0x00007ffff5dc76c5 in std::__uninitialized_copy<false>::__uninit_copy<Option const*, Option*> (__first=0x7fffffff2498, __last=0x7fffffffc848, __result=0x478010)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/stl_uninitialized.h:137
#14 0x00007ffff5dc768d in std::uninitialized_copy<Option const*, Option*> (__first=0x7fffffff2498, __last=0x7fffffffc848, __result=0x478010)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/stl_uninitialized.h:184
#15 0x00007ffff5dc75e9 in std::__uninitialized_copy_a<Option const*, Option*, Option> (__first=0x7fffffff2498, __last=0x7fffffffc848, __result=0x478010)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/stl_uninitialized.h:373
#16 0x00007ffff5e9b90e in std::vector<Option, std::allocator<Option> >::_M_range_initialize<Option const*> (this=0x7fffffffd638, __first=0x7fffffff2498, __last=0x7fffffffc848)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/stl_vector.h:1692
--Type <RET> for more, q to quit, c to continue without paging--
#17 0x00007ffff5e9a953 in std::vector<Option, std::allocator<Option> >::vector (this=0x7fffffffd638, __l=std::initializer_list of length 97 = {...}, __a=...)
    at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/stl_vector.h:679
#18 0x00007ffff5edc0c4 in get_rbd_options () at /home/mbenjamin/dev/ceph-cp/build/src/common/options/rbd_options.cc:12
#19 0x00007ffff5e4fd32 in build_options () at /home/mbenjamin/dev/ceph-cp/src/common/options/build_options.cc:44
#20 0x00007ffff5bb6100 in __cxx_global_var_init.55(void) () at /home/mbenjamin/dev/ceph-cp/src/common/options.cc:340
#21 0x00007ffff5bb6147 in _GLOBAL__sub_I_options.cc () from /home/mbenjamin/dev/ceph-cp/build/lib/libceph-common.so.2
#22 0x00007ffff7fcef77 in call_init (env=0x7fffffffe090, argv=0x7fffffffe058, argc=6, l=<optimized out>) at dl-init.c:90
#23 call_init (l=<optimized out>, argc=6, argv=0x7fffffffe058, env=0x7fffffffe090) at dl-init.c:27
#24 0x00007ffff7fcf06d in _dl_init (main_map=0x407850, argc=6, argv=0x7fffffffe058, env=0x7fffffffe090) at dl-init.c:137
#25 0x00007ffff7fcb5c2 in __GI__dl_catch_exception (exception=exception@entry=0x0, operate=operate@entry=0x7ffff7fd5c30 <call_dl_init>, args=args@entry=0x7fffffffd8e0) at dl-catch.c:211
#26 0x00007ffff7fd5bcc in dl_open_worker (a=a@entry=0x7fffffffda90) at dl-open.c:808
#27 0x00007ffff7fcb523 in __GI__dl_catch_exception (exception=exception@entry=0x7fffffffda70, operate=operate@entry=0x7ffff7fd5b30 <dl_open_worker>, args=args@entry=0x7fffffffda90) at dl-catch.c:237
#28 0x00007ffff7fd5f44 in _dl_open (file=0x7ffff7dd42ec "libganesha_rados_urls.so", mode=<optimized out>, caller_dlopen=0x7ffff7c86d3a <load_rados_config+24>, nsid=<optimized out>, argc=6, argv=0x7fffffffe058, env=0x7fffffffe090) at dl-open.c:884
#29 0x00007ffff797b714 in dlopen_doit (a=a@entry=0x7fffffffdd40) at dlopen.c:56
#30 0x00007ffff7fcb523 in __GI__dl_catch_exception (exception=exception@entry=0x7fffffffdc80, operate=0x7ffff797b6b0 <dlopen_doit>, args=0x7fffffffdd40) at dl-catch.c:237
#31 0x00007ffff7fcb679 in _dl_catch_error (objname=0x7fffffffdce8, errstring=0x7fffffffdcf0, mallocedp=0x7fffffffdce7, operate=<optimized out>, args=<optimized out>) at dl-catch.c:256
#32 0x00007ffff797b1f3 in _dlerror_run (operate=operate@entry=0x7ffff797b6b0 <dlopen_doit>, args=args@entry=0x7fffffffdd40) at dlerror.c:138
#33 0x00007ffff797b7cf in dlopen_implementation (dl_caller=<optimized out>, mode=<optimized out>, file=<optimized out>) at dlopen.c:71
#34 ___dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:81
#35 0x00007ffff7c86d3a in load_rados_config () at /home/mbenjamin/dev/nfs-ganesha/src/config_parsing/conf_url.c:85
#36 0x00007ffff7c86ff8 in config_url_init () at /home/mbenjamin/dev/nfs-ganesha/src/config_parsing/conf_url.c:123
--Type <RET> for more, q to quit, c to continue without paging--
Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xee2420) at src/tcmalloc.cc:1936
1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {
(gdb) c
Continuing.
</pre>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xee2400) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xef9080) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xede010) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xee0020) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 1, librgw_create (rgw=0x7ffff76615f0 <RGWFSM+816>, argc=2, <br /> argv=0x7fffffffd440) at /home/mbenjamin/dev/ceph-cp/src/rgw/librgw.cc:72<br />72 rc = rgwlib.init(args);<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xee2560) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xee2540) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) <br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xee2520) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xef9080) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0xee2440) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 6, rgw_global_init (defaults=0x7fffffffd298, <br /> args=std::vector of length 3, capacity 4 = {...}, module_type=8, code_env=CODE_ENVIRONMENT_DAEMON, flags=1)<br /> at /home/mbenjamin/dev/ceph-cp/src/rgw/rgw_common.cc:3051<br />3051 global_pre_init(defaults, args, module_type, code_env, flags);<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 7, ceph_argparse_early_args (<br /> args=std::vector of length 3, capacity 4 = {...}, module_type=8, cluster=0x7fffffffcda0, <br /> conf_file_list=0x7fffffffcdc0) at /home/mbenjamin/dev/ceph-cp/src/common/ceph_argparse.cc:496<br />496 auto orig_args = args;<br />(gdb) <br />[New Thread 0x7fffeb39e6c0 (LWP 1228515)]</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 2, global_pre_init (defaults=0x7fffffffd298, <br /> args=std::vector of length 0, capacity 4, module_type=8, code_env=CODE_ENVIRONMENT_DAEMON, flags=1)<br /> at /home/mbenjamin/dev/ceph-cp/src/global/global_init.cc:174<br />174 conf.do_argv_commands();<br />(gdb) c<br />Continuing.</p>
<p>Thread 1 "ganesha.nfsd" hit Breakpoint 10, tc_free (ptr=0x446e70) at src/tcmalloc.cc:1936<br />1936 void tc_free(void* ptr) PERFTOOLS_NOTHROW {<br />(gdb) c<br />Continuing.<br />src/tcmalloc.cc:333] Attempt to free invalid pointer 0x446e70</p>
<p>Thread 1 "ganesha.nfsd" received signal SIGABRT, Aborted.<br />__pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0)<br /> at pthread_kill.c:44<br />44 return INTERNAL_SYSCALL_ERROR_P (ret) ? INTERNAL_SYSCALL_ERRNO (ret) : 0; <br />(gdb) <br />"""</p> rgw - Bug #63380 (Closed): librgw: avoid illegal free while preparing to initialize rgwlib (https://tracker.ceph.com/issues/63380, 2023-10-31T18:42:36Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>The use of argv_to_vec in librgw_create(...) is both incorrect and unsafe; specifically, it causes an illegal free during the<br />delete of spl_args.</p> Dashboard - Bug #63100 (Resolved): mgr/dashboard: ninja/make install cannot be performed when das... (https://tracker.ceph.com/issues/63100, 2023-10-04T14:14:13Z, Matt Benjamin <mbenjamin@redhat.com>)
<a name="Description-of-problem"></a>
<h3 >Description of problem<a href="#Description-of-problem" class="wiki-anchor">¶</a></h3>
<p>The installation logic trivially attempts to install a "prebuilt" object that will not be present if the dashboard is disabled.<br />The one-liner fix, enclosing the install action in a test for the object's existence, should not require a tracker; a failure to install, like a<br />failure to build from source, should be a highly prioritized type of bug to fix.</p>
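<p>The described one-liner, sketched below (paths and variables hypothetical, not the actual PR): guard the install rule on the prebuilt artifact's existence:</p>

```cmake
# Only install the prebuilt dashboard frontend if it was actually built
# (it is absent when the dashboard is disabled at configure time).
if(EXISTS ${CMAKE_SOURCE_DIR}/src/pybind/mgr/dashboard/frontend/dist)
  install(DIRECTORY ${CMAKE_SOURCE_DIR}/src/pybind/mgr/dashboard/frontend/dist
          DESTINATION ${CEPH_INSTALL_DATADIR}/mgr/dashboard/frontend)
endif()
```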
<p>The PR fixing this issue was submitted on July 4 and approved by Ernesto Puerta on July 10. The non-existence of a tracker was clearly marked<br />in the checklist. This PR has not been merged in 4 months.</p>
<a name="Environment"></a>
<h3 >Environment<a href="#Environment" class="wiki-anchor">¶</a></h3>
<p>ALL</p>
<a name="How-reproducible"></a>
<h3 >How reproducible<a href="#How-reproducible" class="wiki-anchor">¶</a></h3>
<p>100%. Build with dashboard = OFF.</p>
<a name="Actual-results"></a>
<h3 >Actual results<a href="#Actual-results" class="wiki-anchor">¶</a></h3>
<p>make install or ninja install fails 100% of the time because an unbuilt dashboard cannot be installed.</p> Ceph - Bug #62778 (Pending Backport): cmake: BuildFIO.cmake should not introduce -std=gnu++17 (https://tracker.ceph.com/issues/62778, 2023-09-08T22:05:28Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>This is not correct in general, and it is a build bug because fio-objectstore<br />includes C++20 headers.</p> rgw - Bug #62249 (New): rgw: dbstore: bucket::list with marker param should return the first entr... (https://tracker.ceph.com/issues/62249, 2023-07-31T16:23:28Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>In my testing, dbstore's listing includes the marker itself in this case.</p> rgw - Bug #61338 (Duplicate): rgw: qactive perf counter may leak on errors (https://tracker.ceph.com/issues/61338, 2023-05-22T13:15:13Z, Matt Benjamin <mbenjamin@redhat.com>)
<p>One of my customers ran some bench tests on their Ceph cluster. As part of their tests, they made a large number of requests using the following s3bench script: <a class="external" href="https://github.com/shpaz/s3bench/blob/master/s3bench.py">https://github.com/shpaz/s3bench/blob/master/s3bench.py</a></p>
<p>They used simple automation that kills (via the `kill` command) every running process executing this script. After killing the processes, they noticed that the `ceph_rgw_qactive` metric provided by the mgr stayed as high as before, even though all operations against the radosgw had stopped.</p>
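<p>A language-agnostic sketch (Python, hypothetical names; not rgw code) of how such a gauge can leak: if the decrement is not on every exit path, aborted requests leave the counter permanently elevated, which a try/finally-style guard avoids:</p>

```python
class Gauge:
    """Minimal stand-in for a qactive-style perf counter."""
    def __init__(self) -> None:
        self.value = 0
    def inc(self) -> None:
        self.value += 1
    def dec(self) -> None:
        self.value -= 1

qactive = Gauge()

def handle_request_leaky(fail: bool) -> None:
    qactive.inc()
    if fail:
        raise RuntimeError("client killed mid-request")
    qactive.dec()          # never reached on error: the counter leaks

def handle_request_guarded(fail: bool) -> None:
    qactive.inc()
    try:
        if fail:
            raise RuntimeError("client killed mid-request")
    finally:
        qactive.dec()      # runs on every exit path, error or not
```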
<p>ASK:</p>
<p>They would like an explanation of this metric and of why it stays high. Their Grafana dashboard uses this metric for its `rgw connections` graph.</p>