Bug #61774


centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons

Added by Laura Flores 11 months ago. Updated 1 day ago.

Status: Fix Under Review
Priority: Normal
Assignee: Laura Flores
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: reef
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Component(RADOS): -
Pull request ID: 52639
Crash signature (v1): -
Crash signature (v2): -

Description

centos 9 testing based on https://github.com/ceph/ceph/pull/50441 reveals a rocksdb "Leak_StillReachable" memory leak in the mons:

/a/lflores-2023-06-22_15:43:37-rados-wip-yuri2-testing-2023-06-12-1303-distro-default-smithi/7311970

<error>
  <unique>0x113f7</unique>
  <tid>1</tid>
  <threadname>ceph-mon</threadname>
  <kind>Leak_StillReachable</kind>
  <xwhat>
    <text>8 bytes in 1 blocks are still reachable in loss record 1 of 172</text>
    <leakedbytes>8</leakedbytes>
    <leakedblocks>1</leakedblocks>
  </xwhat>
  <stack>
    <frame>
      <ip>0x484622F</ip>
      <obj>/usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so</obj>
      <fn>operator new[](unsigned long)</fn>
      <dir>/builddir/build/BUILD/valgrind-3.19.0/coregrind/m_replacemalloc</dir>
      <file>vg_replace_malloc.c</file>
      <line>640</line>
    </frame>
    <frame>
      <ip>0x82B381</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>allocate</fn>
      <dir>/usr/include/c++/11/ext</dir>
      <file>new_allocator.h</file>
      <line>127</line>
    </frame>
    <frame>
      <ip>0x82B381</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>allocate</fn>
      <dir>/usr/include/c++/11/bits</dir>
      <file>alloc_traits.h</file>
      <line>464</line>
    </frame>
    <frame>
      <ip>0x82B381</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>_M_allocate</fn>
      <dir>/usr/include/c++/11/bits</dir>
      <file>stl_vector.h</file>
      <line>346</line>
    </frame>
    <frame>
      <ip>0x82B381</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>void std::vector&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt;, std::allocator&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt; &gt; &gt;::_M_realloc_insert&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt; &gt;(__gnu_cxx::__normal_iterator&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt;*, std::vector&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt;, std::allocator&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt; &gt; &gt; &gt;, std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt;&amp;&amp;)</fn>
      <dir>/usr/include/c++/11/bits</dir>
      <file>vector.tcc</file>
      <line>440</line>
    </frame>
    <frame>
      <ip>0x982603</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>emplace_back&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt; &gt;</fn>
      <dir>/usr/include/c++/11/bits</dir>
      <file>vector.tcc</file>
      <line>121</line>
    </frame>
    <frame>
      <ip>0x982603</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>AddFactoryEntry</fn>
      <dir>/usr/src/debug/ceph-18.0.0-4378.gbab5f35c.el9.x86_64/src/rocksdb/include/rocksdb/utilities</dir>
      <file>object_registry.h</file>
      <line>272</line>
    </frame>
    <frame>
      <ip>0x982603</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>std::function&lt;rocksdb::FileSystem* (std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, std::unique_ptr&lt;rocksdb::FileSystem, std::default_delete&lt;rocksdb::FileSystem&gt; &gt;*, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt;*)&gt; const&amp; rocksdb::ObjectLibrary::AddFactory&lt;rocksdb::FileSystem&gt;(rocksdb::ObjectLibrary::PatternEntry const&amp;, std::function&lt;rocksdb::FileSystem* (std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt; const&amp;, std::unique_ptr&lt;rocksdb::FileSystem, std::default_delete&lt;rocksdb::FileSystem&gt; &gt;*, std::__cxx11::basic_string&lt;char, std::char_traits&lt;char&gt;, std::allocator&lt;char&gt; &gt;*)&gt; const&amp;)</fn>
      <dir>/usr/src/debug/ceph-18.0.0-4378.gbab5f35c.el9.x86_64/src/rocksdb/include/rocksdb/utilities</dir>
      <file>object_registry.h</file>
      <line>256</line>
    </frame>
    <frame>
      <ip>0x3DE34B</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>__static_initialization_and_destruction_0(int, int) [clone .constprop.0]</fn>
      <dir>/usr/src/debug/ceph-18.0.0-4378.gbab5f35c.el9.x86_64/src/rocksdb/env</dir>
      <file>fs_posix.cc</file>
      <line>1283</line>
    </frame>
    <frame>
      <ip>0x57BFFDA</ip>
      <obj>/usr/lib64/libc.so.6</obj>
      <fn>__libc_start_main@@GLIBC_2.34</fn>
    </frame>
    <frame>
      <ip>0x3E51A4</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>(below main)</fn>
    </frame>
  </stack>
</error>
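
For context, the stack above is RocksDB's static object-registry initialization in fs_posix.cc populating ObjectLibrary's entry vector before main(). A minimal sketch of that pattern (the ObjectLibraryLike/Entry names below are illustrative, not RocksDB's actual code) which produces exactly this kind of still-reachable block:

#include <memory>
#include <string>
#include <utility>
#include <vector>

struct Entry {
    explicit Entry(std::string p) : pattern(std::move(p)) {}
    std::string pattern;
};

// Process-lifetime registry: Default() news the instance once and never
// frees it, so the vector's heap buffer (reallocated by emplace_back /
// _M_realloc_insert, as in the stack above) stays reachable until exit.
struct ObjectLibraryLike {
    std::vector<std::unique_ptr<Entry>> entries;
    void AddFactory(std::string pattern) {
        entries.push_back(std::make_unique<Entry>(std::move(pattern)));
    }
    static ObjectLibraryLike& Default() {
        static auto* lib = new ObjectLibraryLike;  // intentionally never deleted
        return *lib;
    }
};

// Runs from the compiler-generated __static_initialization_and_destruction_0
// before main(), like the registration at the bottom of rocksdb/env/fs_posix.cc.
static bool registered =
    (ObjectLibraryLike::Default().AddFactory("posix"), true);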


Related issues 3 (2 open, 1 closed)

Related to RADOS - Bug #61820: mon: segfault on rocksdb opening (Pending Backport, assignee Matan Breizman)
Related to RADOS - Bug #64637: LeakPossiblyLost in BlueStore::_do_write_small() in osd (New)
Has duplicate RADOS - Bug #64214: Health check failed: 1 osds down (OSD_DOWN) in cluster logs (Duplicate)
Actions #1

Updated by Laura Flores 11 months ago

  • Subject changed from rocksdb "Leak_StillReachable" memory leak in mons to centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
Actions #4

Updated by Laura Flores 11 months ago

By adding this change to "qa/valgrind.supp":

diff --git a/qa/valgrind.supp b/qa/valgrind.supp
index 1a73a84e5a8..9a1b60e01cc 100644
--- a/qa/valgrind.supp
+++ b/qa/valgrind.supp
@@ -458,6 +458,20 @@
         fun:*rocksdb*VersionBuilder*Rep*LoadTableHandlers*
         ...
 }
+{
+        rocksdb FileSystem
+        Memcheck:Leak
+        ...
+        fun:*rocksdb*FileSystem*
+        ...
+}

Part of the leak was suppressed, but there are still issues with "static_initialization_and_destruction":
/a/lflores-2023-06-22_20:42:43-rados-wip-yuri2-testing-2023-06-12-1303-distro-default-smithi/7313276/remote/smithi031/log/valgrind/mon.a.log.gz

<error>
  <unique>0x7d50</unique>
  <tid>1</tid>
  <threadname>ceph-mon</threadname>
  <kind>Leak_StillReachable</kind>
  <xwhat>
    <text>16 bytes in 1 blocks are still reachable in loss record 7 of 172</text>
    <leakedbytes>16</leakedbytes>
    <leakedblocks>1</leakedblocks>
  </xwhat>
  <stack>
    <frame>
      <ip>0x484622F</ip>
      <obj>/usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so</obj>
      <fn>operator new[](unsigned long)</fn>
      <dir>/builddir/build/BUILD/valgrind-3.19.0/coregrind/m_replacemalloc</dir>
      <file>vg_replace_malloc.c</file>
      <line>640</line>
    </frame>
    <frame>
      <ip>0x3C99CB</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>__static_initialization_and_destruction_0(int, int) [clone .constprop.0]</fn>
      <dir>/usr/src/debug/ceph-18.0.0-4378.gbab5f35c.el9.x86_64/src/rocksdb/db</dir>
      <file>error_handler.cc</file>
      <line>255</line>
    </frame>
    <frame>
      <ip>0x57BFFDA</ip>
      <obj>/usr/lib64/libc.so.6</obj>
      <fn>__libc_start_main@@GLIBC_2.34</fn>
    </frame>
    <frame>
      <ip>0x3E51A4</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>(below main)</fn>
    </frame>
  </stack>
</error>

Actions #5

Updated by Radoslaw Zarzynski 11 months ago

Leak_StillReachable – it looks like we don't properly clean up the memory at monitor shutdown. I think these leaks (and other similar ones we've already suppressed) are just different symptoms of the same illness.

Useful link on "definitely lost" vs. "still reachable": https://stackoverflow.com/a/50504703.
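
As a minimal illustration of the difference (a hypothetical standalone program, not Ceph code): run under valgrind --leak-check=full --show-reachable=yes it produces one block in each category.

#include <string>

// Global pointer: the allocation is still pointed to when the process
// exits, so valgrind reports it as "still reachable".
static std::string* g_kept = nullptr;

int main() {
    g_kept = new std::string(64, 'x');            // Leak_StillReachable
    std::string* dropped = new std::string(64, 'y');
    dropped = nullptr;                            // last pointer gone:
    (void)dropped;                                // Leak_DefinitelyLost
    return 0;                                     // no delete on purpose
}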

Actions #6

Updated by Radoslaw Zarzynski 11 months ago

From GCC's source code it looks like __static_initialization_and_destruction_0 is a special, compiler-generated function that handles initializations and destructions for objects with static storage duration:

/* The name of the function we create to handle initializations and
   destructions for objects with static storage duration.  */
#define SSDF_IDENTIFIER "__static_initialization_and_destruction" 
...
/* Begins the generation of the function that will handle all
   initialization and destruction of objects with static storage
   duration.  The function generated takes two parameters of type
   `int': __INITIALIZE_P and __PRIORITY.  If __INITIALIZE_P is
   nonzero, it performs initializations.  Otherwise, it performs
   destructions.  It only performs those initializations or
   destructions with the indicated __PRIORITY.  The generated function
   returns no value.

   It is assumed that this function will only be called once per
   translation unit.  */

static tree
start_static_storage_duration_function (unsigned count)
{
  tree type;
  tree body;
  char id[sizeof (SSDF_IDENTIFIER) + 1 /* '\0' */ + 32];

  /* Create the identifier for this function.  It will be of the form
     SSDF_IDENTIFIER_<number>.  */
  sprintf (id, "%s_%u", SSDF_IDENTIFIER, count);
...

Do we maybe call _exit() somewhere?
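
If so, that would explain the reports: a minimal standalone sketch (illustrative assumption, not Ceph code) of how _exit() leaves a static's memory still reachable:

#include <string>
#include <unistd.h>   // _exit()

// A file-scope static with a non-trivial constructor: GCC initializes it
// from the generated __static_initialization_and_destruction_0(int, int)
// before main() and registers its destructor via __cxa_atexit.
static std::string s_buf(64, 'r');   // heap buffer allocated before main()

int main() {
    // Returning from main (or calling exit()) would run ~basic_string and
    // free the buffer; _exit() skips atexit handlers and static destructors,
    // so valgrind sees the buffer as still reachable.
    _exit(0);
}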

Actions #7

Updated by Radoslaw Zarzynski 10 months ago

  • Assignee changed from Laura Flores to Kamoltat (Junior) Sirivadhna
  • Priority changed from High to Normal

I think there are two parallel directions here:

1. As an immediate workaround, we should suppress the leak – I believe Laura is working on that.
2. As a longer-term fix, we should look into the logs and try to figure out how the normal deinit is bypassed – I guess that might be interesting for Junior.

If we get the fix from 2), we could revert (even in the same PR) the workaround from 1). Reassigning to Junior for the sake of 2).

Actions #8

Updated by Laura Flores 10 months ago

Makes sense, Radek. I'll continue working on silencing the valgrind failure in the meantime.

Actions #9

Updated by Matan Breizman 10 months ago

  • Related to Bug #61820: mon: segfault on rocksdb opening added
Actions #10

Updated by Laura Flores 10 months ago

I've tried many different combinations to silence this valgrind leak, with no luck.

This is the most recent combination I've tried, with the help of "--gen-suppressions=all" (which makes valgrind auto-generate a suppression entry for each error). With this, I'm still running into the valgrind failure noted in https://tracker.ceph.com/issues/61774#note-4.

diff --git a/qa/suites/rados/valgrind-leaks/1-start.yaml b/qa/suites/rados/valgrind-leaks/1-start.yaml
index 9263f2a838b..551aa5e7ebe 100644
--- a/qa/suites/rados/valgrind-leaks/1-start.yaml
+++ b/qa/suites/rados/valgrind-leaks/1-start.yaml
@@ -21,8 +21,8 @@ overrides:
       osd:
         osd fast shutdown: false
     valgrind:
-      mon: [--tool=memcheck, --leak-check=full, --show-reachable=yes]
-      osd: [--tool=memcheck]
+      mon: [--tool=memcheck, --leak-check=full, --show-reachable=yes, --gen-suppressions=all]
+      osd: [--tool=memcheck, --gen-suppressions=all]
 roles:
 - [mon.a, mon.b, mon.c, mgr.x, mgr.y, osd.0, osd.1, osd.2, client.0]
 tasks:
diff --git a/qa/valgrind.supp b/qa/valgrind.supp
index 1a73a84e5a8..4d6394e6500 100644
--- a/qa/valgrind.supp
+++ b/qa/valgrind.supp
@@ -458,6 +458,23 @@
         fun:*rocksdb*VersionBuilder*Rep*LoadTableHandlers*
         ...
 }
+{
+        rocksdb FileSystem
+        Memcheck:Leak
+        ...
+        fun:*rocksdb*FileSystem*
+        ...
+}
+{
+        static_initialization_and_destruction
+        Memcheck:Leak
+        ...
+        fun:_Znam
+        fun:_Z41__static_initialization_and_destruction_0ii.constprop.0
+        fun:__libc_start_main@@GLIBC_2.34
+        fun:(below main)
+        ...
+}

Here is the latest teuthology results using the above changes: http://pulpito.front.sepia.ceph.com/lflores-2023-07-19_00:49:10-rados-wip-yuri2-testing-2023-07-15-0802-distro-default-smithi/7343127/

This is the valgrind leak that still shows up:

<error>
  <unique>0x5612</unique>
  <tid>1</tid>
  <threadname>ceph-mon</threadname>
  <kind>Leak_StillReachable</kind>
  <xwhat>
    <text>16 bytes in 1 blocks are still reachable in loss record 8 of 172</text>
    <leakedbytes>16</leakedbytes>
    <leakedblocks>1</leakedblocks>
  </xwhat>
  <stack>
    <frame>
      <ip>0x484622F</ip>
      <obj>/usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so</obj>
      <fn>operator new[](unsigned long)</fn>
      <dir>/builddir/build/BUILD/valgrind-3.19.0/coregrind/m_replacemalloc</dir>
      <file>vg_replace_malloc.c</file>
      <line>640</line>
    </frame>
    <frame>
      <ip>0x9601A1</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>rocksdb::ObjectLibrary::Default()</fn>
      <dir>/usr/src/debug/ceph-18.0.0-4991.ga4658f21.el9.x86_64/src/rocksdb/utilities</dir>
      <file>object_registry.cc</file>
      <line>213</line>
    </frame>
    <frame>
      <ip>0x3DB8C4</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>__static_initialization_and_destruction_0(int, int) [clone .constprop.0]</fn>
      <dir>/usr/src/debug/ceph-18.0.0-4991.ga4658f21.el9.x86_64/src/rocksdb/env</dir>
      <file>fs_posix.cc</file>
      <line>1283</line>
    </frame>
    <frame>
      <ip>0x57C8FDA</ip>
      <obj>/usr/lib64/libc.so.6</obj>
      <fn>__libc_start_main@@GLIBC_2.34</fn>
    </frame>
    <frame>
      <ip>0x3E2834</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>(below main)</fn>
    </frame>
  </stack>
</error>

Another format I tried was:

{
        static_initialization_and_destruction
        Memcheck:Leak
        ...
        fun:*static_initialization_and_destruction_0*
        ...
}

But the result was the same.
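
For reference, a suppression keyed to the surviving stack might look like the following (an untested sketch: it pins the operator new[] frame by its mangled name _Znam plus the rocksdb::ObjectLibrary::Default() frame shown above, and limits matching to still-reachable blocks via match-leak-kinds):

{
        rocksdb ObjectLibrary Default
        Memcheck:Leak
        match-leak-kinds: reachable
        fun:_Znam
        fun:*rocksdb*ObjectLibrary*Default*
        ...
}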

Actions #11

Updated by Matan Breizman 10 months ago

from:

      <obj>/usr/bin/ceph-mon</obj>
      <fn>__static_initialization_and_destruction_0(int, int) [clone .constprop.0]</fn>
      <dir>/usr/src/debug/ceph-18.0.0-4378.gbab5f35c.el9.x86_64/src/rocksdb/db</dir>
      <file>error_handler.cc</file>
      <line>255</line>

See:

STATIC_AVOID_DESTRUCTION(const Status, kOkStatus){Status::OK()};

// Coding guidelines say to avoid static objects with non-trivial destructors,
// because it's easy to cause trouble (UB) in static destruction. This
// macro makes it easier to define static objects that are normally never
// destructed, except are destructed when running under ASAN

It seems to be an intentional leak, one that is only dismissed under #ifdef ROCKSDB_VALGRIND_RUN. We should take this flag into consideration as well to avoid this error.
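
For reference, a simplified sketch of the pattern behind STATIC_AVOID_DESTRUCTION (illustrative only, not the actual RocksDB macro; per the quoted comment, the real one does run the destructor in ASAN/valgrind builds):

#include <new>
#include <string>
#include <utility>

// A "static that is never destructed": construct the object into static
// storage with placement new and never run ~T(), so static-destruction-order
// UB is avoided at the cost of an intentional, still-reachable "leak" at exit.
template <typename T, typename... Args>
T& static_avoid_destruction(Args&&... args) {
    alignas(T) static unsigned char storage[sizeof(T)];
    static T* obj = new (storage) T(std::forward<Args>(args)...);
    return *obj;
}

// Analogous to RocksDB's kOkStatus: lives for the whole process lifetime.
const std::string& ok_status() {
    return static_avoid_destruction<std::string>("OK");
}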

Actions #12

Updated by Laura Flores 9 months ago

I did manage to reproduce this failure in a centos 9 container. install-deps.sh is not configured for centos 9 yet, so I had to do some manual installs (running install-deps.sh and then installing whatever it reported as missing).

After that, here are the steps to reproduce:

$ cd ceph/build
$ ninja vstart -j$(nproc)
$ ../src/vstart.sh --debug --new -x --localhost --bluestore --without-dashboard
$ mkdir /home/tmp/
$ mkdir /home/tmp/valgrind
$ touch /home/tmp/valgrind/mon.a.log
$ ps waxu | grep ceph-mon
root       38217  1.1  0.0 354064 139616 ?       Ssl  18:03   0:01 /root/ceph/build/bin/ceph-mon -i a -c /root/ceph/build/ceph.conf
root       38258  0.7  0.0 353048 135160 ?       Ssl  18:03   0:01 /root/ceph/build/bin/ceph-mon -i b -c /root/ceph/build/ceph.conf
root       38299  0.7  0.0 352020 134604 ?       Ssl  18:03   0:01 /root/ceph/build/bin/ceph-mon -i c -c /root/ceph/build/ceph.conf
root       42886  0.0  0.0   3876  2012 pts/0    S+   18:06   0:00 grep --color=auto ceph-mon
$ kill 38217
$ OPENSSL_ia32cap='~0x1000000000000000' valgrind --trace-children=no --child-silent-after-fork=yes '--soname-synonyms=somalloc=*tcmalloc*' --num-callers=50 --suppressions=/root/ceph/qa/valgrind.supp --xml=yes --xml-file=/home/tmp/valgrind/mon.a.log --time-stamp=yes --vgdb=yes --exit-on-first-error=no --error-exitcode=42 --tool=memcheck --leak-check=full --show-reachable=yes /root/ceph/build/bin/ceph-mon -i a -c /root/ceph/build/ceph.conf -d

Let the last step run for a bit (about 30 seconds to a minute), then kill it with Ctrl+C. Check /home/tmp/valgrind/mon.a.log, and the valgrind leak will be there.

Actions #13

Updated by Laura Flores 9 months ago

  • Status changed from New to Fix Under Review
  • Assignee changed from Kamoltat (Junior) Sirivadhna to Laura Flores
  • Pull request ID set to 52639

Trying this. Currently in testing.

Actions #14

Updated by Radoslaw Zarzynski 9 months ago

We agreed it's not a blocker for Reef.

Actions #15

Updated by Kamoltat (Junior) Sirivadhna 8 months ago

/a/yuriw-2023-08-16_18:39:08-rados-wip-yuri3-testing-2023-08-15-0955-distro-default-smithi/7370286/

Actions #16

Updated by Aishwarya Mathuria 7 months ago

/a/yuriw-2023-10-05_21:43:37-rados-wip-yuri6-testing-2023-10-04-0901-distro-default-smithi/7412027

Actions #17

Updated by Nitzan Mordechai 7 months ago

/a/yuriw-2023-10-16_14:44:27-rados-wip-yuri10-testing-2023-10-11-0812-distro-default-smithi/7429792
/a/yuriw-2023-10-16_14:44:27-rados-wip-yuri10-testing-2023-10-11-0812-distro-default-smithi/7429928
/a/yuriw-2023-10-16_14:44:27-rados-wip-yuri10-testing-2023-10-11-0812-distro-default-smithi/7429857
/a/yuriw-2023-10-16_14:44:27-rados-wip-yuri10-testing-2023-10-11-0812-distro-default-smithi/7429858
/a/yuriw-2023-10-16_14:44:27-rados-wip-yuri10-testing-2023-10-11-0812-distro-default-smithi/7429719

Actions #18

Updated by Matan Breizman 7 months ago

/a/yuriw-2023-10-11_14:08:36-rados-wip-yuri11-testing-2023-10-10-1226-reef-distro-default-smithi/7421712/
/a/yuriw-2023-10-11_14:08:36-rados-wip-yuri11-testing-2023-10-10-1226-reef-distro-default-smithi/7421713/

Actions #19

Updated by Nitzan Mordechai 6 months ago

/a/yuriw-2023-10-30_15:34:36-rados-wip-yuri10-testing-2023-10-27-0804-distro-default-smithi/7441266
/a/yuriw-2023-10-30_15:34:36-rados-wip-yuri10-testing-2023-10-27-0804-distro-default-smithi/7441336
/a/yuriw-2023-10-30_15:34:36-rados-wip-yuri10-testing-2023-10-27-0804-distro-default-smithi/7441129
/a/yuriw-2023-10-30_15:34:36-rados-wip-yuri10-testing-2023-10-27-0804-distro-default-smithi/7441267
/a/yuriw-2023-10-30_15:34:36-rados-wip-yuri10-testing-2023-10-27-0804-distro-default-smithi/7441201

Actions #20

Updated by Kamoltat (Junior) Sirivadhna 6 months ago

yuriw-2023-11-02_14:18:00-rados-wip-yuri-testing-2023-11-01-1538-reef-distro-default-smithi/7444680/remote/smithi088/log/valgrind/mon.c.log.gz

Actions #21

Updated by Radoslaw Zarzynski 6 months ago

From reef.1 testing: https://pulpito.ceph.com/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/7448381/

rzarzynski@teuthology:/a/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/7448381$ less ./remote/smithi148/log/valgrind/mon.a.log.gz
...
    <frame>
      <ip>0x82BB51</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>void std::vector&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt;, std::allocator&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt; &gt; &gt;::_M_realloc_insert&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt; &gt;(__gnu_cxx::__normal_iterator&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt;*, std::vector&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt;, std::allocator&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt; &gt; &gt; &gt;, std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt;&amp;&amp;)</fn>
      <dir>/usr/include/c++/11/bits</dir>
      <file>vector.tcc</file>
      <line>440</line>
    </frame>
    <frame>
      <ip>0x982DD3</ip>
      <obj>/usr/bin/ceph-mon</obj>
      <fn>emplace_back&lt;std::unique_ptr&lt;rocksdb::ObjectLibrary::Entry, std::default_delete&lt;rocksdb::ObjectLibrary::Entry&gt; &gt; &gt;</fn>
      <dir>/usr/include/c++/11/bits</dir>
      <file>vector.tcc</file>
      <line>121</line>
    </frame>
Actions #23

Updated by Laura Flores 5 months ago

/a/yuriw-2023-11-20_15:34:30-rados-wip-yuri7-testing-2023-11-17-0819-distro-default-smithi/7463356

Actions #24

Updated by Nitzan Mordechai 5 months ago

/a/yuriw-2023-12-07_16:37:24-rados-wip-yuri8-testing-2023-12-06-1425-distro-default-smithi/7482191
/a/yuriw-2023-12-07_16:37:24-rados-wip-yuri8-testing-2023-12-06-1425-distro-default-smithi/7482160
/a/yuriw-2023-12-07_16:37:24-rados-wip-yuri8-testing-2023-12-06-1425-distro-default-smithi/7482192
/a/yuriw-2023-12-07_16:37:24-rados-wip-yuri8-testing-2023-12-06-1425-distro-default-smithi/7482172
/a/yuriw-2023-12-07_16:37:24-rados-wip-yuri8-testing-2023-12-06-1425-distro-default-smithi/7482181
/a/yuriw-2023-12-07_16:37:24-rados-wip-yuri8-testing-2023-12-06-1425-distro-default-smithi/7482202

Actions #25

Updated by Nitzan Mordechai 5 months ago

/a/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/7487541
/a/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/7487610
/a/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/7487750
/a/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/7487751
/a/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/7487683
/a/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/7487822

Actions #26

Updated by Radoslaw Zarzynski 5 months ago

bump up

Actions #27

Updated by Matan Breizman 4 months ago

/a/yuriw-2023-12-26_16:10:01-rados-wip-yuri3-testing-2023-12-19-1211-distro-default-smithi/7501413

Actions #28

Updated by Aishwarya Mathuria 4 months ago

/a/yuriw-2024-01-03_16:19:00-rados-wip-yuri6-testing-2024-01-02-0832-distro-default-smithi/7505504/
/a/yuriw-2024-01-03_16:19:00-rados-wip-yuri6-testing-2024-01-02-0832-distro-default-smithi/7505571/
/a/yuriw-2024-01-03_16:19:00-rados-wip-yuri6-testing-2024-01-02-0832-distro-default-smithi/7505572/
/a/yuriw-2024-01-03_16:19:00-rados-wip-yuri6-testing-2024-01-02-0832-distro-default-smithi/7505643/

Actions #30

Updated by Nitzan Mordechai 3 months ago

/a/yuriw-2024-01-18_15:10:37-rados-wip-yuri3-testing-2024-01-17-0753-distro-default-smithi/7520619
/a/yuriw-2024-01-18_15:10:37-rados-wip-yuri3-testing-2024-01-17-0753-distro-default-smithi/7520474
/a/yuriw-2024-01-18_15:10:37-rados-wip-yuri3-testing-2024-01-17-0753-distro-default-smithi/7520407
/a/yuriw-2024-01-18_15:10:37-rados-wip-yuri3-testing-2024-01-17-0753-distro-default-smithi/7520546
/a/yuriw-2024-01-18_15:10:37-rados-wip-yuri3-testing-2024-01-17-0753-distro-default-smithi/7520333
/a/yuriw-2024-01-18_15:10:37-rados-wip-yuri3-testing-2024-01-17-0753-distro-default-smithi/7520475

Actions #31

Updated by Laura Flores 3 months ago

  • Has duplicate Bug #64214: Health check failed: 1 osds down (OSD_DOWN) in cluster logs. added
Actions #32

Updated by Kamoltat (Junior) Sirivadhna 3 months ago

/a/yuriw-2024-01-31_19:20:14-rados-wip-yuri3-testing-2024-01-29-1434-distro-default-smithi/7540732

Actions #33

Updated by Laura Flores 3 months ago

Update: Working to refine the valgrind.supp file; new fix should be ready soon.

Actions #34

Updated by Aishwarya Mathuria 3 months ago

/a/yuriw-2024-02-13_15:50:02-rados-wip-yuri2-testing-2024-02-12-0808-reef-distro-default-smithi/7558336
/a/yuriw-2024-02-13_15:50:02-rados-wip-yuri2-testing-2024-02-12-0808-reef-distro-default-smithi/7558344
/a/yuriw-2024-02-13_15:50:02-rados-wip-yuri2-testing-2024-02-12-0808-reef-distro-default-smithi/7558338

Actions #35

Updated by Laura Flores 2 months ago

  • Related to Bug #64637: LeakPossiblyLost in BlueStore::_do_write_small() in osd added
Actions #36

Updated by Laura Flores about 2 months ago

Update on this: The PR is ready to be reviewed again.

Actions #37

Updated by Laura Flores about 2 months ago

/a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581722
/a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581653

Actions #38

Updated by Sridhar Seshasayee about 2 months ago

/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587531
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587717
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587719
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587721
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587849
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587941

Actions #39

Updated by Matan Breizman about 2 months ago

/a/yuriw-2024-03-08_16:19:51-rados-wip-yuri2-testing-2024-03-01-1606-distro-default-smithi/7587174/

Actions #40

Updated by Aishwarya Mathuria about 2 months ago

/a/yuriw-2024-03-13_19:26:09-rados-wip-yuri-testing-2024-03-12-1240-reef-distro-default-smithi/7598397
/a/yuriw-2024-03-13_19:26:09-rados-wip-yuri-testing-2024-03-12-1240-reef-distro-default-smithi/7598394

Actions #41

Updated by Radoslaw Zarzynski about 2 months ago

Passed QA.

Actions #42

Updated by Aishwarya Mathuria 17 days ago

/a/yuriw-2024-04-09_14:35:50-rados-wip-yuri5-testing-2024-03-21-0833-distro-default-smithi/7648705
/a/yuriw-2024-04-09_14:35:50-rados-wip-yuri5-testing-2024-03-21-0833-distro-default-smithi/7648842
/a/yuriw-2024-04-09_14:35:50-rados-wip-yuri5-testing-2024-03-21-0833-distro-default-smithi/7648568
/a/yuriw-2024-04-09_14:35:50-rados-wip-yuri5-testing-2024-03-21-0833-distro-default-smithi/7648640
/a/yuriw-2024-04-09_14:35:50-rados-wip-yuri5-testing-2024-03-21-0833-distro-default-smithi/7648704

Actions #43

Updated by Matan Breizman 16 days ago

/a/yuriw-2024-04-16_23:25:35-rados-wip-yuriw-testing-20240416.150233-distro-default-smithi/7659275/
/a/yuriw-2024-04-16_23:25:35-rados-wip-yuriw-testing-20240416.150233-distro-default-smithi/7659345/
/a/yuriw-2024-04-16_23:25:35-rados-wip-yuriw-testing-20240416.150233-distro-default-smithi/7659406/
/a/yuriw-2024-04-16_23:25:35-rados-wip-yuriw-testing-20240416.150233-distro-default-smithi/7659407/
/a/yuriw-2024-04-16_23:25:35-rados-wip-yuriw-testing-20240416.150233-distro-default-smithi/7659470/

Actions #44

Updated by Laura Flores 12 days ago

Update: Still working to understand why the latest fix worked against my local reproducer but not in teuthology.

Actions #45

Updated by Matan Breizman 3 days ago

  • Backport set to reef

/a/yuriw-2024-04-20_01:10:46-rados-wip-yuri7-testing-2024-04-18-1351-reef-distro-default-smithi/7664183

Actions #46

Updated by Sridhar Seshasayee 2 days ago

Observed on Squid:
/a/yuriw-2024-04-30_03:21:19-rados-wip-yuri4-testing-2024-04-29-0642-distro-default-smithi/7680193
/a/yuriw-2024-04-30_03:21:19-rados-wip-yuri4-testing-2024-04-29-0642-distro-default-smithi/7680255
/a/yuriw-2024-04-30_03:21:19-rados-wip-yuri4-testing-2024-04-29-0642-distro-default-smithi/7680256
/a/yuriw-2024-04-30_03:21:19-rados-wip-yuri4-testing-2024-04-29-0642-distro-default-smithi/7680320

Actions #47

Updated by Aishwarya Mathuria 1 day ago

/a/yuriw-2024-04-30_14:17:59-rados-wip-yuri5-testing-2024-04-17-1400-distro-default-smithi/7680976/
/a/yuriw-2024-04-30_14:17:59-rados-wip-yuri5-testing-2024-04-17-1400-distro-default-smithi/7681106
/a/yuriw-2024-04-30_14:17:59-rados-wip-yuri5-testing-2024-04-17-1400-distro-default-smithi/7681063
/a/yuriw-2024-04-30_14:17:59-rados-wip-yuri5-testing-2024-04-17-1400-distro-default-smithi/7681062
