Bug #53575
Status: Closed
Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
% Done: 0%
Description
Found in /a/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi/6553724
The following is a report from /ceph/teuthology-archive/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi/6553724/remote/smithi179/log/valgrind/osd.0.log.gz:
<error>
<unique>0x281c0eb</unique>
<tid>1</tid>
<kind>Leak_PossiblyLost</kind>
<xwhat>
<text>32 bytes in 1 blocks are possibly lost in loss record 269 of 696</text>
<leakedbytes>32</leakedbytes>
<leakedblocks>1</leakedblocks>
</xwhat>
<stack>
<frame>
<ip>0x4C3721A</ip>
<obj>/usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so</obj>
<fn>calloc</fn>
<dir>/builddir/build/BUILD/valgrind-3.16.0/coregrind/m_replacemalloc</dir>
<file>vg_replace_malloc.c</file>
<line>760</line>
</frame>
<frame>
<ip>0x870D685</ip>
<obj>/usr/lib64/libnl-3.so.200.26.0</obj>
<fn>__trans_list_add</fn>
</frame>
<frame>
<ip>0x848FD4C</ip>
<obj>/usr/lib64/libnl-route-3.so.200.26.0</obj>
</frame>
<frame>
<ip>0x400F8B9</ip>
<obj>/usr/lib64/ld-2.28.so</obj>
<fn>call_init.part.0</fn>
</frame>
<frame>
<ip>0x400F9B9</ip>
<obj>/usr/lib64/ld-2.28.so</obj>
<fn>_dl_init</fn>
</frame>
<frame>
<ip>0x4000FD9</ip>
<obj>/usr/lib64/ld-2.28.so</obj>
</frame>
<frame>
<ip>0x5</ip>
</frame>
<frame>
<ip>0x1FFF000C1E</ip>
</frame>
<frame>
<ip>0x1FFF000C27</ip>
</frame>
<frame>
<ip>0x1FFF000C2A</ip>
</frame>
<frame>
<ip>0x1FFF000C34</ip>
</frame>
<frame>
<ip>0x1FFF000C39</ip>
</frame>
<frame>
<ip>0x1FFF000C3C</ip>
</frame>
</stack>
</error>
Updated by Neha Ojha over 2 years ago
- Status changed from New to Rejected
We could suppress this, but since it is not coming from the Ceph code, rejecting it.
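For reference, a suppression for this report would look roughly like the sketch below, matched against the frames in the stack trace above (calloc via __trans_list_add in libnl, reached from libnl-route's constructors). The rule name is arbitrary, and the entry would live in a .supp file passed to valgrind via --suppressions=; this is an illustration of the valgrind suppression format, not a rule actually added to the Ceph tree.

```
{
   libnl_trans_list_add_init_leak
   Memcheck:Leak
   match-leak-kinds: possible
   fun:calloc
   fun:__trans_list_add
   obj:/usr/lib64/libnl-route-3.so*
}
```

Since the allocation happens in a shared library constructor run by the dynamic loader, the memory is live for the whole process lifetime, which is why memcheck classifies it as "possibly lost" rather than "definitely lost".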
Updated by Laura Flores over 1 year ago
Found a similar instance here:
/a/lflores-2022-09-30_21:47:41-rados-wip-lflores-testing-distro-default-smithi/7050789
<error>
<unique>0x1e49c46</unique>
<tid>1</tid>
<kind>Leak_PossiblyLost</kind>
<xwhat>
<text>32 bytes in 1 blocks are possibly lost in loss record 434 of 1,263</text>
<leakedbytes>32</leakedbytes>
<leakedblocks>1</leakedblocks>
</xwhat>
<stack>
<frame>
<ip>0x4C3BE4B</ip>
<obj>/usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so</obj>
<fn>calloc</fn>
<dir>/builddir/build/BUILD/valgrind-3.19.0/coregrind/m_replacemalloc</dir>
<file>vg_replace_malloc.c</file>
<line>1328</line>
</frame>
<frame>
<ip>0xA9976E5</ip>
<obj>/usr/lib64/libnl-3.so.200.26.0</obj>
<fn>__trans_list_add</fn>
</frame>
<frame>
<ip>0xA7102AC</ip>
<obj>/usr/lib64/libnl-route-3.so.200.26.0</obj>
</frame>
<frame>
<ip>0x4009059</ip>
<obj>/usr/lib64/ld-2.28.so</obj>
<fn>call_init.part.0</fn>
<dir>/usr/src/debug/glibc-2.28-211.el8.x86_64/elf</dir>
<file>dl-init.c</file>
<line>72</line>
</frame>
<frame>
<ip>0x4009159</ip>
<obj>/usr/lib64/ld-2.28.so</obj>
<fn>call_init</fn>
<dir>/usr/src/debug/glibc-2.28-211.el8.x86_64/elf</dir>
<file>dl-init.c</file>
<line>118</line>
</frame>
<frame>
<ip>0x4009159</ip>
<obj>/usr/lib64/ld-2.28.so</obj>
<fn>_dl_init</fn>
<dir>/usr/src/debug/glibc-2.28-211.el8.x86_64/elf</dir>
<file>dl-init.c</file>
<line>119</line>
</frame>
<frame>
<ip>0x401D989</ip>
<obj>/usr/lib64/ld-2.28.so</obj>
</frame>
<frame>
<ip>0x5</ip>
</frame>
<frame>
<ip>0x1FFF000C1E</ip>
</frame>
<frame>
<ip>0x1FFF000C27</ip>
</frame>
<frame>
<ip>0x1FFF000C2A</ip>
</frame>
<frame>
<ip>0x1FFF000C34</ip>
</frame>
<frame>
<ip>0x1FFF000C39</ip>
</frame>
<frame>
<ip>0x1FFF000C3C</ip>
</frame>
</stack>
</error>
Updated by Nitzan Mordechai over 1 year ago
- Related to Bug #57751: LibRadosAio.SimpleWritePP hang and pkill added
Updated by Nitzan Mordechai over 1 year ago
- Related to Bug #57618: rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchNotify) added
Updated by Yuri Weinstein over 1 year ago
Updated by Laura Flores over 1 year ago
- Status changed from Rejected to Resolved
Updated by Laura Flores 12 months ago
- Tags set to test-failure
/a/yuriw-2023-04-25_14:15:40-rados-pacific-release-distro-default-smithi/7251534
Updated by Laura Flores 12 months ago
- Status changed from Resolved to New
- Backport set to quincy,pacific
Updated by Nitzan Mordechai 12 months ago
Laura, the original PR was for quincy, and the related tracker https://tracker.ceph.com/issues/57618 already has new backport trackers (created yesterday), so I'll change the status of that tracker to In Progress as well and continue with the existing backport trackers.
Updated by Nitzan Mordechai 12 months ago
- Status changed from New to In Progress
- Backport changed from quincy,pacific to pacific
Updated by Radoslaw Zarzynski 12 months ago
Sounds like a missed backport. Please correct me if I'm wrong.
Updated by Laura Flores 12 months ago
Radoslaw Zarzynski wrote:
Sounds like a missed backport. Please correct me if I'm wrong.
That's my understanding as well.
Updated by Laura Flores 12 months ago
The pacific backport has been approved and is just awaiting testing: https://github.com/ceph/ceph/pull/49521
Updated by Radoslaw Zarzynski 11 months ago
- Status changed from In Progress to Resolved
Closing this, as the last-mentioned PR was duplicated by https://github.com/ceph/ceph/pull/51341 and the duplicate has been merged.
Updated by Laura Flores 6 months ago
- Related to Bug #63233: mon|client|mds: valgrind reports possible leaks in the MDS added