Bug #20924
Status: Closed
osd: leaked Session on osd.7
Description
<kind>Leak_DefinitelyLost</kind>
<xwhat>
  <text>1,116 (512 direct, 604 indirect) bytes in 1 blocks are definitely lost in loss record 37 of 44</text>
  <leakedbytes>1116</leakedbytes>
  <leakedblocks>1</leakedblocks>
</xwhat>
<stack>
  <frame>
    <ip>0xA36E203</ip>
    <obj>/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so</obj>
    <fn>operator new(unsigned long)</fn>
    <dir>/builddir/build/BUILD/valgrind-3.11.0/coregrind/m_replacemalloc</dir>
    <file>vg_replace_malloc.c</file>
    <line>334</line>
  </frame>
  <frame>
    <ip>0x5CBCC9</ip>
    <obj>/usr/bin/ceph-osd</obj>
    <fn>OSD::ms_verify_authorizer(Connection*, int, int, ceph::buffer::list&, ceph::buffer::list&, bool&, CryptoKey&)</fn>
    <dir>/usr/src/debug/ceph-12.1.2-385-g853e3cf/src/osd</dir>
    <file>OSD.cc</file>
    <line>7042</line>
  </frame>
  <frame>
    <ip>0xBEAEAE</ip>
    <obj>/usr/bin/ceph-osd</obj>
    <fn>ms_deliver_verify_authorizer</fn>
    <dir>/usr/src/debug/ceph-12.1.2-385-g853e3cf/src/msg</dir>
    <file>Messenger.h</file>
    <line>770</line>
  </frame>
  <frame>
    <ip>0xBEAEAE</ip>
    <obj>/usr/bin/ceph-osd</obj>
    <fn>SimpleMessenger::verify_authorizer(Connection*, int, int, ceph::buffer::list&, ceph::buffer::list&, bool&, CryptoKey&)</fn>
    <dir>/usr/src/debug/ceph-12.1.2-385-g853e3cf/src/msg/simple</dir>
    <file>SimpleMessenger.cc</file>
    <line>420</line>
  </frame>
  <frame>
    <ip>0xDEB615</ip>
    <obj>/usr/bin/ceph-osd</obj>
    <fn>Pipe::accept()</fn>
    <dir>/usr/src/debug/ceph-12.1.2-385-g853e3cf/src/msg/simple</dir>
    <file>Pipe.cc</file>
    <line>507</line>
  </frame>
  <frame>
    <ip>0xDF065D</ip>
    <obj>/usr/bin/ceph-osd</obj>
    <fn>Pipe::reader()</fn>
    <dir>/usr/src/debug/ceph-12.1.2-385-g853e3cf/src/msg/simple</dir>
    <file>Pipe.cc</file>
    <line>1624</line>
  </frame>
</stack>
No other leaks are reported, so the Connection itself went away. Probably a mismatched Session get()/put() elsewhere in the code?
/a/sage-2017-08-06_13:59:55-rados-wip-sage-testing-20170805a-distro-basic-smithi/1490233
Updated by Sage Weil over 6 years ago
- Priority changed from Normal to High
/a/sage-2017-08-26_20:38:41-rados-luminous-distro-basic-smithi/1568055
Updated by Sage Weil over 6 years ago
/a/sage-2017-09-10_02:50:18-rados-wip-sage-testing-2017-09-08-1434-distro-basic-smithi/1615133
Updated by Sage Weil over 6 years ago
/a/sage-2017-09-13_13:31:57-rados-wip-sage-testing-2017-09-12-1750-distro-basic-smithi/1627916
is it just me or is it always osd.7?
Updated by Sage Weil over 6 years ago
/a/yuriw-2017-09-19_19:54:13-rados-wip-yuri-testing3-2017-09-19-1710-distro-basic-smithi/1648800
osd.7 again! weird
Updated by Sage Weil over 6 years ago
- Subject changed from osd: leaked Session to osd: leaked Session on osd.7
/a/sage-2017-12-06_22:54:32-rados-wip-sage-testing-2017-12-06-1352-distro-basic-smithi/1939984
osd.7 again!
Updated by Kefu Chai about 6 years ago
/a/yuriw-2018-02-02_20:31:37-rados-wip_yuri_master_2.2.18-distro-basic-smithi/2143177/remote/smithi111/log/valgrind/osd.7.log.gz
osd.7 again again!
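For anyone triaging one of these gzipped valgrind logs, a quick way to count the definitely-lost records is to zcat and grep the XML. A sketch (sample.log here is a hypothetical stand-in for the real osd.7.log.gz):

```shell
# Build a tiny stand-in for the gzipped valgrind XML log (hypothetical data).
printf '%s\n' \
  '<kind>Leak_DefinitelyLost</kind>' \
  '<leakedbytes>1116</leakedbytes>' \
  '<leakedblocks>1</leakedblocks>' > sample.log
gzip -f sample.log

# Count leak records without fully decompressing to disk.
zcat sample.log.gz | grep -c '<leakedbytes>'   # prints 1
```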
Updated by Sage Weil about 6 years ago
/a/yuriw-2018-02-02_20:31:37-rados-wip_yuri_master_2.2.18-distro-basic-smithi/2143177
Updated by Kefu Chai about 6 years ago
/a/kchai-2018-03-05_17:31:09-rados-wip-kefu-testing-2018-03-05-2238-distro-basic-smithi/2252897
Updated by Sage Weil about 6 years ago
/a/sage-2018-04-04_02:28:04-rados-wip-sage2-testing-2018-04-03-1634-distro-basic-smithi/2351291
rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
Updated by Sage Weil about 6 years ago
/a/sage-2018-04-17_04:17:03-rados-wip-sage3-testing-2018-04-16-2028-distro-basic-smithi/2404155
this time on osd.4!
rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
Updated by Josh Durgin almost 6 years ago
osd.3 here:
rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
Updated by Kefu Chai almost 6 years ago
osd.3
/a/yuriw-2018-05-09_22:08:37-rados-mimic-distro-basic-smithi/2511364/remote/smithi118/log/valgrind/osd.3.log.gz
rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
Updated by Sage Weil almost 6 years ago
osd.7
/a/sage-2018-05-18_13:08:19-rados-wip-sage2-testing-2018-05-17-0701-distro-basic-smithi/2546923
rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
Updated by Sage Weil almost 6 years ago
osd.7
/a/sage-2018-05-18_16:20:24-rados-wip-sage-testing-2018-05-18-0817-distro-basic-smithi/2548324
rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
Updated by Sage Weil almost 6 years ago
osd.4
/a/sage-2018-05-20_18:11:15-rados-wip-sage3-testing-2018-05-20-1031-distro-basic-smithi/2558319
rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
Updated by Kefu Chai almost 6 years ago
https://github.com/ceph/ceph/pull/22292 might address this issue.
Updated by Kefu Chai almost 6 years ago
- Status changed from 12 to Pending Backport
- Backport set to luminous, mimic
I think https://github.com/ceph/ceph/pull/22292 indeed addresses this issue.
Updated by Nathan Cutler almost 6 years ago
- Copied to Backport #24359: mimic: osd: leaked Session on osd.7 added
Updated by Nathan Cutler almost 6 years ago
- Copied to Backport #24360: luminous: osd: leaked Session on osd.7 added
Updated by Samuel Just over 4 years ago
Updated by Nathan Cutler over 4 years ago
- Status changed from Pending Backport to Resolved
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved".