Bug #37751

handle_conf_change crash in osd

Added by Sage Weil over 5 years ago. Updated about 5 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2018-12-23T12:35:07.083 INFO:teuthology.orchestra.run.smithi103:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 debug kick_recovery_wq ' 0'
2018-12-23T12:35:07.252 INFO:tasks.ceph.osd.0.smithi103.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.1-2083-gdf05153/rpm/el7/BUILD/ceph-14.0.1-2083-gdf05153/src/common/Mutex.cc: In function 'void Mutex::lock(bool)' thread 7f56a9ead700 time 2018-12-23 12:35:07.256467
2018-12-23T12:35:07.252 INFO:tasks.ceph.osd.0.smithi103.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.1-2083-gdf05153/rpm/el7/BUILD/ceph-14.0.1-2083-gdf05153/src/common/Mutex.cc: 79: FAILED ceph_assert(r == 0)
2018-12-23T12:35:07.254 INFO:tasks.ceph.osd.0.smithi103.stderr: ceph version 14.0.1-2083-gdf05153 (df05153d452d05bae39059418d39e8b573d3f516) nautilus (dev)
2018-12-23T12:35:07.254 INFO:tasks.ceph.osd.0.smithi103.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x55aeb30c82ae]
2018-12-23T12:35:07.254 INFO:tasks.ceph.osd.0.smithi103.stderr: 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x55aeb30c847c]
2018-12-23T12:35:07.254 INFO:tasks.ceph.osd.0.smithi103.stderr: 3: (Mutex::lock(bool)+0xa8) [0x55aeb382b128]
2018-12-23T12:35:07.255 INFO:tasks.ceph.osd.0.smithi103.stderr: 4: (OSD::handle_conf_change(ConfigProxy const&, std::set<std::string, std::less<std::string>, std::allocator<std::string> > const&)+0x4f) [0x55aeb32008cf]
2018-12-23T12:35:07.255 INFO:tasks.ceph.osd.0.smithi103.stderr: 5: (ConfigProxy::apply_changes(std::ostream*)+0x161) [0x55aeb3250921]
2018-12-23T12:35:07.255 INFO:tasks.ceph.osd.0.smithi103.stderr: 6: (OSD::_do_command(Connection*, std::map<std::string, boost::variant<std::string, bool, long, double, std::vector<std::string, std::allocator<std::string> >, std::vector<long, std::allocator<long> >, std::vector<double, std::allocator<double> > >, std::less<void>, std::allocator<std::pair<std::string const, boost::variant<std::string, bool, long, double, std::vector<std::string, std::allocator<std::string> >, std::vector<long, std::allocator<long> >, std::vector<double, std::allocator<double> > > > > >&, unsigned long, ceph::buffer::list&, ceph::buffer::list&, std::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >&, std::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >&)+0x28cc) [0x55aeb320c83c]
2018-12-23T12:35:07.255 INFO:tasks.ceph.osd.0.smithi103.stderr: 7: (OSD::do_command(Connection*, unsigned long, std::vector<std::string, std::allocator<std::string> >&, ceph::buffer::list&)+0x48f) [0x55aeb320e0ff]
2018-12-23T12:35:07.255 INFO:tasks.ceph.osd.0.smithi103.stderr: 8: (ThreadPool::WorkQueue<OSD::Command>::_void_process(void*, ThreadPool::TPHandle&)+0x53) [0x55aeb327a653]
2018-12-23T12:35:07.255 INFO:tasks.ceph.osd.0.smithi103.stderr: 9: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa6f) [0x55aeb384376f]
2018-12-23T12:35:07.255 INFO:tasks.ceph.osd.0.smithi103.stderr: 10: (ThreadPool::WorkThread::entry()+0x10) [0x55aeb384bc90]
2018-12-23T12:35:07.256 INFO:tasks.ceph.osd.0.smithi103.stderr: 11: (()+0x7dd5) [0x7f56d5440dd5]
2018-12-23T12:35:07.256 INFO:tasks.ceph.osd.0.smithi103.stderr: 12: (clone()+0x6d) [0x7f56d4306ead]

/a/sage-2018-12-23_01:06:50-rados-wip-sage-testing-2018-12-22-1603-distro-basic-smithi/3394310

Related issues

Related to CephFS - Bug #24823: mds: deadlock when setting config value via admin socket Resolved

History

#1 Updated by Kefu Chai about 5 years ago

  • Assignee set to Kefu Chai

#2 Updated by Kefu Chai about 5 years ago

  • Status changed from 12 to Fix Under Review

#3 Updated by Kefu Chai about 5 years ago

We started to guard handle_conf_change() with a mutex in commit aad318abc9a680d68aab96b051fb7457c8f7feac, so re-entering it from the command path re-locks a mutex already held by the same thread.

#4 Updated by Kefu Chai about 5 years ago

  • Status changed from Fix Under Review to Resolved

#5 Updated by Patrick Donnelly about 5 years ago

  • Related to Bug #24823: mds: deadlock when setting config value via admin socket added
