Bug #37718


ceph-osdomap-tool crashes

Added by David Zafman over 5 years ago. Updated over 5 years ago.

Status:
Rejected
Priority:
Normal
Assignee:
David Zafman
Category:
-
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

$ ../qa/run-standalone.sh "osd-scrub-snaps.sh TEST_scrub_snaps"
...
../qa/standalone/scrub/osd-scrub-snaps.sh:100: create_scenario: OBJ5SAVE='["1.0",{"oid":"obj5","key":"","snapid":1,"hash":1718170787,"max":0,"pool":1,"namespace":"","max":0}]'
../qa/standalone/scrub/osd-scrub-snaps.sh:102: create_scenario: ceph-osdomap-tool --no-mon-config --omap-path td/osd-scrub-snaps/0/current/omap --command dump-raw-keys
../qa/standalone/scrub/osd-scrub-snaps.sh: line 43: 54213 Segmentation fault (core dumped) ceph-osdomap-tool --no-mon-config --omap-path $dir/${osd}/current/omap --command dump-raw-keys > $dir/drk.log
../qa/standalone/scrub/osd-scrub-snaps.sh:103: create_scenario: grep '_USER_[0-9]*_USER_,MAP_.*[.]1[.]obj5[.][.]' td/osd-scrub-snaps/drk.log
../qa/standalone/scrub/osd-scrub-snaps.sh:103: create_scenario: return 1
../qa/standalone/scrub/osd-scrub-snaps.sh:193: TEST_scrub_snaps: return 1
../qa/standalone/scrub/osd-scrub-snaps.sh:38: run: return 1

$ gdb bin/ceph-osdomap-tool /tmp/cores.52133/core*
...
Core was generated by `ceph-osdomap-tool --no-mon-config --omap-path td/osd-scrub-snaps/0/current/omap'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 __GI___pthread_mutex_lock (mutex=0x64) at ../nptl/pthread_mutex_lock.c:67
67 ../nptl/pthread_mutex_lock.c: No such file or directory.
[Current thread is 1 (Thread 0x7f98da050c80 (LWP 54213))]
(gdb) bt
#0 __GI___pthread_mutex_lock (mutex=0x64) at ../nptl/pthread_mutex_lock.c:67
#1 0x00007f98d026d2e9 in __gthread_mutex_lock (__mutex=0x64) at /usr/include/x86_64-linux-gnu/c++/7/bits/gthr-default.h:748
#2 0x00007f98d0272fd0 in std::mutex::lock (this=0x64) at /usr/include/c++/7/bits/std_mutex.h:103
#3 0x00007f98d068d7e6 in std::scoped_lock<std::mutex>::scoped_lock (this=0x7ffc9d9e9ec8, __m=...) at /usr/include/c++/7/mutex:610
#4 0x00007f98d068af85 in ceph::logging::Log::flush (this=0x4) at /home/dzafman/ceph/src/log/Log.cc:196
#5 0x000055bc327b62fa in global_pre_init (defaults=0x0, args=std::vector of length 1, capacity 1 = {...}, module_type=4, code_env=CODE_ENVIRONMENT_UTILITY_NODOUT, flags=0)
at /home/dzafman/ceph/src/global/global_init.cc:134
#6 0x000055bc327b661c in global_init (defaults=0x0, args=std::vector of length 1, capacity 1 = {...}, module_type=4, code_env=CODE_ENVIRONMENT_UTILITY_NODOUT, flags=0,
data_dir_option=0x0, run_pre_init=true) at /home/dzafman/ceph/src/global/global_init.cc:177
#7 0x000055bc3272c373 in main (argc=6, argv=0x7ffc9d9ec3d8) at /home/dzafman/ceph/src/tools/ceph_osdomap_tool.cc:80

Actions #1

Updated by David Zafman over 5 years ago

  • Status changed from New to Rejected

Rebuilding the binary fixed the problem. It looked like a library incompatibility: safe_to_start_threads should have been false, but instead held 224, a bogus value.
