Bug #4688
ceph-mds: daemon fails to start after ceph installation
Status:
Closed
% Done:
0%
Source:
Q/A
Severity:
3 - minor
Description
ceph version: 0.60-439-gd7b7ace (d7b7acefc8e106f2563771a721944c57e10d54fb)
Core was generated by `ceph-mds -f -i a'.
Program terminated with signal 11, Segmentation fault.
#0  lockdep_register (name=0x89d996 "md_config_t") at common/lockdep.cc:118
118     common/lockdep.cc: No such file or directory.
(gdb) bt
#0  lockdep_register (name=0x89d996 "md_config_t") at common/lockdep.cc:118
#1  0x00000000007da7ae in lockdep_will_lock (name=0x89d996 "md_config_t", id=-1) at common/lockdep.cc:160
#2  0x0000000000787934 in _will_lock (this=<optimized out>) at ./common/Mutex.h:56
#3  Mutex::Lock (this=0x1d8ea18, no_lockdep=<optimized out>) at common/Mutex.cc:80
#4  0x00000000007d2325 in Locker (m=..., this=<synthetic pointer>) at ./common/Mutex.h:120
#5  md_config_t::call_all_observers (this=0x1d8de80) at common/config.cc:573
#6  0x000000000085d9db in global_init (alt_def_args=<optimized out>, args=..., module_type=<optimized out>, code_env=CODE_ENVIRONMENT_DAEMON, flags=<optimized out>) at global/global_init.cc:111
#7  0x00000000004b7fc8 in main (argc=4, argv=0x7ffff13b2518) at ceph_mds.cc:152
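To make the failure path easier to follow, here is a minimal, purely illustrative C++ sketch of what the top frames are doing: every Mutex::Lock funnels through lockdep, which registers the lock's name ("md_config_t") in static lockdep state before it can track ordering. The function names mirror the backtrace, but the map, the MAX_LOCKS bound, and the follows matrix are simplified stand-ins, not the actual common/lockdep.cc code; the point is that registration writes into fixed-size global state reached very early (from md_config_t::call_all_observers inside global_init), where uninitialized or out-of-range state produces exactly this kind of SIGSEGV.

#include <cstring>
#include <iostream>
#include <map>
#include <string>

namespace sketch {

const int MAX_LOCKS = 1000;                 // illustrative bound, not Ceph's
std::map<std::string, int> lock_ids;        // lock name -> lockdep id
int last_id = 0;
bool follows[MAX_LOCKS][MAX_LOCKS];         // follows[a][b]: a held before b

// Frame #0 analogue: look up or assign an id for this lock's name.
int lockdep_register(const char *name) {
  std::map<std::string, int>::iterator p = lock_ids.find(name);
  if (p != lock_ids.end())
    return p->second;
  int id = last_id++;
  lock_ids[name] = id;
  // Writing into fixed-size static state: if id ever exceeds the table,
  // or the surrounding state is not yet initialized when this runs, this
  // is the kind of access where a crash like the one above would land.
  std::memset(follows[id], 0, sizeof(follows[id]));
  return id;
}

// Frame #1 analogue: called on every lock acquisition; a fresh mutex
// arrives with id=-1 and gets registered on first use.
int lockdep_will_lock(const char *name, int id) {
  if (id < 0)
    id = lockdep_register(name);
  return id;
}

} // namespace sketch

int main() {
  // Mirrors the crashing call: the config lock registering its name while
  // md_config_t::call_all_observers runs during global_init.
  std::cout << "md_config_t -> id "
            << sketch::lockdep_will_lock("md_config_t", -1) << std::endl;
  return 0;
}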
Updated by Tamilarasi muthamizhan about 11 years ago
This started happening with ceph v0.60-438-g1a3890a.
Updated by Greg Farnum about 11 years ago
- Status changed from New to 12
- Priority changed from Urgent to Immediate
Yeah, I just merged in something for Sage and I guess he didn't test it either — sorry. :/
Does this happen only on the MDS, or with other daemons too?
Updated by Tamilarasi muthamizhan about 11 years ago
- Status changed from 12 to New
- Priority changed from Immediate to Urgent
Updated by Tamilarasi muthamizhan about 11 years ago
- Priority changed from Urgent to Immediate
It happens only with the MDS.
Updated by Greg Farnum about 11 years ago
- Status changed from New to Resolved
Hmm, I just saw it on the monitor, and it makes more sense if it's a global thing. :)
Reverted the patch in question and things seem to be starting up and logging correctly now. I'll have to re-open the other bug, though.