Bug #14672
MDS crashes with FAILED assert(inode_map.count(in->vino()) == 0) in 9.2.0 (closed)
Status: Rejected
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Full log in attachment.
-9> 2016-02-05 15:26:29.015197 7f177de2f700 10 mds.0.locker got rdlock on (ipolicy sync r=1) [inode 1000000138e [...2,head] /backups/vulpixhome/gent/ auth v1328 ap=1+1 f(v0 m2016-02-05 15:11:49.275650 4=0+4) n(v53 rc2016-02-05 15:12:41.886233 b276810712 5559=3918+1641) (isnap sync r=1) (inest lock dirty) (ipolicy sync r=1) (iversion lock) caps={67042=pAsLsXsFs/@0,67045=pAsLsXs/-@0,67578=pAsLsXsFs/-@0,67587=pAsLsXsFs/-@0} | dirtyscattered=1 lock=2 dirfrag=1 caps=1 dirty=1 authpin=1 0x7f178c539660]
-8> 2016-02-05 15:26:29.015219 7f177de2f700 7 mds.0.locker rdlock_start on (isnap sync) on [inode 1000000138f [...2,head] /backups/vulpixhome/gent/vsc408/ auth v1509 ap=1+1 f(v0 m2016-02-05 15:09:13.624725 10=0+10) n(v51 rc2016-02-05 15:12:41.538927 b168677648 3672=2522+1150)/n(v50 rc2016-02-05 15:12:41.538927 b168321213 3672=2522+1150) (inest lock dirty) (iversion lock) caps={67045=pAsLsXsFs/@0,67587=pAsLsXsFs/-@0} | dirtyscattered=1 dirfrag=1 caps=1 dirtyrstat=1 dirty=1 authpin=1 0x7f178c539bf8]
-7> 2016-02-05 15:26:29.015237 7f177de2f700 10 mds.0.locker got rdlock on (isnap sync r=1) [inode 1000000138f [...2,head] /backups/vulpixhome/gent/vsc408/ auth v1509 ap=1+1 f(v0 m2016-02-05 15:09:13.624725 10=0+10) n(v51 rc2016-02-05 15:12:41.538927 b168677648 3672=2522+1150)/n(v50 rc2016-02-05 15:12:41.538927 b168321213 3672=2522+1150) (isnap sync r=1) (inest lock dirty) (iversion lock) caps={67045=pAsLsXsFs/@0,67587=pAsLsXsFs/-@0} | dirtyscattered=1 lock=1 dirfrag=1 caps=1 dirtyrstat=1 dirty=1 authpin=1 0x7f178c539bf8]
-6> 2016-02-05 15:26:29.015256 7f177de2f700 7 mds.0.locker rdlock_start on (ipolicy sync) on [inode 1000000138f [...2,head] /backups/vulpixhome/gent/vsc408/ auth v1509 ap=1+1 f(v0 m2016-02-05 15:09:13.624725 10=0+10) n(v51 rc2016-02-05 15:12:41.538927 b168677648 3672=2522+1150)/n(v50 rc2016-02-05 15:12:41.538927 b168321213 3672=2522+1150) (isnap sync r=1) (inest lock dirty) (iversion lock) caps={67045=pAsLsXsFs/@0,67587=pAsLsXsFs/-@0} | dirtyscattered=1 lock=1 dirfrag=1 caps=1 dirtyrstat=1 dirty=1 authpin=1 0x7f178c539bf8]
-5> 2016-02-05 15:26:29.015274 7f177de2f700 10 mds.0.locker got rdlock on (ipolicy sync r=1) [inode 1000000138f [...2,head] /backups/vulpixhome/gent/vsc408/ auth v1509 ap=1+1 f(v0 m2016-02-05 15:09:13.624725 10=0+10) n(v51 rc2016-02-05 15:12:41.538927 b168677648 3672=2522+1150)/n(v50 rc2016-02-05 15:12:41.538927 b168321213 3672=2522+1150) (isnap sync r=1) (inest lock dirty) (ipolicy sync r=1) (iversion lock) caps={67045=pAsLsXsFs/-@0,67587=pAsLsXsFs/-@0} | dirtyscattered=1 lock=2 dirfrag=1 caps=1 dirtyrstat=1 dirty=1 authpin=1 0x7f178c539bf8]
-4> 2016-02-05 15:26:29.015301 7f177de2f700 5 - op tracker -- seq: 2, time: 2016-02-05 15:26:29.015301, event: acquired locks, op: client_request(client.67587:2334 create #10000000d3a/.input.log.us42rN 2016-02-05 15:12:42.328932 RETRY=7 REPLAY)
-3> 2016-02-05 15:26:29.015321 7f177de2f700 20 mds.0.sessionmap mark_projected s=0x7f178c4f3b80 name=client.67587 pv=13127 -> 13128
-2> 2016-02-05 15:26:29.015326 7f177de2f700 10 mds.0.server prepare_new_inode used_prealloc 1000000380c ([1000000380d~a7,10000004a51~1f5], 668 left)
-1> 2016-02-05 15:26:29.015330 7f177de2f700 10 mds.0.server dir mode 040775 new mode 0100600
0> 2016-02-05 15:26:29.018013 7f177de2f700 -1 mds/MDCache.cc: In function 'void MDCache::add_inode(CInode*)' thread 7f177de2f700 time 2016-02-05 15:26:29.015333
mds/MDCache.cc: 269: FAILED assert(inode_map.count(in->vino()) == 0)
ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0x7f1787ed6105]
2: (MDCache::add_inode(CInode*)+0x246) [0x7f1787bd14b6]
3: (Server::prepare_new_inode(std::shared_ptr<MDRequestImpl>&, CDir*, inodeno_t, unsigned int, ceph_file_layout*)+0xf30) [0x7f1787b71ce0]
4: (Server::handle_client_openc(std::shared_ptr<MDRequestImpl>&)+0xd83) [0x7f1787b75963]
5: (Server::dispatch_client_request(std::shared_ptr<MDRequestImpl>&)+0xa48) [0x7f1787b94678]
6: (MDCache::dispatch_request(std::shared_ptr<MDRequestImpl>&)+0x4c) [0x7f1787c0d2dc]
7: (MDSInternalContextBase::complete(int)+0x1e3) [0x7f1787d4b993]
8: (MDSRank::_advance_queues()+0x382) [0x7f1787b2b652]
9: (MDSRank::ProgressThread::entry()+0x4a) [0x7f1787b2baca]
10: (()+0x7dc5) [0x7f1786d22dc5]
11: (clone()+0x6d) [0x7f1785c0d21d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Updated by John Spray about 8 years ago
Was this the system on which you ran newfs? I think I was concerned that this might be from running newfs while the metadata pool still contained objects from the old filesystem (the pools needed to be destroyed and recreated).
Updated by Greg Farnum about 8 years ago
- Status changed from New to Rejected
Yeah, I'm pretty sure about that.
Updated by Christopher Nelson about 8 years ago
I have this same problem, and I did not do a newfs. I simply upgraded from Hammer to Infernalis:
-6> 2016-02-18 15:38:57.843074 7f7e1abe4700 4 mds.0.server handle_client_request client_request(client.10893209:42138 create #100007be287/iam-3_univ_sqr_5_xs.jpg.staged RETRY=22)
-5> 2016-02-18 15:38:57.843090 7f7e1abe4700 5 -- op tracker -- seq: 2, time: 2016-02-18 15:38:57.665733, event: throttled, op: client_request(client.10893209:42138 create #100007be287/iam-3_univ_sqr_5_xs.jpg.staged RETRY=22)
-4> 2016-02-18 15:38:57.843104 7f7e1abe4700 5 -- op tracker -- seq: 2, time: 2016-02-18 15:38:57.665742, event: all_read, op: client_request(client.10893209:42138 create #100007be287/iam-3_univ_sqr_5_xs.jpg.staged RETRY=22)
-3> 2016-02-18 15:38:57.843117 7f7e1abe4700 5 -- op tracker -- seq: 2, time: 2016-02-18 15:38:57.840181, event: dispatched, op: client_request(client.10893209:42138 create #100007be287/iam-3_univ_sqr_5_xs.jpg.staged RETRY=22)
-2> 2016-02-18 15:38:57.843240 7f7e1abe4700 5 -- op tracker -- seq: 2, time: 2016-02-18 15:38:57.843240, event: acquired locks, op: client_request(client.10893209:42138 create #100007be287/iam-3_univ_sqr_5_xs.jpg.staged RETRY=22)
-1> 2016-02-18 15:38:57.843360 7f7e188dd700 1 -- 10.245.22.92:6802/7018 <== osd.2 10.245.22.111:6800/12250 54567 ==== osd_op_reply(135460 600.00000000 [omap-get-header 0~0,omap-get-vals 0~16,getxattr (62)] v0'0 uv3192151 ondisk = 0) v6 ==== 263+0+292 (2860936119 0 2592524424) 0x7f7e2970eb00 con 0x7f7e296886e0
0> 2016-02-18 15:38:57.846653 7f7e1abe4700 -1 mds/MDCache.cc: In function 'void MDCache::add_inode(CInode*)' thread 7f7e1abe4700 time 2016-02-18 15:38:57.843282
mds/MDCache.cc: 269: FAILED assert(inode_map.count(in->vino()) == 0)
ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) [0x7f7e24da2d2b]
2: (()+0x29bae6) [0x7f7e24ab3ae6]
3: (Server::prepare_new_inode(std::shared_ptr<MDRequestImpl>&, CDir*, inodeno_t, unsigned int, ceph_file_layout*)+0xf18) [0x7f7e24a56768]
4: (Server::handle_client_openc(std::shared_ptr<MDRequestImpl>&)+0xd5a) [0x7f7e24a5a37a]
5: (Server::dispatch_client_request(std::shared_ptr<MDRequestImpl>&)+0xabc) [0x7f7e24a78a5c]
6: (Server::handle_client_request(MClientRequest*)+0x47f) [0x7f7e24a78f6f]
7: (Server::dispatch(Message*)+0x3ab) [0x7f7e24a7d16b]
8: (MDSRank::handle_deferrable_message(Message*)+0x7fc) [0x7f7e24a07b2c]
9: (MDSRank::_dispatch(Message*, bool)+0x1da) [0x7f7e24a121ba]
10: (MDSRank::retry_dispatch(Message*)+0x12) [0x7f7e24a132f2]
11: (MDSInternalContextBase::complete(int)+0x1d3) [0x7f7e24c24fd3]
12: (MDSRank::_advance_queues()+0x372) [0x7f7e24a119f2]
13: (MDSRank::ProgressThread::entry()+0x4a) [0x7f7e24a11e6a]
14: (()+0x8182) [0x7f7e24172182]
15: (clone()+0x6d) [0x7f7e22ae647d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.