Bug #5787
client/Client.cc: 2081: FAILED assert(!unclean) in put_inode
Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Client
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
    -1> 2013-07-28 09:18:51.961474 7fef4958f700  1 -- 10.214.131.12:0/19701 <== mds.0 10.214.131.16:6807/14764 15909 ==== client_reply(???:7159 = 0 Success safe) v1 ==== 27+0+0 (2747124447 0 0) 0x560a2c0 con 0x2e29dc0
     0> 2013-07-28 09:18:51.990762 7fef4958f700 -1 client/Client.cc: In function 'void Client::put_inode(Inode*, int)' thread 7fef4958f700 time 2013-07-28 09:18:51.961569
client/Client.cc: 2081: FAILED assert(!unclean)
 ceph version 0.61.7-31-gb70a9ab (b70a9abc5e3ae01204256f414bd7e69d083ed7c6)
 1: (Client::put_inode(Inode*, int)+0x58e) [0x48a95e]
 2: (Client::put_request(MetaRequest*)+0xd0) [0x48c450]
 3: (Client::handle_client_reply(MClientReply*)+0x965) [0x4c6f85]
 4: (Client::ms_dispatch(Message*)+0x5cb) [0x4cbddb]
 5: (DispatchQueue::entry()+0x3f1) [0x646d51]
 6: (DispatchQueue::DispatchThread::entry()+0xd) [0x5860cd]
 7: (()+0x7e9a) [0x7fef4eaf1e9a]
 8: (clone()+0x6d) [0x7fef4d2a7ccd]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Logs are incomplete, but I see:
- lookup
- unlink
- forget
...
- safe reply on the unlink
- crash
but no actual read or write; that must have happened longer ago. The oset has 1 object and in->size is a bit over 4 MB. We need a more complete log, I think.
Job was:

ubuntu@teuthology:/a/teuthology-2013-07-28_01:30:28-upgrade-fs-next-testing-basic-plana/86966$ cat orig.config.yaml
kernel:
  kdb: true
  sha1: 88b7f22bc0e44db48a24af23e4de3653bc44b2d2
machine_type: plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: next
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
    log-whitelist:
    - slow request
    sha1: b5250fdc70119408b102091229ba8e10fa0b1446
  ceph-deploy:
    branch:
      dev: next
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
  install:
    ceph:
      sha1: b5250fdc70119408b102091229ba8e10fa0b1446
  s3tests:
    branch: next
  workunit:
    sha1: b5250fdc70119408b102091229ba8e10fa0b1446
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
tasks:
- chef: null
- clock.check: null
- install:
    branch: cuttlefish
- ceph:
    fs: xfs
- ceph-fuse: null
- workunit:
    branch: cuttlefish
    clients:
      client.0:
      - suites/iogen.sh
- install.upgrade:
    all:
      branch: next
- ceph.restart:
  - mds.a
  - mon.a
  - mon.b
  - mon.c
  - osd.0
  - osd.1
  - osd.2
  - osd.3
- workunit:
    branch: next
    clients:
      client.0:
      - suites/fsstress.sh
teuthology_branch: next
Updated by Sage Weil over 10 years ago
- Status changed from New to Need More Info
Updated by Greg Farnum over 10 years ago
If the inode is >4MB, shouldn't the oset have more than one object? Sounds like maybe we lost track of an in-flight write somewhere...
Updated by Sage Weil over 10 years ago
Greg Farnum wrote:
If the inode is >4MB, shouldn't the oset have more than one object? Sounds like maybe we lost track of an in-flight write somewhere...
Maybe; it could also have been partially flushed, or a sparse write. Either way, we need logs, I think.
Updated by Greg Farnum about 10 years ago
- Priority changed from High to Normal
Demoting since this affects the userspace client (uclient) and the ticket is in Need More Info.
Updated by Zheng Yan about 10 years ago
- Status changed from Need More Info to Duplicate