Bug #6775
kvm backtrace on rbd task
Status: Closed
Description
Seen on these test runs:
/var/lib/teuthworker/archive/teuthology-2013-11-07_19:00:48-rbd-cuttlefish-testing-basic-plana/89378 (plana34)
/var/lib/teuthworker/archive/teuthology-2013-11-08_19:01:03-rbd-dumpling-testing-basic-plana/90815 (plana81)
/var/lib/teuthworker/archive/teuthology-2013-11-08_19:01:03-rbd-dumpling-testing-basic-plana/90829 (plana73)
plana34:
Stack traceback for pid 8588
0xffff88003c7f9fb0 8588 8554 1 3 R 0xffff88003c7fa458 *kvm
ffff880222677c58 0000000000000018 0000000000000246 ffff880222420000
ffff880222420000 ffff880222677cd8 0000000000006c14 0000000000000000
ffff880222420000 ffffffff00000000 0000000000000046 0000000000000000
Call Trace:
[<ffffffffa04ce6ac>] ? vmx_handle_external_intr+0x6c/0x70 [kvm_intel]
[<ffffffffa045bb67>] ? kvm_arch_vcpu_ioctl_run+0x8c7/0x10b0 [kvm]
[<ffffffffa045bbe2>] ? kvm_arch_vcpu_ioctl_run+0x942/0x10b0 [kvm]
[<ffffffff810b793d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffffa0443d76>] ? kvm_vcpu_ioctl+0x456/0x650 [kvm]
[<ffffffff811a3d9c>] ? fget_light+0x3c/0x130
[<ffffffff81198ed6>] ? do_vfs_ioctl+0x96/0x560
[<ffffffff811a3dfe>] ? fget_light+0x9e/0x130
[<ffffffff811a3d9c>] ? fget_light+0x3c/0x130
[<ffffffff81199431>] ? SyS_ioctl+0x91/0xb0
[<ffffffff8134303e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8166fe92>] ? system_call_fastpath+0x16/0x1b
plana81:
Stack traceback for pid 14206
0xffff880222581fb0 14206 14176 1 0 R 0xffff880222582458 *kvm
ffff880224c63c58 0000000000000018 0000000000000246 ffff880223d80000
ffff880223d80000 ffff880224c63cd8 0000000000006c14 0000000000000000
ffff880223d80000 ffffffff00000000 0000000000000046 0000000000000000
Call Trace:
[<ffffffffa04ca6ac>] ? vmx_handle_external_intr+0x6c/0x70 [kvm_intel]
[<ffffffffa0457b67>] ? kvm_arch_vcpu_ioctl_run+0x8c7/0x10b0 [kvm]
[<ffffffffa0457be2>] ? kvm_arch_vcpu_ioctl_run+0x942/0x10b0 [kvm]
[<ffffffff810b793d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffffa043fd76>] ? kvm_vcpu_ioctl+0x456/0x650 [kvm]
[<ffffffff811a3d9c>] ? fget_light+0x3c/0x130
[<ffffffff81198ed6>] ? do_vfs_ioctl+0x96/0x560
[<ffffffff811a3dfe>] ? fget_light+0x9e/0x130
[<ffffffff811a3d9c>] ? fget_light+0x3c/0x130
[<ffffffff81199431>] ? SyS_ioctl+0x91/0xb0
[<ffffffff8134303e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8166fe92>] ? system_call_fastpath+0x16/0x1b
plana73:
Stack traceback for pid 17019
0xffff88012d421fb0 17019 16985 1 0 R 0xffff88012d422458 *kvm
ffff880223efdc28 0000000000000018 ffff8801e4580000 ffff880223efdc48
ffffffffa04fc291 0000000000000001 ffff8801e4580000 ffff880223efdc68
ffffffffa04fc396 0000000000000000 ffff8801e4580000 ffff880223efdcd8
Call Trace:
[<ffffffffa04fc291>] ? skip_emulated_instruction+0x31/0x70 [kvm_intel]
[<ffffffffa04fc396>] ? handle_pause+0x16/0x30 [kvm_intel]
[<ffffffffa04feb36>] ? vmx_handle_exit+0x106/0x900 [kvm_intel]
[<ffffffffa04f6666>] ? vmx_handle_external_intr+0x26/0x70 [kvm_intel]
[<ffffffffa0483c5a>] ? kvm_arch_vcpu_ioctl_run+0x9ba/0x10b0 [kvm]
[<ffffffffa0483be2>] ? kvm_arch_vcpu_ioctl_run+0x942/0x10b0 [kvm]
[<ffffffff810b793d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffffa046bd76>] ? kvm_vcpu_ioctl+0x456/0x650 [kvm]
[<ffffffff811a3d9c>] ? fget_light+0x3c/0x130
[<ffffffff81198ed6>] ? do_vfs_ioctl+0x96/0x560
[<ffffffff811a3dfe>] ? fget_light+0x9e/0x130
[<ffffffff811a3d9c>] ? fget_light+0x3c/0x130
[<ffffffff81199431>] ? SyS_ioctl+0x91/0xb0
[<ffffffff8134303e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8166fe92>] ? system_call_fastpath+0x16/0x1b
The machines are still at the kdb prompt if someone wants to check them out. Let me know if it's cool to nuke them.
Updated by Josh Durgin over 10 years ago
- Status changed from New to Rejected
This looks unrelated to ceph. The machines can be nuked.