Bug #5043 (closed): Oops in remove_osd

Added by Sandon Van Ness almost 11 years ago. Updated almost 11 years ago.

Status: Resolved
Priority: Urgent
% Done: 0%
Source: other
Severity: 3 - minor

Description

Stack output:

Stack traceback for pid 29892
0xffff88022140bf20 29892 2 1 6 R 0xffff88022140c3a8 *kworker/6:1
ffff88020ad79b48 0000000000000018 ffff88022140c640 ffff88020ad79b68
ffffffffa00bfa0c ffff88020a285000 ffff8802244a4950 ffff88020ad79b98
ffffffffa00bfcff ffff88020a285030 ffff8802244a4950 0000000000000005
Call Trace:
[<ffffffffa00bfa0c>] ? __remove_osd+0x3c/0xa0 [libceph]
[<ffffffffa00bfcff>] ? __reset_osd+0x12f/0x170 [libceph]
[<ffffffffa00c0e3e>] ? osd_reset+0x7e/0x2b0 [libceph]
[<ffffffffa00b8e81>] ? con_work+0x571/0x2d40 [libceph]
[<ffffffff816724c0>] ? _raw_spin_unlock_irq+0x30/0x40
[<ffffffff81074aa1>] ? process_one_work+0x161/0x4b0
[<ffffffff810b8c05>] ? trace_hardirqs_on_caller+0x105/0x190
[<ffffffff81074b12>] ? process_one_work+0x1d2/0x4b0
[<ffffffff81074aa1>] ? process_one_work+0x161/0x4b0
[<ffffffff81076778>] ? worker_thread+0x118/0x340
[<ffffffff81076660>] ? manage_workers+0x320/0x320
[<ffffffff8107c46a>] ? kthread+0xea/0xf0
[<ffffffff8107c380>] ? flush_kthread_work+0x1a0/0x1a0
[<ffffffff8167b4ac>] ? ret_from_fork+0x7c/0xb0
[<ffffffff8107c380>] ? flush_kthread_work+0x1a0/0x1a0
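
For anyone picking this up: the trace runs con_work() -> osd_reset() -> __reset_osd() -> __remove_osd() in libceph (net/ceph/osd_client.c), with the fault a few instructions into __remove_osd(). Below is a minimal C sketch of that path, reconstructed from the symbol names in the trace rather than from the exact kernel sha1 in this job; the field and helper names are assumptions, not quotes from the tree. The comments mark where an oops of this shape could plausibly come from, e.g. __remove_osd() running twice for the same ceph_osd, so that rb_erase() is handed a node already unlinked from the client's osd tree.

/*
 * Approximate sketch of the implicated path in net/ceph/osd_client.c.
 * Reconstructed from the symbols in the trace above, NOT copied from
 * sha1 b5b09be; struct fields and helpers are assumptions.
 */
static void __remove_osd(struct ceph_osd_client *osdc, struct ceph_osd *osd)
{
	/*
	 * The fault lands at a small offset (+0x3c), consistent with
	 * this rbtree/list unlinking. If this function is reached a
	 * second time for the same osd (e.g. a connection reset racing
	 * with LRU/osdmap cleanup), rb_erase() operates on a node that
	 * is no longer in osdc->osds and can follow stale pointers.
	 */
	rb_erase(&osd->o_node, &osdc->osds);
	list_del_init(&osd->o_osd_lru);
	ceph_con_close(&osd->o_con);
	put_osd(osd);
}

static int __reset_osd(struct ceph_osd_client *osdc, struct ceph_osd *osd)
{
	/*
	 * Called from osd_reset(), the messenger fault handler invoked
	 * via con_work: an osd with nothing in flight is torn down
	 * outright instead of being reconnected.
	 */
	if (list_empty(&osd->o_requests) &&
	    list_empty(&osd->o_linger_requests)) {
		__remove_osd(osdc, osd);
		return -ENODEV;
	}
	/* otherwise reopen the connection to the same osd ... */
	return 0;
}

If that double-removal theory holds, the thrashosds task in the job below is exactly the kind of workload that would drive the repeated connection resets needed to hit it.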

This was the job:

kernel: &id001
  kdb: true
  sha1: b5b09be30cf99f9c699e825629f02e3bce555d44
machine_type: plana
nuke-on-error: true
overrides:
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 20
        debug paxos: 20
      osd:
        filestore flush min: 0
        osd op thread timeout: 60
    fs: btrfs
    log-whitelist:
    - slow request
    sha1: 6c1e4791782ce2b3e101ee80640d896bcda684de
  s3tests:
    branch: next
  workunit:
    sha1: 6c1e4791782ce2b3e101ee80640d896bcda684de
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mds.a
  - osd.3
  - osd.4
  - osd.5
- - client.0
targets:
  : ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0IExVUuQEnwBzUbN6+jgEb/qLLsSDFLAe0OPv+R3Q3uUZn+4QjB+FQ3sMMEwaGpEhUfbpWn3xZbCmi4NngTsgPDjImYJPMeMecaxvVXqAJt4IM6gdBN0415lrKXtbXaGBJCmeFFDB+xN+JN6Qhk8DWN62DnJB8MS5vKn9u0S3HvtGzY/QrnutT3AX9I4097isbTepLFRC4n2CoAC9srXaxAprFgLgOIYHm386B3W7yK3yfIImWofvZxYpPCsr/7ws8DSgdRX9eDucGQfY2WcYKCEIgZclGPBhExm7q/ahHUqYKrzWY3RD93AuXJVgOJk5Yp8C1Ryx8vyJkAxBW4nb
  : ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDuXajaQgHe9XnbLOzI8WWFYVz6+TnOiTzbkIJPGOZpzQEjnUtJraQIEt5ABSeovMjiEj+V4XvunfyuSmEd0H9giRSyjmCHTPGlpndfTeCdVtCBpNqf5GkUqHaEY1Hp57XPbya2rGlwtFm0NeIDYx6pfkejKnsTOUqwhgUb6950TRhjHQhMjFgyALSyfAm/4y6vGZfjm57+yyih6XgDkqWiiQ6Y/aJVR2n+iCzvqEzV7JSCU+Brn+k8IQLHho1fadYqc5PjYct5BaVlHcP6c+T8nJE/DvqGwZ4gQaVJcuWJiDfLOPPYo1g/0AFicxauLwVNJ6HFR9FjLLGtGU+2DcVN
  : ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDp3cwfZhOipCot6NiKX4cRMn4zx43QY0+5HdqzCQU2y7OrOJt3d0qvifnZPyeq8/d+aW2WL2OM8m4taz380JsP0SLmlpY8D0pGY/tN0pQDqIFd8EboMtKY6tR8unQrVzuczMqup/tkKSfdRp0zAeTiJ8qH7l9MaVcOw6WfRACb8f7APJE2gVRBrzPAdbqKzAphTRzZSz0cq722AX7XQDPT2dz7NoTp5Tk7xaQdDu2II+78B1H27IWdyYeonfy17yf9N+IA2Xzna/g5zu8apg7UvzyFmHunLyjr78dhPtR39201A0QJ5x5Qli9/UaB3LwiqnbCiGfx4xWFazdUFzxiD
tasks:
- internal.lock_machines:
  - 3
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install: null
- ceph:
    log-whitelist:
    - wrongly marked me down
    - objects unfound and apparently lost
- thrashosds: null
- kclient: null
- workunit:
    clients:
      all:
      - suites/ffsb.sh

It's on plana45, which will not be reclaimed until the end of the week at the earliest, if someone wants to check it out.
