Bug #5033

oops in ceph_put_wrbuffer_cap_refs

Added by Sage Weil almost 11 years ago. Updated almost 8 years ago.

Status:
Can't reproduce
Priority:
High
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
kceph
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

[3]kdb> bt
Stack traceback for pid 22981
0xffff88020b6b0000    22981    21259  1    3   R  0xffff88020b6b0488 *ffsb
 ffff8802117c9938 0000000000000018 ffffffffa08cdd19 ffffea00078c6080
 ffffea00078c6080 ffff8802117c9998 ffffffff8113e187 ffff880200000024
 ffff8801ffffffff ffff880200000001 ffffea0000000001 ffff88019f96a440
Call Trace:
 [<ffffffffa08cdd19>] ? ceph_put_wrbuffer_cap_refs+0x39/0x360 [ceph]
 [<ffffffff8113e187>] ? test_clear_page_writeback+0x87/0x190
 [<ffffffffa08c74d3>] ? writepage_nounlock+0x243/0x3d0 [ceph]
 [<ffffffffa08c77b0>] ? ceph_update_writeable_page+0x150/0x4b0 [ceph]
 [<ffffffff81132581>] ? grab_cache_page_write_begin+0xa1/0xe0
 [<ffffffffa08c7b7c>] ? ceph_write_begin+0x6c/0xc0 [ceph]
 [<ffffffff81131e5a>] ? generic_file_buffered_write+0x11a/0x290
 [<ffffffffa08c4160>] ? ceph_aio_write+0x8f0/0xa90 [ceph]
 [<ffffffff81186203>] ? do_sync_write+0xa3/0xe0
 [<ffffffff811868f3>] ? vfs_write+0xb3/0x180
 [<ffffffff81186d95>] ? sys_write+0x55/0xa0
 [<ffffffff8167b559>] ? system_call_fastpath+0x16/0x1b

The job was:
kernel:
  kdb: true
  sha1: b5b09be30cf99f9c699e825629f02e3bce555d44
machine_type: plana
nuke-on-error: true
overrides:
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 20
        debug paxos: 20
      osd:
        filestore flush min: 0
        osd op thread timeout: 60
    fs: btrfs
    log-whitelist:
    - slow request
    sha1: fd901056831586e8135e28c8f4ba9c2ec44dfcf6
  s3tests:
    branch: next
  workunit:
    sha1: fd901056831586e8135e28c8f4ba9c2ec44dfcf6
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mds.a
  - osd.3
  - osd.4
  - osd.5
- - client.0
tasks:
- chef: null
- clock.check: null
- install: null
- ceph:
    log-whitelist:
    - wrongly marked me down
    - objects unfound and apparently lost
- thrashosds: null
- kclient: null
- workunit:
    clients:
      all:
      - suites/ffsb.sh

Have seen this several times in the last month or so.
#1

Updated by Sandon Van Ness almost 11 years ago

plana47 died with:

[0]kdb> bt
Stack traceback for pid 25102
0xffff88001c499f90 25102 23405 1 0 R 0xffff88001c49a410 *ffsb
ffff88020d3cda78 0000000000000018 0000000000000000 ffffea0004ba7980
000010001f264df8 00000000028cc000 0000000000001000 00000000028cb000
ffff88020c72ea80 ffffea0004ba7980 ffff88021f264df8 ffff88020d3cdb08
Call Trace:
[<ffffffff81130d65>] ? grab_cache_page_write_begin+0x95/0xf0
[<ffffffff8134141d>] ? do_raw_spin_unlock+0x5d/0xb0
[<ffffffffa0642b04>] ? ceph_write_begin+0x74/0xd0 [ceph]
[<ffffffff8134141d>] ? do_raw_spin_unlock+0x5d/0xb0
[<ffffffff8113064a>] ? generic_file_buffered_write+0x11a/0x290
[<ffffffffa063efc0>] ? ceph_aio_write+0x910/0xac0 [ceph]
[<ffffffff81183843>] ? do_sync_write+0xa3/0xe0
[<ffffffff81183ef3>] ? vfs_write+0xb3/0x180
[<ffffffff81184232>] ? sys_write+0x52/0xa0
[<ffffffff81666e59>] ? system_call_fastpath+0x16/0x1b

It's still at the prompt if someone wants to look into it. Otherwise I will reclaim the machine at the end of the week.

#2

Updated by Sage Weil almost 11 years ago

  • Priority changed from Urgent to High
#3

Updated by Sage Weil over 10 years ago

  • Status changed from New to Can't reproduce
#4

Updated by Greg Farnum almost 8 years ago

  • Component(FS) kceph added