Bug #3624

BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554 last function: xfs_end_io+0x0/0x110 [xfs]

Added by Tamilarasi muthamizhan over 11 years ago. Updated over 11 years ago.

Status:
Won't Fix
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
Severity:
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

log: ubuntu@teuthology:/a/teuthology-2012-12-13_19:00:03-regression-next-testing-basic/13746

ubuntu@teuthology:/a/teuthology-2012-12-13_19:00:03-regression-next-testing-basic/13746$ cat config.yaml 
kernel: &id001
  kdb: true
  sha1: 2978257c56935878f8a756c6cb169b569e99bb91
nuke-on-error: true
overrides:
  ceph:
    conf:
      global:
        ms inject socket failures: 5000
    fs: btrfs
    log-whitelist:
    - slow request
    sha1: 8cf367cb79046b08cc593b14f77526eef2758ee6
  s3tests:
    branch: next
  workunit:
    sha1: 8cf367cb79046b08cc593b14f77526eef2758ee6
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mds.a
  - osd.3
  - osd.4
  - osd.5
- - client.0
- - client.1
- - client.2
targets:
  ubuntu@plana27.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHsjsqjgaLtco9/n4XOXwoTaBmkHxVgzNy3ElRQUccEn3LxQB4a5k3GFuju7Zdq4I9eHYajcjc8h2cHZWSYaDObbJE4I7X1QByEhBtIeyKIFajOVHQvczuVktpWXCoyeg/4M9b7E14AypFy4HpHA6KWAfwPJbvvEUp5jCFw+GHi/XhkrtU/7ydRwQsAql39YzpbDSacsyzblj685akhyY/Dn5sHVnmFaIkMmc64LJMsu9k93EKqD4LxtUn3A4G0qpC4lKA2RatTjYBVNBUPqNyCou6l88gnPgZOjLuWte0g1Met6WbtUJrE+nvAfpf1CAipDFyejIgFIt2U4LbUX9j
  ubuntu@plana48.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1ScpN+amKWlGYSJOt8sqzLTCXYaU5QTqB3bbss463/VRcDQwabXs6tib54CO6FzWaloSqMaPZchA4HeNXOwdQ9daBnG1b0vrqs0B5jZnCVzvK9AoWGDg0tm68PWYr1AtJcNyIsutywVRdzjA2nzSioKU59dKugVog/+pkoB0hGYvXo72pePMV00IrgMr9FSbDnxi3L9iJvi2LD0Pnecx6DMnaDob/T+X5y0piap5esjwIIq7wqvXuEJE9jdmxPfHo3ise2j9UA2SGI5b7HL3YOemo0zic7ukCMvlc8Ag5dQnjANTcj2eUJyagvzJgxGqhoxyAp/WpmaHZkvf0RasB
  ubuntu@plana55.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCdrzGTR0Fbl6sedYlwlX+FlmF6fuE3l/RTu2kzOkmG47rPEn5CI37Injb7Epc50RXCbUIfzmDqtEY6uZT3YssYrE4jvhQlynPndbn1KmiTbgxTyuumGXv7O4OOntezighA1W49phUNZys1DhdEEO8VSQAIdHrBgBLhY9DDgC4LAhrP4BSbDTN0rUXtYYHBj4aa3sJV0o3sKjpsyjjlieEQnto6JkjK6EGZCSuY+AyMZyLJjFTgMwJ9i4aC5eZoWZAWSDfDsxo8PtFR+kjUmz5uiheyn5lAzKBxmd4ZNojf7wOhSGia0ghbtUeQkdoRZXZhP2ourNn3uAguf1xt43kX
  ubuntu@plana78.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCl9XSQkT+TSrTTaMDs/IFmQ3nIyrLNiqLFEahRBSLwIMbVppF2EG9XVT1AkRbul9DbY6BTSVGd/AO205di6kFRv9qpHFPfMqUt3XUATEyBZ7drpwliNlM8taFxb43hPTVwLstqM+0wy1arlan6onug+S45ORvtWaKfo4w6e6DcQDKuH4N718mNzMk/ePkr+UbmeGLSZnxKOvKSfA4vFWzEpEJEQlpqSbCQzAl06qTxE9rMCAyQkSLPrr6UmEG1FY0ZX3M1vZYzV5MCVR1J2xofHvsatfbwZjgzJI2136aC8xOBLN4TT0URQ5pYWxAixEsoDRej2u3eYLWRFa9EyIvd
  ubuntu@plana79.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCx0nVMVy140vXGRPqjqx63mfytPlqmoN7YoJ3Si0ti1XtvJTftB9EdQGwqj/tsY95DeUNBtAQs5TBsiLr1E/JHlKt7EXwyWsJNB2ntvkPJOMxoounypjkVgfv91EWmERQGFsalDmIYjSuSCG28g5Vaz8il9D7fH/ykKZ38EQChhPXIpB2bieJOr2Xm6llde1q2rUEltV17EmiQvu9eUuxb9y9h057k6GSqpsTViPADlT7CG7W60bqWs8d7TvV4rvPhUy6oyUp1ar8116NMSFUiaTgVTidDiQ3xyZeguwJAbzh86MQdHVhSi89W/vjvoEP1opjZP3RArB4BoNwzz/Dh
tasks:
- internal.lock_machines: 5
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock: null
- ceph: null
- rbd.xfstests:
    client.0:
      scratch_image: scratch_image-0
      test_image: test_image-0
      tests: 1-9 11-15 17 19-21 26-29 31-34 41 46-48 50-54 56 61 63-67 69-70 74-76
        78-79 84-89 91
    client.1:
      scratch_image: scratch_image-1
      test_image: test_image-1
      tests: 92 100 103 105 108 110 116-121 124 126 129-132
    client.2:
      scratch_image: scratch_image-2
      test_image: test_image-2
      tests: 133-135 137-141 164-167 184 186-190 192 194 196 199 201 203 214-216 220-227
        234 236-238 241 243-249 253 257-259 261-262 269 273 275 277-278
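
The `tests:` fields above use a compact range syntax ("1-9 11-15 17 ...") to select xfstests cases per client. A minimal sketch of expanding that syntax into an explicit test list (a hypothetical helper, not teuthology's actual parser):

```python
def expand_ranges(spec: str) -> list[int]:
    """Expand a spec like '1-9 11 13-15' into an explicit list of test numbers."""
    out = []
    for token in spec.split():
        if "-" in token:
            lo, hi = token.split("-")
            out.extend(range(int(lo), int(hi) + 1))  # ranges are inclusive
        else:
            out.append(int(token))
    return out
```

For example, `expand_ranges("1-3 5 7-8")` yields `[1, 2, 3, 5, 7, 8]`.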

ubuntu@teuthology:/a/teuthology-2012-12-13_19:00:03-regression-next-testing-basic/13746$ cat summary.yaml 
ceph-sha1: 8cf367cb79046b08cc593b14f77526eef2758ee6
client.0-kernel-sha1: 2978257c56935878f8a756c6cb169b569e99bb91
client.1-kernel-sha1: 2978257c56935878f8a756c6cb169b569e99bb91
client.2-kernel-sha1: 2978257c56935878f8a756c6cb169b569e99bb91
description: collection:kernel-singleton fs:btrfs.yaml msgr-failures:few.yaml tasks:rbd_xfstests.yaml
duration: 5109.355261087418
failure_reason: '''/tmp/cephtest/archive/syslog/kern.log:2012-12-14T00:58:08.429389-08:00
  plana78 kernel: [ 6893.585942] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554

  '' in syslog'
flavor: basic
mon.a-kernel-sha1: 2978257c56935878f8a756c6cb169b569e99bb91
mon.b-kernel-sha1: 2978257c56935878f8a756c6cb169b569e99bb91
owner: scheduled_teuthology@teuthology
success: false
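
The `failure_reason` above shows how the run was flagged: the BUG line appeared in `kern.log` and did not match any `log-whitelist` entry (only "slow request" was whitelisted in this config). A minimal sketch of that kind of syslog scan (illustrative only, not teuthology's actual implementation):

```python
import re

# Patterns the run tolerates, taken from the log-whitelist in config.yaml above.
WHITELIST = ["slow request"]

def find_failures(lines):
    """Return kern.log lines containing a kernel BUG that is not whitelisted."""
    hits = []
    for line in lines:
        if re.search(r"\bBUG\b", line) and not any(w in line for w in WHITELIST):
            hits.append(line)
    return hits
```

Running this over the log excerpt below would surface each "BUG: workqueue leaked lock or atomic" line, which is what caused `success: false`.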

2012-12-14T00:58:00.699214-08:00 plana78 kernel: [ 6885.855893] libceph: osd0 10.214.133.37:6800 socket closed
2012-12-14T00:58:08.299416-08:00 plana78 kernel: [ 6893.441466] quiet_error: 48 callbacks suppressed
2012-12-14T00:58:08.299432-08:00 plana78 kernel: [ 6893.441471] Buffer I/O error on device rbd2, logical block 3072
2012-12-14T00:58:08.314294-08:00 plana78 kernel: [ 6893.456297] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.314308-08:00 plana78 kernel: [ 6893.456410] Buffer I/O error on device rbd2, logical block 3073
2012-12-14T00:58:08.314311-08:00 plana78 kernel: [ 6893.471008] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.314313-08:00 plana78 kernel: [ 6893.471013] Buffer I/O error on device rbd2, logical block 3074
2012-12-14T00:58:08.328913-08:00 plana78 kernel: [ 6893.485604] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.328928-08:00 plana78 kernel: [ 6893.485608] Buffer I/O error on device rbd2, logical block 3075
2012-12-14T00:58:08.343633-08:00 plana78 kernel: [ 6893.500332] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.343647-08:00 plana78 kernel: [ 6893.500337] Buffer I/O error on device rbd2, logical block 3076
2012-12-14T00:58:08.358265-08:00 plana78 kernel: [ 6893.514902] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.358279-08:00 plana78 kernel: [ 6893.514906] Buffer I/O error on device rbd2, logical block 3077
2012-12-14T00:58:08.372730-08:00 plana78 kernel: [ 6893.529342] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.372744-08:00 plana78 kernel: [ 6893.529346] Buffer I/O error on device rbd2, logical block 3078
2012-12-14T00:58:08.387059-08:00 plana78 kernel: [ 6893.543652] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.387074-08:00 plana78 kernel: [ 6893.543656] Buffer I/O error on device rbd2, logical block 3079
2012-12-14T00:58:08.401241-08:00 plana78 kernel: [ 6893.557807] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.401255-08:00 plana78 kernel: [ 6893.557812] Buffer I/O error on device rbd2, logical block 3080
2012-12-14T00:58:08.415372-08:00 plana78 kernel: [ 6893.571898] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.415386-08:00 plana78 kernel: [ 6893.571902] Buffer I/O error on device rbd2, logical block 3081
2012-12-14T00:58:08.429367-08:00 plana78 kernel: [ 6893.585884] lost page write due to I/O error on rbd2
2012-12-14T00:58:08.429389-08:00 plana78 kernel: [ 6893.585942] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:08.471480-08:00 plana78 kernel: [ 6893.611486]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:08.471495-08:00 plana78 kernel: [ 6893.627968] 1 lock held by kworker/0:1/17554:
2012-12-14T00:58:08.471512-08:00 plana78 kernel: [ 6893.627973]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.471515-08:00 plana78 kernel: [ 6893.628000] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:08.471516-08:00 plana78 kernel: [ 6893.628005] Call Trace:
2012-12-14T00:58:08.471526-08:00 plana78 kernel: [ 6893.628016]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:08.471528-08:00 plana78 kernel: [ 6893.628020]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:08.504289-08:00 plana78 kernel: [ 6893.628043]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:08.504304-08:00 plana78 kernel: [ 6893.628050]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:08.504307-08:00 plana78 kernel: [ 6893.628055]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:08.504309-08:00 plana78 kernel: [ 6893.628060]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:08.504311-08:00 plana78 kernel: [ 6893.628066]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:08.504313-08:00 plana78 kernel: [ 6893.628072]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:08.504315-08:00 plana78 kernel: [ 6893.628077]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:08.504317-08:00 plana78 kernel: [ 6893.628081]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:08.504319-08:00 plana78 kernel: [ 6893.628085]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:08.504328-08:00 plana78 kernel: [ 6893.628654] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:08.524428-08:00 plana78 kernel: [ 6893.660738]     last function: con_work+0x0/0x2f60 [libceph]
2012-12-14T00:58:08.524442-08:00 plana78 kernel: [ 6893.680765] 1 lock held by kworker/0:1/17554:
2012-12-14T00:58:08.524446-08:00 plana78 kernel: [ 6893.680767]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.524448-08:00 plana78 kernel: [ 6893.680791] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:08.524449-08:00 plana78 kernel: [ 6893.680793] Call Trace:
2012-12-14T00:58:08.524451-08:00 plana78 kernel: [ 6893.680799]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:08.524453-08:00 plana78 kernel: [ 6893.680803]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:08.524455-08:00 plana78 kernel: [ 6893.680813]  [<ffffffffa04f95e0>] ? ceph_msg_new+0x2e0/0x2e0 [libceph]
2012-12-14T00:58:08.524457-08:00 plana78 kernel: [ 6893.680824]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:08.524474-08:00 plana78 kernel: [ 6893.680834]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:08.524477-08:00 plana78 kernel: [ 6893.680838]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:08.524479-08:00 plana78 kernel: [ 6893.680843]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:08.524481-08:00 plana78 kernel: [ 6893.680848]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:08.524482-08:00 plana78 kernel: [ 6893.680853]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:08.524485-08:00 plana78 kernel: [ 6893.680857]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:08.524486-08:00 plana78 kernel: [ 6893.680861]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:08.524497-08:00 plana78 kernel: [ 6893.680896] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:08.585773-08:00 plana78 kernel: [ 6893.719354]     last function: fb_flashcursor+0x0/0x150
2012-12-14T00:58:08.585787-08:00 plana78 kernel: [ 6893.742034] 1 lock held by kworker/0:1/17554:
2012-12-14T00:58:08.585791-08:00 plana78 kernel: [ 6893.742036]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.585794-08:00 plana78 kernel: [ 6893.742062] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:08.585795-08:00 plana78 kernel: [ 6893.742068] Call Trace:
2012-12-14T00:58:08.585819-08:00 plana78 kernel: [ 6893.742075]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:08.585822-08:00 plana78 kernel: [ 6893.742079]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:08.585824-08:00 plana78 kernel: [ 6893.742084]  [<ffffffff8136a7f0>] ? get_color.isra.16+0x160/0x160
2012-12-14T00:58:08.585826-08:00 plana78 kernel: [ 6893.742089]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:08.585828-08:00 plana78 kernel: [ 6893.742094]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:08.585829-08:00 plana78 kernel: [ 6893.742098]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:08.585832-08:00 plana78 kernel: [ 6893.742103]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:08.585833-08:00 plana78 kernel: [ 6893.742114]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:08.585842-08:00 plana78 kernel: [ 6893.742122]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:08.585844-08:00 plana78 kernel: [ 6893.742127]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:08.585846-08:00 plana78 kernel: [ 6893.742131]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:08.630611-08:00 plana78 kernel: [ 6893.742239] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:08.656671-08:00 plana78 kernel: [ 6893.786813]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:08.708157-08:00 plana78 kernel: [ 6893.812880] 2 locks held by kworker/0:1/17554:
2012-12-14T00:58:08.708172-08:00 plana78 kernel: [ 6893.812882]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.708176-08:00 plana78 kernel: [ 6893.812903]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.708178-08:00 plana78 kernel: [ 6893.812923] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:08.708179-08:00 plana78 kernel: [ 6893.812926] Call Trace:
2012-12-14T00:58:08.708181-08:00 plana78 kernel: [ 6893.812931]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:08.708183-08:00 plana78 kernel: [ 6893.812936]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:08.708186-08:00 plana78 kernel: [ 6893.812950]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:08.708188-08:00 plana78 kernel: [ 6893.812956]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:08.708190-08:00 plana78 kernel: [ 6893.812961]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:08.708191-08:00 plana78 kernel: [ 6893.812965]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:08.708193-08:00 plana78 kernel: [ 6893.812970]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:08.708195-08:00 plana78 kernel: [ 6893.812975]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:08.708197-08:00 plana78 kernel: [ 6893.812980]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:08.708200-08:00 plana78 kernel: [ 6893.812985]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:08.708201-08:00 plana78 kernel: [ 6893.812989]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:08.708203-08:00 plana78 kernel: [ 6893.813152] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:08.737349-08:00 plana78 kernel: [ 6893.864253]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:08.737363-08:00 plana78 kernel: [ 6893.893331] 3 locks held by kworker/0:1/17554:
2012-12-14T00:58:08.737367-08:00 plana78 kernel: [ 6893.893333]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.737370-08:00 plana78 kernel: [ 6893.893355]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.737400-08:00 plana78 kernel: [ 6893.893374]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.737403-08:00 plana78 kernel: [ 6893.893402] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:08.737404-08:00 plana78 kernel: [ 6893.893404] Call Trace:
2012-12-14T00:58:08.737406-08:00 plana78 kernel: [ 6893.893410]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:08.737408-08:00 plana78 kernel: [ 6893.893415]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:08.737411-08:00 plana78 kernel: [ 6893.893435]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:08.737424-08:00 plana78 kernel: [ 6893.893441]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:08.737426-08:00 plana78 kernel: [ 6893.893447]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:08.737428-08:00 plana78 kernel: [ 6893.893451]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:08.737433-08:00 plana78 kernel: [ 6893.893457]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:08.737447-08:00 plana78 kernel: [ 6893.893465]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:08.737450-08:00 plana78 kernel: [ 6893.893470]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:08.737452-08:00 plana78 kernel: [ 6893.893475]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:08.737460-08:00 plana78 kernel: [ 6893.893482]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:08.794870-08:00 plana78 kernel: [ 6893.893641] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:08.827147-08:00 plana78 kernel: [ 6893.950760]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:08.889636-08:00 plana78 kernel: [ 6893.983059] 4 locks held by kworker/0:1/17554:
2012-12-14T00:58:08.889646-08:00 plana78 kernel: [ 6893.983061]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.889649-08:00 plana78 kernel: [ 6893.983083]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.889651-08:00 plana78 kernel: [ 6893.983102]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.889652-08:00 plana78 kernel: [ 6893.983122]  #3:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.889653-08:00 plana78 kernel: [ 6893.983143] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:08.889654-08:00 plana78 kernel: [ 6893.983145] Call Trace:
2012-12-14T00:58:08.889655-08:00 plana78 kernel: [ 6893.983151]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:08.889657-08:00 plana78 kernel: [ 6893.983156]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:08.889658-08:00 plana78 kernel: [ 6893.983170]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:08.889662-08:00 plana78 kernel: [ 6893.983176]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:08.889664-08:00 plana78 kernel: [ 6893.983181]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:08.889665-08:00 plana78 kernel: [ 6893.983186]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:08.889666-08:00 plana78 kernel: [ 6893.983191]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:08.889668-08:00 plana78 kernel: [ 6893.983196]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:08.889669-08:00 plana78 kernel: [ 6893.983201]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:08.889670-08:00 plana78 kernel: [ 6893.983205]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:08.889671-08:00 plana78 kernel: [ 6893.983210]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:08.889673-08:00 plana78 kernel: [ 6893.983294] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:08.923858-08:00 plana78 kernel: [ 6894.045339]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:08.923868-08:00 plana78 kernel: [ 6894.079513] 5 locks held by kworker/0:1/17554:
2012-12-14T00:58:08.923871-08:00 plana78 kernel: [ 6894.079514]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.923873-08:00 plana78 kernel: [ 6894.079527]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.923874-08:00 plana78 kernel: [ 6894.079539]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.923876-08:00 plana78 kernel: [ 6894.079551]  #3:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.923897-08:00 plana78 kernel: [ 6894.079563]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:08.923899-08:00 plana78 kernel: [ 6894.079582] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:08.923900-08:00 plana78 kernel: [ 6894.079583] Call Trace:
2012-12-14T00:58:08.923901-08:00 plana78 kernel: [ 6894.079587]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:08.923903-08:00 plana78 kernel: [ 6894.079590]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:08.923904-08:00 plana78 kernel: [ 6894.079602]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:08.923905-08:00 plana78 kernel: [ 6894.079609]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:08.923916-08:00 plana78 kernel: [ 6894.079614]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:08.923917-08:00 plana78 kernel: [ 6894.079617]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:08.923918-08:00 plana78 kernel: [ 6894.079620]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:08.923920-08:00 plana78 kernel: [ 6894.079624]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:08.923921-08:00 plana78 kernel: [ 6894.079630]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:08.923925-08:00 plana78 kernel: [ 6894.079634]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:08.923927-08:00 plana78 kernel: [ 6894.079637]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:08.991976-08:00 plana78 kernel: [ 6894.079802] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:09.029699-08:00 plana78 kernel: [ 6894.147548]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:09.029709-08:00 plana78 kernel: [ 6894.185198] 6 locks held by kworker/0:1/17554:
2012-12-14T00:58:09.029712-08:00 plana78 kernel: [ 6894.185200]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.029725-08:00 plana78 kernel: [ 6894.185227]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.029728-08:00 plana78 kernel: [ 6894.185242]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.104496-08:00 plana78 kernel: [ 6894.185256]  #3:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.104507-08:00 plana78 kernel: [ 6894.185270]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.104510-08:00 plana78 kernel: [ 6894.185283]  #5:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.104511-08:00 plana78 kernel: [ 6894.185297] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:09.104512-08:00 plana78 kernel: [ 6894.185298] Call Trace:
2012-12-14T00:58:09.104513-08:00 plana78 kernel: [ 6894.185302]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:09.104515-08:00 plana78 kernel: [ 6894.185305]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:09.104516-08:00 plana78 kernel: [ 6894.185314]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:09.104517-08:00 plana78 kernel: [ 6894.185321]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:09.104519-08:00 plana78 kernel: [ 6894.185324]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:09.104520-08:00 plana78 kernel: [ 6894.185327]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:09.104521-08:00 plana78 kernel: [ 6894.185330]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:09.104522-08:00 plana78 kernel: [ 6894.185334]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:09.104524-08:00 plana78 kernel: [ 6894.185337]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:09.104525-08:00 plana78 kernel: [ 6894.185340]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:09.104526-08:00 plana78 kernel: [ 6894.185343]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:09.104527-08:00 plana78 kernel: [ 6894.185436] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:09.145563-08:00 plana78 kernel: [ 6894.259875]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:09.145573-08:00 plana78 kernel: [ 6894.300861] 7 locks held by kworker/0:1/17554:
2012-12-14T00:58:09.145576-08:00 plana78 kernel: [ 6894.300863]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.145588-08:00 plana78 kernel: [ 6894.300890]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.145591-08:00 plana78 kernel: [ 6894.300905]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.227158-08:00 plana78 kernel: [ 6894.300920]  #3:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.227169-08:00 plana78 kernel: [ 6894.300933]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.227172-08:00 plana78 kernel: [ 6894.300946]  #5:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.227173-08:00 plana78 kernel: [ 6894.300959]  #6:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.227175-08:00 plana78 kernel: [ 6894.300973] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:09.227176-08:00 plana78 kernel: [ 6894.300974] Call Trace:
2012-12-14T00:58:09.227177-08:00 plana78 kernel: [ 6894.300978]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:09.227178-08:00 plana78 kernel: [ 6894.300987]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:09.227180-08:00 plana78 kernel: [ 6894.300996]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:09.227181-08:00 plana78 kernel: [ 6894.300999]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:09.227182-08:00 plana78 kernel: [ 6894.301003]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:09.227183-08:00 plana78 kernel: [ 6894.301005]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:09.227184-08:00 plana78 kernel: [ 6894.301009]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:09.227186-08:00 plana78 kernel: [ 6894.301012]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:09.227187-08:00 plana78 kernel: [ 6894.301015]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:09.227188-08:00 plana78 kernel: [ 6894.301018]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:09.227189-08:00 plana78 kernel: [ 6894.301021]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:09.227190-08:00 plana78 kernel: [ 6894.301122] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:09.270475-08:00 plana78 kernel: [ 6894.382309]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:09.270485-08:00 plana78 kernel: [ 6894.425571] 8 locks held by kworker/0:1/17554:
2012-12-14T00:58:09.270501-08:00 plana78 kernel: [ 6894.425574]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.270504-08:00 plana78 kernel: [ 6894.425600]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.353441-08:00 plana78 kernel: [ 6894.425615]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.353452-08:00 plana78 kernel: [ 6894.425629]  #3:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.353454-08:00 plana78 kernel: [ 6894.425642]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.353456-08:00 plana78 kernel: [ 6894.425655]  #5:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.353458-08:00 plana78 kernel: [ 6894.425668]  #6:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.353459-08:00 plana78 kernel: [ 6894.425681]  #7:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.353460-08:00 plana78 kernel: [ 6894.425698] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:09.353461-08:00 plana78 kernel: [ 6894.425700] Call Trace:
2012-12-14T00:58:09.353463-08:00 plana78 kernel: [ 6894.425703]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:09.353464-08:00 plana78 kernel: [ 6894.425707]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:09.353465-08:00 plana78 kernel: [ 6894.425716]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:09.353467-08:00 plana78 kernel: [ 6894.425719]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:09.353468-08:00 plana78 kernel: [ 6894.425723]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:09.353469-08:00 plana78 kernel: [ 6894.425725]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:09.353470-08:00 plana78 kernel: [ 6894.425729]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:09.353471-08:00 plana78 kernel: [ 6894.425732]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:09.353473-08:00 plana78 kernel: [ 6894.425736]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:09.353474-08:00 plana78 kernel: [ 6894.425739]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:09.353475-08:00 plana78 kernel: [ 6894.425742]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:09.353476-08:00 plana78 kernel: [ 6894.425872] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:09.396834-08:00 plana78 kernel: [ 6894.508373]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:09.396844-08:00 plana78 kernel: [ 6894.551696] 9 locks held by kworker/0:1/17554:
2012-12-14T00:58:09.396847-08:00 plana78 kernel: [ 6894.551699]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.396860-08:00 plana78 kernel: [ 6894.551726]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.396862-08:00 plana78 kernel: [ 6894.551741]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.479587-08:00 plana78 kernel: [ 6894.551756]  #3:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.479598-08:00 plana78 kernel: [ 6894.551770]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.479601-08:00 plana78 kernel: [ 6894.551783]  #5:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.479602-08:00 plana78 kernel: [ 6894.551796]  #6:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.479604-08:00 plana78 kernel: [ 6894.551809]  #7:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.479605-08:00 plana78 kernel: [ 6894.551825]  #8:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.479607-08:00 plana78 kernel: [ 6894.551839] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:09.479607-08:00 plana78 kernel: [ 6894.551840] Call Trace:
2012-12-14T00:58:09.479609-08:00 plana78 kernel: [ 6894.551844]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:09.479610-08:00 plana78 kernel: [ 6894.551847]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:09.479612-08:00 plana78 kernel: [ 6894.551856]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:09.479613-08:00 plana78 kernel: [ 6894.551860]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:09.479614-08:00 plana78 kernel: [ 6894.551863]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:09.479615-08:00 plana78 kernel: [ 6894.551866]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:09.479616-08:00 plana78 kernel: [ 6894.551870]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:09.479618-08:00 plana78 kernel: [ 6894.551873]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:09.479619-08:00 plana78 kernel: [ 6894.551876]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:09.479620-08:00 plana78 kernel: [ 6894.551879]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:09.479621-08:00 plana78 kernel: [ 6894.551882]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:09.479622-08:00 plana78 kernel: [ 6894.552021] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:09.522874-08:00 plana78 kernel: [ 6894.634299]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:09.522890-08:00 plana78 kernel: [ 6894.677562] 10 locks held by kworker/0:1/17554:
2012-12-14T00:58:09.605655-08:00 plana78 kernel: [ 6894.677564]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605666-08:00 plana78 kernel: [ 6894.677581]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605669-08:00 plana78 kernel: [ 6894.677595]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605671-08:00 plana78 kernel: [ 6894.677608]  #3:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605672-08:00 plana78 kernel: [ 6894.677622]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605674-08:00 plana78 kernel: [ 6894.677635]  #5:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605675-08:00 plana78 kernel: [ 6894.677655]  #6:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605677-08:00 plana78 kernel: [ 6894.677667]  #7:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605678-08:00 plana78 kernel: [ 6894.677680]  #8:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605680-08:00 plana78 kernel: [ 6894.677693]  #9:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.605681-08:00 plana78 kernel: [ 6894.677706] Pid: 17554, comm: kworker/0:1 Not tainted 3.6.0-ceph-00174-g2978257 #1
2012-12-14T00:58:09.605682-08:00 plana78 kernel: [ 6894.677708] Call Trace:
2012-12-14T00:58:09.605683-08:00 plana78 kernel: [ 6894.677712]  [<ffffffff810719d7>] process_one_work+0x397/0x5f0
2012-12-14T00:58:09.605685-08:00 plana78 kernel: [ 6894.677715]  [<ffffffff81071776>] ? process_one_work+0x136/0x5f0
2012-12-14T00:58:09.605686-08:00 plana78 kernel: [ 6894.677724]  [<ffffffffa03a2300>] ? xfs_destroy_ioend+0x90/0x90 [xfs]
2012-12-14T00:58:09.605687-08:00 plana78 kernel: [ 6894.677728]  [<ffffffff810735fd>] worker_thread+0x18d/0x4f0
2012-12-14T00:58:09.605688-08:00 plana78 kernel: [ 6894.677731]  [<ffffffff81073470>] ? manage_workers+0x320/0x320
2012-12-14T00:58:09.605689-08:00 plana78 kernel: [ 6894.677734]  [<ffffffff810791fe>] kthread+0xae/0xc0
2012-12-14T00:58:09.605691-08:00 plana78 kernel: [ 6894.677738]  [<ffffffff810b393d>] ? trace_hardirqs_on+0xd/0x10
2012-12-14T00:58:09.605692-08:00 plana78 kernel: [ 6894.677741]  [<ffffffff8163f3c4>] kernel_thread_helper+0x4/0x10
2012-12-14T00:58:09.605693-08:00 plana78 kernel: [ 6894.677745]  [<ffffffff816360b0>] ? retint_restore_args+0x13/0x13
2012-12-14T00:58:09.605694-08:00 plana78 kernel: [ 6894.677748]  [<ffffffff81079150>] ? flush_kthread_work+0x1a0/0x1a0
2012-12-14T00:58:09.605695-08:00 plana78 kernel: [ 6894.677751]  [<ffffffff8163f3c0>] ? gs_change+0x13/0x13
2012-12-14T00:58:09.605697-08:00 plana78 kernel: [ 6894.677855] BUG: workqueue leaked lock or atomic: kworker/0:1/0x00000000/17554
2012-12-14T00:58:09.649001-08:00 plana78 kernel: [ 6894.760171]     last function: xfs_end_io+0x0/0x110 [xfs]
2012-12-14T00:58:09.649012-08:00 plana78 kernel: [ 6894.803439] 11 locks held by kworker/0:1/17554:
2012-12-14T00:58:09.649026-08:00 plana78 kernel: [ 6894.803442]  #0:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.649028-08:00 plana78 kernel: [ 6894.803469]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.731893-08:00 plana78 kernel: [ 6894.803484]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.731903-08:00 plana78 kernel: [ 6894.803498]  #3:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.731906-08:00 plana78 kernel: [ 6894.803512]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.731908-08:00 plana78 kernel: [ 6894.803525]  #5:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.731909-08:00 plana78 kernel: [ 6894.803539]  #6:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.731911-08:00 plana78 kernel: [ 6894.803553]  #7:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
2012-12-14T00:58:09.731912-08:00 plana78 kernel: [ 6894.803569]  #8:  (sb_internal#2){.+.+.+}, at: [<ffffffffa03a232b>] xfs_end_io+0x2b/0x110 [xfs]
Actions #1

Updated by Ian Colle over 11 years ago

  • Status changed from New to Won't Fix
  • Assignee set to Alex Elder

XFS bug

Actions #2

Updated by Alex Elder over 11 years ago

When XFS gets an I/O error, there is not a lot it can do.
If it happens to involve user data blocks it could continue
processing (it looks like maybe it does). If it involves any
metadata blocks I'm sure it will shut down the file system.

All that being said, XFS should not be leaking any locks or
atomics, so there probably is an XFS bug in this error path,
or possibly in the generic buffer code (which is where the
"lost page write due to I/O error" message comes from).

I still wonder though, why did XFS get an I/O error?
- If this problem occurred on the rbd client node, then maybe
there is a problem with rbd.
- If this problem occurred on one of the osd nodes, maybe there's
a hardware problem with a disk on that node.

Actions #3

Updated by Alex Elder over 11 years ago

  • Status changed from Won't Fix to In Progress

Answer to my question, based on evidence in this bug:

The control (yaml) file contains this:

overrides:
  ceph:
    fs: btrfs

That means the OSDs are using btrfs (not xfs) for
backing storage.

It therefore looks like rbd responded to some request
with an I/O error, which I think is not supposed to happen.

HOWEVER...

Looking at the test that was underway at the time (138, and
possibly 137, 139 and 140), a forced shutdown of the file
system was initiated by the test, and I believe this can
cause an I/O error to be reported.

So unless I find new information I'm going to assume that's
what happened.

ON THE OTHER HAND...

It's still possible that the leaked atomic or (maybe more
likely) lock actually originated from rbd, so I'm not sure
we're completely out of the woods yet.

I sent a note to the XFS mailing list to report the "leaked
lock or atomic" message to see if anybody else has any
insights.

Actions #4

Updated by Alex Elder over 11 years ago

Dave Chinner responded to my note with a few questions
requesting more information. I spent some time this
morning collecting and analyzing that and sent it back.

I'm fairly convinced now that it's an XFS problem. And I
also think I may have spotted the problem but I'll let Dave
or others focused on XFS figure that out for sure.

In any case, here's what I think is happening. This is the
top of the function that just returned:

STATIC void
xfs_end_io(
	struct work_struct *work)
{
	xfs_ioend_t		*ioend = container_of(work, xfs_ioend_t, io_work);
	struct xfs_inode	*ip = XFS_I(ioend->io_inode);
	int			error = 0;

	if (ioend->io_append_trans) {
		/*
		 * We've got freeze protection passed with the transaction.
		 * Tell lockdep about it.
		 */
		rwsem_acquire_read(
			&ioend->io_inode->i_sb->s_writers.lock_map[SB_FREEZE_FS-1],
			0, 1, _THIS_IP_);
	}
	if (XFS_FORCED_SHUTDOWN(ip->i_mount)) {
		ioend->io_error = -EIO;
		goto done;
	}
	. . .

The rwsem_acquire_read() call is really for the benefit of lockdep,
and informs it that some locking activity is occurring that it might
not automatically be able to recognize. If lock checking is disabled
this function has no effect.

The error appears to occur right after one of the tests initiates a
forced shutdown. Hence I think the XFS_FORCED_SHUTDOWN() call
yields true, so the "goto done" path is taken.

At this point I think the inverse of the rwsem_acquire_read() call
is not being done in this error/shortcut path. And as a result,
lockdep notices that there's a locking mismatch.

That's my theory anyway. We'll see what Dave comes up with.
It's in his hands unless he gives me reason to look a little
closer at rbd again.

I'll update this bug with followup info as it becomes
available.

Actions #5

Updated by Alex Elder over 11 years ago

  • Status changed from In Progress to Won't Fix

I'm fairly sure this is an XFS problem, so as suggested by
Ian I'm marking this "Won't Fix" (again). If new evidence
shows we have more to look at we can reopen it again...

Actions #6

Updated by Alex Elder over 11 years ago

Dave Chinner has confirmed my explanation. The bug no
longer exists (in its current form) in the latest code,
so we'll benefit from that when we update our own tree.

(Me)

I suspect that lockdep-informational call has to be undone
somehow in the error paths.

I think the XFS_FORCED_SHUTDOWN() test immediately following
it may have been taken, and the proper cleanup didn't get
done before the ioend structure was released.

(Dave)
Yeah, that will be the cause - the transaction is not getting
cancelled in this case.

However, the code is different in the current 3.8 tree, and the
transfer of the freeze status occurs after the shutdown check, so
this particular problem is already fixed.
