Bug #2410

hung xfstest #68

Added by Sage Weil almost 12 years ago. Updated over 10 years ago.

Status: Closed
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source: Q/A
Regression: No

Description

27568 ?        S      0:00  |                   \_ /bin/bash ./068
27750 ?        S      0:00  |                       \_ /bin/bash ./068
27756 ?        S      0:00  |                       |   \_ /var/lib/xfstests/ltp/fsstress -d /tmp/cephtest/scratch_mnt.yjGiFNGFxY/fsstress_test_dir -p 2 -n 200
27759 ?        D      0:00  |                       |       \_ /var/lib/xfstests/ltp/fsstress -d /tmp/cephtest/scratch_mnt.yjGiFNGFxY/fsstress_test_dir -p 2 -n 200
27760 ?        D      0:00  |                       |       \_ /var/lib/xfstests/ltp/fsstress -d /tmp/cephtest/scratch_mnt.yjGiFNGFxY/fsstress_test_dir -p 2 -n 200
27769 ?        S      0:00  |                       \_ /bin/sh -f /usr/sbin/xfs_freeze -u /tmp/cephtest/scratch_mnt.yjGiFNGFxY
27773 ?        D      0:00  |                       |   \_ /usr/sbin/xfs_io -F -r -p xfs_freeze -x -c thaw /tmp/cephtest/scratch_mnt.yjGiFNGFxY
27770 ?        S      0:00  |                       \_ tee -a 068.full
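
(068 is the fsstress-under-freeze/thaw test: the tree above shows fsstress still running while xfs_freeze -u tries to thaw the scratch fs. A rough bash sketch of the pattern the test exercises — paths and iteration counts are illustrative, not the test's actual script:)

#!/bin/bash
# Sketch of the xfstests 068 pattern: filesystem activity racing freeze/thaw.
# SCRATCH_MNT and the loop count are illustrative placeholders.
SCRATCH_MNT=/tmp/cephtest/scratch_mnt

# Keep the filesystem busy in the background, as 068 does with fsstress.
/var/lib/xfstests/ltp/fsstress -d "$SCRATCH_MNT/fsstress_test_dir" -p 2 -n 200 &
STRESS_PID=$!

# Meanwhile, freeze and thaw the filesystem repeatedly.
for i in $(seq 1 10); do
    xfs_freeze -f "$SCRATCH_MNT"   # freeze: block new modifications
    sleep 1
    xfs_freeze -u "$SCRATCH_MNT"   # thaw: this is the step pid 27773 is stuck in
done

wait "$STRESS_PID"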

root@plana45:/tmp/cephtest# cat /proc/27773/stack
[<ffffffff8131d823>] call_rwsem_down_write_failed+0x13/0x20
[<ffffffff8117dbc8>] thaw_super+0x28/0xd0
[<ffffffff8118d830>] do_vfs_ioctl+0x390/0x590
[<ffffffff8118dad1>] sys_ioctl+0xa1/0xb0
[<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff
root@plana45:/tmp/cephtest# cat /proc/27760/stack
[<ffffffffa028bbdd>] xfs_trans_alloc+0x5d/0xb0 [xfs]
[<ffffffffa02500a9>] xfs_create+0x159/0x640 [xfs]
[<ffffffffa0244592>] xfs_vn_mknod+0xb2/0x1c0 [xfs]
[<ffffffffa02446d3>] xfs_vn_create+0x13/0x20 [xfs]
[<ffffffff811877b5>] vfs_create+0xa5/0xc0
[<ffffffff81188f0e>] do_last+0x70e/0x840
[<ffffffff81189ee6>] path_openat+0xd6/0x3f0
[<ffffffff8118a319>] do_filp_open+0x49/0xa0
[<ffffffff8117a345>] do_sys_open+0x105/0x1e0
[<ffffffff8117a461>] sys_open+0x21/0x30
[<ffffffff8117a486>] sys_creat+0x16/0x20
[<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff
root@plana45:/tmp/cephtest# cat /proc/27759/stack
[<ffffffff811a4971>] sync_inodes_sb+0x131/0x270
[<ffffffff811a9d98>] __sync_filesystem+0x88/0x90
[<ffffffff811a9dbf>] sync_one_sb+0x1f/0x30
[<ffffffff8117eee7>] iterate_supers+0xb7/0xf0
[<ffffffff811a9e1a>] sys_sync+0x4a/0x70
[<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff
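
(For triage, the same stacks can be grabbed for all the stuck tasks in one pass; the PIDs are the ones from the ps listing above.)

for pid in 27759 27760 27773; do
    echo "=== $pid ==="
    cat /proc/$pid/stack
done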

no requests in flight:

root@plana45:/tmp/cephtest# cat /sys/kernel/debug/ceph/*/osdc
root@plana45:/tmp/cephtest# cat /sys/kernel/debug/ceph/*/monc
have osdmap 7
want next osdmap
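
(The empty osdc output is the key negative result: nothing is waiting on rbd I/O. A simple way to keep an eye on in-flight requests while the test runs:)

watch -n 1 'cat /sys/kernel/debug/ceph/*/osdc'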

ubuntu@plana45:/tmp/cephtest$ binary/usr/local/bin/ceph -s
2012-05-14 21:33:38.342921    pg v2712: 72 pgs: 72 active+clean; 349 MB data, 793 MB used, 2679 GB / 2794 GB avail
2012-05-14 21:33:38.343644   mds e5: 1/1/1 up {0=a=up:active}
2012-05-14 21:33:38.343697   osd e7: 6 osds: 6 up, 6 in
2012-05-14 21:33:38.343852   log 2012-05-14 21:33:35.753749 osd.3 10.214.132.39:6800/14224 1127 : [INF] 1.17 scrub ok
2012-05-14 21:33:38.343952   mon e1: 3 mons at {a=10.214.132.35:6789/0,b=10.214.132.39:6789/0,c=10.214.132.35:6790/0}

[ 2705.317285] XFS (rbd2): Mounting Filesystem
[ 2705.567336] XFS (rbd2): Ending clean mount
[ 2878.505432] INFO: task flush-250:0:21188 blocked for more than 120 seconds.
[ 2878.539421] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2878.599510] flush-250:0     D ffff88020e050000     0 21188      2 0x00000000
[ 2878.599518]  ffff88020d9e3ae0 0000000000000046 ffff88020c450000 0000000000013b40
[ 2878.599525]  ffff88020d9e3fd8 ffff88020d9e2010 0000000000013b40 0000000000013b40
[ 2878.599536]  ffff88020d9e3fd8 0000000000013b40 ffff880223d23f00 ffff88020c450000
[ 2878.599547] Call Trace:
[ 2878.599557]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 2878.599597]  [<ffffffffa028bbdd>] xfs_trans_alloc+0x5d/0xb0 [xfs]
[ 2878.599605]  [<ffffffff81072d80>] ? wake_up_bit+0x40/0x40
[ 2878.599630]  [<ffffffffa024a52d>] xfs_log_dirty_inode+0x4d/0xd0 [xfs]
[ 2878.599655]  [<ffffffffa024844b>] xfs_fs_write_inode+0x6b/0x1e0 [xfs]
[ 2878.599663]  [<ffffffff81323d0e>] ? do_raw_spin_unlock+0x5e/0xb0
[ 2878.599672]  [<ffffffff811a3df5>] writeback_single_inode+0x205/0x420
[ 2878.599679]  [<ffffffff811a41be>] writeback_sb_inodes+0x1ae/0x280
[ 2878.599685]  [<ffffffff811a4da9>] wb_writeback+0xf9/0x2d0
[ 2878.599692]  [<ffffffff810aa3c5>] ? trace_hardirqs_on_caller+0x105/0x190
[ 2878.599698]  [<ffffffff811a501d>] wb_do_writeback+0x9d/0x250
[ 2878.599704]  [<ffffffff810aa45d>] ? trace_hardirqs_on+0xd/0x10
[ 2878.599710]  [<ffffffff8105e107>] ? del_timer+0x87/0xf0
[ 2878.599716]  [<ffffffff811a527a>] bdi_writeback_thread+0xaa/0x240
[ 2878.599722]  [<ffffffff811a51d0>] ? wb_do_writeback+0x250/0x250
[ 2878.599728]  [<ffffffff8107280e>] kthread+0xbe/0xd0
[ 2878.599735]  [<ffffffff8161f5f4>] kernel_thread_helper+0x4/0x10
[ 2878.599742]  [<ffffffff81616134>] ? retint_restore_args+0x13/0x13
[ 2878.599749]  [<ffffffff81072750>] ? __init_kthread_worker+0x70/0x70
[ 2878.599755]  [<ffffffff8161f5f0>] ? gs_change+0x13/0x13
[ 2878.599759] no locks held by flush-250:0/21188.
[ 2878.599763] INFO: task fsstress:27759 blocked for more than 120 seconds.
[ 2878.632904] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2878.692387] fsstress        D ffff8801ae76de80     0 27759  27756 0x00000000
[ 2878.692394]  ffff880221585c28 0000000000000046 ffff8801ae76de80 0000000000013b40
[ 2878.692405]  ffff880221585fd8 ffff880221584010 0000000000013b40 0000000000013b40
[ 2878.692415]  ffff880221585fd8 0000000000013b40 ffff880223d13f00 ffff8801ae76de80
[ 2878.692426] Call Trace:
[ 2878.692433]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 2878.692439]  [<ffffffff81612775>] schedule_timeout+0x235/0x310
[ 2878.692446]  [<ffffffff810aa05d>] ? mark_held_locks+0x7d/0x120
[ 2878.692452]  [<ffffffff81615e10>] ? _raw_spin_unlock_irqrestore+0x40/0x70
[ 2878.692459]  [<ffffffff81615dc0>] ? _raw_spin_unlock_irq+0x30/0x40
[ 2878.692466]  [<ffffffff810aa3c5>] ? trace_hardirqs_on_caller+0x105/0x190
[ 2878.692472]  [<ffffffff8161464f>] wait_for_common+0xcf/0x170
[ 2878.692479]  [<ffffffff81081e90>] ? try_to_wake_up+0x300/0x300
[ 2878.692486]  [<ffffffff81615d84>] ? _raw_spin_unlock_bh+0x34/0x40
[ 2878.692492]  [<ffffffff811a9da0>] ? __sync_filesystem+0x90/0x90
[ 2878.692498]  [<ffffffff816147cd>] wait_for_completion+0x1d/0x20
[ 2878.692504]  [<ffffffff811a4971>] sync_inodes_sb+0x131/0x270
[ 2878.692511]  [<ffffffff811a9da0>] ? __sync_filesystem+0x90/0x90
[ 2878.692516]  [<ffffffff811a9d98>] __sync_filesystem+0x88/0x90
[ 2878.692522]  [<ffffffff811a9dbf>] sync_one_sb+0x1f/0x30
[ 2878.692529]  [<ffffffff8117eee7>] iterate_supers+0xb7/0xf0
[ 2878.692535]  [<ffffffff811a9e1a>] sys_sync+0x4a/0x70
[ 2878.692542]  [<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[ 2878.692546] 1 lock held by fsstress/27759:
[ 2878.692549]  #0:  (&type->s_umount_key#40){++++++}, at: [<ffffffff8117eed1>] iterate_supers+0xa1/0xf0
[ 2878.692565] INFO: task fsstress:27760 blocked for more than 120 seconds.
[ 2878.726279] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2878.786612] fsstress        D 0000000000000000     0 27760  27756 0x00000000
[ 2878.786619]  ffff88020da31b18 0000000000000046 ffff8801ae768000 0000000000013b40
[ 2878.786628]  ffff88020da31fd8 ffff88020da30010 0000000000013b40 0000000000013b40
[ 2878.786638]  ffff88020da31fd8 0000000000013b40 ffffffff81c0d020 ffff8801ae768000
[ 2878.786649] Call Trace:
[ 2878.786655]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 2878.786688]  [<ffffffffa028bbdd>] xfs_trans_alloc+0x5d/0xb0 [xfs]
[ 2878.786696]  [<ffffffff81072d80>] ? wake_up_bit+0x40/0x40
[ 2878.786721]  [<ffffffffa02500a9>] xfs_create+0x159/0x640 [xfs]
[ 2878.786728]  [<ffffffff811919f5>] ? d_rehash+0x25/0x40
[ 2878.786752]  [<ffffffffa0244592>] xfs_vn_mknod+0xb2/0x1c0 [xfs]
[ 2878.786775]  [<ffffffffa02446d3>] xfs_vn_create+0x13/0x20 [xfs]
[ 2878.786783]  [<ffffffff811877b5>] vfs_create+0xa5/0xc0
[ 2878.786790]  [<ffffffff81188f0e>] do_last+0x70e/0x840
[ 2878.786796]  [<ffffffff81189ee6>] path_openat+0xd6/0x3f0
[ 2878.786804]  [<ffffffff81142c35>] ? might_fault+0x45/0xa0
[ 2878.786811]  [<ffffffff81142c35>] ? might_fault+0x45/0xa0
[ 2878.786816]  [<ffffffff8118a319>] do_filp_open+0x49/0xa0
[ 2878.786823]  [<ffffffff81615e6b>] ? _raw_spin_unlock+0x2b/0x40
[ 2878.786828]  [<ffffffff8119829a>] ? alloc_fd+0xfa/0x140
[ 2878.786836]  [<ffffffff8117a345>] do_sys_open+0x105/0x1e0
[ 2878.786842]  [<ffffffff810aa3c5>] ? trace_hardirqs_on_caller+0x105/0x190
[ 2878.786849]  [<ffffffff8117a461>] sys_open+0x21/0x30
[ 2878.786855]  [<ffffffff8117a486>] sys_creat+0x16/0x20
[ 2878.786861]  [<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[ 2878.786866] 1 lock held by fsstress/27760:
[ 2878.786868]  #0:  (&type->i_mutex_dir_key#9){+.+.+.}, at: [<ffffffff81188ad8>] do_last+0x2d8/0x840
[ 2878.786885] INFO: task xfs_io:27773 blocked for more than 120 seconds.
[ 2878.821068] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2878.885064] xfs_io          D ffff88021c683870     0 27773  27769 0x00000000
[ 2878.885071]  ffff88020f869d68 0000000000000046 ffff880221788000 0000000000013b40
[ 2878.885081]  ffff88020f869fd8 ffff88020f868010 0000000000013b40 0000000000013b40
[ 2878.885091]  ffff88020f869fd8 0000000000013b40 ffff880223d8bf00 ffff880221788000
[ 2878.885103] Call Trace:
[ 2878.885109]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 2878.885116]  [<ffffffff81615485>] rwsem_down_failed_common+0xc5/0x160
[ 2878.885122]  [<ffffffff81615533>] rwsem_down_write_failed+0x13/0x20
[ 2878.885131]  [<ffffffff8131d823>] call_rwsem_down_write_failed+0x13/0x20
[ 2878.885138]  [<ffffffff816136b2>] ? down_write+0x52/0x60
[ 2878.885145]  [<ffffffff8117dbc8>] ? thaw_super+0x28/0xd0
[ 2878.885150]  [<ffffffff8117dbc8>] thaw_super+0x28/0xd0
[ 2878.885157]  [<ffffffff811870c5>] ? putname+0x35/0x50
[ 2878.885163]  [<ffffffff8118d830>] do_vfs_ioctl+0x390/0x590
[ 2878.885169]  [<ffffffff810aa45d>] ? trace_hardirqs_on+0xd/0x10
[ 2878.885176]  [<ffffffff81616119>] ? retint_swapgs+0x13/0x1b
[ 2878.885181]  [<ffffffff8118dad1>] sys_ioctl+0xa1/0xb0
[ 2878.885188]  [<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[ 2878.885193] 1 lock held by xfs_io/27773:
[ 2878.885195]  #0:  (&type->s_umount_key#40){++++++}, at: [<ffffffff8117dbc8>] thaw_super+0x28/0xd0
[ 2998.675898] INFO: task flush-250:0:21188 blocked for more than 120 seconds.
[ 2998.712733] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2998.780803] flush-250:0     D ffff88020e050000     0 21188      2 0x00000000
[ 2998.780810]  ffff88020d9e3ae0 0000000000000046 ffff88020c450000 0000000000013b40
[ 2998.780818]  ffff88020d9e3fd8 ffff88020d9e2010 0000000000013b40 0000000000013b40
[ 2998.780825]  ffff88020d9e3fd8 0000000000013b40 ffff880223d23f00 ffff88020c450000
[ 2998.780832] Call Trace:
[ 2998.780839]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 2998.780872]  [<ffffffffa028bbdd>] xfs_trans_alloc+0x5d/0xb0 [xfs]
[ 2998.780879]  [<ffffffff81072d80>] ? wake_up_bit+0x40/0x40
[ 2998.780905]  [<ffffffffa024a52d>] xfs_log_dirty_inode+0x4d/0xd0 [xfs]
[ 2998.780930]  [<ffffffffa024844b>] xfs_fs_write_inode+0x6b/0x1e0 [xfs]
[ 2998.780937]  [<ffffffff81323d0e>] ? do_raw_spin_unlock+0x5e/0xb0
[ 2998.780944]  [<ffffffff811a3df5>] writeback_single_inode+0x205/0x420
[ 2998.780951]  [<ffffffff811a41be>] writeback_sb_inodes+0x1ae/0x280
[ 2998.780957]  [<ffffffff811a4da9>] wb_writeback+0xf9/0x2d0
[ 2998.780964]  [<ffffffff810aa3c5>] ? trace_hardirqs_on_caller+0x105/0x190
[ 2998.780970]  [<ffffffff811a501d>] wb_do_writeback+0x9d/0x250
[ 2998.780977]  [<ffffffff810aa45d>] ? trace_hardirqs_on+0xd/0x10
[ 2998.780982]  [<ffffffff8105e107>] ? del_timer+0x87/0xf0
[ 2998.780988]  [<ffffffff811a527a>] bdi_writeback_thread+0xaa/0x240
[ 2998.780994]  [<ffffffff811a51d0>] ? wb_do_writeback+0x250/0x250
[ 2998.781000]  [<ffffffff8107280e>] kthread+0xbe/0xd0
[ 2998.781007]  [<ffffffff8161f5f4>] kernel_thread_helper+0x4/0x10
[ 2998.781014]  [<ffffffff81616134>] ? retint_restore_args+0x13/0x13
[ 2998.781020]  [<ffffffff81072750>] ? __init_kthread_worker+0x70/0x70
[ 2998.781026]  [<ffffffff8161f5f0>] ? gs_change+0x13/0x13
[ 2998.781030] no locks held by flush-250:0/21188.
[ 2998.781034] INFO: task fsstress:27759 blocked for more than 120 seconds.
[ 2998.819740] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2998.892054] fsstress        D ffff8801ae76de80     0 27759  27756 0x00000000
[ 2998.892060]  ffff880221585c28 0000000000000046 ffff8801ae76de80 0000000000013b40
[ 2998.892067]  ffff880221585fd8 ffff880221584010 0000000000013b40 0000000000013b40
[ 2998.892074]  ffff880221585fd8 0000000000013b40 ffff880223d13f00 ffff8801ae76de80
[ 2998.892081] Call Trace:
[ 2998.892087]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 2998.892092]  [<ffffffff81612775>] schedule_timeout+0x235/0x310
[ 2998.892098]  [<ffffffff810aa05d>] ? mark_held_locks+0x7d/0x120
[ 2998.892103]  [<ffffffff81615e10>] ? _raw_spin_unlock_irqrestore+0x40/0x70
[ 2998.892109]  [<ffffffff81615dc0>] ? _raw_spin_unlock_irq+0x30/0x40
[ 2998.892114]  [<ffffffff810aa3c5>] ? trace_hardirqs_on_caller+0x105/0x190
[ 2998.892120]  [<ffffffff8161464f>] wait_for_common+0xcf/0x170
[ 2998.892127]  [<ffffffff81081e90>] ? try_to_wake_up+0x300/0x300
[ 2998.892133]  [<ffffffff81615d84>] ? _raw_spin_unlock_bh+0x34/0x40
[ 2998.892139]  [<ffffffff811a9da0>] ? __sync_filesystem+0x90/0x90
[ 2998.892145]  [<ffffffff816147cd>] wait_for_completion+0x1d/0x20
[ 2998.892152]  [<ffffffff811a4971>] sync_inodes_sb+0x131/0x270
[ 2998.892159]  [<ffffffff811a9da0>] ? __sync_filesystem+0x90/0x90
[ 2998.892164]  [<ffffffff811a9d98>] __sync_filesystem+0x88/0x90
[ 2998.892170]  [<ffffffff811a9dbf>] sync_one_sb+0x1f/0x30
[ 2998.892176]  [<ffffffff8117eee7>] iterate_supers+0xb7/0xf0
[ 2998.892181]  [<ffffffff811a9e1a>] sys_sync+0x4a/0x70
[ 2998.892188]  [<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[ 2998.892193] 1 lock held by fsstress/27759:
[ 2998.892195]  #0:  (&type->s_umount_key#40){++++++}, at: [<ffffffff8117eed1>] iterate_supers+0xa1/0xf0
[ 2998.892212] INFO: task fsstress:27760 blocked for more than 120 seconds.
[ 2998.932704] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2999.008989] fsstress        D 0000000000000000     0 27760  27756 0x00000000
[ 2999.008996]  ffff88020da31b18 0000000000000046 ffff8801ae768000 0000000000013b40
[ 2999.009003]  ffff88020da31fd8 ffff88020da30010 0000000000013b40 0000000000013b40
[ 2999.009010]  ffff88020da31fd8 0000000000013b40 ffffffff81c0d020 ffff8801ae768000
[ 2999.009017] Call Trace:
[ 2999.009024]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 2999.009057]  [<ffffffffa028bbdd>] xfs_trans_alloc+0x5d/0xb0 [xfs]
[ 2999.009064]  [<ffffffff81072d80>] ? wake_up_bit+0x40/0x40
[ 2999.009090]  [<ffffffffa02500a9>] xfs_create+0x159/0x640 [xfs]
[ 2999.009096]  [<ffffffff811919f5>] ? d_rehash+0x25/0x40
[ 2999.009120]  [<ffffffffa0244592>] xfs_vn_mknod+0xb2/0x1c0 [xfs]
[ 2999.009144]  [<ffffffffa02446d3>] xfs_vn_create+0x13/0x20 [xfs]
[ 2999.009152]  [<ffffffff811877b5>] vfs_create+0xa5/0xc0
[ 2999.009158]  [<ffffffff81188f0e>] do_last+0x70e/0x840
[ 2999.009164]  [<ffffffff81189ee6>] path_openat+0xd6/0x3f0
[ 2999.009171]  [<ffffffff81142c35>] ? might_fault+0x45/0xa0
[ 2999.009177]  [<ffffffff81142c35>] ? might_fault+0x45/0xa0
[ 2999.009183]  [<ffffffff8118a319>] do_filp_open+0x49/0xa0
[ 2999.009190]  [<ffffffff81615e6b>] ? _raw_spin_unlock+0x2b/0x40
[ 2999.009196]  [<ffffffff8119829a>] ? alloc_fd+0xfa/0x140
[ 2999.009203]  [<ffffffff8117a345>] do_sys_open+0x105/0x1e0
[ 2999.009209]  [<ffffffff810aa3c5>] ? trace_hardirqs_on_caller+0x105/0x190
[ 2999.009216]  [<ffffffff8117a461>] sys_open+0x21/0x30
[ 2999.009222]  [<ffffffff8117a486>] sys_creat+0x16/0x20
[ 2999.009228]  [<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[ 2999.009233] 1 lock held by fsstress/27760:
[ 2999.009235]  #0:  (&type->i_mutex_dir_key#9){+.+.+.}, at: [<ffffffff81188ad8>] do_last+0x2d8/0x840
[ 2999.009252] INFO: task xfs_io:27773 blocked for more than 120 seconds.
[ 2999.051504] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2999.131557] xfs_io          D ffff88021c683870     0 27773  27769 0x00000000
[ 2999.131563]  ffff88020f869d68 0000000000000046 ffff880221788000 0000000000013b40
[ 2999.131571]  ffff88020f869fd8 ffff88020f868010 0000000000013b40 0000000000013b40
[ 2999.131578]  ffff88020f869fd8 0000000000013b40 ffff880223d8bf00 ffff880221788000
[ 2999.131585] Call Trace:
[ 2999.131590]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 2999.131596]  [<ffffffff81615485>] rwsem_down_failed_common+0xc5/0x160
[ 2999.131601]  [<ffffffff81615533>] rwsem_down_write_failed+0x13/0x20
[ 2999.131607]  [<ffffffff8131d823>] call_rwsem_down_write_failed+0x13/0x20
[ 2999.131612]  [<ffffffff816136b2>] ? down_write+0x52/0x60
[ 2999.131617]  [<ffffffff8117dbc8>] ? thaw_super+0x28/0xd0
[ 2999.131623]  [<ffffffff8117dbc8>] thaw_super+0x28/0xd0
[ 2999.131629]  [<ffffffff811870c5>] ? putname+0x35/0x50
[ 2999.131635]  [<ffffffff8118d830>] do_vfs_ioctl+0x390/0x590
[ 2999.131641]  [<ffffffff810aa45d>] ? trace_hardirqs_on+0xd/0x10
[ 2999.131647]  [<ffffffff81616119>] ? retint_swapgs+0x13/0x1b
[ 2999.131653]  [<ffffffff8118dad1>] sys_ioctl+0xa1/0xb0
[ 2999.131659]  [<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[ 2999.131664] 1 lock held by xfs_io/27773:
[ 2999.131667]  #0:  (&type->s_umount_key#40){++++++}, at: [<ffffffff8117dbc8>] thaw_super+0x28/0xd0
[ 3118.916298] INFO: task flush-250:0:21188 blocked for more than 120 seconds.
[ 3118.961379] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3119.046925] flush-250:0     D ffff88020e050000     0 21188      2 0x00000000
[ 3119.046932]  ffff88020d9e3ae0 0000000000000046 ffff88020c450000 0000000000013b40
[ 3119.046941]  ffff88020d9e3fd8 ffff88020d9e2010 0000000000013b40 0000000000013b40
[ 3119.046951]  ffff88020d9e3fd8 0000000000013b40 ffff880223d23f00 ffff88020c450000
[ 3119.046962] Call Trace:
[ 3119.046971]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 3119.047004]  [<ffffffffa028bbdd>] xfs_trans_alloc+0x5d/0xb0 [xfs]
[ 3119.047012]  [<ffffffff81072d80>] ? wake_up_bit+0x40/0x40
[ 3119.047037]  [<ffffffffa024a52d>] xfs_log_dirty_inode+0x4d/0xd0 [xfs]
[ 3119.047062]  [<ffffffffa024844b>] xfs_fs_write_inode+0x6b/0x1e0 [xfs]
[ 3119.047068]  [<ffffffff81323d0e>] ? do_raw_spin_unlock+0x5e/0xb0
[ 3119.047075]  [<ffffffff811a3df5>] writeback_single_inode+0x205/0x420
[ 3119.047082]  [<ffffffff811a41be>] writeback_sb_inodes+0x1ae/0x280
[ 3119.047088]  [<ffffffff811a4da9>] wb_writeback+0xf9/0x2d0
[ 3119.047095]  [<ffffffff810aa3c5>] ? trace_hardirqs_on_caller+0x105/0x190
[ 3119.047101]  [<ffffffff811a501d>] wb_do_writeback+0x9d/0x250
[ 3119.047107]  [<ffffffff810aa45d>] ? trace_hardirqs_on+0xd/0x10
[ 3119.047113]  [<ffffffff8105e107>] ? del_timer+0x87/0xf0
[ 3119.047119]  [<ffffffff811a527a>] bdi_writeback_thread+0xaa/0x240
[ 3119.047125]  [<ffffffff811a51d0>] ? wb_do_writeback+0x250/0x250
[ 3119.047131]  [<ffffffff8107280e>] kthread+0xbe/0xd0
[ 3119.047138]  [<ffffffff8161f5f4>] kernel_thread_helper+0x4/0x10
[ 3119.047145]  [<ffffffff81616134>] ? retint_restore_args+0x13/0x13
[ 3119.047151]  [<ffffffff81072750>] ? __init_kthread_worker+0x70/0x70
[ 3119.047158]  [<ffffffff8161f5f0>] ? gs_change+0x13/0x13
[ 3119.047162] no locks held by flush-250:0/21188.
[ 3119.047166] INFO: task fsstress:27759 blocked for more than 120 seconds.
[ 3119.093845] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 3119.182787] fsstress        D ffff8801ae76de80     0 27759  27756 0x00000000
[ 3119.182792]  ffff880221585c28 0000000000000046 ffff8801ae76de80 0000000000013b40
[ 3119.182797]  ffff880221585fd8 ffff880221584010 0000000000013b40 0000000000013b40
[ 3119.182801]  ffff880221585fd8 0000000000013b40 ffff880223d13f00 ffff8801ae76de80
[ 3119.182806] Call Trace:
[ 3119.182811]  [<ffffffff8161480f>] schedule+0x3f/0x60
[ 3119.182815]  [<ffffffff81612775>] schedule_timeout+0x235/0x310
[ 3119.182819]  [<ffffffff810aa05d>] ? mark_held_locks+0x7d/0x120
[ 3119.182824]  [<ffffffff81615e10>] ? _raw_spin_unlock_irqrestore+0x40/0x70
[ 3119.182828]  [<ffffffff81615dc0>] ? _raw_spin_unlock_irq+0x30/0x40
[ 3119.182832]  [<ffffffff810aa3c5>] ? trace_hardirqs_on_caller+0x105/0x190
[ 3119.182837]  [<ffffffff8161464f>] wait_for_common+0xcf/0x170
[ 3119.182841]  [<ffffffff81081e90>] ? try_to_wake_up+0x300/0x300
[ 3119.182846]  [<ffffffff81615d84>] ? _raw_spin_unlock_bh+0x34/0x40
[ 3119.182850]  [<ffffffff811a9da0>] ? __sync_filesystem+0x90/0x90
[ 3119.182855]  [<ffffffff816147cd>] wait_for_completion+0x1d/0x20
[ 3119.182860]  [<ffffffff811a4971>] sync_inodes_sb+0x131/0x270
[ 3119.182865]  [<ffffffff811a9da0>] ? __sync_filesystem+0x90/0x90
[ 3119.182869]  [<ffffffff811a9d98>] __sync_filesystem+0x88/0x90
[ 3119.182873]  [<ffffffff811a9dbf>] sync_one_sb+0x1f/0x30
[ 3119.182878]  [<ffffffff8117eee7>] iterate_supers+0xb7/0xf0
[ 3119.182883]  [<ffffffff811a9e1a>] sys_sync+0x4a/0x70
[ 3119.182888]  [<ffffffff8161e1a9>] system_call_fastpath+0x16/0x1b
[ 3119.182892] 1 lock held by fsstress/27759:
[ 3119.182894]  #0:  (&type->s_umount_key#40){++++++}, at: [<ffffffff8117eed1>] iterate_supers+0xa1/0xf0
[ 3542.438013] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,user_xattr,user_xattr
[ 3605.617947] libceph: osd4 10.214.132.39:6803 socket closed
[ 4412.261444] libceph: osd1 10.214.132.35:6803 socket closed
[ 4504.400876] libceph: osd4 10.214.132.39:6803 socket closed
[ 5310.964815] libceph: osd1 10.214.132.35:6803 socket closed
[ 5403.157403] libceph: osd4 10.214.132.39:6803 socket closed
[ 6209.747302] libceph: osd1 10.214.132.35:6803 socket closed
[ 6301.936520] libceph: osd4 10.214.132.39:6803 socket closed
[ 7108.529982] libceph: osd1 10.214.132.35:6803 socket closed
[ 7200.663628] libceph: osd4 10.214.132.39:6803 socket closed
[ 8007.300816] libceph: osd1 10.214.132.35:6803 socket closed
[ 8099.428977] libceph: osd4 10.214.132.39:6803 socket closed
[ 8906.028953] libceph: osd1 10.214.132.35:6803 socket closed
[ 8998.159201] libceph: osd4 10.214.132.39:6803 socket closed
[ 9804.791744] libceph: osd1 10.214.132.35:6803 socket closed
[ 9896.923221] libceph: osd4 10.214.132.39:6803 socket closed
[10703.513695] libceph: osd1 10.214.132.35:6803 socket closed
[10795.690020] libceph: osd4 10.214.132.39:6803 socket closed
[11602.282116] libceph: osd1 10.214.132.35:6803 socket closed
[11694.461941] libceph: osd4 10.214.132.39:6803 socket closed
[12501.005286] libceph: osd1 10.214.132.35:6803 socket closed
[12593.162602] libceph: osd4 10.214.132.39:6803 socket closed
[13399.729361] libceph: osd1 10.214.132.35:6803 socket closed
[13491.904459] libceph: osd4 10.214.132.39:6803 socket closed
[14298.448510] libceph: osd1 10.214.132.35:6803 socket closed
[14298.469524] libceph: osd1 10.214.132.35:6803 connect authorization failure
[14390.667972] libceph: osd4 10.214.132.39:6803 socket closed
[14390.687656] libceph: osd4 10.214.132.39:6803 connect authorization failure
[15197.989602] libceph: osd1 10.214.132.35:6803 socket closed
[15290.621621] libceph: osd4 10.214.132.39:6803 socket closed
[16096.739395] libceph: osd1 10.214.132.35:6803 socket closed
[16096.755643] libceph: osd1 10.214.132.35:6803 connect authorization failure
[16189.333925] libceph: osd4 10.214.132.39:6803 socket closed
[16189.351098] libceph: osd4 10.214.132.39:6803 connect authorization failure
[16996.758728] libceph: osd1 10.214.132.35:6803 socket closed
[17088.683455] libceph: osd4 10.214.132.39:6803 socket closed
[17895.509620] libceph: osd1 10.214.132.35:6803 socket closed
[17895.527631] libceph: osd1 10.214.132.35:6803 connect authorization failure
[17987.439070] libceph: osd4 10.214.132.39:6803 socket closed
[17987.458057] libceph: osd4 10.214.132.39:6803 connect authorization failure
[18794.733864] libceph: osd1 10.214.132.35:6803 socket closed
[18887.457991] libceph: osd4 10.214.132.39:6803 socket closed
[19693.491166] libceph: osd1 10.214.132.35:6803 socket closed
[19693.511198] libceph: osd1 10.214.132.35:6803 connect authorization failure
[19786.219440] libceph: osd4 10.214.132.39:6803 socket closed
[19786.240244] libceph: osd4 10.214.132.39:6803 connect authorization failure
[20592.602092] libceph: osd1 10.214.132.35:6803 socket closed
[20685.248602] libceph: osd4 10.214.132.39:6803 socket closed
[21491.360715] libceph: osd1 10.214.132.35:6803 socket closed
[21491.382413] libceph: osd1 10.214.132.35:6803 connect authorization failure
[21584.008206] libceph: osd4 10.214.132.39:6803 socket closed
[21584.030711] libceph: osd4 10.214.132.39:6803 connect authorization failure
[22390.464577] libceph: osd1 10.214.132.35:6803 socket closed
[22483.074575] libceph: osd4 10.214.132.39:6803 socket closed
[23289.223081] libceph: osd1 10.214.132.35:6803 socket closed
[23289.246184] libceph: osd1 10.214.132.35:6803 connect authorization failure
[23381.807785] libceph: osd4 10.214.132.39:6803 socket closed
[23381.830917] libceph: osd4 10.214.132.39:6803 connect authorization failure
[24189.175889] libceph: osd1 10.214.132.35:6803 socket closed
[24281.045354] libceph: osd4 10.214.132.39:6803 socket closed
[25087.899002] libceph: osd1 10.214.132.35:6803 socket closed
[25087.922109] libceph: osd1 10.214.132.35:6803 connect authorization failure
[25179.738671] libceph: osd4 10.214.132.39:6803 socket closed
[25179.761844] libceph: osd4 10.214.132.39:6803 connect authorization failure
[25987.009541] libceph: osd1 10.214.132.35:6803 socket closed
[26079.029581] libceph: osd4 10.214.132.39:6803 socket closed
[26885.772289] libceph: osd1 10.214.132.35:6803 socket closed
[26885.795330] libceph: osd1 10.214.132.35:6803 connect authorization failure
[26977.791640] libceph: osd4 10.214.132.39:6803 socket closed
[26977.814486] libceph: osd4 10.214.132.39:6803 connect authorization failure
[27785.055352] libceph: osd1 10.214.132.35:6803 socket closed
[27877.753597] libceph: osd4 10.214.132.39:6803 socket closed
[28683.819203] libceph: osd1 10.214.132.35:6803 socket closed
[28683.842182] libceph: osd1 10.214.132.35:6803 connect authorization failure
[28776.517540] libceph: osd4 10.214.132.39:6803 socket closed
[28776.540666] libceph: osd4 10.214.132.39:6803 connect authorization failure
[29583.771568] libceph: osd1 10.214.132.35:6803 socket closed
[29675.609244] libceph: osd4 10.214.132.39:6803 socket closed
[30482.535524] libceph: osd1 10.214.132.35:6803 socket closed
[30482.558674] libceph: osd1 10.214.132.35:6803 connect authorization failure
[30574.377152] libceph: osd4 10.214.132.39:6803 socket closed
[30574.400323] libceph: osd4 10.214.132.39:6803 connect authorization failure
[31381.637776] libceph: osd1 10.214.132.35:6803 socket closed
[31473.478638] libceph: osd4 10.214.132.39:6803 socket closed
[32280.402043] libceph: osd1 10.214.132.35:6803 socket closed
[32280.425063] libceph: osd1 10.214.132.35:6803 connect authorization failure
[32372.242179] libceph: osd4 10.214.132.39:6803 socket closed
[32372.265308] libceph: osd4 10.214.132.39:6803 connect authorization failure
[33179.424676] libceph: osd1 10.214.132.35:6803 socket closed
[33271.494946] libceph: osd4 10.214.132.39:6803 socket closed
[33271.518741] libceph: osd4 10.214.132.39:6803 connect authorization failure

ubuntu@teuthology:/a/nightly_coverage_2012-05-14-b/1410

Associated revisions

Revision 97c9f014
Added by Sage Weil almost 12 years ago

qa: disable xfstest 68 for now

Stop the qa noise while we fix #2410. Looks like a freeze/thaw thing.

Maybe Jan's new freeze/thaw code will address this? That's probably
wishful thinking.

Signed-off-by: Sage Weil <>
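
(The qa change itself is not quoted here; disabling a single xfstest usually just means filtering it out of the list handed to xfstests' ./check. A sketch along those lines — the variable name and list contents are hypothetical:)

# Hypothetical: drop 068 from the tests passed to xfstests' ./check.
TESTS="062 064 065 066 067 068 069"
TESTS=$(echo "$TESTS" | tr ' ' '\n' | grep -vx 068 | tr '\n' ' ')
cd /var/lib/xfstests && ./check $TESTS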

History

#1 Updated by Sage Weil almost 12 years ago

  • Priority changed from Normal to High

#2 Updated by Sage Weil almost 12 years ago

Disabled 68 in qa for the time being.

#3 Updated by Alex Elder over 11 years ago

This appears to be an XFS problem: the filesystem is having trouble getting space in its journal. I inquired about this and was told there have been some recent fixes to XFS that might address whatever the root cause was.

So I think we should just re-examine this after we upgrade to newer XFS code (e.g., in the 3.7 kernel).
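
(Re-running just this test on the upgraded kernel is cheap; a sketch using the environment variables xfstests expects, with illustrative device paths:)

# Re-run only test 068 against an rbd-backed XFS scratch device.
export TEST_DEV=/dev/rbd1        # illustrative
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/rbd2     # illustrative
export SCRATCH_MNT=/mnt/scratch
cd /var/lib/xfstests && ./check 068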

#4 Updated by Sage Weil over 11 years ago

  • Priority changed from High to Normal

#5 Updated by Sage Weil about 11 years ago

  • Project changed from Linux kernel client to rbd
  • Category deleted (rbd)

#6 Updated by Sage Weil over 10 years ago

  • Status changed from New to Closed
