Bug #106

msgpool depletion?

Added by Sage Weil almost 14 years ago. Updated over 13 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%

Description

Which pool is it?

[104608.030333] ceph: msgpool_get ffff88010f1fa370 now 0/0, may fail
[104608.036614] ------------[ cut here ]------------
[104608.041472] WARNING: at fs/ceph/msgpool.c:148 ceph_msgpool_get+0x183/0x247 [ceph]()
[104608.049358] Hardware name: H8SSL-I2
[104608.052978] Modules linked in: aes_x86_64 aes_generic ceph fan ac battery psmouse ehci_hcd ohci_hcd ide_pci_generic thermal button processor
[104608.066105] Pid: 2618, comm: ceph-msgr/1 Not tainted 2.6.34-rc3 #26
[104608.072526] Call Trace:
[104608.075124] [<ffffffffa00717c2>] ? ceph_msgpool_get+0x183/0x247 [ceph]
[104608.081876] [<ffffffff810357e0>] warn_slowpath_common+0x77/0x8f
[104608.088023] [<ffffffff81035807>] warn_slowpath_null+0xf/0x11
[104608.093936] [<ffffffffa00717c2>] ceph_msgpool_get+0x183/0x247 [ceph]
[104608.100506] [<ffffffff81425c54>] ? __mutex_unlock_slowpath+0x10d/0x130
[104608.107246] [<ffffffff81058061>] ? trace_hardirqs_on_caller+0x113/0x13e
[104608.114091] [<ffffffffa0079b76>] mon_alloc_msg+0x70/0xad [ceph]
[104608.120242] [<ffffffffa006e4f1>] try_read+0x7d3/0x1358 [ceph]
[104608.126199] [<ffffffff810099e3>] ? native_sched_clock+0x37/0x71
[104608.132339] [<ffffffff8104f2c2>] ? sched_clock_local+0x11/0x73
[104608.138384] [<ffffffff8105abf5>] ? __lock_acquire+0x7eb/0x84e
[104608.144352] [<ffffffff81056719>] ? put_lock_stats+0xe/0x27
[104608.150066] [<ffffffffa0070a10>] con_work+0x11a/0x6bc [ceph]
[104608.155937] [<ffffffff810477ea>] worker_thread+0x1e8/0x2fa
[104608.161638] [<ffffffff81047791>] ? worker_thread+0x18f/0x2fa
[104608.173598] [<ffffffffa00708f6>] ? con_work+0x0/0x6bc [ceph]
[104608.179475] [<ffffffff8104a8b0>] ? autoremove_wake_function+0x0/0x38
[104608.186045] [<ffffffff81047602>] ? worker_thread+0x0/0x2fa
[104608.191748] [<ffffffff8104a57e>] kthread+0x7d/0x85
[104608.196757] [<ffffffff81003794>] kernel_thread_helper+0x4/0x10
[104608.202806] [<ffffffff81428700>] ? restore_args+0x0/0x30
[104608.208332] [<ffffffff8104a501>] ? kthread+0x0/0x85
[104608.213421] [<ffffffff81003790>] ? kernel_thread_helper+0x0/0x10
[104608.219637] ---[ end trace f623e5d0d0b42fe4 ]---
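
For context, the "msgpool_get ... now 0/0, may fail" line suggests the pool's preallocated messages have run out, so the get path falls back to an on-demand allocation that can fail under memory pressure, which is what the WARN at fs/ceph/msgpool.c:148 flags. Below is a minimal user-space C sketch of that pool-with-fallback pattern; the names (msgpool_get, msgpool_put, free_list) and the structure are hypothetical illustrations, not the actual fs/ceph/msgpool.c code.

/* Hypothetical sketch of a message pool with allocation fallback.
 * Not the kernel implementation; illustrates why the warning fires. */
#include <stdio.h>
#include <stdlib.h>

struct msg {
    struct msg *next;
    char payload[128];
};

struct msgpool {
    struct msg *free_list;  /* preallocated, reusable messages */
    int num;                /* messages currently in the pool */
    int min;                /* size the pool was filled to */
};

static struct msg *msgpool_get(struct msgpool *pool)
{
    struct msg *m = pool->free_list;

    if (m) {                           /* fast path: reuse a pooled message */
        pool->free_list = m->next;
        pool->num--;
        return m;
    }
    /* slow path: pool depleted; mirror the kernel warning and fall
     * back to a plain allocation, which may return NULL */
    fprintf(stderr, "msgpool_get %p now %d/%d, may fail\n",
            (void *)pool, pool->num, pool->min);
    return malloc(sizeof(struct msg));
}

static void msgpool_put(struct msgpool *pool, struct msg *m)
{
    m->next = pool->free_list;         /* return the message for reuse */
    pool->free_list = m;
    pool->num++;
}

int main(void)
{
    struct msgpool pool = { NULL, 0, 2 };
    struct msg *a, *b, *c;

    /* prefill the pool to its low-water mark */
    msgpool_put(&pool, malloc(sizeof(struct msg)));
    msgpool_put(&pool, malloc(sizeof(struct msg)));

    a = msgpool_get(&pool);            /* served from the pool */
    b = msgpool_get(&pool);            /* served from the pool */
    c = msgpool_get(&pool);            /* depleted: warns, falls back */

    free(a); free(b); free(c);
    return 0;
}

Run standalone, the third get prints the depletion warning, mirroring the "now 0/0, may fail" line in the trace above.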

#1

Updated by Yehuda Sadeh almost 14 years ago

On what version did this happen? Do we have a reproducible scenario?

#2

Updated by Sage Weil almost 14 years ago

  • Status changed from New to Resolved
