Bug #45609

kclient: client node gets stuck when async create/unlink is enabled

Added by Xiubo Li almost 4 years ago. Updated over 3 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
fs
Crash signature (v1):
Crash signature (v2):

Description

This issue was originally reported by Zheng Yan; I reproduced it locally after running the steps below for about 5 hours.

Steps:

1. Enable the async create/unlink ops by passing `-o nowsync` when mounting.
2. In one bash terminal, run:

   # for i in {0..1000}; do pkill -9 ceph-mds; sleep 30; ./bin/ceph-mds -i a -c /data/ceph/build/ceph.conf; sleep 20; done

3. In another bash terminal, run (note `${i}_1` rather than `$i_1`, which bash would parse as the unset variable `i_1`):

   # for i in {0..1000}; do for i in {0..10}; do mkdir /mnt/cephfs/dir_$i; touch /mnt/cephfs/dir_$i/$i; touch /mnt/cephfs/dir_$i/${i}_1; echo "create /mnt/cephfs/dir_$i/$i"; done; sleep 2; for i in {0..10}; do rm -rf /mnt/cephfs/dir_$i; echo "unlinking /mnt/cephfs/dir_$i"; done; done

<4>[15912.549018] libceph: mds0 (1)10.72.36.245:6827 socket closed (con state OPEN)
<4>[15912.623341] libceph: mds0 (1)10.72.36.245:6827 socket error on write
<4>[15913.327196] libceph: mds0 (1)10.72.36.245:6827 socket error on write
<4>[15914.351224] libceph: mds0 (1)10.72.36.245:6827 socket error on write
<4>[15916.335132] libceph: mds0 (1)10.72.36.245:6827 socket error on write
<4>[15920.366979] libceph: mds0 (1)10.72.36.245:6827 socket error on write
<4>[15928.494697] libceph: mds0 (1)10.72.36.245:6827 socket error on write
<6>[15949.636088] ceph: mds0 reconnect start
<6>[15949.636099] ceph: session 00000000f871e47b state reconnecting
<6>[15949.641863] ceph: mds0 reconnect success
<6>[15952.723364] ceph: mds0 recovery completed
<6>[15981.740583] libceph: mon2 (1)10.72.36.245:40083 session lost, hunting for new mon
<3>[16097.960293] INFO: task kworker/18:1:32111 blocked for more than 122 seconds.
<3>[16097.960860]       Tainted: G            E     5.7.0-rc5+ #80
<3>[16097.961332] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6>[16097.961868] kworker/18:1    D    0 32111      2 0x80004080
<6>[16097.962151] Workqueue: ceph-msgr ceph_con_workfn [libceph]
<4>[16097.962188] Call Trace:
<4>[16097.962353]  ? __schedule+0x276/0x6e0
<4>[16097.962359]  ? schedule+0x40/0xb0
<4>[16097.962364]  ? schedule_preempt_disabled+0xa/0x10
<4>[16097.962368]  ? __mutex_lock.isra.8+0x2b5/0x4a0
<4>[16097.962460]  ? kick_requests+0x21/0x100 [ceph]
<4>[16097.962485]  ? ceph_mdsc_handle_mdsmap+0x19c/0x5f0 [ceph]
<4>[16097.962503]  ? extra_mon_dispatch+0x34/0x40 [ceph]
<4>[16097.962523]  ? extra_mon_dispatch+0x34/0x40 [ceph]
<4>[16097.962580]  ? dispatch+0x77/0x930 [libceph]
<4>[16097.962602]  ? try_read+0x78b/0x11e0 [libceph]
<4>[16097.962619]  ? __switch_to_asm+0x40/0x70
<4>[16097.962623]  ? __switch_to_asm+0x34/0x70
<4>[16097.962627]  ? __switch_to_asm+0x40/0x70
<4>[16097.962631]  ? __switch_to_asm+0x34/0x70
<4>[16097.962635]  ? __switch_to_asm+0x40/0x70
<4>[16097.962654]  ? ceph_con_workfn+0x130/0x5e0 [libceph]
<4>[16097.962713]  ? process_one_work+0x1ad/0x370
<4>[16097.962717]  ? worker_thread+0x30/0x390
<4>[16097.962722]  ? create_worker+0x1a0/0x1a0
<4>[16097.962737]  ? kthread+0x112/0x130
<4>[16097.962742]  ? kthread_park+0x80/0x80
<4>[16097.962747]  ? ret_from_fork+0x35/0x40
<3>[16097.962758] INFO: task kworker/25:1:1747 blocked for more than 122 seconds.
<3>[16097.963233]       Tainted: G            E     5.7.0-rc5+ #80
<3>[16097.963792] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6>[16097.964298] kworker/25:1    D    0  1747      2 0x80004080
<6>[16097.964325] Workqueue: ceph-msgr ceph_con_workfn [libceph]
<4>[16097.964331] Call Trace:
<4>[16097.964340]  ? __schedule+0x276/0x6e0
<4>[16097.964344]  ? schedule+0x40/0xb0
<4>[16097.964347]  ? schedule_preempt_disabled+0xa/0x10
<4>[16097.964351]  ? __mutex_lock.isra.8+0x2b5/0x4a0
<4>[16097.964376]  ? handle_reply+0x33f/0x6f0 [ceph]
<4>[16097.964407]  ? dispatch+0xa6/0xbc0 [ceph]
<4>[16097.964429]  ? read_partial_message+0x214/0x770 [libceph]
<4>[16097.964449]  ? try_read+0x78b/0x11e0 [libceph]
<4>[16097.964454]  ? __switch_to_asm+0x40/0x70
<4>[16097.964458]  ? __switch_to_asm+0x34/0x70
<4>[16097.964461]  ? __switch_to_asm+0x40/0x70
<4>[16097.964465]  ? __switch_to_asm+0x34/0x70
<4>[16097.964470]  ? __switch_to_asm+0x40/0x70
<4>[16097.964489]  ? ceph_con_workfn+0x130/0x5e0 [libceph]
<4>[16097.964494]  ? process_one_work+0x1ad/0x370
<4>[16097.964498]  ? worker_thread+0x30/0x390
<4>[16097.964501]  ? create_worker+0x1a0/0x1a0
<4>[16097.964506]  ? kthread+0x112/0x130
<4>[16097.964511]  ? kthread_park+0x80/0x80
<4>[16097.964516]  ? ret_from_fork+0x35/0x40
<3>[16097.964542] INFO: task kworker/27:2:25505 blocked for more than 122 seconds.
<3>[16097.965033]       Tainted: G            E     5.7.0-rc5+ #80
<3>[16097.965521] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6>[16097.966053] kworker/27:2    D    0 25505      2 0x80004080
<6>[16097.966090] Workqueue: events delayed_work [ceph]
<4>[16097.966093] Call Trace:
<4>[16097.966100]  ? __schedule+0x276/0x6e0
<4>[16097.966104]  ? __switch_to_asm+0x40/0x70
<4>[16097.966107]  ? schedule+0x40/0xb0
<4>[16097.966137]  ? schedule_preempt_disabled+0xa/0x10
<4>[16097.966140]  ? __mutex_lock.isra.8+0x2b5/0x4a0
<4>[16097.966144]  ? __switch_to_asm+0x40/0x70
<4>[16097.966148]  ? __switch_to_asm+0x40/0x70
<4>[16097.966152]  ? __switch_to_asm+0x34/0x70
<4>[16097.966176]  ? delayed_work+0x2e/0x290 [ceph]
<4>[16097.966181]  ? process_one_work+0x1ad/0x370
<4>[16097.966186]  ? worker_thread+0x30/0x390
<4>[16097.966189]  ? create_worker+0x1a0/0x1a0
<4>[16097.966194]  ? kthread+0x112/0x130
<4>[16097.966198]  ? kthread_park+0x80/0x80
<4>[16097.966203]  ? ret_from_fork+0x35/0x40
[... the same three hung-task stack traces are reported again after 245, 368, and 491 seconds; repeats omitted ...]
Actions #1

Updated by Xiubo Li almost 4 years ago

  • Status changed from New to In Progress

It was a deadlock in check_new_map(): we should always make sure that mdsc->mutex is nested inside s->s_mutex, i.e. the two locks must be taken in a consistent order everywhere.

Actions #2

Updated by Xiubo Li almost 4 years ago

The `objdump -S` output below maps the blocked return addresses from the stack traces to source: ceph_mdsc_handle_mdsmap+0x19c is the return from the mutex_lock(&s->s_mutex) call at 7eb7, and handle_reply+0x33f is the return from the mutex_lock(&mdsc->mutex) call at 677a.

11308 0000000000007d20 <ceph_mdsc_handle_mdsmap>:
11309 {
11310     7d20:       e8 00 00 00 00          callq  7d25 <ceph_mdsc_handle_mdsmap+0x5>
11311     7d25:       4c 8d 54 24 08          lea    0x8(%rsp),%r10
11312     7d2a:       48 83 e4 f0             and    $0xfffffffffffffff0,%rsp
11313     7d2e:       41 ff 72 f8             pushq  -0x8(%r10)
11314     7d32:       55                      push   %rbp
11315     7d33:       48 89 e5                mov    %rsp,%rbp
...
11438     7e94:       7e 43                   jle    7ed9 <ceph_mdsc_handle_mdsmap+0x1b9>
11439                         if (oldstate != CEPH_MDS_STATE_CREATING &&
11440     7e96:       8b 45 9c                mov    -0x64(%rbp),%eax
11441     7e99:       83 c0 07                add    $0x7,%eax
11442     7e9c:       83 f8 01                cmp    $0x1,%eax
11443     7e9f:       0f 87 00 00 00 00       ja     7ea5 <ceph_mdsc_handle_mdsmap+0x185>
11444                         kick_requests(mdsc, i);
11445     7ea5:       44 89 e6                mov    %r12d,%esi
11446     7ea8:       48 89 df                mov    %rbx,%rdi
11447                         mutex_lock(&s->s_mutex);
11448     7eab:       4d 8d 6e 28             lea    0x28(%r14),%r13
11449                         kick_requests(mdsc, i);
11450     7eaf:       e8 1c df ff ff          callq  5dd0 <kick_requests>
11451                         mutex_lock(&s->s_mutex);
11452     7eb4:       4c 89 ef                mov    %r13,%rdi
11453     7eb7:       e8 00 00 00 00          callq  7ebc <ceph_mdsc_handle_mdsmap+0x19c>
11454                         ceph_kick_flushing_caps(mdsc, s);
11455     7ebc:       4c 89 f6                mov    %r14,%rsi
11456     7ebf:       48 89 df                mov    %rbx,%rdi
11457     7ec2:       e8 00 00 00 00          callq  7ec7 <ceph_mdsc_handle_mdsmap+0x1a7>
11458                         mutex_unlock(&s->s_mutex);
11459     7ec7:       4c 89 ef                mov    %r13,%rdi
11460     7eca:       e8 00 00 00 00          callq  7ecf <ceph_mdsc_handle_mdsmap+0x1af>
11461                         wake_up_session_caps(s, RECONNECT);
11462     7ecf:       31 f6                   xor    %esi,%esi
11463     7ed1:       4c 89 f7                mov    %r14,%rdi
...

 9274 0000000000006440 <handle_reply>:
 9275 {
 9276     6440:       e8 00 00 00 00          callq  6445 <handle_reply+0x5>
 9277     6445:       41 57                   push   %r15
 9278     6447:       41 56                   push   %r14
 9279     6449:       41 55                   push   %r13
 9280     644b:       41 54                   push   %r12
 9281     644d:       49 89 f4                mov    %rsi,%r12
 9282     6450:       55                      push   %rbp
 9283     6451:       53                      push   %rbx
...
 9523     674d:       4d 89 9f 18 03 00 00    mov    %r11,0x318(%r15)
 9524         new->prev = prev;
 9525     6754:       4d 89 87 20 03 00 00    mov    %r8,0x320(%r15)
 9526         case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
 9527     675b:       4d 89 10                mov    %r10,(%r8)
 9528     675e:       48 89 cf                mov    %rcx,%rdi
 9529     6761:       ff 14 25 00 00 00 00    callq  *0x0
 9530                 ceph_unreserve_caps(mdsc, &req->r_caps_reservation);
 9531     6768:       49 8d b7 cc 03 00 00    lea    0x3cc(%r15),%rsi
 9532     676f:       48 89 df                mov    %rbx,%rdi
 9533     6772:       e8 00 00 00 00          callq  6777 <handle_reply+0x337>
 9534         mutex_lock(&mdsc->mutex);
 9535     6777:       4c 89 ef                mov    %r13,%rdi
 9536     677a:       e8 00 00 00 00          callq  677f <handle_reply+0x33f>
 9537     677f:       49 8b 87 90 00 00 00    mov    0x90(%r15),%rax
 9538         if (!test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags)) {
 9539     6786:       a8 04                   test   $0x4,%al
 9540     6788:       75 79                   jne    6803 <handle_reply+0x3c3>
 9541                         req->r_reply =  ceph_msg_get(msg);
 9542     678a:       4c 89 e7                mov    %r12,%rdi
 9543     678d:       e8 00 00 00 00          callq  6792 <handle_reply+0x352>
 9544     6792:       49 89 87 48 01 00 00    mov    %rax,0x148(%r15)
 9545                 asm volatile(LOCK_PREFIX "orb %1,%0" 
 9546     6799:       f0 41 80 8f 90 00 00    lock orb $0x20,0x90(%r15)
...
Actions #3

Updated by Xiubo Li almost 4 years ago

  • Status changed from In Progress to Fix Under Review
Actions #4

Updated by Xiubo Li almost 4 years ago

Xiubo Li wrote:

The patchwork: https://patchwork.kernel.org/patch/11559595/

This was merged by Jeff.

Actions #5

Updated by Xiubo Li over 3 years ago

  • Status changed from Fix Under Review to Resolved