Bug #59307


[rbd-mirror] snap create timed out notifying lock owner

Added by Prasanna Kumar Kalever about 1 year ago. Updated about 1 year ago.

Status:
In Progress
Priority:
High
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Adding a 500ms delay to the east and west network interfaces, then enabling mirroring and taking a snapshot, causes the mirror snap create to get stuck with "snap create timed out notifying lock owner".

How Consistent:
reproduced 3/3

Reproducer steps:

$ cat netns.sh

#!/bin/sh

#For East network namespace:
sudo ip netns add east
sudo ip netns exec east ip link set dev lo up
sudo ip netns exec east ip link list
sudo ip link add veth0 type veth peer name veth1
sudo ip link set veth1 netns east
sudo ip netns exec east ifconfig veth1 10.1.1.1/24 up
sudo ifconfig veth0 10.1.1.2/24 up
sudo ip netns exec east route

#For West network namespace:
sudo ip netns add west
sudo ip netns exec west ip link set dev lo up
sudo ip netns exec west ip link list
sudo ip link add veth2 type veth peer name veth3
sudo ip link set veth3 netns west
sudo ip netns exec west ifconfig veth3 10.2.2.1/24 up
sudo ifconfig veth2 10.2.2.2/24 up
sudo ip netns exec west route

#Establish connection between two network ns
sudo ip link add br0 type bridge
sudo ip link set br0 up
sudo ip addr add 10.3.3.1/24 brd + dev br0
sudo ip link set veth0 master br0
sudo ip link set veth2 master br0

sudo ip netns exec east ip route add 10.2.2.1 dev veth1
sudo ip netns exec west ip route add 10.1.1.1 dev veth3
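Once netns.sh has run, a quick sanity check (a sketch, assuming the namespaces and bridge came up cleanly) confirms that east and west can reach each other through br0:

```shell
#!/bin/sh
# Each namespace should reach the other's veth address via br0.
sudo ip netns exec east ping -c 2 10.2.2.1   # east -> west
sudo ip netns exec west ping -c 2 10.1.1.1   # west -> east
sudo ip link show master br0                 # veth0 and veth2 should be enslaved
```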

$ cat mirrorenv_ns.sh

#!/bin/bash
set -xe
environment(){
if [ ! -f site-a.conf ] && [ ! -L site-a.conf ]; then
        ln -s run/clustera/ceph.conf site-a.conf
fi
if [ ! -f site-b.conf ] && [ ! -L site-b.conf ]; then
        ln -s run/clusterb/ceph.conf site-b.conf
fi
}

stop(){
sudo ip netns exec east bash -c "../src/mstop.sh clustera" 
sudo ip netns exec west bash -c "../src/mstop.sh clusterb" 
sudo ip netns exec east killall rbd-mirror
sudo ip netns exec east killall rbd-mirror
sudo ip netns exec west killall rbd-mirror
sudo ip netns exec west killall rbd-mirror
}

start(){

sudo ip netns exec east bash -c "MON=1 OSD=1 MGR=1 MDS=0 RGW=0 ../src/mstart.sh clustera --short -n -d --without-dashboard" 
sudo ip netns exec west bash -c "MON=1 OSD=1 MGR=1 MDS=0 RGW=0 ../src/mstart.sh clusterb --short -n -d --without-dashboard" 

#setup pool
sudo ip netns exec east ./bin/ceph --cluster site-a osd pool create pool1
sudo ip netns exec east ./bin/rbd --cluster site-a pool init pool1

sudo ip netns exec west ./bin/ceph --cluster site-b osd pool create pool1
sudo ip netns exec west ./bin/rbd --cluster site-b pool init pool1

#set to image mirror
sudo ip netns exec east ./bin/rbd --cluster site-a mirror pool enable pool1 image

#create token
sudo ip netns exec east ./bin/rbd --cluster site-a mirror pool peer bootstrap create pool1 | tail -n 1 > token

#import token
sudo ip netns exec west ./bin/rbd --cluster site-b mirror pool peer bootstrap import --site-name site-b pool1 token

#start rbd-mirror

sudo ip netns exec east ./bin/rbd-mirror --cluster site-a --rbd-mirror-delete-retry-interval=5 --rbd-mirror-image-state-check-interval=5 --rbd-mirror-journal-poll-age=1 --rbd-mirror-pool-replayers-refresh-interval=5 --debug-rbd=30 --debug-journaler=30 --debug-rbd_mirror=30 --daemonize=true
sudo ip netns exec west ./bin/rbd-mirror --cluster site-b --rbd-mirror-delete-retry-interval=5 --rbd-mirror-image-state-check-interval=5 --rbd-mirror-journal-poll-age=1 --rbd-mirror-pool-replayers-refresh-interval=5 --debug-rbd=30 --debug-journaler=30 --debug-rbd_mirror=30 --daemonize=true

sudo ip netns exec east ./bin/ceph --cluster site-a config set global debug_rbd 30
sudo ip netns exec east ./bin/ceph --cluster site-a config set global debug_rbd_mirror 30
#sudo ip netns exec east ./bin/ceph --cluster site-a config set client.rbd-mirror-peer debug_ms 1

sudo ip netns exec west ./bin/ceph --cluster site-b config set global debug_rbd 30
sudo ip netns exec west ./bin/ceph --cluster site-b config set global debug_rbd_mirror 30
#sudo ip netns exec west ./bin/ceph --cluster site-b config set client.rbd-mirror-peer debug_ms 1

}

NUM_ARGS=`echo "$@" | awk '{print NF}'`
ACTION=$1
if [ "$ACTION" == "start" ]; then
        echo setting up environment
        environment
        echo start
        start
elif [ "$ACTION" == "stop" ]; then
        echo stop
        stop
else
        echo "Option not recognized" 
fi

To set up two Ceph clusters for mirroring, run the commands below:

$ ./netns.sh
$ ./mirrorenv_ns.sh start

Add delay to the network interfaces in the east and west clusters:
$ sudo ip netns exec east tc qdisc add dev veth1 root netem delay 500ms
$ sudo ip netns exec west tc qdisc add dev veth3 root netem delay 500ms
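To confirm the delay is in place (a sketch; with 500ms of egress delay on each side, the ping RTT between the namespaces should jump to roughly 1000ms):

```shell
#!/bin/sh
# Show the qdisc and measure the effective RTT between the namespaces.
sudo ip netns exec east tc qdisc show dev veth1   # expect: netem ... delay 500ms
sudo ip netns exec east ping -c 3 10.2.2.1        # expect RTT around 1000 ms
```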

Terminal 1: the command below opens a bash shell in the east network namespace
$ sudo ip netns exec east bash
Create some images:
$ for i in {0..1}; do ./bin/rbd --cluster=site-a create --size 6G pool1/img$i; done
Map and mount the images:
$ for i in {0..1}; do ./bin/rbd-nbd --cluster=site-a map pool1/img$i; done
$ for i in {0..1}; do mkfs.xfs /dev/nbd$i; done
$ for i in {0..1}; do mkdir /mnt/nbd$i; done
$ for i in {0..1}; do mount /dev/nbd$i /mnt/nbd$i; done

Perform I/O:
$ mkdir fio_output
$ cat randrw.fio

 
[global]
refill_buffers
time_based=1
size=5g
direct=1
group_reporting
ioengine=libaio

[workload]
rw=randrw
rate_iops=40,10
blocksize=4KB
#norandommap
iodepth=4
numjobs=1
#runtime=2d
runtime=60m

$ for i in {0..1}; do fio randrw.fio --filename=/mnt/nbd${i}/file${i} --output=fio_output/fio${i}.txt & done

Terminal 2: open a new east bash shell, since the terminal above is occupied by the fio workload
$ sudo ip netns exec east bash
Enable mirroring on all images:
$ for i in {0..1}; do ./bin/rbd --cluster=site-a mirror image enable pool1/img$i snapshot; done
Schedule the snapshots manually (or use the `rbd mirror snapshot schedule` command instead):
$ while true; do for i in {0..1}; do time ./bin/rbd --cluster site-a mirror image snapshot pool1/img$i; done; sleep 60; done
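For reference, the built-in scheduler can be used instead of the while loop above (a sketch; `rbd mirror snapshot schedule` is provided by the rbd_support mgr module, and the interval syntax is d/h/m):

```shell
#!/bin/sh
# Schedule a mirror snapshot every minute on pool1 via the mgr scheduler.
sudo ip netns exec east ./bin/rbd --cluster site-a mirror snapshot schedule add --pool pool1 1m
sudo ip netns exec east ./bin/rbd --cluster site-a mirror snapshot schedule ls --pool pool1
```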

The snapshot command above gets stuck in mirror snapshot create; grep the logs for "snap create timed out notifying lock owner".

Actions #1

Updated by Prasanna Kumar Kalever about 1 year ago

Here are some logs

2023-04-03T22:38:54.230+0530 7f4761916640 20 librbd::mirror::snapshot::util:  can_create_primary_snapshot: previous snapshot snap_id=6 [mirror state=primary, complete=1, mirror_peer_uuids=cc20fd2b-bd6e-4ea1-937e-e81d2b44aca9, clean_since_snap_id=head]
2023-04-03T22:38:54.231+0530 7f4761916640 15 librbd::mirror::snapshot::CreatePrimaryRequest: 0x7f47380143f0 get_mirror_peers:
2023-04-03T22:38:54.238+0530 7f4761115640 15 librbd::mirror::snapshot::CreatePrimaryRequest: 0x7f47380143f0 handle_get_mirror_peers: r=0
2023-04-03T22:38:54.238+0530 7f4761115640 15 librbd::mirror::snapshot::CreatePrimaryRequest: 0x7f47380143f0 create_snapshot: name=.mirror.primary.f6b9efff-a8c2-4c72-92be-d5effefe1132.421c3436-15fa-4b8e-9a5c-a85c4e7b28ed, ns=[mirror state=primary, complete=0, mirror_peer_uuids=cc20fd2b-bd6e-4ea1-937e-e81d2b44aca9, clean_since_snap_id=head]
2023-04-03T22:38:54.238+0530 7f4761115640  5 librbd::Operations: 0x556fe5a16ac0 snap_create: snap_name=.mirror.primary.f6b9efff-a8c2-4c72-92be-d5effefe1132.421c3436-15fa-4b8e-9a5c-a85c4e7b28ed

[...]

2023-04-03T23:08:33.420+0530 7f4761916640 20 librbd::ManagedLock: 0x7f4738003e38 is_lock_owner: =0
2023-04-03T23:08:33.420+0530 7f4761916640 20 librbd::Operations: send_acquire_exclusive_lock
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::ManagedLock: 0x7f4738003e38 try_acquire_lock:
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::ManagedLock: 0x7f4738003e38 send_acquire_lock:
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::ExclusiveLock: 0x7f4738003e20 pre_acquire_lock_handler
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::exclusive_lock::PreAcquireRequest: 0x7f4738019b10 send_prepare_lock:
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::ImageState: 0x556fe5bab1d0 prepare_lock
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::ImageState: 0x556fe5bab1d0 0x556fe5bab1d0 send_prepare_lock_unlock
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::exclusive_lock::PreAcquireRequest: 0x7f4738019b10 handle_prepare_lock: r=0
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::exclusive_lock::PreAcquireRequest: 0x7f4738019b10 send_flush_notifies:
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::exclusive_lock::PreAcquireRequest: 0x7f4738019b10 handle_flush_notifies:
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::ManagedLock: 0x7f4738003e38 handle_pre_acquire_lock: r=0
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 send_get_locker:
2023-04-03T23:08:33.420+0530 7f4761916640 10 librbd::managed_lock::GetLockerRequest: 0x7f47380134e0 send_get_lockers:
2023-04-03T23:08:33.421+0530 7f4761115640 10 librbd::managed_lock::GetLockerRequest: 0x7f47380134e0 handle_get_lockers: r=0
2023-04-03T23:08:33.421+0530 7f4761115640 10 librbd::managed_lock::GetLockerRequest: 0x7f47380134e0 handle_get_lockers: retrieved exclusive locker: client.4161@10.1.1.1:0/838881452
2023-04-03T23:08:33.421+0530 7f4761115640 10 librbd::managed_lock::GetLockerRequest: 0x7f47380134e0 finish: r=0
2023-04-03T23:08:33.421+0530 7f4761115640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 handle_get_locker: r=0
2023-04-03T23:08:33.421+0530 7f4761115640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 send_lock: entity=client.4182, cookie=auto 139943859010656
2023-04-03T23:08:33.442+0530 7f4761916640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 handle_lock: r=-16
2023-04-03T23:08:33.442+0530 7f4761916640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 send_break_lock:
2023-04-03T23:08:33.442+0530 7f4761916640 10 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 send_get_watchers:
2023-04-03T23:08:33.444+0530 7f4761115640 10 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: r=0
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: watcher=[addr=10.2.2.1:0/2534447899, entity=client.4150]
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: watcher=[addr=10.1.1.1:0/1622664613, entity=client.4182]
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: watcher=[addr=10.1.1.1:0/838881452, entity=client.4161]
2023-04-03T23:08:33.444+0530 7f4761115640 10 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: lock owner is still alive
2023-04-03T23:08:33.444+0530 7f4761115640 10 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 finish: r=-11
2023-04-03T23:08:33.444+0530 7f4761115640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 handle_break_lock: r=-11
2023-04-03T23:08:33.444+0530 7f4761115640  5 librbd::managed_lock::AcquireRequest: 0x7f4754089000 handle_break_lock: lock owner is still alive
2023-04-03T23:08:33.444+0530 7f4761115640 10 librbd::ManagedLock: 0x7f4738003e38 handle_acquire_lock: r=-11
2023-04-03T23:08:33.444+0530 7f4761115640  5 librbd::ManagedLock: 0x7f4738003e38 handle_acquire_lock: unable to acquire exclusive lock
2023-04-03T23:08:33.444+0530 7f4761115640 10 librbd::ExclusiveLock: 0x7f4738003e20 post_acquire_lock_handler: r=-11
2023-04-03T23:08:33.444+0530 7f4761115640 10 librbd::ImageState: 0x556fe5bab1d0 handle_prepare_lock_complete
2023-04-03T23:08:33.444+0530 7f4761115640 10 librbd::ManagedLock: 0x7f4738003e38 handle_post_acquire_lock: r=0
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::Operations: handle_acquire_exclusive_lock: r=0
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::ManagedLock: 0x7f4738003e38 is_lock_owner: =0
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::Operations: send_remote_request
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::ManagedLock: 0x7f4738003e38 is_lock_owner: =0
2023-04-03T23:08:33.444+0530 7f4761115640 10 librbd::ImageWatcher: 0x7f4738008d70 async request: [4182,139943859010656,1]
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::ImageWatcher: scheduling async request time out: [4182,139943859010656,1]
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::image_watcher::NotifyLockOwner: 0x7f473801dd90 send_notify
2023-04-03T23:08:33.444+0530 7f4761115640 20 librbd::watcher::Notifier: 0x7f4738008e30 notify: pending=1
2023-04-03T23:08:33.445+0530 7f4761916640  5 librbd::Watcher: 0x7f4738008d70 notifications_blocked: blocked=0
2023-04-03T23:08:33.445+0530 7f4761916640 10 librbd::Watcher::C_NotifyAck 0x7f4738019740 C_NotifyAck: id=81604380598, handle=139943859010656
2023-04-03T23:08:33.445+0530 7f4761916640 10 librbd::ImageWatcher: 0x7f4738008d70 remote snap_create request: [4182,139943859010656,1] [mirror state=primary, complete=0, mirror_peer_uuids=cc20fd2b-bd6e-4ea1-937e-e81d2b44aca9, clean_since_snap_id=head] .mirror.primary.f6b9efff-a8c2-4c72-92be-d5effefe1132.421c3436-15fa-4b8e-9a5c-a85c4e7b28ed 0
2023-04-03T23:08:33.446+0530 7f4761916640 20 librbd::ExclusiveLock: 0x7f4738003e20 accept_request=0 (request_type=0)
2023-04-03T23:08:33.446+0530 7f4761916640 10 librbd::Watcher::C_NotifyAck 0x7f4738019740 finish: r=0
2023-04-03T23:08:34.447+0530 7f4761115640 20 librbd::watcher::Notifier: 0x7f4738008e30 handle_notify: r=0
2023-04-03T23:08:34.447+0530 7f4761115640 20 librbd::watcher::Notifier: 0x7f4738008e30 handle_notify: pending=0
2023-04-03T23:08:34.447+0530 7f4761115640 20 librbd::image_watcher::NotifyLockOwner: 0x7f473801dd90 handle_notify: r=0
2023-04-03T23:08:34.447+0530 7f4761115640 20 librbd::image_watcher::NotifyLockOwner: 0x7f473801dd90 handle_notify client responded with r=-110
2023-04-03T23:08:34.447+0530 7f4761115640 20 librbd::ImageWatcher: remove_async_request: [4182,139943859010656,1]
2023-04-03T23:08:34.447+0530 7f4761115640 20 librbd::Operations: handle_remote_request: r=-110
2023-04-03T23:08:34.447+0530 7f4761115640  5 librbd::Operations: snap create timed out notifying lock owner
[...]
2023-04-03T23:08:34.447+0530 7f4761115640 20 librbd::ManagedLock: 0x7f4738003e38 is_lock_owner: =0
2023-04-03T23:08:34.447+0530 7f4761115640 20 librbd::Operations: send_acquire_exclusive_lock
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::ManagedLock: 0x7f4738003e38 try_acquire_lock:
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::ManagedLock: 0x7f4738003e38 send_acquire_lock:
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::ExclusiveLock: 0x7f4738003e20 pre_acquire_lock_handler
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::exclusive_lock::PreAcquireRequest: 0x7f4724015240 send_prepare_lock:
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::ImageState: 0x556fe5bab1d0 prepare_lock
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::ImageState: 0x556fe5bab1d0 0x556fe5bab1d0 send_prepare_lock_unlock
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::exclusive_lock::PreAcquireRequest: 0x7f4724015240 handle_prepare_lock: r=0
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::exclusive_lock::PreAcquireRequest: 0x7f4724015240 send_flush_notifies:
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::exclusive_lock::PreAcquireRequest: 0x7f4724015240 handle_flush_notifies:
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::ManagedLock: 0x7f4738003e38 handle_pre_acquire_lock: r=0
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 send_get_locker:
2023-04-03T23:08:34.447+0530 7f4761115640 10 librbd::managed_lock::GetLockerRequest: 0x7f4724003f40 send_get_lockers:
2023-04-03T23:08:34.448+0530 7f4761916640 10 librbd::managed_lock::GetLockerRequest: 0x7f4724003f40 handle_get_lockers: r=0
2023-04-03T23:08:34.448+0530 7f4761916640 10 librbd::managed_lock::GetLockerRequest: 0x7f4724003f40 handle_get_lockers: retrieved exclusive locker: client.4161@10.1.1.1:0/838881452
2023-04-03T23:08:34.448+0530 7f4761916640 10 librbd::managed_lock::GetLockerRequest: 0x7f4724003f40 finish: r=0
2023-04-03T23:08:34.448+0530 7f4761916640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 handle_get_locker: r=0
2023-04-03T23:08:34.448+0530 7f4761916640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 send_lock: entity=client.4182, cookie=auto 139943859010656
2023-04-03T23:08:34.468+0530 7f4761115640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 handle_lock: r=-16
2023-04-03T23:08:34.468+0530 7f4761115640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 send_break_lock:
2023-04-03T23:08:34.468+0530 7f4761115640 10 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 send_get_watchers:
2023-04-03T23:08:34.469+0530 7f4761916640 10 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: r=0
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: watcher=[addr=10.2.2.1:0/2534447899, entity=client.4150]
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: watcher=[addr=10.1.1.1:0/1622664613, entity=client.4182]
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: watcher=[addr=10.1.1.1:0/838881452, entity=client.4161]
2023-04-03T23:08:34.470+0530 7f4761916640 10 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 handle_get_watchers: lock owner is still alive
2023-04-03T23:08:34.470+0530 7f4761916640 10 librbd::managed_lock::BreakRequest: 0x556fe594dfd0 finish: r=-11
2023-04-03T23:08:34.470+0530 7f4761916640 10 librbd::managed_lock::AcquireRequest: 0x7f4754089000 handle_break_lock: r=-11
2023-04-03T23:08:34.470+0530 7f4761916640  5 librbd::managed_lock::AcquireRequest: 0x7f4754089000 handle_break_lock: lock owner is still alive
2023-04-03T23:08:34.470+0530 7f4761916640 10 librbd::ManagedLock: 0x7f4738003e38 handle_acquire_lock: r=-11
2023-04-03T23:08:34.470+0530 7f4761916640  5 librbd::ManagedLock: 0x7f4738003e38 handle_acquire_lock: unable to acquire exclusive lock
2023-04-03T23:08:34.470+0530 7f4761916640 10 librbd::ExclusiveLock: 0x7f4738003e20 post_acquire_lock_handler: r=-11
2023-04-03T23:08:34.470+0530 7f4761916640 10 librbd::ImageState: 0x556fe5bab1d0 handle_prepare_lock_complete
2023-04-03T23:08:34.470+0530 7f4761916640 10 librbd::ManagedLock: 0x7f4738003e38 handle_post_acquire_lock: r=0
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::Operations: handle_acquire_exclusive_lock: r=0
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::ManagedLock: 0x7f4738003e38 is_lock_owner: =0
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::Operations: send_remote_request
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::ManagedLock: 0x7f4738003e38 is_lock_owner: =0
2023-04-03T23:08:34.470+0530 7f4761916640 10 librbd::ImageWatcher: 0x7f4738008d70 async request: [4182,139943859010656,1]
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::ImageWatcher: scheduling async request time out: [4182,139943859010656,1]
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::image_watcher::NotifyLockOwner: 0x7f47540a6040 send_notify
2023-04-03T23:08:34.470+0530 7f4761916640 20 librbd::watcher::Notifier: 0x7f4738008e30 notify: pending=1
2023-04-03T23:08:34.471+0530 7f4761115640  5 librbd::Watcher: 0x7f4738008d70 notifications_blocked: blocked=0
2023-04-03T23:08:34.471+0530 7f4761115640 10 librbd::Watcher::C_NotifyAck 0x7f4724001f90 C_NotifyAck: id=81604380599, handle=139943859010656
2023-04-03T23:08:34.471+0530 7f4761115640 10 librbd::ImageWatcher: 0x7f4738008d70 remote snap_create request: [4182,139943859010656,1] [mirror state=primary, complete=0, mirror_peer_uuids=cc20fd2b-bd6e-4ea1-937e-e81d2b44aca9, clean_since_snap_id=head] .mirror.primary.f6b9efff-a8c2-4c72-92be-d5effefe1132.421c3436-15fa-4b8e-9a5c-a85c4e7b28ed 0
2023-04-03T23:08:34.471+0530 7f4761115640 20 librbd::ExclusiveLock: 0x7f4738003e20 accept_request=0 (request_type=0)
2023-04-03T23:08:34.471+0530 7f4761115640 10 librbd::Watcher::C_NotifyAck 0x7f4724001f90 finish: r=0
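The negative r= values in the log above are errnos, and the cycle that repeats every second is: try to acquire the lock (EBUSY), try to break it (the owner is still alive, so EAGAIN), then notify the owner to take the snapshot, which times out under the 500ms delay (ETIMEDOUT), and the client retries. A minimal decoder for the three codes (standard Linux errno values):

```shell
#!/bin/sh
# Decode the r= codes seen in the librbd log (Linux asm-generic errno values).
decode() {
  case "$1" in
    -16)  echo "-16  EBUSY: lock already held by another client";;
    -11)  echo "-11  EAGAIN: lock owner still alive, cannot break lock";;
    -110) echo "-110 ETIMEDOUT: notify to lock owner timed out";;
    *)    echo "$1 unknown";;
  esac
}
decode -16
decode -11
decode -110
```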

Actions #2

Updated by Prasanna Kumar Kalever about 1 year ago

  • Description updated (diff)
Actions #3

Updated by Prasanna Kumar Kalever about 1 year ago

  • Description updated (diff)