Bug #64142


[RBD Live migration] Inter-communication between source and destination clusters is not happening

Added by Sunil Angadi 3 months ago. Updated 3 months ago.

Status:
New
Priority:
Normal
Assignee:
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
RBD Migration
Backport:
Regression:
No
Severity:
1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Started testing PR https://github.com/ceph/ceph/pull/44470
with the upstream PR build:

[ceph: root@ceph-rbd1-sdbz-twjcwu-node1-installer /]# ceph -v
ceph version 18.0.0-5826-g11c8c66f (11c8c66f300746a01111440079db16544339440b) reef (dev)

Steps to reproduce:
----
1) Deploy two Ceph clusters, one as the source and one as the destination.
2) Create one client node common to both clusters.
3) Copy both the ceph.conf and ceph.client.admin.keyring files from each cluster onto the common client node.
4) Perform migration of an image from a source cluster pool to a destination cluster pool using the RBD live migration commands.

When initiating migration of an image from a source cluster pool to the destination pool, the following error occurs:

[root@ceph-rbd2-sdbz-twjcwu-node2 ceph]# echo '{"type":"native","cluster_name":"site-b","client_name":"client.admin","pool_name":"pool1","image_name":"image1"}' | rbd --cluster site-a --pool pool_dest migration prepare --import-only --source-spec-path - dest_image1
rbd: error opening pool 'pool_dest': (2) No such file or directory

The source and destination clusters are still not communicating with each other,
so the source cluster is not able to get the details of the destination cluster's pool.
Marking as a Blocker as it blocks further testing of the live migration feature.
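
For reference, a minimal sketch of the client-node setup intended in step 3, assuming the files are copied from a monitor node of each cluster (the site-a-mon/site-b-mon host names are placeholders):

# On the common client node, name the config/keyring files after the cluster
# so that `--cluster site-a` / `--cluster site-b` can resolve them.
scp root@site-a-mon:/etc/ceph/ceph.conf /etc/ceph/site-a.conf
scp root@site-a-mon:/etc/ceph/ceph.client.admin.keyring /etc/ceph/site-a.client.admin.keyring
scp root@site-b-mon:/etc/ceph/ceph.conf /etc/ceph/site-b.conf
scp root@site-b-mon:/etc/ceph/ceph.client.admin.keyring /etc/ceph/site-b.client.admin.keyring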

Actions #1

Updated by Ilya Dryomov 3 months ago

Hi Sunil,

I'm not sure why you are claiming that the communication isn't happening. Does the pool_dest pool exist on site-a? If not, you need to create it before attempting the migration.
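
A minimal sketch of what that would look like (the PG count is arbitrary):

# create the missing pool on site-a, since `rbd --cluster site-a` connects there
ceph --cluster site-a osd pool create pool_dest 32
rbd --cluster site-a pool init pool_dest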

Actions #2

Updated by Sunil Angadi 3 months ago

Hi Ilya,

pool_dest is on the destination (secondary) cluster; I created the pool there.
I am trying to migrate data from the source cluster to the destination cluster's pool.

Does just copying both ceph.conf and ceph.client.admin.keyring make the two clusters able to communicate with each other?

Please let me know what actions need to be performed before starting the migration.

Actions #3

Updated by Ilya Dryomov 3 months ago

  • Assignee set to Or Ozeri

You are passing --cluster site-a and "cluster_name":"site-b", so the Ceph configuration files should be named site-a.conf and site-b.conf and point to distinct keyring files (one with the client.admin key for site-a and another with the client.admin key for site-b).
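
As an illustration, a minimal sketch of the two files on the client node (the fsid and mon_host values are placeholders for each cluster's actual values):

# /etc/ceph/site-a.conf
[global]
fsid = <site-a fsid>
mon_host = <site-a monitor addresses>
keyring = /etc/ceph/site-a.client.admin.keyring

# /etc/ceph/site-b.conf
[global]
fsid = <site-b fsid>
mon_host = <site-b monitor addresses>
keyring = /etc/ceph/site-b.client.admin.keyring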

Or, please walk Sunil through further configuration/environment issues here.

Actions #4

Updated by Sunil Angadi 3 months ago

Yes, on the client node common to the two Ceph clusters I have those ceph.conf and keyring files:

[root@ceph-rbd2-sdbz-twjcwu-node2 ceph]# ls
rbdmap  site-a.client.admin.keyring  site-a.conf  site-b.client.admin.keyring  site-b.conf
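
(As a sanity check, something like the following should confirm that each cluster is reachable from this node with those files; this is a sketch, not output from the test setup.)

# verify the common client node can reach each cluster
ceph -s --cluster site-a
ceph -s --cluster site-b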

When I try to migrate image1 data from site-a to site-b, I face the error below:

[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# echo '{"type":"native","cluster_name":"site-a","client_name":"client.admin","pool_name":"pool1","image_name":"image1"}' | rbd --cluster site-a --pool pool_dest migration prepare --import-only --source-spec-path - dest_image1
rbd: error opening pool 'pool_dest': (2) No such file or directory

When trying it on site-b, I face the issue below:
[root@ceph-rbd2-sdbz-twjcwu-node2 ceph]# echo '{"type":"native","cluster_name":"site-a","client_name":"client.admin","pool_name":"pool1","image_name":"image1"}' | rbd --cluster site-b --pool pool_dest migration prepare --import-only --source-spec-path - dest_image1
2024-01-25T06:08:48.712-0500 7fbf820d8c00 -1 librbd::migration::NativeFormat: 0x56209794c4f0 open: invalid pool name
2024-01-25T06:08:48.713-0500 7fbf820d8c00 -1 librbd::migration::OpenSourceImageRequest: 0x562097b07530 handle_open_source: failed to open migration source: (2) No such file or directory
2024-01-25T06:08:48.713-0500 7fbf820d8c00 -1 librbd::Migration: prepare_import: failed to open source image: (2) No such file or directory
rbd: preparing import migration failed: (2) No such file or directory

The respective pools are created in those clusters, but the site-a cluster is still not able to get the site-b pool details.
Please refer below:

[root@ceph-rbd2-sdbz-twjcwu-node2 ceph]# rbd ls -p pool1 --cluster site-a
image1
image2
image3
[root@ceph-rbd2-sdbz-twjcwu-node2 ceph]# ceph df --cluster site-b
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    180 GiB  180 GiB  324 MiB   324 MiB       0.18
TOTAL  180 GiB  180 GiB  324 MiB   324 MiB       0.18

--- POOLS ---
POOL       ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr        1    1  449 KiB        2  1.3 MiB      0     57 GiB
pool_dest   2   32      0 B        0      0 B      0     57 GiB

@Ozeri, please walk me through this new feature's functionality,
or if you have any docs please share them.

Actions #5

Updated by Sunil Angadi 3 months ago

After a discussion with @Ozeri,
I tried the test steps for a replicated pool with the native data format from the PR test:
https://github.com/ceph/ceph/pull/44470/files#diff-9e60a600f8fbf133c1acb3b81ef26643d7b88d3891fb7adaf84db699a9dbf9a5

Replicated pool test
------

[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# ceph --cluster site-a osd pool create datapool 4
pool 'datapool' created
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd --cluster site-a pool init datapool
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# 
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# ceph --cluster site-b osd pool create datapool 4
pool 'datapool' created
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd --cluster site-b pool init datapool
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# 

[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# echo '{"type":"qcow","stream":{"type":"http","url":"http://download.ceph.com/qa/ubuntu-12.04.qcow2"}}' | rbd --cluster site-b migration prepare --import-only --source-spec-path - datapool/client.0.0
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd ls -p datapool --cluster site-b
client.0.0
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd du datapool/client.0.0 --cluster site-b
NAME        PROVISIONED  USED
client.0.0        2 GiB   0 B

[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd --cluster site-b migration execute datapool/client.0.0
Image migration: 100% complete...done.
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd --cluster site-b migration commit datapool/client.0.0
Commit image migration: 100% complete...done.
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd --cluster site-b snap create datapool/client.0.0@snap
Creating snap: 100% complete...done.
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd --cluster site-b create --size 1G datapool/image1
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd --cluster site-b snap create datapool/image1@snap
Creating snap: 100% complete...done.
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd --cluster site-b create --size 1G datapool/image2
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# rbd --cluster site-b snap create datapool/image2@snap
Creating snap: 100% complete...done.
[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# 

But migration prepare to site-a fails with the error below:

[root@ceph-rbd2-sdbz-twjcwu-node2 ~]# echo '{"type":"native","cluster_name":"site-b","client_name":"client.admin","pool_name":"datapool","image_name":"client.0.0","snap_name":"snap"}' | rbd --cluster site-a migration prepare --import-only --source-spec-path - datapool/client.0.0
*** Caught signal (Segmentation fault) **
in thread 7f9dc97fa640 thread_name:taskfin_librbd
ceph version 18.2.1-5.el9cp (6f8b5d0736d7f36ae2994b6f9da678cb6f154a90) reef (stable)
1: /lib64/libc.so.6(+0x54db0) [0x7f9dddc54db0]
2: pthread_mutex_lock()
3: (ceph::logging::Log::submit_entry(ceph::logging::Entry&&)+0x3b) [0x7f9dde85ec1b]
4: /lib64/librbd.so.1(+0x2689d8) [0x7f9ddf2689d8]
5: /lib64/librbd.so.1(+0xeafdd) [0x7f9ddf0eafdd]
6: /lib64/librbd.so.1(+0x147f4d) [0x7f9ddf147f4d]
7: /lib64/librbd.so.1(+0xeafdd) [0x7f9ddf0eafdd]
8: (Finisher::finisher_thread_entry()+0x175) [0x7f9dde616ef5]
9: /lib64/libc.so.6(+0x9f802) [0x7f9dddc9f802]
10: /lib64/libc.so.6(+0x3f450) [0x7f9dddc3f450]
2024-01-25T09:27:31.942-0500 7f9dc97fa640 -1 *** Caught signal (Segmentation fault) **
in thread 7f9dc97fa640 thread_name:taskfin_librbd

ceph version 18.2.1-5.el9cp (6f8b5d0736d7f36ae2994b6f9da678cb6f154a90) reef (stable)
1: /lib64/libc.so.6(+0x54db0) [0x7f9dddc54db0]
2: pthread_mutex_lock()
3: (ceph::logging::Log::submit_entry(ceph::logging::Entry&&)+0x3b) [0x7f9dde85ec1b]
4: /lib64/librbd.so.1(+0x2689d8) [0x7f9ddf2689d8]
5: /lib64/librbd.so.1(+0xeafdd) [0x7f9ddf0eafdd]
6: /lib64/librbd.so.1(+0x147f4d) [0x7f9ddf147f4d]
7: /lib64/librbd.so.1(+0xeafdd) [0x7f9ddf0eafdd]
8: (Finisher::finisher_thread_entry()+0x175) [0x7f9dde616ef5]
9: /lib64/libc.so.6(+0x9f802) [0x7f9dddc9f802]
10: /lib64/libc.so.6(+0x3f450) [0x7f9dddc3f450]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
-246> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command assert hook 0x560120460cf0
-245> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command abort hook 0x560120460cf0
-244> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command leak_some_memory hook 0x560120460cf0
-243> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command perfcounters_dump hook 0x560120460cf0
-242> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command 1 hook 0x560120460cf0
-241> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command perf dump hook 0x560120460cf0
-240> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command perfcounters_schema hook 0x560120460cf0
-239> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command perf histogram dump hook 0x560120460cf0
-238> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command 2 hook 0x560120460cf0
-237> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command perf schema hook 0x560120460cf0
-236> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command counter dump hook 0x560120460cf0
-235> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command counter schema hook 0x560120460cf0
-234> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command perf histogram schema hook 0x560120460cf0
-233> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command perf reset hook 0x560120460cf0
-232> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command config show hook 0x560120460cf0
-231> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command config help hook 0x560120460cf0
-230> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command config set hook 0x560120460cf0
-229> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command config unset hook 0x560120460cf0
-228> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command config get hook 0x560120460cf0
-227> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command config diff hook 0x560120460cf0
-226> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command config diff get hook 0x560120460cf0
-225> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command injectargs hook 0x560120460cf0
-224> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command log flush hook 0x560120460cf0
-223> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command log dump hook 0x560120460cf0
-222> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command log reopen hook 0x560120460cf0
-221> 2024-01-25T09:27:31.912-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command dump_mempools hook 0x560120463378
-220> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00 10 monclient: get_monmap_and_config
-219> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00 10 monclient: build_initial_monmap
-218> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  1 build_initial for_mkfs: 0
-217> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00 10 monclient: monmap:
epoch 0
fsid 14198cc8-b9b5-11ee-84d7-fa163e4fc610
last_changed 2024-01-25T09:27:31.921486-0500
created 2024-01-25T09:27:31.921486-0500
min_mon_release 0 (unknown)
election_strategy: 1
0: [v2:10.0.204.103:3300/0,v1:10.0.204.103:6789/0] mon.noname-c
1: [v2:10.0.205.205:3300/0,v1:10.0.205.205:6789/0] mon.noname-a
2: [v2:10.0.207.179:3300/0,v1:10.0.207.179:6789/0] mon.noname-b

-216> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding auth protocol: cephx
-215> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding auth protocol: cephx
-214> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding auth protocol: cephx
-213> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding auth protocol: none
-212> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: secure
-211> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: crc
-210> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: secure
-209> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: crc
-208> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: secure
-207> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: crc
-206> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: crc
-205> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: secure
-204> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: crc
-203> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: secure
-202> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: crc
-201> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204fb0a0) adding con mode: secure
-200> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  2 auth: KeyRing::load: loaded key file /etc/ceph/site-a.client.admin.keyring
-199> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00 10 monclient: init
-198> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding auth protocol: cephx
-197> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding auth protocol: cephx
-196> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding auth protocol: cephx
-195> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding auth protocol: none
-194> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: secure
-193> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: crc
-192> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: secure
-191> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: crc
-190> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: secure
-189> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: crc
-188> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: crc
-187> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: secure
-186> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: crc
-185> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: secure
-184> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: crc
-183> 2024-01-25T09:27:31.921-0500 7f9ddd15fc00  5 AuthRegistry(0x7fff0378e160) adding con mode: secure
-182> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00  2 auth: KeyRing::load: loaded key file /etc/ceph/site-a.client.admin.keyring
-181> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00  2 auth: KeyRing::load: loaded key file /etc/ceph/site-a.client.admin.keyring
-180> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00  5 asok(0x560120403e10) register_command rotate-key hook 0x7fff0378e2a8
-179> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient: _reopen_session rank -1
-178> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient: _add_conns ranks=[0,2,1]
-177> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient(hunting): picked mon.noname-c con 0x560120592a10 addr [v2:10.0.204.103:3300/0,v1:10.0.204.103:6789/0]
-176> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient(hunting): picked mon.noname-b con 0x56012058c900 addr [v2:10.0.207.179:3300/0,v1:10.0.207.179:6789/0]
-175> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient(hunting): picked mon.noname-a con 0x56012058d2c0 addr [v2:10.0.205.205:3300/0,v1:10.0.205.205:6789/0]
-174> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient(hunting): start opening mon connection
-173> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient(hunting): start opening mon connection
-172> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient(hunting): start opening mon connection
-171> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient(hunting): _renew_subs
-170> 2024-01-25T09:27:31.922-0500 7f9ddd15fc00 10 monclient(hunting): authenticate will time out at 272715.883510s
-169> 2024-01-25T09:27:31.924-0500 7f9ddb45d640 10 monclient(hunting): get_auth_request con 0x56012058c900 auth_method 0
-168> 2024-01-25T09:27:31.924-0500 7f9ddb45d640 10 monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
-167> 2024-01-25T09:27:31.924-0500 7f9ddb45d640 10 monclient(hunting): _init_auth method 2
-166> 2024-01-25T09:27:31.924-0500 7f9ddb45d640 10 monclient(hunting): _init_auth creating new auth
-165> 2024-01-25T09:27:31.924-0500 7f9ddbc5e640 10 monclient(hunting): get_auth_request con 0x560120592a10 auth_method 0
-164> 2024-01-25T09:27:31.924-0500 7f9ddbc5e640 10 monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
-163> 2024-01-25T09:27:31.924-0500 7f9ddbc5e640 10 monclient(hunting): _init_auth method 2
-162> 2024-01-25T09:27:31.924-0500 7f9ddbc5e640 10 monclient(hunting): _init_auth creating new auth
-161> 2024-01-25T09:27:31.924-0500 7f9ddb45d640 10 monclient(hunting): handle_auth_reply_more payload 9
-160> 2024-01-25T09:27:31.924-0500 7f9ddb45d640 10 monclient(hunting): handle_auth_reply_more payload_len 9
-159> 2024-01-25T09:27:31.924-0500 7f9ddb45d640 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-158> 2024-01-25T09:27:31.924-0500 7f9ddcec0640 10 monclient(hunting): get_auth_request con 0x56012058d2c0 auth_method 0
-157> 2024-01-25T09:27:31.924-0500 7f9ddcec0640 10 monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
-156> 2024-01-25T09:27:31.924-0500 7f9ddcec0640 10 monclient(hunting): _init_auth method 2
-155> 2024-01-25T09:27:31.924-0500 7f9ddcec0640 10 monclient(hunting): _init_auth creating new auth
-154> 2024-01-25T09:27:31.924-0500 7f9ddbc5e640 10 monclient(hunting): handle_auth_reply_more payload 9
-153> 2024-01-25T09:27:31.924-0500 7f9ddbc5e640 10 monclient(hunting): handle_auth_reply_more payload_len 9
-152> 2024-01-25T09:27:31.924-0500 7f9ddbc5e640 10 monclient(hunting): handle_auth_reply_more responding with 36 bytes
-151> 2024-01-25T09:27:31.924-0500 7f9ddb45d640 10 monclient(hunting): handle_auth_done global_id 34538 payload 274
-150> 2024-01-25T09:27:31.925-0500 7f9ddb45d640 10 monclient: _finish_hunting 0
-149> 2024-01-25T09:27:31.925-0500 7f9ddb45d640  1 monclient: found mon.noname-b
-148> 2024-01-25T09:27:31.925-0500 7f9ddb45d640 10 monclient: _send_mon_message to mon.noname-b at v2:10.0.207.179:3300/0
-147> 2024-01-25T09:27:31.927-0500 7f9ddac5c640 10 monclient: handle_monmap mon_map magic: 0 v1
-146> 2024-01-25T09:27:31.927-0500 7f9ddac5c640 10 monclient:  got monmap 7 from mon.noname-b (according to old e7)
-145> 2024-01-25T09:27:31.927-0500 7f9ddac5c640 10 monclient: dump:
epoch 7
fsid 14198cc8-b9b5-11ee-84d7-fa163e4fc610
last_changed 2024-01-23T01:15:20.450929-0500
created 2024-01-23T01:03:08.724939-0500
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:10.0.204.103:3300/0,v1:10.0.204.103:6789/0] mon.ceph-rbd1-sdbz-twjcwu-node1-installer
1: [v2:10.0.205.205:3300/0,v1:10.0.205.205:6789/0] mon.ceph-rbd1-sdbz-TWJCWU-node2
2: [v2:10.0.207.179:3300/0,v1:10.0.207.179:6789/0] mon.ceph-rbd1-sdbz-TWJCWU-node3

-144> 2024-01-25T09:27:31.927-0500 7f9ddac5c640 10 monclient: _finish_auth 0
-143> 2024-01-25T09:27:31.927-0500 7f9ddac5c640 10 monclient: _check_auth_tickets
-142> 2024-01-25T09:27:31.927-0500 7f9ddd15fc00  5 monclient: authenticate success, global_id 34538
-141> 2024-01-25T09:27:31.927-0500 7f9ddac5c640 10 monclient: handle_config config(1 keys) v1
-140> 2024-01-25T09:27:31.927-0500 7f9ddd15fc00 10 monclient: get_monmap_and_config success
-139> 2024-01-25T09:27:31.927-0500 7f9ddd15fc00  4 set_mon_vals no callback set
-138> 2024-01-25T09:27:31.927-0500 7f9ddd15fc00 10 set_mon_vals container_image = quay.ceph.io/ceph-ci/ceph@sha256:b3cb0d1c0f066d81acd12f4ca4a331c9b83b1ad819e454934272e16bb5feda88
-137> 2024-01-25T09:27:31.927-0500 7f9ddac5c640 10 monclient: handle_monmap mon_map magic: 0 v1
-136> 2024-01-25T09:27:31.927-0500 7f9ddac5c640 10 monclient:  got monmap 7 from mon.ceph-rbd1-sdbz-TWJCWU-node3 (according to old e7)
-135> 2024-01-25T09:27:31.927-0500 7f9ddac5c640 10 monclient: dump:
epoch 7
fsid 14198cc8-b9b5-11ee-84d7-fa163e4fc610
last_changed 2024-01-23T01:15:20.450929-0500
created 2024-01-23T01:03:08.724939-0500
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:10.0.204.103:3300/0,v1:10.0.204.103:6789/0] mon.ceph-rbd1-sdbz-twjcwu-node1-installer
1: [v2:10.0.205.205:3300/0,v1:10.0.205.205:6789/0] mon.ceph-rbd1-sdbz-TWJCWU-node2
2: [v2:10.0.207.179:3300/0,v1:10.0.207.179:6789/0] mon.ceph-rbd1-sdbz-TWJCWU-node3

-134> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00 10 monclient: shutdown
-133> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 asok(0x560120403e10) unregister_commands rotate-key
-132> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding auth protocol: cephx
-131> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding auth protocol: cephx
-130> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding auth protocol: cephx
-129> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding auth protocol: none
-128> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: secure
-127> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: crc
-126> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: secure
-125> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: crc
-124> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: secure
-123> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: crc
-122> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: crc
-121> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: secure
-120> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: crc
-119> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: secure
-118> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: crc
-117> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  5 AuthRegistry(0x5601204f7b98) adding con mode: secure
-116> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  2 auth: KeyRing::load: loaded key file /etc/ceph/site-a.client.admin.keyring
-115> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00 10 monclient: build_initial_monmap
-114> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00  1 build_initial for_mkfs: 0
-113> 2024-01-25T09:27:31.928-0500 7f9ddd15fc00 10 monclient: monmap:
epoch 0
