Bug #51628

closed

bogus "unable to find a keyring" warning on "rbd mirroring bootstrap import"

Added by Ritesh Shah almost 3 years ago. Updated almost 2 years ago.

Status:
Resolved
Priority:
Normal
Assignee:
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
octopus,pacific
Regression:
No
Severity:
3 - minor
Reviewed:
07/12/2021
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi,

The bootstrap import command fails with the following error messages:

[root@ceph-node01 ceph]# rbd --cluster site-b mirror pool peer bootstrap import --site-name site-b rbd token
2021-07-06T09:35:25.293-0400 7f96cc89c2c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2021-07-06T09:35:25.294-0400 7f96cc89c2c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2021-07-06T09:35:25.294-0400 7f96cc89c2c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory

I have the admin keys and ceph.conf from both clusters on a common admin node, and I am able to execute all other commands against both clusters, but not this one.
I followed this: https://docs.ceph.com/en/latest/rbd/rbd-mirroring

Can someone help me understand how to fix the above issue, or is this a bug?

Regards
Ritesh


Related issues 2 (0 open, 2 closed)

Copied to rbd - Backport #52735: pacific: bogus "unable to find a keyring" warning on "rbd mirroring bootstrap import" (Resolved, Ilya Dryomov)
Copied to rbd - Backport #52736: octopus: bogus "unable to find a keyring" warning on "rbd mirroring bootstrap import" (Resolved, Ilya Dryomov)
Actions #1

Updated by Ritesh Shah almost 3 years ago

Additional command details for your reference, after creating two new Ceph clusters and running the commands from a common admin workstation.

On the common admin workstation, copy the ceph.conf and admin keyrings from both Ceph clusters:
[root@workstation-d43b ceph]# scp ceph-mon01:/etc/ceph/ceph.conf /etc/ceph/site-a.conf
ceph.conf 100% 177 127.1KB/s 00:00
[root@workstation-d43b ceph]# scp ceph-mon01:/etc/ceph/ceph.client.admin.keyring /etc/ceph/site-a.client.admin.keyring
ceph.client.admin.keyring 100% 63 50.3KB/s 00:00
[root@workstation-d43b ceph]# scp ceph-node01:/etc/ceph/ceph.client.admin.keyring /etc/ceph/site-b.client.admin.keyring
ceph.client.admin.keyring 100% 63 81.4KB/s 00:00
[root@workstation-d43b ceph]# scp ceph-node01:/etc/ceph/ceph.conf /etc/ceph/site-b.conf
ceph.conf
[root@workstation-d43b ceph]# ls -la
total 44
drwxr-xr-x 2 root root 210 Jul 7 01:51 .
drwxr-xr-x. 109 root root 8192 Jul 6 01:53 ..
-rw------- 1 root root 63 Jul 7 01:25 site-a.client.admin.keyring
-rw-r--r-- 1 root root 177 Jul 7 01:25 site-a.conf
-rw------- 1 root root 63 Jul 7 01:25 site-b.client.admin.keyring
-rw-r--r-- 1 root root 177 Jul 7 01:25 site-b.conf
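
For context, the --cluster option works off this naming convention: Ceph reads /etc/ceph/<cluster>.conf and searches for keyrings such as /etc/ceph/<cluster>.<client>.keyring. A minimal sanity check from the admin workstation, assuming the client.admin user, might look like:
# The "--cluster" name maps to /etc/ceph/<cluster>.conf and, for client.admin,
# to /etc/ceph/<cluster>.client.admin.keyring.
ls /etc/ceph/site-a.conf /etc/ceph/site-a.client.admin.keyring
ls /etc/ceph/site-b.conf /etc/ceph/site-b.client.admin.keyring
# Both clusters should be reachable from the workstation:
ceph --cluster site-a -s
ceph --cluster site-b -s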

Create the data pool and enable the rbd application on it on both clusters.

[root@workstation-d43b ceph]# ceph --cluster site-a osd pool create data 8
[root@workstation-d43b ceph]# ceph osd pool application enable data rbd --cluster site-a
[root@workstation-d43b ceph]# ceph --cluster site-b osd pool create data 8
[root@workstation-d43b ceph]# ceph osd pool application enable data rbd --cluster site-b

[root@workstation-8864 ~]# rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling --cluster site-a

Enable the data pool for mirroring:
[root@workstation-8864 ~]# rbd mirror pool enable data pool --cluster site-a
[root@workstation-8864 ~]# rbd mirror pool info data --cluster site-a
Mode: pool
Site Name: 2705b958-e2ec-11eb-add4-2cc260754989

Peer Sites: none

rbd mirror peer bootstrap:
[root@workstation-8864 ~]# rbd --cluster site-a mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a

rbd mirror peer bootstrap import fails as shown below:
[root@workstation-8864 ~]# rbd --cluster site-b mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a
2021-07-12T05:33:28.044-0400 7f005b8052c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2021-07-12T05:33:28.047-0400 7f005b8052c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2021-07-12T05:33:28.047-0400 7f005b8052c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory

Status after running the above command:
[root@workstation-8864 ~]# rbd mirror pool status data --cluster site-a
health: WARNING
daemon health: UNKNOWN
image health: WARNING
images: 1 total
1 unknown

[root@workstation-8864 ~]# rbd mirror pool status data --cluster site-b
health: UNKNOWN
daemon health: UNKNOWN
image health: OK
images: 0 total

[root@workstation-8864 ~]# ls -l /etc/ceph
total 20
-rw-r--r-- 1 root root 92 May 13 13:54 rbdmap
-rw------- 1 root root 63 Jul 12 05:21 site-a.client.admin.keyring
-rw-r--r-- 1 root root 177 Jul 12 05:21 site-a.conf
-rw------- 1 root root 63 Jul 12 05:22 site-b.client.admin.keyring
-rw-r--r-- 1 root root 177 Jul 12 05:22 site-b.conf

Status of both Ceph clusters:
[root@workstation-8864 ~]# ceph -s --cluster site-a
cluster:
id: 2705b958-e2ec-11eb-add4-2cc260754989
health: HEALTH_WARN
nodeep-scrub flag(s) set

services:
mon: 4 daemons, quorum ceph-mon01.example.com,ceph-mon02,ceph-mon03,proxy01 (age 56m)
mgr: ceph-mon01.example.com.ekcxva(active, since 62m), standbys: ceph-mon02.qzajty
osd: 3 osds: 3 up (since 53m), 3 in (since 53m)
flags nodeep-scrub
data:
pools: 3 pools, 65 pgs
objects: 6 objects, 35 B
usage: 18 MiB used, 30 GiB / 30 GiB avail
pgs: 65 active+clean

[root@workstation-8864 ~]# ceph -s --cluster site-b
cluster:
id: ec08e4cc-e2ed-11eb-aa60-2cc26078e4ef
health: HEALTH_WARN
mon ceph-node01.example.com is low on available space
nodeep-scrub flag(s) set

services:
mon: 4 daemons, quorum ceph-node01.example.com,ceph-node02,ceph-node03,proxy02 (age 45m)
mgr: ceph-node01.example.com.zibauj(active, since 49m), standbys: ceph-node02.hjagsh
osd: 3 osds: 3 up (since 19m), 3 in (since 19m)
flags nodeep-scrub
data:
pools: 3 pools, 65 pgs
objects: 1 objects, 0 B
usage: 18 MiB used, 30 GiB / 30 GiB avail
pgs: 65 active+clean

[root@workstation-8864 ~]# ceph osd lspools --cluster site-a
1 device_health_metrics
2 image-pool
3 data
[root@workstation-8864 ~]# ceph osd lspools --cluster site-b
1 device_health_metrics
2 image-pool
3 data

I see that the rbd-mirror daemons do not get started on the secondary cluster site-b:
[root@ceph-node01 ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf7e5fa7fd91 docker.io/ceph/ceph:v16 -n mon.ceph-node0... 51 minutes ago Up 51 minutes ago ceph-ec08e4cc-e2ed-11eb-aa60-2cc26078e4ef-mon.ceph-node01.example.com
7bf3c447e39d docker.io/ceph/ceph:v16 -n mgr.ceph-node0... 51 minutes ago Up 51 minutes ago ceph-ec08e4cc-e2ed-11eb-aa60-2cc26078e4ef-mgr.ceph-node01.example.com.zibauj
606185d19143 docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -n client.crash.c... 50 minutes ago Up 50 minutes ago ceph-ec08e4cc-e2ed-11eb-aa60-2cc26078e4ef-crash.ceph-node01
8a7467ecf749 docker.io/prom/node-exporter:v0.18.1 --no-collector.ti... 49 minutes ago Up 49 minutes ago ceph-ec08e4cc-e2ed-11eb-aa60-2cc26078e4ef-node-exporter.ceph-node01
03da645ba03e docker.io/ceph/ceph-grafana:6.7.4 /bin/bash 49 minutes ago Up 49 minutes ago ceph-ec08e4cc-e2ed-11eb-aa60-2cc26078e4ef-grafana.ceph-node01
66a49055139f docker.io/prom/alertmanager:v0.20.0 --cluster.listen... 45 minutes ago Up 45 minutes ago ceph-ec08e4cc-e2ed-11eb-aa60-2cc26078e4ef-alertmanager.ceph-node01
d199525bd15c docker.io/prom/prometheus:v2.18.1 --config.file=/et... 45 minutes ago Up 45 minutes ago ceph-ec08e4cc-e2ed-11eb-aa60-2cc26078e4ef-prometheus.ceph-node01
f9e230d9d2f4 docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -n osd.0 -f --set... 20 minutes ago Up 20 minutes ago ceph-ec08e4cc-e2ed-11eb-aa60-2cc26078e4ef-osd.0

Also checked the same on a node of the site-a cluster:
[root@ceph-mon01 ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
387ce9a2cddd docker.io/ceph/ceph:v16 -n mon.ceph-mon01... About an hour ago Up About an hour ago ceph-2705b958-e2ec-11eb-add4-2cc260754989-mon.ceph-mon01.example.com
4e0ceb576858 docker.io/ceph/ceph:v16 -n mgr.ceph-mon01... About an hour ago Up About an hour ago ceph-2705b958-e2ec-11eb-add4-2cc260754989-mgr.ceph-mon01.example.com.ekcxva
72965af0a264 docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -n client.crash.c... About an hour ago Up About an hour ago ceph-2705b958-e2ec-11eb-add4-2cc260754989-crash.ceph-mon01
825a24d21853 docker.io/prom/node-exporter:v0.18.1 --no-collector.ti... About an hour ago Up About an hour ago ceph-2705b958-e2ec-11eb-add4-2cc260754989-node-exporter.ceph-mon01
0cea80758d5c docker.io/ceph/ceph-grafana:6.7.4 /bin/bash About an hour ago Up About an hour ago ceph-2705b958-e2ec-11eb-add4-2cc260754989-grafana.ceph-mon01
bbb68858bc5c docker.io/prom/alertmanager:v0.20.0 --cluster.listen... 57 minutes ago Up 57 minutes ago ceph-2705b958-e2ec-11eb-add4-2cc260754989-alertmanager.ceph-mon01
6315dc665e8f docker.io/prom/prometheus:v2.18.1 --config.file=/et... 57 minutes ago Up 57 minutes ago ceph-2705b958-e2ec-11eb-add4-2cc260754989-prometheus.ceph-mon01
5612272f5670 docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -n osd.2 -f --set... 54 minutes ago Up 54 minutes ago ceph-2705b958-e2ec-11eb-add4-2cc260754989-osd.2

[root@workstation-8864 ~]# ceph --version --cluster site-a
ceph version 16.2.4 (3cbe25cde3cfa028984618ad32de9edc4c1eaed0) pacific (stable)
[root@workstation-8864 ~]# ceph --version --cluster site-b
ceph version 16.2.4 (3cbe25cde3cfa028984618ad32de9edc4c1eaed0) pacific (stable)

Actions #2

Updated by Ilya Dryomov almost 3 years ago

  • Tracker changed from Support to Bug
  • Regression set to No
  • Severity set to 3 - minor

Hi Ritesh,

I don't think "rbd mirror pool peer bootstrap import" actually failed. What is the output of "rbd mirror pool info" on site-b? If you see an rx-only peer there, "rbd mirror pool peer bootstrap import" did its job and these "No such file or directory" errors can be ignored.

Actions #3

Updated by Ilya Dryomov almost 3 years ago

I see that the rbd-mirror daemons do not get started on the secondary site-b :

Did you attempt to start it? It's not automatic.

Actions #4

Updated by Ritesh Shah almost 3 years ago

Hi,

I couldn't find any documentation on how to start the rbd-mirror daemon. Can you point me to a document? That's the point I had mentioned earlier as well.

I was under the impression that the bootstrap would create the rbd-mirror daemon, but I think I was wrong.

Regards
Ritesh

Actions #5

Updated by Deepika Upadhyay almost 3 years ago

Ritesh Shah wrote:

Hi,

I couldn't find any documentation on how to start the rbd-mirror daemon. Can you point me to a document? That's the point I had mentioned earlier as well.

I was under the impression that the bootstrap would create the rbd-mirror daemon, but I think I was wrong.

Regards
Ritesh

@Ritesh, this might help: https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#rbd-mirror-daemon

Actions #6

Updated by Ritesh Shah almost 3 years ago

Hi,

I installed and enabled the rbd-mirror daemon and tried configuring one-way mirroring. image1 in the data pool seems to be fine, but the rbd mirror status for image1 is unknown. What is the possible issue here?

[root@workstation-0023 ceph]# rbd info data/image1 --cluster site-a
rbd image 'image1':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 5f558a37ac4a
block_name_prefix: rbd_data.5f558a37ac4a
format: 2
features: exclusive-lock, journaling
op_features:
flags:
create_timestamp: Fri Jul 16 02:58:17 2021
access_timestamp: Fri Jul 16 02:58:17 2021
modify_timestamp: Fri Jul 16 02:58:17 2021
journal: 5f558a37ac4a
mirroring state: enabled
mirroring mode: journal
mirroring global id: a25ceee9-2d11-42b3-aded-d398324e700b
mirroring primary: true

[root@workstation-0023 ceph]# rbd mirror image status data/image1 --cluster site-a
image1:
global_id: a25ceee9-2d11-42b3-aded-d398324e700b
state: down+unknown
description: status not found
last_update:

Additional information and commands which I ran to configure one-way mirroring.

[root@workstation-0023 ceph]# ceph --cluster site-a osd pool create data 8
pool 'data' created
[root@workstation-0023 ceph]# ceph osd pool application enable data rbd --cluster site-a
enabled application 'rbd' on pool 'data'
[root@workstation-0023 ceph]# ceph --cluster site-b osd pool create data 8
pool 'data' created
[root@workstation-0023 ceph]# ceph osd pool application enable data rbd --cluster site-b
enabled application 'rbd' on pool 'data'
[root@workstation-0023 ceph]# rbd mirror pool enable data pool --cluster site-a
[root@workstation-0023 ceph]# rbd mirror pool info data --cluster site-a
Mode: pool
Site Name: 90f45bea-e5ed-11eb-bb7a-2cc260754989

Peer Sites: none
[root@workstation-0023 ceph]# rbd mirror pool enable data pool --cluster site-b
[root@workstation-0023 ceph]# rbd mirror pool info data --cluster site-b
Mode: pool
Site Name: 7961beb2-e5f4-11eb-81e9-2cc26078e4ef

Peer Sites: none
[root@workstation-0023 ceph]# rbd --cluster site-a mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a
[root@workstation-0023 ceph]# rbd --cluster site-b mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a
2021-07-16T02:56:31.710-0400 7fbdb78df2c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2021-07-16T02:56:31.711-0400 7fbdb78df2c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2021-07-16T02:56:31.711-0400 7fbdb78df2c0 -1 auth: unable to find a keyring on /etc/ceph/..keyring,/etc/ceph/.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[root@workstation-0023 ceph]# rbd mirror pool status data --cluster site-a
health: WARNING
daemon health: WARNING
image health: OK
images: 0 total
[root@workstation-0023 ceph]# rbd mirror pool status data --cluster site-b
health: WARNING
daemon health: WARNING
image health: OK
images: 0 total
[root@workstation-0023 ceph]# ceph -s --cluster site-a
cluster:
id: 90f45bea-e5ed-11eb-bb7a-2cc260754989
health: HEALTH_WARN
1 stray daemon(s) not managed by cephadm
nodeep-scrub flag(s) set
1 slow ops, oldest one blocked for 8707 sec, mon.ceph-mon03 has slow ops
too many PGs per OSD (273 > max 250)

services:
mon: 4 daemons, quorum ceph-mon01.example.com,ceph-mon02,ceph-mon03,proxy01 (age 2h)
mgr: ceph-mon01.example.com.uaccqk(active, since 2h), standbys: ceph-mon02.ilfbry
mds: 1/1 daemons up, 1 standby
osd: 3 osds: 3 up (since 2h), 3 in (since 2h)
flags nodeep-scrub
rbd-mirror: 1 daemon active (1 hosts)
rgw: 2 daemons active (2 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 11 pools, 273 pgs
objects: 410 objects, 9.6 MiB
usage: 154 MiB used, 30 GiB / 30 GiB avail
pgs: 273 active+clean

[root@workstation-0023 ceph]# ceph -s --cluster site-b
cluster:
id: 7961beb2-e5f4-11eb-81e9-2cc26078e4ef
health: HEALTH_WARN
1 stray daemon(s) not managed by cephadm
mon ceph-node01.example.com is low on available space
nodeep-scrub flag(s) set

services:
mon: 4 daemons, quorum ceph-node01.example.com,ceph-node02,ceph-node03,proxy02 (age 17m)
mgr: ceph-node01.example.com.opjwrx(active, since 103m), standbys: ceph-node02.ajwvyf
osd: 3 osds: 3 up (since 15m), 3 in (since 15m)
flags nodeep-scrub
rbd-mirror: 1 daemon active (1 hosts)
data:
pools: 2 pools, 33 pgs
objects: 1 objects, 0 B
usage: 17 MiB used, 30 GiB / 30 GiB avail
pgs: 33 active+clean

[root@workstation-0023 ceph]# ceph osd lspools --cluster site-a
1 device_health_metrics
2 rbd
3 cephfs.fs_name.meta
4 cephfs.fs_name.data
5 .rgw.root
6 test_zone.rgw.log
7 test_zone.rgw.control
8 test_zone.rgw.meta
9 test_zone.rgw.buckets.index
10 test_zone.rgw.buckets.data
11 data
[root@workstation-0023 ceph]# ceph osd lspools --cluster site-b
1 device_health_metrics
2 data
[root@workstation-0023 ceph]# ceph --version --cluster site-a
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
[root@workstation-0023 ceph]# ceph --version --cluster site-b
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
[root@workstation-0023 ceph]# rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling --cluster site-a
[root@workstation-0023 ceph]# rbd mirror pool info data --cluster site-a
Mode: pool
Site Name: site-a

Peer Sites: none
[root@workstation-0023 ceph]# rbd mirror pool info data --cluster site-b
Mode: pool
Site Name: site-b

Peer Sites:

UUID: b433259d-f6b6-4539-969b-a53a06f65a5e
Name: site-a
Direction: rx-only
Client: client.rbd-mirror-peer
[root@workstation-0023 ceph]# rbd --cluster site-a mirror pool info data --all
Mode: pool
Site Name: site-a

Peer Sites: none
[root@workstation-0023 ceph]# rbd --cluster site-b mirror pool info data --all
Mode: pool
Site Name: site-b

Peer Sites:

UUID: b433259d-f6b6-4539-969b-a53a06f65a5e
Name: site-a
Direction: rx-only
Client: client.rbd-mirror-peer
Mon Host: [v2:192.168.56.64:3300/0,v1:192.168.56.64:6789/0]
Key: AQB+LfFg9B+YDhAA8qTSxgZLhdvfLX/3AeLCRQ==

Actions #8

Updated by Ilya Dryomov almost 3 years ago

How are you starting rbd-mirror daemons? "1 stray daemon(s) not managed by cephadm" warnings seem related. I suspect you are doing it manually and there is probably something wrong with the options, the config file, etc.

Also, since you are constraining the direction, there is no need to start an rbd-mirror daemon in site-a. The one in site-b should be sufficient, once started properly.
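
A rough way to check, once a daemon is running against site-b (pool name taken from earlier in this thread):
rbd mirror pool status data --cluster site-b
# "daemon health" should no longer be UNKNOWN once an rbd-mirror daemon
# has registered with the site-b cluster.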

Actions #9

Updated by Ritesh Shah almost 3 years ago

I am starting the rbd-mirror daemon manually outside of cephadm, since I couldn't find any information on how to start rbd-mirror under cephadm in the link provided earlier in this tracker: https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#rbd-mirror-daemon

Yes, I was also concerned about the stray daemon warning related to rbd-mirror. Is there a proper doc detailing how to install, configure, and run the rbd-mirror daemon before enabling mirroring in Ceph?

Understood that rbd-mirror is required only on one site for one-way mirroring. Thanks for your inputs and responses so far.

Actions #10

Updated by Ilya Dryomov over 2 years ago

cephadm is fairly new, so the documentation may be lacking, but there should be nothing special about rbd-mirror daemons. You can deploy them with "ceph orch apply rbd-mirror ...". See https://docs.ceph.com/en/latest/cephadm/service-management/ for details.
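
A minimal sketch, assuming the daemon should run on ceph-node01 of the site-b cluster (run against site-b, e.g. from a cephadm shell on a site-b host; the placement host is just an example from this thread):
# Deploy one rbd-mirror daemon via cephadm.
ceph orch apply rbd-mirror --placement=ceph-node01
# Check that the service and daemon came up.
ceph orch ls rbd-mirror
ceph orch ps | grep rbd-mirror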

Actions #11

Updated by Ritesh Shah over 2 years ago

Thanks for your inputs, Ilya Dryomov.
I couldn't find a way to start rbd-mirror in the cephadm shell earlier. After starting rbd-mirror from within the cephadm shell, I was able to successfully complete one-way rbd mirroring.

I will try two-way rbd mirroring tomorrow. I think the documentation needs to be updated with some of the points we discussed here to ensure rbd-mirror is successfully configured and working.
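
As a rough verification sketch once the daemon is deployed (pool and image names from earlier in this thread), the status on the receiving site should move out of "unknown":
rbd mirror pool status data --cluster site-b
rbd mirror image status data/image1 --cluster site-b
# With one-way (rx-only) mirroring, the non-primary copy is expected to
# report a state such as up+replaying once replication is running.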

Actions #12

Updated by Ritesh Shah over 2 years ago

Tested, and two-way mirroring works. Thanks for your inputs. Please feel free to close this ticket.

Actions #13

Updated by Ilya Dryomov over 2 years ago

  • Subject changed from rbd mirroring bootstrap import fails to bogus "unable to find a keyring" warning on "rbd mirroring bootstrap import"
  • Status changed from New to In Progress
  • Assignee set to Ilya Dryomov

Thanks for confirming, Ritesh. Still need to make that warning go away to avoid confusion.

Actions #14

Updated by Loïc Dachary over 2 years ago

  • Target version deleted (v16.2.5)
Actions #15

Updated by Ilya Dryomov over 2 years ago

  • Status changed from In Progress to Fix Under Review
  • Backport set to octopus,pacific
  • Pull request ID set to 43220
Actions #17

Updated by Ilya Dryomov over 2 years ago

  • Status changed from Fix Under Review to Pending Backport
Actions #18

Updated by Backport Bot over 2 years ago

  • Copied to Backport #52735: pacific: bogus "unable to find a keyring" warning on "rbd mirroring bootstrap import" added
Actions #19

Updated by Backport Bot over 2 years ago

  • Copied to Backport #52736: octopus: bogus "unable to find a keyring" warning on "rbd mirroring bootstrap import" added
Actions #20

Updated by Ilya Dryomov almost 2 years ago

  • Status changed from Pending Backport to Resolved