Bug #49680
reconfig fails when cephx key changes
Status:
Duplicate
Priority:
High
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Description
2021-03-09T18:55:49.488+0000 7f1301279700 0 [cephadm INFO cephadm.serve] Reconfiguring mds.cephfs.reesi001.umftpx (monmap changed)...
2021-03-09T18:55:49.488+0000 7f1301279700 0 log_channel(cephadm) log [INF] : Reconfiguring mds.cephfs.reesi001.umftpx (monmap changed)...
2021-03-09T18:55:49.488+0000 7f1301279700 1 -- 172.21.2.205:0/954947653 --> [v2:172.21.2.204:3300/0,v1:172.21.2.204:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.reesi001.umftpx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1 -- 0x55661182be00 con 0x55661566dc00
2021-03-09T18:55:49.492+0000 7f1329b63700 1 -- 172.21.2.205:0/954947653 <== mon.3 v2:172.21.2.204:3300/0 23614 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "mds.cephfs.reesi001.umftpx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]=-22 key for mds.cephfs.reesi001.umftpx exists but cap osd does not match v226780) v1 ==== 256+0+0 (secure 0 0 0) 0x5566149f8ea0 con 0x55661566dc00
Because 'auth get-or-create' fails with -22 (EINVAL) when the entity already exists with different caps, we never actually reconfigure the daemon, and we keep hitting the same error every time around the serve loop.
Should cephadm (1) adjust the cephx key's caps to match, or (2) fall back to 'auth get' and use the existing key as-is?
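A minimal sketch of option (2), not the real cephadm code: the monitor here is a stub dict-backed fake, and the function names (`mon_command`, `fetch_keyring`) are hypothetical. The idea is simply that a -EINVAL from 'auth get-or-create' (entity exists, caps differ) is caught and answered with a plain 'auth get', so the reconfig can proceed with the existing key instead of failing on every loop iteration.

```python
# Hypothetical sketch of the 'fall back to auth get' approach.
# EINVAL (22) is what the mon returns as -22 in the log above.
EINVAL = 22

def mon_command(cmd, entity, caps=None, *, keyring):
    """Stub monitor: simulates 'auth get-or-create' and 'auth get'.

    Returns (retcode, output), mimicking the (negative errno, message)
    shape seen in mon_command_ack.
    """
    if cmd == "auth get-or-create":
        if entity in keyring and keyring[entity]["caps"] != caps:
            # Mirrors: "key for <entity> exists but cap ... does not match"
            return -EINVAL, f"key for {entity} exists but caps do not match"
        keyring.setdefault(entity, {"caps": caps, "key": "NEWKEY=="})
        return 0, keyring[entity]["key"]
    if cmd == "auth get":
        if entity in keyring:
            return 0, keyring[entity]["key"]
        return -2, "entity not found"  # -ENOENT
    raise ValueError(f"unknown command: {cmd}")

def fetch_keyring(entity, caps, *, keyring):
    """Try get-or-create; on a caps mismatch, fall back to 'auth get'."""
    ret, out = mon_command("auth get-or-create", entity, caps, keyring=keyring)
    if ret == -EINVAL:
        # Option (2): leave the existing key and caps untouched,
        # just fetch the key so the daemon can be reconfigured.
        ret, out = mon_command("auth get", entity, keyring=keyring)
    if ret != 0:
        raise RuntimeError(out)
    return out
```

This unblocks the reconfig loop but silently leaves stale caps in place; option (1) would instead issue an 'auth caps' update so the stored caps converge on what the spec asks for.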