Bug #22554
bluestore osd cannot authenticate when created with --osd-id option
Status:
Rejected
Priority:
Normal
Assignee:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
When you try to create/prepare a new BlueStore OSD with a specific (previously unused) ID using the ceph-volume command, the OSD fails to activate and cannot connect to the cluster because of an authentication failure.
The OSD log ends with "Operation not permitted".
# ceph-volume lvm create --bluestore --data /dev/vdc --osd-id 6
Running command: sudo vgcreate --force --yes ceph-2feb0f54-91c5-4516-8907-801be133a6cd /dev/vdc
 stdout: Physical volume "/dev/vdc" successfully created.
 stdout: Volume group "ceph-2feb0f54-91c5-4516-8907-801be133a6cd" successfully created
Running command: sudo lvcreate --yes -l 100%FREE -n osd-block-052d2a21-4f6b-4d99-975a-9efd1ce0d07f ceph-2feb0f54-91c5-4516-8907-801be133a6cd
 stdout: Logical volume "osd-block-052d2a21-4f6b-4d99-975a-9efd1ce0d07f" created.
Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-6
Running command: chown -R ceph:ceph /dev/dm-1
Running command: sudo ln -s /dev/ceph-2feb0f54-91c5-4516-8907-801be133a6cd/osd-block-052d2a21-4f6b-4d99-975a-9efd1ce0d07f /var/lib/ceph/osd/ceph-6/block
Running command: sudo ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-6/activate.monmap
 stderr: got monmap epoch 1
 stderr:
Running command: ceph-authtool /var/lib/ceph/osd/ceph-6/keyring --create-keyring --name osd.6 --add-key AQC+gUtaWqamDRAAVBWa4Z8xPJhRGX+l+JwcWw==
 stdout: creating /var/lib/ceph/osd/ceph-6/keyring
 stdout: added entity osd.6 auth auth(auid = 18446744073709551615 key=AQC+gUtaWqamDRAAVBWa4Z8xPJhRGX+l+JwcWw== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/
Running command: sudo ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 6 --monmap /var/lib/ceph/osd/ceph-6/activate.monmap --key **************************************** --osd-data /var/lib/ceph/osd/ceph-6/ --osd-uuid 052d2a21-4f6b-4d99-975a-9efd1ce0d07f --setuser ceph --setgroup ceph
 stderr: 2018-01-02 12:57:36.113544 7fea9656ee00 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
 stderr: 2018-01-02 12:57:36.115132 7fea9656ee00 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
 stderr: 2018-01-02 12:57:36.116581 7fea9656ee00 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
2018-01-02 12:57:36.116732 7fea9656ee00 -1 bluestore(/var/lib/ceph/osd/ceph-6/) _read_fsid unparsable uuid
 stderr: 2018-01-02 12:57:37.255273 7fea9656ee00 -1 key AQC+gUtaWqamDRAAVBWa4Z8xPJhRGX+l+JwcWw==
Running command: sudo ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-2feb0f54-91c5-4516-8907-801be133a6cd/osd-block-052d2a21-4f6b-4d99-975a-9efd1ce0d07f --path /var/lib/ceph/osd/ceph-6
Running command: sudo ln -snf /dev/ceph-2feb0f54-91c5-4516-8907-801be133a6cd/osd-block-052d2a21-4f6b-4d99-975a-9efd1ce0d07f /var/lib/ceph/osd/ceph-6/block
Running command: chown -R ceph:ceph /dev/dm-1
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
Running command: sudo systemctl enable ceph-volume@lvm-6-052d2a21-4f6b-4d99-975a-9efd1ce0d07f
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-6-052d2a21-4f6b-4d99-975a-9efd1ce0d07f.service → /lib/systemd/system/ceph-volume@.service.
Running command: sudo systemctl start ceph-osd@6
/var/log/ceph/ceph-osd.6.log
...
2018-01-02 12:57:59.021251 7f15f92a5e00  1 bluestore(/var/lib/ceph/osd/ceph-6) _open_db opened rocksdb path db options compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152
2018-01-02 12:57:59.025632 7f15f92a5e00  1 freelist init
2018-01-02 12:57:59.025789 7f15f92a5e00  1 bluestore(/var/lib/ceph/osd/ceph-6) _open_alloc opening allocation metadata
2018-01-02 12:57:59.025869 7f15f92a5e00  1 bluestore(/var/lib/ceph/osd/ceph-6) _open_alloc loaded 30715 M in 1 extents
2018-01-02 12:57:59.027883 7f15f92a5e00  0 _get_class not permitted to load kvs
2018-01-02 12:57:59.028238 7f15f92a5e00  0 _get_class not permitted to load lua
2018-01-02 12:57:59.032662 7f15f92a5e00  0 <cls> /build/ceph-12.2.2/src/cls/cephfs/cls_cephfs.cc:197: loading cephfs
2018-01-02 12:57:59.035300 7f15f92a5e00  0 <cls> /build/ceph-12.2.2/src/cls/hello/cls_hello.cc:296: loading cls_hello
2018-01-02 12:57:59.035344 7f15f92a5e00  0 _get_class not permitted to load sdk
2018-01-02 12:57:59.035563 7f15f92a5e00  0 osd.6 0 crush map has features 288232575208783872, adjusting msgr requires for clients
2018-01-02 12:57:59.035593 7f15f92a5e00  0 osd.6 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
2018-01-02 12:57:59.035613 7f15f92a5e00  0 osd.6 0 crush map has features 288232575208783872, adjusting msgr requires for osds
2018-01-02 12:57:59.035676 7f15f92a5e00  0 osd.6 0 load_pgs
2018-01-02 12:57:59.035697 7f15f92a5e00  0 osd.6 0 load_pgs opened 0 pgs
2018-01-02 12:57:59.035723 7f15f92a5e00  0 osd.6 0 using weightedpriority op queue with priority op cut off at 64.
2018-01-02 12:57:59.036995 7f15f92a5e00 -1 osd.6 0 log_to_monitors {default=true}
2018-01-02 12:57:59.044407 7f15f92a5e00 -1 osd.6 0 init authentication failed: (1) Operation not permitted
The workaround is either to run the following command manually and then restart the OSD service, or to prepare the OSD, run the command, and then activate it.
echo "{ \"cephx_secret\": \"`cat /var/lib/ceph/osd/ceph-6/osd_key`\" }" | \
  ceph --cluster ceph --name client.bootstrap-osd \
    --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
    -i - osd new `cat /var/lib/ceph/osd/ceph-6/fsid` 6
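The one-liner above is dense, so here is an unpacked sketch of what it does: build the JSON payload that `ceph ... -i - osd new` reads on stdin, then register the OSD id with its cephx key. The key and fsid values below are stand-ins copied from this report's output; on a real node they would be read from the files under /var/lib/ceph/osd/ceph-6/.

```shell
# Stand-in values; on a real node these come from
#   /var/lib/ceph/osd/ceph-6/osd_key and /var/lib/ceph/osd/ceph-6/fsid
OSD_ID=6
OSD_KEY="AQC+gUtaWqamDRAAVBWa4Z8xPJhRGX+l+JwcWw=="
OSD_FSID="052d2a21-4f6b-4d99-975a-9efd1ce0d07f"

# JSON payload that `ceph osd new` expects on stdin
PAYLOAD="{ \"cephx_secret\": \"${OSD_KEY}\" }"
echo "$PAYLOAD"

# Registration step (reference only -- requires a running cluster):
#   echo "$PAYLOAD" | ceph --cluster ceph --name client.bootstrap-osd \
#       --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
#       -i - osd new "$OSD_FSID" "$OSD_ID"
```

This is what creates the osd.6 auth entity on the monitors; without it, the daemon started by ceph-volume has a local keyring the cluster knows nothing about, hence the EPERM at init.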
Updated by Alfredo Deza over 6 years ago
The `--osd-id` option is only meant for reusing an existing OSD ID, not for claiming a brand-new one from scratch. From the help menu:
ceph-volume osd create --help
...
  --osd-id OSD_ID       Reuse an existing OSD id
I am not sure what other side effects you might run into by injecting the keys like that. The option is not meant to be used this way, so this usage is also untested.
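For context, the supported use of `--osd-id` is the OSD replacement flow, where the id is first freed with `ceph osd destroy` and then handed back to ceph-volume. A rough sketch, using the device and id from this report (the cluster commands are reference only, since they need a live cluster):

```shell
# Id and device taken from this report
OSD_ID=6

# 1) Mark the old OSD destroyed, keeping its id and CRUSH position
#    (this removes its cephx entity so the id can be reused):
#      ceph osd destroy ${OSD_ID} --yes-i-really-mean-it
# 2) Recreate the OSD on the replacement device, reusing the freed id:
#      ceph-volume lvm create --bluestore --data /dev/vdc --osd-id ${OSD_ID}
echo "replacing osd.${OSD_ID}"
```

In that flow the monitors already have a slot for the id, which is why activation authenticates cleanly; with a never-used id the monitor-side registration shown in the workaround above never happens.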