Bug #22720

Added an OSD with ceph-volume; got an error from systemctl enable ceph-volume@.service

Added by xueyun lau over 6 years ago. Updated over 6 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
multimds
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When I used ceph-volume to add an OSD, the prepare step succeeded, but when I activated the OSD I got the error "Failed to issue method call: No such file or directory". I assumed something was missing on the systemd side, so I copied "ceph-volume@.service" to /usr/lib/systemd/system and tried again.
It still didn't work. Is this a bug, or is something wrong in my configuration?
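For context, ceph-volume builds the systemd instance name as lvm-&lt;osd-id&gt;-&lt;osd-uuid&gt;, and after copying a template unit into the unit directory, systemd normally has to re-read its unit files (systemctl daemon-reload) before an instance of it can be enabled. A minimal sketch of the usual sequence, using the OSD id and uuid from the log below (the source path of the unit file is assumed to be the current directory):

```shell
# Instance name as ceph-volume constructs it: lvm-<osd-id>-<osd-uuid>
OSD_ID=7
OSD_UUID=057270ff-ca7e-4d1b-85b4-c51f07a24ff9
INSTANCE="lvm-${OSD_ID}-${OSD_UUID}"
echo "ceph-volume@${INSTANCE}"

# Typical steps to install a missing template unit (run as root):
#   cp ceph-volume@.service /usr/lib/systemd/system/
#   systemctl daemon-reload          # make systemd pick up the new unit
#   systemctl enable "ceph-volume@${INSTANCE}"
```

Without the daemon-reload, older systemd versions can report exactly "Failed to issue method call: No such file or directory" when enabling an instance of a freshly copied template.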

[root@node-186 ceph]#
[root@node-186 ceph]# ceph-volume lvm prepare --bluestore --data /dev/sdq
Running command: sudo vgcreate --force --yes ceph-ef61518a-d633-4e58-b04d-5ce373726cdd /dev/sdq
stdout: Volume group "ceph-ef61518a-d633-4e58-b04d-5ce373726cdd" successfully created
Running command: sudo lvcreate --yes -l 100%FREE -n osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9 ceph-ef61518a-d633-4e58-b04d-5ce373726cdd
stdout: Logical volume "osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9" created.
Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-7
Running command: chown -R ceph:ceph /dev/dm-0
Running command: sudo ln -s /dev/ceph-ef61518a-d633-4e58-b04d-5ce373726cdd/osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9 /var/lib/ceph/osd/ceph-7/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-7/activate.monmap
stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-7/keyring --create-keyring --name osd.7 --add-key AQBwxF5aQ7TfGRAAYBWa2ZeU8rMm+i5scx72FQ==
stdout: creating /var/lib/ceph/osd/ceph-7/keyring
stdout: added entity osd.7 auth auth(auid = 18446744073709551615 key=AQBwxF5aQ7TfGRAAYBWa2ZeU8rMm+i5scx72FQ== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-7/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-7/
Running command: ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 7 --monmap /var/lib/ceph/osd/ceph-7/activate.monmap --key ******************************** --osd-data /var/lib/ceph/osd/ceph-7/ --osd-uuid 057270ff-ca7e-4d1b-85b4-c51f07a24ff9 --setuser ceph --setgroup ceph
stderr: 2018-01-17 11:35:15.556423 7f0283ffcd80 -1 asok(0x892806e1c0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-osd.7.asok': (13) Permission denied
stderr: 2018-01-17 11:35:15.557343 7f0283ffcd80 -1 bluestore(/var/lib/ceph/osd/ceph-7//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
stderr: 2018-01-17 11:35:15.557434 7f0283ffcd80 -1 bluestore(/var/lib/ceph/osd/ceph-7//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
2018-01-17 11:35:15.557554 7f0283ffcd80 -1 bluestore(/var/lib/ceph/osd/ceph-7//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
2018-01-17 11:35:15.557632 7f0283ffcd80 -1 bluestore(/var/lib/ceph/osd/ceph-7/) _read_fsid unparsable uuid
stderr: 2018-01-17 11:35:16.606865 7f0283ffcd80 -1 key AQBwxF5aQ7TfGRAAYBWa2ZeU8rMm+i5scx72FQ==
stderr: 2018-01-17 11:35:17.126317 7f0283ffcd80 -1 created object store /var/lib/ceph/osd/ceph-7/ for osd.7 fsid ef61518a-d633-4e58-b04d-5ce373726cdd
[root@node-186 ceph]#
[root@node-186 ceph]# ceph-volume lvm activate --bluestore 7 057270ff-ca7e-4d1b-85b4-c51f07a24ff9
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ef61518a-d633-4e58-b04d-5ce373726cdd/osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9 --path /var/lib/ceph/osd/ceph-7
Running command: sudo ln -snf /dev/ceph-ef61518a-d633-4e58-b04d-5ce373726cdd/osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9 /var/lib/ceph/osd/ceph-7/block
Running command: chown -R ceph:ceph /dev/dm-0
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
Running command: sudo systemctl enable ceph-volume@lvm-7-057270ff-ca7e-4d1b-85b4-c51f07a24ff9
stderr: Failed to issue method call: No such file or directory
-->  RuntimeError: command returned non-zero exit status: 1
[root@node-186 ceph]#

[root@node-186 system]# ceph -v
ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
[root@node-186 system]#
[root@node-186 system]# systemctl enable ceph-volume@lvm-7-057270ff-ca7e-4d1b-85b4-c51f07a24ff9
Failed to issue method call: No such file or directory
[root@node-186 system]# cat ceph-volume\@.service
[Unit]
Description=Ceph Volume activation: %i
After=local-fs.target
Wants=local-fs.target

[Service]
Type=oneshot
KillMode=none
Environment=CEPH_VOLUME_TIMEOUT=10000
ExecStart=/bin/sh -c 'timeout $CEPH_VOLUME_TIMEOUT /usr/local/hstor/ceph_dir/bin/ceph-volume-systemd %i'
TimeoutSec=0

[Install]
WantedBy=multi-user.target
[root@node-186 system]#
