Bug #22720

Adding an OSD with ceph-volume fails with an error from systemctl enable ceph-volume@.service

Added by xueyun lau about 6 years ago. Updated about 6 years ago.

Status: Closed
Priority: Normal
Assignee: -
Target version:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: multimds
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

When I used ceph-volume to add an OSD, prepare succeeded, but when I activated the OSD I got the error "Failed to issue method call: No such file or directory". I thought this meant something was missing in systemd, so I copied ceph-volume@.service to /usr/lib/systemd/system and tried again.
It still didn't work. Is this a bug, or do I have something wrong in my config?
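
(For anyone trying to reproduce: a minimal sanity check that systemd actually sees the copied template unit, assuming it was placed under /usr/lib/systemd/system, might look like the sketch below. The instance name is the one from the failing activate step.)

ls -l /usr/lib/systemd/system/ceph-volume@.service    # confirm the template unit is present
systemctl daemon-reload                               # make systemd re-read unit files
systemctl enable ceph-volume@lvm-7-057270ff-ca7e-4d1b-85b4-c51f07a24ff9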

[root@node-186 ceph]#
[root@node-186 ceph]# ceph-volume lvm prepare --bluestore --data /dev/sdq
Running command: sudo vgcreate --force --yes ceph-ef61518a-d633-4e58-b04d-5ce373726cdd /dev/sdq
stdout: Volume group "ceph-ef61518a-d633-4e58-b04d-5ce373726cdd" successfully created
Running command: sudo lvcreate --yes -l 100%FREE -n osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9 ceph-ef61518a-d633-4e58-b04d-5ce373726cdd
stdout: Logical volume "osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9" created.
Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-7
Running command: chown -R ceph:ceph /dev/dm-0
Running command: sudo ln -s /dev/ceph-ef61518a-d633-4e58-b04d-5ce373726cdd/osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9 /var/lib/ceph/osd/ceph-7/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-7/activate.monmap
stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-7/keyring --create-keyring --name osd.7 --add-key AQBwxF5aQ7TfGRAAYBWa2ZeU8rMm+i5scx72FQ==
stdout: creating /var/lib/ceph/osd/ceph-7/keyring
stdout: added entity osd.7 auth auth(auid = 18446744073709551615 key=AQBwxF5aQ7TfGRAAYBWa2ZeU8rMm+i5scx72FQ== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-7/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-7/
Running command: ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 7 --monmap /var/lib/ceph/osd/ceph-7/activate.monmap --key ******************************** --osd-data /var/lib/ceph/osd/ceph-7/ --osd-uuid 057270ff-ca7e-4d1b-85b4-c51f07a24ff9 --setuser ceph --setgroup ceph
stderr: 2018-01-17 11:35:15.556423 7f0283ffcd80 -1 asok(0x892806e1c0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-osd.7.asok': (13) Permission denied
stderr: 2018-01-17 11:35:15.557343 7f0283ffcd80 -1 bluestore(/var/lib/ceph/osd/ceph-7//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
stderr: 2018-01-17 11:35:15.557434 7f0283ffcd80 -1 bluestore(/var/lib/ceph/osd/ceph-7//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
2018-01-17 11:35:15.557554 7f0283ffcd80 -1 bluestore(/var/lib/ceph/osd/ceph-7//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
2018-01-17 11:35:15.557632 7f0283ffcd80 -1 bluestore(/var/lib/ceph/osd/ceph-7/) _read_fsid unparsable uuid
stderr: 2018-01-17 11:35:16.606865 7f0283ffcd80 -1 key AQBwxF5aQ7TfGRAAYBWa2ZeU8rMm+i5scx72FQ==
stderr: 2018-01-17 11:35:17.126317 7f0283ffcd80 -1 created object store /var/lib/ceph/osd/ceph-7/ for osd.7 fsid ef61518a-d633-4e58-b04d-5ce373726cdd
[root@node-186 ceph]#
[root@node-186 ceph]# ceph-volume lvm activate --bluestore 7 057270ff-ca7e-4d1b-85b4-c51f07a24ff9
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ef61518a-d633-4e58-b04d-5ce373726cdd/osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9 --path /var/lib/ceph/osd/ceph-7
Running command: sudo ln -snf /dev/ceph-ef61518a-d633-4e58-b04d-5ce373726cdd/osd-block-057270ff-ca7e-4d1b-85b4-c51f07a24ff9 /var/lib/ceph/osd/ceph-7/block
Running command: chown -R ceph:ceph /dev/dm-0
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
Running command: sudo systemctl enable ceph-volume@lvm-7-057270ff-ca7e-4d1b-85b4-c51f07a24ff9
stderr: Failed to issue method call: No such file or directory
--> RuntimeError: command returned non-zero exit status: 1
[root@node-186 ceph]#

[root@node-186 system]# ceph -v
ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
[root@node-186 system]#
[root@node-186 system]# systemctl enable ceph-volume@lvm-7-057270ff-ca7e-4d1b-85b4-c51f07a24ff9
Failed to issue method call: No such file or directory
[root@node-186 system]# cat ceph-volume\@.service
[Unit]
Description=Ceph Volume activation: %i
After=local-fs.target
Wants=local-fs.target

[Service]
Type=oneshot
KillMode=none
Environment=CEPH_VOLUME_TIMEOUT=10000
ExecStart=/bin/sh -c 'timeout $CEPH_VOLUME_TIMEOUT /usr/local/hstor/ceph_dir/bin/ceph-volume-systemd %i'
TimeoutSec=0

[Install]
WantedBy=multi-user.target
[root@node-186 system]#
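
For context, ceph-volume@.service is a systemd template unit: everything after the '@' in the instance name is substituted for %i. Enabling ceph-volume@lvm-7-057270ff-ca7e-4d1b-85b4-c51f07a24ff9 therefore makes the ExecStart line above effectively run the following (a sketch derived from the unit file shown; the /usr/local/hstor path is specific to this install):

/bin/sh -c 'timeout $CEPH_VOLUME_TIMEOUT /usr/local/hstor/ceph_dir/bin/ceph-volume-systemd lvm-7-057270ff-ca7e-4d1b-85b4-c51f07a24ff9'

ceph-volume-systemd then parses that lvm-<id>-<uuid> instance string to decide which OSD to activate.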

History

#1 Updated by Alfredo Deza about 6 years ago

What distro and distro version are you using?

#2 Updated by xueyun lau about 6 years ago

[root@node-186 ~]# uname -a
Linux node-186 4.12.4 #1 SMP Fri Dec 22 11:50:17 HKT 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@node-186 ~]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
[root@node-186 ~]#

#3 Updated by Alfredo Deza about 6 years ago

We don't test with CentOS 7.1, but we are confident this works on 7.4. While we investigate this for 7.1, is it possible for you to try 7.4 and see if it works?

#4 Updated by Alfredo Deza about 6 years ago

I haven't been able to replicate this problem with 7.1.

Looking at your systemd unit file for ceph-volume, this line looks a bit odd:

ExecStart=/bin/sh -c 'timeout $CEPH_VOLUME_TIMEOUT /usr/local/hstor/ceph_dir/bin/ceph-volume-systemd %i'

I'm unsure why this is pointing to /usr/local/hstor.

Have you installed Ceph using rpms from download.ceph.com or do you have some sort of custom build?

That one line in that unit file looks like this on a vanilla install of Ceph 12.2.2:

ExecStart=/bin/sh -c 'timeout $CEPH_VOLUME_TIMEOUT /usr/sbin/ceph-volume-systemd %i'

The file is located at /usr/lib/systemd/system/ceph-volume@.service.
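
(A quick way to tell whether that unit file came from a package or was copied in by hand, assuming an rpm-based system: rpm -qf reports the owning package, or "not owned by any package" for a hand-copied file.)

rpm -qf /usr/lib/systemd/system/ceph-volume@.service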

Below is the output from my attempt to replicate your issue; I couldn't reproduce it, everything works:

Prepare:

[root@node4 vagrant]# ceph-volume lvm prepare --bluestore --data /dev/sde
Running command: sudo vgcreate --force --yes ceph-82087b26-1f18-4a49-9395-ab1b3f15bde9 /dev/sde
 stdout: Physical volume "/dev/sde" successfully created.
 stdout: Volume group "ceph-82087b26-1f18-4a49-9395-ab1b3f15bde9" successfully created
Running command: sudo lvcreate --yes -l 100%FREE -n osd-block-9bae504e-7eb3-4d53-bec8-c8f502e2704f ceph-82087b26-1f18-4a49-9395-ab1b3f15bde9
 stdout: Logical volume "osd-block-9bae504e-7eb3-4d53-bec8-c8f502e2704f" created.
Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Running command: chown -R ceph:ceph /dev/dm-3
Running command: sudo ln -s /dev/ceph-82087b26-1f18-4a49-9395-ab1b3f15bde9/osd-block-9bae504e-7eb3-4d53-bec8-c8f502e2704f /var/lib/ceph/osd/ceph-1/block
Running command: sudo ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQDtuGBaO0q4ORAAZ+ASRaK1D7RkoaNog/Nr3A==
 stdout: creating /var/lib/ceph/osd/ceph-1/keyring
 stdout: added entity osd.1 auth auth(auid = 18446744073709551615 key=AQDtuGBaO0q4ORAAZ+ASRaK1D7RkoaNog/Nr3A== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Running command: sudo ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --key **************************************** --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 9bae504e-7eb3-4d53-bec8-c8f502e2704f --setuser ceph --setgroup ceph
 stderr: 2018-01-18 15:10:54.390679 7fd9db451d00 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
 stderr: 2018-01-18 15:10:54.391434 7fd9db451d00 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
 stderr: 2018-01-18 15:10:54.391979 7fd9db451d00 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: buffer::malformed_input: void bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding
2018-01-18 15:10:54.392141 7fd9db451d00 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
 stderr: 2018-01-18 15:10:55.421128 7fd9db451d00 -1 key AQDtuGBaO0q4ORAAZ+ASRaK1D7RkoaNog/Nr3A==

Activate:

[root@node4 vagrant]# ls /var/lib/ceph/osd
ceph-0  ceph-1
[root@node4 vagrant]# ls /var/lib/ceph/osd/ceph-1
activate.monmap  block  bluefs  ceph_fsid  fsid  keyring  kv_backend  magic  mkfs_done  osd_key  ready  type  whoami
[root@node4 vagrant]# cat /var/lib/ceph/osd/ceph-1/fsid
9bae504e-7eb3-4d53-bec8-c8f502e2704f
[root@node4 vagrant]# ceph-volume lvm activate --bluestore 1 9bae504e-7eb3-4d53-bec8-c8f502e2704f
Running command: sudo ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-82087b26-1f18-4a49-9395-ab1b3f15bde9/osd-block-9bae504e-7eb3-4d53-bec8-c8f502e2704f --path /var/lib/ceph/osd/ceph-1
Running command: sudo ln -snf /dev/ceph-82087b26-1f18-4a49-9395-ab1b3f15bde9/osd-block-9bae504e-7eb3-4d53-bec8-c8f502e2704f /var/lib/ceph/osd/ceph-1/block
Running command: chown -R ceph:ceph /dev/dm-3
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Running command: sudo systemctl enable ceph-volume@lvm-1-9bae504e-7eb3-4d53-bec8-c8f502e2704f
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-9bae504e-7eb3-4d53-bec8-c8f502e2704f.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: sudo systemctl start ceph-osd@1

Verifying:

[root@node4 vagrant]# ps aux | grep ceph-osd
ceph      2612  0.5  5.6 789704 26480 ?        Ssl  15:11   0:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
root      2689  0.0  0.2 112660   972 pts/0    R+   15:12   0:00 grep --color=auto ceph-osd
[root@node4 vagrant]# cat /etc/os-release
NAME="CentOS Linux" 
VERSION="7 (Core)" 
ID="centos" 
ID_LIKE="rhel fedora" 
VERSION_ID="7" 
PRETTY_NAME="CentOS Linux 7 (Core)" 
ANSI_COLOR="0;31" 
CPE_NAME="cpe:/o:centos:centos:7" 
HOME_URL="https://www.centos.org/" 
BUG_REPORT_URL="https://bugs.centos.org/" 

CENTOS_MANTISBT_PROJECT="CentOS-7" 
CENTOS_MANTISBT_PROJECT_VERSION="7" 
REDHAT_SUPPORT_PRODUCT="centos" 
REDHAT_SUPPORT_PRODUCT_VERSION="7" 

[root@node4 vagrant]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
[root@node4 vagrant]# ceph --version
ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)

#5 Updated by Alfredo Deza about 6 years ago

  • Status changed from New to Can't reproduce

#6 Updated by xueyun lau about 6 years ago

Alfredo Deza wrote:

I haven't been able to replicate this problem with 7.1.

Looking at your systemd unit file for ceph-volume, this line looks a bit odd:

[...]

I'm unsure why this is pointing to /usr/local/hstor/ceph_dir.

Have you installed Ceph using rpms from download.ceph.com or do you have some sort of custom build?

Yes, I have installed Ceph at /usr/local/hstor/ceph_dir.

That one line in that unit file looks like this on a vanilla install of Ceph 12.2.2:

[...]

The file is located at /usr/lib/systemd/system/ceph-volume@.service.

Below is the output from my attempt to replicate your issue; I couldn't reproduce it, everything works:

Prepare:
[...]

Activate:
[...]

Verifying:
[...]

Yes, I can also run the OSD daemon with "systemctl start ceph-osd@OSD_ID".
Maybe I have something wrong in my systemd unit config. Does the ceph-volume unit have another config file besides ceph-volume@.service?

Also, am I right that "systemctl enable ceph-volume" is what starts the OSD at power-on?

#7 Updated by Alfredo Deza about 6 years ago

  • Status changed from Can't reproduce to Closed

If you have installed Ceph in a different directory, then you must put the systemd files in places that systemd will recognize. I am not 100% sure what all those paths are, since we rely on the packaged Ceph binaries (vs. custom builds), and those packages place the files in the right spot so everything just works.

Closing this, as it is just due to the custom location of your files. Even with those units in place, I'm not sure your custom installation will work without problems.
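
To answer the power-on question from #6: yes, systemctl enable ceph-volume@lvm-<id>-<uuid> is what re-activates the OSD at boot; on a working install it just creates the multi-user.target.wants symlink shown in the transcript in #4. To make "places that systemd will recognize" concrete, a minimal sketch for a custom-prefix install might look like the following (paths taken from the unit file in this report; as said above, even this may not be enough):

install -m 644 ceph-volume@.service /etc/systemd/system/ceph-volume@.service   # local units belong under /etc/systemd/system
systemctl daemon-reload                                                        # required after adding or editing unit files
systemctl enable ceph-volume@lvm-7-057270ff-ca7e-4d1b-85b4-c51f07a24ff9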
