Bug #59103

cephadm bootstrap fails on Ubuntu 22.04 for the quincy release; the pacific release works fine

Added by David Smith about 1 year ago. Updated 11 months ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
common
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
cephadm,ubuntu,bootstrap
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
upgrade/quincy-x
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

cephadm bootstrap fails with an error on Ubuntu 22.04 for the quincy release; if I instead install the pacific release of cephadm, everything works fine
on the same kind of droplet (Ubuntu 22.04, a fresh DigitalOcean droplet with apt upgrade applied).

cephadm was installed via the curl/wget method; I believe the version is 17.2.5.

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm

Not from the Ubuntu apt repository.
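
For context, a minimal sketch of the remaining steps of the standalone install (the typical follow-up to the curl download above; my exact invocations may have differed slightly):

chmod +x cephadm
sudo ./cephadm version
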
The bootstrap command issued:

sudo cephadm bootstrap --mon-ip $hostip

gives the following output:

Deploying ceph-exporter service with default placement...
Non-zero exit code 22 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v17 -e NODE_NAME=ceph-admin -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/418c8f6c-c608-11ed-8d16-ab7f51718fee:/var/log/ceph:z -v /tmp/ceph-tmpwuho16d2:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpokp7_grg:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch apply ceph-exporter
/usr/bin/ceph: stderr Error EINVAL: Usage:
/usr/bin/ceph: stderr   ceph orch apply -i <yaml spec> [--dry-run]
/usr/bin/ceph: stderr   ceph orch apply <service_type> [--placement=<placement_string>] [--unmanaged]
/usr/bin/ceph: stderr
Traceback (most recent call last):
  File "/usr/bin/cephadm", line 9653, in <module>
    main()
  File "/usr/bin/cephadm", line 9641, in main
    r = ctx.func(ctx)
  File "/usr/bin/cephadm", line 2205, in _default_image
    return func(ctx)
  File "/usr/bin/cephadm", line 5774, in command_bootstrap
    prepare_ssh(ctx, cli, wait_for_mgr_restart)
  File "/usr/bin/cephadm", line 5275, in prepare_ssh
    cli(['orch', 'apply', t])
  File "/usr/bin/cephadm", line 5708, in cli
    return CephContainer(
  File "/usr/bin/cephadm", line 4144, in run
    out, _, _ = call_throws(self.ctx, self.run_cmd(),
  File "/usr/bin/cephadm", line 1853, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v17 -e NODE_NAME=ceph-admin -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/418c8f6c-c608-11ed-8d16-ab7f51718fee:/var/log/ceph:z -v /tmp/ceph-tmpwuho16d2:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpokp7_grg:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch apply ceph-exporter

It seems the Docker container exited with code 22.
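
For reference, exit code 22 matches errno EINVAL, which is consistent with the "Error EINVAL: Usage:" line in the output above; a quick way to confirm the errno mapping:

python3 -c "import errno, os; print(errno.EINVAL, os.strerror(errno.EINVAL))"

which prints "22 Invalid argument".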

From the log file under /var/log/ceph/418c8f6c-c608-11ed-8d16-ab7f51718fee:


[2023-03-19 03:44:49,241][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2023-03-19 03:44:49,242][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/vdb -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2023-03-19 03:44:49,290][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2023-03-19 03:44:49,290][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o pv_name,vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
[2023-03-19 03:44:49,330][ceph_volume.process][INFO  ] Running command: /usr/sbin/blkid -c /dev/null -p /dev/vdb
[2023-03-19 03:44:49,333][ceph_volume.process][INFO  ] stdout /dev/vdb: BLOCK_SIZE="2048" SYSTEM_ID="LINUX" APPLICATION_ID="GENISOIMAGE ISO 9660/HFS FILESYSTEM CREATOR (C) 1993 E.YOUNGDALE (C) 1997-2006 J.PEARSON/J.SCHILLING (C) 2006-2007 CDRKIT TEAM" UUID="2023-03-19-02-16-43-00" LABEL="config-2" TYPE="iso9660" USAGE="filesystem" 
[2023-03-19 03:44:49,334][ceph_volume.util.disk][INFO  ] opening device /dev/vdb to check for BlueStore label
[2023-03-19 03:44:49,334][ceph_volume.util.disk][INFO  ] opening device /dev/vdb to check for BlueStore label
[2023-03-19 03:44:49,334][ceph_volume.process][INFO  ] Running command: /usr/sbin/udevadm info --query=property /dev/vdb
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-label/config-2 /dev/disk/by-uuid/2023-03-19-02-16-43-00 /dev/disk/by-path/virtio-pci-0000:00:07.0 /dev/disk/by-path/pci-0000:00:07.0
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/vdb
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout DEVPATH=/devices/pci0000:00/0000:00:07.0/virtio4/block/vdb
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout DISKSEQ=10
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_FS_APPLICATION_ID=GENISOIMAGE\x20ISO\x209660\x2fHFS\x20FILESYSTEM\x20CREATOR\x20\x28C\x29\x201993\x20E.YOUNGDALE\x20\x28C\x29\x201997-2006\x20J.PEARSON\x2fJ.SCHILLING\x20\x28C\x29\x202006-2007\x20CDRKIT\x20TEAM
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_FS_LABEL=config-2
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_FS_LABEL_ENC=config-2
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_FS_SYSTEM_ID=LINUX
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=iso9660
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_FS_USAGE=filesystem
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_FS_UUID=2023-03-19-02-16-43-00
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_FS_UUID_ENC=2023-03-19-02-16-43-00
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:07.0
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout ID_PATH_TAG=pci-0000_00_07_0
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout MAJOR=252
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout MINOR=16
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
[2023-03-19 03:44:49,339][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=1210866
[2023-03-19 03:44:49,370][ceph_volume.util.system][INFO  ] /dev/vdb was not found as mounted

Actions #1

Updated by David Smith about 1 year ago

Docker Engine was installed as per
https://docs.docker.com/engine/install/
and is running as root, not rootless.
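
For completeness, a minimal sketch of a Docker Engine install on a fresh droplet using the convenience script from that page (my exact steps may have differed):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable --now docker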

Actions #2

Updated by David Smith about 1 year ago

So if I run the command manually after the bootstrap has failed:

/usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v17 -e NODE_NAME=ceph-admin -e CEPH_USE_RANDOM_NONCE=1 -v /var/log/ceph/418c8f6c-c608-11ed-8d16-ab7f51718fee:/var/log/ceph:z -v /tmp/ceph-tmpwuho16d2:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpokp7_grg:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v17 orch apply ceph-exporter

It reports:

Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
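
Presumably the /tmp/ceph-tmp* config and keyring files mounted in that command are temporary files that no longer exist once bootstrap has exited, so the manual re-run has no usable ceph.conf. A hedged way to retry the same call against the partially bootstrapped cluster, assuming cephadm left its config under /var/lib/ceph/<fsid>, would be:

sudo cephadm shell --fsid 418c8f6c-c608-11ed-8d16-ab7f51718fee -- ceph orch apply ceph-exporter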

I tried Docker images from 17.1 through 17.2.5 by specifying them in the cephadm file; all of them hit the same issue.
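
For reference, the image can also be pinned on the bootstrap command line rather than by editing the cephadm file; a minimal sketch using cephadm's --image option (the tag here is just an example):

sudo cephadm --image quay.io/ceph/ceph:v17.2.5 bootstrap --mon-ip $hostip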

Actions #3

Updated by David Smith about 1 year ago

I also tried bootstrapping the quincy release with cephadm on Ubuntu 20.04 LTS; same error.

Actions #4

Updated by Ilya Dryomov 11 months ago

  • Target version deleted (v17.2.6)