Bug #49910
cephadm | Creating initial admin user... | Please specify the file containing the password/secret with "-i" option.
Status: Closed
Description
I got this issue when trying to bootstrap with cephadm. I'm not sure if this is the right place for this bug, so if not, please guide me to the proper place.
If any info is missing, let me know and I will add whatever is needed.
Thanks.
root@node0:~# uname -a
Linux node0 5.4.0-67-generic #75-Ubuntu SMP Fri Feb 19 18:03:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
root@node0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
root@node0:~# docker version
Client:
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.13.8
 Git commit:        afacb8b7f0
 Built:             Fri Dec 18 12:15:19 2020
 OS/Arch:           linux/amd64
 Experimental:      false
Server:
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.8
  Git commit:       afacb8b7f0
  Built:            Fri Dec 4 23:02:49 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu2.3
  GitCommit:
 runc:
  Version:
  spec:             1.0.1-dev
  GitCommit:
 docker-init:
  Version:          0.18.0
  GitCommit:
root@node0:~# apt install -y cephadm
root@node0:~# cephadm prepare-host
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit systemd-timesyncd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit systemd-timesyncd.service is enabled and running
Host looks OK
root@node0:~# cephadm bootstrap --mon-ip 10.0.10.200
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit systemd-timesyncd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit systemd-timesyncd.service is enabled and running
Host looks OK
Cluster fsid: 67ce4e86-896a-11eb-8d6a-9b178780b4b2
Verifying IP 10.0.10.200 port 3300 ...
Verifying IP 10.0.10.200 port 6789 ...
Mon IP 10.0.10.200 is in CIDR network 10.0.10.0/24
Pulling container image docker.io/ceph/ceph:v15...
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network...
Creating mgr...
Verifying port 9283 ...
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Wrote config to /etc/ceph/ceph.conf
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/10)...
mgr not available, waiting (2/10)...
mgr not available, waiting (3/10)...
mgr not available, waiting (4/10)...
mgr not available, waiting (5/10)...
mgr not available, waiting (6/10)...
mgr not available, waiting (7/10)...
mgr not available, waiting (8/10)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 5...
Mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to to /etc/ceph/ceph.pub
Adding key to root@localhost's authorized_keys...
Adding host node0...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling mgr prometheus module...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 14...
Mgr epoch 14 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Non-zero exit code 22 from /usr/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/bin/ceph -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=node0 -v /var/log/ceph/67ce4e86-896a-11eb-8d6a-9b178780b4b2:/var/log/ceph:z -v /tmp/ceph-tmphqjb_ju7:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpw5e0b8ol:/etc/ceph/ceph.conf:z docker.io/ceph/ceph:v15 dashboard ac-user-create admin q4zd9lkfhe administrator --force-password --pwd-update-required
/usr/bin/ceph:stderr Error EINVAL: Please specify the file containing the password/secret with "-i" option.
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 6111, in <module>
    r = args.func()
  File "/usr/sbin/cephadm", line 1399, in _default_image
    return func()
  File "/usr/sbin/cephadm", line 3238, in command_bootstrap
    cli(cmd)
  File "/usr/sbin/cephadm", line 3002, in cli
    return CephContainer(
  File "/usr/sbin/cephadm", line 2654, in run
    out, _, _ = call_throws(
  File "/usr/sbin/cephadm", line 1060, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/bin/ceph -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=node0 -v /var/log/ceph/67ce4e86-896a-11eb-8d6a-9b178780b4b2:/var/log/ceph:z -v /tmp/ceph-tmphqjb_ju7:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpw5e0b8ol:/etc/ceph/ceph.conf:z docker.io/ceph/ceph:v15 dashboard ac-user-create admin q4zd9lkfhe administrator --force-password --pwd-update-required
root@node0:~#
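For anyone who hits this on an already-bootstrapped cluster: the error says newer Ceph builds refuse to accept the dashboard password as a command-line argument and require it via a file passed with `-i`. A hedged sketch of the equivalent manual call, reusing the username and password from the log above (the guard keeps it a no-op on machines without a configured cluster):

```shell
# Sketch: create the initial dashboard admin user by hand, supplying the
# password via a file as newer ceph releases require ('-i' option).
printf '%s' 'q4zd9lkfhe' > /tmp/dashboard_pw   # password taken from the bootstrap log
chmod 600 /tmp/dashboard_pw
# Only attempt this against a real, configured cluster:
if command -v ceph >/dev/null 2>&1 && [ -e /etc/ceph/ceph.conf ]; then
    ceph dashboard ac-user-create admin -i /tmp/dashboard_pw administrator \
        --force-password --pwd-update-required
fi
rm -f /tmp/dashboard_pw   # don't leave the password lying around
```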
Updated by Austin Brogan about 3 years ago
A conversation with a few other users who were affected by this issue: https://www.reddit.com/r/ceph/comments/mi3asa/cephadm_on_ubuntu_2004
I also ran into this while trying to bootstrap a cluster with cephadm on Ubuntu 20.04.2 LTS, following both the Octopus and the Pacific directions at docs.ceph.com. I found that if I ran apt update after adding the Ceph repo, the gpg file was reported as unsupported. After deleting the gpg file, marking the Ceph repo as trusted=yes, and running apt update again, I continued with the ceph bootstrap. This time the bootstrap worked with no problem. I'm posting this here in case anyone needs a workaround and it works for them. (Beware: ignoring the gpg key like this is dangerous.)
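For reference, the trusted=yes workaround described above amounts to an apt source entry along these lines (the file path and the release/codename here are assumptions; adjust them for your setup, and note the security caveat above):

```
# /etc/apt/sources.list.d/ceph.list -- illustrative only; trusted=yes disables signature checks
deb [trusted=yes] https://download.ceph.com/debian-octopus/ focal main
```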
Updated by Austin Brogan about 3 years ago
As per /u/lynxeur's suggestion here https://www.reddit.com/r/ceph/comments/mi3asa/cephadm_on_ubuntu_2004/gufey25?utm_source=share&utm_medium=web2x&context=3 a better and safer way to do this is as follows:
cd /tmp
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
chmod +x cephadm
sudo ./cephadm add-repo --release pacific
sudo rm /etc/apt/trusted.gpg.d/ceph.release.gpg
wget https://download.ceph.com/keys/release.asc
sudo apt-key add release.asc
sudo apt update
sudo ./cephadm install
The key lines are these:
sudo rm /etc/apt/trusted.gpg.d/ceph.release.gpg
wget https://download.ceph.com/keys/release.asc
sudo apt-key add release.asc
sudo apt update
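A plausible explanation for why this works (my assumption, not confirmed in this ticket): files in /etc/apt/trusted.gpg.d with a .gpg extension must be binary OpenPGP keyrings, whereas release.asc is ASCII-armored, so a .gpg file that actually contains armored text is rejected by apt. The armored/binary difference can be demonstrated with throwaway data, no real keys involved:

```shell
# Demo: round-trip between ASCII-armored and binary form with gpg.
printf 'example payload' > /tmp/demo.bin
gpg --enarmor < /tmp/demo.bin > /tmp/demo.asc   # ASCII-armored, like release.asc
gpg --dearmor < /tmp/demo.asc > /tmp/demo.gpg   # binary, the form apt expects in trusted.gpg.d
cmp -s /tmp/demo.bin /tmp/demo.gpg && echo 'round-trip OK'
```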
Updated by Sebastian Wagner almost 3 years ago
- Status changed from New to Can't reproduce
This should be fixed by now. This breaking change was made due to a CVE. That unfortunately meant that older cephadm binaries couldn't bootstrap a more recent cluster.