Bug #64208 (open)
test_cephadm.sh: Container version mismatch causes job to fail.
% Done:
0%
Source:
Tags:
test-failure
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Description
/a/yuriw-2024-01-26_01:08:12-rados-wip-yuri2-testing-2024-01-25-1327-reef-distro-default-smithi/7533495
Last teuthology log entries before the failure was reported:
...
2024-01-27T01:36:08.327 INFO:tasks.workunit.client.0.smithi002.stderr:+ sudo /usr/sbin/cephadm --image quay.ceph.io/ceph-ci/ceph:main bootstrap --mon-id a --mgr-id x --mon-ip 127.0.0.1 --fsid 00000000-0000-0000-0000-0000deadbeef --config tmp.test_cephadm.sh.FEP7he/tmp.54mEQ219IN --output-config tmp.test_cephadm.sh.FEP7he/tmp.E4dx2ii69Z --output-keyring tmp.test_cephadm.sh.FEP7he/tmp.GvIHQuTTLP --output-pub-ssh-key tmp.test_cephadm.sh.FEP7he/ceph.pub --allow-overwrite --skip-mon-network --skip-monitoring-stack
2024-01-27T01:36:08.491 INFO:tasks.workunit.client.0.smithi002.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2024-01-27T01:36:08.491 INFO:tasks.workunit.client.0.smithi002.stdout:Verifying podman|docker is present...
2024-01-27T01:36:08.523 INFO:tasks.workunit.client.0.smithi002.stdout:Verifying lvm2 is present...
2024-01-27T01:36:08.523 INFO:tasks.workunit.client.0.smithi002.stdout:Verifying time synchronization is in place...
2024-01-27T01:36:08.547 INFO:tasks.workunit.client.0.smithi002.stdout:Unit chronyd.service is enabled and running
2024-01-27T01:36:08.547 INFO:tasks.workunit.client.0.smithi002.stdout:Repeating the final host check...
2024-01-27T01:36:08.575 INFO:tasks.workunit.client.0.smithi002.stdout:podman (/bin/podman) version 4.6.1 is present
2024-01-27T01:36:08.575 INFO:tasks.workunit.client.0.smithi002.stdout:systemctl is present
2024-01-27T01:36:08.575 INFO:tasks.workunit.client.0.smithi002.stdout:lvcreate is present
2024-01-27T01:36:08.598 INFO:tasks.workunit.client.0.smithi002.stdout:Unit chronyd.service is enabled and running
2024-01-27T01:36:08.598 INFO:tasks.workunit.client.0.smithi002.stdout:Host looks OK
2024-01-27T01:36:08.598 INFO:tasks.workunit.client.0.smithi002.stdout:Cluster fsid: 00000000-0000-0000-0000-0000deadbeef
2024-01-27T01:36:08.599 INFO:tasks.workunit.client.0.smithi002.stdout:Verifying IP 127.0.0.1 port 3300 ...
2024-01-27T01:36:08.599 INFO:tasks.workunit.client.0.smithi002.stdout:Verifying IP 127.0.0.1 port 6789 ...
2024-01-27T01:36:08.600 INFO:tasks.workunit.client.0.smithi002.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2024-01-27T01:36:08.600 INFO:tasks.workunit.client.0.smithi002.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:main...
2024-01-27T01:36:09.691 INFO:tasks.workunit.client.0.smithi002.stdout:Ceph version: ceph version 19.0.0-918-g37d5d931 (37d5d931b0be2231b571c34fab61d106946b8944) squid (dev)
2024-01-27T01:36:09.692 INFO:tasks.workunit.client.0.smithi002.stderr:Error: Container release squid != cephadm release reef; please use matching version of cephadm (pass --allow-mismatched-release to continue anyway)
2024-01-27T01:36:09.692 INFO:tasks.workunit.client.0.smithi002.stdout:
2024-01-27T01:36:09.692 INFO:tasks.workunit.client.0.smithi002.stdout:
2024-01-27T01:36:09.692 INFO:tasks.workunit.client.0.smithi002.stdout:▸ ***************
2024-01-27T01:36:09.692 INFO:tasks.workunit.client.0.smithi002.stdout:▸ Cephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:▸ this behaviour you can pass the --cleanup-on-failure. To remove this broken cluster manually please run:
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:▸ > cephadm rm-cluster --force --fsid 00000000-0000-0000-0000-0000deadbeef
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:▸ in case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:▸ > cephadm rm-cluster --force --zap-osds --fsid <fsid>
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:▸ for more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:▸ ***************
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stdout:
2024-01-27T01:36:09.693 INFO:tasks.workunit.client.0.smithi002.stderr:ERROR: Container release squid != cephadm release reef; please use matching version of cephadm (pass --allow-mismatched-release to continue anyway)
...
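For context, the gate that trips here is cephadm's release-name check: the reef cephadm binary parses the release codename ("squid") out of the pulled main-branch container's `ceph version` string, sees it differs from its own release ("reef"), and aborts bootstrap unless `--allow-mismatched-release` is passed. A minimal shell sketch of that comparison follows; the function name, arguments, and output are illustrative only, not cephadm's actual code:

```shell
#!/bin/sh
# Hypothetical sketch of the release gate seen in the log above: compare the
# release codename reported by the container against the release the cephadm
# binary belongs to, and refuse to proceed on mismatch unless the caller
# explicitly allowed it.

check_release() {
    container_release=$1   # e.g. "squid", taken from "ceph version ... squid (dev)"
    cephadm_release=$2     # e.g. "reef", the release this cephadm ships with
    allow_mismatch=$3      # "yes" if --allow-mismatched-release was passed

    if [ "$container_release" != "$cephadm_release" ] && [ "$allow_mismatch" != "yes" ]; then
        echo "ERROR: Container release $container_release != cephadm release $cephadm_release" >&2
        return 1
    fi
    return 0
}

check_release squid reef no  || echo "bootstrap aborted"    # mismatch, not allowed
check_release squid reef yes && echo "bootstrap continues"  # mismatch, but overridden
```

In this job the mismatch is the bug itself: the reef test branch should have pulled a reef container rather than quay.ceph.io/ceph-ci/ceph:main, so the fix belongs in the image selection, not in overriding the check.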