Ceph : Issues
https://tracker.ceph.com/
2021-12-08T11:02:11Z
ceph-volume - Bug #53524 (New): CEPHADM_APPLY_SPEC_FAIL is very verbose
https://tracker.ceph.com/issues/53524
2021-12-08T11:02:11Z
Sebastian Wagner
<pre>
root@service-01-08020:~# ceph health detail
HEALTH_WARN Failed to apply 1 service(s): osd.hybrid; OSD count 0 < osd_pool_default_size 3
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.hybrid
osd.hybrid: cephadm exited with an error code: 1, stderr:Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr [--auto] [--no-auto] [--bluestore] [--filestore]
/usr/bin/docker: stderr [--report] [--yes]
/usr/bin/docker: stderr [--format {json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr [--crush-device-class CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr [--no-systemd]
/usr/bin/docker: stderr [--osds-per-device OSDS_PER_DEVICE]
/usr/bin/docker: stderr [--data-slots DATA_SLOTS]
/usr/bin/docker: stderr [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
/usr/bin/docker: stderr [--block-db-size BLOCK_DB_SIZE]
/usr/bin/docker: stderr [--block-db-slots BLOCK_DB_SLOTS]
/usr/bin/docker: stderr [--block-wal-size BLOCK_WAL_SIZE]
/usr/bin/docker: stderr [--block-wal-slots BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr [--journal-size JOURNAL_SIZE]
/usr/bin/docker: stderr [--journal-slots JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr [--osd-ids [OSD_IDS [OSD_IDS ...]]]
/usr/bin/docker: stderr [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
Traceback (most recent call last):
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8331, in <module>
main()
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8319, in main
r = ctx.func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1735, in _infer_config
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1676, in _infer_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1763, in _infer_image
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1663, in _validate_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 5285, in command_ceph_volume
out, err, code = call_throws(ctx, c.run_cmd())
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1465, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
[WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3
</pre>
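<p>The warning embeds the entire ceph-volume stderr, full usage text and cephadm traceback included, in <code>ceph health detail</code>. A minimal sketch of the kind of truncation the health message could apply instead (purely illustrative, not cephadm's actual code; the complete output would still land in the cephadm log):</p>
<pre>
def summarize_stderr(stderr, max_lines=5):
    # Keep only the tail of a long ceph-volume stderr dump for the health
    # message and note how much was elided.
    lines = stderr.splitlines()
    if len(lines) <= max_lines:
        return stderr
    omitted = len(lines) - max_lines
    return '\n'.join(['... (%d lines omitted)' % omitted] + lines[-max_lines:])
</pre>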
Orchestrator - Bug #53422 (New): tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existi...
https://tracker.ceph.com/issues/53422
2021-11-29T08:47:58Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2021-11-26_13:52:15-orch:cephadm-wip-swagner2-testing-2021-11-26-1129-distro-default-smithi/6528237">https://pulpito.ceph.com/swagner-2021-11-26_13:52:15-orch:cephadm-wip-swagner2-testing-2021-11-26-1129-distro-default-smithi/6528237</a></p>
<pre>
teuthology.orchestra.run.smithi145:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.test
teuthology.orchestra.run.smithi145.stdout:No daemons reported
test_export_create_with_non_existing_fsname (tasks.cephfs.test_nfs.TestNFS) ... FAIL
======================================================================
FAIL: test_export_create_with_non_existing_fsname (tasks.cephfs.test_nfs.TestNFS)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_ceph-c_c9fd0972675c568f5d517be49820a12e77fab497/qa/tasks/cephfs/test_nfs.py", line 412, in test_export_create_with_non_existing_fsname
self._test_create_cluster()
File "/home/teuthworker/src/git.ceph.com_ceph-c_c9fd0972675c568f5d517be49820a12e77fab497/qa/tasks/cephfs/test_nfs.py", line 125, in _test_create_cluster
self._check_nfs_cluster_status('running', 'NFS Ganesha cluster deployment failed')
File "/home/teuthworker/src/git.ceph.com_ceph-c_c9fd0972675c568f5d517be49820a12e77fab497/qa/tasks/cephfs/test_nfs.py", line 89, in _check_nfs_cluster_status
self.fail(fail_msg)
AssertionError: NFS Ganesha cluster deployment failed
</pre>
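<p>The assertion comes from a polling helper: <code>_check_nfs_cluster_status</code> keeps querying <code>ceph orch ps --service_name=nfs.test</code> and fails the test once it runs out of attempts. A condensed, runnable sketch of that pattern (the stub and the retry parameters are assumptions, not the real values from <code>qa/tasks/cephfs/test_nfs.py</code>):</p>
<pre>
import time

def orch_ps(service_name):
    # Stub standing in for `ceph orch ps --service_name=...`; in the failing
    # run it kept answering "No daemons reported".
    return 'No daemons reported'

def check_nfs_cluster_status(expected, fail_msg, attempts=3, interval=0):
    # Poll until the service reports the expected state, else fail the test.
    for _ in range(attempts):
        if expected in orch_ps('nfs.test'):
            return
        time.sleep(interval)
    raise AssertionError(fail_msg)

check_nfs_cluster_status('running', 'NFS Ganesha cluster deployment failed')
</pre>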
<p>Looking at the log, we see the <strong>nfs.test</strong> cluster being created successfully multiple times, but it was never created right before the last test case:</p>
<pre>
2021-11-26T16:46:09: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:46:09: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.phdkqk
2021-11-26T16:46:09: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:46:09: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:46:09: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.phdkqk-rgw
2021-11-26T16:46:09: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.phdkqk on smithi145
2021-11-26T16:46:41: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:46:41: cephadm [INF] Remove service nfs.test
2021-11-26T16:46:41: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.phdkqk...
2021-11-26T16:46:41: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.phdkqk from smithi145
2021-11-26T16:46:45: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.phdkqk
2021-11-26T16:46:45: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.phdkqk-rgw
2021-11-26T16:46:45: cephadm [INF] Purge service nfs.test
2021-11-26T16:46:45: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:46:57: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:46:57: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.vgdnuw
2021-11-26T16:46:57: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:46:57: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:46:57: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.vgdnuw-rgw
2021-11-26T16:46:57: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.vgdnuw on smithi145
2021-11-26T16:47:51: cephadm [INF] Restart service nfs.test
2021-11-26T16:48:22: cephadm [INF] Restart service nfs.test
2021-11-26T16:49:02: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:49:02: cephadm [INF] Remove service nfs.test
2021-11-26T16:49:15: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.vgdnuw...
2021-11-26T16:49:15: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.vgdnuw from smithi145
2021-11-26T16:49:19: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.vgdnuw
2021-11-26T16:49:19: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.vgdnuw-rgw
2021-11-26T16:49:19: cephadm [INF] Purge service nfs.test
2021-11-26T16:49:19: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:49:47: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:49:47: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.heyhit
2021-11-26T16:49:47: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:49:47: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:49:47: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.heyhit-rgw
2021-11-26T16:49:47: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.heyhit on smithi145
2021-11-26T16:50:18: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:50:18: cephadm [INF] Remove service nfs.test
2021-11-26T16:50:18: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.heyhit...
2021-11-26T16:50:18: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.heyhit from smithi145
2021-11-26T16:50:22: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.heyhit
2021-11-26T16:50:22: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.heyhit-rgw
2021-11-26T16:50:22: cephadm [INF] Purge service nfs.test
2021-11-26T16:50:22: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:50:32: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:50:32: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.woegpi
2021-11-26T16:50:32: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:50:32: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:50:32: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.woegpi-rgw
2021-11-26T16:50:32: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.woegpi on smithi145
2021-11-26T16:51:19: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:51:19: cephadm [INF] Remove service nfs.test
2021-11-26T16:51:19: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.woegpi...
2021-11-26T16:51:19: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.woegpi from smithi145
2021-11-26T16:51:34: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.woegpi
2021-11-26T16:51:34: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.woegpi-rgw
2021-11-26T16:51:34: cephadm [INF] Purge service nfs.test
2021-11-26T16:51:34: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:51:56: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:51:56: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.uwhrnz
2021-11-26T16:51:56: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:51:56: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:51:56: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.uwhrnz-rgw
2021-11-26T16:51:56: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.uwhrnz on smithi145
2021-11-26T16:52:27: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:52:27: cephadm [INF] Remove service nfs.test
2021-11-26T16:52:27: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.uwhrnz...
2021-11-26T16:52:27: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.uwhrnz from smithi145
2021-11-26T16:52:32: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.uwhrnz
2021-11-26T16:52:32: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.uwhrnz-rgw
2021-11-26T16:52:32: cephadm [INF] Purge service nfs.test
2021-11-26T16:52:32: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:52:41: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:52:41: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.ccuxek
2021-11-26T16:52:41: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:52:41: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:52:41: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.ccuxek-rgw
2021-11-26T16:52:41: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.ccuxek on smithi145
2021-11-26T16:53:15: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:53:15: cephadm [INF] Remove service nfs.test
2021-11-26T16:53:15: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.ccuxek...
2021-11-26T16:53:15: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.ccuxek from smithi145
2021-11-26T16:53:19: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.ccuxek
2021-11-26T16:53:19: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.ccuxek-rgw
2021-11-26T16:53:19: cephadm [INF] Purge service nfs.test
2021-11-26T16:53:19: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:53:28: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:53:28: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.dduovn
2021-11-26T16:53:28: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:53:28: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:53:28: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.dduovn-rgw
2021-11-26T16:53:28: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.dduovn on smithi145
2021-11-26T16:54:10: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:54:10: cephadm [INF] Remove service nfs.test
2021-11-26T16:54:10: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.dduovn...
2021-11-26T16:54:10: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.dduovn from smithi145
2021-11-26T16:54:14: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.dduovn
2021-11-26T16:54:14: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.dduovn-rgw
2021-11-26T16:54:14: cephadm [INF] Purge service nfs.test
2021-11-26T16:54:14: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:54:23: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.dduovn...
2021-11-26T16:54:23: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.dduovn from smithi145
2021-11-26T16:54:24: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:54:25: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.mcpayf
2021-11-26T16:54:25: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:54:25: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:54:25: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.mcpayf-rgw
2021-11-26T16:54:25: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.mcpayf on smithi145
2021-11-26T16:55:00: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:55:00: cephadm [INF] Remove service nfs.test
2021-11-26T16:55:00: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.mcpayf...
2021-11-26T16:55:00: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.mcpayf from smithi145
2021-11-26T16:55:04: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.mcpayf
2021-11-26T16:55:04: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.mcpayf-rgw
2021-11-26T16:55:04: cephadm [INF] Purge service nfs.test
2021-11-26T16:55:04: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:55:17: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.mcpayf...
2021-11-26T16:55:17: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.mcpayf from smithi145
</pre>
Orchestrator - Bug #51366 (New): cephadm: Super hard to use loopback devices for OSDs
https://tracker.ceph.com/issues/51366
2021-06-25T10:48:09Z
Sebastian Wagner
<a name="Bootstrap-the-clusterh3-losetup"></a>
<h3 >Bootstrap the cluster: losetup<a href="#Bootstrap-the-clusterh3-losetup" class="wiki-anchor">¶</a></h3>
<pre>
# fallocate -l 6G disk1.img
# fallocate -l 6G disk2.img
# fallocate -l 6G disk3.img
# sudo losetup $(sudo losetup -f) disk1.img
# sudo losetup $(sudo losetup -f) disk2.img
# sudo losetup $(sudo losetup -f) disk3.img
# sudo wipefs -a /dev/loop0
# sudo wipefs -a /dev/loop1
# sudo wipefs -a /dev/loop2
# lsblk
</pre>
<a name="start-ceph-volume-container"></a>
<h3 >start ceph-volume container<a href="#start-ceph-volume-container" class="wiki-anchor">¶</a></h3>
<p>Run</p>
<pre>
sudo ./cephadm ceph-volume lvm create --data /dev/loop0
</pre>
<p>This won't work. Instead, copy the podman command and:</p>
<ul>
<li>Remove <strong>--entrypoint /usr/sbin/ceph-volume</strong></li>
<li>Add <strong>-it</strong></li>
<li>Add <strong>-v /etc/ceph:/etc/ceph</strong></li>
</ul>
<p>Then execute the podman command.</p>
<a name="patch-ceph-volume"></a>
<h3 >patch ceph-volume<a href="#patch-ceph-volume" class="wiki-anchor">¶</a></h3>
<p>In file <strong>/usr/lib/python3.6/site-packages/ceph_volume/util/disk.py</strong>, add <strong>loop</strong>:</p>
<pre>
[root@sebastians-laptop /]# grep -C 5 loop /usr/lib/python3.6/site-packages/ceph_volume/util/disk.py
if not os.path.exists(dev):
return False
# use lsblk first, fall back to using stat
TYPE = lsblk(dev).get('TYPE')
if TYPE:
return TYPE in ['disk', 'mpath', 'loop']
</pre>
<a name="ceph-volume-lvm-create"></a>
<h3 >ceph-volume lvm create<a href="#ceph-volume-lvm-create" class="wiki-anchor">¶</a></h3>
<pre>
# mkdir -p /var/lib/ceph/bootstrap-osd/
# ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring
# CEPH_VOLUME_DEBUG=true ceph-volume lvm create --data /dev/loop0 --no-systemd
</pre>
<a name="activate-doesnt-work"></a>
<h3 >activate doesn't work:<a href="#activate-doesnt-work" class="wiki-anchor">¶</a></h3>
<pre>
[root@sebastians-laptop /]# ceph cephadm osd activate sebastians-laptop
Created osd(s) 0 on host 'sebastians-laptop'
</pre>
<p>Instead, you need to follow the manual steps: <a class="external" href="https://tracker.ceph.com/issues/46691#note-1">https://tracker.ceph.com/issues/46691#note-1</a></p>
Orchestrator - Bug #51361 (New): KillMode=none is deprecated
https://tracker.ceph.com/issues/51361
2021-06-25T09:05:39Z
Sebastian Wagner
<p>We changed the systemd unit file's KillMode to none in <a class="external" href="https://github.com/ceph/ceph/pull/33162#issuecomment-584183316">https://github.com/ceph/ceph/pull/33162#issuecomment-584183316</a></p>
<p>Now we're getting a new warning:</p>
<pre>
Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
</pre>
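<p>The fix presumably means moving the unit template away from <code>KillMode=none</code>. As an illustration only (not a tested change), a systemd drop-in along these lines would switch the cephadm unit template to one of the modes the warning suggests:</p>
<pre>
# /etc/systemd/system/ceph-&lt;fsid&gt;@.service.d/killmode.conf (illustrative)
[Service]
KillMode=mixed
</pre>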
Orchestrator - Bug #50776 (New): cephadm: CRUSH uses bare host names
https://tracker.ceph.com/issues/50776
2021-05-12T15:01:00Z
Sebastian Wagner
<p><a class="external" href="https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/module.py#L1411">https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/module.py#L1411</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/blob/04d9194fd36551b0b45abc06816449c1c0ff248b/src/pybind/mgr/cephadm/services/osd.py#L325">https://github.com/ceph/ceph/blob/04d9194fd36551b0b45abc06816449c1c0ff248b/src/pybind/mgr/cephadm/services/osd.py#L325</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/blob/04d9194fd36551b0b45abc06816449c1c0ff248b/src/pybind/mgr/cephadm/services/osd.py#L50">https://github.com/ceph/ceph/blob/04d9194fd36551b0b45abc06816449c1c0ff248b/src/pybind/mgr/cephadm/services/osd.py#L50</a></p>
<pre>
def bare_name_to_hostname(bare):
...
</pre>
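<p>For illustration, a minimal sketch of what such a helper could do, assuming the orchestrator keeps an inventory of full host names (the function name comes from the snippet above; the body is hypothetical):</p>
<pre>
def bare_name_to_hostname(bare, known_hostnames):
    # Map a bare (short) name coming from CRUSH back to the host name known
    # to cephadm by comparing against the first label of each inventory entry.
    for hostname in known_hostnames:
        if hostname == bare or hostname.split('.', 1)[0] == bare:
            return hostname
    return bare  # no match: fall back to the bare name

assert bare_name_to_hostname('node1', ['node1.example.com']) == 'node1.example.com'
</pre>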
rbd - Bug #46875 (New): TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
https://tracker.ceph.com/issues/46875
2020-08-10T01:03:45Z
Sebastian Wagner
<pre>
[ RUN ] TestLibRBD.TestPendingAio
using new format!
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/librbd/test_librbd.cc:4539: Failure
Expected equality of these values:
1
rbd_aio_is_complete(comps[i])
Which is: 0
[ FAILED ] TestLibRBD.TestPendingAio (68 ms)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
sepia - Bug #46336 (New): https://download-cc-rdu01.fedoraproject.org is unreliable
https://tracker.ceph.com/issues/46336
2020-07-03T09:58:16Z
Sebastian Wagner
<pre>
2020-07-03T09:05:49.488 INFO:teuthology.orchestra.run.smithi058:> sudo yum -y install ceph-test
2020-07-03T09:05:49.626 INFO:teuthology.orchestra.run.smithi195.stdout:Transaction test succeeded.
2020-07-03T09:05:49.627 INFO:teuthology.orchestra.run.smithi195.stdout:Running transaction
2020-07-03T09:05:49.924 INFO:teuthology.orchestra.run.smithi058.stdout:Last metadata expiration check: 0:00:36 ago on Fri 03 Jul 2020 09:05:13 AM UTC.
2020-07-03T09:05:50.065 INFO:teuthology.orchestra.run.smithi195.stdout: Preparing : 1/1
2020-07-03T09:05:50.238 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : libxslt-1.1.32-3.el8.x86_64 1/6
2020-07-03T09:05:50.310 INFO:teuthology.orchestra.run.smithi058.stdout:Dependencies resolved.
2020-07-03T09:05:50.311 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.311 INFO:teuthology.orchestra.run.smithi058.stdout: Package Arch Version Repository Size
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout:Installing:
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout: ceph-test x86_64 2:16.0.0-3122.ge1d6abcdc6f.el8 ceph 45 M
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout:Installing dependencies:
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout: jq x86_64 1.5-12.el8 CentOS-AppStream 161 k
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout: oniguruma x86_64 6.8.2-1.el8 CentOS-AppStream 188 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: socat x86_64 1.7.3.2-6.el8 CentOS-AppStream 298 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: libxslt x86_64 1.1.32-3.el8 CentOS-Base 249 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: xmlstarlet x86_64 1.6.1-11.el8 epel 69 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:Transaction Summary
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:Install 6 Packages
2020-07-03T09:05:50.316 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Total download size: 46 M
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Installed size: 194 M
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Downloading Packages:
2020-07-03T09:05:50.348 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] jq-1.5-12.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/jq-1.5-12.el8.x86_64.rpm
2020-07-03T09:05:50.348 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] oniguruma-6.8.2-1.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/oniguruma-6.8.2-1.el8.x86_64.rpm
2020-07-03T09:05:50.394 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : xmlstarlet-1.6.1-11.el8.x86_64 2/6
2020-07-03T09:05:50.454 INFO:teuthology.orchestra.run.smithi058.stdout:(1/6): jq-1.5-12.el8.x86_64.rpm 1.1 MB/s | 161 kB 00:00
2020-07-03T09:05:50.463 INFO:teuthology.orchestra.run.smithi058.stdout:(2/6): oniguruma-6.8.2-1.el8.x86_64.rpm 1.2 MB/s | 188 kB 00:00
2020-07-03T09:05:50.487 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.491 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.492 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://mirror.linux.duke.edu/pub/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.502 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://packages.oit.ncsu.edu/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.508 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 404 for http://mirror.linux.duke.edu/pub/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.508 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 404 for http://packages.oit.ncsu.edu/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.539 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://distro.ibiblio.org/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.539 INFO:teuthology.orchestra.run.smithi058.stdout:[FAILED] socat-1.7.3.2-6.el8.x86_64.rpm: No more mirrors to try - All mirrors were already tried without success
2020-07-03T09:05:50.541 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.542 INFO:teuthology.orchestra.run.smithi058.stdout:The downloaded packages were saved in cache until the next successful transaction.
2020-07-03T09:05:50.542 INFO:teuthology.orchestra.run.smithi058.stdout:You can remove cached packages by executing 'dnf clean packages'.
2020-07-03T09:05:50.555 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : socat-1.7.3.2-6.el8.x86_64 3/6
2020-07-03T09:05:50.615 INFO:teuthology.orchestra.run.smithi058.stderr:Error: Error downloading packages:
2020-07-03T09:05:50.615 INFO:teuthology.orchestra.run.smithi058.stderr: Cannot download Packages/socat-1.7.3.2-6.el8.x86_64.rpm: All mirrors were tried
2
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-07-03_08:12:34-rados:cephadm-wip-swagner-testing-2020-07-02-1034-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2020-07-03_08:12:34-rados:cephadm-wip-swagner-testing-2020-07-02-1034-distro-basic-smithi/</a></p>
teuthology - Bug #45583 (New): teuthology-suite: "--subset" combined with "--filter" generates du...
https://tracker.ceph.com/issues/45583
2020-05-18T11:03:34Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/</a></p>
<p>scheduled via</p>
<pre>
teuthology-suite -k distro --priority 75 --suite rados --filter cephadm --subset 1135/9999 --email swagner@suse.com --ceph wip-swagner-testing-2020-05-15-2348 --machine-type smithi
</pre>
<p>scheduled</p>
<ul>
<li><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066708">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066708</a> </li>
<li><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066741">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066741</a></li>
</ul>
<p>both having the description:</p>
<pre>
rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{rhel_8.0.yaml} fixed-2.yaml}
</pre>
Infrastructure - Bug #45453 (New): 'https://download.ceph.com/debian-octopus focal Release' does ...
https://tracker.ceph.com/issues/45453
2020-05-08T20:28:48Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/">http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/</a></p>
<pre>
2020-05-08T15:24:54.694 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/cephadm -v add-repo --release octopus
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:DEBUG:cephadm:Could not locate podman: podman not found
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo GPG key from https://download.ceph.com/keys/release.asc...
2020-05-08T15:24:54.964 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo file at /etc/apt/sources.list.d/ceph.list...
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ test_install_uninstall
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo apt update
2020-05-08T15:24:55.009 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.139 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
2020-05-08T15:24:55.270 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
2020-05-08T15:24:55.284 INFO:tasks.workunit.client.0.smithi204.stdout:Ign:3 https://download.ceph.com/debian-octopus focal InRelease
2020-05-08T15:24:55.320 INFO:tasks.workunit.client.0.smithi204.stdout:Err:4 https://download.ceph.com/debian-octopus focal Release
2020-05-08T15:24:55.321 INFO:tasks.workunit.client.0.smithi204.stdout: 404 Not Found [IP: 158.69.68.124 443]
2020-05-08T15:24:55.351 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
2020-05-08T15:24:55.434 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
2020-05-08T15:24:56.423 INFO:tasks.workunit.client.0.smithi204.stdout:Reading package lists...
2020-05-08T15:24:56.442 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:E: The repository 'https://download.ceph.com/debian-octopus focal Release' does not have a Release file.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.444 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo yum -y install cephadm
2020-05-08T15:24:56.452 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: yum: command not found
2020-05-08T15:24:56.453 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo dnf -y install cephadm
2020-05-08T15:24:56.459 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: dnf: command not found
2020-05-08T15:24:56.460 DEBUG:teuthology.orchestra.run:got remote process result: 1
</pre>
<p>Are we going to get an Ubuntu repo for focal, or should I disable this test for it?</p>
teuthology - Bug #45442 (New): ubuntu 20.04: Hang on: "The following packages will be REMOVED:"
https://tracker.ceph.com/issues/45442
2020-05-08T07:43:16Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/5030975/">http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/5030975/</a></p>
<pre>
2020-05-07T17:31:41.061 INFO:teuthology.orchestra.run.smithi086:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2020-05-07T17:31:41.179 INFO:teuthology.orchestra.run.smithi086.stdout:Reading package lists...
2020-05-07T17:31:41.315 INFO:teuthology.orchestra.run.smithi086.stdout:Building dependency tree...
2020-05-07T17:31:41.315 INFO:teuthology.orchestra.run.smithi086.stdout:Reading state information...
2020-05-07T17:31:41.442 INFO:teuthology.orchestra.run.smithi086.stdout:The following packages were automatically installed and are no longer required:
2020-05-07T17:31:41.443 INFO:teuthology.orchestra.run.smithi086.stdout: ceph-mon ceph-osd libboost-iostreams1.71.0
2020-05-07T17:31:41.443 INFO:teuthology.orchestra.run.smithi086.stdout:Use 'sudo apt autoremove' to remove them.
2020-05-07T17:31:41.455 INFO:teuthology.orchestra.run.smithi086.stdout:The following packages will be REMOVED:
2020-05-07T17:31:41.456 INFO:teuthology.orchestra.run.smithi086.stdout: ceph*
2020-05-08T05:03:17.376 DEBUG:teuthology.exit:Got signal 15; running 2 handlers...
2020-05-08T05:03:17.396 DEBUG:teuthology.task.console_log:Killing console logger for smithi086
</pre>
<p>It looks as if <code>-y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"</code> is still not enough.</p>
teuthology - Bug #44181 (New): Error in syslog: task.internal.syslog: random "*BUG*" in log message
https://tracker.ceph.com/issues/44181
2020-02-18T10:33:27Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502">http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502</a></p>
<p>This job failure was caused by</p>
<p><a class="external" href="https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144">https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144</a></p>
<pre>
2020-02-18T06:48:18.388 ERROR:teuthology.task.internal.syslog:Error in syslog on ubuntu@smithi060.front.sepia.ceph.com: /home/ubuntu/cephtest/archive/syslog/misc.log:2020-02-18T06:42:25.361371+00:00 smithi060 bash[10468]: audit 2020-02-18T06:42:24.442267+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14124 172.21.15.60:0/1' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/dashboard/key","val":"-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCnskmhDB10Jk6M\nXNpzP+7hOWVV7TYIeAGSapYoNcgQcPrQU0STPGuyUORmnKO4taTVz8EBvPL4p6Mv\niZFEIhL2OL07UexgDqKaAD4lne/KIhYQJVtkqPu/TYYemxa6xyl/V4LGPrSGYx0C\n8huP7RsqEMNLNMr/wG34hG7LCdGtcWk8aylma8XrXgukEHMsJJIeb+ZZKw3AnZCT\nbO++2B+V5DPtE4LId4x3G1PrVumH/whd6ciTtcImFspCRQgwlaPnDLf68bFXF67G\nBWJZZRFoTCc73fu/rUW1vGYk7WFiVi52WVbpgYrPc/AhWNOaH6d0xooBoohRjAYh\n7GDdVa+LAgMBAAECggEAM1HEZpymht0SPLJNx+dQ22wNLvahCoZvNLeZrESJLT7m\nAsr4uXZMHw3SV/SnxecQwr4JetawJJhowCuBYTBsTR2gC39OrzbLXAWm/ywOLfWw\netBz36I3KJw45zTfB9nbQTUuuCyIYngCcNxWwvz0yzLGEUXeudXR0bP1k/01RbZo\nhGe8oQSJzSN4zmfQtx/rSGCXJr23HUjPs0mVHqml2bZL9UZcuKu5RuN8PWSo0aOM\nZwGYa/1pcoo1OsN3XtujY9tU0Ykrd3rteAARMIBFzrktaWWhSdaiOQS0fYAnyGrX\njI7cjlsbtJfTt741wF0hmCZIGS40+HWTmwCTkapWwQKBgQDQkX4ZHyvFD96W81rz\nXLIdSEfgv0+andTC2v5kvlk4cxIYgic2g1R59gekZioOpVIQG/eCwiSFW5ndqjzI\nSGMj2bflL8vXv5q4EX+TL3W5LWOnR8k7FVxJpPsJbbyX93qbpcU/oKNSt5VDUPaL\neooha6lDP+HEAdfWHqe1PxP+9wKBgQDN1UzVWU8ur2tdlql2BVrwfi/J32/ZUFQG\nCPvC9RMdavZjKITu2Rg5LA6kYOnJ9MvVTU59Mf2c+6kKWQTaRhqTqhfAJYtjvmoC\naTGm7HGPywEOeMphF+LAb23DNcCzQFhBVduOfL8MSkTjJOjmaxyYc2qs+ts87NMt\nqCENAaPrDQKBgHvuV/1ZdkqsOVl81QhShku8DWnQg96d9jSqqAr4yE8woQoLHH3Z\n37JwrO3U/xygw3hrBdGextCvM2hxpZhk2vQMhKcclYVnhunlC+dLhio4fESD9WC0\nOphP/hMGL9Ak76fZArHiI+ocyAat7zHF6JofPP6G0QIFDlle8cxS5PDVAoGBAMRB\nByQ5JkV2HqG6YFNWYdICDuClOQj0DVk/wYSulY4sCUacQLtXpUAF4OQcP20/CgaT\n0i2Ot6ixTwi9veG8i+SVflXHtnLhAETSNfRZZyHaRmSdCSGwW5Rt6jMBkn2W8U9C\nZLgj+yjlu270J1hjcn1tNp4+BUG+8M+Mig7TrI4VAoGAYYltCD4bc2bBAPWnF6nk\nqrx16kKg0kjNdhATkBWt76jpsJYRmyo5NALLaB5/k0dS7ftuTmGEZLSnNyl44O2B\n7QH6PaRoP0hX5LtLwSZhiJxd6tDrfwMFzpVGiJHeUNKGS/GKQzlvlxUJb2aOhNWu\nMgFlLWfPOMgxiRpwUhtg0Is=\n-----END PRIVATE KEY-----\n"}]: dispatch
</pre>
<p>Unfortunately, this log message contains "<code>BUG</code>" somewhere in the base64-encoded key.</p>
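<p>Since base64 blobs can contain arbitrary substrings, a stricter pattern would avoid this class of false positive. A sketch (illustrative only; the actual patterns live in <code>teuthology/task/internal/syslog.py</code>, linked above):</p>
<pre>
import re

# Match kernel-style markers such as "*BUG*" or "BUG: ..." instead of the
# bare substring "BUG", so base64 data in audit lines is not flagged.
bug_pattern = re.compile(r'\*BUG\*|\bBUG:\s')

assert bug_pattern.search('kernel: BUG: unable to handle page fault')
assert bug_pattern.search('"val":"MIIBUGxyzKeyBlob=="') is None
</pre>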
Messengers - Bug #44012 (New): shaman build error: (bionic+crimson): infiniband: error: aggregate...
https://tracker.ceph.com/issues/44012
2020-02-06T12:23:54Z
Sebastian Wagner
<p><a class="external" href="https://shaman.ceph.com/builds/ceph/wip-swagner-testing/3f9622d20ae8c91019b8fb97e46196113164006a/crimson/188435/">https://shaman.ceph.com/builds/ceph/wip-swagner-testing/3f9622d20ae8c91019b8fb97e46196113164006a/crimson/188435/</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/36282//consoleFull">https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/36282//consoleFull</a></p>
<pre>
Run Build Command:"/usr/bin/make" "cmTC_1399f/fast"
make[2]: Entering directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
/usr/bin/make -f CMakeFiles/cmTC_1399f.dir/build.make CMakeFiles/cmTC_1399f.dir/build
make[3]: Entering directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_1399f.dir/src.cxx.o
/usr/bin/c++ -g -O2 -fdebug-prefix-map=/build/ceph-15.1.0-264-g3f9622d=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -DHAVE_IBV_EXP -fPIE -o CMakeFiles/cmTC_1399f.dir/src.cxx.o -c /build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx
/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx: In function 'int main()':
/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx:5:31: error: aggregate 'main()::ibv_exp_gid_attr gid_attr' has incomplete type and cannot be defined
5 | struct ibv_exp_gid_attr gid_attr;
| ^~~~~~~~
/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx:6:7: error: 'ibv_exp_query_gid_attr' was not declared in this scope; did you mean 'ibv_exp_gid_attr'?
6 | ibv_exp_query_gid_attr(ctxt, 1, 0, &gid_attr);
| ^~~~~~~~~~~~~~~~~~~~~~
| ibv_exp_gid_attr
CMakeFiles/cmTC_1399f.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_1399f.dir/src.cxx.o' failed
make[3]: *** [CMakeFiles/cmTC_1399f.dir/src.cxx.o] Error 1
make[3]: Leaving directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_1399f/fast' failed
make[2]: *** [cmTC_1399f/fast] Error 2
make[2]: Leaving directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
Source file was:
#include <infiniband/verbs.h>
int main() {
struct ibv_context* ctxt;
struct ibv_exp_gid_attr gid_attr;
ibv_exp_query_gid_attr(ctxt, 1, 0, &gid_attr);
return 0;
}
dh_auto_configure: cd obj-x86_64-linux-gnu && cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_EXPORT_NO_PACKAGE_REGISTRY=ON -DCMAKE_FIND_PACKAGE_NO_PACKAGE_REGISTRY=ON -DWITH_OCF=ON -DWITH_LTTNG=ON -DWITH_MGR_DASHBOARD_FRONTEND=OFF -DWITH_PYTHON3=3 -DWITH_CEPHFS_JAVA=ON -DWITH_CEPHFS_SHELL=ON -DWITH_SYSTEMD=ON -DCEPH_SYSTEMD_ENV_DIR=/etc/default -DCMAKE_INSTALL_LIBDIR=/usr/lib -DCMAKE_INSTALL_LIBEXECDIR=/usr/lib -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_SYSTEMD_SERVICEDIR=/lib/systemd/system -DBOOST_J=8 -DWITH_BOOST_CONTEXT=ON -DWITH_SEASTAR=ON -DWITH_STATIC_LIBSTDCXX=OFF returned exit code 1
debian/rules:47: recipe for target 'override_dh_auto_configure' failed
make[1]: *** [override_dh_auto_configure] Error 2
make[1]: Leaving directory '/build/ceph-15.1.0-264-g3f9622d'
debian/rules:44: recipe for target 'build' failed
make: *** [build] Error 2
</pre>
ceph-volume - Bug #43858 (New): ceph-volume: lvm zap requires `/dev/` prefix
https://tracker.ceph.com/issues/43858
2020-01-28T12:38:22Z
Sebastian Wagner
<p>This differs from the other lvm commands:</p>
<p>preparing:</p>
<pre>
root@ubuntu:~# losetup -f
/dev/loop12
root@ubuntu:~# LANG=C losetup /dev/loop12 disk-image
root@ubuntu:~# pvcreate /dev/loop12
Physical volume "/dev/loop12" successfully created.
root@ubuntu:~# vgcreate MyVg /dev/loop12
Volume group "MyVg" successfully created
root@ubuntu:~# lvcreate --size 5500M --name MyLV MyVg
Logical volume "MyLV" created.
root@ubuntu:~# ll /dev/MyVg/MyLV
lrwxrwxrwx 1 root root 7 Jan 28 11:38 /dev/MyVg/MyLV -> ../dm-0
root@ubuntu:~# vgs -o vg_tags MyVg
VG Tags
root@ubuntu:~# vgs -o lv_tags MyVg
LV Tags
</pre>
<pre>
# ceph-volume lvm zap MyVg/MyLV
 stderr: lsblk: MyVg/MyLV: not a block device
 stderr: blkid: error: MyVg/MyLV: No such file or directory
stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
[--osd-fsid OSD_FSID]
[DEVICES [DEVICES ...]]
ceph-volume lvm zap: error: Unable to proceed with non-existing device: MyVg/MyLV
</pre>
<pre>
# ceph-volume lvm zap /dev/MyVg/MyLV
--> Zapping: /dev/MyVg/MyLV
Running command: /bin/dd if=/dev/zero of=/dev/MyVg/MyLV bs=1M count=10
 stderr: 10+0 records in
 10+0 records out
 10485760 bytes (10 MB, 10 MiB) copied, 0.00413076 s, 2.5 GB/s
--> Zapping successful for: <LV: /dev/MyVg/MyLV>
</pre>
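<p>Since the other lvm subcommands accept the bare <code>vg/lv</code> notation, zap could normalize it to a device path before validating. A minimal sketch of such normalization (hypothetical, not ceph-volume's actual code):</p>
<pre>
def normalize_lv_arg(arg):
    # Turn VG/LV notation into the /dev/ path that zap currently insists on;
    # absolute paths are passed through untouched.
    if arg.startswith('/'):
        return arg
    if '/' in arg:  # looks like 'MyVg/MyLV'
        return '/dev/' + arg
    return arg

assert normalize_lv_arg('MyVg/MyLV') == '/dev/MyVg/MyLV'
assert normalize_lv_arg('/dev/loop12') == '/dev/loop12'
</pre>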
ceph-cm-ansible - Bug #43738 (New): cephadm: conflicts between attempted installs of libstoragemg...
https://tracker.ceph.com/issues/43738
2020-01-21T10:27:32Z
Sebastian Wagner
<pre>
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 420, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 263, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 290, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 314, in _handle_failure
raise AnsibleFailedError(failures)
failure_reason: '86 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz
conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
Error Summary
-------------
''}}Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)RepresenterError: (''cannot represent an object'', u''/'')Failure
object was: {''smithi161.front.sepia.ceph.com'': {''_ansible_no_log'': False, ''changed'':
False, u''results'': [], u''rc'': 1, u''invocation'': {u''module_args'': {u''install_weak_deps'':
True, u''autoremove'': False, u''lock_timeout'': 0, u''download_dir'': None, u''install_repoquery'':
True, u''enable_plugin'': [], u''update_cache'': False, u''disable_excludes'': None,
u''exclude'': [], u''installroot'': u''/'', u''allow_downgrade'': False, u''name'':
[u''@core'', u''@base'', u''dnf-utils'', u''git-all'', u''sysstat'', u''libedit'',
u''boost-thread'', u''xfsprogs'', u''gdisk'', u''parted'', u''libgcrypt'', u''fuse-libs'',
u''openssl'', u''libuuid'', u''attr'', u''ant'', u''lsof'', u''gettext'', u''bc'',
u''xfsdump'', u''blktrace'', u''usbredir'', u''podman'', u''podman-docker'', u''libev-devel'',
u''valgrind'', u''nfs-utils'', u''ncurses-devel'', u''gcc'', u''git'', u''python3-nose'',
u''python3-virtualenv'', u''genisoimage'', u''qemu-img'', u''qemu-kvm-core'', u''qemu-kvm-block-rbd'',
u''libacl-devel'', u''dbench'', u''autoconf''], u''download_only'': False, u''bugfix'':
False, u''list'': None, u''disable_gpg_check'': False, u''conf_file'': None, u''update_only'':
False, u''state'': u''present'', u''disablerepo'': [], u''releasever'': None, u''disable_plugin'':
[], u''enablerepo'': [], u''skip_broken'': False, u''security'': False, u''validate_certs'':
True}}, u''failures'': [], u''msg'': u''Unknown Error occured: Transaction check
error:
file /usr/lib/tmpfiles.d/libstoragemgmt.conf conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/doc/libstoragemgmt/NEWS conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmcli.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmd.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/simc_lsmplugin.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
</pre>
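<p>The tail of that dump is a secondary failure in teuthology itself: the failure-logging plugin calls <code>yaml.safe_dump()</code> on the Ansible result, and PyYAML's safe dumper refuses any object it has no representer for. A minimal reproduction of that error class (sketch, assuming PyYAML):</p>
<pre>
import yaml

class Opaque(object):
    pass

try:
    yaml.safe_dump({'result': Opaque()})
except yaml.representer.RepresenterError as e:
    print(e)  # ('cannot represent an object', ...)
</pre>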
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log</a></p>
<ul>
<li>os_type: rhel</li>
<li>os_version: '8.0'</li>
<li>description: rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml}</li>
</ul>
<p>As this is an Ansible error, I'm not sure whether this is a cephadm issue. Any clues?</p>
mgr - Bug #40016 (New): mgr/selftest: test_selftest_config_update fails in a vstart cluster
https://tracker.ceph.com/issues/40016
2019-05-23T12:43:50Z
Sebastian Wagner
<pre>
2019-05-23 14:37:59,692.692 INFO:__main__:Running ['./bin/ceph', 'log', 'Ended test tasks.mgr.test_module_selftest.TestModuleSelftest.test_selftest_config_update']
2019-05-23 14:38:02,798.798 INFO:__main__:Stopped test: test_selftest_config_update (tasks.mgr.test_module_selftest.TestModuleSelftest) in 39.056024s
2019-05-23 14:38:02,799.799 INFO:__main__:
2019-05-23 14:38:02,800.800 INFO:__main__:======================================================================
2019-05-23 14:38:02,800.800 INFO:__main__:FAIL: test_selftest_config_update (tasks.mgr.test_module_selftest.TestModuleSelftest)
2019-05-23 14:38:02,800.800 INFO:__main__:----------------------------------------------------------------------
2019-05-23 14:38:02,800.800 INFO:__main__:Traceback (most recent call last):
2019-05-23 14:38:02,801.801 INFO:__main__: File "/home/sebastian/Repos/ceph/qa/tasks/mgr/test_module_selftest.py", line 94, in test_selftest_config_update
2019-05-23 14:38:02,801.801 INFO:__main__: self.assertEqual(get_value(), "None")
2019-05-23 14:38:02,801.801 INFO:__main__:AssertionError: 'testvalue' != 'None'
2019-05-23 14:38:02,801.801 INFO:__main__:
2019-05-23 14:38:02,802.802 INFO:__main__:----------------------------------------------------------------------
2019-05-23 14:38:02,802.802 INFO:__main__:Ran 14 tests in 951.565s
2019-05-23 14:38:02,802.802 INFO:__main__:
2019-05-23 14:38:02,802.802 INFO:__main__:FAILED (failures=1)
2019-05-23 14:38:02,803.803 INFO:__main__:
2019-05-23 14:38:02,803.803 INFO:__main__:======================================================================
2019-05-23 14:38:02,803.803 INFO:__main__:FAIL: test_selftest_config_update (tasks.mgr.test_module_selftest.TestModuleSelftest)
2019-05-23 14:38:02,803.803 INFO:__main__:----------------------------------------------------------------------
2019-05-23 14:38:02,803.803 INFO:__main__:Traceback (most recent call last):
2019-05-23 14:38:02,804.804 INFO:__main__: File "/home/sebastian/Repos/ceph/qa/tasks/mgr/test_module_selftest.py", line 94, in test_selftest_config_update
2019-05-23 14:38:02,804.804 INFO:__main__: self.assertEqual(get_value(), "None")
2019-05-23 14:38:02,804.804 INFO:__main__:AssertionError: 'testvalue' != 'None'
</pre>
<p>Running with vstart_runner.</p>