Ceph : Issues
https://tracker.ceph.com/
https://tracker.ceph.com/favicon.ico
2022-01-28T12:24:42Z
Ceph
Redmine
Orchestrator - Documentation #54051 (New): cephadm: advise users how and when to change OSD specs
https://tracker.ceph.com/issues/54051
2022-01-28T12:24:42Z
Sebastian Wagner
<p>In general this works fine; there is just one caveat: we're not going to touch any already-deployed OSDs.</p>
<p>I'd recommend sticking to a particular OSD layout and only adjusting the placement, for example:<br />testing the deployment on one host first and then rolling it out to all the others.</p>
<p>If a user changes other properties of the OSD spec, they have to understand that existing OSDs are not redeployed. E.g. changing the encryption flag of an OSD spec doesn't magically encrypt any existing OSDs; only new OSDs will be encrypted.</p>
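<p>A minimal sketch of that workflow (the spec name, host name and device filters below are placeholders, not a recommendation): keep the OSD layout fixed and only widen the placement once the single-host test looks good.</p>
<pre>
# hypothetical spec; apply it against a single test host first
cat > osd_spec.yaml <<'EOF'
service_type: osd
service_id: example_layout
placement:
  hosts:
    - host-01            # test host only
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
ceph orch apply -i osd_spec.yaml

# later: keep data_devices/db_devices untouched and only change the
# placement (e.g. host_pattern: '*'), then re-apply the same file
</pre>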
Orchestrator - Feature #53562 (New): cephadm doesn't support osd crush_location_hook
https://tracker.ceph.com/issues/53562
2021-12-09T11:59:53Z
Sebastian Wagner
<p>crush_location_hook is a path to an executable that is run to update the current OSD's CRUSH location. It is invoked like so:</p>
<pre>
$crush_location_hook --cluster {cluster-name} --id {ID} --type {daemon-type}
</pre>
<p>and prints the current CRUSH location to stdout.</p>
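<p>A minimal sketch of such a hook, assuming any executable that prints key=value CRUSH location pairs to stdout works (path and location values are made up):</p>
<pre>
#!/bin/sh
# hypothetical /usr/local/bin/crush-location-hook
# invoked as: $0 --cluster {cluster-name} --id {ID} --type {daemon-type}
# prints the desired CRUSH location for this OSD to stdout
echo "host=$(hostname -s) rack=rack1 root=default"
</pre>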
<p>Workarounds:</p>
<ul>
<li>For a per-host location, we have <a class="external" href="https://docs.ceph.com/en/latest/cephadm/host-management/#setting-the-initial-crush-location-of-host">https://docs.ceph.com/en/latest/cephadm/host-management/#setting-the-initial-crush-location-of-host</a>, which should cover a lot of use cases (see the sketch after this list).</li>
<li>Build a new container image locally and add the crush_location_hook executable to it. Then set the config option to the file path within the container.</li>
</ul>
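<p>For the per-host workaround, a sketch of a host spec carrying an initial CRUSH location (hostname, address and location are placeholders):</p>
<pre>
cat > host.yaml <<'EOF'
service_type: host
hostname: node-00
addr: 192.168.0.10
location:
  rack: rack1
EOF
ceph orch apply -i host.yaml
</pre>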
ceph-volume - Bug #53524 (New): CEPHADM_APPLY_SPEC_FAIL is very verbose
https://tracker.ceph.com/issues/53524
2021-12-08T11:02:11Z
Sebastian Wagner
<pre>
root@service-01-08020:~# ceph health detail
HEALTH_WARN Failed to apply 1 service(s): osd.hybrid; OSD count 0 < osd_pool_default_size 3
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.hybrid
osd.hybrid: cephadm exited with an error code: 1, stderr:Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr [--auto] [--no-auto] [--bluestore] [--filestore]
/usr/bin/docker: stderr [--report] [--yes]
/usr/bin/docker: stderr [--format {json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr [--crush-device-class CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr [--no-systemd]
/usr/bin/docker: stderr [--osds-per-device OSDS_PER_DEVICE]
/usr/bin/docker: stderr [--data-slots DATA_SLOTS]
/usr/bin/docker: stderr [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
/usr/bin/docker: stderr [--block-db-size BLOCK_DB_SIZE]
/usr/bin/docker: stderr [--block-db-slots BLOCK_DB_SLOTS]
/usr/bin/docker: stderr [--block-wal-size BLOCK_WAL_SIZE]
/usr/bin/docker: stderr [--block-wal-slots BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr [--journal-size JOURNAL_SIZE]
/usr/bin/docker: stderr [--journal-slots JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr [--osd-ids [OSD_IDS [OSD_IDS ...]]]
/usr/bin/docker: stderr [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
Traceback (most recent call last):
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8331, in <module>
main()
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8319, in main
r = ctx.func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1735, in _infer_config
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1676, in _infer_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1763, in _infer_image
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1663, in _validate_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 5285, in command_ceph_volume
out, err, code = call_throws(ctx, c.run_cmd())
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1465, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
[WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3
</pre>
Orchestrator - Bug #53154 (New): t8y: cephadm: error: unrecognized arguments: --keep-logs
https://tracker.ceph.com/issues/53154
2021-11-04T09:38:54Z
Sebastian Wagner
<pre>
2021-11-03T13:15:09.452 DEBUG:teuthology.orchestra.run.smithi191:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f2abfd4e-3ca4-11ec-8c28-001a4aab830c --force --keep-logs
2021-11-03T13:15:09.584 INFO:teuthology.orchestra.run.smithi191.stderr:usage: cephadm [-h] [--image IMAGE] [--docker] [--data-dir DATA_DIR]
2021-11-03T13:15:09.584 INFO:teuthology.orchestra.run.smithi191.stderr: [--log-dir LOG_DIR] [--logrotate-dir LOGROTATE_DIR]
2021-11-03T13:15:09.585 INFO:teuthology.orchestra.run.smithi191.stderr: [--unit-dir UNIT_DIR] [--verbose] [--timeout TIMEOUT]
2021-11-03T13:15:09.585 INFO:teuthology.orchestra.run.smithi191.stderr: [--retry RETRY] [--env ENV] [--no-container-init]
2021-11-03T13:15:09.585 INFO:teuthology.orchestra.run.smithi191.stderr: {version,pull,inspect-image,ls,list-networks,adopt,rm-daemon,rm-cluster,run,shell,enter,ceph-volume,unit,logs,bootstrap,deplo
y,check-host,prepare-host,add-repo,rm-repo,install,registry-login,gather-facts}
2021-11-03T13:15:09.585 INFO:teuthology.orchestra.run.smithi191.stderr: ...
2021-11-03T13:15:09.585 INFO:teuthology.orchestra.run.smithi191.stderr:cephadm: error: unrecognized arguments: --keep-logs
2021-11-03T13:15:09.595 DEBUG:teuthology.orchestra.run:got remote process result: 2
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2021-11-03_11:47:26-orch:cephadm-wip-swagner-testing-2021-11-03-0958-distro-basic-smithi/6481219">https://pulpito.ceph.com/swagner-2021-11-03_11:47:26-orch:cephadm-wip-swagner-testing-2021-11-03-0958-distro-basic-smithi/6481219</a></p>
Orchestrator - Bug #51361 (New): KillMode=none is deprecated
https://tracker.ceph.com/issues/51361
2021-06-25T09:05:39Z
Sebastian Wagner
<p>We changed the systemd unit file KillMode to none in <a class="external" href="https://github.com/ceph/ceph/pull/33162#issuecomment-584183316">https://github.com/ceph/ceph/pull/33162#issuecomment-584183316</a></p>
<p>Now we're getting a new warning:</p>
<pre>
Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
</pre>
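<p>A possible direction, sketched here as a per-unit drop-in rather than a change to the shipped unit file (the unit name and the KillMode value are assumptions, not the agreed-upon fix):</p>
<pre>
# hypothetical drop-in for a single cephadm-managed daemon
mkdir -p /etc/systemd/system/ceph-<fsid>@osd.0.service.d
cat > /etc/systemd/system/ceph-<fsid>@osd.0.service.d/killmode.conf <<'EOF'
[Service]
KillMode=mixed
EOF
systemctl daemon-reload
</pre>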
Orchestrator - Bug #49287 (New): podman: setting cgroup config for procHooks process caused: Unit...
https://tracker.ceph.com/issues/49287
2021-02-13T00:54:57Z
Sebastian Wagner
<pre>
2021-02-12T16:27:55.195 INFO:teuthology.orchestra.run.smithi014.stderr:Non-zero exit code 127 from /bin/podman run --rm --ipc=host --net=host --entrypoint stat --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph:52fc503cf18cf3bb446b840ba00be073017b8373 -e NODE_NAME=smithi014 quay.ceph.io/ceph-ci/ceph:52fc503cf18cf3bb446b840ba0
0be073017b8373 -c %u %g /var/lib/ceph
2021-02-12T16:27:55.195 INFO:teuthology.orchestra.run.smithi014.stderr:stat: stderr Error: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:422: setting cgroup config for procHooks process caused: Unit libpod-056038e1126191fba41d8a037275136f2d7aeec9710b9ee
ff792c06d8544b983.scope not found.: OCI runtime error
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr:Traceback (most recent call last):
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: File "/home/ubuntu/cephtest/cephadm", line 7697, in <module>
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: main()
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: File "/home/ubuntu/cephtest/cephadm", line 7686, in main
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: r = ctx.func(ctx)
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: File "/home/ubuntu/cephtest/cephadm", line 1566, in _infer_fsid
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: return func(ctx)
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: File "/home/ubuntu/cephtest/cephadm", line 1603, in _infer_config
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: return func(ctx)
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: File "/home/ubuntu/cephtest/cephadm", line 1650, in _infer_image
2021-02-12T16:27:55.201 INFO:teuthology.orchestra.run.smithi014.stderr: return func(ctx)
2021-02-12T16:27:55.202 INFO:teuthology.orchestra.run.smithi014.stderr: File "/home/ubuntu/cephtest/cephadm", line 4128, in command_shell
2021-02-12T16:27:55.202 INFO:teuthology.orchestra.run.smithi014.stderr: make_log_dir(ctx, ctx.fsid)
2021-02-12T16:27:55.202 INFO:teuthology.orchestra.run.smithi014.stderr: File "/home/ubuntu/cephtest/cephadm", line 1752, in make_log_dir
2021-02-12T16:27:55.202 INFO:teuthology.orchestra.run.smithi014.stderr: uid, gid = extract_uid_gid(ctx)
2021-02-12T16:27:55.202 INFO:teuthology.orchestra.run.smithi014.stderr: File "/home/ubuntu/cephtest/cephadm", line 2428, in extract_uid_gid
2021-02-12T16:27:55.202 INFO:teuthology.orchestra.run.smithi014.stderr: raise RuntimeError('uid/gid not found')
2021-02-12T16:27:55.202 INFO:teuthology.orchestra.run.smithi014.stderr:RuntimeError: uid/gid not found
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-11_11:00:52-rados:cephadm-wip-swagner3-testing-2021-02-10-1322-distro-basic-smithi/5874630">https://pulpito.ceph.com/swagner-2021-02-11_11:00:52-rados:cephadm-wip-swagner3-testing-2021-02-10-1322-distro-basic-smithi/5874630</a></p>
Orchestrator - Bug #49233 (New): cephadm shell: TLS handshake timeout
https://tracker.ceph.com/issues/49233
2021-02-10T11:26:38Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-09_10:28:14-rados:cephadm-wip-swagner2-testing-2021-02-08-1109-pacific-distro-basic-smithi/5871391">https://pulpito.ceph.com/swagner-2021-02-09_10:28:14-rados:cephadm-wip-swagner2-testing-2021-02-08-1109-pacific-distro-basic-smithi/5871391</a></p>
<pre>
2021-02-10T06:43:35.452 INFO:teuthology.orchestra.run.smithi132.stderr:Non-zero exit code 125 from /usr/bin/docker run --rm --ipc=host --net=host --entrypoint stat -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph:282cc83b9d6c73ac8a35502bb969bc4e36afefcc -e NODE_NAME=smithi132 quay.ceph.io/ceph-ci/ceph:282cc83b9d6c73ac8a35502bb969b
c4e36afefcc -c %u %g /var/lib/ceph
2021-02-10T06:43:35.453 INFO:teuthology.orchestra.run.smithi132.stderr:stat: stderr Unable to find image 'quay.ceph.io/ceph-ci/ceph:282cc83b9d6c73ac8a35502bb969bc4e36afefcc' locally
2021-02-10T06:43:35.453 INFO:teuthology.orchestra.run.smithi132.stderr:stat: stderr /usr/bin/docker: Error response from daemon: Get https://quay.ceph.io/v2/ceph-ci/ceph/manifests/282cc83b9d6c73ac8a35502bb969bc4e36afefcc: net/http: TLS handshake timeout.
2021-02-10T06:43:35.453 INFO:teuthology.orchestra.run.smithi132.stderr:stat: stderr See '/usr/bin/docker run --help'.
2021-02-10T06:43:35.465 INFO:teuthology.orchestra.run.smithi132.stderr:Traceback (most recent call last):
2021-02-10T06:43:35.465 INFO:teuthology.orchestra.run.smithi132.stderr: File "/usr/sbin/cephadm", line 7639, in <module>
2021-02-10T06:43:35.465 INFO:teuthology.orchestra.run.smithi132.stderr: main()
2021-02-10T06:43:35.465 INFO:teuthology.orchestra.run.smithi132.stderr: File "/usr/sbin/cephadm", line 7628, in main
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: r = ctx.func(ctx)
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: File "/usr/sbin/cephadm", line 1616, in _infer_fsid
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: return func(ctx)
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: File "/usr/sbin/cephadm", line 1653, in _infer_config
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: return func(ctx)
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: File "/usr/sbin/cephadm", line 1700, in _infer_image
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: return func(ctx)
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: File "/usr/sbin/cephadm", line 4115, in command_shell
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: make_log_dir(ctx, ctx.fsid)
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: File "/usr/sbin/cephadm", line 1802, in make_log_dir
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: uid, gid = extract_uid_gid(ctx)
2021-02-10T06:43:35.466 INFO:teuthology.orchestra.run.smithi132.stderr: File "/usr/sbin/cephadm", line 2469, in extract_uid_gid
2021-02-10T06:43:35.467 INFO:teuthology.orchestra.run.smithi132.stderr: raise RuntimeError('uid/gid not found')
2021-02-10T06:43:35.467 INFO:teuthology.orchestra.run.smithi132.stderr:RuntimeError: uid/gid not found
2
</pre>
<p>It looks like we need to use the ignore-list form from</p>
<p><a class="external" href="https://github.com/ceph/ceph/blob/40d5c37930f9b7f883c3b6da57be481f1fe6fb6c/src/cephadm/cephadm#L3078-L3086">https://github.com/ceph/ceph/blob/40d5c37930f9b7f883c3b6da57be481f1fe6fb6c/src/cephadm/cephadm#L3078-L3086</a></p>
<p>also for command_shell().</p>
Dashboard - Bug #47231 (New): ERROR: setUpClass (tasks.mgr.dashboard.test_cephfs.CephfsTest)
https://tracker.ceph.com/issues/47231
2020-09-01T10:42:27Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-api/2144/">https://jenkins.ceph.com/job/ceph-api/2144/</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-api/3204/">https://jenkins.ceph.com/job/ceph-api/3204/</a></p>
<pre>
2020-09-01 09:18:35,239.239 INFO:__main__:----------------------------------------------------------------------
2020-09-01 09:18:35,239.239 INFO:__main__:Traceback (most recent call last):
2020-09-01 09:18:35,240.240 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/mgr/dashboard/helper.py", line 149, in setUpClass
2020-09-01 09:18:35,240.240 INFO:__main__: cls._assign_ports("dashboard", "ssl_server_port")
2020-09-01 09:18:35,240.240 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/mgr/mgr_test_case.py", line 218, in _assign_ports
2020-09-01 09:18:35,240.240 INFO:__main__: cls.wait_until_true(is_available, timeout=30)
2020-09-01 09:18:35,240.240 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/ceph_test_case.py", line 196, in wait_until_true
2020-09-01 09:18:35,240.240 INFO:__main__: raise TestTimeoutError("Timed out after {0}s".format(elapsed))
2020-09-01 09:18:35,240.240 INFO:__main__:tasks.ceph_test_case.TestTimeoutError: Timed out after 30s
2020-09-01 09:18:35,241.241 INFO:__main__:
2020-09-01 09:18:35,241.241 INFO:__main__:----------------------------------------------------------------------
2020-09-01 09:18:35,241.241 INFO:__main__:Ran 14 tests in 1278.060s
2020-09-01 09:18:35,241.241 INFO:__main__:
2020-09-01 09:18:35,241.241 INFO:__main__:
</pre>
rbd - Bug #46875 (New): TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
https://tracker.ceph.com/issues/46875
2020-08-10T01:03:45Z
Sebastian Wagner
<pre>
[ RUN ] TestLibRBD.TestPendingAio
using new format!
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/librbd/test_librbd.cc:4539: Failure
Expected equality of these values:
1
rbd_aio_is_complete(comps[i])
Which is: 0
[ FAILED ] TestLibRBD.TestPendingAio (68 ms)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
Dashboard - Bug #46848 (New): ERROR: test_perf_counters_list (tasks.mgr.dashboard.test_perf_count...
https://tracker.ceph.com/issues/46848
2020-08-06T15:58:27Z
Sebastian Wagner
<pre>
2020-08-06 12:13:06,380.380 INFO:__main__:----------------------------------------------------------------------
2020-08-06 12:13:06,381.381 INFO:__main__:Traceback (most recent call last):
2020-08-06 12:13:06,381.381 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/mgr/dashboard/helper.py", line 191, in setUp
2020-08-06 12:13:06,381.381 INFO:__main__: self.wait_for_health_clear(self.TIMEOUT_HEALTH_CLEAR)
2020-08-06 12:13:06,381.381 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/ceph_test_case.py", line 162, in wait_for_health_clear
2020-08-06 12:13:06,382.382 INFO:__main__: self.wait_until_true(is_clear, timeout)
2020-08-06 12:13:06,382.382 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/ceph_test_case.py", line 194, in wait_until_true
2020-08-06 12:13:06,383.383 INFO:__main__: raise RuntimeError("Timed out after {0}s".format(elapsed))
2020-08-06 12:13:06,384.384 INFO:__main__:RuntimeError: Timed out after 60s
2020-08-06 12:13:06,384.384 INFO:__main__:
2020-08-06 12:13:06,384.384 INFO:__main__:----------------------------------------------------------------------
2020-08-06 12:13:06,386.386 INFO:__main__:Ran 117 tests in 1744.663s
2020-08-06 12:13:06,386.386 INFO:__main__:
2020-08-06 12:13:06,387.387 INFO:__main__:
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-api/278/">https://jenkins.ceph.com/job/ceph-api/278/</a></p>
Dashboard - Bug #46797 (New): ERROR: test_pool_update_metadata (tasks.mgr.dashboard.test_pool.Poo...
https://tracker.ceph.com/issues/46797
2020-07-31T10:43:42Z
Sebastian Wagner
<pre>
2020-07-31 05:16:37,145.145 INFO:__main__:----------------------------------------------------------------------
2020-07-31 05:16:37,145.145 INFO:__main__:Traceback (most recent call last):
2020-07-31 05:16:37,146.146 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_pool.py", line 327, in test_pool_update_metadata
2020-07-31 05:16:37,146.146 INFO:__main__: with self.__yield_pool(pool_name):
2020-07-31 05:16:37,146.146 INFO:__main__: File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
2020-07-31 05:16:37,146.146 INFO:__main__: return next(self.gen)
2020-07-31 05:16:37,146.146 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_pool.py", line 58, in __yield_pool
2020-07-31 05:16:37,147.147 INFO:__main__: data = self._create_pool(name, data)
2020-07-31 05:16:37,147.147 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_pool.py", line 77, in _create_pool
2020-07-31 05:16:37,147.147 INFO:__main__: self._task_post('/api/pool/', data)
2020-07-31 05:16:37,147.147 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 337, in _task_post
2020-07-31 05:16:37,147.147 INFO:__main__: return cls._task_request('POST', url, data, timeout)
2020-07-31 05:16:37,148.148 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 317, in _task_request
2020-07-31 05:16:37,148.148 INFO:__main__: .format(task_name, task_metadata, _res))
2020-07-31 05:16:37,148.148 INFO:__main__:Exception: Waiting for task (pool/create, {'pool_name': 'pool_update_metadata'}) to finish timed out. {'executing_tasks': [{'name': 'pool/create', 'metadata': {'pool_name': 'pool_update_metadata'}, 'begin_time': '2020-07-31T05:14:56.624619Z', 'progress': 0}, {'name': 'progress/PG autoscaler decreasing pool 11 PGs from 32 to 8', 'metadata': {'pool': 11}, 'begin_time': '2020-07-31T05:12:39.386827Z', 'progress': 45}], 'finished_tasks': [{'name': 'pool/create', 'metadata': {'pool_name': 'pool_update_configuration'}, 'begin_time': '2020-07-31T05:14:35.870663Z', 'end_time': '2020-07-31T05:14:43.978899Z', 'duration': 8.108235836029053, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'pool_update_compression'}, 'begin_time': '2020-07-31T05:14:08.247703Z', 'end_time': '2020-07-31T05:14:18.308763Z', 'duration': 10.061060190200806, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'progress/PG autoscaler increasing pool 15 PGs from 8 to 32', 'metadata': {'pool': 15}, 'begin_time': '2020-07-31T05:12:40.733971Z', 'end_time': '2020-07-31T05:13:40.765727Z', 'duration': 60.031755685806274, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'dashboard_pool_quota2'}, 'begin_time': '2020-07-31T05:13:06.829904Z', 'end_time': '2020-07-31T05:13:11.590168Z', 'duration': 4.760263919830322, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'dashboard_pool_quota1'}, 'begin_time': '2020-07-31T05:13:02.254149Z', 'end_time': '2020-07-31T05:13:03.345344Z', 'duration': 1.0911946296691895, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'dashboard_pool3'}, 'begin_time': '2020-07-31T05:12:26.802483Z', 'end_time': '2020-07-31T05:12:42.674605Z', 'duration': 15.872122049331665, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'sadfs'}, 'begin_time': '2020-07-31T05:12:22.019784Z', 'end_time': '2020-07-31T05:12:22.030813Z', 'duration': 0.011028766632080078, 'progress': 0, 'success': False, 'ret_value': None, 'exception': {'detail': "[errno -2] specified rule dnf doesn't exist"}}, {'name': 'progress/Rebalancing after osd.0 marked in', 'metadata': {'osd': 0}, 'begin_time': '2020-07-31T05:09:13.529738Z', 'end_time': '2020-07-31T05:09:22.037584Z', 'duration': 8.507845878601074, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'progress/Rebalancing after osd.0 marked out', 'metadata': {'osd': 0}, 'begin_time': '2020-07-31T05:09:12.471868Z', 'end_time': '2020-07-31T05:09:13.529236Z', 'duration': 1.057368516921997, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'progress/apply_drivesgroups', 'metadata': {'origin': 'orchestrator'}, 'begin_time': '2020-07-31T05:08:51.012334Z', 'end_time': '2020-07-31T05:08:51.014928Z', 'duration': 0.0025937557220458984, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}]}
2020-07-31 05:16:37,148.148 INFO:__main__:
2020-07-31 05:16:37,149.149 INFO:__main__:----------------------------------------------------------------------
2020-07-31 05:16:37,149.149 INFO:__main__:Ran 138 tests in 2056.558s
2020-07-31 05:16:37,149.149 INFO:__main__:
2020-07-31 05:16:37,149.149 INFO:__main__:
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4785/">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4785/</a></p>
Dashboard - Bug #46686 (New): ERROR: setUpClass (tasks.mgr.dashboard.test_perf_counters.PerfCount...
https://tracker.ceph.com/issues/46686
2020-07-23T08:50:22Z
Sebastian Wagner
<pre>
2020-07-23 06:26:36,824.824 INFO:__main__:======================================================================
2020-07-23 06:26:36,824.824 INFO:__main__:ERROR: setUpClass (tasks.mgr.dashboard.test_perf_counters.PerfCountersControllerTest)
2020-07-23 06:26:36,824.824 INFO:__main__:----------------------------------------------------------------------
2020-07-23 06:26:36,824.824 INFO:__main__:Traceback (most recent call last):
2020-07-23 06:26:36,825.825 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 150, in setUpClass
2020-07-23 06:26:36,825.825 INFO:__main__: cls._load_module("dashboard")
2020-07-23 06:26:36,825.825 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/mgr_test_case.py", line 157, in _load_module
2020-07-23 06:26:36,826.826 INFO:__main__: cls.wait_until_true(has_restarted, timeout=30)
2020-07-23 06:26:36,826.826 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/ceph_test_case.py", line 194, in wait_until_true
2020-07-23 06:26:36,826.826 INFO:__main__: raise RuntimeError("Timed out after {0}s".format(elapsed))
2020-07-23 06:26:36,826.826 INFO:__main__:RuntimeError: Timed out after 30s
2020-07-23 06:26:36,826.826 INFO:__main__:
2020-07-23 06:26:36,826.826 INFO:__main__:----------------------------------------------------------------------
2020-07-23 06:26:36,826.826 INFO:__main__:Ran 116 tests in 2136.427s
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4199/">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4199/</a></p>
Orchestrator - Feature #45876 (New): cephadm: handle port conflicts gracefully
https://tracker.ceph.com/issues/45876
2020-06-04T10:10:45Z
Sebastian Wagner
<pre>
INFO:cephadm:Verifying port 9100 ...
WARNING:cephadm:Cannot bind to IP 0.0.0.0 port 9100: [Errno 98] Address already in use
ERROR: TCP Port(s) '9100' required for node-exporter is already in use
Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/module.py", line 1638, in _run_cephadm
code, '\n'.join(err)))
RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploying daemon node-exporter.ceph-mon ...
INFO:cephadm:Verifying port 9100 ...
WARNING:cephadm:Cannot bind to IP 0.0.0.0 port 9100: [Errno 98] Address already in use
ERROR: TCP Port(s) '9100' required for node-exporter is already in use
2020-05-15T13:33:46.966159+0000 mgr.ceph-mgr.dixgvy (mgr.14161) 678 : cephadm [WRN] Failed to apply node-exporter spec ServiceSpec(
{'placement': PlacementSpec(host_pattern='*'), 'service_type': 'node-exporter', 'service_id': None, 'unmanaged': False}
): cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploying daemon node-exporter.ceph-mon ...
INFO:cephadm:Verifying port 9100 ...
WARNING:cephadm:Cannot bind to IP 0.0.0.0 port 9100: [Errno 98] Address already in use
ERROR: TCP Port(s) '9100' required for node-exporter is already in use
</pre>
<p>Important bits are:</p>
<ul>
<li><strong>We already know which services want which ports.</strong> </li>
<li>We can easily prevent port conflicts for known daemons, for example by probing the port before deploying (see the sketch below).</li>
<li>Open question: how to handle unknown daemons (e.g. a pre-existing node exporter).</li>
</ul>
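<p>A rough sketch of the kind of pre-check cephadm could run (or an operator can run by hand) for a daemon with a known port, here node-exporter on 9100:</p>
<pre>
# does anything already listen on the node-exporter port?
ss -ltnp 'sport = :9100'
# if the port is held by an unmanaged process, either stop that process
# or deploy the service with a different port/placement
</pre>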
Orchestrator - Feature #44864 (New): cephadm: garbage collect old container images
https://tracker.ceph.com/issues/44864
2020-03-31T15:45:58Z
Sebastian Wagner
<p>cephadm: garbage collect old container images</p>
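<p>Until cephadm does this itself, a hedged manual workaround is to prune unused images on each host (commands assume podman; double-check nothing still references an image before removing it):</p>
<pre>
# list the Ceph images present on this host
podman images --filter reference='quay.io/ceph/*'
# remove images not used by any container
podman image prune --all
</pre>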
ceph-cm-ansible - Bug #43738 (New): cephadm: conflicts between attempted installs of libstoragemg...
https://tracker.ceph.com/issues/43738
2020-01-21T10:27:32Z
Sebastian Wagner
<pre>
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 420, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 263, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 290, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 314, in _handle_failure
raise AnsibleFailedError(failures)
failure_reason: '86 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz
conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
Error Summary
-------------
''}}Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)RepresenterError: (''cannot represent an object'', u''/'')Failure
object was: {''smithi161.front.sepia.ceph.com'': {''_ansible_no_log'': False, ''changed'':
False, u''results'': [], u''rc'': 1, u''invocation'': {u''module_args'': {u''install_weak_deps'':
True, u''autoremove'': False, u''lock_timeout'': 0, u''download_dir'': None, u''install_repoquery'':
True, u''enable_plugin'': [], u''update_cache'': False, u''disable_excludes'': None,
u''exclude'': [], u''installroot'': u''/'', u''allow_downgrade'': False, u''name'':
[u''@core'', u''@base'', u''dnf-utils'', u''git-all'', u''sysstat'', u''libedit'',
u''boost-thread'', u''xfsprogs'', u''gdisk'', u''parted'', u''libgcrypt'', u''fuse-libs'',
u''openssl'', u''libuuid'', u''attr'', u''ant'', u''lsof'', u''gettext'', u''bc'',
u''xfsdump'', u''blktrace'', u''usbredir'', u''podman'', u''podman-docker'', u''libev-devel'',
u''valgrind'', u''nfs-utils'', u''ncurses-devel'', u''gcc'', u''git'', u''python3-nose'',
u''python3-virtualenv'', u''genisoimage'', u''qemu-img'', u''qemu-kvm-core'', u''qemu-kvm-block-rbd'',
u''libacl-devel'', u''dbench'', u''autoconf''], u''download_only'': False, u''bugfix'':
False, u''list'': None, u''disable_gpg_check'': False, u''conf_file'': None, u''update_only'':
False, u''state'': u''present'', u''disablerepo'': [], u''releasever'': None, u''disable_plugin'':
[], u''enablerepo'': [], u''skip_broken'': False, u''security'': False, u''validate_certs'':
True}}, u''failures'': [], u''msg'': u''Unknown Error occured: Transaction check
error:
file /usr/lib/tmpfiles.d/libstoragemgmt.conf conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/doc/libstoragemgmt/NEWS conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmcli.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmd.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/simc_lsmplugin.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log</a></p>
<ul>
<li>os_type: rhel</li>
<li>os_version: '8.0'</li>
<li>description: rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml}</li>
</ul>
<p>As this is an Ansible error, I'm not sure whether this is a cephadm issue. Any clues?</p>