Ceph : Issues
https://tracker.ceph.com/
2021-12-08T11:02:11Z
Ceph
Redmine
ceph-volume - Bug #53524 (New): CEPHADM_APPLY_SPEC_FAIL is very verbose
https://tracker.ceph.com/issues/53524
2021-12-08T11:02:11Z
Sebastian Wagner
<pre>
root@service-01-08020:~# ceph health detail
HEALTH_WARN Failed to apply 1 service(s): osd.hybrid; OSD count 0 < osd_pool_default_size 3
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.hybrid
osd.hybrid: cephadm exited with an error code: 1, stderr:Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr [--auto] [--no-auto] [--bluestore] [--filestore]
/usr/bin/docker: stderr [--report] [--yes]
/usr/bin/docker: stderr [--format {json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr [--crush-device-class CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr [--no-systemd]
/usr/bin/docker: stderr [--osds-per-device OSDS_PER_DEVICE]
/usr/bin/docker: stderr [--data-slots DATA_SLOTS]
/usr/bin/docker: stderr [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
/usr/bin/docker: stderr [--block-db-size BLOCK_DB_SIZE]
/usr/bin/docker: stderr [--block-db-slots BLOCK_DB_SLOTS]
/usr/bin/docker: stderr [--block-wal-size BLOCK_WAL_SIZE]
/usr/bin/docker: stderr [--block-wal-slots BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr [--journal-size JOURNAL_SIZE]
/usr/bin/docker: stderr [--journal-slots JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr [--osd-ids [OSD_IDS [OSD_IDS ...]]]
/usr/bin/docker: stderr [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
Traceback (most recent call last):
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8331, in <module>
main()
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8319, in main
r = ctx.func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1735, in _infer_config
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1676, in _infer_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1763, in _infer_image
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1663, in _validate_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 5285, in command_ceph_volume
out, err, code = call_throws(ctx, c.run_cmd())
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1465, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
[WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3
</pre>
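<p>The complaint is that the health detail embeds the full docker invocation and the argparse usage text, burying the one actionable line. A minimal sketch (hypothetical helper, not cephadm code) of condensing such stderr, assuming argparse's convention of printing the real complaint on a line containing "error:":</p>

```python
def condense_stderr(stderr: str) -> str:
    """Return the actionable error line from a verbose ceph-volume
    stderr dump; argparse prints usage text first and the actual
    complaint last, on a line containing 'error:'."""
    lines = [ln.strip() for ln in stderr.splitlines() if ln.strip()]
    for ln in reversed(lines):
        if "error:" in ln:
            return ln
    return lines[-1] if lines else ""


VERBOSE = """\
usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES ...]]
                             [--osd-ids [OSD_IDS ...]]
ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
"""
```

<p>Applied to the stderr above, only the final "GPT headers found" line would surface in <code>ceph health detail</code>.</p>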
rbd - Bug #46875 (New): TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
https://tracker.ceph.com/issues/46875
2020-08-10T01:03:45Z
Sebastian Wagner
<pre>
[ RUN ] TestLibRBD.TestPendingAio
using new format!
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/librbd/test_librbd.cc:4539: Failure
Expected equality of these values:
1
rbd_aio_is_complete(comps[i])
Which is: 0
[ FAILED ] TestLibRBD.TestPendingAio (68 ms)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
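<p>The test checks <code>rbd_aio_is_complete</code> exactly once, so a still-in-flight aio fails it. A deadline-polling sketch (generic Python, not the librbd C API) of how a one-shot completion check can be made tolerant of such races:</p>

```python
import time

def wait_for(predicate, timeout=5.0, interval=0.01):
    """Poll predicate() until it returns True or the deadline passes;
    returns the final predicate value. Turns a racy one-shot check
    into a bounded wait."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())
```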
sepia - Bug #46336 (New): https://download-cc-rdu01.fedoraproject.org is unreliable
https://tracker.ceph.com/issues/46336
2020-07-03T09:58:16Z
Sebastian Wagner
<pre>
2020-07-03T09:05:49.488 INFO:teuthology.orchestra.run.smithi058:> sudo yum -y install ceph-test
2020-07-03T09:05:49.626 INFO:teuthology.orchestra.run.smithi195.stdout:Transaction test succeeded.
2020-07-03T09:05:49.627 INFO:teuthology.orchestra.run.smithi195.stdout:Running transaction
2020-07-03T09:05:49.924 INFO:teuthology.orchestra.run.smithi058.stdout:Last metadata expiration check: 0:00:36 ago on Fri 03 Jul 2020 09:05:13 AM UTC.
2020-07-03T09:05:50.065 INFO:teuthology.orchestra.run.smithi195.stdout: Preparing : 1/1
2020-07-03T09:05:50.238 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : libxslt-1.1.32-3.el8.x86_64 1/6
2020-07-03T09:05:50.310 INFO:teuthology.orchestra.run.smithi058.stdout:Dependencies resolved.
2020-07-03T09:05:50.311 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.311 INFO:teuthology.orchestra.run.smithi058.stdout: Package Arch Version Repository Size
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout:Installing:
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout: ceph-test x86_64 2:16.0.0-3122.ge1d6abcdc6f.el8 ceph 45 M
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout:Installing dependencies:
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout: jq x86_64 1.5-12.el8 CentOS-AppStream 161 k
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout: oniguruma x86_64 6.8.2-1.el8 CentOS-AppStream 188 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: socat x86_64 1.7.3.2-6.el8 CentOS-AppStream 298 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: libxslt x86_64 1.1.32-3.el8 CentOS-Base 249 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: xmlstarlet x86_64 1.6.1-11.el8 epel 69 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:Transaction Summary
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:Install 6 Packages
2020-07-03T09:05:50.316 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Total download size: 46 M
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Installed size: 194 M
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Downloading Packages:
2020-07-03T09:05:50.348 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] jq-1.5-12.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/jq-1.5-12.el8.x86_64.rpm
2020-07-03T09:05:50.348 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] oniguruma-6.8.2-1.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/oniguruma-6.8.2-1.el8.x86_64.rpm
2020-07-03T09:05:50.394 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : xmlstarlet-1.6.1-11.el8.x86_64 2/6
2020-07-03T09:05:50.454 INFO:teuthology.orchestra.run.smithi058.stdout:(1/6): jq-1.5-12.el8.x86_64.rpm 1.1 MB/s | 161 kB 00:00
2020-07-03T09:05:50.463 INFO:teuthology.orchestra.run.smithi058.stdout:(2/6): oniguruma-6.8.2-1.el8.x86_64.rpm 1.2 MB/s | 188 kB 00:00
2020-07-03T09:05:50.487 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.491 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.492 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://mirror.linux.duke.edu/pub/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.502 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://packages.oit.ncsu.edu/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.508 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 404 for http://mirror.linux.duke.edu/pub/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.508 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 404 for http://packages.oit.ncsu.edu/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.539 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://distro.ibiblio.org/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.539 INFO:teuthology.orchestra.run.smithi058.stdout:[FAILED] socat-1.7.3.2-6.el8.x86_64.rpm: No more mirrors to try - All mirrors were already tried without success
2020-07-03T09:05:50.541 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.542 INFO:teuthology.orchestra.run.smithi058.stdout:The downloaded packages were saved in cache until the next successful transaction.
2020-07-03T09:05:50.542 INFO:teuthology.orchestra.run.smithi058.stdout:You can remove cached packages by executing 'dnf clean packages'.
2020-07-03T09:05:50.555 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : socat-1.7.3.2-6.el8.x86_64 3/6
2020-07-03T09:05:50.615 INFO:teuthology.orchestra.run.smithi058.stderr:Error: Error downloading packages:
2020-07-03T09:05:50.615 INFO:teuthology.orchestra.run.smithi058.stderr: Cannot download Packages/socat-1.7.3.2-6.el8.x86_64.rpm: All mirrors were tried
2
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-07-03_08:12:34-rados:cephadm-wip-swagner-testing-2020-07-02-1034-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2020-07-03_08:12:34-rados:cephadm-wip-swagner-testing-2020-07-02-1034-distro-basic-smithi/</a></p>
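<p>dnf already fails over between mirrors, but in this run every mirror in the list errored out. A sketch of that failover loop (hypothetical helper with an injected fetch function; not teuthology or dnf code):</p>

```python
def fetch_with_failover(path, mirror_bases, fetch):
    """Try each mirror base URL in order; return the first successful
    payload, or raise once every mirror has failed - mirroring dnf's
    'No more mirrors to try' behaviour."""
    errors = []
    for base in mirror_bases:
        try:
            return fetch(base + path)
        except OSError as exc:
            errors.append((base, exc))
    raise RuntimeError(
        "All mirrors were already tried without success: %r" % errors)
```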
Orchestrator - Documentation #45936 (New): cephadm: document restart the whole cluster
https://tracker.ceph.com/issues/45936
2020-06-08T13:46:40Z
Sebastian Wagner
<pre>
[15:23:28] <dcapone2004> I have a ceph dev cluster of 3 nodes deployed using cephadm with octopus on centos 8
[15:23:42] <-- beigestair (~beigestai@0BGAAAFRJ.tor-irc.dnsbl.oftc.net) has left the network (Remote host closed the connection)
[15:24:26] <dcapone2004> this is going to be a hyperconverged openstack cluster, that i am essentially testing....a key element is that the location that we deploy this cluster at will change in about 18 months, so I have been trying to write up procedures to safely shut down the cluster and power it back up
[15:24:35] <dcapone2004> this is the latest place where i have run into an issue
[15:24:52] --> ragedragon (~ragedrago@i16-lef02-th2-89-83-167-245.ft.lns.abo.bbox.fr) has joined #ceph
[15:25:39] <dcapone2004> I stopped all disk activity on the cluster, set osd noout, then sht down the nodes of the cluster 1 by 1, with the active Manager being LAST
[15:25:55] --> beigestair (~beigestai@9J5AADHU6.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:26:24] <dcapone2004> a few hours later, I tried to power the cluster back up starting with the last active manager and going in the reverse order that i shut the down
[15:26:44] <dcapone2004> and no I lost 2 OSD containers and all my manager containers
[15:26:48] <dcapone2004> now*
[15:27:10] <dcapone2004> ceph orch daemon redploy does nothing, nor does restart
[15:27:43] <dcapone2004> and when simply trying podman start, podman claims to not know about those containers, but the ceph dashboard shows the OSDs are in but down
[15:28:38] <SebastianW> dcapone2004: what does "I lost my manager containers" mean?
[15:28:57] <dcapone2004> meaning ceph -s shows no active manager containers
[15:29:31] <dcapone2004> podman start mgr.dev-lx-ceph11 (my hostname) says the container doesnt exist
[15:31:22] <dcapone2004> originally when i first started it up I only lost 1 of 2 containers, then after trying to use redeploy, the second disappeared....I am unsure if this is/was related to my attempt to upgrade to 15.2.3 which failed (and I filed a bug report for) and major the inconsistant version numbers between containers caused this issue
</pre>
Orchestrator - Documentation #45896 (New): cephadm: Need a manual howto: "upgrade the cluster man...
https://tracker.ceph.com/issues/45896
2020-06-04T13:18:43Z
Sebastian Wagner
<p>symptom:</p>
<pre>
$ ceph mon ok-to-stop host2
Error EBUSY: not enough monitors would be available (host1) after stopping mons [host2]
</pre>
<p>Maybe something like this could work:</p>
<pre>
ceph orch upgrade start --force-daemon mon.host1
</pre>
<p>A possible workaround would be to perform the upgrade step manually:</p>
<pre>
ceph config set mon.host1 <container-image>
ceph orch redeploy mon.host1
<wait until cephadm refreshes `ceph orch ps`>
</pre>
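<p>The "wait" step could be automated by polling <code>ceph orch ps --format json</code> until the daemon reports the new container image. A parsing sketch (the JSON field names are assumptions about cephadm's output, not verified):</p>

```python
import json

def daemon_image(orch_ps_json: str, daemon_name: str):
    """Return the container image a daemon runs according to
    `ceph orch ps --format json` output, or None if the daemon
    is not listed."""
    for daemon in json.loads(orch_ps_json):
        if daemon.get("daemon_name") == daemon_name:
            return daemon.get("container_image_name")
    return None
```

<p>A caller would re-run the command until <code>daemon_image(out, "mon.host1")</code> matches the image passed to <code>ceph config set</code>.</p>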
teuthology - Bug #45583 (New): teuthology-suite: "--subset" combined with "--filter" generates du...
https://tracker.ceph.com/issues/45583
2020-05-18T11:03:34Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/</a></p>
<p>scheduled via</p>
<pre>
teuthology-suite -k distro --priority 75 --suite rados --filter cephadm --subset 1135/9999 --email swagner@suse.com --ceph wip-swagner-testing-2020-05-15-2348 --machine-type smithi
</pre>
<p>scheduled</p>
<ul>
<li><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066708">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066708</a> </li>
<li><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066741">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066741</a></li>
</ul>
<p>both having the description:</p>
<pre>
rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{rhel_8.0.yaml} fixed-2.yaml}
</pre>
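<p>A de-duplication pass over the scheduled jobs would avoid this: drop any job whose description has already been queued. A sketch (hypothetical, not actual teuthology-suite code):</p>

```python
def dedupe_jobs(jobs):
    """Keep only the first job for each distinct description,
    preserving scheduling order."""
    seen = set()
    unique = []
    for job in jobs:
        if job["description"] not in seen:
            seen.add(job["description"])
            unique.append(job)
    return unique
```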
Infrastructure - Bug #45453 (New): 'https://download.ceph.com/debian-octopus focal Release' does ...
https://tracker.ceph.com/issues/45453
2020-05-08T20:28:48Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/">http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/</a></p>
<pre>
2020-05-08T15:24:54.694 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/cephadm -v add-repo --release octopus
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:DEBUG:cephadm:Could not locate podman: podman not found
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo GPG key from https://download.ceph.com/keys/release.asc...
2020-05-08T15:24:54.964 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo file at /etc/apt/sources.list.d/ceph.list...
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ test_install_uninstall
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo apt update
2020-05-08T15:24:55.009 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.139 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
2020-05-08T15:24:55.270 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
2020-05-08T15:24:55.284 INFO:tasks.workunit.client.0.smithi204.stdout:Ign:3 https://download.ceph.com/debian-octopus focal InRelease
2020-05-08T15:24:55.320 INFO:tasks.workunit.client.0.smithi204.stdout:Err:4 https://download.ceph.com/debian-octopus focal Release
2020-05-08T15:24:55.321 INFO:tasks.workunit.client.0.smithi204.stdout: 404 Not Found [IP: 158.69.68.124 443]
2020-05-08T15:24:55.351 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
2020-05-08T15:24:55.434 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
2020-05-08T15:24:56.423 INFO:tasks.workunit.client.0.smithi204.stdout:Reading package lists...
2020-05-08T15:24:56.442 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:E: The repository 'https://download.ceph.com/debian-octopus focal Release' does not have a Release file.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.444 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo yum -y install cephadm
2020-05-08T15:24:56.452 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: yum: command not found
2020-05-08T15:24:56.453 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo dnf -y install cephadm
2020-05-08T15:24:56.459 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: dnf: command not found
2020-05-08T15:24:56.460 DEBUG:teuthology.orchestra.run:got remote process result: 1
</pre>
<p>Are we going to get an Ubuntu repo for focal, or should I disable this test for it?</p>
teuthology - Bug #45442 (New): ubuntu 20.04: Hang on: "The following packages will be REMOVED:"
https://tracker.ceph.com/issues/45442
2020-05-08T07:43:16Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/5030975/">http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/5030975/</a></p>
<pre>
2020-05-07T17:31:41.061 INFO:teuthology.orchestra.run.smithi086:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2020-05-07T17:31:41.179 INFO:teuthology.orchestra.run.smithi086.stdout:Reading package lists...
2020-05-07T17:31:41.315 INFO:teuthology.orchestra.run.smithi086.stdout:Building dependency tree...
2020-05-07T17:31:41.315 INFO:teuthology.orchestra.run.smithi086.stdout:Reading state information...
2020-05-07T17:31:41.442 INFO:teuthology.orchestra.run.smithi086.stdout:The following packages were automatically installed and are no longer required:
2020-05-07T17:31:41.443 INFO:teuthology.orchestra.run.smithi086.stdout: ceph-mon ceph-osd libboost-iostreams1.71.0
2020-05-07T17:31:41.443 INFO:teuthology.orchestra.run.smithi086.stdout:Use 'sudo apt autoremove' to remove them.
2020-05-07T17:31:41.455 INFO:teuthology.orchestra.run.smithi086.stdout:The following packages will be REMOVED:
2020-05-07T17:31:41.456 INFO:teuthology.orchestra.run.smithi086.stdout: ceph*
2020-05-08T05:03:17.376 DEBUG:teuthology.exit:Got signal 15; running 2 handlers...
2020-05-08T05:03:17.396 DEBUG:teuthology.task.console_log:Killing console logger for smithi086
</pre>
<p>It looks as if <code>-y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"</code> is still not enough.</p>
RADOS - Feature #45079 (New): HEALTH_WARN, if require-osd-release is < mimic and OSD wants to joi...
https://tracker.ceph.com/issues/45079
2020-04-14T10:25:33Z
Sebastian Wagner
<p>When upgrading a cluster to octopus, users should get a warning if require-osd-release is < mimic, as this prevents OSDs from joining the cluster.</p>
<p>Right now, we only get an INF message in the logs:</p>
<pre>
cluster [INF] disallowing boot of octopus+ OSD osd.1 v1:172.16.1.25:6800/3051821808 because require_osd
</pre>
<p>This should be a HEALTH_WARN instead.</p>
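<p>The check itself reduces to a release-order comparison. A sketch of the proposed warning condition (the release list and function name are hypothetical; positions stand in for the numeric ordinals the monitor actually compares):</p>

```python
# Ceph release names in order, oldest first.
RELEASES = ["luminous", "mimic", "nautilus", "octopus"]

def should_raise_health_warn(require_osd_release: str,
                             osd_release: str = "octopus") -> bool:
    """True when an octopus+ OSD would be refused because
    require-osd-release is still below mimic - the case that should
    surface as HEALTH_WARN rather than a cluster-log INF."""
    return (RELEASES.index(require_osd_release) < RELEASES.index("mimic")
            and RELEASES.index(osd_release) >= RELEASES.index("octopus"))
```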
ceph-volume - Feature #44630 (New): cephadm: improve behaviour with virtual disks
https://tracker.ceph.com/issues/44630
2020-03-16T17:44:09Z
Sebastian Wagner
<p>When the disks are virtual, the "identify OSDs" operation should just print a message; it's a no-op anyway. This could be extended to KVM virtual disks and VMware to cover the bases.</p>
<p>The vendor ID 0x1af4 is the "known" descriptor for a VirtIO disk, so we should use that in the inventory display.</p>
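<p>The inventory check would reduce to a vendor-ID comparison. A sketch (where the ID is read from, e.g. a sysfs attribute, is an assumption and platform-dependent):</p>

```python
VIRTIO_VENDOR_ID = "0x1af4"  # PCI vendor ID used for VirtIO devices

def is_virtio_disk(vendor_id: str) -> bool:
    """True when a disk's PCI vendor ID marks it as a VirtIO virtual
    disk, so 'identify OSD' can short-circuit to a message instead
    of trying to blink a non-existent enclosure LED."""
    return vendor_id.strip().lower() == VIRTIO_VENDOR_ID
```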
teuthology - Bug #44181 (New): Error in syslog: task.internal.syslog: random "*BUG*" in log message
https://tracker.ceph.com/issues/44181
2020-02-18T10:33:27Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502">http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502</a></p>
<p>This job failure was caused by</p>
<p><a class="external" href="https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144">https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144</a></p>
<pre>
2020-02-18T06:48:18.388 ERROR:teuthology.task.internal.syslog:Error in syslog on ubuntu@smithi060.front.sepia.ceph.com: /home/ubuntu/cephtest/archive/syslog/misc.log:2020-02-18T06:42:25.361371+00:00 smithi060 bash[10468]: audit 2020-02-18T06:42:24.442267+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14124 172.21.15.60:0/1' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/dashboard/key","val":"-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCnskmhDB10Jk6M\nXNpzP+7hOWVV7TYIeAGSapYoNcgQcPrQU0STPGuyUORmnKO4taTVz8EBvPL4p6Mv\niZFEIhL2OL07UexgDqKaAD4lne/KIhYQJVtkqPu/TYYemxa6xyl/V4LGPrSGYx0C\n8huP7RsqEMNLNMr/wG34hG7LCdGtcWk8aylma8XrXgukEHMsJJIeb+ZZKw3AnZCT\nbO++2B+V5DPtE4LId4x3G1PrVumH/whd6ciTtcImFspCRQgwlaPnDLf68bFXF67G\nBWJZZRFoTCc73fu/rUW1vGYk7WFiVi52WVbpgYrPc/AhWNOaH6d0xooBoohRjAYh\n7GDdVa+LAgMBAAECggEAM1HEZpymht0SPLJNx+dQ22wNLvahCoZvNLeZrESJLT7m\nAsr4uXZMHw3SV/SnxecQwr4JetawJJhowCuBYTBsTR2gC39OrzbLXAWm/ywOLfWw\netBz36I3KJw45zTfB9nbQTUuuCyIYngCcNxWwvz0yzLGEUXeudXR0bP1k/01RbZo\nhGe8oQSJzSN4zmfQtx/rSGCXJr23HUjPs0mVHqml2bZL9UZcuKu5RuN8PWSo0aOM\nZwGYa/1pcoo1OsN3XtujY9tU0Ykrd3rteAARMIBFzrktaWWhSdaiOQS0fYAnyGrX\njI7cjlsbtJfTt741wF0hmCZIGS40+HWTmwCTkapWwQKBgQDQkX4ZHyvFD96W81rz\nXLIdSEfgv0+andTC2v5kvlk4cxIYgic2g1R59gekZioOpVIQG/eCwiSFW5ndqjzI\nSGMj2bflL8vXv5q4EX+TL3W5LWOnR8k7FVxJpPsJbbyX93qbpcU/oKNSt5VDUPaL\neooha6lDP+HEAdfWHqe1PxP+9wKBgQDN1UzVWU8ur2tdlql2BVrwfi/J32/ZUFQG\nCPvC9RMdavZjKITu2Rg5LA6kYOnJ9MvVTU59Mf2c+6kKWQTaRhqTqhfAJYtjvmoC\naTGm7HGPywEOeMphF+LAb23DNcCzQFhBVduOfL8MSkTjJOjmaxyYc2qs+ts87NMt\nqCENAaPrDQKBgHvuV/1ZdkqsOVl81QhShku8DWnQg96d9jSqqAr4yE8woQoLHH3Z\n37JwrO3U/xygw3hrBdGextCvM2hxpZhk2vQMhKcclYVnhunlC+dLhio4fESD9WC0\nOphP/hMGL9Ak76fZArHiI+ocyAat7zHF6JofPP6G0QIFDlle8cxS5PDVAoGBAMRB\nByQ5JkV2HqG6YFNWYdICDuClOQj0DVk/wYSulY4sCUacQLtXpUAF4OQcP20/CgaT\n0i2Ot6ixTwi9veG8i+SVflXHtnLhAETSNfRZZyHaRmSdCSGwW5Rt6jMBkn2W8U9C\nZLgj+yjlu270J1hjcn1tNp4+BUG+8M+Mig7TrI4VAoGAYYltCD4bc2bBAPWnF6nk\nqrx16kKg0kjNdhATkBWt76jpsJYRmyo5NALLaB5/k0dS7ftuTmGEZLSnNyl44O2B\n7QH6PaRoP0hX5LtLwSZhiJxd6tDrfwMFzpVGiJHeUNKGS/GKQzlvlxUJb2aOhNWu\nMgFlLWfPOMgxiRpwUhtg0Is=\n-----END PRIVATE KEY-----\n"}]: dispatch
</pre>
<p>Unfortunately, this log message contains the string "<code>BUG</code>" somewhere in the base64-encoded key, so the syslog check flags the line as an error.</p>
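<p>For illustration, a minimal sketch (not the actual teuthology scraper code) of why a keyword scan over syslog trips on this message: the base64-encoded key material can contain the substring <code>BUG</code> by chance, and a plain substring match cannot tell it apart from a real kernel BUG line. The payload string below is illustrative, not the real key.</p>

```python
# Hypothetical sketch of a syslog keyword scan (NOT the real teuthology
# scraper): a plain substring search for "BUG" also matches base64 key
# material that happens to contain those three letters.
KEYWORDS = ("BUG", "WARNING", "Oops")

def flag_line(line: str) -> bool:
    """Return True if the line would be reported as a syslog error."""
    return any(kw in line for kw in KEYWORDS)

# A real kernel problem is flagged, as intended:
assert flag_line("kernel: BUG: unable to handle page fault")

# ...but so is an audit line whose base64 payload merely contains "BUG":
audit = ('audit [INF] cmd=[{"prefix":"config-key set",'
         '"key":"mgr/dashboard/key","val":"MIIEvQIBUGkqhkiG..."}]')
assert flag_line(audit)  # false positive on the key material
```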
Messengers - Bug #44012 (New): shaman build error: (bionic+crimson): infiniband: error: aggregate...
https://tracker.ceph.com/issues/44012
2020-02-06T12:23:54Z
Sebastian Wagner
<p><a class="external" href="https://shaman.ceph.com/builds/ceph/wip-swagner-testing/3f9622d20ae8c91019b8fb97e46196113164006a/crimson/188435/">https://shaman.ceph.com/builds/ceph/wip-swagner-testing/3f9622d20ae8c91019b8fb97e46196113164006a/crimson/188435/</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/36282//consoleFull">https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/36282//consoleFull</a></p>
<pre>
Run Build Command:"/usr/bin/make" "cmTC_1399f/fast"
make[2]: Entering directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
/usr/bin/make -f CMakeFiles/cmTC_1399f.dir/build.make CMakeFiles/cmTC_1399f.dir/build
make[3]: Entering directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_1399f.dir/src.cxx.o
/usr/bin/c++ -g -O2 -fdebug-prefix-map=/build/ceph-15.1.0-264-g3f9622d=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -DHAVE_IBV_EXP -fPIE -o CMakeFiles/cmTC_1399f.dir/src.cxx.o -c /build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx
/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx: In function 'int main()':
/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx:5:31: error: aggregate 'main()::ibv_exp_gid_attr gid_attr' has incomplete type and cannot be defined
5 | struct ibv_exp_gid_attr gid_attr;
| ^~~~~~~~
/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx:6:7: error: 'ibv_exp_query_gid_attr' was not declared in this scope; did you mean 'ibv_exp_gid_attr'?
6 | ibv_exp_query_gid_attr(ctxt, 1, 0, &gid_attr);
| ^~~~~~~~~~~~~~~~~~~~~~
| ibv_exp_gid_attr
CMakeFiles/cmTC_1399f.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_1399f.dir/src.cxx.o' failed
make[3]: *** [CMakeFiles/cmTC_1399f.dir/src.cxx.o] Error 1
make[3]: Leaving directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_1399f/fast' failed
make[2]: *** [cmTC_1399f/fast] Error 2
make[2]: Leaving directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
Source file was:
#include <infiniband/verbs.h>
int main() {
struct ibv_context* ctxt;
struct ibv_exp_gid_attr gid_attr;
ibv_exp_query_gid_attr(ctxt, 1, 0, &gid_attr);
return 0;
}
dh_auto_configure: cd obj-x86_64-linux-gnu && cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_EXPORT_NO_PACKAGE_REGISTRY=ON -DCMAKE_FIND_PACKAGE_NO_PACKAGE_REGISTRY=ON -DWITH_OCF=ON -DWITH_LTTNG=ON -DWITH_MGR_DASHBOARD_FRONTEND=OFF -DWITH_PYTHON3=3 -DWITH_CEPHFS_JAVA=ON -DWITH_CEPHFS_SHELL=ON -DWITH_SYSTEMD=ON -DCEPH_SYSTEMD_ENV_DIR=/etc/default -DCMAKE_INSTALL_LIBDIR=/usr/lib -DCMAKE_INSTALL_LIBEXECDIR=/usr/lib -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_SYSTEMD_SERVICEDIR=/lib/systemd/system -DBOOST_J=8 -DWITH_BOOST_CONTEXT=ON -DWITH_SEASTAR=ON -DWITH_STATIC_LIBSTDCXX=OFF returned exit code 1
debian/rules:47: recipe for target 'override_dh_auto_configure' failed
make[1]: *** [override_dh_auto_configure] Error 2
make[1]: Leaving directory '/build/ceph-15.1.0-264-g3f9622d'
debian/rules:44: recipe for target 'build' failed
make: *** [build] Error 2
</pre>
ceph-volume - Bug #43858 (New): ceph-volume: lvm zap requires `/dev/` prefix
https://tracker.ceph.com/issues/43858
2020-01-28T12:38:22Z
Sebastian Wagner
<p>This differs from the other lvm subcommands:</p>
<p>preparing:</p>
<pre>
root@ubuntu:~# losetup -f
/dev/loop12
root@ubuntu:~# LANG=C losetup /dev/loop12 disk-image
root@ubuntu:~# pvcreate /dev/loop12
Physical volume "/dev/loop12" successfully created.
root@ubuntu:~# vgcreate MyVg /dev/loop12
Volume group "MyVg" successfully created
root@ubuntu:~# lvcreate --size 5500M --name MyLV MyVg
Logical volume "MyLV" created.
root@ubuntu:~# ll /dev/MyVg/MyLV
lrwxrwxrwx 1 root root 7 Jan 28 11:38 /dev/MyVg/MyLV -> ../dm-0
root@ubuntu:~# vgs -o vg_tags MyVg
VG Tags
root@ubuntu:~# vgs -o lv_tags MyVg
LV Tags
</pre>
<pre>
# ceph-volume lvm zap MyVg/MyLV
stderr: lsblk: MyVg/MyLV: not a block device
stderr: blkid: error: MyVg/MyLV: No such file or directory
stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
[--osd-fsid OSD_FSID]
[DEVICES [DEVICES ...]]
ceph-volume lvm zap: error: Unable to proceed with non-existing device: MyVg/MyLV
</pre>
<pre>
# ceph-volume lvm zap /dev/MyVg/MyLV
--> Zapping: /dev/MyVg/MyLV
Running command: /bin/dd if=/dev/zero of=/dev/MyVg/MyLV bs=1M count=10
stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00413076 s, 2.5 GB/s
--> Zapping successful for: <LV: /dev/MyVg/MyLV>
</pre>
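<p>A sketch of how <code>zap</code> could accept the same <code>vg/lv</code> notation as the other lvm subcommands: if the argument is not an absolute path but looks like <code>vg/lv</code>, fall back to the corresponding <code>/dev/&lt;vg&gt;/&lt;lv&gt;</code> node. The helper name below is illustrative, not the actual ceph-volume internals.</p>

```python
import os

def normalize_device_arg(arg: str) -> str:
    """Hypothetical helper: map 'vg/lv' to '/dev/vg/lv' so that
    `lvm zap` accepts the same notation as the other lvm subcommands.
    (Illustrative only; not the actual ceph-volume implementation.)"""
    if arg.startswith("/"):
        return arg  # already an absolute path such as /dev/MyVg/MyLV
    if arg.count("/") == 1:
        vg, lv = arg.split("/")
        if vg and lv:
            return os.path.join("/dev", vg, lv)
    return arg

assert normalize_device_arg("MyVg/MyLV") == "/dev/MyVg/MyLV"
assert normalize_device_arg("/dev/MyVg/MyLV") == "/dev/MyVg/MyLV"
assert normalize_device_arg("/dev/sda") == "/dev/sda"
```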
ceph-cm-ansible - Bug #43738 (New): cephadm: conflicts between attempted installs of libstoragemg...
https://tracker.ceph.com/issues/43738
2020-01-21T10:27:32Z
Sebastian Wagner
<pre>
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 420, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 263, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 290, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 314, in _handle_failure
raise AnsibleFailedError(failures)
failure_reason: '86 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz
conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
Error Summary
-------------
'}}Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)RepresenterError: ('cannot represent an object', u'/')Failure
object was: {'smithi161.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed':
False, u'results': [], u'rc': 1, u'invocation': {u'module_args': {u'install_weak_deps':
True, u'autoremove': False, u'lock_timeout': 0, u'download_dir': None, u'install_repoquery':
True, u'enable_plugin': [], u'update_cache': False, u'disable_excludes': None,
u'exclude': [], u'installroot': u'/', u'allow_downgrade': False, u'name':
[u'@core', u'@base', u'dnf-utils', u'git-all', u'sysstat', u'libedit',
u'boost-thread', u'xfsprogs', u'gdisk', u'parted', u'libgcrypt', u'fuse-libs',
u'openssl', u'libuuid', u'attr', u'ant', u'lsof', u'gettext', u'bc',
u'xfsdump', u'blktrace', u'usbredir', u'podman', u'podman-docker', u'libev-devel',
u'valgrind', u'nfs-utils', u'ncurses-devel', u'gcc', u'git', u'python3-nose',
u'python3-virtualenv', u'genisoimage', u'qemu-img', u'qemu-kvm-core', u'qemu-kvm-block-rbd',
u'libacl-devel', u'dbench', u'autoconf'], u'download_only': False, u'bugfix':
False, u'list': None, u'disable_gpg_check': False, u'conf_file': None, u'update_only':
False, u'state': u'present', u'disablerepo': [], u'releasever': None, u'disable_plugin':
[], u'enablerepo': [], u'skip_broken': False, u'security': False, u'validate_certs':
True}}, u'failures': [], u'msg': u'Unknown Error occured: Transaction check
error:
file /usr/lib/tmpfiles.d/libstoragemgmt.conf conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/doc/libstoragemgmt/NEWS conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmcli.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmd.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/simc_lsmplugin.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log</a></p>
<ul>
<li>os_type: rhel</li>
<li>os_version: '8.0'</li>
<li>description: rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml}</li>
</ul>
<p>As this is an Ansible error, I'm not sure whether this is a cephadm issue. Any clues?</p>
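<p>For what it's worth, every conflict line in the log above names the same package in two architectures, so one way to triage is to parse those lines and confirm this is a multilib (i686 vs. x86_64) file conflict rather than anything cephadm-specific. A sketch, using a conflict line taken from the log:</p>

```python
import re

# Pattern for dnf/rpm multilib conflict messages as they appear in the log.
CONFLICT_RE = re.compile(
    r"conflicts between attempted installs of "
    r"(?P<a>\S+)\.(?P<arch_a>i686|x86_64) and "
    r"(?P<b>\S+)\.(?P<arch_b>i686|x86_64)"
)

line = ("file /usr/share/man/man5/lsmd.conf.5.gz conflicts between attempted "
        "installs of libstoragemgmt-1.6.2-9.el8.i686 and "
        "libstoragemgmt-1.8.1-2.el8.x86_64")

m = CONFLICT_RE.search(line)
assert m is not None
# Same package, two architectures (and here also two versions): a multilib clash.
assert m.group("arch_a") != m.group("arch_b")
assert m.group("a").split("-")[0] == m.group("b").split("-")[0]  # 'libstoragemgmt'
```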
mgr - Bug #40016 (New): mgr/selftest: test_selftest_config_update fails in a vstart cluster
https://tracker.ceph.com/issues/40016
2019-05-23T12:43:50Z
Sebastian Wagner
<pre>
2019-05-23 14:37:59,692.692 INFO:__main__:Running ['./bin/ceph', 'log', 'Ended test tasks.mgr.test_module_selftest.TestModuleSelftest.test_selftest_config_update']
2019-05-23 14:38:02,798.798 INFO:__main__:Stopped test: test_selftest_config_update (tasks.mgr.test_module_selftest.TestModuleSelftest) in 39.056024s
2019-05-23 14:38:02,799.799 INFO:__main__:
2019-05-23 14:38:02,800.800 INFO:__main__:======================================================================
2019-05-23 14:38:02,800.800 INFO:__main__:FAIL: test_selftest_config_update (tasks.mgr.test_module_selftest.TestModuleSelftest)
2019-05-23 14:38:02,800.800 INFO:__main__:----------------------------------------------------------------------
2019-05-23 14:38:02,800.800 INFO:__main__:Traceback (most recent call last):
2019-05-23 14:38:02,801.801 INFO:__main__: File "/home/sebastian/Repos/ceph/qa/tasks/mgr/test_module_selftest.py", line 94, in test_selftest_config_update
2019-05-23 14:38:02,801.801 INFO:__main__: self.assertEqual(get_value(), "None")
2019-05-23 14:38:02,801.801 INFO:__main__:AssertionError: 'testvalue' != 'None'
2019-05-23 14:38:02,801.801 INFO:__main__:
2019-05-23 14:38:02,802.802 INFO:__main__:----------------------------------------------------------------------
2019-05-23 14:38:02,802.802 INFO:__main__:Ran 14 tests in 951.565s
2019-05-23 14:38:02,802.802 INFO:__main__:
2019-05-23 14:38:02,802.802 INFO:__main__:FAILED (failures=1)
2019-05-23 14:38:02,803.803 INFO:__main__:
2019-05-23 14:38:02,803.803 INFO:__main__:======================================================================
2019-05-23 14:38:02,803.803 INFO:__main__:FAIL: test_selftest_config_update (tasks.mgr.test_module_selftest.TestModuleSelftest)
2019-05-23 14:38:02,803.803 INFO:__main__:----------------------------------------------------------------------
2019-05-23 14:38:02,803.803 INFO:__main__:Traceback (most recent call last):
2019-05-23 14:38:02,804.804 INFO:__main__: File "/home/sebastian/Repos/ceph/qa/tasks/mgr/test_module_selftest.py", line 94, in test_selftest_config_update
2019-05-23 14:38:02,804.804 INFO:__main__: self.assertEqual(get_value(), "None")
2019-05-23 14:38:02,804.804 INFO:__main__:AssertionError: 'testvalue' != 'None'
</pre>
<p>Running with <code>vstart_runner.py</code>.</p>
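<p>One plausible reading (an assumption, not confirmed by the log) is that the config removal has not yet propagated to the module when the test asserts, and a vstart cluster's timing exposes that. A generic sketch of asserting on eventually-consistent state by polling instead of checking once; all names below are illustrative, not the actual qa task code:</p>

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.1):
    """Poll `predicate` until it returns True or `timeout` elapses.
    Generic helper for asserting on eventually-consistent state such as
    a config value propagating to a mgr module; illustrative only."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()

# Simulated module-side view of the config value: it only flips to "None"
# a short while after the removal was issued.
start = time.monotonic()
def get_value():
    return "None" if time.monotonic() - start > 0.3 else "testvalue"

# A single immediate check (what the failing assert effectively does) can
# see the stale value, whereas polling tolerates the propagation delay:
assert wait_until(lambda: get_value() == "None", timeout=5.0)
```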