Ceph : Issues
https://tracker.ceph.com/
2021-12-08T11:02:11Z
Ceph
Redmine
ceph-volume - Bug #53524 (New): CEPHADM_APPLY_SPEC_FAIL is very verbose
https://tracker.ceph.com/issues/53524
2021-12-08T11:02:11Z
Sebastian Wagner
<pre>
root@service-01-08020:~# ceph health detail
HEALTH_WARN Failed to apply 1 service(s): osd.hybrid; OSD count 0 < osd_pool_default_size 3
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.hybrid
osd.hybrid: cephadm exited with an error code: 1, stderr:Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr [--auto] [--no-auto] [--bluestore] [--filestore]
/usr/bin/docker: stderr [--report] [--yes]
/usr/bin/docker: stderr [--format {json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr [--crush-device-class CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr [--no-systemd]
/usr/bin/docker: stderr [--osds-per-device OSDS_PER_DEVICE]
/usr/bin/docker: stderr [--data-slots DATA_SLOTS]
/usr/bin/docker: stderr [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
/usr/bin/docker: stderr [--block-db-size BLOCK_DB_SIZE]
/usr/bin/docker: stderr [--block-db-slots BLOCK_DB_SLOTS]
/usr/bin/docker: stderr [--block-wal-size BLOCK_WAL_SIZE]
/usr/bin/docker: stderr [--block-wal-slots BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr [--journal-size JOURNAL_SIZE]
/usr/bin/docker: stderr [--journal-slots JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr [--osd-ids [OSD_IDS [OSD_IDS ...]]]
/usr/bin/docker: stderr [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
Traceback (most recent call last):
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8331, in <module>
main()
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8319, in main
r = ctx.func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1735, in _infer_config
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1676, in _infer_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1763, in _infer_image
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1663, in _validate_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 5285, in command_ceph_volume
out, err, code = call_throws(ctx, c.run_cmd())
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1465, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
[WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3
</pre>
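<p>The warning could be made far less verbose by surfacing only the root-cause line of the ceph-volume stderr instead of the full docker invocation. A minimal sketch of that idea in Python (an illustration only — the function name and the "last line containing <code>error:</code>" heuristic are assumptions, not cephadm's actual behavior):</p>

```python
def summarize_apply_error(stderr: str, max_len: int = 160) -> str:
    """Pick the most informative line from a verbose ceph-volume failure.

    Heuristic (an assumption, not cephadm's real logic): the last stderr
    line containing 'error:' usually names the root cause; fall back to
    the final non-empty line otherwise.
    """
    lines = [ln.strip() for ln in stderr.splitlines() if ln.strip()]
    for ln in reversed(lines):
        if "error:" in ln:
            return ln[:max_len]
    return lines[-1][:max_len] if lines else ""


sample = (
    "/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] ...\n"
    "/usr/bin/docker: stderr ceph-volume lvm batch: error: "
    "GPT headers found, they must be removed on: /dev/sda"
)
# Reduces the whole paste above to the single actionable line.
print(summarize_apply_error(sample))
```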
CephFS - Bug #48873 (Triaged): test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cl...
https://tracker.ceph.com/issues/48873
2021-01-14T11:09:59Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2021-01-13_11:19:08-rados:cephadm-wip-swagner3-testing-2021-01-12-1316-distro-basic-smithi/5783002/">https://pulpito.ceph.com/swagner-2021-01-13_11:19:08-rados:cephadm-wip-swagner3-testing-2021-01-12-1316-distro-basic-smithi/5783002/</a></p>
<pre>
2021-01-13T11:55:37.749 INFO:tasks.cephfs_test_runner:Starting test: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
2021-01-13T11:55:37.749 DEBUG:teuthology.orchestra.run.smithi043:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph log 'Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config'
2021-01-13T11:55:38.750 DEBUG:teuthology.orchestra.run.smithi043:> sudo systemctl status nfs-server
2021-01-13T11:55:38.777 INFO:teuthology.orchestra.run.smithi043.stdout:* nfs-server.service - NFS server and services
2021-01-13T11:55:38.778 INFO:teuthology.orchestra.run.smithi043.stdout: Loaded: loaded (/lib/systemd/system/nfs-server.service; disabled; vendor preset: enabled)
2021-01-13T11:55:38.778 INFO:teuthology.orchestra.run.smithi043.stdout: Active: inactive (dead)
2021-01-13T11:55:38.778 INFO:teuthology.orchestra.run.smithi043.stdout:
2021-01-13T11:55:38.779 INFO:teuthology.orchestra.run.smithi043.stdout:Jan 13 11:43:35 smithi006 systemd[1]: Starting NFS server and services...
2021-01-13T11:55:38.779 INFO:teuthology.orchestra.run.smithi043.stdout:Jan 13 11:43:35 smithi006 systemd[1]: Started NFS server and services.
2021-01-13T11:55:38.779 INFO:teuthology.orchestra.run.smithi043.stdout:Jan 13 11:55:10 smithi043 systemd[1]: Stopping NFS server and services...
2021-01-13T11:55:38.779 INFO:teuthology.orchestra.run.smithi043.stdout:Jan 13 11:55:10 smithi043 systemd[1]: Stopped NFS server and services.
2021-01-13T11:55:38.780 DEBUG:teuthology.orchestra.run:got remote process result: 3
2021-01-13T11:55:38.781 DEBUG:teuthology.orchestra.run.smithi043:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster create cephfs test
2021-01-13T11:55:39.183 INFO:teuthology.orchestra.run.smithi043.stdout:NFS Cluster Created Successfully
2021-01-13T11:55:49.201 DEBUG:teuthology.orchestra.run.smithi043:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.test
2021-01-13T11:55:49.532 INFO:teuthology.orchestra.run.smithi043.stdout:No daemons reported
2021-01-13T11:56:09.549 DEBUG:teuthology.orchestra.run.smithi043:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.test
2021-01-13T11:56:09.883 INFO:teuthology.orchestra.run.smithi043.stdout:No daemons reported
2021-01-13T11:56:39.900 DEBUG:teuthology.orchestra.run.smithi043:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.test
2021-01-13T11:56:40.234 INFO:teuthology.orchestra.run.smithi043.stdout:No daemons reported
2021-01-13T11:57:20.256 DEBUG:teuthology.orchestra.run.smithi043:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.test
2021-01-13T11:57:20.590 INFO:teuthology.orchestra.run.smithi043.stdout:No daemons reported
2021-01-13T11:58:10.606 DEBUG:teuthology.orchestra.run.smithi043:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.test
2021-01-13T11:58:10.942 INFO:teuthology.orchestra.run.smithi043.stdout:No daemons reported
2021-01-13T11:59:10.958 DEBUG:teuthology.orchestra.run.smithi043:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.test
2021-01-13T11:59:11.291 INFO:teuthology.orchestra.run.smithi043.stdout:No daemons reported
2021-01-13T11:59:11.306 DEBUG:teuthology.orchestra.run.smithi043:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph log 'Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config'
2021-01-13T11:59:12.048 INFO:tasks.cephfs_test_runner:test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) ... FAIL
2021-01-13T11:59:12.049 INFO:tasks.cephfs_test_runner:
2021-01-13T11:59:12.049 INFO:tasks.cephfs_test_runner:======================================================================
2021-01-13T11:59:12.049 INFO:tasks.cephfs_test_runner:FAIL: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
2021-01-13T11:59:12.050 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2021-01-13T11:59:12.050 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2021-01-13T11:59:12.051 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner3-testing-2021-01-12-1316/qa/tasks/cephfs/test_nfs.py", line 462, in test_cluster_set_reset_user_config
2021-01-13T11:59:12.051 INFO:tasks.cephfs_test_runner: self._test_create_cluster()
2021-01-13T11:59:12.051 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner3-testing-2021-01-12-1316/qa/tasks/cephfs/test_nfs.py", line 123, in _test_create_cluster
2021-01-13T11:59:12.052 INFO:tasks.cephfs_test_runner: self._check_nfs_cluster_status('running', 'NFS Ganesha cluster deployment failed')
2021-01-13T11:59:12.052 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner3-testing-2021-01-12-1316/qa/tasks/cephfs/test_nfs.py", line 88, in _check_nfs_cluster_status
2021-01-13T11:59:12.052 INFO:tasks.cephfs_test_runner: self.fail(fail_msg)
2021-01-13T11:59:12.052 INFO:tasks.cephfs_test_runner:AssertionError: NFS Ganesha cluster deployment failed
</pre>
<p>Interestingly, previous calls to _test_create_cluster succeeded.</p>
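<p>The polling cadence visible in the log (intervals growing 10s, 20s, ... 60s before giving up after roughly four minutes) can be sketched as follows; the helper name and signature are illustrative, not the actual qa/tasks/cephfs/test_nfs.py code:</p>

```python
import time


def wait_for_daemons(query, timeout=240, step=10, sleep=time.sleep):
    """Poll query() with linearly growing intervals until it reports
    running daemons, or until `timeout` seconds of waiting accumulate.

    `query` stands in for a check like
    `ceph orch ps --service_name=nfs.test` reporting at least one daemon.
    """
    waited, interval = 0, step
    while waited < timeout:
        if query():
            return True
        sleep(interval)
        waited += interval
        interval += step  # back off: 10s, 20s, 30s, ...
    return False
```

<p>In the failing run every poll returned "No daemons reported", so the equivalent of this loop timed out and the test raised the AssertionError above.</p>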
Infrastructure - Bug #46333 (Resolved): unittest_rgw_dmclock_scheduler: error while loading share...
https://tracker.ceph.com/issues/46333
2020-07-02T15:35:09Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/54797/consoleFull#984723906e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/54797/consoleFull#984723906e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
<pre>
Start 183: unittest_rgw_dmclock_scheduler
/home/jenkins-build/build/workspace/ceph-pull-requests/build/bin/unittest_rgw_dmclock_scheduler: error while loading shared libraries: libboost_thread.so.1.73.0: cannot open shared object file: No such file or directory
...
99% tests passed, 1 tests failed out of 204
Total Test time (real) = 785.78 sec
The following tests FAILED:
183 - unittest_rgw_dmclock_scheduler (Failed)
Errors while running CTest
Build step 'Execute shell' marked build as failure
</pre>
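<p>The failure is the dynamic loader being unable to resolve libboost_thread.so.1.73.0 on the builder. The same probe can be reproduced outside the test binary with <code>ldd</code>, or from Python via ctypes — a small diagnostic sketch (the helper is hypothetical):</p>

```python
import ctypes


def can_load(soname: str) -> bool:
    """True if the dynamic loader can resolve and open `soname`."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False


# On the failing builder this would print False until the Boost 1.73
# runtime library is installed or the runtime link path is fixed.
print(can_load("libboost_thread.so.1.73.0"))
```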
Infrastructure - Bug #45453 (New): 'https://download.ceph.com/debian-octopus focal Release' does ...
https://tracker.ceph.com/issues/45453
2020-05-08T20:28:48Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/">http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/</a></p>
<pre>
2020-05-08T15:24:54.694 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/cephadm -v add-repo --release octopus
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:DEBUG:cephadm:Could not locate podman: podman not found
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo GPG key from https://download.ceph.com/keys/release.asc...
2020-05-08T15:24:54.964 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo file at /etc/apt/sources.list.d/ceph.list...
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ test_install_uninstall
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo apt update
2020-05-08T15:24:55.009 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.139 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
2020-05-08T15:24:55.270 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
2020-05-08T15:24:55.284 INFO:tasks.workunit.client.0.smithi204.stdout:Ign:3 https://download.ceph.com/debian-octopus focal InRelease
2020-05-08T15:24:55.320 INFO:tasks.workunit.client.0.smithi204.stdout:Err:4 https://download.ceph.com/debian-octopus focal Release
2020-05-08T15:24:55.321 INFO:tasks.workunit.client.0.smithi204.stdout: 404 Not Found [IP: 158.69.68.124 443]
2020-05-08T15:24:55.351 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
2020-05-08T15:24:55.434 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
2020-05-08T15:24:56.423 INFO:tasks.workunit.client.0.smithi204.stdout:Reading package lists...
2020-05-08T15:24:56.442 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:E: The repository 'https://download.ceph.com/debian-octopus focal Release' does not have a Release file.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.444 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo yum -y install cephadm
2020-05-08T15:24:56.452 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: yum: command not found
2020-05-08T15:24:56.453 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo dnf -y install cephadm
2020-05-08T15:24:56.459 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: dnf: command not found
2020-05-08T15:24:56.460 DEBUG:teuthology.orchestra.run:got remote process result: 1
</pre>
<p>Are we going to get an Ubuntu repo for focal, or should I disable this test for it?</p>
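<p>A pre-flight check against the standard apt layout (<code>dists/&lt;suite&gt;/Release</code>) would let the workunit skip cleanly instead of failing inside <code>apt update</code>. A sketch — the helper name is an assumption:</p>

```python
def release_url(base: str = "https://download.ceph.com",
                repo: str = "debian-octopus",
                suite: str = "focal") -> str:
    """URL of the apt Release file for `suite`, following the
    dists/<suite>/Release repository layout that apt expects."""
    return f"{base}/{repo}/dists/{suite}/Release"


# An HTTP HEAD on this URL returning 404 is exactly the condition apt
# reported above; checking it first would let the test skip suites that
# download.ceph.com does not (yet) publish.
print(release_url())
```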
bluestore - Bug #45335 (Resolved): cephadm upgrade: OSD.0 is not coming back after restart: rock...
https://tracker.ceph.com/issues/45335
2020-04-29T15:58:52Z
Sebastian Wagner
<pre>
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.842415+0000 mgr.x (mgr.34535) 47 : cephadm [INF] Upgrade: Target is quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537 with id 9c90938ad11a31c5ba9b58ed052bf347591ae047e94bca695e7a022672efd3b9
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.843492+0000 mgr.x (mgr.34535) 48 : cephadm [INF] Upgrade: Checking mgr daemons...
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.848251+0000 mgr.x (mgr.34535) 49 : cephadm [INF] Upgrade: All mgr daemons are up to date.
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.848483+0000 mgr.x (mgr.34535) 50 : cephadm [INF] Upgrade: Checking mon daemons...
2020-04-28T22:15:25.776 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.849302+0000 mgr.x (mgr.34535) 51 : cephadm [INF] Upgrade: Setting container_image for all mon...
2020-04-28T22:15:25.776 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.868867+0000 mgr.x (mgr.34535) 52 : cephadm [INF] Upgrade: All mon daemons are up to date.
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.869043+0000 mgr.x (mgr.34535) 53 : cephadm [INF] Upgrade: Checking crash daemons...
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.869744+0000 mgr.x (mgr.34535) 54 : cephadm [INF] Upgrade: Setting container_image for all crash...
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.870444+0000 mgr.x (mgr.34535) 55 : cephadm [INF] Upgrade: All crash daemons are up to date.
2020-04-28T22:15:25.778 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.870641+0000 mgr.x (mgr.34535) 56 : cephadm [INF] Upgrade: Checking osd daemons...
2020-04-28T22:15:25.778 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cluster 2020-04-28T22:15:24.333492+0000 mon.a (mon.0) 109 : cluster [DBG] mgrmap e25: x(active, since 41s), standbys: y
2020-04-28T22:15:26.521 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.991 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 systemd[1]: Stopping Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c...
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 received signal: Terminated from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 osd.0 51 *** Got signal Terminated ***
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 osd.0 51 *** Immediate shutdown (osd_fast_shutdown=true) ***
2020-04-28T22:15:27.271 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 podman[13417]: 2020-04-28 22:15:26.989657914 +0000 UTC m=+0.182639933 container died 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931 (image=docker.io/ceph/ceph:v15.2.0, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:27.272 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13417]: 2020-04-28 22:15:27.020897016 +0000 UTC m=+0.213879019 container stop 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931 (image=docker.io/ceph/ceph:v15.2.0, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:27.272 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13417]: 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931
2020-04-28T22:15:27.647 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.647 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.654 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.654 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.654 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.660 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.660 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.693 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.693 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.693 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:28.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.64575936 +0000 UTC m=+0.606987472 container create c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.820779587 +0000 UTC m=+0.782007706 container init c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.023 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.862377714 +0000 UTC m=+0.823605831 container start c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.023 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.862442318 +0000 UTC m=+0.823670460 container attach c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.349 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13459]: 2020-04-28 22:15:28.087744802 +0000 UTC m=+1.048972928 container died c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:28.605 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13459]: 2020-04-28 22:15:28.587885902 +0000 UTC m=+1.549114039 container remove c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.606 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Stopped Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c.
2020-04-28T22:15:28.606 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.606 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.607 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:28.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:28 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:28 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:28 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:29.021 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Starting Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c...
2020-04-28T22:15:29.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13562]: Error: no container with name or ID ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0 found: no such container
2020-04-28T22:15:29.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Started Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c.
2020-04-28T22:15:29.322 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.0207602 +0000 UTC m=+0.262426894 container create edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.322 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.120609901 +0000 UTC m=+0.362276575 container init edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.323 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.162161904 +0000 UTC m=+0.403828610 container start edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.323 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.162247399 +0000 UTC m=+0.403914112 container attach edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:29 smithi156 bash[4373]: cluster 2020-04-28T22:15:28.355727+0000 mon.a (mon.0) 144 : cluster [DBG] osdmap e53: 8 total, 7 up, 8 in
2020-04-28T22:15:29.775 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[11812]: audit 2020-04-28T22:15:28.777097+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/vg_nvme/lv_4 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/ln -snf /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-0/block
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: --> ceph-volume lvm activate successful for osd ID: 0
2020-04-28T22:15:29.778 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.444755363 +0000 UTC m=+0.686422056 container died edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:30.180 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.902456456 +0000 UTC m=+1.144123162 container remove edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:30.180 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.137040328 +0000 UTC m=+0.215911961 container create 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.505 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[10946]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:30.506 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[11812]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.253693628 +0000 UTC m=+0.332565244 container init 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.295389095 +0000 UTC m=+0.374260713 container start 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.295458519 +0000 UTC m=+0.374330136 container attach 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:30 smithi156 bash[4373]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:31.077 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[13577]: debug 2020-04-28T22:15:30.801+0000 7f47628adec0 -1 Falling back to public interface
2020-04-28T22:15:31.077 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.074+0000 7f47628adec0 -1 rocksdb: verify_sharding mismatch on sharding. requested = [(L,1,0-,),(O,3,0-13,),(m,3,0-,)] stored = []
2020-04-28T22:15:31.078 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.074+0000 7f47628adec0 -1 bluestore(/var/lib/ceph/osd/ceph-0) _open_db erroring opening db:
2020-04-28T22:15:31.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:31 smithi156 bash[4373]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.731 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[10946]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.731 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[11812]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.732 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.589+0000 7f47628adec0 -1 osd.0 0 OSD:init: unable to mount object store
2020-04-28T22:15:31.732 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.589+0000 7f47628adec0 -1 ** ERROR: osd init failed: (5) Input/output error
2020-04-28T22:15:32.021 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 podman[13761]: 2020-04-28 22:15:31.729840599 +0000 UTC m=+1.808712241 container died 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:32.614 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:32 smithi151 bash[10946]: cluster 2020-04-28T22:15:31.063005+0000 mgr.x (mgr.34535) 64 : cluster [DBG] pgmap v29: 1 pgs: 1 active+undersized+degraded; 0 B data, 4.0 MiB used, 707 GiB / 715 GiB avail; 1/3 objects degraded (33.333%)
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-28_21:52:13-rados-wip-mgfritch-testing-2020-04-28-1427-distro-basic-smithi/4995175/teuthology.log">http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-28_21:52:13-rados-wip-mgfritch-testing-2020-04-28-1427-distro-basic-smithi/4995175/teuthology.log</a></p>
ceph-volume - Feature #44630 (New): cephadm: improve behaviour with virtual disks
https://tracker.ceph.com/issues/44630
2020-03-16T17:44:09Z
Sebastian Wagner
<p>When the disks are virtual, the OSD "identify" action should just print a message: it is a no-op anyway. This could be extended to KVM virtual disks and VMware to cover the bases.</p>
<p>The vendor ID 0x1af4 is the well-known PCI identifier for VirtIO disks, so we should use it in the inventory display.</p>
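<p>A minimal sketch of how the inventory could map that vendor ID to a friendly label (the lookup table and helper names here are illustrative, not ceph-volume's actual API):</p>

```python
# Map well-known virtual-disk PCI vendor IDs to display labels.
# 0x1af4 is the Red Hat (virtio) vendor ID mentioned in the ticket.
KNOWN_VIRTUAL_VENDORS = {
    "0x1af4": "VirtIO Disk",
}

def describe_vendor(vendor_id: str) -> str:
    """Return a friendly label for known virtual-disk vendors,
    falling back to the raw vendor ID."""
    return KNOWN_VIRTUAL_VENDORS.get(vendor_id.lower(), vendor_id)

def is_virtual_disk(vendor_id: str) -> bool:
    """True if the vendor ID belongs to a known virtual-disk device."""
    return vendor_id.lower() in KNOWN_VIRTUAL_VENDORS
```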
Dashboard - Feature #44605 (Resolved): cephadm: RGW: missing dashboard integration
https://tracker.ceph.com/issues/44605
2020-03-13T16:49:23Z
Sebastian Wagner
<p>I think this is a missed opportunity. Whatever parameters need to be set for RGW multisite/zone configurations, we still need a dashboard user with system capabilities, and the dashboard must be updated to use that account.</p>
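<p>A rough sketch of the missing steps, assuming a vanilla setup (the user name, file paths, and exact command forms vary by release; treat this as illustrative, not cephadm's actual behaviour):</p>

```shell
# Create a system user whose keys the dashboard can use for the RGW API.
radosgw-admin user create --uid=dashboard --display-name="Dashboard" --system

# Take the access/secret key from the JSON output above, then point the
# dashboard at that account.
echo -n "$ACCESS_KEY" > /tmp/rgw-access-key
echo -n "$SECRET_KEY" > /tmp/rgw-secret-key
ceph dashboard set-rgw-api-access-key -i /tmp/rgw-access-key
ceph dashboard set-rgw-api-secret-key -i /tmp/rgw-secret-key
```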
ceph-volume - Bug #44356 (Resolved): ceph-volume inventory: KeyError: 'ceph.cluster_name'
https://tracker.ceph.com/issues/44356
2020-02-28T18:59:45Z
Sebastian Wagner
<pre>
Module 'cephadm' has failed: cephadm exited with an error code: 1, stderr:INFO:cephadm:/usr/bin/podman:stderr WARNING: The same type, major and minor should not be used for multiple devices.
INFO:cephadm:/usr/bin/podman:stderr WARNING: The same type, major and minor should not be used for multiple devices.
INFO:cephadm:/usr/bin/podman:stderr --> KeyError: 'ceph.cluster_name'
Traceback (most recent call last):
File "<stdin>", line 3394, in <module>
File "<stdin>", line 688, in _infer_fsid
File "<stdin>", line 2202, in command_ceph_volume
File "<stdin>", line 513, in call_throws
RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --privileged --group-add=disk -e CONTAINER_IMAGE=registry.suse.de/suse/sle-15-sp2/update/products/ses7/update/cr/containers/ses/7/ceph/ceph:latest -e NODE_NAME=hses-node1 -v /var/run/ceph/002c389e-54fd-11ea-a99f-52540044d765:/var/run/ceph:z -v /var/log/ceph/002c389e-54fd-11ea-a99f-52540044d765:/var/log/ceph:z -v /var/lib/ceph/002c389e-54fd-11ea-a99f-52540044d765/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm --entrypoint /usr/sbin/ceph-volume registry.suse.de/suse/sle-15-sp2/update/products/ses7/update/cr/containers/ses/7/ceph/ceph:latest inventory --format=json
</pre>
<pre>
hses-node1:~ # ceph -s
cluster:
id: 002c389e-54fd-11ea-a99f-52540044d765
health: HEALTH_ERR
1 filesystem is offline
1 filesystem is online with fewer MDS than max_mds
Module 'cephadm' has failed: cephadm exited with an error code: 1, stderr:INFO:cephadm:/usr/bin/podman:stderr WARNING: The same type, major and minor should not be used for multiple devices.
INFO:cephadm:/usr/bin/podman:stderr WARNING: The same type, major and minor should not be used for multiple devices.
INFO:cephadm:/usr/bin/podman:stderr --> KeyError: 'ceph.cluster_name'
Traceback (most recent call last):
File "<stdin>", line 3394, in <module>
File "<stdin>", line 688, in _infer_fsid
File "<stdin>", line 2202, in command_ceph_volume
File "<stdin>", line 513, in call_throws
RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --privileged --group-add=disk -e CONTAINER_IMAGE=registry.suse.de/suse/sle-15-sp2/update/products/ses7/update/cr/containers/ses/7/ceph/ceph:latest -e NODE_NAME=hses-node1 -v /var/run/ceph/002c389e-54fd-11ea-a99f-52540044d765:/var/run/ceph:z -v /var/log/ceph/002c389e-54fd-11ea-a99f-52540044d765:/var/log/ceph:z -v /var/lib/ceph/002c389e-54fd-11ea-a99f-52540044d765/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm --entrypoint /usr/sbin/ceph-volume registry.suse.de/suse/sle-15-sp2/update/products/ses7/update/cr/containers/ses/7/ceph/ceph:latest inventory --format=json
services:
mon: 4 daemons, quorum hses-node1,hses-node2,hses-node3,hses-node4 (age 11m)
mgr: hses-node1.jxzdin(active, since 13m), standbys: hses-node1.aogwfz, hses-node2.dlyvwy, hses-node3.delhzp, hses-node4.vgmgec
mds: myfs:0
osd: 8 osds: 8 up (since 6m), 8 in (since 6m)
rgw: 4 daemons active (default.default.hses-node1.sirofe, default.default.hses-node2.kvzapg, default.default.hses-node3.dhobfn, default.default.hses-node4.cuulnm)
task status:
data:
pools: 6 pools, 168 pgs
objects: 189 objects, 5.8 KiB
usage: 8.2 GiB used, 152 GiB / 160 GiB avail
pgs: 168 active+clean
</pre>
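<p>The KeyError suggests the inventory code indexes the LV tag dictionary directly; LVs that were not created by ceph-volume carry no <code>ceph.*</code> tags. A minimal sketch of the defensive lookup (the helper name is ours, not ceph-volume's):</p>

```python
def cluster_name_from_tags(lv_tags: dict) -> str:
    """Read the cluster name from LVM tags, tolerating LVs that were
    not created by ceph-volume and therefore carry no ceph.* tags."""
    # dict.get with a default avoids the KeyError seen in the traceback.
    return lv_tags.get("ceph.cluster_name", "ceph")
```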
ceph-volume - Bug #44096 (Resolved): lvm prepare doesn't create vg and thus does not pass vg name...
https://tracker.ceph.com/issues/44096
2020-02-12T12:00:58Z
Sebastian Wagner
<p>Extracted from: <a class="external" href="https://tracker.ceph.com/issues/44028">https://tracker.ceph.com/issues/44028</a></p>
<pre>
$ ceph-volume lvm prepare --bluestore --data /dev/sdc --no-systemd
INFO:cephadm:/usr/bin/docker:stderr unable to read label for /dev/sdc: (2) No such file or directory
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6
INFO:cephadm:/usr/bin/docker:stderr stderr: Volume group name has invalid characters
INFO:cephadm:/usr/bin/docker:stderr Run `lvcreate --help' for more information.
INFO:cephadm:/usr/bin/docker:stderr --> Was unable to complete a new OSD, will rollback changes
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
INFO:cephadm:/usr/bin/docker:stderr stderr: purged osd.0
INFO:cephadm:/usr/bin/docker:stderr --> RuntimeError: command returned non-zero exit status: 3
</pre>
<p>Looks like "<code>100%FREE</code>" is often used in the ceph-ansible context: <a class="external" href="https://github.com/search?p=1&q=ceph+100%25FREE&type=Code">https://github.com/search?p=1&q=ceph+100%25FREE&type=Code</a></p>
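<p>The failing call ends with <code>-n osd-block-&lt;uuid&gt;</code> and no volume group, so lvcreate apparently mis-parses its arguments and complains about an invalid VG name. A sketch of composing the command with an explicit guard (illustrative, not ceph-volume's code):</p>

```python
def build_lvcreate_argv(lv_name: str, vg_name: str) -> list:
    """Compose the lvcreate command line, refusing to proceed without a
    volume group (the missing positional argument in the bug above)."""
    if not vg_name:
        raise ValueError("a volume group name is required for lvcreate")
    return ["/usr/sbin/lvcreate", "--yes", "-l", "100%FREE",
            "-n", lv_name, vg_name]
```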
ceph-volume - Bug #43899 (Resolved): cephadm: Remove the clutch between Teuthology and ceph-volume
https://tracker.ceph.com/issues/43899
2020-01-30T13:07:13Z
Sebastian Wagner
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/commit/614c0eb77eb44dd74165613c7fe818b8330cf7ba">https://github.com/ceph/ceph/commit/614c0eb77eb44dd74165613c7fe818b8330cf7ba</a></li>
</ul>
ceph-volume - Bug #43858 (New): ceph-volume: lvm zap requires `/dev/` prefix
https://tracker.ceph.com/issues/43858
2020-01-28T12:38:22Z
Sebastian Wagner
<p>This differs from the other lvm commands:</p>
<p>preparing:</p>
<pre>
root@ubuntu:~# losetup -f
/dev/loop12
root@ubuntu:~# LANG=C losetup /dev/loop12 disk-image
root@ubuntu:~# pvcreate /dev/loop12
Physical volume "/dev/loop12" successfully created.
root@ubuntu:~# vgcreate MyVg /dev/loop12
Volume group "MyVg" successfully created
root@ubuntu:~# lvcreate --size 5500M --name MyLV MyVg
Logical volume "MyLV" created.
root@ubuntu:~# ll /dev/MyVg/MyLV
lrwxrwxrwx 1 root root 7 Jan 28 11:38 /dev/MyVg/MyLV -> ../dm-0
root@ubuntu:~# vgs -o vg_tags MyVg
VG Tags
root@ubuntu:~# vgs -o lv_tags MyVg
LV Tags
</pre>
<pre>
# ceph-volume lvm zap MyVg/MyLV
stderr: lsblk: MyVg/MyLV: not a block device
stderr: blkid: error: MyVg/MyLV: No such file or directory
stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
[--osd-fsid OSD_FSID]
[DEVICES [DEVICES ...]]
ceph-volume lvm zap: error: Unable to proceed with non-existing device: MyVg/MyLV
</pre>
<pre>
# ceph-volume lvm zap /dev/MyVg/MyLV
--> Zapping: /dev/MyVg/MyLV
Running command: /bin/dd if=/dev/zero of=/dev/MyVg/MyLV bs=1M count=10
stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00413076 s, 2.5 GB/s
--> Zapping successful for: <LV: /dev/MyVg/MyLV>
</pre>
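<p>A small sketch of the normalization the other subcommands effectively perform, mapping a bare <code>vg/lv</code> name to its <code>/dev/</code> path before handing it to lsblk/blkid (hypothetical helper, not ceph-volume's code):</p>

```python
import os

def normalize_lv_name(device: str) -> str:
    """Turn a bare 'vg/lv' name into '/dev/vg/lv'; absolute paths and
    plain device names are returned unchanged."""
    if device.startswith("/"):
        return device
    if "/" in device:  # looks like a vg/lv pair
        return os.path.join("/dev", device)
    return device
```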
ceph-cm-ansible - Bug #43738 (New): cephadm: conflicts between attempted installs of libstoragemg...
https://tracker.ceph.com/issues/43738
2020-01-21T10:27:32Z
Sebastian Wagner
<pre>
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 420, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 263, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 290, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 314, in _handle_failure
raise AnsibleFailedError(failures)
failure_reason: '86 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz
conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
Error Summary
-------------
''}}Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)RepresenterError: (''cannot represent an object'', u''/'')Failure
object was: {''smithi161.front.sepia.ceph.com'': {''_ansible_no_log'': False, ''changed'':
False, u''results'': [], u''rc'': 1, u''invocation'': {u''module_args'': {u''install_weak_deps'':
True, u''autoremove'': False, u''lock_timeout'': 0, u''download_dir'': None, u''install_repoquery'':
True, u''enable_plugin'': [], u''update_cache'': False, u''disable_excludes'': None,
u''exclude'': [], u''installroot'': u''/'', u''allow_downgrade'': False, u''name'':
[u''@core'', u''@base'', u''dnf-utils'', u''git-all'', u''sysstat'', u''libedit'',
u''boost-thread'', u''xfsprogs'', u''gdisk'', u''parted'', u''libgcrypt'', u''fuse-libs'',
u''openssl'', u''libuuid'', u''attr'', u''ant'', u''lsof'', u''gettext'', u''bc'',
u''xfsdump'', u''blktrace'', u''usbredir'', u''podman'', u''podman-docker'', u''libev-devel'',
u''valgrind'', u''nfs-utils'', u''ncurses-devel'', u''gcc'', u''git'', u''python3-nose'',
u''python3-virtualenv'', u''genisoimage'', u''qemu-img'', u''qemu-kvm-core'', u''qemu-kvm-block-rbd'',
u''libacl-devel'', u''dbench'', u''autoconf''], u''download_only'': False, u''bugfix'':
False, u''list'': None, u''disable_gpg_check'': False, u''conf_file'': None, u''update_only'':
False, u''state'': u''present'', u''disablerepo'': [], u''releasever'': None, u''disable_plugin'':
[], u''enablerepo'': [], u''skip_broken'': False, u''security'': False, u''validate_certs'':
True}}, u''failures'': [], u''msg'': u''Unknown Error occured: Transaction check
error:
file /usr/lib/tmpfiles.d/libstoragemgmt.conf conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/doc/libstoragemgmt/NEWS conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmcli.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmd.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/simc_lsmplugin.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log</a></p>
<ul>
<li>os_type: rhel</li>
<li>os_version: '8.0'</li>
<li>description: rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml}</li>
</ul>
<p>As this is an ansible error, I'm not sure if this is a cephadm issue. Any clues?</p>
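<p>The "Transaction check error" is a classic multilib conflict: both the i686 and x86_64 builds of libstoragemgmt were selected in the same transaction. One common workaround (illustrative only, not a confirmed fix for this job) is to keep dnf from pulling in 32-bit packages at all:</p>

```shell
# Exclude 32-bit packages so only the x86_64 libstoragemgmt is installed.
# 'exclude' is accepted in the [main] section by both yum and dnf.
echo "exclude=*.i686" >> /etc/dnf/dnf.conf
```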
Ceph - Bug #42528 (Resolved): python-common build failure: File not found: ceph-*.egg-info
https://tracker.ceph.com/issues/42528
2019-10-29T12:19:41Z
Sebastian Wagner
<pre>
RPM build errors:
File not found: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph
File not found by glob: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph-*.egg-info
+ rm -fr /tmp/install-deps.1830
Build step 'Execute shell' marked build as failure
</pre>
<pre>
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/ceph
copying ceph/__init__.py -> build/lib/ceph
copying ceph/exceptions.py -> build/lib/ceph
creating build/lib/ceph/deployment
copying ceph/deployment/__init__.py -> build/lib/ceph/deployment
copying ceph/deployment/drive_group.py -> build/lib/ceph/deployment
copying ceph/deployment/ssh_orchestrator.py -> build/lib/ceph/deployment
creating build/lib/ceph/tests
copying ceph/tests/__init__.py -> build/lib/ceph/tests
copying ceph/tests/test_drive_group.py -> build/lib/ceph/tests
running install_lib
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/exceptions.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/ssh_orchestrator.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/test_drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/exceptions.py to exceptions.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/drive_group.py to drive_group.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/ssh_orchestrator.py to ssh_orchestrator.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/test_drive_group.py to test_drive_group.cpython-36.pyc
running install_egg_info
running egg_info
creating ceph.egg-info
writing ceph.egg-info/PKG-INFO
writing dependency_links to ceph.egg-info/dependency_links.txt
writing requirements to ceph.egg-info/requires.txt
writing top-level names to ceph.egg-info/top_level.txt
writing manifest file 'ceph.egg-info/SOURCES.txt'
reading manifest file 'ceph.egg-info/SOURCES.txt'
writing manifest file 'ceph.egg-info/SOURCES.txt'
Copying ceph.egg-info to /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph-1.0.0-py3.6.egg-info
running install_scripts
Traceback (most recent call last):
File "setup.py", line 45, in <module>
'Programming Language :: Python :: 3.6',
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
dist = best[req.key] = env.best_match(req, self, installer)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
return self.obtain(req, installer) # try and download/install
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
return installer(requirement)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
return cmd.easy_install(req)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 623, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 653, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 849, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1130, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1115, in run_setup
run_setup(setup_script, args)
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 69, in run_setup
lambda: execfile(
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 120, in run
return func()
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 71, in <lambda>
{'__file__':setup_script, '__name__':'__main__'}
File "setup.py", line 21, in <module>
packages=find_packages(),
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 269, in __init__
_Distribution.__init__(self,attrs)
File "/usr/lib64/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 302, in finalize_options
ep.load()(self, ep.name, value)
File "build/bdist.linux-x86_64/egg/setuptools_scm/integration.py", line 9, in version_keyword
File "build/bdist.linux-x86_64/egg/setuptools_scm/version.py", line 66, in _warn_if_setuptools_outdated
setuptools_scm.version.SetuptoolsOutdatedWarning: your setuptools is too old (<12)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull</a></p>
Messengers - Bug #38605 (Can't reproduce): possibly msgr? no_reply to mgr
https://tracker.ceph.com/issues/38605
2019-03-06T14:54:10Z
Sebastian Wagner
<p>I ran a teuthology test locally, and it failed with a timeout.</p>
<p>I've attached the mgr, the mon (truncated to 1MB) and the teuthology log.</p>
<p>also from the mon log:</p>
<pre>
2019-03-06 15:23:04.204 7f72c901c700 7 mon.a@0(leader).log v484 update_from_paxos applying incremental log 484 2019-03-06 15:22:52.668037 mgr.y (mgr.7003) 14 : audit [DBG] from='client.7079 -' entity='client.admin' cmd=[{"prefix": "device ls-lights", "target": ["mgr", ""]}]: dispatch
2019-03-06 15:23:04.208 7f72c901c700 20 mon.a@0(leader).log v484 update_from_paxos logging for channel 'audit' to file '/home/sebastian/Repos/ceph/build/out/cluster.mon.a.log'
2019-03-06 15:23:04.208 7f72c901c700 15 mon.a@0(leader).log v484 update_from_paxos logging for 1 channels
2019-03-06 15:23:04.208 7f72c901c700 15 mon.a@0(leader).log v484 update_from_paxos channel 'audit' logging 174 bytes
2019-03-06 15:23:04.208 7f72c901c700 10 mon.a@0(leader).log v484 check_subs
2019-03-06 15:23:04.208 7f72c901c700 10 mon.a@0(leader).paxosservice(monmap 1..1) refresh
2019-03-06 15:23:04.208 7f72c901c700 10 mon.a@0(leader).paxosservice(auth 1..10) refresh
2019-03-06 15:23:04.208 7f72c901c700 10 mon.a@0(leader).auth v10 update_from_paxos
2019-03-06 15:23:04.208 7f72c901c700 10 mon.a@0(leader).paxosservice(mgr 1..93) refresh
2019-03-06 15:23:04.212 7f72c901c700 10 mon.a@0(leader).config load_config got 44 keys
2019-03-06 15:23:04.212 7f72c901c700 20 mon.a@0(leader).config load_config config map:
</pre>
ceph-volume - Bug #37390 (Resolved): c-v inventory returns invalid JSON
https://tracker.ceph.com/issues/37390
2018-11-26T13:16:39Z
Sebastian Wagner
<p>Printing a Python data structure with print() emits its repr(), which uses single quotes by default; that output is not valid JSON.</p>
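<p>The fix is to serialize with <code>json.dumps()</code> instead of printing the Python repr. A minimal illustration:</p>

```python
import json

report = {"path": "/dev/sdc", "available": True}

# print() shows the Python repr: single quotes, True/False capitalized.
print(report)               # {'path': '/dev/sdc', 'available': True}

# json.dumps() emits double-quoted, spec-compliant JSON.
print(json.dumps(report))   # {"path": "/dev/sdc", "available": true}
```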