Ceph : Issues
https://tracker.ceph.com/
ceph-volume - Bug #53524 (New): CEPHADM_APPLY_SPEC_FAIL is very verbose
https://tracker.ceph.com/issues/53524
2021-12-08T11:02:11Z
Sebastian Wagner
<pre>
root@service-01-08020:~# ceph health detail
HEALTH_WARN Failed to apply 1 service(s): osd.hybrid; OSD count 0 < osd_pool_default_size 3
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.hybrid
osd.hybrid: cephadm exited with an error code: 1, stderr:Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr [--auto] [--no-auto] [--bluestore] [--filestore]
/usr/bin/docker: stderr [--report] [--yes]
/usr/bin/docker: stderr [--format {json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr [--crush-device-class CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr [--no-systemd]
/usr/bin/docker: stderr [--osds-per-device OSDS_PER_DEVICE]
/usr/bin/docker: stderr [--data-slots DATA_SLOTS]
/usr/bin/docker: stderr [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
/usr/bin/docker: stderr [--block-db-size BLOCK_DB_SIZE]
/usr/bin/docker: stderr [--block-db-slots BLOCK_DB_SLOTS]
/usr/bin/docker: stderr [--block-wal-size BLOCK_WAL_SIZE]
/usr/bin/docker: stderr [--block-wal-slots BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr [--journal-size JOURNAL_SIZE]
/usr/bin/docker: stderr [--journal-slots JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr [--osd-ids [OSD_IDS [OSD_IDS ...]]]
/usr/bin/docker: stderr [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
Traceback (most recent call last):
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8331, in <module>
main()
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8319, in main
r = ctx.func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1735, in _infer_config
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1676, in _infer_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1763, in _infer_image
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1663, in _validate_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 5285, in command_ceph_volume
out, err, code = call_throws(ctx, c.run_cmd())
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1465, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
[WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3
</pre>
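Not part of the report, but for context: ceph-volume refuses devices that still carry GPT headers, so the usual way to unblock the spec is to wipe the partition table first. A minimal sketch, assuming /dev/sda on storage-01-08002 as in the error above (sgdisk ships with the gdisk package; both commands destroy the partition table):
<pre>
# wipe the stale GPT headers flagged by ceph-volume
sgdisk --zap-all /dev/sda

# or let the orchestrator do it, which also tears down any LVM state
ceph orch device zap storage-01-08002 /dev/sda --force
</pre>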
Dashboard - Bug #47231 (New): ERROR: setUpClass (tasks.mgr.dashboard.test_cephfs.CephfsTest)
https://tracker.ceph.com/issues/47231
2020-09-01T10:42:27Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-api/2144/">https://jenkins.ceph.com/job/ceph-api/2144/</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-api/3204/">https://jenkins.ceph.com/job/ceph-api/3204/</a></p>
<pre>
2020-09-01 09:18:35,239.239 INFO:__main__:----------------------------------------------------------------------
2020-09-01 09:18:35,239.239 INFO:__main__:Traceback (most recent call last):
2020-09-01 09:18:35,240.240 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/mgr/dashboard/helper.py", line 149, in setUpClass
2020-09-01 09:18:35,240.240 INFO:__main__: cls._assign_ports("dashboard", "ssl_server_port")
2020-09-01 09:18:35,240.240 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/mgr/mgr_test_case.py", line 218, in _assign_ports
2020-09-01 09:18:35,240.240 INFO:__main__: cls.wait_until_true(is_available, timeout=30)
2020-09-01 09:18:35,240.240 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/ceph_test_case.py", line 196, in wait_until_true
2020-09-01 09:18:35,240.240 INFO:__main__: raise TestTimeoutError("Timed out after {0}s".format(elapsed))
2020-09-01 09:18:35,240.240 INFO:__main__:tasks.ceph_test_case.TestTimeoutError: Timed out after 30s
2020-09-01 09:18:35,241.241 INFO:__main__:
2020-09-01 09:18:35,241.241 INFO:__main__:----------------------------------------------------------------------
2020-09-01 09:18:35,241.241 INFO:__main__:Ran 14 tests in 1278.060s
2020-09-01 09:18:35,241.241 INFO:__main__:
2020-09-01 09:18:35,241.241 INFO:__main__:
</pre>
Dashboard - Bug #46848 (New): ERROR: test_perf_counters_list (tasks.mgr.dashboard.test_perf_count...
https://tracker.ceph.com/issues/46848
2020-08-06T15:58:27Z
Sebastian Wagner
<pre>
2020-08-06 12:13:06,380.380 INFO:__main__:----------------------------------------------------------------------
2020-08-06 12:13:06,381.381 INFO:__main__:Traceback (most recent call last):
2020-08-06 12:13:06,381.381 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/mgr/dashboard/helper.py", line 191, in setUp
2020-08-06 12:13:06,381.381 INFO:__main__: self.wait_for_health_clear(self.TIMEOUT_HEALTH_CLEAR)
2020-08-06 12:13:06,381.381 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/ceph_test_case.py", line 162, in wait_for_health_clear
2020-08-06 12:13:06,382.382 INFO:__main__: self.wait_until_true(is_clear, timeout)
2020-08-06 12:13:06,382.382 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/ceph_test_case.py", line 194, in wait_until_true
2020-08-06 12:13:06,383.383 INFO:__main__: raise RuntimeError("Timed out after {0}s".format(elapsed))
2020-08-06 12:13:06,384.384 INFO:__main__:RuntimeError: Timed out after 60s
2020-08-06 12:13:06,384.384 INFO:__main__:
2020-08-06 12:13:06,384.384 INFO:__main__:----------------------------------------------------------------------
2020-08-06 12:13:06,386.386 INFO:__main__:Ran 117 tests in 1744.663s
2020-08-06 12:13:06,386.386 INFO:__main__:
2020-08-06 12:13:06,387.387 INFO:__main__:
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-api/278/">https://jenkins.ceph.com/job/ceph-api/278/</a></p>
Dashboard - Bug #46797 (New): ERROR: test_pool_update_metadata (tasks.mgr.dashboard.test_pool.Poo...
https://tracker.ceph.com/issues/46797
2020-07-31T10:43:42Z
Sebastian Wagner
<pre>
2020-07-31 05:16:37,145.145 INFO:__main__:----------------------------------------------------------------------
2020-07-31 05:16:37,145.145 INFO:__main__:Traceback (most recent call last):
2020-07-31 05:16:37,146.146 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_pool.py", line 327, in test_pool_update_metadata
2020-07-31 05:16:37,146.146 INFO:__main__: with self.__yield_pool(pool_name):
2020-07-31 05:16:37,146.146 INFO:__main__: File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
2020-07-31 05:16:37,146.146 INFO:__main__: return next(self.gen)
2020-07-31 05:16:37,146.146 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_pool.py", line 58, in __yield_pool
2020-07-31 05:16:37,147.147 INFO:__main__: data = self._create_pool(name, data)
2020-07-31 05:16:37,147.147 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_pool.py", line 77, in _create_pool
2020-07-31 05:16:37,147.147 INFO:__main__: self._task_post('/api/pool/', data)
2020-07-31 05:16:37,147.147 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 337, in _task_post
2020-07-31 05:16:37,147.147 INFO:__main__: return cls._task_request('POST', url, data, timeout)
2020-07-31 05:16:37,148.148 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 317, in _task_request
2020-07-31 05:16:37,148.148 INFO:__main__: .format(task_name, task_metadata, _res))
2020-07-31 05:16:37,148.148 INFO:__main__:Exception: Waiting for task (pool/create, {'pool_name': 'pool_update_metadata'}) to finish timed out. {'executing_tasks': [{'name': 'pool/create', 'metadata': {'pool_name': 'pool_update_metadata'}, 'begin_time': '2020-07-31T05:14:56.624619Z', 'progress': 0}, {'name': 'progress/PG autoscaler decreasing pool 11 PGs from 32 to 8', 'metadata': {'pool': 11}, 'begin_time': '2020-07-31T05:12:39.386827Z', 'progress': 45}], 'finished_tasks': [{'name': 'pool/create', 'metadata': {'pool_name': 'pool_update_configuration'}, 'begin_time': '2020-07-31T05:14:35.870663Z', 'end_time': '2020-07-31T05:14:43.978899Z', 'duration': 8.108235836029053, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'pool_update_compression'}, 'begin_time': '2020-07-31T05:14:08.247703Z', 'end_time': '2020-07-31T05:14:18.308763Z', 'duration': 10.061060190200806, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'progress/PG autoscaler increasing pool 15 PGs from 8 to 32', 'metadata': {'pool': 15}, 'begin_time': '2020-07-31T05:12:40.733971Z', 'end_time': '2020-07-31T05:13:40.765727Z', 'duration': 60.031755685806274, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'dashboard_pool_quota2'}, 'begin_time': '2020-07-31T05:13:06.829904Z', 'end_time': '2020-07-31T05:13:11.590168Z', 'duration': 4.760263919830322, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'dashboard_pool_quota1'}, 'begin_time': '2020-07-31T05:13:02.254149Z', 'end_time': '2020-07-31T05:13:03.345344Z', 'duration': 1.0911946296691895, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'dashboard_pool3'}, 'begin_time': '2020-07-31T05:12:26.802483Z', 'end_time': '2020-07-31T05:12:42.674605Z', 'duration': 15.872122049331665, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'sadfs'}, 'begin_time': '2020-07-31T05:12:22.019784Z', 'end_time': '2020-07-31T05:12:22.030813Z', 'duration': 0.011028766632080078, 'progress': 0, 'success': False, 'ret_value': None, 'exception': {'detail': "[errno -2] specified rule dnf doesn't exist"}}, {'name': 'progress/Rebalancing after osd.0 marked in', 'metadata': {'osd': 0}, 'begin_time': '2020-07-31T05:09:13.529738Z', 'end_time': '2020-07-31T05:09:22.037584Z', 'duration': 8.507845878601074, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'progress/Rebalancing after osd.0 marked out', 'metadata': {'osd': 0}, 'begin_time': '2020-07-31T05:09:12.471868Z', 'end_time': '2020-07-31T05:09:13.529236Z', 'duration': 1.057368516921997, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'progress/apply_drivesgroups', 'metadata': {'origin': 'orchestrator'}, 'begin_time': '2020-07-31T05:08:51.012334Z', 'end_time': '2020-07-31T05:08:51.014928Z', 'duration': 0.0025937557220458984, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}]}
2020-07-31 05:16:37,148.148 INFO:__main__:
2020-07-31 05:16:37,149.149 INFO:__main__:----------------------------------------------------------------------
2020-07-31 05:16:37,149.149 INFO:__main__:Ran 138 tests in 2056.558s
2020-07-31 05:16:37,149.149 INFO:__main__:
2020-07-31 05:16:37,149.149 INFO:__main__:
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4785/">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4785/</a></p>
Dashboard - Bug #46735 (New): FAIL: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)
https://tracker.ceph.com/issues/46735
2020-07-28T10:55:14Z
Sebastian Wagner
<p>- <a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4429/">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4429/</a><br />- <a class="external" href="https://pulpito.ceph.com/jafaj-2020-08-26_09:07:46-rados-wip-jan-testing-2020-08-26-0905-distro-basic-smithi/5377674/">https://pulpito.ceph.com/jafaj-2020-08-26_09:07:46-rados-wip-jan-testing-2020-08-26-0905-distro-basic-smithi/5377674/</a><br />- <a class="external" href="https://pulpito.ceph.com/laura-2020-09-04_11:10:48-rados:dashboard-wip-laura-testing-34831-35785-distro-basic-smithi/">https://pulpito.ceph.com/laura-2020-09-04_11:10:48-rados:dashboard-wip-laura-testing-34831-35785-distro-basic-smithi/</a></p>
<pre>
2020-07-27 15:07:37,370.370 INFO:__main__:Starting test: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest)
2020-07-27 15:07:37,371.371 INFO:__main__:Running ['./bin/ceph', 'log', 'Starting test tasks.mgr.dashboard.test_rgw.RgwBucketTest.test_all']
2020-07-27 15:07:38,636.636 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2020-07-27 15:07:44,230.230 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2020-07-27 15:07:53,170.170 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2020-07-27 15:07:53,761.761 INFO:tasks.mgr.dashboard.helper:Request POST to https://slave-ubuntu10.front.sepia.ceph.com:7789/api/rgw/bucket
/tmp/tmp.AGPrgqSy3w/venv/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
2020-07-27 15:07:54,130.130 INFO:tasks.mgr.dashboard.helper:Request GET to https://slave-ubuntu10.front.sepia.ceph.com:7789/api/rgw/bucket
/tmp/tmp.AGPrgqSy3w/venv/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
2020-07-27 15:07:54,215.215 INFO:tasks.mgr.dashboard.helper:Request GET to https://slave-ubuntu10.front.sepia.ceph.com:7789/api/rgw/bucket/teuth-test-bucket
/tmp/tmp.AGPrgqSy3w/venv/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
2020-07-27 15:07:54,409.409 INFO:tasks.mgr.dashboard.helper:Request PUT to https://slave-ubuntu10.front.sepia.ceph.com:7789/api/rgw/bucket/teuth-test-bucket
/tmp/tmp.AGPrgqSy3w/venv/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
2020-07-27 15:07:55,083.083 INFO:tasks.mgr.dashboard.helper:Request GET to https://slave-ubuntu10.front.sepia.ceph.com:7789/api/rgw/bucket/teuth-test-bucket
/tmp/tmp.AGPrgqSy3w/venv/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
2020-07-27 15:07:55,190.190 INFO:tasks.mgr.dashboard.helper:Request PUT to https://slave-ubuntu10.front.sepia.ceph.com:7789/api/rgw/bucket/teuth-test-bucket
/tmp/tmp.AGPrgqSy3w/venv/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
2020-07-27 15:07:55,584.584 INFO:tasks.mgr.dashboard.helper:Request GET to https://slave-ubuntu10.front.sepia.ceph.com:7789/api/rgw/bucket/teuth-test-bucket
/tmp/tmp.AGPrgqSy3w/venv/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
2020-07-27 15:08:01,686.686 INFO:tasks.mgr.dashboard.helper:Request PUT to https://slave-ubuntu10.front.sepia.ceph.com:7789/api/rgw/bucket/teuth-test-bucket
/tmp/tmp.AGPrgqSy3w/venv/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
2020-07-27 15:08:08,000.000 ERROR:tasks.mgr.dashboard.helper:Request response: {"detail": "Bad MFA credentials: RGW REST API failed request with status code 403\n(b'{\"Code\":\"AccessDenied\",\"BucketName\":\"teuth-test-bucket\",\"RequestId\":\"tx00000'\n b'0000000000000031-005f1eedd7-1158-default\",\"HostId\":\"1158-default-default\"}')", "component": "rgw"}
2020-07-27 15:08:08,001.001 INFO:__main__:Running ['./bin/ceph', 'log', 'Ended test tasks.mgr.dashboard.test_rgw.RgwBucketTest.test_all']
2020-07-27 15:08:09,072.072 INFO:__main__:test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest) ... FAIL
2020-07-27 15:08:09,072.072 INFO:__main__:Stopped test: test_all (tasks.mgr.dashboard.test_rgw.RgwBucketTest) in 31.701467s
2020-07-27 15:08:09,073.073 INFO:__main__:Running ['./bin/radosgw-admin', 'user', 'rm', '--tenant', 'testx', '--uid=teuth-test-user', '--purge-data']
2020-07-27 15:08:19,107.107 INFO:__main__:Running ['./bin/radosgw-admin', 'user', 'rm', '--tenant', 'testx2', '--uid=teuth-test-user2', '--purge-data']
2020-07-27 15:08:22,371.371 INFO:__main__:Running ['./bin/radosgw-admin', 'user', 'rm', '--uid', 'admin']
2020-07-27 15:08:25,605.605 INFO:__main__:Running ['./bin/radosgw-admin', 'user', 'rm', '--uid=teuth-test-user', '--purge-data']
2020-07-27 15:08:28,899.899 INFO:__main__:
2020-07-27 15:08:28,900.900 INFO:__main__:----------------------------------------------------------------------
2020-07-27 15:08:28,900.900 INFO:__main__:Traceback (most recent call last):
2020-07-27 15:08:28,901.901 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_rgw.py", line 277, in test_all
2020-07-27 15:08:28,901.901 INFO:__main__: self.assertStatus(200)
2020-07-27 15:08:28,901.901 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 386, in assertStatus
2020-07-27 15:08:28,901.901 INFO:__main__: self.assertEqual(self._resp.status_code, status)
2020-07-27 15:08:28,901.901 INFO:__main__:AssertionError: 400 != 200
2020-07-27 15:08:28,902.902 INFO:__main__:
2020-07-27 15:08:28,902.902 INFO:__main__:----------------------------------------------------------------------
2020-07-27 15:08:28,902.902 INFO:__main__:Ran 193 tests in 5318.217s
2020-07-27 15:08:28,902.902 INFO:__main__:
2020-07-27 15:08:28,903.903 INFO:__main__:
</pre>
Dashboard - Bug #46686 (New): ERROR: setUpClass (tasks.mgr.dashboard.test_perf_counters.PerfCount...
https://tracker.ceph.com/issues/46686
2020-07-23T08:50:22Z
Sebastian Wagner
<pre>
2020-07-23 06:26:36,824.824 INFO:__main__:======================================================================
2020-07-23 06:26:36,824.824 INFO:__main__:ERROR: setUpClass (tasks.mgr.dashboard.test_perf_counters.PerfCountersControllerTest)
2020-07-23 06:26:36,824.824 INFO:__main__:----------------------------------------------------------------------
2020-07-23 06:26:36,824.824 INFO:__main__:Traceback (most recent call last):
2020-07-23 06:26:36,825.825 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 150, in setUpClass
2020-07-23 06:26:36,825.825 INFO:__main__: cls._load_module("dashboard")
2020-07-23 06:26:36,825.825 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/mgr_test_case.py", line 157, in _load_module
2020-07-23 06:26:36,826.826 INFO:__main__: cls.wait_until_true(has_restarted, timeout=30)
2020-07-23 06:26:36,826.826 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/ceph_test_case.py", line 194, in wait_until_true
2020-07-23 06:26:36,826.826 INFO:__main__: raise RuntimeError("Timed out after {0}s".format(elapsed))
2020-07-23 06:26:36,826.826 INFO:__main__:RuntimeError: Timed out after 30s
2020-07-23 06:26:36,826.826 INFO:__main__:
2020-07-23 06:26:36,826.826 INFO:__main__:----------------------------------------------------------------------
2020-07-23 06:26:36,826.826 INFO:__main__:Ran 116 tests in 2136.427s
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4199/">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4199/</a></p>
Dashboard - Bug #46221 (New): test_selftest_cluster_log (tasks.mgr.test_module_selftest.TestModul...
https://tracker.ceph.com/issues/46221
2020-06-26T08:36:00Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/2387/">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/2387/</a></p>
<pre>
test_selftest_cluster_log (tasks.mgr.test_module_selftest.TestModuleSelftest)
2020-06-26 01:55:16,340.340 INFO:__main__:----------------------------------------------------------------------
2020-06-26 01:55:16,340.340 INFO:__main__:Traceback (most recent call last):
2020-06-26 01:55:16,340.340 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/test_module_selftest.py", line 331, in test_selftest_cluster_log
2020-06-26 01:55:16,340.340 INFO:__main__: priority, message)
2020-06-26 01:55:16,340.340 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/ceph_test_case.py", line 127, in __exit__
2020-06-26 01:55:16,340.340 INFO:__main__: raise AssertionError("Expected log message not found: '{0}'".format(expected_pattern))
2020-06-26 01:55:16,340.340 INFO:__main__:AssertionError: Expected log message not found: '[INF] foo bar info'
2020-06-26 01:55:16,341.341 INFO:__main__:
2020-06-26 01:55:16,341.341 INFO:__main__:----------------------------------------------------------------------
2020-06-26 01:55:16,341.341 INFO:__main__:Ran 274 tests in 4522.206s
2020-06-26 01:55:16,341.341 INFO:__main__:
2020-06-26 01:55:16,341.341 INFO:__main__:
</pre>
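For reference, the expected '[INF] foo bar info' line is injected through the mgr selftest module; a manual repro along these lines should show whether the message reaches the cluster log at all (vstart paths assumed, command shape taken from the test, so treat this as a sketch):
<pre>
./bin/ceph mgr module enable selftest
./bin/ceph mgr self-test cluster-log cluster info 'foo bar info'
./bin/ceph log last 10    # the '[INF] foo bar info' entry should show up here
</pre>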
Dashboard - Bug #46032 (New): tasks.mgr.dashboard.test_ganesha.GaneshaTest: Timed out waiting for...
https://tracker.ceph.com/issues/46032
2020-06-16T11:26:55Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/1540/">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/1540/</a></p>
<pre>
setUpClass (tasks.mgr.dashboard.test_ganesha.GaneshaTest)
2020-06-15 11:26:31,992.992 INFO:__main__:----------------------------------------------------------------------
2020-06-15 11:26:31,993.993 INFO:__main__:Traceback (most recent call last):
2020-06-15 11:26:31,993.993 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_ganesha.py", line 28, in setUpClass
2020-06-15 11:26:31,993.993 INFO:__main__: super(GaneshaTest, cls).setUpClass()
2020-06-15 11:26:31,994.994 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 153, in setUpClass
2020-06-15 11:26:31,994.994 INFO:__main__: cls.mds_cluster.mds_stop()
2020-06-15 11:26:31,994.994 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/cephfs/filesystem.py", line 297, in mds_stop
2020-06-15 11:26:31,995.995 INFO:__main__: self._one_or_all(mds_id, lambda id_: self.mds_daemons[id_].stop())
2020-06-15 11:26:31,995.995 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/cephfs/filesystem.py", line 273, in _one_or_all
2020-06-15 11:26:31,996.996 INFO:__main__: p.spawn(cb, mds_id)
2020-06-15 11:26:31,996.996 INFO:__main__: File "/tmp/tmp.1iEZmcQ52k/venv/lib/python3.6/site-packages/teuthology/parallel.py", line 87, in __exit__
2020-06-15 11:26:31,997.997 INFO:__main__: for result in self:
2020-06-15 11:26:31,997.997 INFO:__main__: File "/tmp/tmp.1iEZmcQ52k/venv/lib/python3.6/site-packages/teuthology/parallel.py", line 101, in __next__
2020-06-15 11:26:31,998.998 INFO:__main__: resurrect_traceback(result)
2020-06-15 11:26:31,998.998 INFO:__main__: File "/tmp/tmp.1iEZmcQ52k/venv/lib/python3.6/site-packages/teuthology/parallel.py", line 37, in resurrect_traceback
2020-06-15 11:26:31,998.998 INFO:__main__: reraise(*exc_info)
2020-06-15 11:26:31,999.999 INFO:__main__: File "/tmp/tmp.1iEZmcQ52k/venv/lib/python3.6/site-packages/six.py", line 703, in reraise
2020-06-15 11:26:31,999.999 INFO:__main__: raise value
2020-06-15 11:26:31,999.999 INFO:__main__: File "/tmp/tmp.1iEZmcQ52k/venv/lib/python3.6/site-packages/teuthology/parallel.py", line 24, in capture_traceback
2020-06-15 11:26:32,000.000 INFO:__main__: return func(*args, **kwargs)
2020-06-15 11:26:32,001.001 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/cephfs/filesystem.py", line 297, in <lambda>
2020-06-15 11:26:32,001.001 INFO:__main__: self._one_or_all(mds_id, lambda id_: self.mds_daemons[id_].stop())
2020-06-15 11:26:32,001.001 INFO:__main__: File "../qa/tasks/vstart_runner.py", line 511, in stop
2020-06-15 11:26:32,001.001 INFO:__main__: self.daemon_type, self.daemon_id))
2020-06-15 11:26:32,001.001 INFO:__main__:teuthology.exceptions.MaxWhileTries: Timed out waiting for daemon mds.a
2020-06-15 11:26:32,002.002 INFO:__main__:
2020-06-15 11:26:32,002.002 INFO:__main__:----------------------------------------------------------------------
2020-06-15 11:26:32,002.002 INFO:__main__:
</pre>
Infrastructure - Bug #45453 (New): 'https://download.ceph.com/debian-octopus focal Release' does ...
https://tracker.ceph.com/issues/45453
2020-05-08T20:28:48Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/">http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/</a></p>
<pre>
2020-05-08T15:24:54.694 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/cephadm -v add-repo --release octopus
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:DEBUG:cephadm:Could not locate podman: podman not found
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo GPG key from https://download.ceph.com/keys/release.asc...
2020-05-08T15:24:54.964 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo file at /etc/apt/sources.list.d/ceph.list...
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ test_install_uninstall
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo apt update
2020-05-08T15:24:55.009 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.139 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
2020-05-08T15:24:55.270 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
2020-05-08T15:24:55.284 INFO:tasks.workunit.client.0.smithi204.stdout:Ign:3 https://download.ceph.com/debian-octopus focal InRelease
2020-05-08T15:24:55.320 INFO:tasks.workunit.client.0.smithi204.stdout:Err:4 https://download.ceph.com/debian-octopus focal Release
2020-05-08T15:24:55.321 INFO:tasks.workunit.client.0.smithi204.stdout: 404 Not Found [IP: 158.69.68.124 443]
2020-05-08T15:24:55.351 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
2020-05-08T15:24:55.434 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
2020-05-08T15:24:56.423 INFO:tasks.workunit.client.0.smithi204.stdout:Reading package lists...
2020-05-08T15:24:56.442 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:E: The repository 'https://download.ceph.com/debian-octopus focal Release' does not have a Release file.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.444 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo yum -y install cephadm
2020-05-08T15:24:56.452 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: yum: command not found
2020-05-08T15:24:56.453 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo dnf -y install cephadm
2020-05-08T15:24:56.459 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: dnf: command not found
2020-05-08T15:24:56.460 DEBUG:teuthology.orchestra.run:got remote process result: 1
</pre>
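A quick way to confirm the missing Release file upstream, independent of apt (standard APT repository layout assumed; this is a sketch, not taken from the log):
<pre>
# a 404 here matches the apt failure above: no focal builds published for octopus yet
curl -sI https://download.ceph.com/debian-octopus/dists/focal/Release | head -1

# bionic, by comparison, is published
curl -sI https://download.ceph.com/debian-octopus/dists/bionic/Release | head -1
</pre>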
Are we going to get an Ubuntu repo for focal, or should I disable this test for it?
ceph-volume - Bug #43858 (New): ceph-volume: lvm zap requires `/dev/` prefix
https://tracker.ceph.com/issues/43858
2020-01-28T12:38:22Z
Sebastian Wagner
This differs from the other lvm subcommands.

Preparing:
<pre>
root@ubuntu:~# losetup -f
/dev/loop12
root@ubuntu:~# LANG=C losetup /dev/loop12 disk-image
root@ubuntu:~# pvcreate /dev/loop12
Physical volume "/dev/loop12" successfully created.
root@ubuntu:~# vgcreate MyVg /dev/loop12
Volume group "MyVg" successfully created
root@ubuntu:~# lvcreate --size 5500M --name MyLV MyVg
Logical volume "MyLV" created.
root@ubuntu:~# ll /dev/MyVg/MyLV
lrwxrwxrwx 1 root root 7 Jan 28 11:38 /dev/MyVg/MyLV -> ../dm-0
root@ubuntu:~# vgs -o vg_tags MyVg
VG Tags
root@ubuntu:~# vgs -o lv_tags MyVg
LV Tags
</pre>
<pre>
# ceph-volume lvm zap MyVg/MyLV
 stderr: lsblk: MyVg/MyLV: not a block device
 stderr: blkid: error: MyVg/MyLV: No such file or directory
stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
[--osd-fsid OSD_FSID]
[DEVICES [DEVICES ...]]
ceph-volume lvm zap: error: Unable to proceed with non-existing device: MyVg/MyLV
</pre>
<pre>
# ceph-volume lvm zap /dev/MyVg/MyLV
--> Zapping: /dev/MyVg/MyLV
Running command: /bin/dd if=/dev/zero of=/dev/MyVg/MyLV bs=1M count=10
 stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00413076 s, 2.5 GB/s
--> Zapping successful for: <LV: /dev/MyVg/MyLV>
</pre>
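For comparison, the other lvm subcommands accept the bare VG/LV notation, which is exactly the inconsistency this ticket is about. A sketch (not from the report) of what works today:
<pre>
# prepare accepts vg/lv without the /dev/ prefix
ceph-volume lvm prepare --bluestore --data MyVg/MyLV

# zap insists on the absolute path
ceph-volume lvm zap /dev/MyVg/MyLV
</pre>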
ceph-cm-ansible - Bug #43738 (New): cephadm: conflicts between attempted installs of libstoragemg...
https://tracker.ceph.com/issues/43738
2020-01-21T10:27:32Z
Sebastian Wagner
<pre>
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 420, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 263, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 290, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 314, in _handle_failure
raise AnsibleFailedError(failures)
failure_reason: '86 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz
conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
Error Summary
-------------
'}}
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
    return self.represent_mapping(u'tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
    return self.represent_mapping(u'tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
    return self.represent_mapping(u'tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
    return self.represent_mapping(u'tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'/')
Failure object was: {'smithi161.front.sepia.ceph.com': {'_ansible_no_log': False, 'changed':
False, u'results': [], u'rc': 1, u'invocation': {u'module_args': {u'install_weak_deps':
True, u'autoremove': False, u'lock_timeout': 0, u'download_dir': None, u'install_repoquery':
True, u'enable_plugin': [], u'update_cache': False, u'disable_excludes': None,
u'exclude': [], u'installroot': u'/', u'allow_downgrade': False, u'name':
[u'@core', u'@base', u'dnf-utils', u'git-all', u'sysstat', u'libedit',
u'boost-thread', u'xfsprogs', u'gdisk', u'parted', u'libgcrypt', u'fuse-libs',
u'openssl', u'libuuid', u'attr', u'ant', u'lsof', u'gettext', u'bc',
u'xfsdump', u'blktrace', u'usbredir', u'podman', u'podman-docker', u'libev-devel',
u'valgrind', u'nfs-utils', u'ncurses-devel', u'gcc', u'git', u'python3-nose',
u'python3-virtualenv', u'genisoimage', u'qemu-img', u'qemu-kvm-core', u'qemu-kvm-block-rbd',
u'libacl-devel', u'dbench', u'autoconf'], u'download_only': False, u'bugfix':
False, u'list': None, u'disable_gpg_check': False, u'conf_file': None, u'update_only':
False, u'state': u'present', u'disablerepo': [], u'releasever': None, u'disable_plugin':
[], u'enablerepo': [], u'skip_broken': False, u'security': False, u'validate_certs':
True}}, u'failures': [], u'msg': u'Unknown Error occured: Transaction check
error:
file /usr/lib/tmpfiles.d/libstoragemgmt.conf conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/doc/libstoragemgmt/NEWS conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmcli.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmd.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/simc_lsmplugin.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log</a></p>
- os_type: rhel
- os_version: '8.0'
- description: rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml}
As this is an Ansible error, I'm not sure whether it is a cephadm issue. Any clues?
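The conflicts above are the classic multilib clash: dnf tries to install both the i686 and x86_64 builds of libstoragemgmt, whose files collide. If it reproduces on the host, a hedged workaround (my suggestion, not from the log) is to keep dnf from considering 32-bit packages at all:
<pre>
# one-off: skip 32-bit packages for this transaction
dnf -y install --exclude='*.i686' libstoragemgmt

# or permanently, in /etc/dnf/dnf.conf:
#   exclude=*.i686
</pre>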
Dashboard - Bug #42446 (New): mgr/dashboard: tasks.mgr.dashboard.test_ganesha.GaneshaTest.test_cr...
https://tracker.ceph.com/issues/42446
2019-10-23T14:50:08Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/33/console">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/33/console</a></p>
<pre>
...snip...
2019-10-23 14:26:16,342.342 INFO:__main__:Starting test: test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)
2019-10-23 14:26:16,342.342 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:21,848.848 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:27,358.358 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:32,976.976 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:38,513.513 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:39,038.038 INFO:__main__:test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest) ... ERROR
2019-10-23 14:26:39,039.039 INFO:__main__:Stopped test: test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest) in 22.696573s
2019-10-23 14:26:39,039.039 INFO:__main__:Running ['./bin/radosgw-admin', 'user', 'rm', '--uid', 'admin', '--purge-data']
2019-10-23 14:26:39,483.483 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'delete', 'ganesha', 'ganesha', '--yes-i-really-really-mean-it']
2019-10-23 14:26:40,181.181 INFO:tasks.mgr.dashboard.helper:command result:
2019-10-23 14:26:40,181.181 INFO:__main__:
2019-10-23 14:26:40,181.181 INFO:__main__:======================================================================
2019-10-23 14:26:40,182.182 INFO:__main__:ERROR: test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)
2019-10-23 14:26:40,182.182 INFO:__main__:----------------------------------------------------------------------
2019-10-23 14:26:40,182.182 INFO:__main__:Traceback (most recent call last):
2019-10-23 14:26:40,182.182 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 154, in setUp
2019-10-23 14:26:40,182.182 INFO:__main__: self.wait_for_health_clear(20)
2019-10-23 14:26:40,182.182 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/ceph_test_case.py", line 131, in wait_for_health_clear
2019-10-23 14:26:40,183.183 INFO:__main__: self.wait_until_true(is_clear, timeout)
2019-10-23 14:26:40,183.183 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/ceph_test_case.py", line 163, in wait_until_true
2019-10-23 14:26:40,183.183 INFO:__main__: raise RuntimeError("Timed out after {0}s".format(elapsed))
2019-10-23 14:26:40,183.183 INFO:__main__:RuntimeError: Timed out after 20s
2019-10-23 14:26:40,183.183 INFO:__main__:
2019-10-23 14:26:40,183.183 INFO:__main__:----------------------------------------------------------------------
2019-10-23 14:26:40,184.184 INFO:__main__:Ran 48 tests in 501.380s
</pre>
Dashboard - Bug #39953 (New): mgr/dashboard doesn't properly return `WWW-Authenticate` header for...
https://tracker.ceph.com/issues/39953
2019-05-16T13:48:40Z
Sebastian Wagner
The dashboard violates the bearer authentication protocol by not returning a proper WWW-Authenticate header when the token is missing.

It would be nice to get the dashboard working with clients like https://github.com/brandond/requests-bearer
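For reference, RFC 6750 expects a request without (or with an invalid) token to be answered with a 401 carrying a Bearer challenge. Roughly what a compliant response would look like (endpoint, port, and realm value here are illustrative, not the dashboard's actual output):
<pre>
$ curl -sk -D - -o /dev/null https://localhost:8443/api/summary
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="ceph-dashboard"
...
</pre>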
Ceph - Bug #38145 (New): /usr/bin/ld: cmdparse.cc.o: bad reloc symbol index
https://tracker.ceph.com/issues/38145
2019-02-01T10:09:43Z
Sebastian Wagner
Hey,

in the Sepia lab, with the "Ubuntu Xenial" flavour, I'm getting a linker error:
<pre>
/usr/bin/ld: common/CMakeFiles/common-common-objs.dir/cmdparse.cc.o: bad reloc symbol index (0x30317453 >= 0x2d1) for offset 0x4961534563497374 in section `.debug_info'
common/CMakeFiles/common-common-objs.dir/cmdparse.cc.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
src/CMakeFiles/ceph-common.dir/build.make:446: recipe for target 'lib/libceph-common.so.1' failed
make[4]: *** [lib/libceph-common.so.1] Error 1
make[4]: Leaving directory '/build/ceph-14.0.1-3099-g9e926e9/obj-x86_64-linux-gnu'
</pre>
- Jenkins Log: https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=xenial,DIST=xenial,MACHINE_SIZE=huge/17352//consoleFull
- Shaman build: https://shaman.ceph.com/builds/ceph/wip-swagner-testing/9e926e9927a4c9592403dbce959e526ba3860206/default/140455/
I don't know whether this error is reproducible.
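Relocation entries with garbage offsets like the one above usually point at a truncated or corrupted object file (bad disk, an OOM-killed compiler) rather than at the source itself. A first check, my assumption rather than anything from the log, is to discard the object and relink:
<pre>
# from the build directory shown in the log
rm common/CMakeFiles/common-common-objs.dir/cmdparse.cc.o
make lib/libceph-common.so.1
</pre>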
Ceph - Bug #37373 (New): Interactive mode CLI with Python 3: Traceback when pressing ^D
https://tracker.ceph.com/issues/37373
2018-11-22T15:06:43Z
Sebastian Wagner
Hey,

pressing ^D in the REPL of the ceph command under Python 3 produces a traceback:
<pre>
$ ceph
ceph>
ceph>
ceph> Traceback (most recent call last):
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 1250, in <module>
retval = main()
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 1229, in main
raw_write(outbuf)
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 172, in raw_write
raw_stdout.write(buf)
TypeError: a bytes-like object is required, not 'str'
</pre>
Does anyone actually use this mode? Related:

https://marc.info/?i=CALe9h7c5kJudfsQ6Vf_vczUG0CeoN8=dxznC=92RamBvxD9u0w%20()%20mail%20!%20gmail%20!%20com