Ceph : Issues
https://tracker.ceph.com/
2022-01-28T12:24:42Z
Orchestrator - Documentation #54051 (New): cephadm: advise users how and when to change OSD specs
https://tracker.ceph.com/issues/54051
2022-01-28T12:24:42Z
Sebastian Wagner
<p>In general this works fine; there is just one caveat: we're not going to touch any already-deployed OSDs.</p>
<p>I'd recommend sticking to a particular OSD layout but adjusting the placement. For example: test the deployment on one host, then roll it out to all the others.</p>
<p>If a user changes other properties of the OSD spec, they have to understand that existing OSDs are not redeployed. E.g. changing the encryption flag of an OSD spec doesn't magically encrypt any existing OSDs; only OSDs created after the change will be encrypted.</p>
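The behaviour above can be illustrated with a minimal OSD service spec. This is a sketch in the cephadm drive-group style; the service_id and host names are placeholders:

```yaml
# Hypothetical OSD spec: roll the same layout out from one test host to
# all hosts by changing only the placement. Host names are placeholders.
service_type: osd
service_id: default_layout
placement:
  hosts:
    - host1             # step 1: test the deployment on a single host
  # host_pattern: '*'   # step 2: widen the placement to all hosts
data_devices:
  all: true
encrypted: false        # flipping this later does NOT re-encrypt existing OSDs
```

Only the placement changes between the two steps; already-deployed OSDs keep the layout they were created with.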
Dashboard - Bug #47231 (New): ERROR: setUpClass (tasks.mgr.dashboard.test_cephfs.CephfsTest)
https://tracker.ceph.com/issues/47231
2020-09-01T10:42:27Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-api/2144/">https://jenkins.ceph.com/job/ceph-api/2144/</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-api/3204/">https://jenkins.ceph.com/job/ceph-api/3204/</a></p>
<pre>
2020-09-01 09:18:35,239.239 INFO:__main__:----------------------------------------------------------------------
2020-09-01 09:18:35,239.239 INFO:__main__:Traceback (most recent call last):
2020-09-01 09:18:35,240.240 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/mgr/dashboard/helper.py", line 149, in setUpClass
2020-09-01 09:18:35,240.240 INFO:__main__: cls._assign_ports("dashboard", "ssl_server_port")
2020-09-01 09:18:35,240.240 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/mgr/mgr_test_case.py", line 218, in _assign_ports
2020-09-01 09:18:35,240.240 INFO:__main__: cls.wait_until_true(is_available, timeout=30)
2020-09-01 09:18:35,240.240 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/ceph_test_case.py", line 196, in wait_until_true
2020-09-01 09:18:35,240.240 INFO:__main__: raise TestTimeoutError("Timed out after {0}s".format(elapsed))
2020-09-01 09:18:35,240.240 INFO:__main__:tasks.ceph_test_case.TestTimeoutError: Timed out after 30s
2020-09-01 09:18:35,241.241 INFO:__main__:
2020-09-01 09:18:35,241.241 INFO:__main__:----------------------------------------------------------------------
2020-09-01 09:18:35,241.241 INFO:__main__:Ran 14 tests in 1278.060s
2020-09-01 09:18:35,241.241 INFO:__main__:
2020-09-01 09:18:35,241.241 INFO:__main__:
</pre>
rbd - Bug #46875 (New): TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
https://tracker.ceph.com/issues/46875
2020-08-10T01:03:45Z
Sebastian Wagner
<pre>
[ RUN ] TestLibRBD.TestPendingAio
using new format!
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/librbd/test_librbd.cc:4539: Failure
Expected equality of these values:
1
rbd_aio_is_complete(comps[i])
Which is: 0
[ FAILED ] TestLibRBD.TestPendingAio (68 ms)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
Dashboard - Bug #46848 (New): ERROR: test_perf_counters_list (tasks.mgr.dashboard.test_perf_count...
https://tracker.ceph.com/issues/46848
2020-08-06T15:58:27Z
Sebastian Wagner
<pre>
2020-08-06 12:13:06,380.380 INFO:__main__:----------------------------------------------------------------------
2020-08-06 12:13:06,381.381 INFO:__main__:Traceback (most recent call last):
2020-08-06 12:13:06,381.381 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/mgr/dashboard/helper.py", line 191, in setUp
2020-08-06 12:13:06,381.381 INFO:__main__: self.wait_for_health_clear(self.TIMEOUT_HEALTH_CLEAR)
2020-08-06 12:13:06,381.381 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/ceph_test_case.py", line 162, in wait_for_health_clear
2020-08-06 12:13:06,382.382 INFO:__main__: self.wait_until_true(is_clear, timeout)
2020-08-06 12:13:06,382.382 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-api/qa/tasks/ceph_test_case.py", line 194, in wait_until_true
2020-08-06 12:13:06,383.383 INFO:__main__: raise RuntimeError("Timed out after {0}s".format(elapsed))
2020-08-06 12:13:06,384.384 INFO:__main__:RuntimeError: Timed out after 60s
2020-08-06 12:13:06,384.384 INFO:__main__:
2020-08-06 12:13:06,384.384 INFO:__main__:----------------------------------------------------------------------
2020-08-06 12:13:06,386.386 INFO:__main__:Ran 117 tests in 1744.663s
2020-08-06 12:13:06,386.386 INFO:__main__:
2020-08-06 12:13:06,387.387 INFO:__main__:
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-api/278/">https://jenkins.ceph.com/job/ceph-api/278/</a></p>
teuthology - Feature #46834 (New): Presets for teuthology-suite
https://tracker.ceph.com/issues/46834
2020-08-05T08:53:16Z
Sebastian Wagner
<p>When scheduling cephadm runs, I typically schedule them like so:</p>
<pre>
--suite rados/cephadm --subset 0/3
</pre>
<p>But that's not obvious to anyone else. I'd like an easy way to schedule runs for specific components in Ceph, like the dashboard or CephFS, without first having to ask the component lead for clues on how to schedule them.</p>
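A sketch of how such presets might look. The preset names and the suite/subset values below are illustrative, not actual teuthology options:

```python
# Hypothetical preset table for teuthology-suite; names and values are
# illustrative, not real teuthology configuration.
PRESETS = {
    "cephadm":   ["--suite", "rados/cephadm", "--subset", "0/3"],
    "dashboard": ["--suite", "rados/dashboard"],
    "cephfs":    ["--suite", "fs"],
}

def expand_preset(argv):
    """Replace '--preset NAME' with the arguments the preset stands for,
    leaving all other arguments untouched."""
    out = []
    args = iter(argv)
    for arg in args:
        if arg == "--preset":
            name = next(args)
            out.extend(PRESETS[name])
        else:
            out.append(arg)
    return out
```

With a table like this, `teuthology-suite --preset cephadm` would expand to the `--suite rados/cephadm --subset 0/3` invocation above without the user needing to know it.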
Dashboard - Bug #46797 (New): ERROR: test_pool_update_metadata (tasks.mgr.dashboard.test_pool.Poo...
https://tracker.ceph.com/issues/46797
2020-07-31T10:43:42Z
Sebastian Wagner
<pre>
2020-07-31 05:16:37,145.145 INFO:__main__:----------------------------------------------------------------------
2020-07-31 05:16:37,145.145 INFO:__main__:Traceback (most recent call last):
2020-07-31 05:16:37,146.146 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_pool.py", line 327, in test_pool_update_metadata
2020-07-31 05:16:37,146.146 INFO:__main__: with self.__yield_pool(pool_name):
2020-07-31 05:16:37,146.146 INFO:__main__: File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
2020-07-31 05:16:37,146.146 INFO:__main__: return next(self.gen)
2020-07-31 05:16:37,146.146 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_pool.py", line 58, in __yield_pool
2020-07-31 05:16:37,147.147 INFO:__main__: data = self._create_pool(name, data)
2020-07-31 05:16:37,147.147 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/test_pool.py", line 77, in _create_pool
2020-07-31 05:16:37,147.147 INFO:__main__: self._task_post('/api/pool/', data)
2020-07-31 05:16:37,147.147 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 337, in _task_post
2020-07-31 05:16:37,147.147 INFO:__main__: return cls._task_request('POST', url, data, timeout)
2020-07-31 05:16:37,148.148 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 317, in _task_request
2020-07-31 05:16:37,148.148 INFO:__main__: .format(task_name, task_metadata, _res))
2020-07-31 05:16:37,148.148 INFO:__main__:Exception: Waiting for task (pool/create, {'pool_name': 'pool_update_metadata'}) to finish timed out. {'executing_tasks': [{'name': 'pool/create', 'metadata': {'pool_name': 'pool_update_metadata'}, 'begin_time': '2020-07-31T05:14:56.624619Z', 'progress': 0}, {'name': 'progress/PG autoscaler decreasing pool 11 PGs from 32 to 8', 'metadata': {'pool': 11}, 'begin_time': '2020-07-31T05:12:39.386827Z', 'progress': 45}], 'finished_tasks': [{'name': 'pool/create', 'metadata': {'pool_name': 'pool_update_configuration'}, 'begin_time': '2020-07-31T05:14:35.870663Z', 'end_time': '2020-07-31T05:14:43.978899Z', 'duration': 8.108235836029053, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'pool_update_compression'}, 'begin_time': '2020-07-31T05:14:08.247703Z', 'end_time': '2020-07-31T05:14:18.308763Z', 'duration': 10.061060190200806, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'progress/PG autoscaler increasing pool 15 PGs from 8 to 32', 'metadata': {'pool': 15}, 'begin_time': '2020-07-31T05:12:40.733971Z', 'end_time': '2020-07-31T05:13:40.765727Z', 'duration': 60.031755685806274, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'dashboard_pool_quota2'}, 'begin_time': '2020-07-31T05:13:06.829904Z', 'end_time': '2020-07-31T05:13:11.590168Z', 'duration': 4.760263919830322, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'dashboard_pool_quota1'}, 'begin_time': '2020-07-31T05:13:02.254149Z', 'end_time': '2020-07-31T05:13:03.345344Z', 'duration': 1.0911946296691895, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'dashboard_pool3'}, 'begin_time': '2020-07-31T05:12:26.802483Z', 'end_time': 
'2020-07-31T05:12:42.674605Z', 'duration': 15.872122049331665, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'pool/create', 'metadata': {'pool_name': 'sadfs'}, 'begin_time': '2020-07-31T05:12:22.019784Z', 'end_time': '2020-07-31T05:12:22.030813Z', 'duration': 0.011028766632080078, 'progress': 0, 'success': False, 'ret_value': None, 'exception': {'detail': "[errno -2] specified rule dnf doesn't exist"}}, {'name': 'progress/Rebalancing after osd.0 marked in', 'metadata': {'osd': 0}, 'begin_time': '2020-07-31T05:09:13.529738Z', 'end_time': '2020-07-31T05:09:22.037584Z', 'duration': 8.507845878601074, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'progress/Rebalancing after osd.0 marked out', 'metadata': {'osd': 0}, 'begin_time': '2020-07-31T05:09:12.471868Z', 'end_time': '2020-07-31T05:09:13.529236Z', 'duration': 1.057368516921997, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}, {'name': 'progress/apply_drivesgroups', 'metadata': {'origin': 'orchestrator'}, 'begin_time': '2020-07-31T05:08:51.012334Z', 'end_time': '2020-07-31T05:08:51.014928Z', 'duration': 0.0025937557220458984, 'progress': 100, 'success': True, 'ret_value': None, 'exception': None}]}
2020-07-31 05:16:37,148.148 INFO:__main__:
2020-07-31 05:16:37,149.149 INFO:__main__:----------------------------------------------------------------------
2020-07-31 05:16:37,149.149 INFO:__main__:Ran 138 tests in 2056.558s
2020-07-31 05:16:37,149.149 INFO:__main__:
2020-07-31 05:16:37,149.149 INFO:__main__:
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4785/">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4785/</a></p>
Dashboard - Bug #46686 (New): ERROR: setUpClass (tasks.mgr.dashboard.test_perf_counters.PerfCount...
https://tracker.ceph.com/issues/46686
2020-07-23T08:50:22Z
Sebastian Wagner
<pre>
2020-07-23 06:26:36,824.824 INFO:__main__:======================================================================
2020-07-23 06:26:36,824.824 INFO:__main__:ERROR: setUpClass (tasks.mgr.dashboard.test_perf_counters.PerfCountersControllerTest)
2020-07-23 06:26:36,824.824 INFO:__main__:----------------------------------------------------------------------
2020-07-23 06:26:36,824.824 INFO:__main__:Traceback (most recent call last):
2020-07-23 06:26:36,825.825 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 150, in setUpClass
2020-07-23 06:26:36,825.825 INFO:__main__: cls._load_module("dashboard")
2020-07-23 06:26:36,825.825 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/mgr_test_case.py", line 157, in _load_module
2020-07-23 06:26:36,826.826 INFO:__main__: cls.wait_until_true(has_restarted, timeout=30)
2020-07-23 06:26:36,826.826 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/ceph_test_case.py", line 194, in wait_until_true
2020-07-23 06:26:36,826.826 INFO:__main__: raise RuntimeError("Timed out after {0}s".format(elapsed))
2020-07-23 06:26:36,826.826 INFO:__main__:RuntimeError: Timed out after 30s
2020-07-23 06:26:36,826.826 INFO:__main__:
2020-07-23 06:26:36,826.826 INFO:__main__:----------------------------------------------------------------------
2020-07-23 06:26:36,826.826 INFO:__main__:Ran 116 tests in 2136.427s
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4199/">https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/4199/</a></p>
sepia - Bug #46154 (New): unable to pull ceph/ceph-grafana: connection reset by peer
https://tracker.ceph.com/issues/46154
2020-06-23T13:20:34Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-smithi/5172323/">http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-smithi/5172323/</a></p>
<pre>
2020-06-23T12:20:41.349 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Deploy daemon grafana.a ...
2020-06-23T12:20:41.350 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Verifying port 3000 ...
2020-06-23T12:20:46.563 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Non-zero exit code 125 from /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi198 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Trying to pull docker.io/ceph/ceph-grafana:latest...
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Getting image source signatures
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Copying blob sha256:003efafe5a84678b585af8a06810c47079aa4705e60d07f1c31a52f0e35ce0b5
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr read tcp 172.21.15.198:55100->104.18.125.25:443: read: connection reset by peer
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Error: unable to pull ceph/ceph-grafana:latest: 1 error occurred:
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr * Error writing blob: error storing blob to file "/var/tmp/storage459839576/1": read tcp 172.21.15.198:55100->104.18.125.25:443: read: connection reset by peer
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr
2020-06-23T12:20:46.571 INFO:tasks.workunit.client.0.smithi198.stderr:Traceback (most recent call last):
2020-06-23T12:20:46.571 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 4825, in <module>
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: r = args.func()
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 1182, in _default_image
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: return func()
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2863, in command_deploy
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: uid, gid = extract_uid_gid_monitoring(daemon_type)
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2799, in extract_uid_gid_monitoring
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: uid, gid = extract_uid_gid(file_path='/var/lib/grafana')
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 1798, in extract_uid_gid
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: args=['-c', '%u %g', file_path]
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2275, in run
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr: self.run_cmd(), desc=self.entrypoint, timeout=timeout)
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 861, in call_throws
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr: raise RuntimeError('Failed command: %s' % ' '.join(command))
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr:RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi198 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
</pre>
<p>Does this mean we have to retry pulling container images?</p>
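If retrying is the answer, a wrapper along these lines could absorb transient pull failures. This is an illustrative sketch, not cephadm's actual code; `flaky_pull` is a stand-in for the real podman invocation:

```python
import time

def call_with_retries(fn, attempts=3, delay=1.0):
    """Retry fn() when it raises RuntimeError, e.g. a transient
    'connection reset by peer' while pulling an image.
    Illustrative sketch only; not cephadm's actual retry logic."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except RuntimeError:
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff

# Stand-in for a 'podman pull' that fails once with a network error,
# then succeeds on the retry:
state = {"calls": 0}

def flaky_pull():
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("read: connection reset by peer")
    return "pulled"
```

The key design question is which failures are worth retrying: a connection reset is transient, while an authentication error or a missing tag is not.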
teuthology - Feature #45722 (New): split requirements.txt into teuthology/requirements.txt and c...
https://tracker.ceph.com/issues/45722
2020-05-27T08:26:28Z
Sebastian Wagner
<p>It's irritating to have the dependencies of /ceph/qa specified in teuthology/requirements.txt: it makes it awkward, and hard to justify, to add any dependencies to Ceph tasks, workunits, or tests.</p>
<p>It would be great to have the dependencies of /ceph/qa specified within /ceph/qa itself (either in a standalone /ceph/qa/requirements.txt or via a proper Python package, e.g. a /ceph/qa/setup.py).</p>
<p>See <a class="external" href="https://github.com/ceph/teuthology/pull/1493">https://github.com/ceph/teuthology/pull/1493</a> for an example</p>
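For illustration, the shape of a hypothetical /ceph/qa/requirements.txt; the package names below are placeholders, not the actual dependency list:

```text
# /ceph/qa/requirements.txt (hypothetical): dependencies used only by qa
# tasks, workunits, and tests, kept next to the code that needs them.
PyYAML      # placeholder
requests    # placeholder
```

teuthology could then install this file on top of its own requirements when running jobs against a Ceph checkout.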
Ceph - Feature #44745 (New): YAMLFormatter for common/Formatter.h
https://tracker.ceph.com/issues/44745
2020-03-25T11:36:11Z
Sebastian Wagner
<p><a class="external" href="https://github.com/ceph/ceph/pull/34061">https://github.com/ceph/ceph/pull/34061</a> adds a new value <code>yaml</code> for <code>--format</code> in order to support YAML in <code>mgr/cephadm</code>.</p>
<p>Having a YAMLFormatter for common/Formatter.h would be great, too!</p>
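To sketch the intended output shape (in Python for brevity rather than the C++ of common/Formatter.h, and without claiming the real Formatter interface), a YAMLFormatter would emit something like:

```python
def to_yaml(value, indent=0):
    """Tiny illustrative YAML emitter for dicts, lists, and scalars.
    Sketches the output a YAMLFormatter might produce; it is not the
    common/Formatter.h interface and skips quoting and edge cases."""
    pad = "  " * indent
    if isinstance(value, dict):
        lines = []
        for key, val in value.items():
            if isinstance(val, (dict, list)):
                lines.append(f"{pad}{key}:")
                lines.append(to_yaml(val, indent + 1))
            else:
                lines.append(f"{pad}{key}: {val}")
        return "\n".join(lines)
    if isinstance(value, list):
        return "\n".join(f"{pad}- {item}" for item in value)
    return f"{pad}{value}"

print(to_yaml({"service_type": "osd", "placement": {"hosts": ["host1"]}}))
```

The structure mirrors the open_object_section/dump_* call pattern of the existing JSON and XML formatters, just with indentation instead of brackets.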
teuthology - Bug #44181 (New): Error in syslog: task.internal.syslog: random "*BUG*" in log message
https://tracker.ceph.com/issues/44181
2020-02-18T10:33:27Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502">http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502</a></p>
<p>This job failure was caused by</p>
<p><a class="external" href="https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144">https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144</a></p>
<pre>
2020-02-18T06:48:18.388 ERROR:teuthology.task.internal.syslog:Error in syslog on ubuntu@smithi060.front.sepia.ceph.com: /home/ubuntu/cephtest/archive/syslog/misc.log:2020-02-18T06:42:25.361371+00:00 smithi060 bash[10468]: audit 2020-02-18T06:42:24.442267+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14124 172.21.15.60:0/1' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/dashboard/key","val":"-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCnskmhDB10Jk6M\nXNpzP+7hOWVV7TYIeAGSapYoNcgQcPrQU0STPGuyUORmnKO4taTVz8EBvPL4p6Mv\niZFEIhL2OL07UexgDqKaAD4lne/KIhYQJVtkqPu/TYYemxa6xyl/V4LGPrSGYx0C\n8huP7RsqEMNLNMr/wG34hG7LCdGtcWk8aylma8XrXgukEHMsJJIeb+ZZKw3AnZCT\nbO++2B+V5DPtE4LId4x3G1PrVumH/whd6ciTtcImFspCRQgwlaPnDLf68bFXF67G\nBWJZZRFoTCc73fu/rUW1vGYk7WFiVi52WVbpgYrPc/AhWNOaH6d0xooBoohRjAYh\n7GDdVa+LAgMBAAECggEAM1HEZpymht0SPLJNx+dQ22wNLvahCoZvNLeZrESJLT7m\nAsr4uXZMHw3SV/SnxecQwr4JetawJJhowCuBYTBsTR2gC39OrzbLXAWm/ywOLfWw\netBz36I3KJw45zTfB9nbQTUuuCyIYngCcNxWwvz0yzLGEUXeudXR0bP1k/01RbZo\nhGe8oQSJzSN4zmfQtx/rSGCXJr23HUjPs0mVHqml2bZL9UZcuKu5RuN8PWSo0aOM\nZwGYa/1pcoo1OsN3XtujY9tU0Ykrd3rteAARMIBFzrktaWWhSdaiOQS0fYAnyGrX\njI7cjlsbtJfTt741wF0hmCZIGS40+HWTmwCTkapWwQKBgQDQkX4ZHyvFD96W81rz\nXLIdSEfgv0+andTC2v5kvlk4cxIYgic2g1R59gekZioOpVIQG/eCwiSFW5ndqjzI\nSGMj2bflL8vXv5q4EX+TL3W5LWOnR8k7FVxJpPsJbbyX93qbpcU/oKNSt5VDUPaL\neooha6lDP+HEAdfWHqe1PxP+9wKBgQDN1UzVWU8ur2tdlql2BVrwfi/J32/ZUFQG\nCPvC9RMdavZjKITu2Rg5LA6kYOnJ9MvVTU59Mf2c+6kKWQTaRhqTqhfAJYtjvmoC\naTGm7HGPywEOeMphF+LAb23DNcCzQFhBVduOfL8MSkTjJOjmaxyYc2qs+ts87NMt\nqCENAaPrDQKBgHvuV/1ZdkqsOVl81QhShku8DWnQg96d9jSqqAr4yE8woQoLHH3Z\n37JwrO3U/xygw3hrBdGextCvM2hxpZhk2vQMhKcclYVnhunlC+dLhio4fESD9WC0\nOphP/hMGL9Ak76fZArHiI+ocyAat7zHF6JofPP6G0QIFDlle8cxS5PDVAoGBAMRB\nByQ5JkV2HqG6YFNWYdICDuClOQj0DVk/wYSulY4sCUacQLtXpUAF4OQcP20/CgaT\n0i2Ot6ixTwi9veG8i+SVflXHtnLhAETSNfRZZyHaRmSdCSGwW5Rt6jMBkn2W8U9C\nZLgj+yjlu270J1hjcn1tNp4+BUG+8M+Mig7TrI4VAoGAYYltCD4bc2bBAPWnF6nk\nqrx16kKg0kjNdhATkBWt76jpsJYRmyo5NALLaB5/k0dS7ftu
TmGEZLSnNyl44O2B\n7QH6PaRoP0hX5LtLwSZhiJxd6tDrfwMFzpVGiJHeUNKGS/GKQzlvlxUJb2aOhNWu\nMgFlLWfPOMgxiRpwUhtg0Is=\n-----END PRIVATE KEY-----\n"}]: dispatch
</pre>
<p>Unfortunately, this log message contains "<code>BUG</code>" somewhere in the base64-encoded key material, which trips the syslog scan.</p>
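For illustration, here is the false positive and one possible tightening of the pattern. This is not teuthology's actual pattern or fix, and the sample lines are fabricated:

```python
import re

# Naive substring check: also matches the "BUG" that happens to occur
# inside base64-encoded private key material, producing a false positive.
naive = re.compile(r"BUG")

# A stricter, illustrative pattern: require a word boundary before "BUG"
# and the kernel-oops style "BUG: " or "BUG at " after it.
strict = re.compile(r"\bBUG(?:: | at )")

# Fabricated sample lines (the "key" is shortened fake material):
key_line = "cmd=[... MIIEvQIBUGkqhkiG9w0BAQ ...]"
oops_line = "kernel: BUG: unable to handle page fault at 0000000000000000"
```

Requiring the surrounding oops context avoids matching arbitrary base64 runs, at the cost of missing any genuinely unusual "BUG" spellings.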
ceph-cm-ansible - Bug #43738 (New): cephadm: conflicts between attempted installs of libstoragemg...
https://tracker.ceph.com/issues/43738
2020-01-21T10:27:32Z
Sebastian Wagner
<pre>
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 420, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 263, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 290, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 314, in _handle_failure
raise AnsibleFailedError(failures)
failure_reason: '86 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz
conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
Error Summary
-------------
''}}Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)RepresenterError: (''cannot represent an object'', u''/'')Failure
object was: {''smithi161.front.sepia.ceph.com'': {''_ansible_no_log'': False, ''changed'':
False, u''results'': [], u''rc'': 1, u''invocation'': {u''module_args'': {u''install_weak_deps'':
True, u''autoremove'': False, u''lock_timeout'': 0, u''download_dir'': None, u''install_repoquery'':
True, u''enable_plugin'': [], u''update_cache'': False, u''disable_excludes'': None,
u''exclude'': [], u''installroot'': u''/'', u''allow_downgrade'': False, u''name'':
[u''@core'', u''@base'', u''dnf-utils'', u''git-all'', u''sysstat'', u''libedit'',
u''boost-thread'', u''xfsprogs'', u''gdisk'', u''parted'', u''libgcrypt'', u''fuse-libs'',
u''openssl'', u''libuuid'', u''attr'', u''ant'', u''lsof'', u''gettext'', u''bc'',
u''xfsdump'', u''blktrace'', u''usbredir'', u''podman'', u''podman-docker'', u''libev-devel'',
u''valgrind'', u''nfs-utils'', u''ncurses-devel'', u''gcc'', u''git'', u''python3-nose'',
u''python3-virtualenv'', u''genisoimage'', u''qemu-img'', u''qemu-kvm-core'', u''qemu-kvm-block-rbd'',
u''libacl-devel'', u''dbench'', u''autoconf''], u''download_only'': False, u''bugfix'':
False, u''list'': None, u''disable_gpg_check'': False, u''conf_file'': None, u''update_only'':
False, u''state'': u''present'', u''disablerepo'': [], u''releasever'': None, u''disable_plugin'':
[], u''enablerepo'': [], u''skip_broken'': False, u''security'': False, u''validate_certs'':
True}}, u''failures'': [], u''msg'': u''Unknown Error occured: Transaction check
error:
file /usr/lib/tmpfiles.d/libstoragemgmt.conf conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/doc/libstoragemgmt/NEWS conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmcli.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmd.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/simc_lsmplugin.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log</a></p>
<ul>
<li>os_type: rhel</li>
<li>os_version: '8.0'</li>
<li>description: rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml}</li>
</ul>
<p>As this is an Ansible error, I'm not sure whether this is actually a cephadm issue. Any clues?</p>
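The failure itself is the classic i686 vs. x86_64 multilib clash in dnf: two architectures of libstoragemgmt fighting over the same files. One common mitigation on the test nodes (a guess, not necessarily the right fix for ceph-cm-ansible) is to exclude i686 packages outright:

```ini
# /etc/dnf/dnf.conf (illustrative): never install 32-bit packages, so a
# single architecture of each package wins and file conflicts disappear.
[main]
exclude=*.i686
```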
teuthology - Feature #40972 (New): Make priority field more descriptive
https://tracker.ceph.com/issues/40972
2019-07-26T10:26:43Z
Sebastian Wagner
<p>The --priority flag accepts a number, but it's not entirely clear which number I should use.</p>
<p>My proposal would be to also accept names, like<br /><pre>
--priority baseline
--priority pr-run
--priority pr-run-high
--priority pr-run-urgent
--priority pr-run-low
</pre></p>
<p>Each name would correspond to a specific value:</p>
<pre>
{
    'baseline': 1000,
    'pr-run': 100,
    'pr-run-high': 90,
    'pr-run-urgent': 50,
    'pr-run-low': 110,
}[priority]
</pre>
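A sketch of flag parsing that accepts both forms, using the mapping from the proposal (the wiring is illustrative, not teuthology's actual CLI code):

```python
# Proposed name-to-value mapping from above; lower numbers schedule first.
PRIORITY_NAMES = {
    'baseline': 1000,
    'pr-run': 100,
    'pr-run-high': 90,
    'pr-run-urgent': 50,
    'pr-run-low': 110,
}

def parse_priority(value):
    """Accept either a plain number or one of the proposed preset names,
    so --priority 100 and --priority pr-run mean the same thing."""
    if value in PRIORITY_NAMES:
        return PRIORITY_NAMES[value]
    try:
        return int(value)
    except ValueError:
        raise ValueError(
            f"--priority must be a number or one of {sorted(PRIORITY_NAMES)}")
```

Keeping the raw-number form working means existing scripts don't break while the names become the documented default.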
teuthology - Bug #40749 (New): /task/ansible.py: AnsibleFailedError: RepresenterError: ('cannot r...
https://tracker.ceph.com/issues/40749
2019-07-12T09:41:13Z
Sebastian Wagner
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/teuthology.log</a></p>
<pre>
Thursday 11 July 2019 14:06:56 +0000 (0:00:00.272) 0:03:00.166 *********
===============================================================================
Check for /usr/bin/python ---------------------------------------------- 27.06s
2019-07-11T14:06:56.061 INFO:teuthology.task.ansible.out:users : Create all admin users with sudo access. ----------------------- 19.15s
users : Update authorized_keys using the keys repo --------------------- 18.43s
testnode : Zap all non-root disks --------------------------------------- 9.59s
testnode : Ensure packages are not present. ----------------------------- 9.53s
testnode : Install packages --------------------------------------------- 6.20s
testnode : ifdown and ifup ---------------------------------------------- 5.15s
users : Remove revoked users -------------------------------------------- 4.99s
common : Update apt cache ----------------------------------------------- 4.01s
testnode : Update apt cache. -------------------------------------------- 3.65s
testnode : Install python-apt ------------------------------------------- 3.11s
testnode : Blow away lingering OSD data and FSIDs ----------------------- 2.94s
testnode : Install apt keys --------------------------------------------- 2.09s
common : Install nrpe package and dependencies (Ubuntu) ----------------- 1.99s
testnode : Install packages via pip ------------------------------------- 1.72s
users : Update authorized_keys for each user with literal keys ---------- 1.72s
ansible-managed : Add authorized keys for the ansible user. ------------- 1.59s
Gathering Facts --------------------------------------------------------- 1.59s
testnode : Stop apache2 ------------------------------------------------- 1.45s
common : Upload megacli and cli64 for raid monitoring and smart.pl to /usr/sbin/. --- 1.18s
2019-07-11T14:06:56.319 ERROR:teuthology.task.ansible:Failed to parse ansible failure log: /tmp/teuth_ansible_failures_mF91TY (while parsing a flow mapping
in "/tmp/teuth_ansible_failures_mF91TY", line 1, column 54
expected ',' or '}', but got ':'
in "/tmp/teuth_ansible_failures_mF91TY", line 1, column 274)
2019-07-11T14:06:56.320 INFO:teuthology.task.ansible:Archiving ansible failure log at: /home/teuthworker/archive/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml
2019-07-11T14:06:56.323 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 426, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 268, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 295, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 319, in _handle_failure
raise AnsibleFailedError(failures)
AnsibleFailedError: 7
/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'mira062')
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml">http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml</a></p>
<pre>
Failure object was: {'mira062.front.sepia.ceph.com': {'_ansible_no_log': False, u'invocation': {u'module_args': {u'name': u'mira062'}}, 'changed': False, u'msg': u"Command failed rc=1, out=, err=Could not get property: Failed to activate service 'org.freedesktop.hostname1': timed out\n"}}
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'mira062')
</pre>
<p>Is this a Teuthology issue, a ceph-ansible issue, or just a consequence of mira062 timing out?</p>
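<p>For context, the RepresenterError itself is reproducible outside Teuthology: PyYAML's safe dumper only matches exact built-in types, so a str subclass (such as Ansible's AnsibleUnsafeText, which is presumably what the hostname value was here) falls through to represent_undefined. A minimal sketch, assuming the hostname was such a subclass; the to_plain helper is a hypothetical workaround for failure_log.py, not the actual fix:</p>

```python
# Minimal reproduction of the RepresenterError seen above.
# Assumption: the hostname value was a str subclass (as Ansible's
# AnsibleUnsafeText is); PyYAML's SafeRepresenter matches exact types only.
import yaml


class UnsafeText(str):
    """Stand-in for Ansible's AnsibleUnsafeText."""


failure = {'mira062.front.sepia.ceph.com': {'msg': UnsafeText('mira062')}}

try:
    yaml.safe_dump(failure)
    reproduced = False
except yaml.representer.RepresenterError:
    # "cannot represent an object" -- same failure as in the log
    reproduced = True


def to_plain(obj):
    """Coerce str subclasses and containers down to plain built-ins
    so that yaml.safe_dump accepts them (hypothetical workaround)."""
    if isinstance(obj, dict):
        return {to_plain(k): to_plain(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_plain(v) for v in obj]
    if isinstance(obj, str):
        return str(obj)  # copies a subclass down to exact str
    return obj


dumped = yaml.safe_dump(to_plain(failure))
```

<p>Regardless of where the fix belongs, the secondary effect is visible in the log: when safe_dump raises, the raw Python repr of the failure object ends up in the failures file, which is then not parseable as YAML (hence the "expected ',' or '}'" error).</p>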
Ceph - Bug #37373 (New): Interactive mode CLI with Python 3: Traceback when pressing ^D
https://tracker.ceph.com/issues/37373
2018-11-22T15:06:43Z
Sebastian Wagner
<p>Hey,</p>
<p>Pressing ^D in the interactive REPL of the ceph command under Python 3 prints a traceback:</p>
<pre>
$ ceph
ceph>
ceph>
ceph> Traceback (most recent call last):
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 1250, in <module>
retval = main()
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 1229, in main
raw_write(outbuf)
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 172, in raw_write
raw_stdout.write(buf)
TypeError: a bytes-like object is required, not 'str'
</pre>
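<p>The failing call is raw_stdout.write(buf) receiving a Python 3 str where the binary stdout stream expects bytes. One portable sketch of a fix (helper names are hypothetical, not the actual change to bin/ceph) encodes text before writing:</p>

```python
# Sketch of a Python 2/3-safe raw write; names are hypothetical and
# only illustrate the str-vs-bytes mismatch from the traceback above.
import sys


def to_bytes(buf, encoding='utf-8'):
    """Return buf as bytes; on Python 3 a str must be encoded before
    it can be written to the binary stdout buffer."""
    if isinstance(buf, str):
        return buf.encode(encoding)
    return buf


def raw_write(buf):
    # sys.stdout.buffer is the underlying binary stream on Python 3;
    # on Python 2, sys.stdout itself accepts bytes.
    stream = getattr(sys.stdout, 'buffer', sys.stdout)
    stream.write(to_bytes(buf))
    stream.flush()


raw_write('ceph> ')
raw_write(b'\n')
```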
<p>Does anyone still use this mode? Related to:</p>
<p><a class="external" href="https://marc.info/?i=CALe9h7c5kJudfsQ6Vf_vczUG0CeoN8=dxznC=92RamBvxD9u0w%20()%20mail%20!%20gmail%20!%20com">https://marc.info/?i=CALe9h7c5kJudfsQ6Vf_vczUG0CeoN8=dxznC=92RamBvxD9u0w%20()%20mail%20!%20gmail%20!%20com</a></p>