Ceph : Issues
https://tracker.ceph.com/
https://tracker.ceph.com/favicon.ico
2022-01-19T16:07:48Z
Ceph
Redmine
Orchestrator - Bug #53939 (Resolved): ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_RE...
https://tracker.ceph.com/issues/53939
2022-01-19T16:07:48Z
Sebastian Wagner
<pre>
mon[102341]: : cluster [WRN] Health check failed: Upgrading daemon osd.0 on host smithi103 failed. (UPGRADE_REDEPLOY_DAEMON)
mon[66897]: cephadm 2022-01-18T16:27:48.439275+0000 mgr.smithi103.wyeocw (mgr.14712) 129 : cephadm [ERR] cephadm exited with an error code: 1, stderr:Redeploy daemon osd.0 ...
mon[66897]: Non-zero exit code 1 from systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0
mon[66897]: systemctl: stderr Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: systemctl: stderr See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8615, in <module>
mon[66897]: main()
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8603, in main
mon[66897]: r = ctx.func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1790, in _default_image
mon[66897]: return func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 4603, in command_deploy
mon[66897]: ports=daemon_ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2715, in deploy_daemon
mon[66897]: c, osd_fsid=osd_fsid, ports=ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2960, in deploy_daemon_units
mon[66897]: call_throws(ctx, ['systemctl', 'start', unit_name])
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1469, in call_throws
mon[66897]: raise RuntimeError(f'Failed command: {" ".join(command)}: {s}')
mon[66897]: RuntimeError: Failed command: systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0: Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/usr/share/ceph/mgr/cephadm/serve.py", line 1402, in _remote_connection
mon[66897]: yield (conn, connr)
mon[66897]: File "/usr/share/ceph/mgr/cephadm/serve.py", line 1295, in _run_cephadm
mon[66897]: code, '\n'.join(err)))
mon[66897]: orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr:Redeploy daemon osd.0 ...
mon[66897]: Non-zero exit code 1 from systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0
mon[66897]: systemctl: stderr Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: systemctl: stderr See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8615, in <module>
mon[66897]: main()
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8603, in main
mon[66897]: r = ctx.func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1790, in _default_image
mon[66897]: return func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 4603, in command_deploy
mon[66897]: ports=daemon_ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2715, in deploy_daemon
mon[66897]: c, osd_fsid=osd_fsid, ports=ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2960, in deploy_daemon_units
mon[66897]: call_throws(ctx, ['systemctl', 'start', unit_name])
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1469, in call_throws
mon[66897]: raise RuntimeError(f'Failed command: {" ".join(command)}: {s}')
mon[66897]: RuntimeError: Failed command: systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0: Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
...
cephadm 2022-01-18T16:27:48.439412+0000 mgr.smithi103.wyeocw (mgr.14712) 130 : cephadm [ERR] Upgrade: Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed.
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2022-01-18_15:34:53-rados:cephadm-wip-swagner2-testing-2022-01-18-1242-pacific-distro-default-smithi/6624255">https://pulpito.ceph.com/swagner-2022-01-18_15:34:53-rados:cephadm-wip-swagner2-testing-2022-01-18-1242-pacific-distro-default-smithi/6624255</a></p>
Orchestrator - Bug #53904 (Duplicate): cephadm: ingress jobs stuck
https://tracker.ceph.com/issues/53904
2022-01-17T16:07:38Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2022-01-17_12:42:04-orch:cephadm-wip-swagner-testing-2022-01-17-1014-distro-default-smithi/">https://pulpito.ceph.com/swagner-2022-01-17_12:42:04-orch:cephadm-wip-swagner-testing-2022-01-17-1014-distro-default-smithi/</a></p>
<pre>
2022-01-17T13:17:17.053 DEBUG:teuthology.orchestra.run.smithi155:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:1cdf02ebbbdd98a055173cbac4d0171328a564dc shell -c /etc/ceph/ceph.conf -k />
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> for haproxy in `ceph orch ps | grep ^haproxy.nfs.foo. | awk '"'"'{print $1}'"'"'`; do
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> ceph orch daemon stop $haproxy
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> while ! ceph orch ps | grep $haproxy | grep stopped; do sleep 1 ; done
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> cat /mnt/foo/testfile
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> echo $haproxy > /mnt/foo/testfile
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> sync
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> ceph orch daemon start $haproxy
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> while ! ceph orch ps | grep $haproxy | grep running; do sleep 1 ; done
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> done
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> '
</pre><br />...snip...<br /><pre>
2022-01-17T13:17:20.571 INFO:teuthology.orchestra.run.smithi155.stdout:Check with each haproxy down in turn...
2022-01-17T13:17:21.281 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to stop haproxy.nfs.foo.smithi155.xhswck on host 'smithi155'
</pre><br />...snip...
<pre>
2022-01-17T13:17:36.893 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi155.xhswck smithi155 *:2049,9002 stopped 0s ago 79s - - <unknown> <un>
2022-01-17T13:17:36.898 INFO:teuthology.orchestra.run.smithi155.stdout:test
2022-01-17T13:17:37.528 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to start haproxy.nfs.foo.smithi155.xhswck on host 'smithi155'
</pre><br />...snip...<br /><pre>
2022-01-17T13:17:53.182 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi155.xhswck smithi155 *:2049,9002 running (5s) 0s ago 95s - - 2.3.17-d1c9119 14b>
2022-01-17T13:17:53.519 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to stop haproxy.nfs.foo.smithi162.mahcqs on host 'smithi162'
</pre><br />...snip...<br /><pre>
2022-01-17T13:18:07.810 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi162.mahcqs smithi162 *:2049,9002 stopped 0s ago 102s - - <unknown> <unk>
</pre><br />...snip...<br /><pre>
h[14066]: cephadm 2022-01-17T13:17:53.516345+0000 mgr.smithi155.uoijyc (mgr.14206) 339 : cephadm [INF] Schedule stop daemon haproxy.nfs.foo.smithi162.mahcqs
</pre>
<p>But I never see haproxy.nfs.foo.smithi162.mahcqs being started again.</p>
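<p>The shell loop in the test (<code>while ! ceph orch ps | grep $haproxy | grep running; do sleep 1 ; done</code>) spins forever when the daemon never reaches the expected state, which is why the job hangs rather than failing. A minimal sketch of a bounded wait (a hypothetical helper for illustration, not the teuthology code):</p>

```python
import time

def wait_for(predicate, timeout=300, interval=1):
    """Poll predicate() until it returns True or timeout (seconds) expires.

    Raises TimeoutError instead of hanging forever, so a stuck daemon
    surfaces as a test failure rather than a stuck job.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise TimeoutError(f'condition not met within {timeout}s')
```

<p>In the test above, the predicate would shell out to <code>ceph orch ps</code> and check for the <code>running</code> or <code>stopped</code> state.</p>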
mgr - Bug #53538 (Resolved): mgr/stats: ZeroDivisionError
https://tracker.ceph.com/issues/53538
2021-12-08T13:37:49Z
Sebastian Wagner
<pre>
root@service-01-08020:~# ceph osd status storage-01-08002
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1623, in _handle_command
return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 416, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/status/module.py", line 338, in handle_osd_status
wr_ops_rate = (self.get_rate("osd", osd_id.__str__(), "osd.op_w") +
File "/usr/share/ceph/mgr/status/module.py", line 28, in get_rate
return (data[-1][1] - data[-2][1]) // int(data[-1][0] - data[-2][0])
ZeroDivisionError: integer division or modulo by zero
</pre>
<p>Since those PRs:</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/25337">https://github.com/ceph/ceph/pull/25337</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/26270">https://github.com/ceph/ceph/pull/26270</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/26270/files#diff-dc6485f717f4dce4863733896375af75963412ebb2abc4b62fcd1f5233eee07dR44">https://github.com/ceph/ceph/pull/26270/files#diff-dc6485f717f4dce4863733896375af75963412ebb2abc4b62fcd1f5233eee07dR44</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/28603">https://github.com/ceph/ceph/pull/28603</a> </li>
<li><a class="external" href="https://tracker.ceph.com/issues/43224#note-11">https://tracker.ceph.com/issues/43224#note-11</a></li>
</ul>
<p>no one has had the patience to look into this all over again.</p>
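<p>The traceback shows <code>get_rate</code> dividing by <code>int(data[-1][0] - data[-2][0])</code>, which is zero whenever the two most recent samples fall into the same one-second bucket. A zero-guarded sketch of that computation (a hypothetical variant for illustration, not the merged fix):</p>

```python
def safe_rate(data):
    """Rate between the two most recent (timestamp, value) samples.

    Returns 0 when fewer than two samples exist, or when both samples
    share the same truncated timestamp (the ZeroDivisionError case).
    """
    if len(data) < 2:
        return 0
    dt = int(data[-1][0] - data[-2][0])
    if dt == 0:  # samples taken within the same second
        return 0
    return (data[-1][1] - data[-2][1]) // dt
```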
Orchestrator - Bug #51272 (Resolved): upgrade job: mgr.x getting removed by cephadm task: UPGRADE...
https://tracker.ceph.com/issues/51272
2021-06-18T08:47:37Z
Sebastian Wagner
<p>I think the fix for this bug has not been merged yet:</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41478/">https://github.com/ceph/ceph/pull/41478/</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41568">https://github.com/ceph/ceph/pull/41568</a></li>
</ul>
<pre>
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2}
</pre>
<pre>
roles:
- - mon.a
- mon.c
- mgr.y
- osd.0
- osd.1
- osd.2
- osd.3
- client.0
- node-exporter.a
- alertmanager.a
- - mon.b
- mgr.x
- osd.4
- osd.5
- osd.6
- osd.7
- client.1
- prometheus.a
- grafana.a
- node-exporter.b
</pre>
<p><strong>then</strong></p>
<pre>
: audit 2021-06-15T20:14:24.260141+0000 mgr.y (mgr.14138) 64 : audit [DBG] from='client.34106 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;smithi143=x", "target">
</pre>
<p>Notice that the placement only contains <strong>2;smithi143=x</strong>.</p>
<pre>
2021-06-15T20:14:29.203 INFO:journalctl@ceph.mgr.y.smithi135.stdout:Jun 15 20:14:29 smithi135 systemd[1]: Stopping Ceph mgr.y for e2a4517e-ce15-11eb-8c13-001a4aab830c...
</pre>
<p><strong>resulting in</strong></p>
<pre>
cluster 2021-06-15T20:21:09.388112+0000 mgr.x (mgr.34112) 238 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: debug 2021-06-15T20:21:11.241+0000 7ffa34117700 -1 log_channel(cephadm) log [ERR] : Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.239485+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: audit 2021-06-15T20:21:11.241293+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.34112 172.21.15.143:0/2430240313' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
: cephadm 2021-06-15T20:21:11.241839+0000 mgr.x (mgr.34112) 239 : cephadm [INF] Upgrade: Target is quay.ceph.io/ceph-ci/ceph:da5e8184007182fa3cd5c8385fee4e08c5620fe2 with id 219a75e51380d5cdf3af7b1fa194d1bedd11>
: cephadm 2021-06-15T20:21:11.244338+0000 mgr.x (mgr.34112) 240 : cephadm [INF] Upgrade: Checking mgr daemons...
: cephadm 2021-06-15T20:21:11.244711+0000 mgr.x (mgr.34112) 241 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x)
: cephadm 2021-06-15T20:21:11.247775+0000 mgr.x (mgr.34112) 242 : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.253146+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: cluster 2021-06-15T20:21:11.255641+0000 mgr.x (mgr.34112) 243 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: audit 2021-06-15T20:21:11.259712+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.34112 ' entity='mgr.x'
</pre>
<pre>
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:alertmanager.a smithi135 running (117s) 107s ago 2m 0.20.0 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f d7ab1fc469b4
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:grafana.a smithi143 running (2m) 107s ago 2m 6.6.2 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a bdf08596362b
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:mgr.x smithi143 running (6m) 107s ago 6m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 bf659290d1ab
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.a smithi135 running (8m) 107s ago 9m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 a0083afbce6f
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.b smithi143 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 177430b8b423
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.c smithi135 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 881e672542be
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.a smithi135 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf acd96e0cc12e
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.b smithi143 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf a3c897228c6d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.0 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 9805ecc9628d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.1 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 29d8fc3fbb7f
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.2 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 193e0a2a0487
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.3 smithi135 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e2dea4bf5490
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.4 smithi143 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e0e19361a64a
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.5 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 71c57f8c0e3d
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.6 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 4da5baa064d1
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.7 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 098193d20e10
2021-06-15T20:21:16.896 INFO:teuthology.orchestra.run.smithi135.stdout:prometheus.a smithi143 running (110s) 107s ago 2m 2.18.1 docker.io/prom/prometheus:v2.18.1 de242295e225 fb7dd6cd2280
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log">http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log</a></p>
RADOS - Bug #49190 (Resolved): LibRadosMiscConnectFailure_ConnectFailure_Test: FAILED ceph_assert...
https://tracker.ceph.com/issues/49190
2021-02-05T10:34:01Z
Sebastian Wagner
<p>I created the branch two days ago and haven't seen this error before:</p>
<pre>
2021-02-05T09:43:02.428 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 2021-02-05T09:43:02.385+0000 7fb2c183e700 10 monclient: discarding stray monitor message auth_reply(proto 2 0 (0) Success) v1
2021-02-05T09:43:02.428 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-435-g49e81916/rpm/el8/BUILD/ceph-17.0.0-435-g49e81916/src/common/config_proxy.h: In function 'void ceph::common::ConfigProxy::call_gate_close(ceph::common::ConfigProxy::md_config_obs_t*)' thread 7fb2d4732500 time 2021-02-05T09:43:02.387202+0000
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-435-g49e81916/rpm/el8/BUILD/ceph-17.0.0-435-g49e81916/src/common/config_proxy.h: 71: FAILED ceph_assert(p != obs_call_gate.end())
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: ceph version 17.0.0-435-g49e81916 (49e81916e1db40399401bf6993250bf570285966) quincy (dev)
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x7fb2caca479a]
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 2: /usr/lib64/ceph/libceph-common.so.2(+0x2769b4) [0x7fb2caca49b4]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 3: (MonClient::shutdown()+0x8eb) [0x7fb2cb035feb]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 4: (MonClient::get_monmap_and_config()+0x4ad) [0x7fb2cb03ac1d]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 5: /lib64/librados.so.2(+0xb8ef8) [0x7fb2d4235ef8]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 6: rados_connect()
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 7: (LibRadosMiscConnectFailure_ConnectFailure_Test::TestBody()+0x35d) [0x561eec3dcc3d]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 8: (void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*)+0x4e) [0x561eec435d4e]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 9: (testing::Test::Run()+0xcb) [0x561eec428d3b]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 10: (testing::TestInfo::Run()+0x135) [0x561eec428ea5]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 11: (testing::TestSuite::Run()+0xc1) [0x561eec429401]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 12: (testing::internal::UnitTestImpl::RunAllTests()+0x445) [0x561eec42b015]
2021-02-05T09:43:02.432 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 13: (bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*)+0x4e) [0x561eec4362be]
2021-02-05T09:43:02.432 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 14: (testing::UnitTest::Run()+0xa0) [0x561eec428f70]
2021-02-05T09:43:02.432 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 15: main()
</pre>
<ul>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859038">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859038</a></li>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859039">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859039</a></li>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859041">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859041</a></li>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859042">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859042</a></li>
</ul>
<p>See <a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/</a></p>
Orchestrator - Documentation #46701 (Resolved): remove `alias ceph='cephadm shell -- ceph'`
https://tracker.ceph.com/issues/46701
2020-07-24T08:16:42Z
Sebastian Wagner
<p>This alias leads to unexpected behavior, such as:</p>
<pre>
$ ceph orch apply -i myfile.yaml
ERROR: no such file or directory: myfile.yaml
</pre>
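<p>The alias breaks because a path like <code>myfile.yaml</code> is resolved inside the <code>cephadm shell</code> container's filesystem, not in the caller's working directory on the host. A minimal stand-in for that failure mode (using a plain directory change in place of a container):</p>

```python
import os
import tempfile

# Host side: the file exists in the user's working directory.
host_dir = tempfile.mkdtemp()
with open(os.path.join(host_dir, 'myfile.yaml'), 'w') as f:
    f.write('service_type: mgr\n')

# "Container" side: same relative name, different filesystem view.
container_dir = tempfile.mkdtemp()
os.chdir(container_dir)

# The relative path no longer resolves -- the alias fails the same way.
exists_for_container = os.path.exists('myfile.yaml')
```

<p>With real cephadm the file has to be made visible inside the container explicitly (for example via a mount option, if your cephadm version provides one), which is why hiding the container boundary behind an alias is a bad idea.</p>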
RADOS - Bug #46596 (Resolved): ceph-osd --mkfs: *** longjmp causes uninitialized stack frame ***:...
https://tracker.ceph.com/issues/46596
2020-07-17T11:24:02Z
Sebastian Wagner
<p>This is likely a regression that was merged yesterday into master (July 16th).</p>
<pre>
2020-07-17T08:16:37.646 INFO:teuthology.orchestra.run.smithi117.stderr:Error EINVAL: Traceback (most recent call last):
2020-07-17T08:16:37.647 INFO:teuthology.orchestra.run.smithi117.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 1170, in _handle_command
2020-07-17T08:16:37.647 INFO:teuthology.orchestra.run.smithi117.stderr: return self.handle_command(inbuf, cmd)
2020-07-17T08:16:37.647 INFO:teuthology.orchestra.run.smithi117.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 116, in handle_command
2020-07-17T08:16:37.648 INFO:teuthology.orchestra.run.smithi117.stderr: return dispatch[cmd['prefix']].call(self, cmd, inbuf)
2020-07-17T08:16:37.648 INFO:teuthology.orchestra.run.smithi117.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 310, in call
2020-07-17T08:16:37.648 INFO:teuthology.orchestra.run.smithi117.stderr: return self.func(mgr, **kwargs)
2020-07-17T08:16:37.649 INFO:teuthology.orchestra.run.smithi117.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 78, in <lambda>
2020-07-17T08:16:37.649 INFO:teuthology.orchestra.run.smithi117.stderr: wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
2020-07-17T08:16:37.649 INFO:teuthology.orchestra.run.smithi117.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 69, in wrapper
2020-07-17T08:16:37.650 INFO:teuthology.orchestra.run.smithi117.stderr: return func(*args, **kwargs)
2020-07-17T08:16:37.650 INFO:teuthology.orchestra.run.smithi117.stderr: File "/usr/share/ceph/mgr/orchestrator/module.py", line 838, in _daemon_add_osd
2020-07-17T08:16:37.650 INFO:teuthology.orchestra.run.smithi117.stderr: raise_if_exception(completion)
2020-07-17T08:16:37.651 INFO:teuthology.orchestra.run.smithi117.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 636, in raise_if_exception
2020-07-17T08:16:37.651 INFO:teuthology.orchestra.run.smithi117.stderr: raise e
2020-07-17T08:16:37.651 INFO:teuthology.orchestra.run.smithi117.stderr:RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
2020-07-17T08:16:37.652 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ea0b3bfb-c92d-4c0d-a6c1-dbaea1d2d9ea
2020-07-17T08:16:37.652 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
2020-07-17T08:16:37.652 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
2020-07-17T08:16:37.652 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/chown -h ceph:ceph /dev/vg_nvme/lv_4
2020-07-17T08:16:37.653 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2020-07-17T08:16:37.653 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ln -s /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-0/block
2020-07-17T08:16:37.653 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
2020-07-17T08:16:37.654 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: got monmap epoch 3
2020-07-17T08:16:37.654 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQBkXhFfHrtPAhAAPVWYuBP8G0vUDHj9YQlpOQ==
2020-07-17T08:16:37.655 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stdout: creating /var/lib/ceph/osd/ceph-0/keyring
2020-07-17T08:16:37.655 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr added entity osd.0 auth(key=AQBkXhFfHrtPAhAAPVWYuBP8G0vUDHj9YQlpOQ==)
2020-07-17T08:16:37.656 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
2020-07-17T08:16:37.656 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
2020-07-17T08:16:37.656 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity None --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid ea0b3bfb-c92d-4c0d-a6c1-dbaea1d2d9ea --setuser ceph --setgroup ceph
2020-07-17T08:16:37.656 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: WARN 2020-07-17 08:16:36,830 [shard 0] seastar - Unable to set SCHED_FIFO scheduling policy for timer thread; latency impact possible. Try adding CAP_SYS_NICE
2020-07-17T08:16:37.657 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: *** longjmp causes uninitialized stack frame ***: /usr/bin/ceph-osd terminated
2020-07-17T08:16:37.657 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: Aborting on shard 0.
2020-07-17T08:16:37.657 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: Backtrace:
2020-07-17T08:16:37.658 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x0000000000e8060c
2020-07-17T08:16:37.658 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x0000000000e46040
2020-07-17T08:16:37.658 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x0000000000e46108
2020-07-17T08:16:37.659 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x0000000000e461d5
2020-07-17T08:16:37.659 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: /lib64/libpthread.so.0+0x0000000000012dcf
2020-07-17T08:16:37.660 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: /lib64/libc.so.6+0x000000000003770e
2020-07-17T08:16:37.660 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: /lib64/libc.so.6+0x0000000000021b24
2020-07-17T08:16:37.660 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: /lib64/libc.so.6+0x000000000007a896
2020-07-17T08:16:37.660 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: /lib64/libc.so.6+0x000000000010d904
2020-07-17T08:16:37.661 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: /lib64/libc.so.6+0x000000000010d936
2020-07-17T08:16:37.661 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: /lib64/libc.so.6+0x000000000010d7b0
2020-07-17T08:16:37.661 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: /lib64/libc.so.6+0x000000000010d70e
2020-07-17T08:16:37.661 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x0000000000eb4cdf
2020-07-17T08:16:37.662 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x00000000005eb475
2020-07-17T08:16:37.662 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x0000000000e42aa0
2020-07-17T08:16:37.671 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x0000000000e42df3
2020-07-17T08:16:37.672 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x0000000000e6d51d
2020-07-17T08:16:37.672 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x0000000000e19237
2020-07-17T08:16:37.672 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x000000000058b61d
2020-07-17T08:16:37.672 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: /lib64/libc.so.6+0x00000000000236a2
2020-07-17T08:16:37.673 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr stderr: 0x00000000005d133d
2020-07-17T08:16:37.673 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr --> Was unable to complete a new OSD, will rollback changes
2020-07-17T08:16:37.673 INFO:teuthology.orchestra.run.smithi117.stderr:INFO:cephadm:/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-07-17_08:02:08-rados:cephadm-wip-swagner3-testing-2020-07-16-1151-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2020-07-17_08:02:08-rados:cephadm-wip-swagner3-testing-2020-07-16-1151-distro-basic-smithi/</a></p>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-07-17_08:13:18-rados:cephadm-wip-swagner2-testing-2020-07-16-1151-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2020-07-17_08:13:18-rados:cephadm-wip-swagner2-testing-2020-07-16-1151-distro-basic-smithi/</a></p>
<p><a class="external" href="https://pulpito.ceph.com/varsha-2020-07-17_06:11:35-rados-wip-varsha-testing-distro-basic-smithi/">https://pulpito.ceph.com/varsha-2020-07-17_06:11:35-rados-wip-varsha-testing-distro-basic-smithi/</a></p>
Orchestrator - Bug #45427 (Resolved): cephadm: auth get failed: invalid entity_auth mon
https://tracker.ceph.com/issues/45427
2020-05-07T10:13:25Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/mgfritch-2020-05-07_02:27:06-rados-wip-mgfritch-testing-2020-05-06-1821-distro-basic-smithi/5029062">http://pulpito.ceph.com/mgfritch-2020-05-07_02:27:06-rados-wip-mgfritch-testing-2020-05-06-1821-distro-basic-smithi/5029062</a></p>
<pre>
cephadm 2020-05-07T03:43:08.989542+0000 mgr.smithi154.qjpiuj (mgr.27922) 6 : cephadm [ERR] Failed to apply node-exporter spec ServiceSpec({'placement': PlacementSpec(host_pattern='*'), 'service_type': 'node-exporter', 'service_id': None, 'unmanaged': False}): auth get failed: invalid entity_auth mon
Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/module.py", line 2219, in _apply_all_services
if self._apply_service(spec):
File "/usr/share/ceph/mgr/cephadm/module.py", line 2190, in _apply_service
create_func(daemon_id, host) # type: ignore
File "/usr/share/ceph/mgr/cephadm/module.py", line 2967, in _create_node_exporter
return self._create_daemon('node-exporter', daemon_id, host)
File "/usr/share/ceph/mgr/cephadm/module.py", line 2021, in _create_daemon
extra_ceph_config=extra_config.pop('config', ''))
File "/usr/share/ceph/mgr/cephadm/module.py", line 1974, in _get_config_and_keyring
'entity': ename,
File "/usr/share/ceph/mgr/mgr_module.py", line 1096, in check_mon_command
raise MonCommandFailed(f'{cmd_dict["prefix"]} failed: {r.stderr}')
mgr_module.MonCommandFailed: auth get failed: invalid entity_auth mon
</pre>
<p>(As a side note, why do we need the mon keyring for the node_exporter?)</p>
bluestore - Bug #45335 (Resolved): cephadm upgrade: OSD.0 is not coming back after restart: rock...
https://tracker.ceph.com/issues/45335
2020-04-29T15:58:52Z
Sebastian Wagner
<pre>
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.842415+0000 mgr.x (mgr.34535) 47 : cephadm [INF] Upgrade: Target is quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537 with id 9c90938ad11a31c5ba9b58ed052bf347591ae047e94bca695e7a022672efd3b9
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.843492+0000 mgr.x (mgr.34535) 48 : cephadm [INF] Upgrade: Checking mgr daemons...
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.848251+0000 mgr.x (mgr.34535) 49 : cephadm [INF] Upgrade: All mgr daemons are up to date.
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.848483+0000 mgr.x (mgr.34535) 50 : cephadm [INF] Upgrade: Checking mon daemons...
2020-04-28T22:15:25.776 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.849302+0000 mgr.x (mgr.34535) 51 : cephadm [INF] Upgrade: Setting container_image for all mon...
2020-04-28T22:15:25.776 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.868867+0000 mgr.x (mgr.34535) 52 : cephadm [INF] Upgrade: All mon daemons are up to date.
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.869043+0000 mgr.x (mgr.34535) 53 : cephadm [INF] Upgrade: Checking crash daemons...
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.869744+0000 mgr.x (mgr.34535) 54 : cephadm [INF] Upgrade: Setting container_image for all crash...
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.870444+0000 mgr.x (mgr.34535) 55 : cephadm [INF] Upgrade: All crash daemons are up to date.
2020-04-28T22:15:25.778 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.870641+0000 mgr.x (mgr.34535) 56 : cephadm [INF] Upgrade: Checking osd daemons...
2020-04-28T22:15:25.778 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cluster 2020-04-28T22:15:24.333492+0000 mon.a (mon.0) 109 : cluster [DBG] mgrmap e25: x(active, since 41s), standbys: y
2020-04-28T22:15:26.521 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.991 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 systemd[1]: Stopping Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c...
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 received signal: Terminated from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 osd.0 51 *** Got signal Terminated ***
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 osd.0 51 *** Immediate shutdown (osd_fast_shutdown=true) ***
2020-04-28T22:15:27.271 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 podman[13417]: 2020-04-28 22:15:26.989657914 +0000 UTC m=+0.182639933 container died 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931 (image=docker.io/ceph/ceph:v15.2.0, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:27.272 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13417]: 2020-04-28 22:15:27.020897016 +0000 UTC m=+0.213879019 container stop 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931 (image=docker.io/ceph/ceph:v15.2.0, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:27.272 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13417]: 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931
2020-04-28T22:15:27.647 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.647 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.654 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.654 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.654 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.660 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.660 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.693 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.693 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.693 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:28.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.64575936 +0000 UTC m=+0.606987472 container create c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.820779587 +0000 UTC m=+0.782007706 container init c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.023 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.862377714 +0000 UTC m=+0.823605831 container start c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.023 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.862442318 +0000 UTC m=+0.823670460 container attach c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.349 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13459]: 2020-04-28 22:15:28.087744802 +0000 UTC m=+1.048972928 container died c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:28.605 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13459]: 2020-04-28 22:15:28.587885902 +0000 UTC m=+1.549114039 container remove c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.606 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Stopped Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c.
2020-04-28T22:15:28.606 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.606 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.607 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:28.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:28 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:28 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:28 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:29.021 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Starting Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c...
2020-04-28T22:15:29.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13562]: Error: no container with name or ID ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0 found: no such container
2020-04-28T22:15:29.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Started Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c.
2020-04-28T22:15:29.322 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.0207602 +0000 UTC m=+0.262426894 container create edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.322 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.120609901 +0000 UTC m=+0.362276575 container init edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.323 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.162161904 +0000 UTC m=+0.403828610 container start edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.323 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.162247399 +0000 UTC m=+0.403914112 container attach edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:29 smithi156 bash[4373]: cluster 2020-04-28T22:15:28.355727+0000 mon.a (mon.0) 144 : cluster [DBG] osdmap e53: 8 total, 7 up, 8 in
2020-04-28T22:15:29.775 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[11812]: audit 2020-04-28T22:15:28.777097+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/vg_nvme/lv_4 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/ln -snf /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-0/block
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: --> ceph-volume lvm activate successful for osd ID: 0
2020-04-28T22:15:29.778 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.444755363 +0000 UTC m=+0.686422056 container died edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:30.180 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.902456456 +0000 UTC m=+1.144123162 container remove edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:30.180 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.137040328 +0000 UTC m=+0.215911961 container create 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.505 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[10946]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:30.506 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[11812]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.253693628 +0000 UTC m=+0.332565244 container init 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.295389095 +0000 UTC m=+0.374260713 container start 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.295458519 +0000 UTC m=+0.374330136 container attach 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:30 smithi156 bash[4373]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:31.077 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[13577]: debug 2020-04-28T22:15:30.801+0000 7f47628adec0 -1 Falling back to public interface
2020-04-28T22:15:31.077 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.074+0000 7f47628adec0 -1 rocksdb: verify_sharding mismatch on sharding. requested = [(L,1,0-,),(O,3,0-13,),(m,3,0-,)] stored = []
2020-04-28T22:15:31.078 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.074+0000 7f47628adec0 -1 bluestore(/var/lib/ceph/osd/ceph-0) _open_db erroring opening db:
2020-04-28T22:15:31.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:31 smithi156 bash[4373]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.731 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[10946]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.731 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[11812]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.732 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.589+0000 7f47628adec0 -1 osd.0 0 OSD:init: unable to mount object store
2020-04-28T22:15:31.732 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.589+0000 7f47628adec0 -1 ** ERROR: osd init failed: (5) Input/output error
2020-04-28T22:15:32.021 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 podman[13761]: 2020-04-28 22:15:31.729840599 +0000 UTC m=+1.808712241 container died 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:32.614 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:32 smithi151 bash[10946]: cluster 2020-04-28T22:15:31.063005+0000 mgr.x (mgr.34535) 64 : cluster [DBG] pgmap v29: 1 pgs: 1 active+undersized+degraded; 0 B data, 4.0 MiB used, 707 GiB / 715 GiB avail; 1/3 objects degraded (33.333%)
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-28_21:52:13-rados-wip-mgfritch-testing-2020-04-28-1427-distro-basic-smithi/4995175/teuthology.log">http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-28_21:52:13-rados-wip-mgfritch-testing-2020-04-28-1427-distro-basic-smithi/4995175/teuthology.log</a></p>
Orchestrator - Bug #44302 (Resolved): cephadm: apply_mon: NotImplementedError
https://tracker.ceph.com/issues/44302
2020-02-26T09:08:37Z
Sebastian Wagner
<pre>
2020-02-25T17:09:39.919 INFO:teuthology.orchestra.run.smithi202.stderr:2020-02-25T17:09:39.916+0000 7f48cec61700 1 -- 172.21.15.202:0/1095025698 --> v2:172.21.15.202:6800/1 -- mgr_command(tid 0: {"prefix": "orch apply mon", "num": 2, "hosts": ["smithi202:[v2:172.21.15.202:3301,v1:172.21.15.202:6790]=c"], "target": ["mon-mgr", ""]}) v1 -- 0x7f48c8072980 con 0x7f48ac020d20
2020-02-25T17:09:39.921 INFO:teuthology.orchestra.run.smithi202.stderr:2020-02-25T17:09:39.919+0000 7f48be7fc700 1 -- 172.21.15.202:0/1095025698 <== mgr.14130 v2:172.21.15.202:6800/1 1 ==== mgr_command_reply(tid 0: -22 Traceback (most recent call last):
2020-02-25T17:09:39.921 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 1070, in _handle_command
2020-02-25T17:09:39.921 INFO:teuthology.orchestra.run.smithi202.stderr: return self.handle_command(inbuf, cmd)
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 191, in handle_command
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: return dispatch[cmd['prefix']].call(self, cmd, inbuf)
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 309, in call
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: return self.func(mgr, **kwargs)
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 153, in <lambda>
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 144, in wrapper
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: return func(*args, **kwargs)
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/module.py", line 688, in _apply_mon
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: completion = self.apply_mon(spec)
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1718, in inner
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: completion = self._oremote(method_name, args, kwargs)
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1788, in _oremote
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: return mgr.remote(o, meth, *args, **kwargs)
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 1432, in remote
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr: args, kwargs)
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr:RuntimeError: Remote method threw exception: Traceback (most recent call last):
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1002, in apply_mon
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr: raise NotImplementedError()
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr:NotImplementedError
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-02-25_16:47:03-rados-wip-swagner-testing-2020-02-25-1426-distro-basic-smithi/4801966/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-02-25_16:47:03-rados-wip-swagner-testing-2020-02-25-1426-distro-basic-smithi/4801966/teuthology.log</a></p>
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-02-25_16:47:03-rados-wip-swagner-testing-2020-02-25-1426-distro-basic-smithi/">http://pulpito.ceph.com/swagner-2020-02-25_16:47:03-rados-wip-swagner-testing-2020-02-25-1426-distro-basic-smithi/</a></p>
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-02-25_16:51:40-rados-wip-swagner2-testing-2020-02-25-1434-distro-basic-smithi/">http://pulpito.ceph.com/swagner-2020-02-25_16:51:40-rados-wip-swagner2-testing-2020-02-25-1434-distro-basic-smithi/</a></p>
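The traceback above bottoms out in the Orchestrator interface's default apply_mon, which simply raises NotImplementedError. A minimal sketch of that dispatch pattern (class and method names here are illustrative, not the actual mgr/orchestrator/_interface.py code):

```python
# Sketch of the interface/backend pattern behind the traceback above.
# Names are illustrative; the real code lives in mgr/orchestrator/_interface.py.

class Orchestrator:
    """Base interface: every orchestrator verb defaults to 'not implemented'."""
    def apply_mon(self, spec):
        raise NotImplementedError()

class CephadmOrchestrator(Orchestrator):
    """A backend avoids the bug by actually overriding the verb."""
    def apply_mon(self, spec):
        return f"scheduled mon deployment for spec {spec!r}"

def handle_command(backend, spec):
    # The mgr dispatches the CLI verb to whichever backend is active; if that
    # backend never overrode apply_mon, the base-class method raises, which is
    # exactly what surfaced as the mgr_command_reply error in the log.
    return backend.apply_mon(spec)
```

Calling handle_command with the bare base class reproduces the NotImplementedError seen above; the resolution was to implement the method in the active backend.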
Orchestrator - Bug #44138 (Resolved): ModuleNotFoundError: No module named 'jsonpatch'
https://tracker.ceph.com/issues/44138
2020-02-14T08:35:11Z
Sebastian Wagner
<p><a class="external" href="https://github.com/ceph/ceph/commit/846761ef7afab43144f38bf5631fd859d6964820">https://github.com/ceph/ceph/commit/846761ef7afab43144f38bf5631fd859d6964820</a></p>
Ceph - Bug #42528 (Resolved): python-common build failure: File not found: ceph-*.egg-info
https://tracker.ceph.com/issues/42528
2019-10-29T12:19:41Z
Sebastian Wagner
<pre>
RPM build errors:
File not found: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph
File not found by glob: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph-*.egg-info
+ rm -fr /tmp/install-deps.1830
Build step 'Execute shell' marked build as failure
</pre>
<pre>
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/ceph
copying ceph/__init__.py -> build/lib/ceph
copying ceph/exceptions.py -> build/lib/ceph
creating build/lib/ceph/deployment
copying ceph/deployment/__init__.py -> build/lib/ceph/deployment
copying ceph/deployment/drive_group.py -> build/lib/ceph/deployment
copying ceph/deployment/ssh_orchestrator.py -> build/lib/ceph/deployment
creating build/lib/ceph/tests
copying ceph/tests/__init__.py -> build/lib/ceph/tests
copying ceph/tests/test_drive_group.py -> build/lib/ceph/tests
running install_lib
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/exceptions.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/ssh_orchestrator.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/test_drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/exceptions.py to exceptions.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/drive_group.py to drive_group.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/ssh_orchestrator.py to ssh_orchestrator.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/test_drive_group.py to test_drive_group.cpython-36.pyc
running install_egg_info
running egg_info
creating ceph.egg-info
writing ceph.egg-info/PKG-INFO
writing dependency_links to ceph.egg-info/dependency_links.txt
writing requirements to ceph.egg-info/requires.txt
writing top-level names to ceph.egg-info/top_level.txt
writing manifest file 'ceph.egg-info/SOURCES.txt'
reading manifest file 'ceph.egg-info/SOURCES.txt'
writing manifest file 'ceph.egg-info/SOURCES.txt'
Copying ceph.egg-info to /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph-1.0.0-py3.6.egg-info
running install_scripts
Traceback (most recent call last):
File "setup.py", line 45, in <module>
'Programming Language :: Python :: 3.6',
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
dist = best[req.key] = env.best_match(req, self, installer)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
return self.obtain(req, installer) # try and download/install
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
return installer(requirement)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
return cmd.easy_install(req)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 623, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 653, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 849, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1130, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1115, in run_setup
run_setup(setup_script, args)
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 69, in run_setup
lambda: execfile(
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 120, in run
return func()
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 71, in <lambda>
{'__file__':setup_script, '__name__':'__main__'}
File "setup.py", line 21, in <module>
packages=find_packages(),
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 269, in __init__
_Distribution.__init__(self,attrs)
File "/usr/lib64/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 302, in finalize_options
ep.load()(self, ep.name, value)
File "build/bdist.linux-x86_64/egg/setuptools_scm/integration.py", line 9, in version_keyword
File "build/bdist.linux-x86_64/egg/setuptools_scm/version.py", line 66, in _warn_if_setuptools_outdated
setuptools_scm.version.SetuptoolsOutdatedWarning: your setuptools is too old (<12)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull</a></p>
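The second traceback shows setup.py being executed under Python 2.7 with a setuptools older than what setuptools_scm requires (&lt;12), failing deep inside the easy_install sandbox. A hedged sketch of an early version guard that would turn that cryptic traceback into a readable error (the function name and message are illustrative, not part of the actual setup.py):

```python
# Hypothetical pre-flight check for setup.py: fail fast with a clear message
# instead of the setuptools_scm sandbox traceback shown above.
from pkg_resources import get_distribution, parse_version

def require_setuptools(minimum="12"):
    installed = get_distribution("setuptools").version
    if parse_version(installed) < parse_version(minimum):
        raise SystemExit(
            "setuptools >= %s is required by setuptools_scm, found %s"
            % (minimum, installed)
        )
```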
CephFS - Bug #40429 (Resolved): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
https://tracker.ceph.com/issues/40429
2019-06-19T09:45:11Z
Sebastian Wagner
<p>mypy revealed</p>
<pre>
+pybind/mgr/volumes/fs/subvolume.py: note: In member "get_subvolume_path" of class "SubVolume":
+pybind/mgr/volumes/fs/subvolume.py:167: error: Too few arguments for "VolumeException"
+pybind/mgr/volumes/fs/subvolume.py: note: In member "_get_ancestor_xattr" of class "SubVolume":
+pybind/mgr/volumes/fs/subvolume.py:203: error: Too few arguments for "NoData"
</pre>
<p>Both of these errors are actual bugs in the code and need to be fixed.</p>
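The class of bug mypy flagged can be reproduced in miniature: an exception whose constructor requires two arguments but is raised with only one. The sketch below is illustrative, not the actual mgr/volumes code:

```python
# Minimal reproduction of the "Too few arguments" bug class mypy flagged.
# Signature and attribute names are illustrative, not the real VolumeException.
class VolumeException(Exception):
    def __init__(self, error_code, error_message):
        self.errno = error_code
        self.error_str = error_message
        super().__init__(error_message)

def get_subvolume_path(found):
    if not found:
        # Buggy call mypy rejects:  raise VolumeException("no such subvolume")
        # Fixed call supplies both required arguments:
        raise VolumeException(-2, "no such subvolume")
    return "/volumes/subvol"
```

The fix for both reported sites is the same shape: pass every argument the exception's constructor declares.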
Orchestrator - Bug #39250 (Resolved): mgr/rook: Fix Python 2 regression
https://tracker.ceph.com/issues/39250
2019-04-11T13:34:16Z
Sebastian Wagner
Orchestrator - Bug #38799 (Resolved): rook-ceph-system namespace hardcoded in the rook orchestrator
https://tracker.ceph.com/issues/38799
2019-03-18T15:14:24Z
Sebastian Wagner
<p>rook/rook_cluster.py:27</p>
<p>This namespace needs to be determined dynamically instead of being hardcoded.</p>
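One way to make the namespace dynamic is to read it from the module's environment with the current value as a fallback. This is a sketch of the suggested fix, not the actual rook_cluster.py code; the environment variable name is an assumption:

```python
import os

# Sketch: derive the operator namespace from the environment instead of a
# hardcoded "rook-ceph-system" constant.  ROOK_OPERATOR_NAMESPACE is a
# hypothetical variable name, not one taken from the rook orchestrator code.
def operator_namespace(default="rook-ceph-system"):
    return os.environ.get("ROOK_OPERATOR_NAMESPACE", default)
```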