Ceph : Issues
https://tracker.ceph.com/
https://tracker.ceph.com/favicon.ico
2022-01-19T16:07:48Z
Ceph
Redmine
Orchestrator - Bug #53939 (Resolved): ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_RE...
https://tracker.ceph.com/issues/53939
2022-01-19T16:07:48Z
Sebastian Wagner
<pre>
mon[102341]: : cluster [WRN] Health check failed: Upgrading daemon osd.0 on host smithi103 failed. (UPGRADE_REDEPLOY_DAEMON)
mon[66897]: cephadm 2022-01-18T16:27:48.439275+0000 mgr.smithi103.wyeocw (mgr.14712) 129 : cephadm [ERR] cephadm exited with an error code: 1, stderr:Redeploy daemon osd.0 ...
mon[66897]: Non-zero exit code 1 from systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0
mon[66897]: systemctl: stderr Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: systemctl: stderr See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8615, in <module>
mon[66897]: main()
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8603, in main
mon[66897]: r = ctx.func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1790, in _default_image
mon[66897]: return func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 4603, in command_deploy
mon[66897]: ports=daemon_ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2715, in deploy_daemon
mon[66897]: c, osd_fsid=osd_fsid, ports=ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2960, in deploy_daemon_units
mon[66897]: call_throws(ctx, ['systemctl', 'start', unit_name])
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1469, in call_throws
mon[66897]: raise RuntimeError(f'Failed command: {" ".join(command)}: {s}')
mon[66897]: RuntimeError: Failed command: systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0: Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/usr/share/ceph/mgr/cephadm/serve.py", line 1402, in _remote_connection
mon[66897]: yield (conn, connr)
mon[66897]: File "/usr/share/ceph/mgr/cephadm/serve.py", line 1295, in _run_cephadm
mon[66897]: code, '\n'.join(err)))
mon[66897]: orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr:Redeploy daemon osd.0 ...
mon[66897]: Non-zero exit code 1 from systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0
mon[66897]: systemctl: stderr Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: systemctl: stderr See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8615, in <module>
mon[66897]: main()
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8603, in main
mon[66897]: r = ctx.func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1790, in _default_image
mon[66897]: return func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 4603, in command_deploy
mon[66897]: ports=daemon_ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2715, in deploy_daemon
mon[66897]: c, osd_fsid=osd_fsid, ports=ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2960, in deploy_daemon_units
mon[66897]: call_throws(ctx, ['systemctl', 'start', unit_name])
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1469, in call_throws
mon[66897]: raise RuntimeError(f'Failed command: {" ".join(command)}: {s}')
mon[66897]: RuntimeError: Failed command: systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0: Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
...
cephadm 2022-01-18T16:27:48.439412+0000 mgr.smithi103.wyeocw (mgr.14712) 130 : cephadm [ERR] Upgrade: Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed.
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2022-01-18_15:34:53-rados:cephadm-wip-swagner2-testing-2022-01-18-1242-pacific-distro-default-smithi/6624255">https://pulpito.ceph.com/swagner-2022-01-18_15:34:53-rados:cephadm-wip-swagner2-testing-2022-01-18-1242-pacific-distro-default-smithi/6624255</a></p>
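The traceback above ends in cephadm's `call_throws` helper, which raises `RuntimeError` as soon as `systemctl start` exits non-zero. Its behavior can be sketched roughly as follows (a simplified sketch, not the actual cephadm source; the real helper also takes a `ctx` argument and handles output streaming and timeouts):

```python
import subprocess

def call_throws(command):
    """Run a command; raise RuntimeError on non-zero exit, mirroring the
    'Failed command: ...' message seen in the traceback above."""
    proc = subprocess.run(command, capture_output=True, text=True)
    if proc.returncode != 0:
        s = (proc.stderr or proc.stdout).strip()
        raise RuntimeError(f'Failed command: {" ".join(command)}: {s}')
    return proc.stdout, proc.stderr, proc.returncode
```

So any systemd unit that hits its start timeout surfaces as this RuntimeError, which the mgr then reports as UPGRADE_REDEPLOY_DAEMON.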
Orchestrator - Bug #53904 (Duplicate): cephadm: ingress jobs stuck
https://tracker.ceph.com/issues/53904
2022-01-17T16:07:38Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2022-01-17_12:42:04-orch:cephadm-wip-swagner-testing-2022-01-17-1014-distro-default-smithi/">https://pulpito.ceph.com/swagner-2022-01-17_12:42:04-orch:cephadm-wip-swagner-testing-2022-01-17-1014-distro-default-smithi/</a></p>
<pre>
2022-01-17T13:17:17.053 DEBUG:teuthology.orchestra.run.smithi155:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:1cdf02ebbbdd98a055173cbac4d0171328a564dc shell -c /etc/ceph/ceph.conf -k />
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> for haproxy in `ceph orch ps | grep ^haproxy.nfs.foo. | awk '"'"'{print $1}'"'"'`; do
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> ceph orch daemon stop $haproxy
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> while ! ceph orch ps | grep $haproxy | grep stopped; do sleep 1 ; done
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> cat /mnt/foo/testfile
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> echo $haproxy > /mnt/foo/testfile
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> sync
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> ceph orch daemon start $haproxy
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> while ! ceph orch ps | grep $haproxy | grep running; do sleep 1 ; done
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> done
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> '
</pre><br />...snip...<br /><pre>
2022-01-17T13:17:20.571 INFO:teuthology.orchestra.run.smithi155.stdout:Check with each haproxy down in turn...
2022-01-17T13:17:21.281 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to stop haproxy.nfs.foo.smithi155.xhswck on host 'smithi155'
</pre><br />...snip...
<pre>
2022-01-17T13:17:36.893 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi155.xhswck smithi155 *:2049,9002 stopped 0s ago 79s - - <unknown> <un>
2022-01-17T13:17:36.898 INFO:teuthology.orchestra.run.smithi155.stdout:test
2022-01-17T13:17:37.528 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to start haproxy.nfs.foo.smithi155.xhswck on host 'smithi155'
</pre><br />...snip...<br /><pre>
2022-01-17T13:17:53.182 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi155.xhswck smithi155 *:2049,9002 running (5s) 0s ago 95s - - 2.3.17-d1c9119 14b>
2022-01-17T13:17:53.519 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to stop haproxy.nfs.foo.smithi162.mahcqs on host 'smithi162'
</pre><br />...snip...<br /><pre>
2022-01-17T13:18:07.810 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi162.mahcqs smithi162 *:2049,9002 stopped 0s ago 102s - - <unknown> <unk>
</pre><br />...snip...<br /><pre>
h[14066]: cephadm 2022-01-17T13:17:53.516345+0000 mgr.smithi155.uoijyc (mgr.14206) 339 : cephadm [INF] Schedule stop daemon haproxy.nfs.foo.smithi162.mahcqs
</pre>
<p>But I never see haproxy.nfs.foo.smithi162.mahcqs being started again.</p>
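The test loop above (`while ! ceph orch ps | grep $haproxy | grep running; do sleep 1; done`) polls with no upper bound, so when the daemon is never started again the job hangs instead of failing. A bounded-poll version can be sketched generically (a hypothetical helper, not teuthology code):

```python
import time

def wait_for_state(get_state, want, timeout=60.0, interval=1.0):
    """Poll get_state() until it returns `want`, like the shell loop above,
    but give up after `timeout` seconds so a daemon that never transitions
    cannot hang the job forever."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == want:
            return True
        time.sleep(interval)
    return False
```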
Orchestrator - Bug #51272 (Resolved): upgrade job: mgr.x getting removed by cephadm task: UPGRADE...
https://tracker.ceph.com/issues/51272
2021-06-18T08:47:37Z
Sebastian Wagner
<p>I think the fix for this bug is not yet merged:</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41478/">https://github.com/ceph/ceph/pull/41478/</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41568">https://github.com/ceph/ceph/pull/41568</a></li>
</ul>
<pre>
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2}
</pre>
<pre>
roles:
- - mon.a
- mon.c
- mgr.y
- osd.0
- osd.1
- osd.2
- osd.3
- client.0
- node-exporter.a
- alertmanager.a
- - mon.b
- mgr.x
- osd.4
- osd.5
- osd.6
- osd.7
- client.1
- prometheus.a
- grafana.a
- node-exporter.b
</pre>
<p><strong>then</strong></p>
<pre>
: audit 2021-06-15T20:14:24.260141+0000 mgr.y (mgr.14138) 64 : audit [DBG] from='client.34106 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;smithi143=x", "target">
</pre>
<p>Notice that the placement contains only <strong>2;smithi143=x</strong>.</p>
<pre>
2021-06-15T20:14:29.203 INFO:journalctl@ceph.mgr.y.smithi135.stdout:Jun 15 20:14:29 smithi135 systemd[1]: Stopping Ceph mgr.y for e2a4517e-ce15-11eb-8c13-001a4aab830c...
</pre>
<p><strong>resulting in</strong></p>
<pre>
cluster 2021-06-15T20:21:09.388112+0000 mgr.x (mgr.34112) 238 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: debug 2021-06-15T20:21:11.241+0000 7ffa34117700 -1 log_channel(cephadm) log [ERR] : Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.239485+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: audit 2021-06-15T20:21:11.241293+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.34112 172.21.15.143:0/2430240313' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
: cephadm 2021-06-15T20:21:11.241839+0000 mgr.x (mgr.34112) 239 : cephadm [INF] Upgrade: Target is quay.ceph.io/ceph-ci/ceph:da5e8184007182fa3cd5c8385fee4e08c5620fe2 with id 219a75e51380d5cdf3af7b1fa194d1bedd11>
: cephadm 2021-06-15T20:21:11.244338+0000 mgr.x (mgr.34112) 240 : cephadm [INF] Upgrade: Checking mgr daemons...
: cephadm 2021-06-15T20:21:11.244711+0000 mgr.x (mgr.34112) 241 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x)
: cephadm 2021-06-15T20:21:11.247775+0000 mgr.x (mgr.34112) 242 : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.253146+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: cluster 2021-06-15T20:21:11.255641+0000 mgr.x (mgr.34112) 243 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: audit 2021-06-15T20:21:11.259712+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.34112 ' entity='mgr.x'
</pre>
<pre>
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:alertmanager.a smithi135 running (117s) 107s ago 2m 0.20.0 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f d7ab1fc469b4
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:grafana.a smithi143 running (2m) 107s ago 2m 6.6.2 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a bdf08596362b
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:mgr.x smithi143 running (6m) 107s ago 6m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 bf659290d1ab
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.a smithi135 running (8m) 107s ago 9m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 a0083afbce6f
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.b smithi143 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 177430b8b423
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.c smithi135 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 881e672542be
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.a smithi135 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf acd96e0cc12e
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.b smithi143 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf a3c897228c6d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.0 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 9805ecc9628d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.1 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 29d8fc3fbb7f
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.2 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 193e0a2a0487
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.3 smithi135 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e2dea4bf5490
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.4 smithi143 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e0e19361a64a
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.5 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 71c57f8c0e3d
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.6 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 4da5baa064d1
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.7 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 098193d20e10
2021-06-15T20:21:16.896 INFO:teuthology.orchestra.run.smithi135.stdout:prometheus.a smithi143 running (110s) 107s ago 2m 2.18.1 docker.io/prom/prometheus:v2.18.1 de242295e225 fb7dd6cd2280
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log">http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log</a></p>
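A toy illustration of why that placement removes mgr.y (an assumption about the reconciliation logic, not cephadm's actual scheduler): daemons whose names are absent from the applied placement become removal candidates, so a spec naming only <strong>x</strong> leaves no standby mgr.

```python
def daemons_to_remove(running, placed_names):
    # Toy reconciliation: any running daemon not named in the applied
    # placement is scheduled for removal. With placement "2;smithi143=x"
    # only "x" is named, so "y" is removed despite the requested count of 2.
    return sorted(set(running) - set(placed_names))
```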
Orchestrator - Bug #46748 (Resolved): Module 'cephadm' has failed: auth get failed: failed to fin...
https://tracker.ceph.com/issues/46748
2020-07-29T09:53:48Z
Sebastian Wagner
<p>It was purged yesterday:</p>
<pre>
ceph osd purge 32 --yes-i-really-mean-it
ceph osd tree | grep 32 => no match
ceph osd crush remove osd.32 => device 'osd.32' does not appear in the crush map
</pre>
Orchestrator - Bug #45627 (Resolved): cephadm: frequently getting `1 hosts fail cephadm check`
https://tracker.ceph.com/issues/45627
2020-05-20T13:47:58Z
Sebastian Wagner
<p><a class="external" href="https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ADK3Y2XHTIJ2YV6MFSQX4XPTQ4WP5ETM/">https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ADK3Y2XHTIJ2YV6MFSQX4XPTQ4WP5ETM/</a></p>
<pre>
I can access all rdb devices and CephFS. They work. All OSDs in server-1
is up.
health: HEALTH_WARN
1 hosts fail cephadm check
failed to probe daemons or devices
I even restarted server-1. No luck.
I'm on server-1. cephadm complains it cannot access to server-1. In basic
term, server-1 cannot access server-1 (192.168.0.1)
server-1: 192.168.0.1
server-2: 192.168.0.3
$ ssh -F =(ceph cephadm get-ssh-config) -i =(ceph config-key get
mgr/cephadm/ssh_identity_key) root@server-1
> Success.
</pre>
<p>I think we have to rethink ssh connections. It looks like execnet can't handle being loaded within a long-running daemon.</p>
<pre>
This happens (unfortunately) frequently to me. Look for the active mgr
(ceph -s), and go restart the mgr service there (systemctl list-units |grep
mgr then systemctl restart NAMEOFSERVICE). This normally resolves that
error for me.
</pre>
Orchestrator - Bug #45427 (Resolved): cephadm: auth get failed: invalid entity_auth mon
https://tracker.ceph.com/issues/45427
2020-05-07T10:13:25Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/mgfritch-2020-05-07_02:27:06-rados-wip-mgfritch-testing-2020-05-06-1821-distro-basic-smithi/5029062">http://pulpito.ceph.com/mgfritch-2020-05-07_02:27:06-rados-wip-mgfritch-testing-2020-05-06-1821-distro-basic-smithi/5029062</a></p>
<pre>
cephadm 2020-05-07T03:43:08.989542+0000 mgr.smithi154.qjpiuj (mgr.27922) 6 : cephadm [ERR] Failed to apply node-exporter spec ServiceSpec({'placement': PlacementSpec(host_pattern='*'), 'service_type': 'node-exporter', 'service_id': None, 'unmanaged': False}): auth get failed: invalid entity_auth mon
Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/module.py", line 2219, in _apply_all_services
if self._apply_service(spec):
File "/usr/share/ceph/mgr/cephadm/module.py", line 2190, in _apply_service
create_func(daemon_id, host) # type: ignore
File "/usr/share/ceph/mgr/cephadm/module.py", line 2967, in _create_node_exporter
return self._create_daemon('node-exporter', daemon_id, host)
File "/usr/share/ceph/mgr/cephadm/module.py", line 2021, in _create_daemon
extra_ceph_config=extra_config.pop('config', ''))
File "/usr/share/ceph/mgr/cephadm/module.py", line 1974, in _get_config_and_keyring
'entity': ename,
File "/usr/share/ceph/mgr/mgr_module.py", line 1096, in check_mon_command
raise MonCommandFailed(f'{cmd_dict["prefix"]} failed: {r.stderr}')
mgr_module.MonCommandFailed: auth get failed: invalid entity_auth mon
</pre>
<p>(As a side note, why do we need the mon keyring for node-exporter?)</p>
bluestore - Bug #45335 (Resolved): cephadm upgrade: OSD.0 is not coming back after restart: rock...
https://tracker.ceph.com/issues/45335
2020-04-29T15:58:52Z
Sebastian Wagner
<pre>
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.842415+0000 mgr.x (mgr.34535) 47 : cephadm [INF] Upgrade: Target is quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537 with id 9c90938ad11a31c5ba9b58ed052bf347591ae047e94bca695e7a022672efd3b9
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.843492+0000 mgr.x (mgr.34535) 48 : cephadm [INF] Upgrade: Checking mgr daemons...
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.848251+0000 mgr.x (mgr.34535) 49 : cephadm [INF] Upgrade: All mgr daemons are up to date.
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.848483+0000 mgr.x (mgr.34535) 50 : cephadm [INF] Upgrade: Checking mon daemons...
2020-04-28T22:15:25.776 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.849302+0000 mgr.x (mgr.34535) 51 : cephadm [INF] Upgrade: Setting container_image for all mon...
2020-04-28T22:15:25.776 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.868867+0000 mgr.x (mgr.34535) 52 : cephadm [INF] Upgrade: All mon daemons are up to date.
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.869043+0000 mgr.x (mgr.34535) 53 : cephadm [INF] Upgrade: Checking crash daemons...
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.869744+0000 mgr.x (mgr.34535) 54 : cephadm [INF] Upgrade: Setting container_image for all crash...
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.870444+0000 mgr.x (mgr.34535) 55 : cephadm [INF] Upgrade: All crash daemons are up to date.
2020-04-28T22:15:25.778 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.870641+0000 mgr.x (mgr.34535) 56 : cephadm [INF] Upgrade: Checking osd daemons...
2020-04-28T22:15:25.778 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cluster 2020-04-28T22:15:24.333492+0000 mon.a (mon.0) 109 : cluster [DBG] mgrmap e25: x(active, since 41s), standbys: y
2020-04-28T22:15:26.521 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.991 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 systemd[1]: Stopping Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c...
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 received signal: Terminated from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 osd.0 51 *** Got signal Terminated ***
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 osd.0 51 *** Immediate shutdown (osd_fast_shutdown=true) ***
2020-04-28T22:15:27.271 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 podman[13417]: 2020-04-28 22:15:26.989657914 +0000 UTC m=+0.182639933 container died 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931 (image=docker.io/ceph/ceph:v15.2.0, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:27.272 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13417]: 2020-04-28 22:15:27.020897016 +0000 UTC m=+0.213879019 container stop 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931 (image=docker.io/ceph/ceph:v15.2.0, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:27.272 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13417]: 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931
2020-04-28T22:15:27.647 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.647 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:28.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.64575936 +0000 UTC m=+0.606987472 container create c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.820779587 +0000 UTC m=+0.782007706 container init c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.023 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.862377714 +0000 UTC m=+0.823605831 container start c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.023 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.862442318 +0000 UTC m=+0.823670460 container attach c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.349 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13459]: 2020-04-28 22:15:28.087744802 +0000 UTC m=+1.048972928 container died c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:28.605 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13459]: 2020-04-28 22:15:28.587885902 +0000 UTC m=+1.549114039 container remove c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.606 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Stopped Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c.
2020-04-28T22:15:29.021 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Starting Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c...
2020-04-28T22:15:29.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13562]: Error: no container with name or ID ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0 found: no such container
2020-04-28T22:15:29.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Started Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c.
2020-04-28T22:15:29.322 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.0207602 +0000 UTC m=+0.262426894 container create edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.322 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.120609901 +0000 UTC m=+0.362276575 container init edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.323 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.162161904 +0000 UTC m=+0.403828610 container start edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.323 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.162247399 +0000 UTC m=+0.403914112 container attach edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:29 smithi156 bash[4373]: cluster 2020-04-28T22:15:28.355727+0000 mon.a (mon.0) 144 : cluster [DBG] osdmap e53: 8 total, 7 up, 8 in
2020-04-28T22:15:29.775 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[11812]: audit 2020-04-28T22:15:28.777097+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/vg_nvme/lv_4 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/ln -snf /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-0/block
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: --> ceph-volume lvm activate successful for osd ID: 0
2020-04-28T22:15:29.778 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.444755363 +0000 UTC m=+0.686422056 container died edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:30.180 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.902456456 +0000 UTC m=+1.144123162 container remove edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:30.180 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.137040328 +0000 UTC m=+0.215911961 container create 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.505 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[10946]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.253693628 +0000 UTC m=+0.332565244 container init 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.295389095 +0000 UTC m=+0.374260713 container start 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.295458519 +0000 UTC m=+0.374330136 container attach 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:31.077 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[13577]: debug 2020-04-28T22:15:30.801+0000 7f47628adec0 -1 Falling back to public interface
2020-04-28T22:15:31.077 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.074+0000 7f47628adec0 -1 rocksdb: verify_sharding mismatch on sharding. requested = [(L,1,0-,),(O,3,0-13,),(m,3,0-,)] stored = []
2020-04-28T22:15:31.078 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.074+0000 7f47628adec0 -1 bluestore(/var/lib/ceph/osd/ceph-0) _open_db erroring opening db:
2020-04-28T22:15:31.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:31 smithi156 bash[4373]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.732 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.589+0000 7f47628adec0 -1 osd.0 0 OSD:init: unable to mount object store
2020-04-28T22:15:31.732 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.589+0000 7f47628adec0 -1 ** ERROR: osd init failed: (5) Input/output error
2020-04-28T22:15:32.021 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 podman[13761]: 2020-04-28 22:15:31.729840599 +0000 UTC m=+1.808712241 container died 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:32.614 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:32 smithi151 bash[10946]: cluster 2020-04-28T22:15:31.063005+0000 mgr.x (mgr.34535) 64 : cluster [DBG] pgmap v29: 1 pgs: 1 active+undersized+degraded; 0 B data, 4.0 MiB used, 707 GiB / 715 GiB avail; 1/3 objects degraded (33.333%)
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-28_21:52:13-rados-wip-mgfritch-testing-2020-04-28-1427-distro-basic-smithi/4995175/teuthology.log">http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-28_21:52:13-rados-wip-mgfritch-testing-2020-04-28-1427-distro-basic-smithi/4995175/teuthology.log</a></p>
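As an aside, the burst of "osd.0 reported immediately failed by osd.N" messages in the excerpt above can be summarized quickly. This is a hypothetical helper (not part of cephadm or teuthology) that counts, per reporting OSD, how many immediate-failure reports appear in a pasted log excerpt:

```python
import re
from collections import Counter

# Matches the monitor's immediate-failure report lines, e.g.:
#   ... cluster [DBG] osd.0 reported immediately failed by osd.2
REPORT_RE = re.compile(r"\[DBG\] (osd\.\d+) reported immediately failed by (osd\.\d+)")

def tally_failure_reports(log_text):
    """Return {reporter: count} of immediate-failure reports found in log_text."""
    counts = Counter()
    for match in REPORT_RE.finditer(log_text):
        _failed, reporter = match.groups()
        counts[reporter] += 1
    return dict(counts)

excerpt = """\
cluster [DBG] osd.0 reported immediately failed by osd.2
cluster [DBG] osd.0 reported immediately failed by osd.6
cluster [DBG] osd.0 reported immediately failed by osd.6
"""
print(tally_failure_reports(excerpt))  # {'osd.2': 1, 'osd.6': 2}
```

A tally like this makes it easy to see that every peer OSD (osd.1 through osd.7) reported the restarting osd.0 within the same second, which is the expected fast-failure path with osd_fast_shutdown=true.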
Orchestrator - Bug #45081 (Resolved): cephadm: `upgrade check 15.2.1` : OrchestratorError: Failed...
https://tracker.ceph.com/issues/45081
2020-04-14T10:33:34Z
Sebastian Wagner
<pre>
Apr 14 11:22:43 ceph1 bash[37629]: debug 2020-04-14T09:22:42.997+0000 7ff504d1f700 -1 Remote method threw exception: Traceback (most recent call last):
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/cephadm/module.py", line 548, in wrapper
Apr 14 11:22:43 ceph1 bash[37629]: return AsyncCompletion(value=f(*args, **kwargs), name=f.__name__)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/cephadm/module.py", line 3046, in upgrade_check
Apr 14 11:22:43 ceph1 bash[37629]: target_id, target_version = self._get_container_image_id(target_name)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/cephadm/module.py", line 3029, in _get_container_image_id
Apr 14 11:22:43 ceph1 bash[37629]: image_name, host, '\n'.join(out)))
Apr 14 11:22:43 ceph1 bash[37629]: orchestrator._interface.OrchestratorError: Failed to pull 15.2.1 on ceph0:
Apr 14 11:22:43 ceph1 bash[37629]: debug 2020-04-14T09:22:42.997+0000 7ff504d1f700 -1 mgr handle_command module 'orchestrator' command handler threw exception: Remote method threw exception: Traceback (most recent call last):
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/cephadm/module.py", line 548, in wrapper
Apr 14 11:22:43 ceph1 bash[37629]: return AsyncCompletion(value=f(*args, **kwargs), name=f.__name__)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/cephadm/module.py", line 3046, in upgrade_check
Apr 14 11:22:43 ceph1 bash[37629]: target_id, target_version = self._get_container_image_id(target_name)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/cephadm/module.py", line 3029, in _get_container_image_id
Apr 14 11:22:43 ceph1 bash[37629]: image_name, host, '\n'.join(out)))
Apr 14 11:22:43 ceph1 bash[37629]: orchestrator._interface.OrchestratorError: Failed to pull 15.2.1 on ceph0:
Apr 14 11:22:43 ceph1 bash[37629]: debug 2020-04-14T09:22:42.997+0000 7ff504d1f700 -1 mgr.server reply reply (22) Invalid argument Traceback (most recent call last):
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/mgr_module.py", line 1153, in _handle_command
Apr 14 11:22:43 ceph1 bash[37629]: return self.handle_command(inbuf, cmd)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 110, in handle_command
Apr 14 11:22:43 ceph1 bash[37629]: return dispatch[cmd['prefix']].call(self, cmd, inbuf)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/mgr_module.py", line 308, in call
Apr 14 11:22:43 ceph1 bash[37629]: return self.func(mgr, **kwargs)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 72, in <lambda>
Apr 14 11:22:43 ceph1 bash[37629]: wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 63, in wrapper
Apr 14 11:22:43 ceph1 bash[37629]: return func(*args, **kwargs)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/orchestrator/module.py", line 920, in _upgrade_check
Apr 14 11:22:43 ceph1 bash[37629]: completion = self.upgrade_check(image=image, version=ceph_version)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1510, in inner
Apr 14 11:22:43 ceph1 bash[37629]: completion = self._oremote(method_name, args, kwargs)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1581, in _oremote
Apr 14 11:22:43 ceph1 bash[37629]: return mgr.remote(o, meth, *args, **kwargs)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/mgr_module.py", line 1515, in remote
Apr 14 11:22:43 ceph1 bash[37629]: args, kwargs)
Apr 14 11:22:43 ceph1 bash[37629]: RuntimeError: Remote method threw exception: Traceback (most recent call last):
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/cephadm/module.py", line 548, in wrapper
Apr 14 11:22:43 ceph1 bash[37629]: return AsyncCompletion(value=f(*args, **kwargs), name=f.__name__)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/cephadm/module.py", line 3046, in upgrade_check
Apr 14 11:22:43 ceph1 bash[37629]: target_id, target_version = self._get_container_image_id(target_name)
Apr 14 11:22:43 ceph1 bash[37629]: File "/usr/share/ceph/mgr/cephadm/module.py", line 3029, in _get_container_image_id
Apr 14 11:22:43 ceph1 bash[37629]: image_name, host, '\n'.join(out)))
Apr 14 11:22:43 ceph1 bash[37629]: orchestrator._interface.OrchestratorError: Failed to pull 15.2.1 on ceph0:
</pre>
<p>And according to <code>ceph config-key dump mgr</code>, <code>container_image_base</code> is still set to the default.</p>
<p>Environment:</p>
<ul>
<li>15.2.0</li>
<li>debian + docker</li>
</ul>
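<p>A minimal, hypothetical reconstruction (not the actual cephadm source) of why the <code>OrchestratorError</code> above ends in a bare colon with no detail: the traceback shows the message being built with <code>'\n'.join(out)</code> at <code>module.py:3029</code>, and if the pull produced no output lines the joined string is empty. The message also hints that the bare version string <code>15.2.1</code> was passed where an image name was expected. The format string below is an assumption based on the quoted error text.</p>

```python
def format_pull_error(image_name: str, host: str, out: list) -> str:
    # Assumed shape of the message built at module.py:3029 in the traceback:
    # the joined command output is appended after the colon.
    return 'Failed to pull %s on %s:\n%s' % (image_name, host, '\n'.join(out))

# When the container runtime produced no output lines, the joined output is
# empty, so the error carries no hint about the real cause -- exactly the
# truncated "Failed to pull 15.2.1 on ceph0:" seen in the log above.
msg = format_pull_error('15.2.1', 'ceph0', [])
print(repr(msg))  # -> 'Failed to pull 15.2.1 on ceph0:\n'
```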
Orchestrator - Bug #44934 (Resolved): cephadm RGW: scary remove-deploy loop
https://tracker.ceph.com/issues/44934
2020-04-03T15:19:23Z
Sebastian Wagner
<pre>
[ceph: root@ceph-001 /]# ceph log last cephadm
2020-04-03T15:10:09.084085+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17284 : cephadm [INF] Deploying daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.pnrlnt on ceph-001
2020-04-03T15:10:14.405833+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17288 : cephadm [INF] Removing orphan daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.pnrlnt...
2020-04-03T15:10:14.406263+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17289 : cephadm [INF] Removing daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.pnrlnt from ceph-001
2020-04-03T15:10:17.515543+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17291 : cephadm [INF] Deploying daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.alszjq on ceph-001
2020-04-03T15:10:22.387848+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17295 : cephadm [INF] Removing orphan daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.alszjq...
2020-04-03T15:10:22.388287+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17296 : cephadm [INF] Removing daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.alszjq from ceph-001
2020-04-03T15:10:25.758392+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17298 : cephadm [INF] Deploying daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.skcssh on ceph-001
2020-04-03T15:10:31.296152+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17302 : cephadm [INF] Removing orphan daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.skcssh...
2020-04-03T15:10:31.296668+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17303 : cephadm [INF] Removing daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.skcssh from ceph-001
2020-04-03T15:10:34.460408+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17306 : cephadm [INF] Deploying daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.vdavwx on ceph-001
2020-04-03T15:10:39.835795+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17309 : cephadm [INF] Removing orphan daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.vdavwx...
2020-04-03T15:10:39.836176+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17310 : cephadm [INF] Removing daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.vdavwx from ceph-001
2020-04-03T15:10:43.532714+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17313 : cephadm [INF] Deploying daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.bnvnby on ceph-001
2020-04-03T15:10:48.353039+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17317 : cephadm [INF] Removing orphan daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.bnvnby...
2020-04-03T15:10:48.353354+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17318 : cephadm [INF] Removing daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.bnvnby from ceph-001
2020-04-03T15:10:51.440358+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17320 : cephadm [INF] Deploying daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.wggojs on ceph-001
2020-04-03T15:10:56.439351+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17324 : cephadm [INF] Removing orphan daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.wggojs...
2020-04-03T15:10:56.439577+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17325 : cephadm [INF] Removing daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.wggojs from ceph-001
2020-04-03T15:10:59.756791+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17327 : cephadm [INF] Deploying daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.oazpdw on ceph-001
2020-04-03T15:11:05.694262+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17331 : cephadm [INF] Removing orphan daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.oazpdw...
2020-04-03T15:11:05.694781+0000 mgr.ceph-001.gkjwqp (mgr.24114) 17332 : cephadm [INF] Removing daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.oazpdw from ceph-001
</pre>
<pre>
[ceph: root@ceph-001 /]# ceph orch ls --format json
[
{
"container_image_id": "0881eb8f169f5556a292b4e2c01d683172b12830a62a9225a98a8e206bb734f0",
"container_image_name": "docker.io/prom/alertmanager:latest",
"service_name": "alertmanager",
"size": 1,
"running": 1,
"spec": {
"placement": {
"count": 1
},
"service_type": "alertmanager"
},
"last_refresh": "2020-04-03T15:12:49.199779",
"created": "2020-04-02T19:22:48.831535"
},
{
"container_image_id": "204a01f9b0b6710dd0c0af7f37ce7139c47ff0f0105d778d7104c69282dfbbf1",
"container_image_name": "docker.io/ceph/ceph:v15",
"service_name": "crash",
"size": 1,
"running": 1,
"spec": {
"placement": {
"host_pattern": "*"
},
"service_type": "crash"
},
"last_refresh": "2020-04-03T15:12:49.199859",
"created": "2020-04-02T19:22:39.377773"
},
{
"container_image_id": "87a51ecf0b1c9a7b187b21c1b071425dafea0d765a96d5bc371c791169b3d7f4",
"container_image_name": "docker.io/ceph/ceph-grafana:latest",
"service_name": "grafana",
"size": 1,
"running": 1,
"spec": {
"placement": {
"count": 1
},
"service_type": "grafana"
},
"last_refresh": "2020-04-03T15:12:49.199939",
"created": "2020-04-02T19:22:47.226348"
},
{
"container_image_id": "204a01f9b0b6710dd0c0af7f37ce7139c47ff0f0105d778d7104c69282dfbbf1",
"container_image_name": "docker.io/ceph/ceph:v15",
"service_name": "mgr",
"size": 2,
"running": 1,
"spec": {
"placement": {
"count": 2
},
"service_type": "mgr"
},
"last_refresh": "2020-04-03T15:12:49.199696",
"created": "2020-04-02T19:22:38.502266"
},
{
"container_image_id": "204a01f9b0b6710dd0c0af7f37ce7139c47ff0f0105d778d7104c69282dfbbf1",
"container_image_name": "docker.io/ceph/ceph:v15",
"service_name": "mon",
"size": 5,
"running": 1,
"spec": {
"placement": {
"count": 5
},
"service_type": "mon"
},
"last_refresh": "2020-04-03T15:12:49.199561",
"created": "2020-04-02T19:22:37.710117"
},
{
"container_image_id": "e5a616e4b9cf68dfcad7782b78e118be4310022e874d52da85c55923fb615f87",
"container_image_name": "docker.io/prom/node-exporter:latest",
"service_name": "node-exporter",
"size": 1,
"running": 1,
"spec": {
"placement": {
"host_pattern": "*"
},
"service_type": "node-exporter"
},
"last_refresh": "2020-04-03T15:12:49.200019",
"created": "2020-04-02T19:22:47.999166"
},
{
"container_image_id": "358a0d2395fe711bb8258e8fb4b2d7865c0a9a6463969bcd1452ee8869ea6653",
"container_image_name": "docker.io/prom/prometheus:latest",
"service_name": "prometheus",
"size": 1,
"running": 1,
"spec": {
"placement": {
"count": 1
},
"service_type": "prometheus"
},
"last_refresh": "2020-04-03T15:12:49.200098",
"created": "2020-04-02T19:22:46.398827"
},
{
"service_name": "rgw.default-rgw-realm.eu-central-1.1",
"size": 1,
"running": 0,
"spec": {
"placement": {
"hosts": [
{
"hostname": "ceph-001",
"network": "",
"name": ""
}
]
},
"service_type": "rgw",
"service_id": "default-rgw-realm.eu-central-1.1",
"rgw_realm": "default-rgw-realm",
"rgw_zone": "eu-central-1",
"subcluster": "1"
}
}
]
</pre>
<pre>
[ceph: root@ceph-001 /]# ceph orch ps
NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
alertmanager.ceph-001 ceph-001 running (7h) 0s ago 19h 0.20.0 docker.io/prom/alertmanager:latest 0881eb8f169f d94d7969094d
crash.ceph-001 ceph-001 running (7h) 0s ago 19h 15.2.0 docker.io/ceph/ceph:v15 204a01f9b0b6 c4b036202241
grafana.ceph-001 ceph-001 running (7h) 0s ago 19h 6.6.2 docker.io/ceph/ceph-grafana:latest 87a51ecf0b1c 5b7b94b48f31
mgr.ceph-001.gkjwqp ceph-001 running (7h) 0s ago 19h 15.2.0 docker.io/ceph/ceph:v15 204a01f9b0b6 9ca007280456
mon.ceph-001 ceph-001 running (7h) 0s ago 19h 15.2.0 docker.io/ceph/ceph:v15 204a01f9b0b6 3d1ba9a2b697
node-exporter.ceph-001 ceph-001 running (7h) 0s ago 19h 0.18.1 docker.io/prom/node-exporter:latest e5a616e4b9cf 36d026c68ba1
osd.0 ceph-001 running (7h) 0s ago 18h 15.2.0 docker.io/ceph/ceph:v15 204a01f9b0b6 faf76193cbfe
osd.1 ceph-001 running (7h) 0s ago 18h 15.2.0 docker.io/ceph/ceph:v15 204a01f9b0b6 f82505bae0f1
prometheus.ceph-001 ceph-001 running (7h) 0s ago 19h 2.17.1 docker.io/prom/prometheus:latest 358a0d2395fe 2708d84cd484
</pre>
<pre>
[ceph: root@ceph-001 /]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph-001 ceph-001
</pre>
<p>The log is full of entries like:</p>
<pre>
"4/3/20 4:51:09 PM[INF]Removing orphan daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.awxkio..."
</pre>
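<p>A quick way to confirm the remove-deploy loop from the <code>ceph log last cephadm</code> output quoted above is to check that every rgw daemon name that was deployed was also removed again. This is an illustrative sketch over two of the quoted log lines, not a cephadm utility:</p>

```python
import re

# Sample lines in the format shown above (timestamps and mgr prefix elided).
log_lines = [
    "cephadm [INF] Deploying daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.pnrlnt on ceph-001",
    "cephadm [INF] Removing daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.pnrlnt from ceph-001",
    "cephadm [INF] Deploying daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.alszjq on ceph-001",
    "cephadm [INF] Removing daemon rgw.default-rgw-realm.eu-central-1.1.ceph-001.alszjq from ceph-001",
]

deployed, removed = set(), set()
for line in log_lines:
    m = re.search(r"\[INF\] (Deploying|Removing) daemon (\S+)", line)
    if m:
        (deployed if m.group(1) == "Deploying" else removed).add(m.group(2))

# Every deployed rgw daemon was removed again: flapping, not progress.
flapping = deployed & removed
print(len(flapping))  # -> 2
```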
Dashboard - Bug #44063 (Resolved): tox: ImportError: cannot import name 'ensure_text'
https://tracker.ceph.com/issues/44063
2020-02-10T15:02:20Z
Sebastian Wagner
<pre>
Requirement already satisfied: tox in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (3.14.3)
Requirement already satisfied: filelock<4,>=3.0.0 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (3.0.12)
Requirement already satisfied: packaging>=14 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (20.1)
Requirement already satisfied: py<2,>=1.4.17 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (1.8.1)
Requirement already satisfied: importlib-metadata<2,>=0.12; python_version < "3.8" in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (1.5.0)
Requirement already satisfied: toml>=0.9.4 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (0.10.0)
Requirement already satisfied: pluggy<1,>=0.12.0 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (0.13.1)
Requirement already satisfied: virtualenv>=16.0.0 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (20.0.0)
Requirement already satisfied: six<2,>=1.0.0 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (1.11.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from packaging>=14->tox) (2.4.6)
Requirement already satisfied: zipp>=0.5 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from importlib-metadata<2,>=0.12; python_version < "3.8"->tox) (2.2.0)
Requirement already satisfied: appdirs<2,>=1.4.3 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from virtualenv>=16.0.0->tox) (1.4.3)
Requirement already satisfied: importlib-resources<2,>=1.0; python_version < "3.7" in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from virtualenv>=16.0.0->tox) (1.0.2)
py3 create: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/py3
ERROR: invocation failed (exit code 1), logfile: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/py3/log/py3-0.log
================================== log start ===================================
ERROR:root:ImportError: cannot import name 'ensure_text'
=================================== log end ====================================
ERROR: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 py3 (exited with code 1)
lint create: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/lint
ERROR: invocation failed (exit code 1), logfile: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/lint/log/lint-0.log
================================== log start ===================================
ERROR:root:ImportError: cannot import name 'ensure_text'
=================================== log end ====================================
ERROR: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 lint (exited with code 1)
check create: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/check
ERROR: invocation failed (exit code 1), logfile: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/check/log/check-0.log
================================== log start ===================================
ERROR:root:ImportError: cannot import name 'ensure_text'
=================================== log end ====================================
ERROR: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 check (exited with code 1)
___________________________________ summary ____________________________________
ERROR: py3: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 py3 (exited with code 1)
ERROR: lint: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 lint (exited with code 1)
ERROR: check: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 check (exited with code 1)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/44319/consoleFull#-14159901736733401c-e9d0-4737-9832-6594c5da0afa">https://jenkins.ceph.com/job/ceph-pull-requests/44319/consoleFull#-14159901736733401c-e9d0-4737-9832-6594c5da0afa</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/44308/consoleFull#-156140668e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/44308/consoleFull#-156140668e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
Ceph - Bug #42528 (Resolved): python-common build failure: File not found: ceph-*.egg-info
https://tracker.ceph.com/issues/42528
2019-10-29T12:19:41Z
Sebastian Wagner
<pre>
RPM build errors:
File not found: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph
File not found by glob: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph-*.egg-info
+ rm -fr /tmp/install-deps.1830
Build step 'Execute shell' marked build as failure
</pre>
<pre>
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/ceph
copying ceph/__init__.py -> build/lib/ceph
copying ceph/exceptions.py -> build/lib/ceph
creating build/lib/ceph/deployment
copying ceph/deployment/__init__.py -> build/lib/ceph/deployment
copying ceph/deployment/drive_group.py -> build/lib/ceph/deployment
copying ceph/deployment/ssh_orchestrator.py -> build/lib/ceph/deployment
creating build/lib/ceph/tests
copying ceph/tests/__init__.py -> build/lib/ceph/tests
copying ceph/tests/test_drive_group.py -> build/lib/ceph/tests
running install_lib
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/exceptions.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/ssh_orchestrator.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/test_drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/exceptions.py to exceptions.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/drive_group.py to drive_group.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/ssh_orchestrator.py to ssh_orchestrator.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/test_drive_group.py to test_drive_group.cpython-36.pyc
running install_egg_info
running egg_info
creating ceph.egg-info
writing ceph.egg-info/PKG-INFO
writing dependency_links to ceph.egg-info/dependency_links.txt
writing requirements to ceph.egg-info/requires.txt
writing top-level names to ceph.egg-info/top_level.txt
writing manifest file 'ceph.egg-info/SOURCES.txt'
reading manifest file 'ceph.egg-info/SOURCES.txt'
writing manifest file 'ceph.egg-info/SOURCES.txt'
Copying ceph.egg-info to /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph-1.0.0-py3.6.egg-info
running install_scripts
Traceback (most recent call last):
File "setup.py", line 45, in <module>
'Programming Language :: Python :: 3.6',
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
dist = best[req.key] = env.best_match(req, self, installer)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
return self.obtain(req, installer) # try and download/install
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
return installer(requirement)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
return cmd.easy_install(req)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 623, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 653, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 849, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1130, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1115, in run_setup
run_setup(setup_script, args)
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 69, in run_setup
lambda: execfile(
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 120, in run
return func()
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 71, in <lambda>
{'__file__':setup_script, '__name__':'__main__'}
File "setup.py", line 21, in <module>
packages=find_packages(),
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 269, in __init__
_Distribution.__init__(self,attrs)
File "/usr/lib64/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 302, in finalize_options
ep.load()(self, ep.name, value)
File "build/bdist.linux-x86_64/egg/setuptools_scm/integration.py", line 9, in version_keyword
File "build/bdist.linux-x86_64/egg/setuptools_scm/version.py", line 66, in _warn_if_setuptools_outdated
setuptools_scm.version.SetuptoolsOutdatedWarning: your setuptools is too old (<12)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull</a></p>
CephFS - Bug #40429 (Resolved): mgr/volumes: subvolume.py calls Exceptions with too few arguments.
https://tracker.ceph.com/issues/40429
2019-06-19T09:45:11Z
Sebastian Wagner
<p>mypy revealed:</p>
<pre>
+pybind/mgr/volumes/fs/subvolume.py: note: In member "get_subvolume_path" of class "SubVolume":
+pybind/mgr/volumes/fs/subvolume.py:167: error: Too few arguments for "VolumeException"
+pybind/mgr/volumes/fs/subvolume.py: note: In member "_get_ancestor_xattr" of class "SubVolume":
+pybind/mgr/volumes/fs/subvolume.py:203: error: Too few arguments for "NoData"
</pre>
<p>Both of these errors are actual bugs in the code and need to be fixed.</p>
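<p>A hypothetical sketch of the class of bug mypy flagged above: an exception type whose constructor requires two arguments, while the call site passes only one. The class name is taken from the mypy output; its body is assumed for illustration. The point is that "too few arguments" means the <code>raise</code> would fail with a <code>TypeError</code> at runtime, masking the error the code meant to report.</p>

```python
class VolumeException(Exception):
    # Assumed constructor signature, for illustration only.
    def __init__(self, error_code: int, error_message: str) -> None:
        self.errno = error_code
        self.error_str = error_message
        super().__init__(error_code, error_message)

# Correct call site:
ok = VolumeException(2, "subvolume does not exist")

# The buggy pattern: constructing with too few arguments raises TypeError
# at the raise site instead of the intended VolumeException.
try:
    raise VolumeException("subvolume does not exist")  # type: ignore[call-arg]
except TypeError:
    pass  # mypy catches this statically; at runtime the module would crash
```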
mgr - Bug #39644 (Resolved): mgr/zabbix: ERROR: test_zabbix (tasks.mgr.test_module_selftest.TestM...
https://tracker.ceph.com/issues/39644
2019-05-09T08:32:02Z
Sebastian Wagner
<pre>
======================================================================
ERROR: test_zabbix (tasks.mgr.test_module_selftest.TestModuleSelftest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner-testing/qa/tasks/mgr/test_module_selftest.py", line 41, in test_zabbix
self._selftest_plugin("zabbix")
File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner-testing/qa/tasks/mgr/test_module_selftest.py", line 34, in _selftest_plugin
"mgr", "self-test", "module", module_name)
File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner-testing/qa/tasks/ceph_manager.py", line 1157, in raw_cluster_cmd
stdout=StringIO(),
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 205, in run
r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 435, in run
r.wait()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 162, in wait
self._raise_for_status()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 184, in _raise_for_status
node=self.hostname, label=self.label
CommandFailedError: Command failed on smithi023 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph mgr self-test module zabbix'
</pre>
<pre>
2019-05-08 20:53:12.238 7fdcc2030700 -1 Remote method threw exception: Traceback (most recent call last):
File "/usr/share/ceph/mgr/zabbix/module.py", line 458, in self_test
data = self.get_data()
File "/usr/share/ceph/mgr/zabbix/module.py", line 209, in get_data
data['[{0},raw_bytes_used]'.format(pool['name'])] = pool['stats']['raw_bytes_used']
KeyError: ('raw_bytes_used',)
2019-05-08 20:53:12.238 7fdcc2030700 -1 mgr.server reply reply (1) Operation not permitted Test failed: Remote method threw exception: Traceback (most recent call last):
File "/usr/share/ceph/mgr/zabbix/module.py", line 458, in self_test
data = self.get_data()
File "/usr/share/ceph/mgr/zabbix/module.py", line 209, in get_data
data['[{0},raw_bytes_used]'.format(pool['name'])] = pool['stats']['raw_bytes_used']
KeyError: ('raw_bytes_used',)
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-05-08_15:36:11-rados:mgr-wip-swagner-testing-distro-basic-smithi/3941021/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2019-05-08_15:36:11-rados:mgr-wip-swagner-testing-distro-basic-smithi/3941021/teuthology.log</a></p>
<p>Introduced in <a class="external" href="https://github.com/ceph/ceph/pull/26152">https://github.com/ceph/ceph/pull/26152</a></p>
<p>Greg, I've assigned this to you, as Dmitriy Rabotjagov is not part of the mgr project.</p>
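<p>The KeyError above comes from indexing <code>pool['stats']['raw_bytes_used']</code> unconditionally, so the module crashes on any release where that stats field is absent. A defensive lookup sketch (illustrative only, not the fix that was actually merged for this ticket):</p>

```python
# Sketch of a defensive stat lookup: return a default instead of
# raising KeyError when a stats field is missing on this Ceph release.
# (Illustrative only -- not the fix merged upstream.)
def pool_stat(pool, key, default=0):
    return pool.get('stats', {}).get(key, default)

data = {}
# Example pool dict without 'raw_bytes_used' (field names are illustrative):
pool = {'name': 'rbd', 'stats': {'bytes_used': 1024}}
data['[{0},raw_bytes_used]'.format(pool['name'])] = pool_stat(pool, 'raw_bytes_used')
print(data)  # {'[rbd,raw_bytes_used]': 0}
```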
Ceph - Bug #39595 (Resolved): Compile error: ceph-dencoder: invalid operands (*UND* and .gcc_exce...
https://tracker.ceph.com/issues/39595
2019-05-06T09:57:05Z
Sebastian Wagner
<p><a class="external" href="https://shaman.ceph.com/builds/ceph/master/1991495a22fa74210348ffd4f261c314ef3f056c/notcmalloc/152091/">https://shaman.ceph.com/builds/ceph/master/1991495a22fa74210348ffd4f261c314ef3f056c/notcmalloc/152091/</a></p>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/25093//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/25093//consoleFull</a></p>
<pre>
[ 90%] Building CXX object src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/ceph_dencoder.cc.o
{standard input}: Assembler messages:
{standard input}:1923894: Error: invalid operands (*UND* and .gcc_except_table sections) for `-'
{standard input}:1923897: Error: invalid operands (*UND* and .gcc_except_table sections) for `-'
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
make[2]: *** [src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/ceph_dencoder.cc.o] Error 1
make[1]: *** [src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/all] Error 2
make: *** [all] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.I0VnuO (%build)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.I0VnuO (%build)
Finished: FAILURE
</pre>
Dashboard - Bug #38590 (Resolved): mimic: dashboard: failed to compile the dashboard: Cannot find...
https://tracker.ceph.com/issues/38590
2019-03-05T18:51:58Z
Sebastian Wagner
<pre>
Date: 2019-03-05T05:49:15.789Z
Hash: 894ed43e42aed84f2e6a
Time: 21545ms
chunk {scripts} scripts.fc88ef4a23399c760d0b.bundle.js (scripts) 210 kB [initial] [rendered]
chunk {0} styles.89887a238a2462b3f866.bundle.css (styles) 211 kB [initial] [rendered]
chunk {1} polyfills.997d8cc03812de50ae67.bundle.js (polyfills) 84 bytes [initial] [rendered]
chunk {2} main.ee32620ecd1edff94184.bundle.js (main) 84 bytes [initial] [rendered]
chunk {3} inline.318b50c57b4eba3d437b.bundle.js (inline) 796 bytes [entry] [rendered]

WARNING in Invalid animation value at 11938:14. Ignoring.

WARNING in Invalid animation value at 11937:22. Ignoring.

ERROR in node_modules/@types/lodash/common/object.d.ts(1689,12): error TS2304: Cannot find name 'Exclude'.
node_modules/@types/lodash/common/object.d.ts(1766,12): error TS2304: Cannot find name 'Exclude'.
node_modules/@types/lodash/common/object.d.ts(1842,34): error TS2304: Cannot find name 'Exclude'.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! ceph-dashboard@0.0.0 build: `ng build "--prod"`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the ceph-dashboard@0.0.0 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/jenkins-build/.npm/_logs/2019-03-05T05_49_15_864Z-debug.log
src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/build.make:1435: recipe for target '../src/pybind/mgr/dashboard/frontend/dist' failed
make[3]: *** [../src/pybind/mgr/dashboard/frontend/dist] Error 1
CMakeFiles/Makefile2:4878: recipe for target 'src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/all' failed
make[2]: *** [src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/15029/consoleFull#-2110717508ed6e825c-5772-40cc-ba3e-4f4430b3e036">https://jenkins.ceph.com/job/ceph-pull-requests/15029/consoleFull#-2110717508ed6e825c-5772-40cc-ba3e-4f4430b3e036</a></p>