Ceph : Issues
https://tracker.ceph.com/
2022-01-19T16:07:48Z
Ceph
Redmine
Orchestrator - Bug #53939 (Resolved): ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_RE...
https://tracker.ceph.com/issues/53939
2022-01-19T16:07:48Z
Sebastian Wagner
<pre>
mon[102341]: : cluster [WRN] Health check failed: Upgrading daemon osd.0 on host smithi103 failed. (UPGRADE_REDEPLOY_DAEMON)
mon[66897]: cephadm 2022-01-18T16:27:48.439275+0000 mgr.smithi103.wyeocw (mgr.14712) 129 : cephadm [ERR] cephadm exited with an error code: 1, stderr:Redeploy daemon osd.0 ...
mon[66897]: Non-zero exit code 1 from systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0
mon[66897]: systemctl: stderr Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: systemctl: stderr See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8615, in <module>
mon[66897]: main()
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8603, in main
mon[66897]: r = ctx.func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1790, in _default_image
mon[66897]: return func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 4603, in command_deploy
mon[66897]: ports=daemon_ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2715, in deploy_daemon
mon[66897]: c, osd_fsid=osd_fsid, ports=ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2960, in deploy_daemon_units
mon[66897]: call_throws(ctx, ['systemctl', 'start', unit_name])
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1469, in call_throws
mon[66897]: raise RuntimeError(f'Failed command: {" ".join(command)}: {s}')
mon[66897]: RuntimeError: Failed command: systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0: Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/usr/share/ceph/mgr/cephadm/serve.py", line 1402, in _remote_connection
mon[66897]: yield (conn, connr)
mon[66897]: File "/usr/share/ceph/mgr/cephadm/serve.py", line 1295, in _run_cephadm
mon[66897]: code, '\n'.join(err)))
mon[66897]: orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr:Redeploy daemon osd.0 ...
mon[66897]: Non-zero exit code 1 from systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0
mon[66897]: systemctl: stderr Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: systemctl: stderr See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8615, in <module>
mon[66897]: main()
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8603, in main
mon[66897]: r = ctx.func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1790, in _default_image
mon[66897]: return func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 4603, in command_deploy
mon[66897]: ports=daemon_ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2715, in deploy_daemon
mon[66897]: c, osd_fsid=osd_fsid, ports=ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2960, in deploy_daemon_units
mon[66897]: call_throws(ctx, ['systemctl', 'start', unit_name])
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1469, in call_throws
mon[66897]: raise RuntimeError(f'Failed command: {" ".join(command)}: {s}')
mon[66897]: RuntimeError: Failed command: systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0: Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
...
cephadm 2022-01-18T16:27:48.439412+0000 mgr.smithi103.wyeocw (mgr.14712) 130 : cephadm [ERR] Upgrade: Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed.
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2022-01-18_15:34:53-rados:cephadm-wip-swagner2-testing-2022-01-18-1242-pacific-distro-default-smithi/6624255">https://pulpito.ceph.com/swagner-2022-01-18_15:34:53-rados:cephadm-wip-swagner2-testing-2022-01-18-1242-pacific-distro-default-smithi/6624255</a></p>
Orchestrator - Bug #53904 (Duplicate): cephadm: ingress jobs stuck
https://tracker.ceph.com/issues/53904
2022-01-17T16:07:38Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2022-01-17_12:42:04-orch:cephadm-wip-swagner-testing-2022-01-17-1014-distro-default-smithi/">https://pulpito.ceph.com/swagner-2022-01-17_12:42:04-orch:cephadm-wip-swagner-testing-2022-01-17-1014-distro-default-smithi/</a></p>
<pre>
2022-01-17T13:17:17.053 DEBUG:teuthology.orchestra.run.smithi155:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:1cdf02ebbbdd98a055173cbac4d0171328a564dc shell -c /etc/ceph/ceph.conf -k />
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> for haproxy in `ceph orch ps | grep ^haproxy.nfs.foo. | awk '"'"'{print $1}'"'"'`; do
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> ceph orch daemon stop $haproxy
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> while ! ceph orch ps | grep $haproxy | grep stopped; do sleep 1 ; done
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> cat /mnt/foo/testfile
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> echo $haproxy > /mnt/foo/testfile
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> sync
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> ceph orch daemon start $haproxy
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> while ! ceph orch ps | grep $haproxy | grep running; do sleep 1 ; done
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> done
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> '
</pre><br />...snip...<br /><pre>
2022-01-17T13:17:20.571 INFO:teuthology.orchestra.run.smithi155.stdout:Check with each haproxy down in turn...
2022-01-17T13:17:21.281 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to stop haproxy.nfs.foo.smithi155.xhswck on host 'smithi155'
</pre><br />...snip...
<pre>
2022-01-17T13:17:36.893 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi155.xhswck smithi155 *:2049,9002 stopped 0s ago 79s - - <unknown> <un>
2022-01-17T13:17:36.898 INFO:teuthology.orchestra.run.smithi155.stdout:test
2022-01-17T13:17:37.528 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to start haproxy.nfs.foo.smithi155.xhswck on host 'smithi155'
</pre><br />...snip...<br /><pre>
2022-01-17T13:17:53.182 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi155.xhswck smithi155 *:2049,9002 running (5s) 0s ago 95s - - 2.3.17-d1c9119 14b>
2022-01-17T13:17:53.519 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to stop haproxy.nfs.foo.smithi162.mahcqs on host 'smithi162'
</pre><br />...snip...<br /><pre>
2022-01-17T13:18:07.810 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi162.mahcqs smithi162 *:2049,9002 stopped 0s ago 102s - - <unknown> <unk>
</pre><br />...snip...<br /><pre>
h[14066]: cephadm 2022-01-17T13:17:53.516345+0000 mgr.smithi155.uoijyc (mgr.14206) 339 : cephadm [INF] Schedule stop daemon haproxy.nfs.foo.smithi162.mahcqs
</pre>
<p>But I never see haproxy.nfs.foo.smithi162.mahcqs being started again.</p>
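<p>The shell loop above waits forever for the daemon to report "running" again, so when the start never happens the whole job simply hangs. A minimal sketch of a bounded poll (assuming the plain-text "ceph orch ps" output shown above; the helper name and timeout are illustrative):</p>
<pre>
import subprocess
import time

def wait_for_state(daemon, state, timeout=300):
    """Poll `ceph orch ps` until `daemon` reports `state`, or give up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(['ceph', 'orch', 'ps'],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.startswith(daemon) and state in line:
                return True
        time.sleep(1)
    raise TimeoutError(f'{daemon} did not reach {state!r} within {timeout}s')

# e.g. wait_for_state('haproxy.nfs.foo.smithi162.mahcqs', 'running')
</pre>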
mgr - Bug #53538 (Resolved): mgr/stats: ZeroDivisionError
https://tracker.ceph.com/issues/53538
2021-12-08T13:37:49Z
Sebastian Wagner
<pre>
root@service-01-08020:~# ceph osd status storage-01-08002
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1623, in _handle_command
return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 416, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/status/module.py", line 338, in handle_osd_status
wr_ops_rate = (self.get_rate("osd", osd_id.__str__(), "osd.op_w") +
File "/usr/share/ceph/mgr/status/module.py", line 28, in get_rate
return (data[-1][1] - data[-2][1]) // int(data[-1][0] - data[-2][0])
ZeroDivisionError: integer division or modulo by zero
</pre>
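<p>The crash happens when the two most recent perf-counter samples carry the same timestamp, so the time delta in get_rate() is zero. A minimal defensive sketch of a guard (illustrative only, not necessarily the actual fix in the status module):</p>
<pre>
def get_rate(data):
    """Rate between the two most recent (timestamp, value) samples.

    Illustrative guard only: return 0 when there are fewer than two samples
    or when both samples share a timestamp, instead of dividing by zero as
    in module.py line 28 above.
    """
    if len(data) < 2:
        return 0
    dt = int(data[-1][0] - data[-2][0])
    if dt == 0:
        return 0
    return (data[-1][1] - data[-2][1]) // dt

# Two samples taken within the same second reproduce the failure mode:
print(get_rate([(100, 5), (100, 9)]))   # prints 0 instead of raising ZeroDivisionError
</pre>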
<p>Since those PRs:</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/25337">https://github.com/ceph/ceph/pull/25337</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/26270">https://github.com/ceph/ceph/pull/26270</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/26270/files#diff-dc6485f717f4dce4863733896375af75963412ebb2abc4b62fcd1f5233eee07dR44">https://github.com/ceph/ceph/pull/26270/files#diff-dc6485f717f4dce4863733896375af75963412ebb2abc4b62fcd1f5233eee07dR44</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/28603">https://github.com/ceph/ceph/pull/28603</a> </li>
<li><a class="external" href="https://tracker.ceph.com/issues/43224#note-11">https://tracker.ceph.com/issues/43224#note-11</a></li>
</ul>
<p>no one has had the patience to look into this all over again.</p>
Orchestrator - Bug #51590 (Resolved): cephadm: iscsi: The first gateway defined must be the local...
https://tracker.ceph.com/issues/51590
2021-07-08T09:58:40Z
Sebastian Wagner
<p>1. Deploy cluster using cephadm<br />2. Deploy iscsi services using iscsi.yml file</p>
<pre>
[ceph: root@magna007 ~]# cat iscsi.yml
service_type: iscsi
service_id: iscsi
placement:
hosts:
- host1
- host2
spec:
pool: iscsi_pool
trusted_ip_list: "10.8.128.108,10.8.128.113"
api_user: admin
api_password: admin
</pre>
<p>3. Log in to the container using "podman exec -it 12e38d148b25 /bin/sh", then run gwcli<br />4. Create target and gateways</p>
<pre>
[root@host1 ~]# podman exec -it 99b46c7235da sh
sh-4.4# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.8.128.108 host1 host1
10.8.128.113 host2 host2
127.0.1.1 host1 host1 ceph-3ce40d5c-dd5a-11eb-8a7a-002590fc2538-iscsi.iscsi.host1.kkxugr-tcmu
sh-4.4# gwcli
/iscsi-targets> create target_iqn=iqn.2003-01.com.example.iscsi-gw:ceph-igw
ok
/iscsi-targets> ls
o- iscsi-targets ................................................................................. [DiscoveryAuth: None, Targets: 1]
o- iqn.2003-01.com.example.iscsi-gw:ceph-igw ............................................................ [Auth: None, Gateways: 0]
o- disks ............................................................................................................ [Disks: 0]
o- gateways .............................................................................................. [Up: 0/0, Portals: 0]
o- host-groups .................................................................................................... [Groups : 0]
o- hosts ......................................................................................... [Auth: ACL_ENABLED, Hosts: 0]
/iscsi-targets> goto gateways
/iscsi-target...-igw/gateways> create host1.ceph.example.com 10.8.128.108
The first gateway defined must be the local machine
/iscsi-target...-igw/gateways> create host2.ceph.example.com 10.8.128.113
The first gateway defined must be the local machine
/iscsi-target...-igw/gateways> create host1 10.8.128.108
The first gateway defined must be the local machine
/iscsi-target...-igw/gateways> create host2 10.8.128.113
The first gateway defined must be the local machine
/iscsi-target...-igw/gateways> create ceph-3ce40d5c-dd5a-11eb-8a7a-002590fc2538-iscsi.iscsi.host1.kkxugr-tcmu 10.8.128.108
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : Gateway 'ceph-3ce40d5c-dd5a-11eb-8a7a-002590fc2538-iscsi.iscsi.host1.kkxugr-tcmu' is not resolvable to an IP address
</pre>
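<p>The rejections above are consistent with the gateway-is-local check comparing the requested gateway name against the container's own hostname, which cephadm sets to the daemon name (see the 127.0.1.1 entry in /etc/hosts). A rough illustration of that failure mode, not the actual ceph-iscsi code:</p>
<pre>
import socket

# Inside the iscsi container the local hostname is the cephadm daemon name
# (e.g. ceph-3ce40d5c-...-iscsi.iscsi.host1.kkxugr-tcmu), so a naive
# "is this gateway the local machine?" check rejects every name the admin tries.
requested = "host1.ceph.example.com"              # what was typed in gwcli
local_names = {socket.gethostname(), socket.getfqdn()}

if requested not in local_names:
    print("The first gateway defined must be the local machine")
</pre>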
Orchestrator - Bug #51272 (Resolved): upgrade job: mgr.x getting removed by cephadm task: UPGRADE...
https://tracker.ceph.com/issues/51272
2021-06-18T08:47:37Z
Sebastian Wagner
<p>I think the fix for this bug is not yet merged:</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41478/">https://github.com/ceph/ceph/pull/41478/</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41568">https://github.com/ceph/ceph/pull/41568</a></li>
</ul>
<pre>
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2}
</pre>
<pre>
roles:
- - mon.a
- mon.c
- mgr.y
- osd.0
- osd.1
- osd.2
- osd.3
- client.0
- node-exporter.a
- alertmanager.a
- - mon.b
- mgr.x
- osd.4
- osd.5
- osd.6
- osd.7
- client.1
- prometheus.a
- grafana.a
- node-exporter.b
</pre>
<p><strong>then</strong></p>
<pre>
: audit 2021-06-15T20:14:24.260141+0000 mgr.y (mgr.14138) 64 : audit [DBG] from='client.34106 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;smithi143=x", "target">
</pre>
<p>notice the placement only contains <strong>2;smithi143=x</strong></p>
<pre>
2021-06-15T20:14:29.203 INFO:journalctl@ceph.mgr.y.smithi135.stdout:Jun 15 20:14:29 smithi135 systemd[1]: Stopping Ceph mgr.y for e2a4517e-ce15-11eb-8c13-001a4aab830c...
</pre>
<p><strong>resulting in</strong></p>
<pre>
cluster 2021-06-15T20:21:09.388112+0000 mgr.x (mgr.34112) 238 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: debug 2021-06-15T20:21:11.241+0000 7ffa34117700 -1 log_channel(cephadm) log [ERR] : Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.239485+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: audit 2021-06-15T20:21:11.241293+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.34112 172.21.15.143:0/2430240313' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
: cephadm 2021-06-15T20:21:11.241839+0000 mgr.x (mgr.34112) 239 : cephadm [INF] Upgrade: Target is quay.ceph.io/ceph-ci/ceph:da5e8184007182fa3cd5c8385fee4e08c5620fe2 with id 219a75e51380d5cdf3af7b1fa194d1bedd11>
: cephadm 2021-06-15T20:21:11.244338+0000 mgr.x (mgr.34112) 240 : cephadm [INF] Upgrade: Checking mgr daemons...
: cephadm 2021-06-15T20:21:11.244711+0000 mgr.x (mgr.34112) 241 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x)
: cephadm 2021-06-15T20:21:11.247775+0000 mgr.x (mgr.34112) 242 : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.253146+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: cluster 2021-06-15T20:21:11.255641+0000 mgr.x (mgr.34112) 243 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: audit 2021-06-15T20:21:11.259712+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.34112 ' entity='mgr.x'
</pre>
<pre>
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:alertmanager.a smithi135 running (117s) 107s ago 2m 0.20.0 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f d7ab1fc469b4
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:grafana.a smithi143 running (2m) 107s ago 2m 6.6.2 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a bdf08596362b
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:mgr.x smithi143 running (6m) 107s ago 6m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 bf659290d1ab
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.a smithi135 running (8m) 107s ago 9m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 a0083afbce6f
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.b smithi143 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 177430b8b423
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.c smithi135 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 881e672542be
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.a smithi135 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf acd96e0cc12e
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.b smithi143 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf a3c897228c6d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.0 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 9805ecc9628d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.1 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 29d8fc3fbb7f
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.2 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 193e0a2a0487
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.3 smithi135 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e2dea4bf5490
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.4 smithi143 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e0e19361a64a
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.5 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 71c57f8c0e3d
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.6 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 4da5baa064d1
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.7 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 098193d20e10
2021-06-15T20:21:16.896 INFO:teuthology.orchestra.run.smithi135.stdout:prometheus.a smithi143 running (110s) 107s ago 2m 2.18.1 docker.io/prom/prometheus:v2.18.1 de242295e225 fb7dd6cd2280
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log">http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log</a></p>
Orchestrator - Bug #50759 (Rejected): Redeploying daemon prometheus.a on host smithi159 failed: '...
https://tracker.ceph.com/issues/50759
2021-05-11T14:17:47Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-smithi/</a></p>
<pre>
cluster 2021-05-11T09:58:40.820539+0000 mgr.y (mgr.44106) 332 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 7.6 KiB data, 111 MiB used, 715 GiB / 715 GiB avail; 852 B/s rd, 0 op/s
cephadm 2021-05-11T09:58:41.309082+0000 mgr.y (mgr.44106) 333 : cephadm [INF] Upgrade: Updating prometheus.a
cephadm 2021-05-11T09:58:41.326009+0000 mgr.y (mgr.44106) 334 : cephadm [INF] Deploying daemon prometheus.a on smithi159
cluster 2021-05-11T09:58:40.820539+0000 mgr.y (mgr.44106) 332 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 7.6 KiB data, 111 MiB used, 715 GiB / 715 GiB avail; 852 B/s rd, 0 op/s
cephadm 2021-05-11T09:58:41.309082+0000 mgr.y (mgr.44106) 333 : cephadm [INF] Upgrade: Updating prometheus.a
cephadm 2021-05-11T09:58:41.326009+0000 mgr.y (mgr.44106) 334 : cephadm [INF] Deploying daemon prometheus.a on smithi159
cluster 2021-05-11T09:58:40.820539+0000 mgr.y (mgr.44106) 332 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 7.6 KiB data, 111 MiB used, 715 GiB / 715 GiB avail; 852 B/s rd, 0 op/s
cephadm 2021-05-11T09:58:41.309082+0000 mgr.y (mgr.44106) 333 : cephadm [INF] Upgrade: Updating prometheus.a
cephadm 2021-05-11T09:58:41.326009+0000 mgr.y (mgr.44106) 334 : cephadm [INF] Deploying daemon prometheus.a on smithi159
cluster 2021-05-11T09:58:42.821549+0000 mgr.y (mgr.44106) 335 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 7.6 KiB data, 111 MiB used, 715 GiB / 715 GiB avail; 1.2 KiB/s rd, 1 op/s
cluster 2021-05-11T09:58:42.821549+0000 mgr.y (mgr.44106) 335 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 7.6 KiB data, 111 MiB used, 715 GiB / 715 GiB avail; 1.2 KiB/s rd, 1 op/s
cluster 2021-05-11T09:58:42.821549+0000 mgr.y (mgr.44106) 335 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 7.6 KiB data, 111 MiB used, 715 GiB / 715 GiB avail; 1.2 KiB/s rd, 1 op/s
debug 2021-05-11T09:58:45.577+0000 7f5afcd9b700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:Redeploy daemon prometheus.a ...
Traceback (most recent call last):
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 8187, in <module>
main()
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 8175, in main
r = ctx.func(ctx)
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 1760, in _default_image
return func(ctx)
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 4330, in command_deploy
deploy_daemon(ctx, ctx.fsid, daemon_type, daemon_id, c, uid, gid,
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 2598, in deploy_daemon
create_daemon_dirs(
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 2204, in create_daemon_dirs
f.write(content)
UnicodeEncodeError: 'latin-1' codec can't encode character '\u2265' in position 2023: ordinal not in range(256)
Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/serve.py", line 1216, in _remote_connection
yield (conn, connr)
File "/usr/share/ceph/mgr/cephadm/serve.py", line 1113, in _run_cephadm
code, '\n'.join(err)))
orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr:Redeploy daemon prometheus.a ...
Traceback (most recent call last):
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 8187, in <module>
main()
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 8175, in main
r = ctx.func(ctx)
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 1760, in _default_image
return func(ctx)
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 4330, in command_deploy
deploy_daemon(ctx, ctx.fsid, daemon_type, daemon_id, c, uid, gid,
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 2598, in deploy_daemon
create_daemon_dirs(
File "/var/lib/ceph/7e53a912-b23c-11eb-8c10-001a4aab830c/cephadm.1647ffb435456545022d2850dda95cc58ac4bce47ff1845094d2804873b551c2", line 2204, in create_daemon_dirs
f.write(content)
UnicodeEncodeError: 'latin-1' codec can't encode character '\u2265' in position 2023: ordinal not in range(256)
debug 2021-05-11T09:58:45.577+0000 7f5afcd9b700 -1 log_channel(cephadm) log [ERR] : Upgrade: Paused due to UPGRADE_REDEPLOY_DAEMON: Redeploying daemon prometheus.a on host smithi159 failed.
</pre>
<ul>
<li>'\u2265' is the innocent-looking GREATER-THAN OR EQUAL TO sign, ≥</li>
</ul>
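<p>The generated monitoring config contained that character, but create_daemon_dirs wrote the file through the latin-1 codec. A minimal reproduction of just the encoding failure (illustrative, not cephadm code):</p>
<pre>
# Minimal reproduction of the failure above (illustrative, not cephadm code):
content = "alert when usage \u2265 90%"   # hypothetical config text containing '≥'

try:
    with open("/tmp/demo-latin1.txt", "w", encoding="latin-1") as f:
        f.write(content)                  # raises UnicodeEncodeError for '\u2265'
except UnicodeEncodeError as e:
    print(e)

# An explicit UTF-8 encoding handles the same content fine:
with open("/tmp/demo-utf8.txt", "w", encoding="utf-8") as f:
    f.write(content)
</pre>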
Orchestrator - Bug #49435 (Closed): cephadm: rgw not getting deployed due to HEALTH_WARN
https://tracker.ceph.com/issues/49435
2021-02-23T15:43:49Z
Sebastian Wagner
<p>We should provide a way for users to deploy RGW anyway, and at the same time prevent radosgw-admin from blocking indefinitely.</p>
<p>Idea: add a timeout.</p>
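<p>A minimal sketch of that timeout idea, assuming radosgw-admin is invoked as a subprocess (the subcommand and the 60s limit below are placeholders):</p>
<pre>
import subprocess

# Sketch only: bound how long we wait for radosgw-admin instead of letting it
# block indefinitely while the cluster is in HEALTH_WARN.
try:
    subprocess.run(['radosgw-admin', 'realm', 'list'],
                   capture_output=True, text=True, timeout=60, check=True)
except subprocess.TimeoutExpired:
    print('radosgw-admin did not return within 60s; continuing with rgw deployment')
</pre>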
Orchestrator - Fix #49336 (Resolved): re-enable coredumps for cephadm
https://tracker.ceph.com/issues/49336
2021-02-17T15:20:00Z
Sebastian Wagner
<p>We reverted the podman --init PR. We need to find out why we have a problem there.</p>
RADOS - Bug #49190 (Resolved): LibRadosMiscConnectFailure_ConnectFailure_Test: FAILED ceph_assert...
https://tracker.ceph.com/issues/49190
2021-02-05T10:34:01Z
Sebastian Wagner
<p>I created the branch two days ago and haven't seen this error before:</p>
<pre>
2021-02-05T09:43:02.428 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 2021-02-05T09:43:02.385+0000 7fb2c183e700 10 monclient: discarding stray monitor message auth_reply(proto 2 0 (0) Success) v1
2021-02-05T09:43:02.428 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-435-g49e81916/rpm/el8/BUILD/ceph-17.0.0-435-g49e81916/src/common/config_proxy.h: In function 'void ceph::common::ConfigProxy::call_gate_close(ceph::common::ConfigProxy::md_config_obs_t*)' thread 7fb2d4732500 time 2021-02-05T09:43:02.387202+0000
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-435-g49e81916/rpm/el8/BUILD/ceph-17.0.0-435-g49e81916/src/common/config_proxy.h: 71: FAILED ceph_assert(p != obs_call_gate.end())
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: ceph version 17.0.0-435-g49e81916 (49e81916e1db40399401bf6993250bf570285966) quincy (dev)
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x7fb2caca479a]
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 2: /usr/lib64/ceph/libceph-common.so.2(+0x2769b4) [0x7fb2caca49b4]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 3: (MonClient::shutdown()+0x8eb) [0x7fb2cb035feb]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 4: (MonClient::get_monmap_and_config()+0x4ad) [0x7fb2cb03ac1d]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 5: /lib64/librados.so.2(+0xb8ef8) [0x7fb2d4235ef8]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 6: rados_connect()
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 7: (LibRadosMiscConnectFailure_ConnectFailure_Test::TestBody()+0x35d) [0x561eec3dcc3d]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 8: (void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*)+0x4e) [0x561eec435d4e]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 9: (testing::Test::Run()+0xcb) [0x561eec428d3b]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 10: (testing::TestInfo::Run()+0x135) [0x561eec428ea5]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 11: (testing::TestSuite::Run()+0xc1) [0x561eec429401]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 12: (testing::internal::UnitTestImpl::RunAllTests()+0x445) [0x561eec42b015]
2021-02-05T09:43:02.432 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 13: (bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*)+0x4e) [0x561eec4362be]
2021-02-05T09:43:02.432 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 14: (testing::UnitTest::Run()+0xa0) [0x561eec428f70]
2021-02-05T09:43:02.432 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 15: main()
2
</pre>
<ul>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859038">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859038</a></li>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859039">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859039</a></li>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859041">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859041</a></li>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859042">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859042</a></li>
</ul>
<p>See <a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/</a></p>
Orchestrator - Bug #48715 (Resolved): docker-mirror: x509: certificate relies on legacy Common Na...
https://tracker.ceph.com/issues/48715
2020-12-24T10:24:57Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-12-23_18:12:01-rados:cephadm-wip-swagner-testing-2020-12-22-0110-distro-basic-smithi/5734449/">https://pulpito.ceph.com/swagner-2020-12-23_18:12:01-rados:cephadm-wip-swagner-testing-2020-12-22-0110-distro-basic-smithi/5734449/</a></p>
<pre>
stderr Error: Error initializing source docker://ceph/daemon-base:latest-octopus: (Mirrors also failed: [docker-mirror.front.sepia.ceph.com:5000/ceph/daemon-base:latest-octopus: error pinging docker registry docker-mirror.front.sepia.ceph.com:5000: Get "https://docker-mirror.front.sepia.ceph.com:5000/v2/": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0]):
</pre>
Orchestrator - Bug #45628 (Resolved): cephadm qa: smoke should verify daemons are actually running
https://tracker.ceph.com/issues/45628
2020-05-20T14:11:40Z
Sebastian Wagner
<p>RGW failed:</p>
<pre>
2020-05-20T13:08:09.186 INFO:teuthology.orchestra.run.smithi203.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2020-05-20T13:08:09.186 INFO:teuthology.orchestra.run.smithi203.stdout:alertmanager.a smithi203 running (47s) 33s ago 75s 0.20.0 docker.io/prom/alertmanager:latest 0881eb8f169f 9bcf1765c9f6
2020-05-20T13:08:09.187 INFO:teuthology.orchestra.run.smithi203.stdout:grafana.a smithi060 running (58s) 31s ago 58s 6.6.2 docker.io/ceph/ceph-grafana:latest 87a51ecf0b1c 8731e3e51a0c
2020-05-20T13:08:09.187 INFO:teuthology.orchestra.run.smithi203.stdout:mgr.x smithi060 running (4m) 31s ago 4m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be 1cd43976a17e
2020-05-20T13:08:09.187 INFO:teuthology.orchestra.run.smithi203.stdout:mgr.y smithi203 running (5m) 33s ago 5m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be 678f88e3c420
2020-05-20T13:08:09.187 INFO:teuthology.orchestra.run.smithi203.stdout:mon.a smithi203 running (5m) 33s ago 6m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be 68e1b9162747
2020-05-20T13:08:09.187 INFO:teuthology.orchestra.run.smithi203.stdout:mon.b smithi060 running (4m) 31s ago 4m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be d1383c8a0cf6
2020-05-20T13:08:09.188 INFO:teuthology.orchestra.run.smithi203.stdout:mon.c smithi203 running (4m) 33s ago 4m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be 27a1a4d7af30
2020-05-20T13:08:09.188 INFO:teuthology.orchestra.run.smithi203.stdout:node-exporter.a smithi203 running (80s) 33s ago 85s 0.18.1 docker.io/prom/node-exporter:latest e5a616e4b9cf e725ba55bfd7
2020-05-20T13:08:09.188 INFO:teuthology.orchestra.run.smithi203.stdout:node-exporter.b smithi060 running (82s) 31s ago 86s 0.18.1 docker.io/prom/node-exporter:latest e5a616e4b9cf da71c458ed71
2020-05-20T13:08:09.188 INFO:teuthology.orchestra.run.smithi203.stdout:osd.0 smithi203 running (3m) 33s ago 3m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be fbd8df58b740
2020-05-20T13:08:09.188 INFO:teuthology.orchestra.run.smithi203.stdout:osd.1 smithi203 running (3m) 33s ago 3m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be f82a0984e8cb
2020-05-20T13:08:09.188 INFO:teuthology.orchestra.run.smithi203.stdout:osd.2 smithi203 running (3m) 33s ago 3m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be 885fb5dfd287
2020-05-20T13:08:09.189 INFO:teuthology.orchestra.run.smithi203.stdout:osd.3 smithi203 running (2m) 33s ago 2m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be 4e6e5b008f2e
2020-05-20T13:08:09.189 INFO:teuthology.orchestra.run.smithi203.stdout:osd.4 smithi060 running (2m) 31s ago 2m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be f1714bd9a240
2020-05-20T13:08:09.189 INFO:teuthology.orchestra.run.smithi203.stdout:osd.5 smithi060 running (2m) 31s ago 2m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be e00f2801348c
2020-05-20T13:08:09.189 INFO:teuthology.orchestra.run.smithi203.stdout:osd.6 smithi060 running (2m) 31s ago 2m 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be 73b01fddb7dd
2020-05-20T13:08:09.189 INFO:teuthology.orchestra.run.smithi203.stdout:osd.7 smithi060 running (107s) 31s ago 110s 16.0.0-1734-gc1cc5045b00 quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 06207838c6be cebb5c6bf000
2020-05-20T13:08:09.190 INFO:teuthology.orchestra.run.smithi203.stdout:prometheus.a smithi060 running (42s) 31s ago 88s 2.18.1 docker.io/prom/prometheus:latest de242295e225 34d837c4f530
2020-05-20T13:08:09.190 INFO:teuthology.orchestra.run.smithi203.stdout:rgw.realm.zone.a smithi203 unknown 33s ago 102s <unknown> quay.io/ceph-ci/ceph:c1cc5045b00842201e98ed965e87b16c8b2acec8 <unknown> <unknown>
</pre>
<p>Still, the job succeeded:</p>
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-20_12:38:40-rados:cephadm-wip-swagner3-testing-2020-05-20-1009-distro-basic-smithi/5072816/">http://pulpito.ceph.com/swagner-2020-05-20_12:38:40-rados:cephadm-wip-swagner3-testing-2020-05-20-1009-distro-basic-smithi/5072816/</a></p>
Dashboard - Bug #44063 (Resolved): tox: ImportError: cannot import name 'ensure_text'
https://tracker.ceph.com/issues/44063
2020-02-10T15:02:20Z
Sebastian Wagner
<pre>
Requirement already satisfied: tox in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (3.14.3)
Requirement already satisfied: filelock<4,>=3.0.0 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (3.0.12)
Requirement already satisfied: packaging>=14 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (20.1)
Requirement already satisfied: py<2,>=1.4.17 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (1.8.1)
Requirement already satisfied: importlib-metadata<2,>=0.12; python_version < "3.8" in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (1.5.0)
Requirement already satisfied: toml>=0.9.4 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (0.10.0)
Requirement already satisfied: pluggy<1,>=0.12.0 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (0.13.1)
Requirement already satisfied: virtualenv>=16.0.0 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (20.0.0)
Requirement already satisfied: six<2,>=1.0.0 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from tox) (1.11.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from packaging>=14->tox) (2.4.6)
Requirement already satisfied: zipp>=0.5 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from importlib-metadata<2,>=0.12; python_version < "3.8"->tox) (2.2.0)
Requirement already satisfied: appdirs<2,>=1.4.3 in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from virtualenv>=16.0.0->tox) (1.4.3)
Requirement already satisfied: importlib-resources<2,>=1.0; python_version < "3.7" in /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/lib/python3.6/site-packages (from virtualenv>=16.0.0->tox) (1.0.2)
py3 create: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/py3
ERROR: invocation failed (exit code 1), logfile: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/py3/log/py3-0.log
================================== log start ===================================
ERROR:root:ImportError: cannot import name 'ensure_text'
=================================== log end ====================================
ERROR: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 py3 (exited with code 1)
lint create: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/lint
ERROR: invocation failed (exit code 1), logfile: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/lint/log/lint-0.log
================================== log start ===================================
ERROR:root:ImportError: cannot import name 'ensure_text'
=================================== log end ====================================
ERROR: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 lint (exited with code 1)
check create: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/check
ERROR: invocation failed (exit code 1), logfile: /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/check/log/check-0.log
================================== log start ===================================
ERROR:root:ImportError: cannot import name 'ensure_text'
=================================== log end ====================================
ERROR: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 check (exited with code 1)
___________________________________ summary ____________________________________
ERROR: py3: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 py3 (exited with code 1)
ERROR: lint: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 lint (exited with code 1)
ERROR: check: InvocationError for command /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 -m virtualenv --no-download --python /home/jenkins-build/build/workspace/ceph-pull-requests/build/mgr-dashboard-virtualenv/bin/python3.6 check (exited with code 1)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/44319/consoleFull#-14159901736733401c-e9d0-4737-9832-6594c5da0afa">https://jenkins.ceph.com/job/ceph-pull-requests/44319/consoleFull#-14159901736733401c-e9d0-4737-9832-6594c5da0afa</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/44308/consoleFull#-156140668e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/44308/consoleFull#-156140668e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
Ceph - Bug #42528 (Resolved): python-common bulid failure: File not found: ceph-*.egg-info
https://tracker.ceph.com/issues/42528
2019-10-29T12:19:41Z
Sebastian Wagner
<pre>
RPM build errors:
File not found: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph
File not found by glob: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph-*.egg-info
+ rm -fr /tmp/install-deps.1830
Build step 'Execute shell' marked build as failure
</pre>
<pre>
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/ceph
copying ceph/__init__.py -> build/lib/ceph
copying ceph/exceptions.py -> build/lib/ceph
creating build/lib/ceph/deployment
copying ceph/deployment/__init__.py -> build/lib/ceph/deployment
copying ceph/deployment/drive_group.py -> build/lib/ceph/deployment
copying ceph/deployment/ssh_orchestrator.py -> build/lib/ceph/deployment
creating build/lib/ceph/tests
copying ceph/tests/__init__.py -> build/lib/ceph/tests
copying ceph/tests/test_drive_group.py -> build/lib/ceph/tests
running install_lib
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/exceptions.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/ssh_orchestrator.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/test_drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/exceptions.py to exceptions.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/drive_group.py to drive_group.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/ssh_orchestrator.py to ssh_orchestrator.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/test_drive_group.py to test_drive_group.cpython-36.pyc
running install_egg_info
running egg_info
creating ceph.egg-info
writing ceph.egg-info/PKG-INFO
writing dependency_links to ceph.egg-info/dependency_links.txt
writing requirements to ceph.egg-info/requires.txt
writing top-level names to ceph.egg-info/top_level.txt
writing manifest file 'ceph.egg-info/SOURCES.txt'
reading manifest file 'ceph.egg-info/SOURCES.txt'
writing manifest file 'ceph.egg-info/SOURCES.txt'
Copying ceph.egg-info to /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph-1.0.0-py3.6.egg-info
running install_scripts
Traceback (most recent call last):
File "setup.py", line 45, in <module>
'Programming Language :: Python :: 3.6',
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
dist = best[req.key] = env.best_match(req, self, installer)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
return self.obtain(req, installer) # try and download/install
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
return installer(requirement)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
return cmd.easy_install(req)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 623, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 653, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 849, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1130, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1115, in run_setup
run_setup(setup_script, args)
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 69, in run_setup
lambda: execfile(
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 120, in run
return func()
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 71, in <lambda>
{'__file__':setup_script, '__name__':'__main__'}
File "setup.py", line 21, in <module>
packages=find_packages(),
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 269, in __init__
_Distribution.__init__(self,attrs)
File "/usr/lib64/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 302, in finalize_options
ep.load()(self, ep.name, value)
File "build/bdist.linux-x86_64/egg/setuptools_scm/integration.py", line 9, in version_keyword
File "build/bdist.linux-x86_64/egg/setuptools_scm/version.py", line 66, in _warn_if_setuptools_outdated
setuptools_scm.version.SetuptoolsOutdatedWarning: your setuptools is too old (<12)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull</a></p>
mgr - Bug #39644 (Resolved): mgr/zabbix: ERROR: test_zabbix (tasks.mgr.test_module_selftest.TestM...
https://tracker.ceph.com/issues/39644
2019-05-09T08:32:02Z
Sebastian Wagner
<pre>
======================================================================
ERROR: test_zabbix (tasks.mgr.test_module_selftest.TestModuleSelftest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner-testing/qa/tasks/mgr/test_module_selftest.py", line 41, in test_zabbix
self._selftest_plugin("zabbix")
File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner-testing/qa/tasks/mgr/test_module_selftest.py", line 34, in _selftest_plugin
"mgr", "self-test", "module", module_name)
File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner-testing/qa/tasks/ceph_manager.py", line 1157, in raw_cluster_cmd
stdout=StringIO(),
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 205, in run
r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 435, in run
r.wait()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 162, in wait
self._raise_for_status()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 184, in _raise_for_status
node=self.hostname, label=self.label
CommandFailedError: Command failed on smithi023 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph mgr self-test module zabbix'
</pre>
<pre>
2019-05-08 20:53:12.238 7fdcc2030700 -1 Remote method threw exception: Traceback (most recent call last):
File "/usr/share/ceph/mgr/zabbix/module.py", line 458, in self_test
data = self.get_data()
File "/usr/share/ceph/mgr/zabbix/module.py", line 209, in get_data
data['[{0},raw_bytes_used]'.format(pool['name'])] = pool['stats']['raw_bytes_used']
KeyError: ('raw_bytes_used',)
2019-05-08 20:53:12.238 7fdcc2030700 -1 mgr.server reply reply (1) Operation not permitted Test failed: Remote method threw exception: Traceback (most recent call last):
File "/usr/share/ceph/mgr/zabbix/module.py", line 458, in self_test
data = self.get_data()
File "/usr/share/ceph/mgr/zabbix/module.py", line 209, in get_data
data['[{0},raw_bytes_used]'.format(pool['name'])] = pool['stats']['raw_bytes_used']
KeyError: ('raw_bytes_used',)
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-05-08_15:36:11-rados:mgr-wip-swagner-testing-distro-basic-smithi/3941021/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2019-05-08_15:36:11-rados:mgr-wip-swagner-testing-distro-basic-smithi/3941021/teuthology.log</a></p>
<p>Introduced in <a class="external" href="https://github.com/ceph/ceph/pull/26152">https://github.com/ceph/ceph/pull/26152</a></p>
<p>Greg, I've assigned it to you, as Dmitriy Rabotjagov is not part of the mgr project.</p>
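<p>For reference, the KeyError means the pool stats dict no longer carries a 'raw_bytes_used' entry after the PR above. A minimal defensive sketch (illustrative only, not the actual fix):</p>
<pre>
# Illustrative only: tolerate a missing 'raw_bytes_used' key in the pool stats
# instead of raising KeyError as in get_data() above.
pool = {'name': 'rbd', 'stats': {'bytes_used': 1024}}   # hypothetical stats dict
data = {}

data['[{0},raw_bytes_used]'.format(pool['name'])] = \
    pool['stats'].get('raw_bytes_used', 0)
print(data)
</pre>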
Dashboard - Bug #38590 (Resolved): mimic: dashboard: failed to compile the dashboard: Cannot find...
https://tracker.ceph.com/issues/38590
2019-03-05T18:51:58Z
Sebastian Wagner
<pre>
Date: 2019-03-05T05:49:15.789Z
Hash: 894ed43e42aed84f2e6a
Time: 21545ms
chunk {scripts} scripts.fc88ef4a23399c760d0b.bundle.js (scripts) 210 kB [initial] [rendered]
chunk {0} styles.89887a238a2462b3f866.bundle.css (styles) 211 kB [initial] [rendered]
chunk {1} polyfills.997d8cc03812de50ae67.bundle.js (polyfills) 84 bytes [initial] [rendered]
chunk {2} main.ee32620ecd1edff94184.bundle.js (main) 84 bytes [initial] [rendered]
chunk {3} inline.318b50c57b4eba3d437b.bundle.js (inline) 796 bytes [entry] [rendered]

WARNING in Invalid animation value at 11938:14. Ignoring.

WARNING in Invalid animation value at 11937:22. Ignoring.

ERROR in node_modules/@types/lodash/common/object.d.ts(1689,12): error TS2304: Cannot find name 'Exclude'.
node_modules/@types/lodash/common/object.d.ts(1766,12): error TS2304: Cannot find name 'Exclude'.
node_modules/@types/lodash/common/object.d.ts(1842,34): error TS2304: Cannot find name 'Exclude'.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! ceph-dashboard@0.0.0 build: `ng build "--prod"`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the ceph-dashboard@0.0.0 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/jenkins-build/.npm/_logs/2019-03-05T05_49_15_864Z-debug.log
src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/build.make:1435: recipe for target '../src/pybind/mgr/dashboard/frontend/dist' failed
make[3]: *** [../src/pybind/mgr/dashboard/frontend/dist] Error 1
CMakeFiles/Makefile2:4878: recipe for target 'src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/all' failed
make[2]: *** [src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/15029/consoleFull#-2110717508ed6e825c-5772-40cc-ba3e-4f4430b3e036">https://jenkins.ceph.com/job/ceph-pull-requests/15029/consoleFull#-2110717508ed6e825c-5772-40cc-ba3e-4f4430b3e036</a></p>