Ceph : Issues
https://tracker.ceph.com/
2022-01-19T16:07:48Z
Ceph
Redmine
Orchestrator - Bug #53939 (Resolved): ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON
https://tracker.ceph.com/issues/53939
2022-01-19T16:07:48Z
Sebastian Wagner
<pre>
mon[102341]: : cluster [WRN] Health check failed: Upgrading daemon osd.0 on host smithi103 failed. (UPGRADE_REDEPLOY_DAEMON)
mon[66897]: cephadm 2022-01-18T16:27:48.439275+0000 mgr.smithi103.wyeocw (mgr.14712) 129 : cephadm [ERR] cephadm exited with an error code: 1, stderr:Redeploy daemon osd.0 ...
mon[66897]: Non-zero exit code 1 from systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0
mon[66897]: systemctl: stderr Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: systemctl: stderr See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8615, in <module>
mon[66897]: main()
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8603, in main
mon[66897]: r = ctx.func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1790, in _default_image
mon[66897]: return func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 4603, in command_deploy
mon[66897]: ports=daemon_ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2715, in deploy_daemon
mon[66897]: c, osd_fsid=osd_fsid, ports=ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2960, in deploy_daemon_units
mon[66897]: call_throws(ctx, ['systemctl', 'start', unit_name])
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1469, in call_throws
mon[66897]: raise RuntimeError(f'Failed command: {" ".join(command)}: {s}')
mon[66897]: RuntimeError: Failed command: systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0: Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/usr/share/ceph/mgr/cephadm/serve.py", line 1402, in _remote_connection
mon[66897]: yield (conn, connr)
mon[66897]: File "/usr/share/ceph/mgr/cephadm/serve.py", line 1295, in _run_cephadm
mon[66897]: code, '\n'.join(err)))
mon[66897]: orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr:Redeploy daemon osd.0 ...
mon[66897]: Non-zero exit code 1 from systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0
mon[66897]: systemctl: stderr Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: systemctl: stderr See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
mon[66897]: Traceback (most recent call last):
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8615, in <module>
mon[66897]: main()
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 8603, in main
mon[66897]: r = ctx.func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1790, in _default_image
mon[66897]: return func(ctx)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 4603, in command_deploy
mon[66897]: ports=daemon_ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2715, in deploy_daemon
mon[66897]: c, osd_fsid=osd_fsid, ports=ports)
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 2960, in deploy_daemon_units
mon[66897]: call_throws(ctx, ['systemctl', 'start', unit_name])
mon[66897]: File "/var/lib/ceph/e287ac0e-7879-11ec-8c34-001a4aab830c/cephadm.c659ab77cc705b8440c5bb10bf729dd981addbc618204d30ac82f427ecc4779d", line 1469, in call_throws
mon[66897]: raise RuntimeError(f'Failed command: {" ".join(command)}: {s}')
mon[66897]: RuntimeError: Failed command: systemctl start ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0: Job for ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service failed because a timeout was exceeded.
mon[66897]: See "systemctl status ceph-e287ac0e-7879-11ec-8c34-001a4aab830c@osd.0.service" and "journalctl -xe" for details.
...
cephadm 2022-01-18T16:27:48.439412+0000 mgr.smithi103.wyeocw (mgr.14712) 130 : cephadm [ERR] Upgrade: Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed.
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2022-01-18_15:34:53-rados:cephadm-wip-swagner2-testing-2022-01-18-1242-pacific-distro-default-smithi/6624255">https://pulpito.ceph.com/swagner-2022-01-18_15:34:53-rados:cephadm-wip-swagner2-testing-2022-01-18-1242-pacific-distro-default-smithi/6624255</a></p>
Orchestrator - Bug #53904 (Duplicate): cephadm: ingress jobs stuck
https://tracker.ceph.com/issues/53904
2022-01-17T16:07:38Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2022-01-17_12:42:04-orch:cephadm-wip-swagner-testing-2022-01-17-1014-distro-default-smithi/">https://pulpito.ceph.com/swagner-2022-01-17_12:42:04-orch:cephadm-wip-swagner-testing-2022-01-17-1014-distro-default-smithi/</a></p>
<pre>
2022-01-17T13:17:17.053 DEBUG:teuthology.orchestra.run.smithi155:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:1cdf02ebbbdd98a055173cbac4d0171328a564dc shell -c /etc/ceph/ceph.conf -k />
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> for haproxy in `ceph orch ps | grep ^haproxy.nfs.foo. | awk '"'"'{print $1}'"'"'`; do
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> ceph orch daemon stop $haproxy
2022-01-17T13:17:17.054 DEBUG:teuthology.orchestra.run.smithi155:> while ! ceph orch ps | grep $haproxy | grep stopped; do sleep 1 ; done
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> cat /mnt/foo/testfile
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> echo $haproxy > /mnt/foo/testfile
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> sync
2022-01-17T13:17:17.055 DEBUG:teuthology.orchestra.run.smithi155:> ceph orch daemon start $haproxy
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> while ! ceph orch ps | grep $haproxy | grep running; do sleep 1 ; done
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> done
2022-01-17T13:17:17.056 DEBUG:teuthology.orchestra.run.smithi155:> '
</pre><br />...snip...<br /><pre>
2022-01-17T13:17:20.571 INFO:teuthology.orchestra.run.smithi155.stdout:Check with each haproxy down in turn...
2022-01-17T13:17:21.281 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to stop haproxy.nfs.foo.smithi155.xhswck on host 'smithi155'
</pre><br />...snip...
<pre>
2022-01-17T13:17:36.893 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi155.xhswck smithi155 *:2049,9002 stopped 0s ago 79s - - <unknown> <un>
2022-01-17T13:17:36.898 INFO:teuthology.orchestra.run.smithi155.stdout:test
2022-01-17T13:17:37.528 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to start haproxy.nfs.foo.smithi155.xhswck on host 'smithi155'
</pre><br />...snip...<br /><pre>
2022-01-17T13:17:53.182 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi155.xhswck smithi155 *:2049,9002 running (5s) 0s ago 95s - - 2.3.17-d1c9119 14b>
2022-01-17T13:17:53.519 INFO:teuthology.orchestra.run.smithi155.stdout:Scheduled to stop haproxy.nfs.foo.smithi162.mahcqs on host 'smithi162'
</pre><br />...snip...<br /><pre>
2022-01-17T13:18:07.810 INFO:teuthology.orchestra.run.smithi155.stdout:haproxy.nfs.foo.smithi162.mahcqs smithi162 *:2049,9002 stopped 0s ago 102s - - <unknown> <unk>
</pre><br />...snip...<br /><pre>
h[14066]: cephadm 2022-01-17T13:17:53.516345+0000 mgr.smithi155.uoijyc (mgr.14206) 339 : cephadm [INF] Schedule stop daemon haproxy.nfs.foo.smithi162.mahcqs
</pre>
<p>But I never see a start of haproxy.nfs.foo.smithi162.mahcqs again.</p>
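<p>For reference, the test's stop/start polling re-expressed as a minimal Python sketch; <code>wait_for_state</code> is a hypothetical helper and assumes the <code>ceph orch ps</code> output format shown above:</p>
<pre>
import subprocess
import time

def wait_for_state(daemon, state, timeout=300):
    """Poll `ceph orch ps` until the daemon's line reports the given state."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(['ceph', 'orch', 'ps'],
                             capture_output=True, text=True).stdout
        if any(line.startswith(daemon) and state in line
               for line in out.splitlines()):
            return
        time.sleep(1)
    raise TimeoutError(f'{daemon} never reached {state!r}')

# wait_for_state('haproxy.nfs.foo.smithi162.mahcqs', 'running')  # hangs here
</pre>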
mgr - Bug #53538 (Resolved): mgr/stats: ZeroDivisionError
https://tracker.ceph.com/issues/53538
2021-12-08T13:37:49Z
Sebastian Wagner
<pre>
root@service-01-08020:~# ceph osd status storage-01-08002
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1623, in _handle_command
return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 416, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/status/module.py", line 338, in handle_osd_status
wr_ops_rate = (self.get_rate("osd", osd_id.__str__(), "osd.op_w") +
File "/usr/share/ceph/mgr/status/module.py", line 28, in get_rate
return (data[-1][1] - data[-2][1]) // int(data[-1][0] - data[-2][0])
ZeroDivisionError: integer division or modulo by zero
</pre>
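<p>The division in <code>get_rate</code> has no guard: if the two most recent samples carry the same timestamp, the denominator is zero. A minimal sketch of a guarded variant, assuming the <code>(timestamp, value)</code> pair layout visible in the traceback (not the actual patch):</p>
<pre>
def get_rate(data):
    """Rate between the two most recent (timestamp, value) samples."""
    if len(data) < 2:
        return 0
    dt = int(data[-1][0] - data[-2][0])
    if dt == 0:
        # consecutive samples with identical timestamps: no meaningful rate
        return 0
    return (data[-1][1] - data[-2][1]) // dt
</pre>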
<p>Since those PRs:</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/25337">https://github.com/ceph/ceph/pull/25337</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/26270">https://github.com/ceph/ceph/pull/26270</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/26270/files#diff-dc6485f717f4dce4863733896375af75963412ebb2abc4b62fcd1f5233eee07dR44">https://github.com/ceph/ceph/pull/26270/files#diff-dc6485f717f4dce4863733896375af75963412ebb2abc4b62fcd1f5233eee07dR44</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/28603">https://github.com/ceph/ceph/pull/28603</a> </li>
<li><a class="external" href="https://tracker.ceph.com/issues/43224#note-11">https://tracker.ceph.com/issues/43224#note-11</a></li>
</ul>
<p>no one has had the patience to look into this all over again.</p>
Orchestrator - Bug #51590 (Resolved): cephadm: iscsi: The first gateway defined must be the local machine
https://tracker.ceph.com/issues/51590
2021-07-08T09:58:40Z
Sebastian Wagner
<p>1. Deploy cluster using cephadm<br />2. Deploy iscsi services using iscsi.yml file</p>
<pre>
[ceph: root@magna007 ~]# cat iscsi.yml
service_type: iscsi
service_id: iscsi
placement:
hosts:
- host1
- host2
spec:
pool: iscsi_pool
trusted_ip_list: "10.8.128.108,10.8.128.113"
api_user: admin
api_password: admin
</pre>
<p>3. Log in to the container using "podman exec -it 12e38d148b25 /bin/sh", then run gwcli<br />4. Create target and gateways</p>
<pre>
[root@host1 ~]# podman exec -it 99b46c7235da sh
sh-4.4# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.8.128.108 host1 host1
10.8.128.113 host2 host2
127.0.1.1 host1 host1 ceph-3ce40d5c-dd5a-11eb-8a7a-002590fc2538-iscsi.iscsi.host1.kkxugr-tcmu
sh-4.4# gwcli
/iscsi-targets> create target_iqn=iqn.2003-01.com.example.iscsi-gw:ceph-igw
ok
/iscsi-targets> ls
o- iscsi-targets ................................................................................. [DiscoveryAuth: None, Targets: 1]
o- iqn.2003-01.com.example.iscsi-gw:ceph-igw ............................................................ [Auth: None, Gateways: 0]
o- disks ............................................................................................................ [Disks: 0]
o- gateways .............................................................................................. [Up: 0/0, Portals: 0]
o- host-groups .................................................................................................... [Groups : 0]
o- hosts ......................................................................................... [Auth: ACL_ENABLED, Hosts: 0]
/iscsi-targets> goto gateways
/iscsi-target...-igw/gateways> create host1.ceph.example.com 10.8.128.108
The first gateway defined must be the local machine
/iscsi-target...-igw/gateways> create host2.ceph.example.com 10.8.128.113
The first gateway defined must be the local machine
/iscsi-target...-igw/gateways> create host1 10.8.128.108
The first gateway defined must be the local machine
/iscsi-target...-igw/gateways> create host2 10.8.128.113
The first gateway defined must be the local machine
/iscsi-target...-igw/gateways> create ceph-3ce40d5c-dd5a-11eb-8a7a-002590fc2538-iscsi.iscsi.host1.kkxugr-tcmu 10.8.128.108
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : Gateway 'ceph-3ce40d5c-dd5a-11eb-8a7a-002590fc2538-iscsi.iscsi.host1.kkxugr-tcmu' is not resolvable to an IP address
</pre>
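<p>A sketch of the two checks the session appears to run into, assuming gwcli first compares the gateway name against the local hostname and then resolves the name to the supplied IP (hypothetical helper, not the actual ceph-iscsi code). Inside the container the hostname is the long <code>ceph-...-tcmu</code> name, so <code>host1</code> and <code>host2</code> fail the first check, while the container name itself resolves to 127.0.1.1 via the <code>/etc/hosts</code> shown above and fails the second:</p>
<pre>
import socket

def first_gateway_check(gw_name, gw_ip):
    # check 1: the first gateway must be this machine
    if gw_name.split('.')[0] != socket.gethostname().split('.')[0]:
        return 'The first gateway defined must be the local machine'
    # check 2: the name must resolve to the IP given on the command line
    try:
        resolved = socket.gethostbyname(gw_name)
    except socket.gaierror:
        resolved = None
    if resolved != gw_ip:
        return f"Gateway '{gw_name}' is not resolvable to an IP address"
    return 'ok'
</pre>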
Orchestrator - Bug #51272 (Resolved): upgrade job: mgr.x getting removed by cephadm task: UPGRADE_NO_STANDBY_MGR
https://tracker.ceph.com/issues/51272
2021-06-18T08:47:37Z
Sebastian Wagner
<p>I think the fix for this bug is not yet merged:</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41478/">https://github.com/ceph/ceph/pull/41478/</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41568">https://github.com/ceph/ceph/pull/41568</a></li>
</ul>
<pre>
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2}
</pre>
<pre>
roles:
- - mon.a
- mon.c
- mgr.y
- osd.0
- osd.1
- osd.2
- osd.3
- client.0
- node-exporter.a
- alertmanager.a
- - mon.b
- mgr.x
- osd.4
- osd.5
- osd.6
- osd.7
- client.1
- prometheus.a
- grafana.a
- node-exporter.b
</pre>
<p><strong>then</strong></p>
<pre>
: audit 2021-06-15T20:14:24.260141+0000 mgr.y (mgr.14138) 64 : audit [DBG] from='client.34106 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;smithi143=x", "target">
</pre>
<p>Notice that the placement only contains <strong>2;smithi143=x</strong>.</p>
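<p>A sketch of how that placement string is read (hypothetical parser mirroring <code>PlacementSpec</code> semantics): count 2 with only <code>mgr.x</code> pinned, so nothing keeps <code>mgr.y</code> on smithi135 in place while the upgrade still needs a standby:</p>
<pre>
def parse_placement(spec):
    """'2;smithi143=x' -> (2, [('smithi143', 'x')])"""
    count, *rest = spec.split(';')
    hosts = [tuple(p.split('=', 1)) for p in rest]
    return int(count), hosts

# parse_placement('2;smithi143=x') -> (2, [('smithi143', 'x')])
</pre>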
<pre>
2021-06-15T20:14:29.203 INFO:journalctl@ceph.mgr.y.smithi135.stdout:Jun 15 20:14:29 smithi135 systemd[1]: Stopping Ceph mgr.y for e2a4517e-ce15-11eb-8c13-001a4aab830c...
</pre>
<p><strong>resulting in</strong></p>
<pre>
cluster 2021-06-15T20:21:09.388112+0000 mgr.x (mgr.34112) 238 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: debug 2021-06-15T20:21:11.241+0000 7ffa34117700 -1 log_channel(cephadm) log [ERR] : Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.239485+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: audit 2021-06-15T20:21:11.241293+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.34112 172.21.15.143:0/2430240313' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
: cephadm 2021-06-15T20:21:11.241839+0000 mgr.x (mgr.34112) 239 : cephadm [INF] Upgrade: Target is quay.ceph.io/ceph-ci/ceph:da5e8184007182fa3cd5c8385fee4e08c5620fe2 with id 219a75e51380d5cdf3af7b1fa194d1bedd11>
: cephadm 2021-06-15T20:21:11.244338+0000 mgr.x (mgr.34112) 240 : cephadm [INF] Upgrade: Checking mgr daemons...
: cephadm 2021-06-15T20:21:11.244711+0000 mgr.x (mgr.34112) 241 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x)
: cephadm 2021-06-15T20:21:11.247775+0000 mgr.x (mgr.34112) 242 : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.253146+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: cluster 2021-06-15T20:21:11.255641+0000 mgr.x (mgr.34112) 243 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: audit 2021-06-15T20:21:11.259712+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.34112 ' entity='mgr.x'
</pre>
<pre>
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:alertmanager.a smithi135 running (117s) 107s ago 2m 0.20.0 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f d7ab1fc469b4
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:grafana.a smithi143 running (2m) 107s ago 2m 6.6.2 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a bdf08596362b
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:mgr.x smithi143 running (6m) 107s ago 6m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 bf659290d1ab
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.a smithi135 running (8m) 107s ago 9m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 a0083afbce6f
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.b smithi143 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 177430b8b423
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.c smithi135 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 881e672542be
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.a smithi135 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf acd96e0cc12e
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.b smithi143 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf a3c897228c6d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.0 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 9805ecc9628d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.1 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 29d8fc3fbb7f
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.2 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 193e0a2a0487
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.3 smithi135 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e2dea4bf5490
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.4 smithi143 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e0e19361a64a
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.5 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 71c57f8c0e3d
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.6 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 4da5baa064d1
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.7 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 098193d20e10
2021-06-15T20:21:16.896 INFO:teuthology.orchestra.run.smithi135.stdout:prometheus.a smithi143 running (110s) 107s ago 2m 2.18.1 docker.io/prom/prometheus:v2.18.1 de242295e225 fb7dd6cd2280
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log">http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log</a></p>
Orchestrator - Fix #49336 (Resolved): re-enable coredumps for cephadm
https://tracker.ceph.com/issues/49336
2021-02-17T15:20:00Z
Sebastian Wagner
<p>We reverted the podman <code>--init</code> PR. We need to find out why we have a problem there.</p>
RADOS - Bug #49190 (Resolved): LibRadosMiscConnectFailure_ConnectFailure_Test: FAILED ceph_assert(p != obs_call_gate.end())
https://tracker.ceph.com/issues/49190
2021-02-05T10:34:01Z
Sebastian Wagner
<p>I created the branch two days ago and haven't seen this error before:</p>
<pre>
2021-02-05T09:43:02.428 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 2021-02-05T09:43:02.385+0000 7fb2c183e700 10 monclient: discarding stray monitor message auth_reply(proto 2 0 (0) Success) v1
2021-02-05T09:43:02.428 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-435-g49e81916/rpm/el8/BUILD/ceph-17.0.0-435-g49e81916/src/common/config_proxy.h: In function 'void ceph::common::ConfigProxy::call_gate_close(ceph::common::ConfigProxy::md_config_obs_t*)' thread 7fb2d4732500 time 2021-02-05T09:43:02.387202+0000
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.0.0-435-g49e81916/rpm/el8/BUILD/ceph-17.0.0-435-g49e81916/src/common/config_proxy.h: 71: FAILED ceph_assert(p != obs_call_gate.end())
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: ceph version 17.0.0-435-g49e81916 (49e81916e1db40399401bf6993250bf570285966) quincy (dev)
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x7fb2caca479a]
2021-02-05T09:43:02.429 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 2: /usr/lib64/ceph/libceph-common.so.2(+0x2769b4) [0x7fb2caca49b4]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 3: (MonClient::shutdown()+0x8eb) [0x7fb2cb035feb]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 4: (MonClient::get_monmap_and_config()+0x4ad) [0x7fb2cb03ac1d]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 5: /lib64/librados.so.2(+0xb8ef8) [0x7fb2d4235ef8]
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 6: rados_connect()
2021-02-05T09:43:02.430 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 7: (LibRadosMiscConnectFailure_ConnectFailure_Test::TestBody()+0x35d) [0x561eec3dcc3d]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 8: (void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*)+0x4e) [0x561eec435d4e]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 9: (testing::Test::Run()+0xcb) [0x561eec428d3b]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 10: (testing::TestInfo::Run()+0x135) [0x561eec428ea5]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 11: (testing::TestSuite::Run()+0xc1) [0x561eec429401]
2021-02-05T09:43:02.431 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 12: (testing::internal::UnitTestImpl::RunAllTests()+0x445) [0x561eec42b015]
2021-02-05T09:43:02.432 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 13: (bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*)+0x4e) [0x561eec4362be]
2021-02-05T09:43:02.432 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 14: (testing::UnitTest::Run()+0xa0) [0x561eec428f70]
2021-02-05T09:43:02.432 INFO:tasks.workunit.client.0.smithi074.stdout: api_misc: 15: main()
2
</pre>
<ul>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859038">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859038</a></li>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859039">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859039</a></li>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859041">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859041</a></li>
<li><a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859042">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/5859042</a></li>
</ul>
<p>See <a class="external" href="https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2021-02-05_09:14:24-rados:cephadm-wip-swagner4-testing-2021-02-03-1650-distro-basic-smithi/</a></p>
Orchestrator - Bug #48715 (Resolved): docker-mirror: x509: certificate relies on legacy Common Name field
https://tracker.ceph.com/issues/48715
2020-12-24T10:24:57Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-12-23_18:12:01-rados:cephadm-wip-swagner-testing-2020-12-22-0110-distro-basic-smithi/5734449/">https://pulpito.ceph.com/swagner-2020-12-23_18:12:01-rados:cephadm-wip-swagner-testing-2020-12-22-0110-distro-basic-smithi/5734449/</a></p>
<pre>
stderr Error: Error initializing source docker://ceph/daemon-base:latest-octopus: (Mirrors also failed: [docker-mirror.front.sepia.ceph.com:5000/ceph/daemon-base:latest-octopus: error pinging docker registry docker-mirror.front.sepia.ceph.com:5000: Get "https://docker-mirror.front.sepia.ceph.com:5000/v2/": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0]):
</pre>
Orchestrator - Bug #48535 (Resolved): QA smoke test: cephadm is removing mgr.y
https://tracker.ceph.com/issues/48535
2020-12-10T11:32:13Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/yuriw-2020-12-08_16:18:10-rados-octopus-distro-basic-smithi/5693969">https://pulpito.ceph.com/yuriw-2020-12-08_16:18:10-rados-octopus-distro-basic-smithi/5693969</a></p>
<p>cephadm is properly deploying mgr.y:</p>
<pre>
Dec 08 18:58:47 smithi099 bash[10707]: audit 2020-12-08T18:58:47.142557+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14138 172.21.15.131:0/1846859103' entity='mgr.y' cmd='[{"prefix":"config-ke
y set","key":"mgr/cephadm/host.smithi131","val":"{\"daemons\": {\"mon.a\": {\"daemon_type\": \"mon\", \"daemon_id\": \"a\", \"hostname\": \"smithi131\", \"container_id\": \"5fdb0de44749\", \"container_image_id\": \"dae82b93a77958a9a6819f28e335d1a604c7e3cdecf8bba
5e0c820df2b53a64a\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe\", \"version\": \"15.2.7-661-gb1c1268b\", \"status\": 1, \"status_desc\": \"running\", \"is_active\": false, \"last_refresh\": \"2020-12-08T18:58:
47.137207\", \"created\": \"2020-12-08T18:56:38.730085\", \"started\": \"2020-12-08T18:56:45.378062\"}, \"mgr.y\": {\"daemon_type\": \"mgr\", \"daemon_id\": \"y\", \"hostname\": \"smithi131\", \"container_id\": \"620f6dfea3b3\", \"container_image_id\": \"dae82b9
3a77958a9a6819f28e335d1a604c7e3cdecf8bba5e0c820df2b53a64a\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe\", \"version\": \"15.2.7-661-gb1c1268b\", \"status\": 1, \"status_desc\": \"running\", \"is_active\": fals
e, \"last_refresh\": \"2020-12-08T18:58:47.137300\", \"created\": \"2020-12-08T18:56:47.849275\", \"started\": \"2020-12-08T18:56:47.941925\"}, \"mon.c\": {\"daemon_type\": \"mon\", \"daemon_id\": \"c\", \"hostname\": \"smithi131\", \"container_id\": \"8566d05a0
1c5\", \"container_image_id\": \"dae82b93a77958a9a6819f28e335d1a604c7e3cdecf8bba5e0c820df2b53a64a\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe\", \"version\": \"15.2.7-661-gb1c1268b\", \"status\": 1, \"status_
desc\": \"running\", \"is_active\": false, \"last_refresh\": \"2020-12-08T18:58:47.137347\", \"created\": \"2020-12-08T18:58:16.069285\", \"started\": \"2020-12-08T18:58:16.157886\"}}, \"devices\": [...], \"osdspec_previews\": [], \"daemon_config_deps\": {\"mon.a\": {\"deps\": [], \"last_config\": \"2020-12-08T18:58:44.148173\"}, \"mgr.y\": {\"deps\": [], \"last_con
fig\": \"2020-12-08T18:58:36.178335\"}, \"mon.c\": {\"deps\": [], \"last_config\": \"2020-12-08T18:58:38.524511\"}}, \"last_daemon_update\": \"2020-12-08T18:58:47.137450\", \"last_device_update\": \"2020-12-08T18:57:36.298984\", \"networks\": {\"172.17.0.0/16\":
[\"172.17.0.1\"], \"172.21.0.0/20\": [\"172.21.15.131\"], \"172.21.15.254\": [\"172.21.15.131\"], \"fe80::/64\": [\"fe80::ec4:7aff:fe88:72f9\"]}, \"last_host_check\": \"2020-12-08T18:57:14.445904\"}"}]': finished
</pre>
<p>But at some point, <code>cephadm.py</code> decides to remove it again:</p>
<pre>
Dec 08 18:58:41.703 INFO:teuthology.orchestra.run.smithi099:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 07c2237e-3987-11eb-9811-001a4aab830c -- ceph orch apply mgr '2;smithi099=x'
Dec 08 18:58:47 smithi131 bash[10704]: cephadm 2020-12-08T18:58:47.157062+0000 mgr.y (mgr.14138) 62 : cephadm [INF] Deploying daemon mgr.x on smithi099
Dec 08 18:58:47 smithi099 bash[10707]: cephadm 2020-12-08T18:58:47.157062+0000 mgr.y (mgr.14138) 62 : cephadm [INF] Deploying daemon mgr.x on smithi099
Dec 08 18:58:49 smithi099 bash[10707]: cephadm 2020-12-08T18:58:49.238457+0000 mgr.y (mgr.14138) 64 : cephadm [INF] It is presumed safe to stop ['mgr.y']
Dec 08 18:58:49 smithi099 bash[10707]: cephadm 2020-12-08T18:58:49.238723+0000 mgr.y (mgr.14138) 65 : cephadm [INF] Removing daemon mgr.y from smithi131
</pre>
<p>Thus, the second mgr is then missing:</p>
<pre>
2020-12-08T19:05:16.283 INFO:teuthology.orchestra.run.smithi131.stdout:NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:alertmanager 1/1 89s ago 2m smithi131=a;count:1 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:grafana 1/1 89s ago 2m smithi099=a;count:1 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:iscsi.iscsi 1/1 89s ago 2m smithi099=iscsi.a;count:1 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:mgr 1/2 89s ago 6m smithi099=x;count:2 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:mon 3/0 89s ago - <unmanaged> quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:node-exporter 2/2 89s ago 2m smithi131=a;smithi099=b;count:2 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:osd.None 8/0 89s ago - <unmanaged> quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:prometheus 1/1 89s ago 2m smithi099=a;count:1 docker.io/prom/prometheus:v2.18.1 de242295e225
2020-12-08T19:05:16.286 INFO:teuthology.orchestra.run.smithi131.stdout:rgw.realm.zone 1/1 89s ago 2m smithi131=realm.zone.a;count:1 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:12.375 INFO:teuthology.orchestra.run.smithi131.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2020-12-08T19:05:12.375 INFO:teuthology.orchestra.run.smithi131.stdout:alertmanager.a smithi131 running (89s) 85s ago 2m 0.20.0 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f 5cfd91f2dec4
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:grafana.a smithi099 running (104s) 85s ago 104s 6.6.2 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a d22d8f54a540
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:iscsi.iscsi.a smithi099 running (2m) 85s ago 2m 3.4 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 ee4b2dbcfe42
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:mgr.x smithi099 running (6m) 85s ago 6m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 5b50fc3e28b6
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:mon.a smithi131 running (8m) 85s ago 8m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 5fdb0de44749
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:mon.b smithi099 running (6m) 85s ago 6m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 6d898409329d
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:mon.c smithi131 running (6m) 85s ago 6m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 8566d05a01c5
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:node-exporter.a smithi131 running (2m) 85s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf 6c48edd25d55
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:node-exporter.b smithi099 running (2m) 85s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf e4321578ec02
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:osd.0 smithi131 running (5m) 85s ago 5m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 013ba62b1a67
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:osd.1 smithi131 running (5m) 85s ago 5m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 b3249b0b5044
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.2 smithi131 running (4m) 85s ago 4m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 9b79623d7e60
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.3 smithi131 running (4m) 85s ago 4m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 4626c1117138
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.4 smithi099 running (4m) 85s ago 4m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 4a1c1bbd5040
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.5 smithi099 running (3m) 85s ago 3m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 c3e1893f1cc6
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.6 smithi099 running (3m) 85s ago 3m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 292f1b5ea013
2020-12-08T19:05:12.379 INFO:teuthology.orchestra.run.smithi131.stdout:osd.7 smithi099 running (3m) 85s ago 3m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 5a88aabbe925
2020-12-08T19:05:12.379 INFO:teuthology.orchestra.run.smithi131.stdout:prometheus.a smithi099 running (95s) 85s ago 2m 2.18.1 docker.io/prom/prometheus:v2.18.1 de242295e225 45d6bf1407d5
2020-12-08T19:05:12.379 INFO:teuthology.orchestra.run.smithi131.stdout:rgw.realm.zone.a smithi131 running (2m) 85s ago 2m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 ae6e6aa0af5c
</pre>
Orchestrator - Bug #47684 (Resolved): cephadm: auth get failed: failed to find osd.27 in keyring retval: -2
https://tracker.ceph.com/issues/47684
2020-09-29T11:01:48Z
Sebastian Wagner
<pre>
Sep 29 03:53:01 sn-m01 bash[184377]: debug 2020-09-29T03:53:01.500+0000 7f70d9f88700 -1 log_channel(cluster) log [ERR] : Unhandled exception from module 'cephadm' while running on mgr.sn-m01: auth get failed: failed to find osd.27 in keyring retval: -2
Sep 29 03:53:01 sn-m01 bash[184377]: debug 2020-09-29T03:53:01.500+0000 7f70d9f88700 -1 cephadm.serve:
Sep 29 03:53:01 sn-m01 bash[184377]: debug 2020-09-29T03:53:01.500+0000 7f70d9f88700 -1 Traceback (most recent call last):
Sep 29 03:53:01 sn-m01 bash[184377]: File "/usr/share/ceph/mgr/cephadm/module.py", line 513, in serve
Sep 29 03:53:01 sn-m01 bash[184377]: if self.upgrade.continue_upgrade():
Sep 29 03:53:01 sn-m01 bash[184377]: File "/usr/share/ceph/mgr/cephadm/upgrade.py", line 150, in continue_upgrade
Sep 29 03:53:01 sn-m01 bash[184377]: self._do_upgrade()
Sep 29 03:53:01 sn-m01 bash[184377]: File "/usr/share/ceph/mgr/cephadm/upgrade.py", line 315, in _do_upgrade
Sep 29 03:53:01 sn-m01 bash[184377]: image=target_name
Sep 29 03:53:01 sn-m01 bash[184377]: File "/usr/share/ceph/mgr/cephadm/module.py", line 1637, in _daemon_action
Sep 29 03:53:01 sn-m01 bash[184377]: return self._create_daemon(daemon_spec)
Sep 29 03:53:01 sn-m01 bash[184377]: File "/usr/share/ceph/mgr/cephadm/module.py", line 1890, in _create_daemon
Sep 29 03:53:01 sn-m01 bash[184377]: cephadm_config, deps = self.cephadm_services[daemon_spec.daemon_type].generate_config(daemon_spec)
Sep 29 03:53:01 sn-m01 bash[184377]: File "/usr/share/ceph/mgr/cephadm/services/cephadmservice.py", line 97, in generate_config
Sep 29 03:53:01 sn-m01 bash[184377]: extra_ceph_config=daemon_spec.extra_config.pop('config', ''))
Sep 29 03:53:01 sn-m01 bash[184377]: File "/usr/share/ceph/mgr/cephadm/module.py", line 1860, in _get_config_and_keyring
Sep 29 03:53:01 sn-m01 bash[184377]: 'entity': ename,
Sep 29 03:53:01 sn-m01 bash[184377]: File "/usr/share/ceph/mgr/mgr_module.py", line 1102, in check_mon_command
Sep 29 03:53:01 sn-m01 bash[184377]: raise MonCommandFailed(f'{cmd_dict["prefix"]} failed: {r.stderr} retval: {r.retval}')
Sep 29 03:53:01 sn-m01 bash[184377]: mgr_module.MonCommandFailed: auth get failed: failed to find osd.27 in keyring retval: -2
</pre>
<p>For the full log see: <a class="external" href="https://tracker.ceph.com/attachments/download/5161/mgr.zip">https://tracker.ceph.com/attachments/download/5161/mgr.zip</a></p>
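<p>The traceback boils down to <code>generate_config</code> fetching the daemon keyring with <code>auth get</code>, which fails with -2 (ENOENT) once the osd.27 auth entry is gone. A minimal sketch of that call pattern in an mgr module, with a hypothetical guard (not the actual fix):</p>
<pre>
def get_keyring(mgr, entity):
    """Fetch a daemon keyring via 'auth get'; mgr is an MgrModule instance."""
    ret, keyring, err = mgr.mon_command({'prefix': 'auth get', 'entity': entity})
    if ret == -2:
        # ENOENT: the auth entry no longer exists, e.g. the daemon was removed
        raise KeyError(f'{entity} not found in keyring: {err}')
    return keyring
</pre>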
Orchestrator - Bug #47438 (Resolved): OSD.__init__ fails: the JSON object must be str, bytes or bytearray, not 'dict'
https://tracker.ceph.com/issues/47438
2020-09-14T12:48:03Z
Sebastian Wagner
<p>After OSD deletion the orchestrator does not come up anymore. From the log I can only find this:</p>
<pre>
debug 2020-09-14T12:09:37.105+0000 7fc932de3700 -1 mgr load Failed to construct class in 'cephadm'
debug 2020-09-14T12:09:37.105+0000 7fc932de3700 -1 mgr load Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/module.py", line 325, in __init__
self.rm_util.load_from_store()
File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 465, in load_from_store
osd_obj = OSD.from_json(json.loads(osd), ctx=self)
File "/usr/lib64/python3.6/json/__init__.py", line 348, in loads
'not {!r}'.format(s.__class__.__name__))
TypeError: the JSON object must be str, bytes or bytearray, not 'dict'
debug 2020-09-14T12:09:37.105+0000 7fc932de3700 -1 mgr operator() Failed to run module in active mode ('cephadm')
</pre>
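<p>The store hands <code>load_from_store</code> an already-decoded dict, but <code>json.loads()</code> only accepts str/bytes/bytearray. A minimal guard that tolerates both shapes (a sketch, not the actual patch):</p>
<pre>
import json

def ensure_dict(osd):
    """Accept either a JSON string or an already-decoded dict."""
    return json.loads(osd) if isinstance(osd, str) else osd

# osd_obj = OSD.from_json(ensure_dict(osd), ctx=self)
</pre>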
Orchestrator - Bug #47185 (Resolved): TypeError: _daemon_add_misc() got an unexpected keyword argument
https://tracker.ceph.com/issues/47185
2020-08-28T10:28:01Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-08-28_09:46:34-rados:cephadm-wip-swagner-testing-2020-08-28-1004-distro-basic-smithi/5383116/">https://pulpito.ceph.com/swagner-2020-08-28_09:46:34-rados:cephadm-wip-swagner-testing-2020-08-28-1004-distro-basic-smithi/5383116/</a></p>
<pre>
2020-08-28T10:01:45.393 INFO:teuthology.orchestra.run.smithi044:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:6258ea1dcfe72989baca3f3155cff7e60f2b9ac9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 19d8afaa-e915-11ea-a074-001a4aab830c -- ceph orch daemon add mon 'smithi044:[v2:172.21.15.44:3301,v1:172.21.15.44:6790]=c'
2020-08-28T10:01:47.113 INFO:teuthology.orchestra.run.smithi044.stderr:Error EINVAL: Traceback (most recent call last):
2020-08-28T10:01:47.114 INFO:teuthology.orchestra.run.smithi044.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 1191, in _handle_command
2020-08-28T10:01:47.114 INFO:teuthology.orchestra.run.smithi044.stderr: return self.handle_command(inbuf, cmd)
2020-08-28T10:01:47.114 INFO:teuthology.orchestra.run.smithi044.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 141, in handle_command
2020-08-28T10:01:47.114 INFO:teuthology.orchestra.run.smithi044.stderr: return dispatch[cmd['prefix']].call(self, cmd, inbuf)
2020-08-28T10:01:47.115 INFO:teuthology.orchestra.run.smithi044.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 328, in call
2020-08-28T10:01:47.115 INFO:teuthology.orchestra.run.smithi044.stderr: return self.func(mgr, **kwargs)
2020-08-28T10:01:47.115 INFO:teuthology.orchestra.run.smithi044.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 103, in <lambda>
2020-08-28T10:01:47.115 INFO:teuthology.orchestra.run.smithi044.stderr: wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
2020-08-28T10:01:47.115 INFO:teuthology.orchestra.run.smithi044.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 92, in wrapper
2020-08-28T10:01:47.116 INFO:teuthology.orchestra.run.smithi044.stderr: return func(*args, **kwargs)
2020-08-28T10:01:47.116 INFO:teuthology.orchestra.run.smithi044.stderr:TypeError: _daemon_add_misc() got an unexpected keyword argument 'smithi044:[v2:172.21.15.44:3301,v1:172.21.15.44:6790]'
2020-08-28T10:01:47.116 INFO:teuthology.orchestra.run.smithi044.stderr:
</pre>
<p>src: <a class="external" href="https://github.com/ceph/ceph-ci/blame/wip-swagner-testing-2020-08-28-1004/src/pybind/mgr/orchestrator/module.py#L769-L774">https://github.com/ceph/ceph-ci/blame/wip-swagner-testing-2020-08-28-1004/src/pybind/mgr/orchestrator/module.py#L769-L774</a></p>
<p>Possible cause: <a class="external" href="https://github.com/ceph/ceph-ci/commit/ee9dea6cbf9879208ca88786e7f3a944d479e9ed">https://github.com/ceph/ceph-ci/commit/ee9dea6cbf9879208ca88786e7f3a944d479e9ed</a></p>
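<p>The error message suggests the CLI layer splits every <code>key=value</code> token into keyword arguments, so the positional mon spec ending in <code>=c</code> is consumed as a kwarg named after everything before the last <code>=</code>. A minimal sketch of that failure mode (hypothetical parser, for illustration only):</p>
<pre>
def split_kwargs(tokens):
    """Naively turn 'k=v' tokens into kwargs, as the traceback suggests."""
    kwargs = {}
    for tok in tokens:
        if '=' in tok:
            key, value = tok.rsplit('=', 1)
            kwargs[key] = value
    return kwargs

# split_kwargs(['smithi044:[v2:172.21.15.44:3301,v1:172.21.15.44:6790]=c'])
# -> {'smithi044:[v2:172.21.15.44:3301,v1:172.21.15.44:6790]': 'c'}
</pre>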
Orchestrator - Bug #47170 (Resolved): cephadm "ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d-osd.3-activate" is already in use
https://tracker.ceph.com/issues/47170
2020-08-27T15:39:58Z
Sebastian Wagner
<pre>
Aug 27 11:36:50 r620-2 systemd[1]: Started Ceph osd.3 for c2f4ec26-c63c-11ea-80c1-90b11c20b87d.
Aug 27 11:36:50 r620-2 bash[9946]: Error: error creating container storage: the container name "ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d-osd.3-activate" is already in use by "c1b0b49f56035f4a1fb>
Aug 27 11:36:50 r620-2 systemd[1]: ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d@osd.3.service: Main process exited, code=exited, status=125/n/a
Aug 27 11:36:51 r620-2 systemd[1]: ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d@osd.3.service: Unit entered failed state.
Aug 27 11:36:51 r620-2 systemd[1]: ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d@osd.3.service: Failed with result 'exit-code'.
Aug 27 11:37:01 r620-2 systemd[1]: ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d@osd.3.service: Service RestartSec=10s expired, scheduling restart.
Aug 27 11:37:01 r620-2 systemd[1]: Stopped Ceph osd.3 for c2f4ec26-c63c-11ea-80c1-90b11c20b87d.
Aug 27 11:37:01 r620-2 systemd[1]: ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d@osd.3.service: Start request repeated too quickly.
Aug 27 11:37:01 r620-2 systemd[1]: Failed to start Ceph osd.3 for c2f4ec26-c63c-11ea-80c1-90b11c20b87d.
Aug 27 11:37:01 r620-2 systemd[1]: ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d@osd.3.service: Unit entered failed state.
Aug 27 11:37:01 r620-2 systemd[1]: ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d@osd.3.service: Failed with result 'exit-code'.
</pre>
<p>Workaround:</p>
<pre>
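# stop the stale container, remove it, then clear its leftover storage entry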
podman stop c1b0b49f56035f4a1fb
podman rm c1b0b49f56035f4a1fb
podman rm --storage c1b0b49f56035f4a1fb
</pre>
Orchestrator - Bug #46748 (Resolved): Module 'cephadm' has failed: auth get failed: failed to find osd.32 in keyring
https://tracker.ceph.com/issues/46748
2020-07-29T09:53:48Z
Sebastian Wagner
<p>It was purged yesterday:</p>
<pre>
ceph osd purge 32 --yes-i-really-mean-it
ceph osd tree | grep 32 => no match
ceph osd crush remove osd.32 => device 'osd.32' does not appear in the crush map
</pre>
Ceph - Bug #42528 (Resolved): python-common build failure: File not found: ceph-*.egg-info
https://tracker.ceph.com/issues/42528
2019-10-29T12:19:41Z
Sebastian Wagner
<pre>
RPM build errors:
File not found: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph
File not found by glob: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph-*.egg-info
+ rm -fr /tmp/install-deps.1830
Build step 'Execute shell' marked build as failure
</pre>
<pre>
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/ceph
copying ceph/__init__.py -> build/lib/ceph
copying ceph/exceptions.py -> build/lib/ceph
creating build/lib/ceph/deployment
copying ceph/deployment/__init__.py -> build/lib/ceph/deployment
copying ceph/deployment/drive_group.py -> build/lib/ceph/deployment
copying ceph/deployment/ssh_orchestrator.py -> build/lib/ceph/deployment
creating build/lib/ceph/tests
copying ceph/tests/__init__.py -> build/lib/ceph/tests
copying ceph/tests/test_drive_group.py -> build/lib/ceph/tests
running install_lib
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/exceptions.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/ssh_orchestrator.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/test_drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/exceptions.py to exceptions.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/drive_group.py to drive_group.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/ssh_orchestrator.py to ssh_orchestrator.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/test_drive_group.py to test_drive_group.cpython-36.pyc
running install_egg_info
running egg_info
creating ceph.egg-info
writing ceph.egg-info/PKG-INFO
writing dependency_links to ceph.egg-info/dependency_links.txt
writing requirements to ceph.egg-info/requires.txt
writing top-level names to ceph.egg-info/top_level.txt
writing manifest file 'ceph.egg-info/SOURCES.txt'
reading manifest file 'ceph.egg-info/SOURCES.txt'
writing manifest file 'ceph.egg-info/SOURCES.txt'
Copying ceph.egg-info to /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph-1.0.0-py3.6.egg-info
running install_scripts
Traceback (most recent call last):
File "setup.py", line 45, in <module>
'Programming Language :: Python :: 3.6',
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
dist = best[req.key] = env.best_match(req, self, installer)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
return self.obtain(req, installer) # try and download/install
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
return installer(requirement)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
return cmd.easy_install(req)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 623, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 653, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 849, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1130, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1115, in run_setup
run_setup(setup_script, args)
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 69, in run_setup
lambda: execfile(
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 120, in run
return func()
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 71, in <lambda>
{'__file__':setup_script, '__name__':'__main__'}
File "setup.py", line 21, in <module>
packages=find_packages(),
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 269, in __init__
_Distribution.__init__(self,attrs)
File "/usr/lib64/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 302, in finalize_options
ep.load()(self, ep.name, value)
File "build/bdist.linux-x86_64/egg/setuptools_scm/integration.py", line 9, in version_keyword
File "build/bdist.linux-x86_64/egg/setuptools_scm/version.py", line 66, in _warn_if_setuptools_outdated
setuptools_scm.version.SetuptoolsOutdatedWarning: your setuptools is too old (<12)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull</a></p>