Ceph : Issues
https://tracker.ceph.com/
2020-04-29T15:58:52Z
Ceph
Redmine
bluestore - Bug #45335 (Resolved): cephadm upgrade: OSD.0 is not coming back after restart: rock...
https://tracker.ceph.com/issues/45335
2020-04-29T15:58:52Z
Sebastian Wagner
<pre>
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.842415+0000 mgr.x (mgr.34535) 47 : cephadm [INF] Upgrade: Target is quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537 with id 9c90938ad11a31c5ba9b58ed052bf347591ae047e94bca695e7a022672efd3b9
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.843492+0000 mgr.x (mgr.34535) 48 : cephadm [INF] Upgrade: Checking mgr daemons...
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.848251+0000 mgr.x (mgr.34535) 49 : cephadm [INF] Upgrade: All mgr daemons are up to date.
2020-04-28T22:15:25.775 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.848483+0000 mgr.x (mgr.34535) 50 : cephadm [INF] Upgrade: Checking mon daemons...
2020-04-28T22:15:25.776 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.849302+0000 mgr.x (mgr.34535) 51 : cephadm [INF] Upgrade: Setting container_image for all mon...
2020-04-28T22:15:25.776 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.868867+0000 mgr.x (mgr.34535) 52 : cephadm [INF] Upgrade: All mon daemons are up to date.
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.869043+0000 mgr.x (mgr.34535) 53 : cephadm [INF] Upgrade: Checking crash daemons...
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.869744+0000 mgr.x (mgr.34535) 54 : cephadm [INF] Upgrade: Setting container_image for all crash...
2020-04-28T22:15:25.777 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.870444+0000 mgr.x (mgr.34535) 55 : cephadm [INF] Upgrade: All crash daemons are up to date.
2020-04-28T22:15:25.778 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cephadm 2020-04-28T22:15:23.870641+0000 mgr.x (mgr.34535) 56 : cephadm [INF] Upgrade: Checking osd daemons...
2020-04-28T22:15:25.778 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:25 smithi151 bash[10946]: cluster 2020-04-28T22:15:24.333492+0000 mon.a (mon.0) 109 : cluster [DBG] mgrmap e25: x(active, since 41s), standbys: y
2020-04-28T22:15:26.521 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.522 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.523 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.524 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[10946]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.524 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.525 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.526 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.527 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[11812]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cluster 2020-04-28T22:15:25.061515+0000 mgr.x (mgr.34535) 57 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.422931+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.423236+0000 mgr.x (mgr.34535) 58 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"]}]: dispatch
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.424056+0000 mgr.x (mgr.34535) 59 : cephadm [INF] Upgrade: It is safe to stop osd.0
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.424365+0000 mgr.x (mgr.34535) 60 : cephadm [INF] Upgrade: Redeploying osd.0
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.424676+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]: dispatch
2020-04-28T22:15:26.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.428887+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd='[{"prefix": "config set", "name": "container_image", "value": "quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537", "who": "osd.0"}]': finished
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.429664+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.430373+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: cephadm 2020-04-28T22:15:25.431342+0000 mgr.x (mgr.34535) 61 : cephadm [INF] Deploying daemon osd.0 on smithi151
2020-04-28T22:15:26.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:26 smithi156 bash[4373]: audit 2020-04-28T22:15:25.431836+0000 mon.a (mon.0) 115 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "osd.0", "key": "container_image"}]: dispatch
2020-04-28T22:15:26.991 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 systemd[1]: Stopping Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c...
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 received signal: Terminated from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 osd.0 51 *** Got signal Terminated ***
2020-04-28T22:15:26.992 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 bash[28166]: debug 2020-04-28T22:15:26.838+0000 7fb6a45f8700 -1 osd.0 51 *** Immediate shutdown (osd_fast_shutdown=true) ***
2020-04-28T22:15:27.271 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:26 smithi151 podman[13417]: 2020-04-28 22:15:26.989657914 +0000 UTC m=+0.182639933 container died 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931 (image=docker.io/ceph/ceph:v15.2.0, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:27.272 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13417]: 2020-04-28 22:15:27.020897016 +0000 UTC m=+0.213879019 container stop 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931 (image=docker.io/ceph/ceph:v15.2.0, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:27.272 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13417]: 816cfebb822abfc1a17c63b3c8dbf4d82a23b1f976117dbc325a433a615cc931
2020-04-28T22:15:27.647 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.647 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.648 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.649 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.650 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.651 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.652 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.653 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.654 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.654 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.654 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.655 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.656 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.657 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.658 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.659 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.660 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.660 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:27 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.875889+0000 mon.a (mon.0) 116 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.875926+0000 mon.a (mon.0) 117 : cluster [INF] osd.0 failed (root=default,host=smithi151) (connection refused reported by osd.2)
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876132+0000 mon.a (mon.0) 118 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876254+0000 mon.a (mon.0) 119 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876427+0000 mon.a (mon.0) 120 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876630+0000 mon.a (mon.0) 121 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876804+0000 mon.a (mon.0) 122 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.876953+0000 mon.a (mon.0) 123 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877082+0000 mon.a (mon.0) 124 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.689 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877205+0000 mon.a (mon.0) 125 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877341+0000 mon.a (mon.0) 126 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:26.877492+0000 mon.a (mon.0) 127 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.074089+0000 mon.a (mon.0) 128 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.690 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.074241+0000 mon.a (mon.0) 129 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.075789+0000 mon.a (mon.0) 130 : cluster [DBG] osd.0 reported immediately failed by osd.3
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.075870+0000 mon.a (mon.0) 131 : cluster [DBG] osd.0 reported immediately failed by osd.2
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.076508+0000 mon.a (mon.0) 132 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.691 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.076604+0000 mon.a (mon.0) 133 : cluster [DBG] osd.0 reported immediately failed by osd.1
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.076919+0000 mon.a (mon.0) 134 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077001+0000 mon.a (mon.0) 135 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077113+0000 mon.a (mon.0) 136 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077197+0000 mon.a (mon.0) 137 : cluster [DBG] osd.0 reported immediately failed by osd.6
2020-04-28T22:15:27.692 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077320+0000 mon.a (mon.0) 138 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:27.693 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077390+0000 mon.a (mon.0) 139 : cluster [DBG] osd.0 reported immediately failed by osd.4
2020-04-28T22:15:27.693 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077520+0000 mon.a (mon.0) 140 : cluster [DBG] osd.0 reported immediately failed by osd.5
2020-04-28T22:15:27.693 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:27 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.077634+0000 mon.a (mon.0) 141 : cluster [DBG] osd.0 reported immediately failed by osd.7
2020-04-28T22:15:28.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.64575936 +0000 UTC m=+0.606987472 container create c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.820779587 +0000 UTC m=+0.782007706 container init c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.023 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.862377714 +0000 UTC m=+0.823605831 container start c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.023 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:27 smithi151 podman[13459]: 2020-04-28 22:15:27.862442318 +0000 UTC m=+0.823670460 container attach c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.349 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13459]: 2020-04-28 22:15:28.087744802 +0000 UTC m=+1.048972928 container died c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.605 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[10946]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:28.605 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13459]: 2020-04-28 22:15:28.587885902 +0000 UTC m=+1.549114039 container remove c38c141a767f34ae26c7f4f2d575e2f0e779068022585eabc02d7f9832e2536a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-deactivate)
2020-04-28T22:15:28.606 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Stopped Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c.
2020-04-28T22:15:28.606 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.606 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.607 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:28 smithi151 bash[11812]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:28.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:28 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.062007+0000 mgr.x (mgr.34535) 62 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:28.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:28 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.338144+0000 mon.a (mon.0) 142 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2020-04-28T22:15:28.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:28 smithi156 bash[4373]: cluster 2020-04-28T22:15:27.347350+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e52: 8 total, 7 up, 8 in
2020-04-28T22:15:29.021 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Starting Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c...
2020-04-28T22:15:29.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 podman[13562]: Error: no container with name or ID ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0 found: no such container
2020-04-28T22:15:29.022 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:28 smithi151 systemd[1]: Started Ceph osd.0 for 8e078f14-899c-11ea-a068-001a4aab830c.
2020-04-28T22:15:29.322 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.0207602 +0000 UTC m=+0.262426894 container create edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.322 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.120609901 +0000 UTC m=+0.362276575 container init edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.323 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.162161904 +0000 UTC m=+0.403828610 container start edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.323 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.162247399 +0000 UTC m=+0.403914112 container attach edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:29.688 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:29 smithi156 bash[4373]: cluster 2020-04-28T22:15:28.355727+0000 mon.a (mon.0) 144 : cluster [DBG] osdmap e53: 8 total, 7 up, 8 in
2020-04-28T22:15:29.775 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[11812]: audit 2020-04-28T22:15:28.777097+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.34535 172.21.15.156:0/2194410762' entity='mgr.x' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/vg_nvme/lv_4 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2020-04-28T22:15:29.776 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/ln -snf /dev/vg_nvme/lv_4 /var/lib/ceph/osd/ceph-0/block
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2020-04-28T22:15:29.777 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 bash[13577]: --> ceph-volume lvm activate successful for osd ID: 0
2020-04-28T22:15:29.778 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.444755363 +0000 UTC m=+0.686422056 container died edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:30.180 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:29 smithi151 podman[13580]: 2020-04-28 22:15:29.902456456 +0000 UTC m=+1.144123162 container remove edc631b77207be0034ca1916a33133fc983c923c2b05a13f6b48aa7e06025d40 (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0-activate)
2020-04-28T22:15:30.180 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.137040328 +0000 UTC m=+0.215911961 container create 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.505 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[10946]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:30.506 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[11812]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.253693628 +0000 UTC m=+0.332565244 container init 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.295389095 +0000 UTC m=+0.374260713 container start 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.506 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 podman[13761]: 2020-04-28 22:15:30.295458519 +0000 UTC m=+0.374330136 container attach 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:30.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:30 smithi156 bash[4373]: cluster 2020-04-28T22:15:29.062506+0000 mgr.x (mgr.34535) 63 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 0 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
2020-04-28T22:15:31.077 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:30 smithi151 bash[13577]: debug 2020-04-28T22:15:30.801+0000 7f47628adec0 -1 Falling back to public interface
2020-04-28T22:15:31.077 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.074+0000 7f47628adec0 -1 rocksdb: verify_sharding mismatch on sharding. requested = [(L,1,0-,),(O,3,0-13,),(m,3,0-,)] stored = []
2020-04-28T22:15:31.078 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.074+0000 7f47628adec0 -1 bluestore(/var/lib/ceph/osd/ceph-0) _open_db erroring opening db:
2020-04-28T22:15:31.687 INFO:ceph.mon.b.smithi156.stdout:Apr 28 22:15:31 smithi156 bash[4373]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.731 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[10946]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.731 INFO:ceph.mon.c.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[11812]: cluster 2020-04-28T22:15:31.361256+0000 mon.a (mon.0) 148 : cluster [WRN] Health check failed: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)
2020-04-28T22:15:31.732 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.589+0000 7f47628adec0 -1 osd.0 0 OSD:init: unable to mount object store
2020-04-28T22:15:31.732 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 bash[13577]: debug 2020-04-28T22:15:31.589+0000 7f47628adec0 -1 ** ERROR: osd init failed: (5) Input/output error
2020-04-28T22:15:32.021 INFO:ceph.osd.0.smithi151.stdout:Apr 28 22:15:31 smithi151 podman[13761]: 2020-04-28 22:15:31.729840599 +0000 UTC m=+1.808712241 container died 0ba1c66cfbf52fe97b55dfc3a43281b5629c624e0482caba90b4d1019b5c9f8a (image=quay.io/ceph-ci/ceph:572900540492d8a60eef974379bcb0aa3e5c4537, name=ceph-8e078f14-899c-11ea-a068-001a4aab830c-osd.0)
2020-04-28T22:15:32.614 INFO:ceph.mon.a.smithi151.stdout:Apr 28 22:15:32 smithi151 bash[10946]: cluster 2020-04-28T22:15:31.063005+0000 mgr.x (mgr.34535) 64 : cluster [DBG] pgmap v29: 1 pgs: 1 active+undersized+degraded; 0 B data, 4.0 MiB used, 707 GiB / 715 GiB avail; 1/3 objects degraded (33.333%)
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-28_21:52:13-rados-wip-mgfritch-testing-2020-04-28-1427-distro-basic-smithi/4995175/teuthology.log">http://qa-proxy.ceph.com/teuthology/mgfritch-2020-04-28_21:52:13-rados-wip-mgfritch-testing-2020-04-28-1427-distro-basic-smithi/4995175/teuthology.log</a></p>
Ceph - Feature #44745 (New): YAMLFormatter for common/Formatter.h
https://tracker.ceph.com/issues/44745
2020-03-25T11:36:11Z
Sebastian Wagner
<p><a class="external" href="https://github.com/ceph/ceph/pull/34061">https://github.com/ceph/ceph/pull/34061</a> add a new value <code>yaml</code> for <code>--format</code> in order to support yaml in <code>mgr/cephadm</code>.</p>
<p>Having a YAMLFormatter for common/Formatter.h would be great, too!</p>
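<p>For the mgr side, the linked PR can reuse the structures that are already serialized for <code>--format json</code>. Below is a minimal illustrative sketch (in Python, assuming PyYAML; the function name and option handling are hypothetical, not the actual mgr/cephadm code) of what a <code>yaml</code> branch next to the existing <code>json</code> one can look like; a YAMLFormatter in common/Formatter.h would offer the same on the C++ side.</p>
<pre>
# Illustrative sketch only -- names are hypothetical, not the mgr/cephadm implementation.
import json
import yaml  # PyYAML, assumed to be available in the mgr environment

def format_output(result: dict, fmt: str = 'plain') -> str:
    if fmt == 'json':
        return json.dumps(result, indent=2, sort_keys=True)
    if fmt == 'yaml':
        # emit the exact same structure the JSON path serializes
        return yaml.safe_dump(result, default_flow_style=False)
    # simple human-readable fallback
    return '\n'.join('%s: %s' % (k, v) for k, v in result.items())

print(format_output({'service_type': 'osd', 'placement': {'host_pattern': '*'}}, 'yaml'))
</pre>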
ceph-volume - Bug #44356 (Resolved): ceph-volume inventory: KeyError: 'ceph.cluster_name'
https://tracker.ceph.com/issues/44356
2020-02-28T18:59:45Z
Sebastian Wagner
<pre>
Module 'cephadm' has failed: cephadm exited with an error code: 1, stderr:INFO:cephadm:/usr/bin/podman:stderr WARNING: The same type, major and minor should not be used for multiple devices.
INFO:cephadm:/usr/bin/podman:stderr WARNING: The same type, major and minor should not be used for multiple devices.
INFO:cephadm:/usr/bin/podman:stderr --> KeyError: 'ceph.cluster_name'
Traceback (most recent call last):
File "<stdin>", line 3394, in <module>
File "<stdin>", line 688, in _infer_fsid
File "<stdin>", line 2202, in command_ceph_volume
File "<stdin>", line 513, in call_throws
RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --privileged --group-add=disk -e CONTAINER_IMAGE=registry.suse.de/suse/sle-15-sp2/update/products/ses7/update/cr/containers/ses/7/ceph/ceph:latest -e NODE_NAME=hses-node1 -v /var/run/ceph/002c389e-54fd-11ea-a99f-52540044d765:/var/run/ceph:z -v /var/log/ceph/002c389e-54fd-11ea-a99f-52540044d765:/var/log/ceph:z -v /var/lib/ceph/002c389e-54fd-11ea-a99f-52540044d765/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm --entrypoint /usr/sbin/ceph-volume registry.suse.de/suse/sle-15-sp2/update/products/ses7/update/cr/containers/ses/7/ceph/ceph:latest inventory --format=json
</pre>
<pre>
hses-node1:~ # ceph -s
cluster:
id: 002c389e-54fd-11ea-a99f-52540044d765
health: HEALTH_ERR
1 filesystem is offline
1 filesystem is online with fewer MDS than max_mds
Module 'cephadm' has failed: cephadm exited with an error code: 1, stderr:INFO:cephadm:/usr/bin/podman:stderr WARNING: The same type, major and minor should not be used for multiple devices.
INFO:cephadm:/usr/bin/podman:stderr WARNING: The same type, major and minor should not be used for multiple devices.
INFO:cephadm:/usr/bin/podman:stderr --> KeyError: 'ceph.cluster_name'
Traceback (most recent call last):
File "<stdin>", line 3394, in <module>
File "<stdin>", line 688, in _infer_fsid
File "<stdin>", line 2202, in command_ceph_volume
File "<stdin>", line 513, in call_throws
RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --privileged --group-add=disk -e CONTAINER_IMAGE=registry.suse.de/suse/sle-15-sp2/update/products/ses7/update/cr/containers/ses/7/ceph/ceph:latest -e NODE_NAME=hses-node1 -v /var/run/ceph/002c389e-54fd-11ea-a99f-52540044d765:/var/run/ceph:z -v /var/log/ceph/002c389e-54fd-11ea-a99f-52540044d765:/var/log/ceph:z -v /var/lib/ceph/002c389e-54fd-11ea-a99f-52540044d765/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm --entrypoint /usr/sbin/ceph-volume registry.suse.de/suse/sle-15-sp2/update/products/ses7/update/cr/containers/ses/7/ceph/ceph:latest inventory --format=json
services:
mon: 4 daemons, quorum hses-node1,hses-node2,hses-node3,hses-node4 (age 11m)
mgr: hses-node1.jxzdin(active, since 13m), standbys: hses-node1.aogwfz, hses-node2.dlyvwy, hses-node3.delhzp, hses-node4.vgmgec
mds: myfs:0
osd: 8 osds: 8 up (since 6m), 8 in (since 6m)
rgw: 4 daemons active (default.default.hses-node1.sirofe, default.default.hses-node2.kvzapg, default.default.hses-node3.dhobfn, default.default.hses-node4.cuulnm)
task status:
data:
pools: 6 pools, 168 pgs
objects: 189 objects, 5.8 KiB
usage: 8.2 GiB used, 152 GiB / 160 GiB avail
pgs: 168 active+clean
</pre>
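<p>The traceback suggests that <code>ceph-volume inventory</code> indexes the lvm tag dictionary directly (<code>tags['ceph.cluster_name']</code>) and therefore fails on logical volumes that carry no <code>ceph.*</code> tags at all. A defensive lookup along the following lines avoids the KeyError; this is an illustrative sketch only, not the patch that resolved the issue.</p>
<pre>
# Illustrative sketch, not the actual ceph-volume fix: tolerate LVs without ceph.* lvm tags.
def cluster_name_for(lv_tags: dict) -> str:
    # lv_tags is the parsed tag dictionary for one logical volume
    return lv_tags.get('ceph.cluster_name', 'ceph')  # fall back to the default cluster name

print(cluster_name_for({}))                                   # LV created outside ceph-volume -> 'ceph'
print(cluster_name_for({'ceph.cluster_name': 'mycluster'}))   # -> 'mycluster'
</pre>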
ceph-volume - Bug #43858 (New): ceph-volume: lvm zap requires `/dev/` prefix
https://tracker.ceph.com/issues/43858
2020-01-28T12:38:22Z
Sebastian Wagner
<p>This differs from the other lvm commands:</p>
<p>preparing:</p>
<pre>
root@ubuntu:~# losetup -f
/dev/loop12
root@ubuntu:~# LANG=C losetup /dev/loop12 disk-image
root@ubuntu:~# pvcreate /dev/loop12
Physical volume "/dev/loop12" successfully created.
root@ubuntu:~# vgcreate MyVg /dev/loop12
Volume group "MyVg" successfully created
root@ubuntu:~# lvcreate --size 5500M --name MyLV MyVg
Logical volume "MyLV" created.
root@ubuntu:~# ll /dev/MyVg/MyLV
lrwxrwxrwx 1 root root 7 Jan 28 11:38 /dev/MyVg/MyLV -> ../dm-0
root@ubuntu:~# vgs -o vg_tags MyVg
VG Tags
root@ubuntu:~# vgs -o lv_tags MyVg
LV Tags
</pre>
<pre>
# ceph-volume lvm zap MyVg/MyLV
stderr: lsblk: MyVg/MyLV: not a block device
stderr: blkid: error: MyVg/MyLV: No such file or directory
stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
[--osd-fsid OSD_FSID]
[DEVICES [DEVICES ...]]
ceph-volume lvm zap: error: Unable to proceed with non-existing device: MyVg/MyLV
</pre>
<pre>
# ceph-volume lvm zap /dev/MyVg/MyLV
--> Zapping: /dev/MyVg/MyLV
Running command: /bin/dd if=/dev/zero of=/dev/MyVg/MyLV bs=1M count=10
stderr: 10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00413076 s, 2.5 GB/s
--> Zapping successful for: <LV: /dev/MyVg/MyLV>
</pre>
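<p>The bare <code>MyVg/MyLV</code> notation accepted by the other lvm subcommands maps to the <code>/dev/MyVg/MyLV</code> symlink shown in the preparation step above, so <code>lvm zap</code> could normalize its argument before the existence check. An illustrative sketch of such a normalization (not the actual ceph-volume code):</p>
<pre>
# Illustrative sketch only: accept both 'MyVg/MyLV' and absolute device paths for zap.
import os

def normalize_device(arg: str) -> str:
    if os.path.isabs(arg):
        return arg                        # /dev/MyVg/MyLV, /dev/sdb, ...
    if '/' in arg:
        return os.path.join('/dev', arg)  # MyVg/MyLV -> /dev/MyVg/MyLV
    raise ValueError('not a device path or vg/lv name: %s' % arg)

print(normalize_device('MyVg/MyLV'))       # -> /dev/MyVg/MyLV
print(normalize_device('/dev/MyVg/MyLV'))  # unchanged
</pre>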
ceph-cm-ansible - Bug #43738 (New): cephadm: conflicts between attempted installs of libstoragemg...
https://tracker.ceph.com/issues/43738
2020-01-21T10:27:32Z
Sebastian Wagner
<pre>
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 420, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 263, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 290, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 314, in _handle_failure
raise AnsibleFailedError(failures)
failure_reason: '86 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz
conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
Error Summary
-------------
''}}Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)RepresenterError: (''cannot represent an object'', u''/'')Failure
object was: {''smithi161.front.sepia.ceph.com'': {''_ansible_no_log'': False, ''changed'':
False, u''results'': [], u''rc'': 1, u''invocation'': {u''module_args'': {u''install_weak_deps'':
True, u''autoremove'': False, u''lock_timeout'': 0, u''download_dir'': None, u''install_repoquery'':
True, u''enable_plugin'': [], u''update_cache'': False, u''disable_excludes'': None,
u''exclude'': [], u''installroot'': u''/'', u''allow_downgrade'': False, u''name'':
[u''@core'', u''@base'', u''dnf-utils'', u''git-all'', u''sysstat'', u''libedit'',
u''boost-thread'', u''xfsprogs'', u''gdisk'', u''parted'', u''libgcrypt'', u''fuse-libs'',
u''openssl'', u''libuuid'', u''attr'', u''ant'', u''lsof'', u''gettext'', u''bc'',
u''xfsdump'', u''blktrace'', u''usbredir'', u''podman'', u''podman-docker'', u''libev-devel'',
u''valgrind'', u''nfs-utils'', u''ncurses-devel'', u''gcc'', u''git'', u''python3-nose'',
u''python3-virtualenv'', u''genisoimage'', u''qemu-img'', u''qemu-kvm-core'', u''qemu-kvm-block-rbd'',
u''libacl-devel'', u''dbench'', u''autoconf''], u''download_only'': False, u''bugfix'':
False, u''list'': None, u''disable_gpg_check'': False, u''conf_file'': None, u''update_only'':
False, u''state'': u''present'', u''disablerepo'': [], u''releasever'': None, u''disable_plugin'':
[], u''enablerepo'': [], u''skip_broken'': False, u''security'': False, u''validate_certs'':
True}}, u''failures'': [], u''msg'': u''Unknown Error occured: Transaction check
error:
file /usr/lib/tmpfiles.d/libstoragemgmt.conf conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/doc/libstoragemgmt/NEWS conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmcli.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmd.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/simc_lsmplugin.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log</a></p>
<ul>
<li>os_type: rhel</li>
<li>os_version: '8.0'</li>
<li>description: rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml}</li>
</ul>
<p>As this is an Ansible error, I'm not sure whether this is actually a cephadm issue. Any clues?</p>
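<p>For reference, the RepresenterError itself is PyYAML refusing to serialize a value type it has no representer registered for. A small sketch of that mechanism (plain PyYAML, not the actual teuthology code; the wrapped-string type is only an assumption about what Ansible handed back):</p>
<pre>
# A minimal sketch of the mechanism behind the RepresenterError above, using
# plain PyYAML rather than the teuthology/ansible code: safe_dump() only has
# representers registered for exact built-in types, so a subclassed or wrapped
# string (such as Ansible's "unsafe" text wrapper) falls through to
# represent_undefined() and raises.
import yaml

class WrappedText(str):               # hypothetical stand-in for a wrapped string type
    pass

try:
    yaml.safe_dump({'installroot': WrappedText('/')})
except yaml.representer.RepresenterError as err:
    print(err)                        # ('cannot represent an object', '/')
</pre>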
Ceph - Bug #42528 (Resolved): python-common build failure: File not found: ceph-*.egg-info
https://tracker.ceph.com/issues/42528
2019-10-29T12:19:41Z
Sebastian Wagner
<pre>
RPM build errors:
File not found: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph
File not found by glob: /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python2.7/site-packages/ceph-*.egg-info
+ rm -fr /tmp/install-deps.1830
Build step 'Execute shell' marked build as failure
</pre>
<pre>
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/ceph
copying ceph/__init__.py -> build/lib/ceph
copying ceph/exceptions.py -> build/lib/ceph
creating build/lib/ceph/deployment
copying ceph/deployment/__init__.py -> build/lib/ceph/deployment
copying ceph/deployment/drive_group.py -> build/lib/ceph/deployment
copying ceph/deployment/ssh_orchestrator.py -> build/lib/ceph/deployment
creating build/lib/ceph/tests
copying ceph/tests/__init__.py -> build/lib/ceph/tests
copying ceph/tests/test_drive_group.py -> build/lib/ceph/tests
running install_lib
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
copying build/lib/ceph/exceptions.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
copying build/lib/ceph/deployment/ssh_orchestrator.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment
creating /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/__init__.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
copying build/lib/ceph/tests/test_drive_group.py -> /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/exceptions.py to exceptions.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/drive_group.py to drive_group.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/deployment/ssh_orchestrator.py to ssh_orchestrator.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/__init__.py to __init__.cpython-36.pyc
byte-compiling /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph/tests/test_drive_group.py to test_drive_group.cpython-36.pyc
running install_egg_info
running egg_info
creating ceph.egg-info
writing ceph.egg-info/PKG-INFO
writing dependency_links to ceph.egg-info/dependency_links.txt
writing requirements to ceph.egg-info/requires.txt
writing top-level names to ceph.egg-info/top_level.txt
writing manifest file 'ceph.egg-info/SOURCES.txt'
reading manifest file 'ceph.egg-info/SOURCES.txt'
writing manifest file 'ceph.egg-info/SOURCES.txt'
Copying ceph.egg-info to /home/jenkins-build/build/workspace/ceph-dev-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-6557-g9c17ca0/rpm/el7/BUILDROOT/ceph-15.0.0-6557.g9c17ca0.el7.x86_64/usr/lib/python3.6/site-packages/ceph-1.0.0-py3.6.egg-info
running install_scripts
Traceback (most recent call last):
File "setup.py", line 45, in <module>
'Programming Language :: Python :: 3.6',
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
dist = best[req.key] = env.best_match(req, self, installer)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
return self.obtain(req, installer) # try and download/install
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
return installer(requirement)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
return cmd.easy_install(req)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 623, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 653, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 849, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1130, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1115, in run_setup
run_setup(setup_script, args)
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 69, in run_setup
lambda: execfile(
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 120, in run
return func()
File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 71, in <lambda>
{'__file__':setup_script, '__name__':'__main__'}
File "setup.py", line 21, in <module>
packages=find_packages(),
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 269, in __init__
_Distribution.__init__(self,attrs)
File "/usr/lib64/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 302, in finalize_options
ep.load()(self, ep.name, value)
File "build/bdist.linux-x86_64/egg/setuptools_scm/integration.py", line 9, in version_keyword
File "build/bdist.linux-x86_64/egg/setuptools_scm/version.py", line 66, in _warn_if_setuptools_outdated
setuptools_scm.version.SetuptoolsOutdatedWarning: your setuptools is too old (<12)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31272//consoleFull</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/31269//consoleFull</a></p>
Ceph - Bug #39595 (Resolved): Compile error: ceph-dencoder: invalid operands (*UND* and .gcc_exce...
https://tracker.ceph.com/issues/39595
2019-05-06T09:57:05Z
Sebastian Wagner
<p><a class="external" href="https://shaman.ceph.com/builds/ceph/master/1991495a22fa74210348ffd4f261c314ef3f056c/notcmalloc/152091/">https://shaman.ceph.com/builds/ceph/master/1991495a22fa74210348ffd4f261c314ef3f056c/notcmalloc/152091/</a></p>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/25093//consoleFull">https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos7,DIST=centos7,MACHINE_SIZE=huge/25093//consoleFull</a></p>
<pre>
[ 90%] Building CXX object src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/ceph_dencoder.cc.o
{standard input}: Assembler messages:
{standard input}:1923894: Error: invalid operands (*UND* and .gcc_except_table sections) for `-'
{standard input}:1923897: Error: invalid operands (*UND* and .gcc_except_table sections) for `-'
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
make[2]: *** [src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/ceph_dencoder.cc.o] Error 1
make[1]: *** [src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/all] Error 2
make: *** [all] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.I0VnuO (%build)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.I0VnuO (%build)
Finished: FAILURE
</pre>
Ceph - Documentation #38934 (Resolved): "Developer Guide" appears two times in table of contents
https://tracker.ceph.com/issues/38934
2019-03-25T13:10:25Z
Sebastian Wagner
<p>This is the expanded table of contents of <a class="external" href="http://docs.ceph.com/docs/master/dev/">http://docs.ceph.com/docs/master/dev/</a>, clearly showing the Developer Guide twice.</p>
<pre>
- **Intro to Ceph**
- **Installation (ceph-deploy)**
- **Installation (Manual)**
- **Installation (Kubernetes + Helm)**
- **Ceph Storage Cluster**
- **Ceph Filesystem**
- **Ceph Block Device**
- **Ceph Object Gateway**
- **Ceph Manager Daemon**
- **Ceph Dashboard**
- **API Documentation**
- **Architecture**
- **Developer Guide**
- **Introduction**
- **Essentials (tl;dr)**
- **Leads**
- **History**
- **Licensing**
- **Source code repositories**
- **Redmine issue tracker**
- **Mailing list**
- **IRC**
- **Submitting patches**
- **Building from source**
- **Using ccache to speed up local builds**
- **Development-mode cluster**
- **Kubernetes/Rook development cluster**
- **Backporting**
- **Guidance for use of cluster log**
- **What is merged where and when ?**
- **Development releases (i.e. x.0.z)**
- **What ?**
- **Where ?**
- **When ?**
- **Branch merges**
- **Stable release candidates (i.e. x.1.z) phase 1**
- **What ?**
- **Where ?**
- **When ?**
- **Branch merges**
- **Stable release candidates (i.e. x.1.z) phase 2**
- **What ?**
- **Where ?**
- **When ?**
- **Branch merges**
- **Stable releases (i.e. x.2.z)**
- **What ?**
- **Where ?**
- **When ?**
- **Branch merges**
- **Issue tracker**
- **Issue tracker conventions**
- **Basic workflow**
- **Update the tracker**
- **Upstream code**
- **Local environment**
- **Bugfix branch**
- **Fix bug locally**
- **GitHub pull request**
- **Automated PR validation**
- **Notes on PR make check test**
- **Integration tests AKA ceph-qa-suite**
- **Code review**
- **Amending your PR**
- **Merge**
- **Testing**
- **Unit tests - make check**
- **How unit tests are declared**
- **Unit testing of CLI tools**
- **Caveats**
- **Integration tests**
- **Teuthology consumes packages**
- **The nightlies**
- **Suites inventory**
- **teuthology-describe-tests**
- **How integration tests are run**
- **How integration tests are defined**
- **Reading a standalone test**
- **Test descriptions**
- **How are tests built from directories?**
- **Convolution operator**
- **Concatenation operator**
- **Filtering tests by their description**
- **Reducing the number of tests**
- **Testing in the cloud**
- **Assumptions and caveat**
- **Prepare tenant**
- **Getting ceph-workbench**
- **Linking ceph-workbench with your OpenStack tenant**
- **Run the dummy suite**
- **Run a standalone test**
- **Interrupt a running suite**
- **Upload logs to archive server**
- **Provision VMs ad hoc**
- **Deploy a cluster for manual testing**
- **Testing - how to run s3-tests locally**
- **Step 1 - build Ceph**
- **Step 2 - vstart**
- **Step 3 - run s3-tests**
- **Ceph Internals**
- **Tracing Ceph With Blkin**
- **BlueStore Internals**
- **Cache pool**
- **A Detailed Documentation on How to Set up Ceph Kerberos Authentication**
- **CephFS Reclaim Interface**
- **CephFS Snapshots**
- **Cephx**
- **A Detailed Description of the Cephx Authentication Protocol**
- **Configuration Management System**
- **config-key layout**
- **CephContext**
- **Corpus structure**
- **Installing Oprofile**
- **C++17 and libstdc++ ABI**
- **CephFS delayed deletion**
- **Deploying a development cluster**
- **Deploying multiple development clusters on the same machine**
- **Development workflows**
- **Documenting Ceph**
- **Serialization (encode/decode)**
- **Erasure Coded pool**
- **File striping**
- **FreeBSD Implementation details**
- **Building Ceph Documentation**
- **IANA Numbers**
- **Contributing to Ceph: A Guide for Developers**
- **Introduction**
- **Essentials (tl;dr)**
- **Leads**
- **History**
- **Licensing**
- **Source code repositories**
- **Redmine issue tracker**
- **Mailing list**
- **IRC**
- **Submitting patches**
- **Building from source**
- **Using ccache to speed up local builds**
- **Development-mode cluster**
- **Kubernetes/Rook development cluster**
- **Backporting**
- **Guidance for use of cluster log**
- **What is merged where and when ?**
- **Development releases (i.e. x.0.z)**
- **What ?**
- **Where ?**
- **When ?**
- **Branch merges**
- **Stable release candidates (i.e. x.1.z) phase 1**
- **What ?**
- **Where ?**
- **When ?**
- **Branch merges**
- **Stable release candidates (i.e. x.1.z) phase 2**
- **What ?**
- **Where ?**
- **When ?**
- **Branch merges**
- **Stable releases (i.e. x.2.z)**
- **What ?**
- **Where ?**
- **When ?**
- **Branch merges**
- **Issue tracker**
- **Issue tracker conventions**
- **Basic workflow**
- **Update the tracker**
- **Upstream code**
- **Local environment**
- **Bugfix branch**
- **Fix bug locally**
- **GitHub pull request**
- **Automated PR validation**
- **Notes on PR make check test**
- **Integration tests AKA ceph-qa-suite**
- **Code review**
- **Amending your PR**
- **Merge**
- **Testing**
- **Unit tests - make check**
- **How unit tests are declared**
- **Unit testing of CLI tools**
- **Caveats**
- **Integration tests**
- **Teuthology consumes packages**
- **The nightlies**
- **Suites inventory**
- **teuthology-describe-tests**
- **How integration tests are run**
- **How integration tests are defined**
- **Reading a standalone test**
- **Test descriptions**
- **How are tests built from directories?**
- **Convolution operator**
- **Concatenation operator**
- **Filtering tests by their description**
- **Reducing the number of tests**
- **Testing in the cloud**
- **Assumptions and caveat**
- **Prepare tenant**
- **Getting ceph-workbench**
- **Linking ceph-workbench with your OpenStack tenant**
- **Run the dummy suite**
- **Run a standalone test**
- **Interrupt a running suite**
- **Upload logs to archive server**
- **Provision VMs ad hoc**
- **Deploy a cluster for manual testing**
- **Testing - how to run s3-tests locally**
- **Step 1 - build Ceph**
- **Step 2 - vstart**
- **Step 3 - run s3-tests**
- **Kernel client troubleshooting (FS)**
- **Hacking on Ceph in Kubernetes with Rook**
- **Library architecture**
- **Use of the cluster log**
- **Debug logs**
- **build on MacOS**
- **Messenger notes**
- **Monitor bootstrap**
- **ON-DISK FORMAT**
- **FULL OSDMAP VERSION PRUNING**
- **msgr2 protocol**
- **Network Encoding**
- **Network Protocol**
- **Object Store Architecture Overview**
- **OSD class path issues**
- **Peering**
- **Using perf**
- **Perf counters**
- **Perf histograms**
- **PG (Placement Group) notes**
- **Developer Guide (Quick)**
- **RADOS client protocol**
- **RBD Incremental Backup**
- **RBD Export & Import**
- **RBD Layering**
- **Ceph Release Process**
- **Notes on Ceph repositories**
- **SeaStore**
- **Sepia community test lab**
- **Session Authentication for the Cephx Protocol**
- **Testing notes**
- **Public OSD Version**
- **Wireshark Dissector**
- **OSD developer documentation**
- **MDS developer documentation**
- **RADOS Gateway developer documentation**
- **ceph-volume developer documentation**
- **Governance**
- **ceph-volume**
- **Ceph Releases**
- **Glossary**
</pre>
<p>First failed attempt to fix this: <a class="external" href="https://github.com/ceph/ceph/pull/27120">https://github.com/ceph/ceph/pull/27120</a></p>
Ceph - Bug #38629 (Closed): ceph_dencoder.cc: c++: internal compiler error: Killed (program cc1plus)
https://tracker.ceph.com/issues/38629
2019-03-07T16:44:09Z
Sebastian Wagner
<p>My master branch is at b954e3d8b13ff0ab9fce973eacf54e013228df91.</p>
<p>While building this revision, I got:</p>
<pre>
[ 88%] Building CXX object src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/ceph_dencoder.cc.o
cd /build/ceph-14.1.0-435-g618ab58/obj-x86_64-linux-gnu/src/tools/ceph-dencoder && /usr/bin/c++ -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -D__linux__ -isystem /build/ceph-14.1.0-435-g618ab58/obj-x86_64-linux-gnu/boost/include -I/build/ceph-14.1.0-435-g618ab58/obj-x86_64-linux-gnu/src/include -I/build/ceph-14.1.0-435-g618ab58/src -isystem /build/ceph-14.1.0-435-g618ab58/obj-x86_64-linux-gnu/include -I/usr/include/nss -I/usr/include/nspr -isystem /build/ceph-14.1.0-435-g618ab58/src/xxHash -isystem /build/ceph-14.1.0-435-g618ab58/src/rapidjson/include -isystem /build/ceph-14.1.0-435-g618ab58/src/spdk/include -isystem /build/ceph-14.1.0-435-g618ab58/obj-x86_64-linux-gnu/src/dpdk/include -isystem /build/ceph-14.1.0-435-g618ab58/src/rocksdb/include -I/build/ceph-14.1.0-435-g618ab58/src/dmclock/src -I/build/ceph-14.1.0-435-g618ab58/src/dmclock/support/src -isystem /build/ceph-14.1.0-435-g618ab58/src/rgw/../rapidjson/include -g -O2 -fdebug-prefix-map=/build/ceph-14.1.0-435-g618ab58=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wall -Wtype-limits -Wignored-qualifiers -Winit-self -Wpointer-arith -Werror=format-security -fno-strict-aliasing -fsigned-char -Wno-unknown-pragmas -rdynamic -g -O2 -fdebug-prefix-map=/build/ceph-14.1.0-435-g618ab58=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -ftemplate-depth-1024 -Wnon-virtual-dtor -Wno-unknown-pragmas -Wno-ignored-qualifiers -Wstrict-null-sentinel -Woverloaded-virtual -fno-new-ttp-matching -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fstack-protector-strong -fdiagnostics-color=auto -fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free -fPIE -DHAVE_CONFIG_H -D__CEPH__ -D_REENTRANT -D_THREAD_SAFE -D__STDC_FORMAT_MACROS -march=core2 -std=c++1z -fno-var-tracking-assignments -o CMakeFiles/ceph-dencoder.dir/ceph_dencoder.cc.o -c /build/ceph-14.1.0-435-g618ab58/src/tools/ceph-dencoder/ceph_dencoder.cc
c++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.
src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/build.make:66: recipe for target 'src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/ceph_dencoder.cc.o' failed
make[4]: *** [src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/ceph_dencoder.cc.o] Error 4
make[4]: Leaving directory '/build/ceph-14.1.0-435-g618ab58/obj-x86_64-linux-gnu'
CMakeFiles/Makefile2:8884: recipe for target 'src/tools/ceph-dencoder/CMakeFiles/ceph-dencoder.dir/all' failed
</pre>
<ol>
<li>shaman: <a class="external" href="https://shaman.ceph.com/builds/ceph/wip-swagner-testing/618ab5825c33666af7947c60d604df4f123eaa0c/default/145500/">https://shaman.ceph.com/builds/ceph/wip-swagner-testing/618ab5825c33666af7947c60d604df4f123eaa0c/default/145500/</a></li>
<li>jenkins: <a class="external" href="https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/18467/">https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/18467/</a></li>
<li>log: <a class="external" href="https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/18467//consoleFull">https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/18467//consoleFull</a></li>
</ol>
Ceph - Bug #38145 (New): /usr/bin/ld: cmdparse.cc.o: bad reloc symbol index
https://tracker.ceph.com/issues/38145
2019-02-01T10:09:43Z
Sebastian Wagner
<p>Hey,</p>
<p>In the Sepia lab, with the "Ubuntu Xenial" flavour, I'm getting a linker error:</p>
<pre>
/usr/bin/ld: common/CMakeFiles/common-common-objs.dir/cmdparse.cc.o: bad reloc symbol index (0x30317453 >= 0x2d1) for offset 0x4961534563497374 in section `.debug_info'
common/CMakeFiles/common-common-objs.dir/cmdparse.cc.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
src/CMakeFiles/ceph-common.dir/build.make:446: recipe for target 'lib/libceph-common.so.1' failed
make[4]: *** [lib/libceph-common.so.1] Error 1
make[4]: Leaving directory '/build/ceph-14.0.1-3099-g9e926e9/obj-x86_64-linux-gnu'
</pre>
<ul>
<li><a href="https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=xenial,DIST=xenial,MACHINE_SIZE=huge/17352//consoleFull" class="external">Jenkins Log</a></li>
<li><a href="https://shaman.ceph.com/builds/ceph/wip-swagner-testing/9e926e9927a4c9592403dbce959e526ba3860206/default/140455/" class="external">Shaman build</a></li>
</ul>
<p>I don't know if this is a reproducible error or not.</p>
Ceph - Bug #37858 (Can't reproduce): Python 3: UnicodeDecodeError in /usr/bin/ceph in parse_json_...
https://tracker.ceph.com/issues/37858
2019-01-10T11:24:31Z
Sebastian Wagner
<p>I'm seeing this in the log of a recent nautilus build:</p>
<pre>
2019-01-10 11:05:18.613 7fce75ddb700 1 librados: init done
2019-01-10 11:05:18.613 7fce75ddb700 1 librados: init done
Traceback (most recent call last):
File "/usr/bin/ceph", line 1212, in <module>
retval = main()
File "/usr/bin/ceph", line 1136, in main
sigdict = parse_json_funcsigs(outbuf.decode('utf-8'), 'cli')
File "/usr/lib64/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa0 in position 5827: invalid start byte
</pre>
<p>Unfortunately, I don't have <strong>any</strong> further information yet.</p>
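<p>For context, the failing call is outbuf.decode('utf-8'), and 0xa0 can never start a UTF-8 character, so a single stray byte in the mon's reply is enough to trigger this. A tiny sketch of the failure mode (the byte string is hypothetical, not the actual buffer):</p>
<pre>
# 0xa0 is a UTF-8 continuation byte and can never start a character, so any
# buffer containing it at a character boundary fails to decode -- the same
# exception /usr/bin/ceph hits on outbuf.decode('utf-8').
buf = b'{"cmd001": "pg stat"} \xa0'            # hypothetical bytes, not the real signatures
try:
    buf.decode('utf-8')
except UnicodeDecodeError as err:
    print(err)                                 # 'utf-8' codec can't decode byte 0xa0 ...
print(buf.decode('utf-8', errors='replace'))   # lossy, but does not raise
</pre>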
ceph-volume - Bug #37390 (Resolved): c-v inventory returns invalid JSON
https://tracker.ceph.com/issues/37390
2018-11-26T13:16:39Z
Sebastian Wagner
<p>Printing a Python dict with print() emits its repr(), which uses single quotes, so the output is not valid JSON.</p>
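<p>A quick illustration of the difference (not the ceph-volume code itself; the inventory entry is made up):</p>
<pre>
# Not the ceph-volume code -- just the difference between repr() and JSON output.
import json

report = {"path": "/dev/sdb", "available": True}     # hypothetical inventory entry

print(report)               # {'path': '/dev/sdb', 'available': True}   <- repr(), not JSON
print(json.dumps(report))   # {"path": "/dev/sdb", "available": true}   <- valid JSON
</pre>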
Ceph - Bug #37373 (New): Interactive mode CLI with Python 3: Traceback when pressing ^D
https://tracker.ceph.com/issues/37373
2018-11-22T15:06:43Z
Sebastian Wagner
<p>Hey,</p>
<p>Pressing ^D in the REPL of the ceph command under Python 3 produces a traceback:</p>
<pre>
$ ceph
ceph>
ceph>
ceph> Traceback (most recent call last):
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 1250, in <module>
retval = main()
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 1229, in main
raw_write(outbuf)
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 172, in raw_write
raw_stdout.write(buf)
TypeError: a bytes-like object is required, not 'str'
</pre>
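<p>On Python 3 the underlying binary stream only accepts bytes, which is exactly the TypeError above. A minimal sketch of a str/bytes-tolerant raw_write, assuming raw_stdout is sys.stdout.buffer (an illustration, not the actual /usr/bin/ceph code):</p>
<pre>
# Sketch only: assumes raw_stdout is sys.stdout.buffer. The binary stream
# accepts only bytes on Python 3, so a str argument has to be encoded first
# instead of triggering the TypeError shown above.
import sys

def raw_write(buf):
    if isinstance(buf, str):
        buf = buf.encode('utf-8')
    sys.stdout.buffer.write(buf)
    sys.stdout.buffer.flush()

raw_write(b'bytes pass through unchanged\n')
raw_write('str is encoded before writing\n')
</pre>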
<p>Does anyone actually use this mode? Related mailing-list thread:</p>
<p><a class="external" href="https://marc.info/?i=CALe9h7c5kJudfsQ6Vf_vczUG0CeoN8=dxznC=92RamBvxD9u0w%20()%20mail%20!%20gmail%20!%20com">https://marc.info/?i=CALe9h7c5kJudfsQ6Vf_vczUG0CeoN8=dxznC=92RamBvxD9u0w%20()%20mail%20!%20gmail%20!%20com</a></p>
Ceph - Bug #23854 (Can't reproduce): Linking libceph_zstd.so sometimes fails
https://tracker.ceph.com/issues/23854
2018-04-25T13:00:23Z
Sebastian Wagner
<p>I have been getting this error from time to time (for a few months now) when building from source:<br /><pre>
[ 20%] Linking CXX shared library ../../../lib/libceph_zstd.so
/usr/bin/ld: libzstd/lib/libzstd.a(error_private.c.o): relocation R_X86_64_PC32 against symbol `ERR_getErrorString' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
src/compressor/zstd/CMakeFiles/ceph_zstd.dir/build.make:95: recipe for target 'lib/libceph_zstd.so.2.0.0' failed
make[2]: *** [lib/libceph_zstd.so.2.0.0] Error 1
CMakeFiles/Makefile2:20785: recipe for target 'src/compressor/zstd/CMakeFiles/ceph_zstd.dir/all' failed
make[1]: *** [src/compressor/zstd/CMakeFiles/ceph_zstd.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
</pre></p>
<p>This mostly happens after running <code>make</code> repeatedly over a longer period of time without doing a full rebuild.</p>
<p>Removing the build files helps as a workaround:<br /><pre>
rm -r build/src/compressor/zstd
</pre></p>
<p>Environment:</p>
<ul>
<li>Ubuntu 17.10</li>
<li>GNU ld (GNU Binutils for Ubuntu) 2.29.1</li>
<li>g++ (Ubuntu 7.2.0-8ubuntu3) 7.2.0</li>
<li>cmake version 3.9.1</li>
<li>git on master for the last few months.</li>
</ul>
Ceph - Bug #20619 (Closed): MgrClient.cc: 43: FAILED assert(msgr != nullptr)
https://tracker.ceph.com/issues/20619
2017-07-13T16:17:29Z
Sebastian Wagner
<p>I got this after creating a replicated pool with very few PGs.</p>
<p>Environment:</p>
<ul>
<li>Git revision: 7e12840db34f8a0fb</li>
<li>vstart.sh -X -l</li>
</ul>
<pre>
/ceph/src/mgr/MgrClient.cc: In function 'void MgrClient::init()' thread 7f34b5310700 time 2017-07-13 17:51:39.314068
/ceph/src/mgr/MgrClient.cc: 43: FAILED assert(msgr != nullptr)
ceph version 12.1.0-761-g3ad4123 (3ad4123c83b42bfd49dc3594c96a0c7539bd6511) luminous (rc)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7f34a9f847d2]
2: (MgrClient::init()+0x5d) [0x7f34a9fe4bed]
3: (librados::RadosClient::connect()+0x90e) [0x7f34b29cde0e]
4: (rados_connect()+0x1f) [0x7f34b298199f]
5: (()+0x5f6dc) [0x7f34b2ce16dc]
6: (PyEval_EvalFrameEx()+0x68a) [0x4c468a]
7: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
8: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
9: ../env/bin/python() [0x4de6fe]
10: (PyObject_Call()+0x43) [0x4b0cb3]
11: ../env/bin/python() [0x4f492e]
12: (PyObject_Call()+0x43) [0x4b0cb3]
13: ../env/bin/python() [0x4f46a7]
14: ../env/bin/python() [0x4b670c]
15: (PyObject_Call()+0x43) [0x4b0cb3]
16: (PyEval_EvalFrameEx()+0x5faf) [0x4c9faf]
17: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
18: ../env/bin/python() [0x4de6fe]
19: (PyObject_Call()+0x43) [0x4b0cb3]
20: ../env/bin/python() [0x4f492e]
21: (PyObject_Call()+0x43) [0x4b0cb3]
22: ../env/bin/python() [0x569a48]
23: (PyEval_EvalFrameEx()+0x6345) [0x4ca345]
24: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
25: (PyEval_EvalFrameEx()+0x6099) [0x4ca099]
26: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
27: (PyEval_EvalFrameEx()+0x6099) [0x4ca099]
28: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
29: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
30: (PyEval_EvalFrameEx()+0x68d1) [0x4ca8d1]
31: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
32: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
33: (PyEval_EvalFrameEx()+0x68d1) [0x4ca8d1]
34: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
35: ../env/bin/python() [0x4de6fe]
36: (PyObject_Call()+0x43) [0x4b0cb3]
37: (PyObject_CallFunctionObjArgs()+0x16a) [0x4b97fa]
38: (_PyObject_GenericGetAttrWithDict()+0x17c) [0x4b00dc]
39: (PyEval_EvalFrameEx()+0x4c1) [0x4c44c1]
40: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
41: ../env/bin/python() [0x4de6fe]
42: (PyObject_Call()+0x43) [0x4b0cb3]
43: ../env/bin/python() [0x4f492e]
44: (PyObject_Call()+0x43) [0x4b0cb3]
45: ../env/bin/python() [0x569a48]
46: ../env/bin/python() [0x589fb1]
47: ../env/bin/python() [0x50157c]
48: (PyEval_EvalFrameEx()+0x615e) [0x4ca15e]
49: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
50: ../env/bin/python() [0x4de8b8]
51: (PyObject_Call()+0x43) [0x4b0cb3]
52: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
53: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
54: (PyEval_EvalFrameEx()+0x68d1) [0x4ca8d1]
55: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
56: ../env/bin/python() [0x4de8b8]
57: (PyObject_Call()+0x43) [0x4b0cb3]
58: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
59: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
60: ../env/bin/python() [0x4de8b8]
61: (PyObject_Call()+0x43) [0x4b0cb3]
62: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
63: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
64: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
65: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
66: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
67: ../env/bin/python() [0x4de6fe]
68: (PyObject_Call()+0x43) [0x4b0cb3]
69: ../env/bin/python() [0x4f492e]
70: (PyObject_Call()+0x43) [0x4b0cb3]
71: (PyEval_CallObjectWithKeywords()+0x30) [0x4ce5d0]
72: (_pyglib_handler_marshal()+0x39) [0x7f348a002759]
73: (()+0x4aab3) [0x7f348a24fab3]
74: (g_main_context_dispatch()+0x15a) [0x7f348a24f04a]
75: (()+0x4a3f0) [0x7f348a24f3f0]
76: (g_main_loop_run()+0xc2) [0x7f348a24f712]
77: (()+0xa534) [0x7f348a520534]
78: (PyEval_EvalFrameEx()+0x5780) [0x4c9780]
79: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
80: ../env/bin/python() [0x4de8b8]
81: (PyObject_Call()+0x43) [0x4b0cb3]
82: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
83: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
84: ../env/bin/python() [0x4de8b8]
85: (PyObject_Call()+0x43) [0x4b0cb3]
86: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
87: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
88: (PyEval_EvalFrameEx()+0x68d1) [0x4ca8d1]
89: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
90: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
91: (PyEval_EvalFrameEx()+0x6099) [0x4ca099]
92: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
93: (PyEval_EvalCode()+0x19) [0x4c2509]
94: ../env/bin/python() [0x4f1def]
95: (PyRun_FileExFlags()+0x82) [0x4ec652]
96: (PyRun_SimpleFileExFlags()+0x191) [0x4eae31]
97: (Py_Main()+0x68a) [0x49e14a]
98: (__libc_start_main()+0xf0) [0x7f34b4b58830]
99: (_start()+0x29) [0x49d9d9]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[1] 17661 abort (core dumped)
</pre>