Ceph : Issues
https://tracker.ceph.com/
2021-12-13T10:14:08Z
Ceph
Redmine
Orchestrator - Bug #53594 (Resolved): mgr/cephadm/upgrade.py: normalize_image_digest has a hard c...
https://tracker.ceph.com/issues/53594
2021-12-13T10:14:08Z
Sebastian Wagner
<p><a class="external" href="https://github.com/ceph/ceph/blob/84f88eaec44103edd377817e264d5d376df8c554/src/pybind/mgr/cephadm/upgrade.py#L34">https://github.com/ceph/ceph/blob/84f88eaec44103edd377817e264d5d376df8c554/src/pybind/mgr/cephadm/upgrade.py#L34</a></p>
<p>I mean, it's clearly wrong, as this depends on the search-registries setting of the hosts and is not a constant.</p>
<p>Can we drop this "normalizing" step altogether?</p>
<p>Still, we have to avoid creating a regression to <a class="external" href="https://github.com/ceph/ceph/pull/40577">https://github.com/ceph/ceph/pull/40577</a></p>
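<p>For illustration, a minimal sketch (hypothetical, not the actual cephadm code) of the normalization in question. The point of the report: whether and how a short image name gets qualified must follow the host's configured search registries, so a hard-coded default cannot be correct.</p>
<pre>
# Sketch only -- illustrative logic, not cephadm's implementation.
def normalize_image_digest(digest: str, default_registry: str) -> str:
    # 'ceph/ceph@sha256:...' carries no registry host in its first path
    # component, so it gets qualified with default_registry.
    first = digest.split('/', 1)[0]
    looks_like_registry = '.' in first or ':' in first or first == 'localhost'
    if '/' in digest and not looks_like_registry:
        return f'{default_registry}/{digest}'
    return digest

# The problem: calling this with a constant such as 'docker.io' silently
# disagrees with hosts whose container engine searches other registries.
print(normalize_image_digest('ceph/ceph@sha256:abcd', 'docker.io'))
print(normalize_image_digest('quay.io/ceph/ceph@sha256:abcd', 'docker.io'))
</pre>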
Orchestrator - Bug #53424 (Resolved): CEPHADM_DAEMON_PLACE_FAIL in orch:cephadm/mgr-nfs-upgrade/
https://tracker.ceph.com/issues/53424
2021-11-29T12:47:58Z
Sebastian Wagner
<pre>
mon[30568]: cephadm 2021-11-29T09:37:30.941127+0000 mgr.smithi198.ueaztz (mgr.24461) 46 : cephadm [INF] Removing key for client.nfs.foo.1.0.smithi198.mjthlo
mon[30568]: cephadm 2021-11-29T09:37:30.945462+0000 mgr.smithi198.ueaztz (mgr.24461) 47 : cephadm [INF] Removing key for client.nfs.foo.1.0.smithi198.mjthlo-rgw
mon[30568]: cephadm 2021-11-29T09:37:30.950752+0000 mgr.smithi198.ueaztz (mgr.24461) 48 : cephadm [ERR] Failed while placing nfs.foo.1.0.smithi198.mjthlo on smithi198: cephadm exited with an error code: 1, stde>
mon[30568]: /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs-foo-1-0-smithi198-mjthlo
mon[30568]: Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
mon[30568]: /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
mon[30568]: Deploy daemon nfs.foo.1.0.smithi198.mjthlo ...
mon[30568]: Verifying port 2049 ...
mon[30568]: Cannot bind to IP 0.0.0.0 port 2049: [Errno 98] Address already in use
mon[30568]: ERROR: TCP Port(s) '2049' required for nfs already in use
mon[30568]: cluster 2021-11-29T09:37:30.951587+0000 mgr.smithi198.ueaztz (mgr.24461) 49
mon[30568]: : cluster [DBG] pgmap v22: 129 pgs: 129 active+clean; 316 MiB data, 950 MiB used, 706 GiB / 715 GiB avail; 9.7 KiB/s rd, 8.7 MiB/s wr, 815 op/s
mon[30568]: cephadm 2021-11-29T09:37:30.953688+0000 mgr.smithi198.ueaztz (mgr.24461) 50 : cephadm [INF] Removing orphan daemon nfs.ganesha-foo.smithi112...
mon[30568]: cephadm 2021-11-29T09:37:30.953846+0000 mgr.smithi198.ueaztz (mgr.24461) 51 : cephadm [INF] Removing daemon nfs.ganesha-foo.smithi112 from smithi112
mon[30568]: cluster 2021-11-29
mon[30568]: T09:37:31.149845+0000 mon.smithi112 (mon.0) 877 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)
mon
</pre>
<p><strong>grep nfs.foo</strong></p>
<pre>
➜ foo grep nfs.foo.1.0.smithi198 teuthology.log
2021-11-29T09:37:30.295 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: cephadm 2021-11-29T09:37:29.528019+0000 mgr.smithi198.ueaztz (mgr.24461) 40 : cephadm [INF] Creating key for client.nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:37:30.295 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: audit 2021-11-29T09:37:29.528302+0000 mon.smithi198 (mon.1) 139 : audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2021-11-29T09:37:30.295 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: audit 2021-11-29T09:37:29.528611+0000 mon.smithi112 (mon.0) 865 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2021-11-29T09:37:30.295 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: audit 2021-11-29T09:37:29.531491+0000 mon.smithi112 (mon.0) 866 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2021-11-29T09:37:30.298 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: ) 43 : cephadm [INF] Creating key for client.nfs.foo.1.0.smithi198.mjthlo-rgw
2021-11-29T09:37:30.299 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: 143 : audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2021-11-29T09:37:30.299 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: 871 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2021-11-29T09:37:30.300 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: ) 872 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
2021-11-29T09:37:30.302 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: (mgr.24461) 44 : cephadm [INF] Deploying daemon nfs.foo.1.0.smithi198.mjthlo on smithi198
2021-11-29T09:37:30.303 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:29 smithi112 conmon[30568]: [DBG] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "config get","who": "client.nfs.foo.1.0.smithi198.mjthlo","key": "container_image"}]: dispatch
2021-11-29T09:37:30.320 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: cephadm 2021-11-29T09:37:29.528019+0000 mgr.smithi198.ueaztz (mgr.24461) 40 : cephadm [INF] Creating key for client.nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:37:30.320 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: audit 2021-11-29T09:37:29.528302+0000 mon.smithi198 (mon.1) 139 : audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2021-11-29T09:37:30.320 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: audit 2021-11-29T09:37:29.528611+0000 mon.smithi112 (mon.0) 865 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2021-11-29T09:37:30.320 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: audit 2021-11-29T09:37:29.531491+0000 mon.smithi112 (mon.0) 866 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2021-11-29T09:37:30.324 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: [INF] Creating key for client.nfs.foo.1.0.smithi198.mjthlo-rgw
2021-11-29T09:37:30.325 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2021-11-29T09:37:30.325 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: 11-29T09:37:29.654113+0000 mon.smithi112 (mon.0) 871 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2021-11-29T09:37:30.326 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: 0) 872 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
2021-11-29T09:37:30.327 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: 37:29.660464+0000 mgr.smithi198.ueaztz (mgr.24461) 44 : cephadm [INF] Deploying daemon nfs.foo.1.0.smithi198.mjthlo on smithi198
2021-11-29T09:37:30.328 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:29 smithi198 conmon[29563]: from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "config get","who": "client.nfs.foo.1.0.smithi198.mjthlo","key": "container_image"}]: dispatch
2021-11-29T09:37:31.288 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:31 smithi112 conmon[30568]: audit 2021-11-29T09:37:30.941391+0000 mon.smithi198 (mon.1) 146 : audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo"}]: dispatch
2021-11-29T09:37:31.289 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:31 smithi112 conmon[30568]: audit 2021-11-29T09:37:30.941700+0000 mon.smithi112 (mon.0) 873 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo"}]: dispatch
2021-11-29T09:37:31.289 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:31 smithi112 conmon[30568]: : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo"}]': finished
2021-11-29T09:37:31.290 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:31 smithi112 conmon[30568]: audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw"}]: dispatch
2021-11-29T09:37:31.290 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:31 smithi112 conmon[30568]: .0) 875 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw"}]: dispatch
2021-11-29T09:37:31.291 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:31 smithi112 conmon[30568]: [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw"}]': finished
2021-11-29T09:37:31.564 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:31 smithi198 conmon[29563]: audit 2021-11-29T09:37:30.941391+0000 mon.smithi198 (mon.1) 146 : audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo"}]: dispatch
2021-11-29T09:37:31.565 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:31 smithi198 conmon[29563]: 873 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo"}]: dispatch
2021-11-29T09:37:31.565 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:31 smithi198 conmon[29563]: (mon.0) 874 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo"}]': finished
2021-11-29T09:37:31.566 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:31 smithi198 conmon[29563]: from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw"}]: dispatch
2021-11-29T09:37:31.567 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:31 smithi198 conmon[29563]: .0) 875 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw"}]: dispatch
2021-11-29T09:37:31.569 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:31 smithi198 conmon[29563]: : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.1.0.smithi198.mjthlo-rgw"}]': finished
2021-11-29T09:37:32.536 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:32 smithi112 conmon[30568]: cephadm 2021-11-29T09:37:30.941127+0000 mgr.smithi198.ueaztz (mgr.24461) 46 : cephadm [INF] Removing key for client.nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:37:32.537 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:32 smithi112 conmon[30568]: cephadm 2021-11-29T09:37:30.945462+0000 mgr.smithi198.ueaztz (mgr.24461) 47 : cephadm [INF] Removing key for client.nfs.foo.1.0.smithi198.mjthlo-rgw
2021-11-29T09:37:32.537 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:32 smithi112 conmon[30568]: cephadm 2021-11-29T09:37:30.950752+0000 mgr.smithi198.ueaztz (mgr.24461) 48 : cephadm [ERR] Failed while placing nfs.foo.1.0.smithi198.mjthlo on smithi198: cephadm exited with an error code: 1, stderr: Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs-foo-1-0-smithi198-mjthlo
2021-11-29T09:37:32.537 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:32 smithi112 conmon[30568]: /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs-foo-1-0-smithi198-mjthlo
2021-11-29T09:37:32.538 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:32 smithi112 conmon[30568]: Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:37:32.538 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:32 smithi112 conmon[30568]: /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:37:32.539 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:32 smithi112 conmon[30568]: Deploy daemon nfs.foo.1.0.smithi198.mjthlo ...
2021-11-29T09:37:32.565 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:32 smithi198 conmon[29563]: 24461) 46 : cephadm [INF] Removing key for client.nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:37:32.565 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:32 smithi198 conmon[29563]: ) 47 : cephadm [INF] Removing key for client.nfs.foo.1.0.smithi198.mjthlo-rgw
2021-11-29T09:37:32.570 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:32 smithi198 conmon[29563]: .24461) 48 : cephadm [ERR] Failed while placing nfs.foo.1.0.smithi198.mjthlo on smithi198: cephadm exited with an error code: 1, stderr: Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs-foo-1-0-smithi198-mjthlo
2021-11-29T09:37:32.570 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:32 smithi198 conmon[29563]: /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs-foo-1-0-smithi198-mjthlo
2021-11-29T09:37:32.570 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:32 smithi198 conmon[29563]: Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:37:32.571 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:32 smithi198 conmon[29563]: /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:37:32.571 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:32 smithi198 conmon[29563]: Deploy daemon nfs.foo.1.0.smithi198.mjthlo ...
2021-11-29T09:40:00.792 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:40:00 smithi112 conmon[30568]: mon.smithi112 (mon.0) 933 : cluster [WRN] Failed while placing nfs.foo.1.0.smithi198.mjthlo on smithi198: cephadm exited with an error code: 1, stderr: Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs-foo-1-0-smithi198-mjthlo
2021-11-29T09:40:00.792 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:40:00 smithi112 conmon[30568]: 000295+0000 mon.smithi112 (mon.0) 934 : cluster [WRN] /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs-foo-1-0-smithi198-mjthlo
2021-11-29T09:40:00.792 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:40:00 smithi112 conmon[30568]: :40:00.000308+0000 mon.smithi112 (mon.0) 935 : cluster [WRN] Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:40:00.793 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:40:00 smithi112 conmon[30568]: mon.0) 936 : cluster [WRN] /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:40:00.793 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:40:00 smithi112 conmon[30568]: mon.smithi112 (mon.0) 937 : cluster [WRN] Deploy daemon nfs.foo.1.0.smithi198.mjthlo ...
2021-11-29T09:40:00.819 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:40:00 smithi198 conmon[29563]: mon.0) 933 : cluster [WRN] Failed while placing nfs.foo.1.0.smithi198.mjthlo on smithi198: cephadm exited with an error code: 1, stderr: Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs-foo-1-0-smithi198-mjthlo
2021-11-29T09:40:00.819 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:40:00 smithi198 conmon[29563]: 11-29T09:40:00.000295+0000 mon.smithi112 (mon.0) 934 : cluster [WRN] /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs-foo-1-0-smithi198-mjthlo
2021-11-29T09:40:00.820 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:40:00 smithi198 conmon[29563]: mon.0) 935 : cluster [WRN] Non-zero exit code 125 from /bin/podman container inspect --format {{.State.Status}} ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:40:00.820 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:40:00 smithi198 conmon[29563]: 936 : cluster [WRN] /bin/podman: stderr Error: error inspecting object: no such container ceph-e6122430-50f6-11ec-8c2d-001a4aab830c-nfs.foo.1.0.smithi198.mjthlo
2021-11-29T09:40:00.820 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:40:00 smithi198 conmon[29563]: 00.000333+0000 mon.smithi112 (mon.0) 937 : cluster [WRN] Deploy daemon nfs.foo.1.0.smithi198.mjthlo ...
</pre>
<p><strong>grep nfs.ganesha-foo.smithi112</strong></p>
<pre>
➜ foo grep nfs.ganesha-foo.smithi112 teuthology.log
2021-11-29T09:34:57.422 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:34:57 smithi112 conmon[30568]: .272588+0000 mon.smithi112 (mon.0) 684 : audit [INF] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-foo.smithi112", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=foo"]}]: dispatch
2021-11-29T09:34:57.422 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:34:57 smithi112 conmon[30568]: (mon.0) 685 : audit [INF] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-foo.smithi112", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=foo"]}]': finished
2021-11-29T09:34:57.423 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:34:57 smithi112 conmon[30568]: 686 : audit [INF] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-foo.smithi112-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2021-11-29T09:34:57.424 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:34:57 smithi112 conmon[30568]: ) 687 : audit [INF] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-foo.smithi112-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
2021-11-29T09:34:57.426 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:34:57 smithi112 conmon[30568]: 689 : audit [DBG] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-foo.smithi112", "key": "container_image"}]: dispatch
2021-11-29T09:34:57.566 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:34:57 smithi198 conmon[29563]: audit 2021-11-29T09:34:57.272588+0000 mon.smithi112 (mon.0) 684 : audit [INF] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-foo.smithi112", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=foo"]}]: dispatch
2021-11-29T09:34:57.566 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:34:57 smithi198 conmon[29563]: ) 685 : audit [INF] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-foo.smithi112", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=foo"]}]': finished
2021-11-29T09:34:57.567 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:34:57 smithi198 conmon[29563]: audit 2021-11-29T09:34:57.276767+0000 mon.smithi112 (mon.0) 686 : audit [INF] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-foo.smithi112-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2021-11-29T09:34:57.567 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:34:57 smithi198 conmon[29563]: audit 2021-11-29T09:34:57.279583+0000 mon.smithi112 (mon.0) 687 : audit [INF] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-foo.smithi112-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
2021-11-29T09:34:57.568 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:34:57 smithi198 conmon[29563]: : audit [DBG] from='mgr.14164 172.21.15.112:0/3292037400' entity='mgr.smithi112.lrlmfr' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-foo.smithi112", "key": "container_image"}]: dispatch
2021-11-29T09:34:58.779 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:34:58 smithi112 conmon[30568]: cephadm 2021-11-29T09:34:57.272382+0000 mgr.smithi112.lrlmfr (mgr.14164) 239 : cephadm [INF] Create keyring: client.nfs.ganesha-foo.smithi112
2021-11-29T09:34:58.780 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:34:58 smithi112 conmon[30568]: cephadm 2021-11-29T09:34:57.276586+0000 mgr.smithi112.lrlmfr (mgr.14164) 241 : cephadm [INF] Create keyring: client.nfs.ganesha-foo.smithi112-rgw
2021-11-29T09:34:58.780 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:34:58 smithi112 conmon[30568]: cephadm 2021-11-29T09:34:57.280963+0000 mgr.smithi112.lrlmfr (mgr.14164) 242 : cephadm [INF] Deploying daemon nfs.ganesha-foo.smithi112 on smithi112
2021-11-29T09:34:58.814 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:34:58 smithi198 conmon[29563]: cephadm 2021-11-29T09:34:57.272382+0000 mgr.smithi112.lrlmfr (mgr.14164) 239 : cephadm [INF] Create keyring: client.nfs.ganesha-foo.smithi112
2021-11-29T09:34:58.814 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:34:58 smithi198 conmon[29563]: cephadm 2021-11-29T09:34:57.276586+0000 mgr.smithi112.lrlmfr (mgr.14164) 241 : cephadm [INF] Create keyring: client.nfs.ganesha-foo.smithi112-rgw
2021-11-29T09:34:58.815 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:34:58 smithi198 conmon[29563]: cephadm 2021-11-29T09:34:57.280963+0000 mgr.smithi112.lrlmfr (mgr.14164) 242 : cephadm [INF] Deploying daemon nfs.ganesha-foo.smithi112 on smithi112
2021-11-29T09:35:31.042 INFO:teuthology.orchestra.run.smithi112.stdout:nfs.ganesha-foo.smithi112 smithi112 running (30s) 24s ago 29s 3.3 docker.io/ceph/ceph:v15 2cf504fded39 5429025aa7a1
2021-11-29T09:36:02.393 INFO:teuthology.orchestra.run.smithi112.stdout:nfs.ganesha-foo.smithi112 smithi112 running (61s) 28s ago 61s 3.3 docker.io/ceph/ceph:v15 2cf504fded39 5429025aa7a1
2021-11-29T09:36:33.472 INFO:teuthology.orchestra.run.smithi112.stdout:nfs.ganesha-foo.smithi112 smithi112 running (92s) 59s ago 92s 3.3 docker.io/ceph/ceph:v15 2cf504fded39 5429025aa7a1
2021-11-29T09:37:05.391 INFO:teuthology.orchestra.run.smithi112.stdout:nfs.ganesha-foo.smithi112 smithi112 running (2m) 91s ago 2m - - 3.3 2cf504fded39 5429025aa7a1
2021-11-29T09:37:31.292 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:31 smithi112 conmon[30568]: mon.smithi198 (mon.1) 148 : audit [DBG] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "config get","who": "client.nfs.ganesha-foo.smithi112","key": "container_image"}]: dispatch
2021-11-29T09:37:31.569 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:31 smithi198 conmon[29563]: [DBG] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "config get","who": "client.nfs.ganesha-foo.smithi112","key": "container_image"}]: dispatch
2021-11-29T09:37:32.541 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:32 smithi112 conmon[30568]: cephadm 2021-11-29T09:37:30.953688+0000 mgr.smithi198.ueaztz (mgr.24461) 50 : cephadm [INF] Removing orphan daemon nfs.ganesha-foo.smithi112...
2021-11-29T09:37:32.542 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:32 smithi112 conmon[30568]: cephadm 2021-11-29T09:37:30.953846+0000 mgr.smithi198.ueaztz (mgr.24461) 51 : cephadm [INF] Removing daemon nfs.ganesha-foo.smithi112 from smithi112
2021-11-29T09:37:32.573 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:32 smithi198 conmon[29563]: : cephadm [INF] Removing orphan daemon nfs.ganesha-foo.smithi112...
2021-11-29T09:37:32.574 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:32 smithi198 conmon[29563]: [INF] Removing daemon nfs.ganesha-foo.smithi112 from smithi112
2021-11-29T09:37:36.787 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:36 smithi112 conmon[30568]: audit 2021-11-29T09:37:36.245459+0000 mon.smithi198 (mon.1) 149 : audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112"}]: dispatch
2021-11-29T09:37:36.787 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:36 smithi112 conmon[30568]: audit 2021-11-29T09:37:36.245865+0000 mon.smithi112 (mon.0) 880 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112"}]: dispatch
2021-11-29T09:37:36.787 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:36 smithi112 conmon[30568]: audit 2021-11-29T09:37:36.250835+0000 mon.smithi112 (mon.0) 881 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112"}]': finished
2021-11-29T09:37:36.787 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:36 smithi112 conmon[30568]: audit 2021-11-29T09:37:36.252011+0000 mon.smithi198 (mon.1) 150 : audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112-rgw"}]: dispatch
2021-11-29T09:37:36.789 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:36 smithi112 conmon[30568]: audit 2021-11-29T09:37:36.252255+0000 mon.smithi112 (mon.0) 882 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112-rgw"}]: dispatch
2021-11-29T09:37:36.790 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:36 smithi112 conmon[30568]: audit 2021-11-29T09:37:36.255814+0000 mon.smithi112 (mon.0) 883 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112-rgw"}]': finished
2021-11-29T09:37:36.812 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:36 smithi198 conmon[29563]: 37:36.245459+0000 mon.smithi198 (mon.1) 149 : audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112"}]: dispatch
2021-11-29T09:37:36.812 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:36 smithi198 conmon[29563]: audit 2021-11-29T09:37:36.245865+0000 mon.smithi112 (mon.0) 880 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112"}]: dispatch
2021-11-29T09:37:36.812 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:36 smithi198 conmon[29563]: audit 2021-11-29T09:37:36.250835+0000 mon.smithi112 (mon.0) 881 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112"}]': finished
2021-11-29T09:37:36.812 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:36 smithi198 conmon[29563]: audit 2021-11-29T09:37:36.252011+0000 mon.smithi198 (mon.1) 150 : audit [INF] from='mgr.24461 172.21.15.198:0/1063122292' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112-rgw"}]: dispatch
2021-11-29T09:37:36.813 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:36 smithi198 conmon[29563]: audit 2021-11-29T09:37:36.252255+0000 mon.smithi112 (mon.0) 882 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd=[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112-rgw"}]: dispatch
2021-11-29T09:37:36.813 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:36 smithi198 conmon[29563]: audit 2021-11-29T09:37:36.255814+0000 mon.smithi112 (mon.0) 883 : audit [INF] from='mgr.24461 ' entity='mgr.smithi198.ueaztz' cmd='[{"prefix": "auth rm", "entity": "client.nfs.ganesha-foo.smithi112-rgw"}]': finished
2021-11-29T09:37:37.667 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:37 smithi198 conmon[29563]: cephadm 2021-11-29T09:37:36.245092+0000 mgr.smithi198.ueaztz (mgr.24461) 55 : cephadm [INF] Removing key for client.nfs.ganesha-foo.smithi112
2021-11-29T09:37:37.668 INFO:journalctl@ceph.mon.smithi198.smithi198.stdout:Nov 29 09:37:37 smithi198 conmon[29563]: cephadm 2021-11-29T09:37:36.251653+0000 mgr.smithi198.ueaztz (mgr.24461) 56 : cephadm [INF] Removing key for client.nfs.ganesha-foo.smithi112-rgw
2021-11-29T09:37:38.039 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:37 smithi112 conmon[30568]: [INF] Removing key for client.nfs.ganesha-foo.smithi112
2021-11-29T09:37:38.039 INFO:journalctl@ceph.mon.smithi112.smithi112.stdout:Nov 29 09:37:37 smithi112 conmon[30568]: .24461) 56 : cephadm [INF] Removing key for client.nfs.ganesha-foo.smithi112-rgw
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2021-11-29_08:27:28-orch:cephadm-wip-swagner-testing-2021-11-26-1656-distro-default-smithi/6533368/">https://pulpito.ceph.com/swagner-2021-11-29_08:27:28-orch:cephadm-wip-swagner-testing-2021-11-26-1656-distro-default-smithi/6533368/</a></p>
Orchestrator - Bug #52898 (Resolved): cephadm: Unable to create max luns per iSCSI target: thread...
https://tracker.ceph.com/issues/52898
2021-10-12T10:17:46Z
Sebastian Wagner
<p>Description of problem:</p>
<p>Unable to create the maximum number of LUNs per target. The container crashes after some LUNs have been created.</p>
<p>Version-Release number of selected component (if applicable):<br />ceph version 16.2.0 pacific (stable)</p>
<p>How reproducible:<br />100%</p>
<p>Steps to Reproduce:</p>
<p>1. Create gateways using the file below:</p>
<pre>
[ceph: root@host104 ~]# cat iscsi.yaml 
service_type: iscsi
service_id: iscsi
placement:
  hosts:
  - host108
  - host113
spec:
  pool: iscsi_pool
  trusted_ip_list: "ipv4,ipv6"
  api_user: admin
  api_password: admin
[ceph: root@host104 ~]#
</pre>
<p>2. Start the iSCSI gateways using "gwcli".</p>
<p>3. Create the target and gateways:</p>
<pre>
/iscsi-targets> ls
o- iscsi-targets ................................................................................. [DiscoveryAuth: None, Targets: 1]
  o- iqn.2003-01.com.example.iscsi-gw:ceph-igw ............................................................ [Auth: None, Gateways: 2]
    o- disks ............................................................................................................ [Disks: 0]
    o- gateways .............................................................................................. [Up: 2/2, Portals: 2]
    | o- host108 .............................................................................................. [1.0.0.108 (UP)]
    | o- host113 .............................................................................................. [1.0.0.113 (UP)]
    o- host-groups .................................................................................................... [Groups : 0]
    o- hosts ......................................................................................... [Auth: ACL_ENABLED, Hosts: 0]
/iscsi-targets>
</pre>
<p>4. Create the client IQN.</p>
<p>5. Create images and add disks to the client.</p>
<p>Actual results:</p>
<pre>
/iscsi-target...at:rh7-client> disk add iscsi_pool/image127
ok
/iscsi-target...at:rh7-client> disk add iscsi_pool/image128
Exception in thread Thread-11:
Traceback (most recent call last):
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 1182, in run
    self.function(*self.args, **self.kwargs)
  File "/usr/lib/python3.6/site-packages/gwcli/gateway.py", line 646, in check_gateways
    check_thread.start()
  File "/usr/lib64/python3.6/threading.py", line 846, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
[root@host108 ubuntu]# podman exec -it ff1f0ffc5f35 sh
Error: no container with name or ID ff1f0ffc5f35 found: no such container
[root@host108 ubuntu]#
</pre>
<p>Expected results:<br />It should be possible to create the maximum number of LUNs per target.</p>
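<p>The <code>RuntimeError: can't start new thread</code> points at a thread/PID ceiling inside the gateway container (for example, a container pids limit) rather than at a logic error in gwcli; raising or removing that limit would be the likely remedy. A minimal, self-contained sketch (plain Python, not gwcli code) that reproduces the failure mode under such a limit:</p>
<pre>
# Sketch: start idle daemon threads until thread creation fails, which is
# exactly how a pids/thread limit surfaces as "can't start new thread".
import threading

def probe_thread_limit(cap: int = 100000) -> int:
    stop = threading.Event()
    started = 0
    try:
        for _ in range(cap):
            threading.Thread(target=stop.wait, daemon=True).start()
            started += 1
    except RuntimeError:
        pass  # "can't start new thread": the limit has been reached
    finally:
        stop.set()  # let all probe threads exit
    return started

print(f"created {probe_thread_limit()} threads before hitting a limit")
</pre>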
Orchestrator - Bug #50685 (Resolved): wrong exception type: Exception("No filters applied")
https://tracker.ceph.com/issues/50685
2021-05-07T10:09:01Z
Sebastian Wagner
<pre>
Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/serve.py", line 466, in _apply_all_services
if self._apply_service(spec):
File "/usr/share/ceph/mgr/cephadm/serve.py", line 523, in _apply_service
self.mgr.osd_service.create_from_spec(cast(DriveGroupSpec, spec))
File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 68, in create_from_spec
ret = create_from_spec_one(self.prepare_drivegroup(drive_group))
File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 171, in prepare_drivegroup
existing_daemons=len(dd_for_spec_and_host))
File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/selector.py", line 26, in __init__
self._data = self.assign_devices(self.spec.data_devices)
File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/selector.py", line 130, in assign_devices
if not all(m.compare(disk) for m in FilterGenerator(device_filter)):
File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/selector.py", line 130, in <genexpr>
if not all(m.compare(disk) for m in FilterGenerator(device_filter)):
File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/matchers.py", line 407, in compare
raise Exception("No filters applied")
May 06 07:20:31 host conmon[2216]: debug 2021-05-06T07:20:31.114+0000 7f69546c4700 -1 log_channel(cephadm) log [ERR] : Failed to apply osd.dashboard-spec DriveGroupSpec(name=dashboard-1620208717516->placement=PlacementSpec(host_pattern='host'), service_id='dashboard-1620208717516', service_type='osd', data_devices=DeviceSelection(size='931.5GB', all=False), osd_id_claims={}, unmanaged=False, filter_logic='AND', preview_only=False): No filters applied
</pre>
<p>This should not end up in the logs like this.</p>
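<p>A minimal sketch (hypothetical exception name, not the actual code) of the fix this asks for: raise a dedicated validation error so the serve loop can report it as a per-service spec problem instead of letting a bare <code>Exception</code> traceback land in the log:</p>
<pre>
# Sketch only: illustrative names, not the real matchers.py.
class DriveGroupFilterError(ValueError):
    """A drive-group device filter could not be applied to any disk."""

def compare(filter_name: str, disk_attrs: dict) -> bool:
    if filter_name not in disk_attrs:
        raise DriveGroupFilterError(
            f"filter {filter_name!r} does not apply to this disk")
    return True

try:
    compare('size', {})                       # no filters applied
except DriveGroupFilterError as e:
    print(f"Failed to apply osd spec: {e}")   # reported, not a traceback
</pre>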
Orchestrator - Bug #49737 (Resolved): cephadm bootstrap --skip-ssh skips too much
https://tracker.ceph.com/issues/49737
2021-03-11T13:08:39Z
Sebastian Wagner
<p>--skip-ssh should only disable the SSH configs. It should still enable the cephadm mgr module.</p>
<p>This is a usability bug.</p>
<p><a class="external" href="https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/5K3CTLZUBHQDVZGP5BDMI3B6ZBVDTETO/">https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/5K3CTLZUBHQDVZGP5BDMI3B6ZBVDTETO/</a></p>
Orchestrator - Bug #48535 (Resolved): QA smoke test: cephadm is removing mgr.y
https://tracker.ceph.com/issues/48535
2020-12-10T11:32:13Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/yuriw-2020-12-08_16:18:10-rados-octopus-distro-basic-smithi/5693969">https://pulpito.ceph.com/yuriw-2020-12-08_16:18:10-rados-octopus-distro-basic-smithi/5693969</a></p>
<p>cephadm is properly deploying mgr.y:</p>
<pre>
Dec 08 18:58:47 smithi099 bash[10707]: audit 2020-12-08T18:58:47.142557+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14138 172.21.15.131:0/1846859103' entity='mgr.y' cmd='[{"prefix":"config-ke
y set","key":"mgr/cephadm/host.smithi131","val":"{\"daemons\": {\"mon.a\": {\"daemon_type\": \"mon\", \"daemon_id\": \"a\", \"hostname\": \"smithi131\", \"container_id\": \"5fdb0de44749\", \"container_image_id\": \"dae82b93a77958a9a6819f28e335d1a604c7e3cdecf8bba
5e0c820df2b53a64a\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe\", \"version\": \"15.2.7-661-gb1c1268b\", \"status\": 1, \"status_desc\": \"running\", \"is_active\": false, \"last_refresh\": \"2020-12-08T18:58:
47.137207\", \"created\": \"2020-12-08T18:56:38.730085\", \"started\": \"2020-12-08T18:56:45.378062\"}, \"mgr.y\": {\"daemon_type\": \"mgr\", \"daemon_id\": \"y\", \"hostname\": \"smithi131\", \"container_id\": \"620f6dfea3b3\", \"container_image_id\": \"dae82b9
3a77958a9a6819f28e335d1a604c7e3cdecf8bba5e0c820df2b53a64a\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe\", \"version\": \"15.2.7-661-gb1c1268b\", \"status\": 1, \"status_desc\": \"running\", \"is_active\": fals
e, \"last_refresh\": \"2020-12-08T18:58:47.137300\", \"created\": \"2020-12-08T18:56:47.849275\", \"started\": \"2020-12-08T18:56:47.941925\"}, \"mon.c\": {\"daemon_type\": \"mon\", \"daemon_id\": \"c\", \"hostname\": \"smithi131\", \"container_id\": \"8566d05a0
1c5\", \"container_image_id\": \"dae82b93a77958a9a6819f28e335d1a604c7e3cdecf8bba5e0c820df2b53a64a\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe\", \"version\": \"15.2.7-661-gb1c1268b\", \"status\": 1, \"status_
desc\": \"running\", \"is_active\": false, \"last_refresh\": \"2020-12-08T18:58:47.137347\", \"created\": \"2020-12-08T18:58:16.069285\", \"started\": \"2020-12-08T18:58:16.157886\"}}, \"devices\": [...], \"osdspec_previews\": [], \"daemon_config_deps\": {\"mon.a\": {\"deps\": [], \"last_config\": \"2020-12-08T18:58:44.148173\"}, \"mgr.y\": {\"deps\": [], \"last_con
fig\": \"2020-12-08T18:58:36.178335\"}, \"mon.c\": {\"deps\": [], \"last_config\": \"2020-12-08T18:58:38.524511\"}}, \"last_daemon_update\": \"2020-12-08T18:58:47.137450\", \"last_device_update\": \"2020-12-08T18:57:36.298984\", \"networks\": {\"172.17.0.0/16\":
[\"172.17.0.1\"], \"172.21.0.0/20\": [\"172.21.15.131\"], \"172.21.15.254\": [\"172.21.15.131\"], \"fe80::/64\": [\"fe80::ec4:7aff:fe88:72f9\"]}, \"last_host_check\": \"2020-12-08T18:57:14.445904\"}"}]': finished
</pre>
<p>But at some point, <code>cephadm.py</code> decides to remove it again:</p>
<pre>
Dec 08 18:58:41.703 INFO:teuthology.orchestra.run.smithi099:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 07c2237e-3987-11eb-9811-001a4aab830c -- ceph orch apply mgr '2;smithi099=x'
Dec 08 18:58:47 smithi131 bash[10704]: cephadm 2020-12-08T18:58:47.157062+0000 mgr.y (mgr.14138) 62 : cephadm [INF] Deploying daemon mgr.x on smithi099
Dec 08 18:58:47 smithi099 bash[10707]: cephadm 2020-12-08T18:58:47.157062+0000 mgr.y (mgr.14138) 62 : cephadm [INF] Deploying daemon mgr.x on smithi099
Dec 08 18:58:49 smithi099 bash[10707]: cephadm 2020-12-08T18:58:49.238457+0000 mgr.y (mgr.14138) 64 : cephadm [INF] It is presumed safe to stop ['mgr.y']
Dec 08 18:58:49 smithi099 bash[10707]: cephadm 2020-12-08T18:58:49.238723+0000 mgr.y (mgr.14138) 65 : cephadm [INF] Removing daemon mgr.y from smithi131
</pre>
<p>Thus, mgr.y is then missing:</p>
<pre>
2020-12-08T19:05:16.283 INFO:teuthology.orchestra.run.smithi131.stdout:NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:alertmanager 1/1 89s ago 2m smithi131=a;count:1 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:grafana 1/1 89s ago 2m smithi099=a;count:1 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:iscsi.iscsi 1/1 89s ago 2m smithi099=iscsi.a;count:1 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:mgr 1/2 89s ago 6m smithi099=x;count:2 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:mon 3/0 89s ago - <unmanaged> quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:node-exporter 2/2 89s ago 2m smithi131=a;smithi099=b;count:2 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:osd.None 8/0 89s ago - <unmanaged> quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:prometheus 1/1 89s ago 2m smithi099=a;count:1 docker.io/prom/prometheus:v2.18.1 de242295e225
2020-12-08T19:05:16.286 INFO:teuthology.orchestra.run.smithi131.stdout:rgw.realm.zone 1/1 89s ago 2m smithi131=realm.zone.a;count:1 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:12.375 INFO:teuthology.orchestra.run.smithi131.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2020-12-08T19:05:12.375 INFO:teuthology.orchestra.run.smithi131.stdout:alertmanager.a smithi131 running (89s) 85s ago 2m 0.20.0 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f 5cfd91f2dec4
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:grafana.a smithi099 running (104s) 85s ago 104s 6.6.2 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a d22d8f54a540
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:iscsi.iscsi.a smithi099 running (2m) 85s ago 2m 3.4 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 ee4b2dbcfe42
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:mgr.x smithi099 running (6m) 85s ago 6m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 5b50fc3e28b6
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:mon.a smithi131 running (8m) 85s ago 8m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 5fdb0de44749
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:mon.b smithi099 running (6m) 85s ago 6m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 6d898409329d
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:mon.c smithi131 running (6m) 85s ago 6m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 8566d05a01c5
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:node-exporter.a smithi131 running (2m) 85s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf 6c48edd25d55
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:node-exporter.b smithi099 running (2m) 85s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf e4321578ec02
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:osd.0 smithi131 running (5m) 85s ago 5m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 013ba62b1a67
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:osd.1 smithi131 running (5m) 85s ago 5m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 b3249b0b5044
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.2 smithi131 running (4m) 85s ago 4m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 9b79623d7e60
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.3 smithi131 running (4m) 85s ago 4m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 4626c1117138
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.4 smithi099 running (4m) 85s ago 4m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 4a1c1bbd5040
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.5 smithi099 running (3m) 85s ago 3m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 c3e1893f1cc6
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.6 smithi099 running (3m) 85s ago 3m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 292f1b5ea013
2020-12-08T19:05:12.379 INFO:teuthology.orchestra.run.smithi131.stdout:osd.7 smithi099 running (3m) 85s ago 3m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 5a88aabbe925
2020-12-08T19:05:12.379 INFO:teuthology.orchestra.run.smithi131.stdout:prometheus.a smithi099 running (95s) 85s ago 2m 2.18.1 docker.io/prom/prometheus:v2.18.1 de242295e225 45d6bf1407d5
2020-12-08T19:05:12.379 INFO:teuthology.orchestra.run.smithi131.stdout:rgw.realm.zone.a smithi131 running (2m) 85s ago 2m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 ae6e6aa0af5c
</pre>
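<p>For reference, the placement argument in the <code>orch apply mgr '2;smithi099=x'</code> call above asks for two mgrs, with one slot pinned to smithi099 under the daemon name <code>x</code>. Assuming ceph's python-common package is available (it ships with ceph), the orchestrator's own parser shows this:</p>
<pre>
from ceph.deployment.service_spec import PlacementSpec

spec = PlacementSpec.from_string('2;smithi099=x')
print(spec.count)   # 2: two mgr daemons requested
print(spec.hosts)   # one explicit slot: host smithi099, daemon name 'x'
# With count=2 and only one pinned host, the scheduler should keep a
# second mgr elsewhere -- instead it removed mgr.y from smithi131.
</pre>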
Dashboard - Bug #46623 (Resolved): mgr/dashboard: ui/navigation.e2e-spec.ts: AssertionError: Time...
https://tracker.ceph.com/issues/46623
2020-07-20T08:20:41Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pull-requests/4980/consoleFull#842358957c212b007-e891-4176-9ee7-2f60eca393b7">https://jenkins.ceph.com/job/ceph-dashboard-pull-requests/4980/consoleFull#842358957c212b007-e891-4176-9ee7-2f60eca393b7</a></p>
<pre>
────────────────────────────────────────────────────────────────────────────────────────────────────
Running: ui/notification.e2e-spec.ts (21 of 23)
Estimated: 56 seconds
Notification page
1) "before all" hook for "should open notification sidebar"
2) "after all" hook for "should open notification sidebar"
0 passing (32s)
2 failing
1) Notification page
"before all" hook for "should open notification sidebar":
AssertionError: Timed out retrying: Expected not to find content: 'Creating...' but continuously found it.
Because this error occurred during a `before all` hook we are skipping the remaining tests in the current suite: `Notification page`
at PoolPageHelper.PageHelper.navigateEdit (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:57:36)
at PoolPageHelper.edit_pool_pg (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:286:14)
at Context.eval (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:326:15)
2) Notification page
"after all" hook for "should open notification sidebar":
CypressError: `cy.click()` failed because it requires a DOM element.
The subject received was:
> `undefined`
The previous command that ran was:
> `cy.get()`
Because this error occurred during a `after all` hook we are skipping the remaining tests in the current suite: `Notification page`
at ensureElement (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:159721:24)
at validateType (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:159545:16)
at Object.ensureSubjectByType (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:159587:11)
at pushSubjectAndValidate (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:167293:15)
at Context.<anonymous> (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:167621:18)
From Your Spec Code:
at PoolPageHelper.PageHelper.delete (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:215:50)
at Context.eval (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:331:21)
(Results)
┌────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Tests: 4 │
│ Passing: 0 │
│ Failing: 1 │
│ Pending: 0 │
│ Skipped: 3 │
│ Screenshots: 2 │
│ Video: false │
│ Duration: 31 seconds │
│ Estimated: 56 seconds │
│ Spec Ran: ui/notification.e2e-spec.ts │
└────────────────────────────────────────────────────────────────────────────────────────────────┘
(Screenshots)
- /home/jenkins-build/build/workspace/ceph-dashboard-pull-requests/src/pybind/mgr/ (1280x720)
dashboard/frontend/cypress/screenshots/ui/notification.e2e-spec.ts/Notification
page -- should open notification sidebar -- before all hook (failed).png
- /home/jenkins-build/build/workspace/ceph-dashboard-pull-requests/src/pybind/mgr/ (1280x720)
dashboard/frontend/cypress/screenshots/ui/notification.e2e-spec.ts/Notification
page -- should open notification sidebar -- after all hook (failed).png
(Uploading Results)
</pre>
Orchestrator - Documentation #46335 (Resolved): Document "Using cephadm to set up rgw-nfs"
https://tracker.ceph.com/issues/46335
2020-07-03T08:30:46Z
Sebastian Wagner
<ul>
<li>cephadm doesn't care about exports. Instead, it simply sets up the daemons.</li>
<li>cephadm only creates an empty 'conf-{service-name}' RADOS object</li>
</ul>
<p>The question now is: how do we set up an RGW export?</p>
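<p>Assuming the rados Python bindings are installed, the empty conf object cephadm creates can be inspected directly. The pool, namespace, and object name below are hypothetical examples for a service named <code>foo</code>, following the 'conf-{service-name}' convention above:</p>
<pre>
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('nfs-ganesha')  # example pool
ioctx.set_namespace('foo')                 # example namespace
print(repr(ioctx.read('conf-nfs.foo')))    # empty until exports are added
ioctx.close()
cluster.shutdown()
</pre>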
Orchestrator - Documentation #44905 (Resolved): cephadm troubleshooting SSH errors
https://tracker.ceph.com/issues/44905
2020-04-02T10:07:08Z
Sebastian Wagner
<pre>
<wowas> I'm getting:
<wowas> execnet.gateway_bootstrap.HostNotFound: -F /tmp/cephadm-conf-kbqvkrkw root@10.10.1.2
<wowas> raise OrchestratorError('Failed to connect to %s (%s). Check that the host is reachable and accepts connections using the cephadm SSH key' % (host, addr)) from e
<wowas> orchestrator._interface.OrchestratorError: Failed to connect to 10.10.1.2 (10.10.1.2). Check that the host is reachable and accepts connections using the cephadm SSH key
</pre>
<p>Things users can do:</p>
<p>1. Ensure cephadm has an SSH identity key, and save it locally:<br /><pre>
[root@mon1 ~]# cephadm shell -- ceph config-key get mgr/cephadm/ssh_identity_key > key
INFO:cephadm:Inferring fsid f8edc08a-7f17-11ea-8707-000c2915dd98
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
obtained 'mgr/cephadm/ssh_identity_key'
[root@mon1 ~]# chmod 0600 key
</pre></p>
<p>If this fails, cephadm doesn't have a key. Fix this by:<br /><pre>
[root@mon1 ~]# cephadm shell -- ceph cephadm generate-ssh-key
</pre><br />or <br /><pre>
[root@mon1 ~]# cat key | cephadm shell -- ceph cephadm set-ssh-key -i -
</pre></p>
<p>2. Ensure the SSH config is correct:<br /><pre>
[root@mon1 ~]# cephadm shell -- ceph cephadm get-ssh-config > config
</pre></p>
<p>3. Verify we can connect to the host:</p>
<pre>
[root@mon1 ~]# ssh -F config -i key root@mon1
ssh: connect to host mon1 port 22: Connection timed out
</pre>
<p>4. There is a limitation right now: the SSH user is hard-coded to root.</p>
mgr - Feature #44856 (In Progress): telemetry: report orch backend
https://tracker.ceph.com/issues/44856
2020-03-31T14:43:25Z
Sebastian Wagner
ceph-volume - Bug #44096 (Resolved): lvm prepare doesn't create vg and thus does not pass vg name...
https://tracker.ceph.com/issues/44096
2020-02-12T12:00:58Z
Sebastian Wagner
<p>Extracted from: <a class="external" href="https://tracker.ceph.com/issues/44028">https://tracker.ceph.com/issues/44028</a></p>
<pre>
$ ceph-volume lvm prepare --bluestore --data /dev/sdc --no-systemd
INFO:cephadm:/usr/bin/docker:stderr unable to read label for /dev/sdc: (2) No such file or directory
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6
INFO:cephadm:/usr/bin/docker:stderr stderr: Volume group name has invalid characters
INFO:cephadm:/usr/bin/docker:stderr Run `lvcreate --help' for more information.
INFO:cephadm:/usr/bin/docker:stderr --> Was unable to complete a new OSD, will rollback changes
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
INFO:cephadm:/usr/bin/docker:stderr stderr: purged osd.0
INFO:cephadm:/usr/bin/docker:stderr --> RuntimeError: command returned non-zero exit status: 3
</pre>
<p>Looks like "<code>100%FREE</code>" is often used in the ceph-ansible context: <a class="external" href="https://github.com/search?p=1&q=ceph+100%25FREE&type=Code">https://github.com/search?p=1&q=ceph+100%25FREE&type=Code</a></p>
Orchestrator - Feature #39095 (Resolved): mgr/deepsea: return ganesha and iscsi endpoint URLs
https://tracker.ceph.com/issues/39095
2019-04-03T13:55:38Z
Sebastian Wagner
<p>This updates describe_service() to include nfs and iscsi services<br />(deepsea internally refers to these as "ganesha" and "igw" roles).<br />Additionally, if deepsea sets any of container_id, service, version,<br />rados_config_location, service_url, status or status_desc, these will<br />now come through too.</p>
<p>This relies on SUSE/DeepSea#1606 for the new<br />functionality, but if run against an older version of deepsea, it will<br />continue to operate as it did before; it just won't include<br />rados_config_location or service_url data.</p>
<p>Signed-off-by: Tim Serong <a class="email" href="mailto:tserong@suse.com">tserong@suse.com</a></p>
Orchestrator - Fix #39082 (Resolved): mgr/deepsea: use ceph_volume output in get_inventory()
https://tracker.ceph.com/issues/39082
2019-04-02T14:14:19Z
Sebastian Wagner
<p>DeepSea is being updated to use ceph_volume internally (see SUSE/DeepSea#1517 and jschmid1/DeepSea#6). Once this is done, the mgr_orch.get_inventory runner will just be returning the raw ceph_volume output, so this PR updates the DeepSea mgr module to match. There are also a couple of small cleanup commits. We probably don't want to merge this until the DS PR is in, though.</p>
mgr - Bug #38626 (Resolved): pg_autoscaler is not Python 3 compatible
https://tracker.ceph.com/issues/38626
2019-03-07T14:04:50Z
Sebastian Wagner
<p><code>src/scripts/run_mypy.sh</code> revealed some type errors:</p>
<p>from <a class="external" href="https://gist.github.com/sebastian-philipp/25f70aae3b0d21b1a781c110a7ef8be4">https://gist.github.com/sebastian-philipp/25f70aae3b0d21b1a781c110a7ef8be4</a></p>
<pre>
pybind/mgr/pg_autoscaler/module.py: note: In member "get_subtree_resource_status" of class "PgAutoscaler":
pybind/mgr/pg_autoscaler/module.py:192: error: "Dict[Any, Any]" has no attribute "itervalues"
pybind/mgr/pg_autoscaler/module.py: note: In member "_maybe_adjust" of class "PgAutoscaler":
pybind/mgr/pg_autoscaler/module.py:359: error: Name 'cr_name' is not defined
pybind/mgr/pg_autoscaler/module.py:410: error: "Dict[Any, float]" has no attribute "iteritems"
pybind/mgr/pg_autoscaler/module.py:437: error: "Dict[Any, int]" has no attribute "iteritems"
</pre>
<p>Those errors look like Python 2-only idioms that are incompatible with Python 3.</p>
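<p>For reference, the Python 3 replacements are mechanical: the Python 2-only iterator methods are gone, and the plain view methods take their place.</p>
<pre>
d = {'pool_a': 1.0, 'pool_b': 2.0}

# Python 2 only:         Python 3:
# d.itervalues()   ->    d.values()
# d.iteritems()    ->    d.items()
for name, ratio in d.items():   # a lazy dict view on Python 3
    print(name, ratio)
</pre>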
Dashboard - Bug #38590 (Resolved): mimic: dashboard: failed to compile the dashboard: Cannot find...
https://tracker.ceph.com/issues/38590
2019-03-05T18:51:58Z
Sebastian Wagner
<pre>
Date: 2019-03-05T05:49:15.789Z
Hash: 894ed43e42aed84f2e6a
Time: 21545ms
chunk {scripts} scripts.fc88ef4a23399c760d0b.bundle.js (scripts) 210 kB [initial] [rendered]
chunk {0} styles.89887a238a2462b3f866.bundle.css (styles) 211 kB [initial] [rendered]
chunk {1} polyfills.997d8cc03812de50ae67.bundle.js (polyfills) 84 bytes [initial] [rendered]
chunk {2} main.ee32620ecd1edff94184.bundle.js (main) 84 bytes [initial] [rendered]
chunk {3} inline.318b50c57b4eba3d437b.bundle.js (inline) 796 bytes [entry] [rendered]

WARNING in Invalid animation value at 11938:14. Ignoring.

WARNING in Invalid animation value at 11937:22. Ignoring.

ERROR in node_modules/@types/lodash/common/object.d.ts(1689,12): error TS2304: Cannot find name 'Exclude'.
node_modules/@types/lodash/common/object.d.ts(1766,12): error TS2304: Cannot find name 'Exclude'.
node_modules/@types/lodash/common/object.d.ts(1842,34): error TS2304: Cannot find name 'Exclude'.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! ceph-dashboard@0.0.0 build: `ng build "--prod"`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the ceph-dashboard@0.0.0 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/jenkins-build/.npm/_logs/2019-03-05T05_49_15_864Z-debug.log
src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/build.make:1435: recipe for target '../src/pybind/mgr/dashboard/frontend/dist' failed
make[3]: *** [../src/pybind/mgr/dashboard/frontend/dist] Error 1
CMakeFiles/Makefile2:4878: recipe for target 'src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/all' failed
make[2]: *** [src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/15029/consoleFull#-2110717508ed6e825c-5772-40cc-ba3e-4f4430b3e036">https://jenkins.ceph.com/job/ceph-pull-requests/15029/consoleFull#-2110717508ed6e825c-5772-40cc-ba3e-4f4430b3e036</a></p>