Ceph : Issues - https://tracker.ceph.com/ - 2020-10-15T05:10:09Z
Dashboard - Feature #47865 (New): mgr/dashboard: check client user capabilities for NFS exports - https://tracker.ceph.com/issues/47865 - 2020-10-15T05:10:09Z - Kiefer Chang
<p>When the end-user creates an export via the Dashboard, we let them select a Ceph client user from a list.<br />It would be great if the Dashboard could check whether the selected client user has sufficient capabilities on the exported filesystem.</p>
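<p>A minimal sketch, with hypothetical helper names and a simplified caps format (this is not the Dashboard's actual validation code), of what such a check could look like for a CephFS export:</p>
<pre>
# Hypothetical sketch -- not the Dashboard's real validation logic.
# Given the caps of a Ceph client user (as shown by `ceph auth get`),
# check whether the MDS caps grant read/write on the exported path.
from typing import Dict

def mds_caps_cover_path(caps: Dict[str, str], export_path: str) -> bool:
    for grant in caps.get('mds', '').split(','):
        parts = grant.strip().split()
        if not parts or parts[0] != 'allow':
            continue
        perms = parts[1] if len(parts) > 1 else ''
        path = ''
        for token in parts[2:]:
            if token.startswith('path='):
                path = token[len('path='):]
        if '*' in perms or 'rw' in perms:
            # No path restriction means the whole filesystem is covered.
            if not path or export_path == path or export_path.startswith(path.rstrip('/') + '/'):
                return True
    return False

# A user allowed rw only under /exports does not cover /other:
caps = {'mds': 'allow rw path=/exports', 'mon': 'allow r', 'osd': 'allow rw'}
print(mds_caps_cover_path(caps, '/exports/share1'))  # True
print(mds_caps_cover_path(caps, '/other'))           # False
</pre>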
<p><img src="https://tracker.ceph.com/attachments/download/5187/auth.png" alt="" /></p> Dashboard - Cleanup #47595 (New): mgr/dashboard: move Device health pane to upper levelhttps://tracker.ceph.com/issues/475952020-09-23T03:13:09ZKiefer Chang
<p>A suggestion was proposed in this <a href="https://github.com/ceph/ceph/pull/37275#issuecomment-696917846" class="external">PR</a> to move the Device health pane (currently it contains the SMART data) to the upper level,<br />so that users can see the information without navigating so deep.</p>
<p><img src="https://tracker.ceph.com/attachments/download/5153/93925033-cfc29d80-fd15-11ea-9b50-deffd38d5aa8.png" alt="" /></p> Dashboard - Bug #47510 (New): mgr/dashboard: container ID truncates in daemons table when using R...https://tracker.ceph.com/issues/475102020-09-17T06:18:11ZKiefer Chang
<p>We truncate the ID in the frontend to shorten the hash, but this rule doesn't apply to Rook containers.</p>
<p>Rook:<br /><img src="https://tracker.ceph.com/attachments/download/5142/rook.png" alt="" /></p>
<p>Cephadm:<br /><img src="https://tracker.ceph.com/attachments/download/5143/cephadm.png" alt="" /></p>
<p>cephadm now returns shortened IDs, which means we can remove the hack from the frontend.</p>

Orchestrator - Bug #46685 (Won't Fix): mgr/rook: OSD devices are marked as available - https://tracker.ceph.com/issues/46685 - 2020-07-23T08:27:06Z - Kiefer Chang
<p>I deployed a Rook Ceph cluster with the latest master/octopus image today; devices are marked as available even though they are already used by OSDs.</p>
<pre>
[root@rook-ceph-tools-6b4889fdfd-s8x2k /]# ceph orch ls
NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
crash 0/3 0s ago - * 192.168.2.106:5000/docker.io/ceph/daemon-base:octopus <unknown>
mgr 1/1 0s ago - count:1 192.168.2.106:5000/docker.io/ceph/daemon-base:octopus <unknown>
mon 3/3 0s ago - count:3 192.168.2.106:5000/docker.io/ceph/daemon-base:octopus <unknown>
[root@rook-ceph-tools-6b4889fdfd-s8x2k /]# ceph orch device ls
HOST PATH TYPE SIZE DEVICE AVAIL REJECT REASONS
k8s-1 /dev/sda hdd 20.0G None True
k8s-1 /dev/sdb hdd 20.0G None True
k8s-1 /dev/sdc hdd 20.0G None True
k8s-2 /dev/sda hdd 20.0G None True
k8s-2 /dev/sdb hdd 20.0G None True
k8s-2 /dev/sdc hdd 20.0G None True
k8s-3 /dev/sda hdd 20.0G None True
k8s-3 /dev/sdb hdd 20.0G None True
k8s-3 /dev/sdc hdd 20.0G None True
[root@rook-ceph-tools-6b4889fdfd-s8x2k /]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.17537 root default
-5 0.05846 host k8s-1
1 hdd 0.01949 osd.1 up 1.00000 1.00000
3 hdd 0.01949 osd.3 up 1.00000 1.00000
6 hdd 0.01949 osd.6 up 1.00000 1.00000
-3 0.05846 host k8s-2
0 hdd 0.01949 osd.0 up 1.00000 1.00000
2 hdd 0.01949 osd.2 up 1.00000 1.00000
5 hdd 0.01949 osd.5 up 1.00000 1.00000
-7 0.05846 host k8s-3
4 hdd 0.01949 osd.4 up 1.00000 1.00000
7 hdd 0.01949 osd.7 up 1.00000 1.00000
8 hdd 0.01949 osd.8 up 1.00000 1.00000
</pre>
<pre>
# kubectl describe configmap -n rook-ceph -l app=rook-discover
<...>
Name: local-device-k8s-3
Namespace: rook-ceph
Labels: app=rook-discover
rook.io/node=k8s-3
Annotations: <none>
Data
====
devices:
----
[{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:01.1-ata-1 /dev/disk/by-id/ata-QEMU_HARDDISK_QM00002","size":21474836480,"uuid":"70da3095-e5e8-4b02-8e08-c5210a4267b4","serial":"QEMU_HARDDISK_QM00002","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"QEMU_HARDDISK","wwn":"","wwnVendorExtension":"","empty":true,"cephVolumeData":"{\"path\":\"/dev/sda\",\"available\":true,\"rejected_reasons\":[],\"sys_api\":{\"removable\":\"0\",\"ro\":\"0\",\"vendor\":\"ATA\",\"model\":\"QEMU HARDDISK\",\"rev\":\"2.5+\",\"sas_address\":\"\",\"sas_device_handle\":\"\",\"support_discard\":\"512\",\"rotational\":\"1\",\"nr_requests\":\"128\",\"scheduler_mode\":\"cfq\",\"partitions\":{},\"sectors\":0,\"sectorsize\":\"512\",\"size\":21474836480.0,\"human_readable_size\":\"20.00 GB\",\"path\":\"/dev/sda\",\"locked\":0},\"lvs\":[]}","real-path":"/dev/sda"},{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-QEMU_HARDDISK_QM00003 /dev/disk/by-path/pci-0000:00:01.1-ata-2","size":21474836480,"uuid":"24ecd945-823d-47f8-bf1b-73ae38150ead","serial":"QEMU_HARDDISK_QM00003","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"QEMU_HARDDISK","wwn":"","wwnVendorExtension":"","empty":true,"cephVolumeData":"{\"path\":\"/dev/sdb\",\"available\":true,\"rejected_reasons\":[],\"sys_api\":{\"removable\":\"0\",\"ro\":\"0\",\"vendor\":\"ATA\",\"model\":\"QEMU HARDDISK\",\"rev\":\"2.5+\",\"sas_address\":\"\",\"sas_device_handle\":\"\",\"support_discard\":\"512\",\"rotational\":\"1\",\"nr_requests\":\"128\",\"scheduler_mode\":\"cfq\",\"partitions\":{},\"sectors\":0,\"sectorsize\":\"512\",\"size\":21474836480.0,\"human_readable_size\":\"20.00 GB\",\"path\":\"/dev/sdb\",\"locked\":0},\"lvs\":[]}","real-path":"/dev/sdb"},{"name":"sdc","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-QEMU_HARDDISK_QM00004 /dev/disk/by-path/pci-0000:00:01.1-ata-2","size":21474836480,"uuid":"e1eadfb7-2fc7-49aa-a5ea-65944b6ccdcb","serial":"QEMU_HARDDISK_QM00004","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"QEMU_HARDDISK","wwn":"","wwnVendorExtension":"","empty":true,"cephVolumeData":"{\"path\":\"/dev/sdc\",\"available\":true,\"rejected_reasons\":[],\"sys_api\":{\"removable\":\"0\",\"ro\":\"0\",\"vendor\":\"ATA\",\"model\":\"QEMU HARDDISK\",\"rev\":\"2.5+\",\"sas_address\":\"\",\"sas_device_handle\":\"\",\"support_discard\":\"512\",\"rotational\":\"1\",\"nr_requests\":\"128\",\"scheduler_mode\":\"cfq\",\"partitions\":{},\"sectors\":0,\"sectorsize\":\"512\",\"size\":21474836480.0,\"human_readable_size\":\"20.00 GB\",\"path\":\"/dev/sdc\",\"locked\":0},\"lvs\":[]}","real-path":"/dev/sdc"}]
Events: <none>
</pre>
<p>Where sdc's cephVolumeData is:<br /><pre>
"cephVolumeData":"{\"path\":\"/dev/sdc\",\"available\":true,\"rejected_reasons\":[],\"sys_api\":{\"removable\":\"0\",\"ro\":\"0\",\"vendor\":\"ATA\",\"model\":\"QEMU HARDDISK\",\"rev\":\"2.5+\",\"sas_address\":\"\",\"sas_device_handle\":\"\",\"support_discard\":\"512\",\"rotational\":\"1\",\"nr_requests\":\"128\",\"scheduler_mode\":\"cfq\",\"partitions\":{},\"sectors\":0,\"sectorsize\":\"512\",\"size\":21474836480.0,\"human_readable_size\":\"20.00 GB\",\"path\":\"/dev/sdc\",\"locked\":0},\"lvs\":[]}",
</pre></p>

Dashboard - Bug #46652 (New): mgr/dashboard: exception raised when collapsing OSD detail - https://tracker.ceph.com/issues/46652 - 2020-07-21T10:11:37Z - Kiefer Chang
<p>An exception is raised when collapsing the OSD detail pane if the backend API call is too slow.</p>
<pre>
Uncaught TypeError: this.osd is undefined
refresh osd-details.component.ts:44
__tryOrUnsub Subscriber.ts:265
next Subscriber.ts:207
_next Subscriber.ts:139
next Subscriber.ts:99
observe Notification.ts:47
dispatch delay.ts:100
_execute AsyncAction.ts:122
execute AsyncAction.ts:97
flush AsyncScheduler.ts:58
</pre>
<p>The cause: when the detail pane is collapsed, `this.osd` becomes undefined, but the pending refresh callback still tries to assign to it.</p>
<p><img src="https://tracker.ceph.com/attachments/download/4999/osd_collapse.gif" alt="" /></p>
<p>It's not easy to reproduce if the API call is fast, but it can be triggered artificially by applying the following patch (collapse the detail when `start refresh` appears in the console):<br /><pre>
diff --git a/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-details/osd-details.component.ts b/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-details/osd-details.component.ts
index 2ed5e0fe1f..f5c47d4a27 100644
--- a/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-details/osd-details.component.ts
+++ b/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/osd/osd-details/osd-details.component.ts
@@ -38,7 +38,9 @@ export class OsdDetailsComponent implements OnChanges {
}
refresh() {
+ console.log('start refresh');
this.osdService.getDetails(this.osd.id).subscribe((data) => {
+ console.log('done refresh');
this.osd.details = data;
this.osd.histogram_failed = '';
if (!_.isObject(data.histogram)) {
diff --git a/src/pybind/mgr/dashboard/frontend/src/app/shared/api/osd.service.ts b/src/pybind/mgr/dashboard/frontend/src/app/shared/api/osd.service.ts
index cc088d0e95..db6851b054 100644
--- a/src/pybind/mgr/dashboard/frontend/src/app/shared/api/osd.service.ts
+++ b/src/pybind/mgr/dashboard/frontend/src/app/shared/api/osd.service.ts
@@ -2,7 +2,7 @@ import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import * as _ from 'lodash';
-import { map } from 'rxjs/operators';
+import { map, delay } from 'rxjs/operators';
import { CdDevice } from '../models/devices';
import { SmartDataResponseV1 } from '../models/smart';
@@ -81,7 +81,7 @@ export class OsdService {
histogram: { [key: string]: object };
smart: { [device_identifier: string]: any };
}
- return this.http.get<OsdData>(`${this.path}/${id}`);
+ return this.http.get<OsdData>(`${this.path}/${id}`).pipe(delay(4000));
}
/**
</pre></p>

Orchestrator - Bug #46582 (Resolved): cephadm: NFS services should not share the same namespace i... - https://tracker.ceph.com/issues/46582 - 2020-07-17T09:30:20Z - Kiefer Chang
<p>Each NFS service should have its own dedicated pool and namespace for storing export and configuration objects.</p>
<p>cephadm allows me to apply both of these specs:<br /><pre>
service_type: nfs
service_id: foo
placement:
hosts:
- mgr0
spec:
pool: rbd
namespace: nfs
</pre></p>
<pre>
service_type: nfs
service_id: bar
placement:
hosts:
- osd0
spec:
pool: rbd
namespace: nfs
</pre>
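<p>A minimal sketch, with hypothetical names and plain tuples instead of cephadm's real spec classes, of the kind of duplicate pool/namespace validation that could reject the second spec:</p>
<pre>
# Hypothetical sketch -- not cephadm's actual spec validation.
from typing import Iterable, Tuple

class SpecConflict(Exception):
    pass

def check_nfs_namespace(new_id: str, new_pool: str, new_namespace: str,
                        existing: Iterable[Tuple[str, str, str]]) -> None:
    """existing: (service_id, pool, namespace) tuples of already-applied NFS services."""
    for service_id, pool, namespace in existing:
        if service_id != new_id and (pool, namespace) == (new_pool, new_namespace):
            raise SpecConflict(
                f"pool/namespace {new_pool}/{new_namespace} is already used by nfs.{service_id}")

# Applying nfs.bar on the same rbd/nfs location as nfs.foo should fail:
check_nfs_namespace('foo', 'rbd', 'nfs', existing=[])
try:
    check_nfs_namespace('bar', 'rbd', 'nfs', existing=[('foo', 'rbd', 'nfs')])
except SpecConflict as e:
    print(e)  # pool/namespace rbd/nfs is already used by nfs.foo
</pre>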
<p>The namespace should be checked (as sketched above) to make sure it isn't already occupied by another service.</p>

Orchestrator - Bug #46327 (Won't Fix): cephadm: nfs daemons share the same config object - https://tracker.ceph.com/issues/46327 - 2020-07-02T08:02:00Z - Kiefer Chang
<p>If we create an NFS service with multiple instances, those instances share the same RADOS object as their configuration source.</p>
<p>E.g.</p>
<pre>
# cat /tmp/nfs.yml
service_type: nfs
service_id: sesdev_nfs_deployment
placement:
hosts:
- 'mgr0'
- 'osd0'
spec:
pool: rbd
namespace: nfs
# ceph orch apply -i /tmp/nfs.yml
Scheduled nfs.sesdev_nfs_deployment update...
# rados ls -p rbd --all
2020-07-02T07:48:22.637+0000 7fecad052b80 -1 WARNING: all dangerous and experimental features are enabled.
2020-07-02T07:48:22.637+0000 7fecad052b80 -1 WARNING: all dangerous and experimental features are enabled.
2020-07-02T07:48:22.637+0000 7fecad052b80 -1 WARNING: all dangerous and experimental features are enabled.
nfs grace
nfs rec-0000000000000003:nfs.sesdev_nfs_deployment.mgr0
nfs rec-0000000000000003:nfs.sesdev_nfs_deployment.osd0
nfs conf-nfs.sesdev_nfs_deployment <--- this object is shared by all daemons
# podman exec ceph-a83a8c75-ee83-4ada-8881-fc01bdca496b-nfs.sesdev_nfs_deployment.osd0 cat /etc/ganesha/ganesha.conf
<...>
RADOS_URLS {
UserId = "nfs.sesdev_nfs_deployment.osd0";
watch_url = "rados://rbd/nfs/conf-nfs.sesdev_nfs_deployment";
}
# podman exec ceph-a83a8c75-ee83-4ada-8881-fc01bdca496b-nfs.sesdev_nfs_deployment.mgr0 cat /etc/ganesha/ganesha.conf
<...>
RADOS_URLS {
UserId = "nfs.sesdev_nfs_deployment.mgr0";
watch_url = "rados://rbd/nfs/conf-nfs.sesdev_nfs_deployment";
}
</pre>
<p>Each daemon should have its own configuration object. Otherwise, all daemons are going to share the same exports.</p>
<p>The Dashboard discovers daemon instances by enumerating the `conf-xxx` objects. If there is only one config object, the user can only select that one (even though all daemons actually share the same config).</p>
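<p>For illustration, a hedged sketch of the enumeration described above (shelling out to the `rados` CLI; the real Dashboard code uses librados, and the helper name here is an assumption):</p>
<pre>
# Hypothetical sketch -- illustrates the conf-object enumeration, not the
# Dashboard's actual implementation.
import subprocess

def list_ganesha_daemons(pool: str, namespace: str) -> list:
    """Return daemon names derived from the conf-<daemon> objects in pool/namespace."""
    out = subprocess.check_output(
        ['rados', '-p', pool, '--namespace', namespace, 'ls'], text=True)
    return [name[len('conf-'):] for name in out.splitlines() if name.startswith('conf-')]

# With the shared config object above, this returns a single entry,
# ['nfs.sesdev_nfs_deployment'], even though two daemons are running.
print(list_ganesha_daemons('rbd', 'nfs'))
</pre>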
<p><img src="https://tracker.ceph.com/attachments/download/4951/daemons.png" alt="" /></p> Dashboard - Bug #46147 (New): mgr/dashboard: table actions and column headers are not displayed i...https://tracker.ceph.com/issues/461472020-06-23T08:31:17ZKiefer Chang
<ul>
<li>Switch to a language other than English.</li>
<li>The table actions and column headers are still in English.</li>
</ul>
<p>For example:<br /><img src="https://tracker.ceph.com/attachments/download/4930/Screenshot_20200623_160603.png" alt="" /></p>
<p>It works in Octopus:</p>
<p><img src="https://tracker.ceph.com/attachments/download/4931/Screenshot_20200623_162244.png" alt="" /></p> Ceph - Bug #46130 (Resolved): building error when running Sphinxhttps://tracker.ceph.com/issues/461302020-06-22T03:10:53ZKiefer Chang
<p>I hit an error when building the latest master code (89ad6c8e5d789975ae995ed2ca413d19d3f3d7cd).<br />The error message is:</p>
<pre>
Running Sphinx v2.3.1
Configuration error:
There is a programmable error in your configuration file:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/sphinx/config.py", line 368, in eval_config_file
execfile_(filename, namespace)
File "/usr/lib/python3.8/site-packages/sphinx/util/pycompat.py", line 81, in execfile_
exec(code, _globals)
File "/ceph/man/conf.py", line 58, in <module>
man_pages = list(_get_manpages())
File "/ceph/man/conf.py", line 49, in _get_manpages
description = _get_description(path, base)
File "/ceph/man/conf.py", line 28, in _get_description
name, description = two.split('--', 1)
ValueError: not enough values to unpack (expected 2, got 1)
make[2]: *** [doc/man/CMakeFiles/manpages.dir/build.make:159: doc/man/ceph-syn.8] Error 2
make[1]: *** [CMakeFiles/Makefile2:23464: doc/man/CMakeFiles/manpages.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
</pre>
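<p>The failing call is the unpacking of `two.split('--', 1)` in man/conf.py. A hedged sketch (not the actual conf.py code) of a more tolerant split that would let the build continue when a man page's NAME line lacks the "name -- description" separator:</p>
<pre>
# Hypothetical sketch -- a defensive variant of the split shown in the
# traceback, not the real man/conf.py code.
def split_name_description(line: str, fallback_name: str):
    if '--' in line:
        name, description = line.split('--', 1)
        return name.strip(), description.strip()
    # No separator: fall back to the file's base name and keep the line as the description.
    return fallback_name, line.strip()

print(split_name_description('ceph-syn -- ceph synthetic workload generator', 'ceph-syn'))
print(split_name_description('ceph-objectstore-tool utility', 'ceph-objectstore-tool'))
</pre>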
<p>It seems to fail when parsing the <strong>doc/man/8/ceph-objectstore-tool.rst</strong> file.</p>

Orchestrator - Feature #45982 (Resolved): mgr/cephadm: remove or update Dashboard settings after ... - https://tracker.ceph.com/issues/45982 - 2020-06-12T06:14:50Z - Kiefer Chang
When these services are deployed, cephadm calls the Dashboard's commands to set the settings that make the corresponding features available in the Dashboard (see the sketch after the list):
<ul>
<li>Prometheus</li>
<li>AlertManager</li>
<li>Grafana</li>
<li>Ganesha</li>
<li>iSCSI</li>
</ul>
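<p>For reference, these settings are driven through the Dashboard's `set-*` commands; below is a hedged sketch of the set call made on deployment and the reset call a cleanup step could issue (shelling out to the CLI for illustration only, and the exact reset command name is my assumption):</p>
<pre>
# Hypothetical sketch -- cephadm issues these commands inside the mgr; the
# reset command name below is an assumption, not verified against the code.
import subprocess

def on_grafana_deployed(host: str, port: int = 3000) -> None:
    # What effectively happens today when Grafana is deployed.
    subprocess.run(['ceph', 'dashboard', 'set-grafana-api-url',
                    f'https://{host}:{port}'], check=True)

def on_grafana_removed() -> None:
    # The missing counterpart: clear the setting when the service is removed.
    subprocess.run(['ceph', 'dashboard', 'reset-grafana-api-url'], check=True)
</pre>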
<p>The settings are not updated or removed after daemons are destroyed.<br />e.g.</p>
<pre>
bin/ceph orch ls
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-06-12T06:07:44.633+0000 7f925a835700 -1 WARNING: all dangerous and experimental features are enabled.
2020-06-12T06:07:44.653+0000 7f925a835700 -1 WARNING: all dangerous and experimental features are enabled.
NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
alertmanager 1/1 12m ago 14m count:1 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f
grafana 1/1 12m ago 14m count:1 docker.io/ceph/ceph-grafana:latest 87a51ecf0b1c
iscsi.test 2/2 12m ago 3h mgr0,osd0 quay.io/ceph-ci/ceph:master 597a83acbe4d
node-exporter 2/2 12m ago 14m * docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf
prometheus 1/1 12m ago 14m count:1 docker.io/prom/prometheus:v2.18.1 de242295e225
</pre>
<pre>
bin/ceph dashboard get-grafana-api-url
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-06-12T06:07:52.593+0000 7fcfffd0a700 -1 WARNING: all dangerous and experimental features are enabled.
2020-06-12T06:07:52.609+0000 7fcfffd0a700 -1 WARNING: all dangerous and experimental features are enabled.
https://osd0:3000
</pre>
<pre>
bin/ceph orch rm grafana
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-06-12T06:08:08.557+0000 7f6f3f8a4700 -1 WARNING: all dangerous and experimental features are enabled.
2020-06-12T06:08:08.577+0000 7f6f3f8a4700 -1 WARNING: all dangerous and experimental features are enabled.
Removed service grafana
</pre>
<pre>
bin/ceph dashboard get-grafana-api-url
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-06-12T06:12:57.589+0000 7f09142d8700 -1 WARNING: all dangerous and experimental features are enabled.
2020-06-12T06:12:57.605+0000 7f09142d8700 -1 WARNING: all dangerous and experimental features are enabled.
https://osd0:3000
</pre>

Dashboard - Feature #45718 (New): mgr/dashboard: display more information about services in Servi... - https://tracker.ceph.com/issues/45718 - 2020-05-27T02:17:39Z - Kiefer Chang
<p>On the Services page, we display services and their daemons.</p>
<p><img src="https://tracker.ceph.com/attachments/download/4888/Screenshot_20200527_101641.png" alt="" /></p>
<p>The Orchestrator provides more information about services:</p>
<ul>
<li>Placement</li>
<li>Service-specific parameters, for example the iSCSI service's `api_user` and `api_password` parameters.</li>
<li>Management state (`unmanaged` flag)</li>
</ul>
<p>We should display this information to users.</p>

Orchestrator - Bug #45560 (Resolved): cephadm: fail to create OSDs - https://tracker.ceph.com/issues/45560 - 2020-05-15T04:13:22Z - Kiefer Chang
<p>OSDs are not created after applying the following spec:</p>
<pre>
service_type: osd
service_id: dg1
host_pattern: '*'
data_devices:
rotational: true
</pre>
<p>Logs: <br /><pre>
2020-05-15T04:12:06.289+0000 7f8419a59700 0 [cephadm DEBUG root] Applying service osd.dg1 spec
2020-05-15T04:12:06.289+0000 7f8419a59700 0 [cephadm DEBUG cephadm.schedule] All hosts: [HostPlacementSpec(hostname='mgr0', network='', name=''), HostPlacementSpec(hostname='osd0', network
='', name='')]
2020-05-15T04:12:06.289+0000 7f84232ec700 20 mgr Gil Switched to new thread state 0x5650be3c8480
2020-05-15T04:12:06.289+0000 7f84232ec700 20 mgr ~Gil Destroying new thread state 0x5650be3c8480
2020-05-15T04:12:06.289+0000 7f8419a59700 20 mgr get_config key: mgr/dashboard/GRAFANA_API_URL
2020-05-15T04:12:06.289+0000 7f84232ec700 20 mgr Gil Switched to new thread state 0x5650be3c8480
2020-05-15T04:12:06.289+0000 7f84232ec700 20 mgr ~Gil Destroying new thread state 0x5650be3c8480
2020-05-15T04:12:06.289+0000 7f8419a59700 10 mgr get_typed_config GRAFANA_API_URL not found
2020-05-15T04:12:06.289+0000 7f8419a59700 0 [cephadm DEBUG root] Sleeping for 600 seconds
</pre></p>
<p>It might be caused by the changes in <a class="external" href="https://github.com/ceph/ceph/pull/35022">https://github.com/ceph/ceph/pull/35022</a>.</p>

Orchestrator - Bug #45249 (Resolved): cephadm: fail to apply an iSCSI ServiceSpec - https://tracker.ceph.com/issues/45249 - 2020-04-24T07:25:43Z - Kiefer Chang
How to reproduce:
<ul>
<li>Create a spec file<br /><pre>
# cat /tmp/iscsi.txt
service_type: iscsi
service_id: test
placement:
hosts:
- osd0
spec:
pool: rbd
api_user: admin
api_password: admin
trusted_ip_list: 192.168.121.1
</pre></li>
<li>Apply it<br /><pre>
# bin/ceph orch apply -i /tmp/iscsi.txt
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
2020-04-24T06:42:35.315+0000 7fa50e332700 -1 WARNING: all dangerous and experimental features are enabled.
2020-04-24T06:42:35.343+0000 7fa50e332700 -1 WARNING: all dangerous and experimental features are enabled.
Error ENOENT: ServiceSpec: __init__() missing 1 required positional argument: 'service_id
</pre></li>
</ul>
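<p>A hedged sketch with stand-in classes (hypothetical names, not cephadm's actual code) of the general pattern at play: a spec is routed to a specialized class based on its service_type, and a service type missing from that check ends up being constructed through the wrong class:</p>
<pre>
# Hypothetical sketch -- stand-in classes only; not cephadm's real ServiceSpec
# hierarchy or the code at the line referenced below.
class ServiceSpec:
    def __init__(self, service_type, service_id=None, placement=None):
        self.service_type = service_type
        self.service_id = service_id
        self.placement = placement

class IscsiServiceSpec(ServiceSpec):
    def __init__(self, service_type, service_id, placement=None, spec=None):
        super().__init__(service_type, service_id, placement)
        spec = spec or {}
        self.pool = spec.get('pool')
        self.api_user = spec.get('api_user')
        self.api_password = spec.get('api_password')

# Service types that need their specialized class; leaving one out means the
# generic class gets constructed with arguments it does not accept.
SPECIALIZED = {'iscsi': IscsiServiceSpec}

def spec_from_dict(data: dict) -> ServiceSpec:
    cls = SPECIALIZED.get(data.get('service_type'), ServiceSpec)
    return cls(**data)

spec = spec_from_dict({'service_type': 'iscsi', 'service_id': 'test',
                       'spec': {'pool': 'rbd', 'api_user': 'admin', 'api_password': 'admin'}})
print(spec.pool)  # rbd
</pre>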
<p>This can be fixed by adding <strong>iscsi</strong> to the check list here: <a class="external" href="https://github.com/ceph/ceph/blob/036c40e9432deafb4379cedb5c119109376cc5b9/src/pybind/mgr/cephadm/module.py#L2531">https://github.com/ceph/ceph/blob/036c40e9432deafb4379cedb5c119109376cc5b9/src/pybind/mgr/cephadm/module.py#L2531</a></p>

Dashboard - Feature #44865 (New): mgr/dashboard: support zapping devices - https://tracker.ceph.com/issues/44865 - 2020-03-31T15:56:44Z - Kiefer Chang
<p>The orchestrator supports zapping devices via:</p>
<pre>
ceph orch device zap <hostname> <path>
</pre>
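<p>A minimal, hedged sketch of how a backend action might drive this (a plain CLI invocation for illustration; a real Dashboard controller would go through the orchestrator module API, and the helper name is hypothetical):</p>
<pre>
# Hypothetical sketch -- wraps the CLI command shown above.
import subprocess

def zap_device(hostname: str, path: str) -> None:
    """Wipe a device so it can be reused for a new OSD."""
    subprocess.run(['ceph', 'orch', 'device', 'zap', hostname, path], check=True)

# e.g. zap_device('k8s-1', '/dev/sdb')
</pre>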
<p>We can support this as a new action on the inventory page, or as a post step after removing OSDs.</p>

Dashboard - Feature #44833 (Duplicate): mgr/dashboard: allow users to manage labels on hosts - https://tracker.ceph.com/issues/44833 - 2020-03-31T09:51:39Z - Kiefer Chang
To support <a href="https://docs.ceph.com/docs/master/mgr/orchestrator/#placement-specification" class="external">PlacementSpec</a> when creating services, we should allow users to do the following (a CLI sketch follows the list):
<ul>
<li>create hosts with labels.</li>
<li>edit labels of hosts later.</li>
</ul>
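<p>A hedged CLI sketch of the operations the Dashboard would need to wrap (subprocess calls for illustration only; the backend would normally go through the orchestrator module API, and the helper names are hypothetical):</p>
<pre>
# Hypothetical sketch -- wraps the orchestrator host/label CLI commands.
import subprocess

def add_host(hostname: str, addr: str = '', labels: list = None) -> None:
    cmd = ['ceph', 'orch', 'host', 'add', hostname]
    if addr:
        cmd.append(addr)
    cmd.extend(labels or [])
    subprocess.run(cmd, check=True)

def add_label(hostname: str, label: str) -> None:
    subprocess.run(['ceph', 'orch', 'host', 'label', 'add', hostname, label], check=True)

def remove_label(hostname: str, label: str) -> None:
    subprocess.run(['ceph', 'orch', 'host', 'label', 'rm', hostname, label], check=True)

# e.g. add_host('osd0', labels=['mon']); add_label('osd0', 'rgw'); remove_label('osd0', 'rgw')
</pre>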