Ceph : Issues
https://tracker.ceph.com/ (2020-10-01T10:03:53Z)
Dashboard - Bug #47714 (New): mgr/dashboard: Implement an expert setting
https://tracker.ceph.com/issues/47714 (2020-10-01T10:03:53Z, Stephan Müller)
<p>To simplify forms, implement an expert setting that, when disabled, hides all non-mandatory form fields as a first step.</p>
<p>What should it look like?<br />Perhaps an expert toggle at the top right of the form and on the top panel, as it will affect what a user sees in the future.</p>
<p>Tip: before getting too deep into the implementation, please ask in the stand-up whether that's the path we want to go down.</p>

Orchestrator - Tasks #46551 (Resolved): cephadm: Add a better hint for how to add a host
https://tracker.ceph.com/issues/46551 (2020-07-15T14:14:32Z, Stephan Müller)
<p>Currently:</p>
<pre>
master:~ # ceph orch host add mgr0 192.168.121.230
Error ENOENT: Failed to connect to mgr0 (192.168.121.230).
Check that the host is reachable and accepts connections using the cephadm SSH key
you may want to run:
> ceph cephadm get-ssh-config > ssh_config
> ceph config-key get mgr/cephadm/ssh_identity_key > key
> ssh -F ssh_config -i key root@mgr0
</pre>
<p>What actually needs to be done:</p>
<pre>
master:~ # ceph config-key get mgr/cephadm/ssh_identity_pub > key.pub
master:~ # ssh-copy-id -i "key.pub" root@mgr0
</pre>
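<p>A minimal sketch of scripting those two steps (illustrative only; it just chains the commands above via subprocess, with the target host as a placeholder):</p>
<pre>
#!/usr/bin/env python3
# Sketch: install the cephadm public SSH key on a new host.
import subprocess

host = "root@mgr0"  # placeholder target host

# Fetch the cephadm public key and write it to a file.
pubkey = subprocess.run(
    ["ceph", "config-key", "get", "mgr/cephadm/ssh_identity_pub"],
    check=True, capture_output=True, text=True,
).stdout
with open("cephadm_ssh_key.pub", "w") as f:
    f.write(pubkey)

# Copy the key to the host's authorized_keys.
subprocess.run(["ssh-copy-id", "-i", "cephadm_ssh_key.pub", host], check=True)
</pre>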
<p>What the message should look like in the end:</p>
<pre>
master:~ # ceph orch host add mgr0 192.168.121.230
Error ENOENT: Failed to connect to mgr0 (192.168.121.230).
Check that the host is reachable and accepts connections using the cephadm SSH key
you may want to add the SSH key to the host:
> ceph config-key get mgr/cephadm/ssh_identity_pub > ~/cephadm_ssh_key.pub
> ssh-copy-id -i ~/cephadm_ssh_key.pub root@mgr0
you may want to check that everything works before rerunning the command:
> ceph cephadm get-ssh-config > ssh_config
> ceph config-key get mgr/cephadm/ssh_identity_key > ~/cephadm_ssh_key
> ssh -F ssh_config -i ~/cephadm_ssh_key root@mgr0
</pre>

Orchestrator - Support #46547 (Resolved): cephadm: Exception adding host via FQDN if host was alr...
https://tracker.ceph.com/issues/46547 (2020-07-15T12:17:02Z, Stephan Müller)
<p>To reproduce, you need nodes that have a subdomain (unlike those in the current Vagrantfile). I used sesdev to find this issue.</p>
<pre>
master:~ # ceph orch host add node1.pacific.test
Error ENOENT: New host node1.pacific.test (node1.pacific.test) failed check: [
'INFO:cephadm:podman|docker (/usr/bin/podman) is present',
'INFO:cephadm:systemctl is present', 'INFO:cephadm:lvcreate is present',
'INFO:cephadm:Unit chronyd.service is enabled and running',
'INFO:cephadm:Hostname "node1.pacific.test" matches what is expected.',
'ERROR: hostname "node1" does not match expected hostname "node1.pacific.test"'
]
</pre>
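<p>The root cause is that the node reports its short hostname while the FQDN was passed on the command line. A minimal sketch of that kind of comparison (illustrative, not cephadm's actual code):</p>
<pre>
#!/usr/bin/env python3
# Sketch: why "node1" fails the check against "node1.pacific.test".
import socket

expected = "node1.pacific.test"  # name passed to `ceph orch host add`
actual = socket.gethostname()    # often just the short name, e.g. "node1"

if actual != expected:
    # A strict string comparison rejects the FQDN even though it names
    # the same host; comparing expected.split(".")[0] would accept it.
    print(f'ERROR: hostname "{actual}" does not match expected hostname "{expected}"')
</pre>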
<p>With `ceph -W cephadm` one observes:</p>
<pre>
2020-07-15T13:24:21.159126+0200 mgr.node1.zybwkb [ERR] _Promise failed
Traceback (most recent call last):
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 277, in _finalize
next_result = self._on_complete(self._value)
File "/usr/share/ceph/mgr/cephadm/module.py", line 132, in <lambda>
return CephadmCompletion(on_complete=lambda _: f(*args, **kwargs))
File "/usr/share/ceph/mgr/cephadm/module.py", line 1098, in add_host
return self._add_host(spec)
File "/usr/share/ceph/mgr/cephadm/module.py", line 1087, in _add_host
spec.hostname, spec.addr, err))
orchestrator._interface.OrchestratorError: New host node1.pacific.test (node1.pacific.test) failed check: ['INFO:cephadm:podman|docker (/usr/bin/podman) is present', 'INFO:cephadm:systemctl is present', 'INFO:cephadm:lvcreate is present', 'INFO:cephadm:Unit chronyd.service is enabled and running', 'INFO:cephadm:Hostname "node1.pacific.test" matches what is expected.', 'ERROR: hostname "node1" does not match expected hostname "node1.pacific.test"']
</pre>

Ceph - Documentation #45874 (Fix Under Review): doc: Extend resolving conflict section in "Submit...
https://tracker.ceph.com/issues/45874 (2020-06-04T08:09:20Z, Stephan Müller)
<p>Currently it's not clear how to easily continue with the backport script when a conflict is encountered.</p>

Dashboard - Bug #42243 (Duplicate): mgr/dashboard: Fix unit test that is failing in a negative timezone
https://tracker.ceph.com/issues/42243 (2019-10-09T08:54:36Z, Stephan Müller)
<p>Found a test that is failing in a negative timezone (Washington, -05:00).</p>
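<p>The expected pattern only allows a "+" UTC offset, so any negative offset fails, as the output below shows. A minimal sketch of a sign-agnostic pattern (Python for illustration; the actual test is in TypeScript):</p>
<pre>
#!/usr/bin/env python3
# Sketch: accept both positive and negative UTC offsets in the
# suggested snapshot name.
import re

pattern = re.compile(r"^image01_[\d-]+T[\d.:]+[+-][\d:]+$")  # note [+-]

assert pattern.match("image01_2019-10-09T03:40:18.664-05:00")  # negative offset
assert pattern.match("image01_2019-10-09T10:40:18.664+02:00")  # positive offset
</pre>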
<pre>
● RbdSnapshotListComponent › snapshot modal dialog › should display suggested snapshot name
expect(received).toMatch(expected)
Expected pattern: /^image01_[\d-]+T[\d.:]+\+[\d:]+$/
Received string: "image01_2019-10-09T03:40:18.664-05:00"
204 | it('should display suggested snapshot name', () => {
205 | component.openCreateSnapshotModal();
> 206 | expect(component.modalRef.content.snapName).toMatch(
| ^
207 | RegExp(`^${component.rbdName}_[\\d-]+T[\\d.:]+\\+[\\d:]+\$`)
208 | );
209 | });
at src/app/ceph/block/rbd-snapshot-list/rbd-snapshot-list.component.spec.ts:206:51
at ZoneDelegate.Object.<anonymous>.ZoneDelegate.invoke (node_modules/zone.js/dist/zone.js:391:26)
at ProxyZoneSpec.Object.<anonymous>.ProxyZoneSpec.onInvoke (node_modules/zone.js/dist/proxy.js:129:39)
at ZoneDelegate.Object.<anonymous>.ZoneDelegate.invoke (node_modules/zone.js/dist/zone.js:390:52)
at Zone.Object.<anonymous>.Zone.run (node_modules/zone.js/dist/zone.js:150:43)
at Object.testBody.length (node_modules/jest-preset-angular/zone-patch/index.js:52:27)
</pre>

Dashboard - Feature #42232 (New): mgr/dashboard: CephFS directory size calculation
https://tracker.ceph.com/issues/42232 (2019-10-08T15:02:25Z, Stephan Müller)
<p>Add a button to calculate the size of the currently selected directory.</p>
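<p>CephFS already maintains recursive directory statistics as virtual extended attributes, so the backend could read the size directly. A minimal sketch (assuming the directory lives on a mounted CephFS; the path is a placeholder and os.getxattr is Linux-only):</p>
<pre>
#!/usr/bin/env python3
# Sketch: read CephFS recursive directory statistics via vxattrs.
import os

path = "/mnt/cephfs/some/dir"  # placeholder CephFS-mounted directory

rbytes = int(os.getxattr(path, "ceph.dir.rbytes"))  # recursive size in bytes
rfiles = int(os.getxattr(path, "ceph.dir.rfiles"))  # recursive file count
print(f"{path}: {rbytes} bytes in {rfiles} files")
</pre>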
<p>"Details" about table items should not display redundant information. Details should not be provided, if the table already displays all information available.</p> Dashboard - Feature #40310 (New): mgr/dashboard: Create a landing page (aggregated overview) pagehttps://tracker.ceph.com/issues/403102019-06-13T09:08:43ZStephan Müller
<p>Create a landing page (an aggregated overview), as the third step toward multi-cluster management.</p>

Dashboard - Tasks #25167 (New): mgr/dashboard: Display useful popovers in forms
https://tracker.ceph.com/issues/25167 (2018-07-30T14:49:33Z, Stephan Müller)
<p>Add useful popovers to each form attribute to help inexperienced users.</p>

Dashboard - Feature #25166 (New): mgr/dashboard: Add cache pool support
https://tracker.ceph.com/issues/25166 (2018-07-30T14:47:06Z, Stephan Müller)
<p>Ceph provides a concept of <a href="http://ceph.com/docs/master/dev/cache-pool/">cache pools</a>. It should be possible for an administrator to define such a cache pool and assign it to an existing pool via the dashboard:</p>
<a name="Purpose"></a>
<h3 >Purpose<a href="#Purpose" class="wiki-anchor">¶</a></h3>
<p>Use a pool of fast storage devices (typically SSDs) as a cache for an existing slower and larger pool.</p>
<p>Use a replicated pool as a front-end to service most I/O, and destage cold data to a separate erasure coded pool that does not currently (and cannot efficiently) handle the workload.</p>
<p>We should be able to create and add a cache pool to an existing pool of data, and later remove it, without disrupting service or migrating data around.</p>
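<p>A minimal sketch of the steps the dashboard would automate, using the documented cache-tiering commands (the pool names are placeholders):</p>
<pre>
#!/usr/bin/env python3
# Sketch: attach a cache pool to a backing pool via the ceph CLI.
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

backing, cache = "cold-pool", "hot-pool"  # placeholder pool names

ceph("osd", "tier", "add", backing, cache)             # link the pools
ceph("osd", "tier", "cache-mode", cache, "writeback")  # set the cache mode
ceph("osd", "tier", "set-overlay", backing, cache)     # route client I/O
</pre>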
<a name="See-also"></a>
<h3 >See also:<a href="#See-also" class="wiki-anchor">¶</a></h3>
<p><a class="external" href="http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013880.html">http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013880.html</a><br /><a class="external" href="http://docs.ceph.com/docs/master/rados/operations/cache-tiering/">http://docs.ceph.com/docs/master/rados/operations/cache-tiering/</a><br /><a class="external" href="https://www.packtpub.com/packtlib/book/Application-Development/9781784393502/8/ch08lvl1sec91/Creating%20a%20pool%20for%20cache%20tiering">https://www.packtpub.com/packtlib/book/Application-Development/9781784393502/8/ch08lvl1sec91/Creating%20a%20pool%20for%20cache%20tiering</a></p> Dashboard - Feature #25164 (New): mgr/dashboard: Display basic performance/utilization metrics of...https://tracker.ceph.com/issues/251642018-07-30T14:30:23ZStephan Müller
<p>When clicking on a pool in the list of pools, the pool details should show graphs of the pool's performance and utilization.</p>

Dashboard - Tasks #25163 (New): mgr/dashboard: Extend the Ceph pool by configurations
https://tracker.ceph.com/issues/25163 (2018-07-30T14:26:53Z, Stephan Müller)
<p>The Ceph pool details found at /api/pools aren't complete yet and shall be extended with the missing configuration values listed in <a href="http://docs.ceph.com/docs/master/rados/operations/pools/#get-pool-values">http://docs.ceph.com/docs/master/rados/operations/pools/#get-pool-values</a>.</p>
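<p>A minimal sketch of fetching all retrievable values for one pool via the documented CLI (the pool name is a placeholder):</p>
<pre>
#!/usr/bin/env python3
# Sketch: dump every retrievable setting of a pool as JSON.
import json
import subprocess

pool = "rbd"  # placeholder pool name

out = subprocess.run(
    ["ceph", "osd", "pool", "get", pool, "all", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
print(json.dumps(json.loads(out), indent=2))
</pre>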
Dashboard - Tasks #25162 (New): API interceptor should handle client and server side offline status
https://tracker.ceph.com/issues/25162 (2018-07-30T14:25:02Z, Stephan Müller)

<p>The API interceptor should handle client- and server-side offline status.</p>

Dashboard - Feature #25160 (New): mgr/dashboard: Create a "Create Ceph Cluster Pool Configuration...
https://tracker.ceph.com/issues/25160 (2018-07-30T14:19:56Z, Stephan Müller)
<p>This issue is a port of this openATTIC <a href="https://tracker.openattic.org/browse/OP-1072">issue</a>.<br />For all comments and pictures, please look at the original issue.</p>
<p>This Wizard should consist of these three steps:</p>
<ol>
<li>Provide a checklist of purposes this cluster will be used for, e.g. OpenStack, iSCSI, RGW, CephFS. Also ask for the expected final size of this cluster.</li>
<li>Generate a dialog similar to the Ceph PG calculator, which contains a table of all pools that will be created. Each pool should be editable and removable.</li>
<li>Apply</li>
</ol>
<p>This wizard will only be useful if the cluster is newly created.</p>
<p>Original Description for creating a single pool:</p>
<blockquote>
<p>Once the basic/generic functionality for creating a Ceph Pool exists (e.g. the required REST API call), we should consider creating a "Create Pool" Wizard, that guides the user through the required steps.</p>
<p>Some rough notes about this that were gathered during a call with SUSE about this:</p>
<p>First step after installation: creation of a CRUSH map, depending on whether there is just one set of disks (one size, all rotational) vs. rotating disks plus SSDs (likely two different sizes?)<br />(<strong>Not</strong> for SSDs used as journal devices)</p>
<p>Crush Map creation? All your disks in one rule set, or use two separate groupings? Reason: to constrain what disks a pool can use (e.g. for creating a cache pool)<br />Can I query an OSD for the size of its disk?<br />Propose a grouping based on what is reported.</p>
<p>Create a pool for one of the following purposes</p>
<p>- Replicated or Erasure Coded? => Explain the pros and cons in sidebar<br />- Cache tiering (only if there are separate rule sets)</p>
<p>If Erasure Coded: Propose k/m values e.g. 5/3 4/2 (dropdown showing existing profiles), algorithm</p>
<p>- Suggest conservative Placement Group Number (use pgcalc algorithm?) (Hint that it can't be decreased and depends on the estimated number of Pools, probably propose a conservative number)<br />Maybe a Checkbox? "Do you intend to create additional pools?"</p>
<p>- Block devices (iSCSI), Virtual Machine Images<br />- Generic object storage</p>
<p>Note: (Cache tiering does not work with RBDs)</p>
<p>- (CephFS)</p>
</blockquote>
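<p>For the conservative PG suggestion mentioned in the notes above, a minimal sketch of the published pgcalc heuristic (target PGs per OSD times OSD count times the pool's share of the data, divided by the replica size, rounded up to a power of two; the numbers are illustrative):</p>
<pre>
#!/usr/bin/env python3
# Sketch: conservative placement-group suggestion, pgcalc-style.
def suggest_pg_num(osd_count, pool_percent, size, target_per_osd=100):
    raw = target_per_osd * osd_count * pool_percent / size
    pg = 1
    while pg < raw:  # round up to the next power of two
        pg *= 2
    return pg

# e.g. 20 OSDs, a pool holding 40% of the data, 3 replicas -> 512
print(suggest_pg_num(osd_count=20, pool_percent=0.40, size=3))
</pre>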
<p>Also see: <a href="http://ceph.com/pgcalc/">http://ceph.com/pgcalc/</a></p>

Dashboard - Feature #25159 (New): mgr/dashboard: Add CRUSH ruleset management to CRUSH viewer
https://tracker.ceph.com/issues/25159 (2018-07-30T14:07:19Z, Stephan Müller)
<p>Add support to view / create / update / delete a CRUSH ruleset in the CRUSH map viewer.</p>
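<p>A minimal sketch of the documented CLI calls such management would wrap (the rule name and parameters are placeholders):</p>
<pre>
#!/usr/bin/env python3
# Sketch: view, create and delete CRUSH replicated rules via the ceph CLI.
import subprocess

def ceph(*args):
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout

print(ceph("osd", "crush", "rule", "ls"))         # view existing rules
ceph("osd", "crush", "rule", "create-replicated",
     "fast-rule", "default", "host")              # create: name, root, failure domain
ceph("osd", "crush", "rule", "rm", "fast-rule")   # delete
</pre>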