<h1>Ceph: Issues</h1>
<p><a class="external" href="https://tracker.ceph.com/">https://tracker.ceph.com/</a> · 2020-06-04T08:09:20Z</p>
<hr />
<h2>Redmine Ceph - Documentation #45874 (Fix Under Review): doc: Extend resolving conflict section in "Submit...</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/45874">https://tracker.ceph.com/issues/45874</a> · 2020-06-04T08:09:20Z · Stephan Müller</p>
<p>Currently it is not clear how to easily continue with the backport script when a conflict is encountered.</p>
<hr />
<h2>Dashboard - Tasks #25167 (New): mgr/dashboard: Display useful popovers in forms</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25167">https://tracker.ceph.com/issues/25167</a> · 2018-07-30T14:49:33Z · Stephan Müller</p>
<p>Add useful popovers to each form attribute, for inexperienced users.</p>
<hr />
<h2>Dashboard - Feature #25166 (New): mgr/dashboard: Add cache pool support</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25166">https://tracker.ceph.com/issues/25166</a> · 2018-07-30T14:47:06Z · Stephan Müller</p>
<p>Ceph provides a concept of <a href="http://ceph.com/docs/master/dev/cache-pool/" class="external">cache pools</a>. It should be possible for an administrator to define such a cache pool and assign it to an existing pool via the dashboard:</p>
<a name="Purpose"></a>
<h3 >Purpose<a href="#Purpose" class="wiki-anchor">¶</a></h3>
<p>Use a pool of fast storage devices (probably SSDs) and use it as a cache for an existing slower and larger pool.</p>
<p>Use a replicated pool as a front-end to service most I/O, and destage cold data to a separate erasure coded pool that does not currently (and cannot efficiently) handle the workload.</p>
<p>We should be able to create and add a cache pool to an existing pool of data, and later remove it, without disrupting service or migrating data around.</p>
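<p>As a sketch of what the dashboard would have to drive under the hood, attaching a cache tier maps to a short sequence of <code>ceph osd tier</code> CLI calls (described in the cache-tiering documentation linked under "See also"). The helper name and the build-commands-without-executing design below are illustrative assumptions, not dashboard code:</p>

```python
# Illustrative sketch: assemble the "ceph" CLI invocations needed to attach
# a cache tier to an existing pool. The helper name and the writeback
# default are assumptions for illustration, not actual dashboard code.
def cache_tier_commands(storage_pool: str, cache_pool: str,
                        mode: str = "writeback") -> list[list[str]]:
    return [
        # Attach cache_pool as a tier of storage_pool.
        ["ceph", "osd", "tier", "add", storage_pool, cache_pool],
        # Set the caching mode (e.g. writeback).
        ["ceph", "osd", "tier", "cache-mode", cache_pool, mode],
        # Direct client traffic for storage_pool through the cache tier.
        ["ceph", "osd", "tier", "set-overlay", storage_pool, cache_pool],
    ]

cmds = cache_tier_commands("cold-storage", "hot-cache")
```

<p>Removing the tier later would mirror this sequence (<code>remove-overlay</code>, then <code>tier remove</code>), which is what makes the non-disruptive add/remove workflow described above possible.</p>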
<a name="See-also"></a>
<h3 >See also:<a href="#See-also" class="wiki-anchor">¶</a></h3>
<p><a class="external" href="http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013880.html">http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013880.html</a><br /><a class="external" href="http://docs.ceph.com/docs/master/rados/operations/cache-tiering/">http://docs.ceph.com/docs/master/rados/operations/cache-tiering/</a><br /><a class="external" href="https://www.packtpub.com/packtlib/book/Application-Development/9781784393502/8/ch08lvl1sec91/Creating%20a%20pool%20for%20cache%20tiering">https://www.packtpub.com/packtlib/book/Application-Development/9781784393502/8/ch08lvl1sec91/Creating%20a%20pool%20for%20cache%20tiering</a></p>
<hr />
<h2>Dashboard - Feature #25165 (Rejected): mgr/dashboard: Manage Ceph pool snapshots</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25165">https://tracker.ceph.com/issues/25165</a> · 2018-07-30T14:42:38Z · Stephan Müller</p>
<p>Currently there is no way to create / delete pool snapshots.</p>
<hr />
<h2>Dashboard - Feature #25164 (New): mgr/dashboard: Display basic performance/utilization metrics of...</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25164">https://tracker.ceph.com/issues/25164</a> · 2018-07-30T14:30:23Z · Stephan Müller</p>
<p>When clicking on a pool in the list of pools, the pool details should show graphs of the pool's performance and utilization.</p>
<hr />
<h2>Dashboard - Tasks #25163 (New): mgr/dashboard: Extend the Ceph pool by configurations</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25163">https://tracker.ceph.com/issues/25163</a> · 2018-07-30T14:26:53Z · Stephan Müller</p>
<p>The Ceph pool details found on /api/pools aren't complete yet and should be extended with the missing configuration options listed in <a class="external" href="http://docs.ceph.com/docs/master/rados/operations/pools/#get-pool-values">http://docs.ceph.com/docs/master/rados/operations/pools/#get-pool-values</a>.</p>
<hr />
<h2>Dashboard - Tasks #25162 (New): API interceptor should handle client and server side offline status</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25162">https://tracker.ceph.com/issues/25162</a> · 2018-07-30T14:25:02Z · Stephan Müller</p>
<p>The API interceptor should handle client and server side offline status.</p>
<hr />
<h2>Dashboard - Cleanup #25161 (Resolved): Every keystroke for the username in the RGW user form trig...</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25161">https://tracker.ceph.com/issues/25161</a> · 2018-07-30T14:23:26Z · Stephan Müller</p>
<p>Every keystroke for the username in the RGW user form triggers an API call; this should be reduced to at most two requests.</p>
<hr />
<h2>Dashboard - Feature #25160 (New): mgr/dashboard: Create a "Create Ceph Cluster Pool Configuration...</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25160">https://tracker.ceph.com/issues/25160</a> · 2018-07-30T14:19:56Z · Stephan Müller</p>
<p>This issue is a port from this openATTIC <a href="https://tracker.openattic.org/browse/OP-1072" class="external">issue</a>.<br />For all comments and pictures please look at the original issue.</p>
<p>This Wizard should consist of these three steps:</p>
<ol>
<li>Provide a check list of purposes this cluster will be used for, e.g. OpenStack, iSCSI, RGW, CephFS. Also, ask for the expected final size of this cluster.</li>
<li>Generate a dialog similar to the Ceph PG calculator, which contains a table of all pools that will be created. Each pool should be editable and removable.</li>
<li>Apply</li>
</ol>
<p>This wizard will only be useful if the cluster is newly created.</p>
<p>Original Description for creating a single pool:</p>
<blockquote>
<p>Once the basic/generic functionality for creating a Ceph Pool exists (e.g. the required REST API call), we should consider creating a "Create Pool" Wizard, that guides the user through the required steps.</p>
<p>Some rough notes about this that were gathered during a call with SUSE about this:</p>
<p>First step after installation - creation of a CRUSH map, depending on whether there is just one set of disks (one size, all rotational) vs. rotating disks plus SSDs (likely two different sizes?)<br />(<strong>Not</strong> for SSDs used as journal devices)</p>
<p>Crush Map creation? All your disks in one rule set, or use two separate groupings? Reason: to constrain what disks a pool can use (e.g. for creating a cache pool)<br />Can I query an OSD for the size of its disk?<br />Propose a grouping based on what is reported.</p>
<p>Create a pool for one of the following purposes</p>
<p>- Replicated or Erasure Coded? => Explain the pros and cons in sidebar<br />- Cache tiering (only if there are separate rule sets)</p>
<p>If Erasure Coded: Propose k/m values e.g. 5/3 4/2 (dropdown showing existing profiles), algorithm</p>
<p>- Suggest conservative Placement Group Number (use pgcalc algorithm?) (Hint that it can't be decreased and depends on the estimated number of Pools, probably propose a conservative number)<br />Maybe a Checkbox? "Do you intend to create additional pools?"</p>
<p>- Block devices (iSCSI), Virtual Machine Images<br />- Generic object storage</p>
<p>Note: (Cache tiering does not work with RBDs)</p>
<p>- (CephFS)</p>
</blockquote>
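<p>The "conservative Placement Group Number" suggestion in the notes above follows the pgcalc-style heuristic: target PGs per OSD, times OSD count, scaled by the pool's expected share of the data, divided by the replica count, then rounded to a power of two. A minimal sketch, assuming a round-up-only policy (the real calculator linked below applies slightly more nuanced rounding rules, and the function name is made up for illustration):</p>

```python
import math

# Hedged sketch of a pgcalc-style heuristic; the actual calculator at
# ceph.com/pgcalc uses somewhat different rounding rules.
def suggest_pg_num(osd_count: int, data_percent: float,
                   replica_size: int, target_pgs_per_osd: int = 100) -> int:
    """Suggest a placement-group count for one pool.

    data_percent is this pool's expected share of cluster data (0.0-1.0).
    """
    raw = target_pgs_per_osd * osd_count * data_percent / replica_size
    # Round up to the next power of two; a conservative choice, since
    # pg_num could not be decreased at the time this issue was filed.
    return 2 ** max(0, math.ceil(math.log2(raw)))

pg = suggest_pg_num(osd_count=100, data_percent=1.0, replica_size=3)
# → 4096 (raw value 3333.3 rounded up to the next power of two)
```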
<p>See also: <a class="external" href="http://ceph.com/pgcalc/">http://ceph.com/pgcalc/</a></p>
<hr />
<h2>Dashboard - Feature #25159 (New): mgr/dashboard: Add CRUSH ruleset management to CRUSH viewer</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25159">https://tracker.ceph.com/issues/25159</a> · 2018-07-30T14:07:19Z · Stephan Müller</p>
<p>Add support to view / create / update / delete a CRUSH ruleset in the CRUSH map viewer.</p>
<hr />
<h2>Dashboard - Feature #25158 (Resolved): Add pool cache tiering details tab</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25158">https://tracker.ceph.com/issues/25158</a> · 2018-07-30T14:05:31Z · Stephan Müller</p>
<ul>
<li>Show the cache tiers as an indented pool in the data table.</li>
<li>Don't show the cache tiering tab if there is no cache tier.</li>
</ul>
<hr />
<h2>Dashboard - Feature #25156 (Resolved): mgr/dashboard: Erasure code profile management</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25156">https://tracker.ceph.com/issues/25156</a> · 2018-07-30T13:47:20Z · Stephan Müller</p>
<p>The dashboard should support managing erasure code profiles, e.g. listing all existing profiles, creating new ones and deleting existing ones.</p>
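<p>Creating a profile maps to a single <code>ceph osd erasure-code-profile set</code> CLI call. A hedged sketch of assembling that command; the helper name, the underscore-to-hyphen conversion, and the defaults are assumptions for illustration, not actual dashboard code:</p>

```python
# Illustrative sketch: assemble the CLI call that creates an erasure code
# profile. Helper name and defaults are assumptions, not dashboard code.
def ec_profile_set_command(name: str, k: int, m: int,
                           plugin: str = "jerasure",
                           **options) -> list[str]:
    cmd = ["ceph", "osd", "erasure-code-profile", "set", name,
           f"k={k}", f"m={m}", f"plugin={plugin}"]
    # Extra profile options, passed as keyword arguments with underscores
    # converted to the hyphenated option names, e.g. crush-failure-domain.
    cmd += [f"{key.replace('_', '-')}={value}"
            for key, value in options.items()]
    return cmd

cmd = ec_profile_set_command("ecprofile42", k=4, m=2,
                             crush_failure_domain="host")
```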
<p>The creation should allow selecting the EC plugin as well as all other features described here: <a class="external" href="http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/">http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/</a></p>
<hr />
<h2>Dashboard - Bug #25139 (Resolved): Task wrapper should not call notifyTask if a task fails</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/25139">https://tracker.ceph.com/issues/25139</a> · 2018-07-27T14:13:51Z · Stephan Müller</p>
<p>The task wrapper should not call notifyTask when a task fails, as the API interceptor already does that; because of this duplication, two error messages are often shown.</p>
<hr />
<h2>Dashboard - Tasks #24460 (Resolved): Make notification and tasks look more human readable</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/24460">https://tracker.ceph.com/issues/24460</a> · 2018-06-08T15:05:20Z · Stephan Müller</p>
<p>Currently notifications and tasks don't look alike, and the way they present information is often quite redundant.</p>
<p>The idea is to have three kinds of message types:</p>
<p>1. One <em>running</em> message<br /><pre>
Creating ....
</pre></p>
<p>2. One <em>finished</em> message<br /><pre>
Created ...
</pre></p>
<p>3. One <em>failure</em> message with description<br /><pre>
Creation ... failed
error description
</pre></p>
<p>Currently our messages look like this:<br /><pre>
Create ...
description message for running/finished and failure
</pre></p>
<hr />
<h2>Dashboard - Cleanup #24134 (Resolved): Semi automatic Task creation</h2>
<p><a class="external" href="https://tracker.ceph.com/issues/24134">https://tracker.ceph.com/issues/24134</a> · 2018-05-15T09:00:48Z · Stephan Müller</p>
<p>I would like to have a service that wraps a task around an HTTP request and returns an observable, just like 'http' does.</p>
<p>This way it would be easy to use tasks without repeating the same code and otherwise unneeded imports (only used for task wrapping) over and over again.</p>
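<p>The real implementation would be an Angular/TypeScript service wrapping 'http', but the wrapping idea itself is language-agnostic: intercept the request, record a task start, and record completion or failure when the request resolves. A minimal sketch of that pattern in Python; all names here (<code>with_task</code>, <code>events</code>) are made up for illustration:</p>

```python
# Language-agnostic sketch of the task-wrapping idea from the issue above.
# All names are illustrative assumptions, not actual dashboard code.
import functools

events: list[str] = []  # stand-in for the dashboard's task/notification log

def with_task(task_name):
    """Wrap a request-like callable in start/finish/failure task events."""
    def decorator(request_fn):
        @functools.wraps(request_fn)
        def wrapper(*args, **kwargs):
            events.append(f"task started: {task_name}")
            try:
                result = request_fn(*args, **kwargs)
            except Exception:
                # Record the failure, then let the caller (or, in the
                # dashboard, the API interceptor) handle the error itself.
                events.append(f"task failed: {task_name}")
                raise
            events.append(f"task finished: {task_name}")
            return result
        return wrapper
    return decorator

@with_task("pool/create")
def create_pool(name):
    return {"pool": name}  # stand-in for the actual HTTP call

result = create_pool("rbd")
```

<p>Centralizing the wrapping like this is exactly what removes the repeated boilerplate and the task-only imports from every call site.</p>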