Ceph : Issues
https://tracker.ceph.com/
2022-01-28T15:55:57Z, Ceph
Orchestrator - Documentation #54056 (New): https://docs.ceph.com/en/latest/cephadm/install/#deplo...
https://tracker.ceph.com/issues/54056
2022-01-28T15:55:57Z, Sebastian Wagner
<p><a class="external" href="https://docs.ceph.com/en/latest/cephadm/install/#deployment-in-an-isolated-environment">https://docs.ceph.com/en/latest/cephadm/install/#deployment-in-an-isolated-environment</a></p>
<ul>
<li>e.g. the monitoring stack and ingress are completely missing</li>
<li>link to the registry-login functionality</li>
</ul>

Orchestrator - Documentation #54051 (New): cephadm: advise users how and when to change OSD specs
https://tracker.ceph.com/issues/54051
2022-01-28T12:24:42Z, Sebastian Wagner
<p>In general it works fine; there is just one caveat: we're not going to touch any already deployed OSDs.</p>
<p>I'd recommend sticking to a particular OSD layout but adjusting the placement: for example, test the deployment on one host and then roll it out to all the others.</p>
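<p>A sketch of that workflow as a drive group spec (the service id and host name are hypothetical; the keys follow the cephadm OSD service spec format):</p>
<pre>
service_type: osd
service_id: throughput_optimized   # hypothetical name
placement:
  hosts:
    - test-host-1                  # try the layout on one host first
spec:
  data_devices:
    all: true
</pre>
<p>Once the layout looks right, only the <code>placement</code> section changes (e.g. to <code>host_pattern: '*'</code>); the layout itself stays untouched.</p>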
<p>If a user changes other properties of the OSD spec, they have to understand that existing OSDs are not redeployed. E.g. changing the encryption flag of an OSD spec doesn't magically encrypt existing OSDs; only new OSDs will be encrypted.</p>

Orchestrator - Documentation #53997 (Resolved): cephadm spec: document "config" key
https://tracker.ceph.com/issues/53997
2022-01-24T15:19:53Z, Sebastian Wagner
<p>Users can specify config options for a particular service:</p>
<pre><code class="yaml syntaxhl">service_type: mds
service_id: fsname
placement:
  count: 2
config:
  mds_cache_memory_limit: 8Gi
</code></pre>
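<p>Such a spec is applied with <code>ceph orch apply</code> (filename hypothetical):</p>
<pre>
ceph orch apply -i mds-spec.yaml
</pre>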
<p>This is also going to be cleaned up properly. Let's document it!</p>

Orchestrator - Documentation #53532 (New): cephadm: document Prometheus Sizing observations
https://tracker.ceph.com/issues/53532
2021-12-08T11:50:52Z, Sebastian Wagner
<p>Prometheus Sizing observations:</p>
<ul>
<li>22/11: 48 OSDs growing to 600+; 124 GB disk space, peak of 3000 IOPS, 150 MB/s, 2 cores (peak 6), 43 GB RAM</li>
<li>with 4147 OSDs: storage requirement is ~230 GB, 2 cores (peak 8)</li>
</ul>
<p>Should be added to <a class="external" href="https://docs.ceph.com/en/latest/cephadm/services/monitoring/">https://docs.ceph.com/en/latest/cephadm/services/monitoring/</a></p>
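<p>Alongside the sizing numbers, it may be worth documenting that Prometheus disk usage can be bounded through the service spec; recent cephadm releases accept retention settings there (treat this as a sketch to verify against the installed release, and the values as placeholders):</p>
<pre>
service_type: prometheus
placement:
  count: 1
spec:
  retention_time: "15d"
  retention_size: "100GB"
</pre>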
<p>Paul:</p>
<ul>
<li>What does <strong>22/11</strong> mean?</li>
</ul>

Orchestrator - Documentation #52704 (Won't Fix): network 'flow' diagram for cephadm
https://tracker.ceph.com/issues/52704
2021-09-22T14:50:37Z, Sebastian Wagner
<p>It sounds like you're looking for a network 'flow' diagram for cephadm plus the normal Ceph architecture.</p>

Orchestrator - Documentation #52490 (Resolved): document single-host-defaults
https://tracker.ceph.com/issues/52490
2021-09-02T09:13:35Z, Sebastian Wagner
<ul>
<li>use cases</li>
<li>what it does</li>
</ul>
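<p>For context, these defaults are typically enabled at bootstrap time; recent cephadm versions expose a flag for this (a sketch, to be verified against your release):</p>
<pre>
cephadm bootstrap --mon-ip &lt;mon-ip&gt; --single-host-defaults
</pre>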
<pre>
global/osd_crush_choose_leaf_type = 0
global/osd_pool_default_size = 2
mgr/mgr_standby_modules = False
</pre>

mgr - Backport #39270 (Resolved): nautilus: mgr/rook: Fix Python 2 regression
https://tracker.ceph.com/issues/39270
2019-04-12T11:45:13Z, Sebastian Wagner

<p><a class="external" href="https://github.com/ceph/ceph/pull/27496">https://github.com/ceph/ceph/pull/27496</a></p>

mgr - Backport #39172 (Resolved): nautilus: rook-ceph-system namespace hardcoded in the rook orch...
https://tracker.ceph.com/issues/39172
2019-04-10T16:35:28Z, Sebastian Wagner

<p><a class="external" href="https://github.com/ceph/ceph/pull/27496">https://github.com/ceph/ceph/pull/27496</a></p>

mgr - Backport #39169 (Resolved): nautilus: doc/mgr/orchestrator_cli: Rook orch supports mon update
https://tracker.ceph.com/issues/39169
2019-04-10T09:08:33Z, Sebastian Wagner

<p><a class="external" href="https://github.com/ceph/ceph/pull/27488">https://github.com/ceph/ceph/pull/27488</a></p>

mgr - Backport #39168 (Resolved): nautilus: doc/orchestrator: Fix broken bullet points
https://tracker.ceph.com/issues/39168
2019-04-10T09:08:16Z, Sebastian Wagner

<p><a class="external" href="https://github.com/ceph/ceph/pull/27487">https://github.com/ceph/ceph/pull/27487</a></p>

mgr - Backport #39167 (Resolved): nautilus: Rook: Fix creation of Bluestore OSDs
https://tracker.ceph.com/issues/39167
2019-04-10T09:07:54Z, Sebastian Wagner

<p><a class="external" href="https://github.com/ceph/ceph/pull/27486">https://github.com/ceph/ceph/pull/27486</a></p>

mgr - Backport #39083 (Resolved): nautilus: mgr/deepsea: use ceph_volume output in get_inventory()
https://tracker.ceph.com/issues/39083
2019-04-02T14:18:03Z, Sebastian Wagner

<p><a class="external" href="https://github.com/ceph/ceph/pull/27319">https://github.com/ceph/ceph/pull/27319</a></p>

mgr - Backport #38837 (Resolved): nautilus: mgr/orchestrator: Add error handling to interface
https://tracker.ceph.com/issues/38837
2019-03-21T11:29:21Z, Sebastian Wagner

mgr - Backport #38808 (Resolved): nautilus: mgr/orchestrator: Remove `(add|test|remove)_stateful_...
https://tracker.ceph.com/issues/38808
2019-03-19T09:19:26Z, Sebastian Wagner

<p><a class="external" href="https://github.com/ceph/ceph/pull/27043">https://github.com/ceph/ceph/pull/27043</a></p>

Dashboard - Backport #37870 (Resolved): mgr/dashboard: (Mimic) UnboundLocalError: local variable ...
https://tracker.ceph.com/issues/37870
2019-01-11T08:41:32Z, Sebastian Wagner
<p>Extracting from <a class="external" href="https://github.com/rook/rook/issues/2404">https://github.com/rook/rook/issues/2404</a></p>
<pre>
[07/Jan/2019:06:27:21] HTTP Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cherrypy/_cprequest.py", line 656, in respond
response.body = self.handler()
File "/usr/lib/python2.7/site-packages/cherrypy/lib/encoding.py", line 188, in __call__
self.body = self.oldhandler(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/cherrypy/lib/jsontools.py", line 61, in json_handler
value = cherrypy.serving.request._json_inner_handler(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/cherrypy/_cpdispatch.py", line 34, in __call__
return self.callable(*self.args, **self.kwargs)
File "/usr/lib64/ceph/mgr/dashboard/controllers/summary.py", line 68, in __call__
'rbd_mirroring': self._rbd_mirroring(),
File "/usr/lib64/ceph/mgr/dashboard/controllers/summary.py", line 37, in _rbd_mirroring
_, data = get_daemons_and_pools()
File "/usr/lib64/ceph/mgr/dashboard/tools.py", line 212, in wrapper
return rvc.run(fn, args, kwargs)
File "/usr/lib64/ceph/mgr/dashboard/tools.py", line 194, in run
raise self.exception
UnboundLocalError: local variable 'mirror_mode' referenced before assignment
</pre>
<p>and</p>
<pre>
Traceback (most recent call last):
File "/usr/lib64/ceph/mgr/dashboard/controllers/rbd_mirroring.py", line 94, in get_pools
mirror_mode = rbdctx.mirror_mode_get(ioctx)
File "rbd.pyx", line 1195, in rbd.RBD.mirror_mode_get (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.2/rpm/el7/BUILD/ceph-13.2.2/build/src/pybind/rbd/pyrex/rbd.c:8543)
PermissionError: [errno 1] error getting mirror mode
</pre>
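<p>The first traceback is the classic pattern of a name that is only bound on the success path: when the mirror-mode call raises, the variable is never assigned but is still referenced later. A minimal standalone reproduction (hypothetical function, not the actual dashboard code):</p>

```python
def get_mirror_mode(fail):
    # 'mirror_mode' is only bound when the call succeeds; on the error
    # path the except clause swallows the exception without binding it.
    try:
        if fail:
            raise PermissionError(1, "error getting mirror mode")
        mirror_mode = "pool"
    except PermissionError:
        pass  # bug: should bind a default or re-raise
    # Raises UnboundLocalError when fail=True.
    return mirror_mode

print(get_mirror_mode(fail=False))  # prints "pool"
```

<p>A one-line fix for this pattern typically binds the name up front (e.g. <code>mirror_mode = None</code>) so the error path returns a defined value instead of crashing.</p>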
<p>The fix is most likely a simple cherry-pick of this single line: <a class="external" href="https://github.com/ceph/ceph/commit/dc616c7d724a2580927ff8fb1fbc070d83dd67ef#diff-24e410817ffc0acf57da283811f4aa03R98">https://github.com/ceph/ceph/commit/dc616c7d724a2580927ff8fb1fbc070d83dd67ef#diff-24e410817ffc0acf57da283811f4aa03R98</a></p>