Ceph : Issues (https://tracker.ceph.com/, 2022-10-28T13:35:15Z)
Redmine Dashboard - Bug #57943 (New): doc/radosgw: "waiting on unpkg.com" for upwards of one minute when ... (https://tracker.ceph.com/issues/57943, 2022-10-28T13:35:15Z, Zac Dover)
<a name="Description"></a>
<h3 >Description<a href="#Description" class="wiki-anchor">¶</a></h3>
<p><strong>RADOSGW documentation calls unpkg.com when <a class="external" href="http://localhost:8080/radosgw/multisite/">http://localhost:8080/radosgw/multisite/</a> is loaded in a browser</strong></p>
<p>When I build the documentation locally (in my local working copy of the git repo) and I visit <a class="external" href="http://localhost:8080/radosgw/multisite/">http://localhost:8080/radosgw/multisite/</a>, a message appears at the bottom left of the Firefox window that reads "waiting on unpkg.com" and doc/radosgw/multisite/index.html loads only after about eighty seconds.</p>
<p>This does not prevent the documentation from eventually loading, but in every trial I have made tonight, I have had a working internet connection and thus I have had a way for the browser to reach unpkg.com. I do not know what would happen if a browser were unable to reach unpkg.com.</p>
<p>This seems similar to <a class="external" href="https://tracker.ceph.com/issues/40027">https://tracker.ceph.com/issues/40027</a></p>
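<p>To see which built pages pull in unpkg.com, a sketch like the following can scan the build output (the output directory path is an assumption; adjust it to wherever ./admin/build-doc writes its HTML):</p>

```python
import pathlib

def find_unpkg_refs(build_root: str) -> list[pathlib.Path]:
    """Return the built HTML pages that reference unpkg.com."""
    root = pathlib.Path(build_root)
    return [page for page in root.rglob("*.html")
            if "unpkg.com" in page.read_text(errors="ignore")]

# Hypothetical output location of ./admin/build-doc:
# for page in find_unpkg_refs("build-doc/output/html"):
#     print(page)
```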
<a name="How-reproducible"></a>
<h3 >How reproducible<a href="#How-reproducible" class="wiki-anchor">¶</a></h3>
<p>Steps:</p>
<ol>
<li>Build the documentation with ./admin/build-doc</li>
<li>Serve the documentation with ./admin/serve-doc</li>
<li>Point a browser at <a class="external" href="http://localhost:8080/radosgw/multisite/">http://localhost:8080/radosgw/multisite/</a></li>
<li>Look at the message that reads "waiting on unpkg.com" </li>
<li>Reflect on the nature of time and maybe mortality</li>
</ol>

RADOS - Subtask #37732 (New): qa/suites/rados/thrash-erasure-code*: coverage review tasks (https://tracker.ceph.com/issues/37732, 2018-12-21T01:02:20Z, Josh Durgin)
<ul>
<li>leveldb mons no longer relevant</li>
<li>shec should symlink the thrashers dir to get newer thrashing</li>
<li>balancer, backoff, and pg log facets could be added</li>
<li>fast-read could be added to thrash-erasure-code-*</li>
<li>the big suite could include more plugins and sizes (e.g. 8+3)</li>
<li>the ec-overwrites suite could include cephfs and rbd workloads</li>
<li>each of these could have an rgw workload</li>
<li>could consolidate the different plugins into a workloads dir instead of sub-suites</li>
</ul>

RADOS - Subtask #37730 (New): qa/suites/rados/multimon: coverage review tasks (https://tracker.ceph.com/issues/37730, 2018-12-21T00:09:46Z, Josh Durgin)
<ul>
<li>could add more rados workloads</li>
<li>some redundancy with monthrash, which has a 9-mon cluster</li>
<li>mon_seesaw may be a useful workload addition, to cover mon addition/removal</li>
<li>various objectstore configs not too relevant; could randomize if coverage were desired</li>
<li>mon clock skew tests don't need large clusters; just wait for cluster setup and check for the health warning</li>
<li>unrelated: is ceph_test_mon_workloadgen useful? It doesn't seem to be run by anything</li>
<li>add osd thrashing for OSDMap/OSDMonitor coverage</li>
<li>try compaction: see how long it takes at the end of a test / in the middle of thrashing, and how effective it is (% compaction)</li>
</ul>

RADOS - Feature #24917 (New): Gracefully deal with upgrades when bluestore skipping of data_diges... (https://tracker.ceph.com/issues/24917, 2018-07-13T23:02:14Z, David Zafman, dzafman@redhat.com)
<p>Once the data_digest is no longer being used but is still set from an earlier version, we can get EIO from a read while deep-scrub doesn't report any errors.</p>

Ceph - Feature #20324 (In Progress): Change default filestore 'omap' backend to 'rocksdb' from 'l... (https://tracker.ceph.com/issues/20324, 2017-06-15T22:28:03Z, Vikhyat Umrao)
<p>Change default filestore 'omap' backend to 'rocksdb' from 'leveldb'</p>
<p>This is the option: "filestore_omap_backend":</p>
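<p>For reference, a minimal ceph.conf sketch of the non-default setting (section placement here is an assumption, and existing OSDs keep the backend they were created with):</p>

```ini
[osd]
# Use rocksdb instead of the current leveldb default for the
# filestore omap backend.
filestore_omap_backend = rocksdb
```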
<p>Version-Release number of selected component (if applicable):<br />Red Hat Ceph Storage 2.3/3.0<br />Upstream Jewel and Luminous</p>
<p>We have seen a lot of issues when OMAP directories become very large (40G+): leveldb compaction then takes a long time, and the OSDs hit the suicide timeout because they do not respond while compaction is running.</p>
<p>RocksDB will help because it compacts using multiple threads, and it has other benefits as well.</p>
<p>This change does not need code work, because the option is already implemented; a simple pull request would be enough to change the default. Before that, though, we need to do upstream and downstream QA testing at an average scale so we can confirm that it fixes this issue.</p>
<p>For this, we would also need the package/release engineering team's help to package the rocksdb library downstream.</p>
<p>Downstream Features:</p>
<p>RHCS 2.y: <a class="external" href="https://bugzilla.redhat.com/show_bug.cgi?id=1462011">https://bugzilla.redhat.com/show_bug.cgi?id=1462011</a><br />RHCS 3.0: <a class="external" href="https://bugzilla.redhat.com/show_bug.cgi?id=1462012">https://bugzilla.redhat.com/show_bug.cgi?id=1462012</a></p>

RADOS - Bug #17718 (New): EC Overwrites: update ceph-objectstore-tool export/import to handle rol... (https://tracker.ceph.com/issues/17718, 2016-10-27T02:23:22Z, Samuel Just, sjust@redhat.com)

RADOS - Feature #17043 (New): [RFE] filestore merge threadhold and split multiple defaults may no... (https://tracker.ceph.com/issues/17043, 2016-08-16T12:36:24Z, Vikhyat Umrao)
<p>[RFE] Change default values for filestore split/merge</p>
<p>In recent customer cases we have seen that keeping the split threshold at 320 objects is good, but that it is better to arrive at 320 with:</p>
<pre>
filestore merge threshold = 1
filestore split multiple = 20
</pre>
<p>This will prevent merging of directories.</p>
<p>Current defaults are:</p>
<p><a class="external" href="http://docs.ceph.com/docs/master/rados/configuration/filestore-config-ref/#misc">http://docs.ceph.com/docs/master/rados/configuration/filestore-config-ref/#misc</a></p>
<pre>
filestore merge threshold = 10
filestore split multiple = 2
</pre>
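<p>For context, the filestore config reference gives the split point as 16 * filestore_split_multiple * abs(filestore_merge_threshold) objects per subdirectory, so both the current defaults and the proposed values land on 320; a quick check (the formula is taken from the docs, not from this report):</p>

```python
def split_threshold(merge_threshold: int, split_multiple: int) -> int:
    # Objects allowed in a subdirectory before filestore splits it,
    # per the filestore config reference.
    return 16 * split_multiple * abs(merge_threshold)

print(split_threshold(merge_threshold=10, split_multiple=2))   # defaults -> 320
print(split_threshold(merge_threshold=1, split_multiple=20))   # proposed -> 320
```

<p>The difference is that a merge threshold of 1 makes directory merging effectively never trigger, which is the behavior this RFE is after.</p>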
<p>Version-Release number of selected component (if applicable):<br />Red Hat Ceph Storage 1.3.2</p>

devops - Feature #9411 (New): remove qemu symlink for librbd on rhel7.1 (and later) (https://tracker.ceph.com/issues/9411, 2014-09-09T20:52:10Z, Sage Weil, sage@newdream.net)
<p>rhel 7.1's qemu will no longer need the goofy runtime linking or this symlink.</p>
<p>this should be done for 7.1 and later only, I think, and for fedora.</p>
<p>see also:<br /> <a class="external" href="https://bugzilla.redhat.com/show_bug.cgi?id=1138094">https://bugzilla.redhat.com/show_bug.cgi?id=1138094</a><br /> <a class="external" href="https://bugzilla.redhat.com/show_bug.cgi?id=1109895">https://bugzilla.redhat.com/show_bug.cgi?id=1109895</a></p>

Ceph - Feature #7196 (New): qa: test encoding semantics, not just being able to decode/encode dif... (https://tracker.ceph.com/issues/7196, 2014-01-21T12:20:18Z, Josh Durgin)
<p>I have a branch with at least slightly better detection, done by comparing the json dumps. Need to generate a new object corpus and check in the older json.</p>

rbd - Feature #6934 (New): Consistency check when importing RBD diff (https://tracker.ceph.com/issues/6934, 2013-12-05T11:27:01Z, Rens Reinders, rens@rapide.nl)
<p>It would be very useful to have some sort of mechanism in place to ensure that the image you are importing an RBD diff onto is actually the right image in the right state.<br />Perhaps a checksum of the last N bytes could be compared between the source and destination images. Or perhaps something even smarter... Checksumming an entire 128GB image would take too long, I guess.</p>
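<p>The last-N-bytes idea could be sketched like this; the file-like interface is a stand-in, and a real implementation would read the tail through librbd instead:</p>

```python
import hashlib

TAIL_BYTES = 4 * 1024 * 1024  # arbitrary choice of N

def tail_checksum(image, size: int, n: int = TAIL_BYTES) -> str:
    """Checksum the last n bytes of an image exposed as a
    seekable file-like object."""
    image.seek(max(size - n, 0))
    return hashlib.sha256(image.read(n)).hexdigest()

# Before applying a diff, compare tail_checksum(src, src_size)
# with tail_checksum(dst, dst_size) and abort on mismatch.
```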
<p>This feature would be useful for backing up your entire collection of RBD images to a different (offsite) pool.</p>

Messengers - Feature #6420 (New): buffer: make writemsg use splice where possible (https://tracker.ceph.com/issues/6420, 2013-09-26T13:13:46Z, Sage Weil, sage@newdream.net)

Messengers - Feature #6418 (New): buffer: make readmsg use raw_pipe where possible (https://tracker.ceph.com/issues/6418, 2013-09-26T13:13:12Z, Sage Weil, sage@newdream.net)

Messengers - Feature #6413 (New): msgr: move readmsg() into bufferlist; refactor read_message() (https://tracker.ceph.com/issues/6413, 2013-09-26T13:09:25Z, Sage Weil, sage@newdream.net)

rbd - Feature #2297 (New): ObjectCacher: mark buffers mergeable for ksm (https://tracker.ceph.com/issues/2297, 2012-04-13T18:38:59Z, Josh Durgin)
<p>This is done with a simple madvise call, but we should test that it works with ksm and verify that all the buffers are page aligned.</p>
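<p>The madvise call in question can be illustrated with a small Python sketch (the ObjectCacher itself is C++; this only shows the flag and the page-alignment precondition, and MADV_MERGEABLE is Linux-only):</p>

```python
import mmap

PAGE = mmap.PAGESIZE

# Anonymous mappings are page-aligned by construction, which is the
# alignment KSM needs.
buf = mmap.mmap(-1, 4 * PAGE)

# Ask KSM to consider this region for page merging; requires a
# kernel built with CONFIG_KSM and the ksmd scanner enabled.
if hasattr(mmap, "MADV_MERGEABLE"):
    buf.madvise(mmap.MADV_MERGEABLE)
```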