www.ceph.com: Issues
https://tracker.ceph.com/
https://tracker.ceph.com/favicon.ico
2019-03-15T12:54:54Z
Ceph
Redmine
Bug #38764 (Resolved): Enforce HTTPS on tracker.ceph.com
https://tracker.ceph.com/issues/38764
2019-03-15T12:54:54Z
Ernesto Puerta
<p>ceph.com already redirects to secure endpoint and sets CSP upgrade-insecure-request (<a class="external" href="https://www.w3.org/TR/upgrade-insecure-requests/">https://www.w3.org/TR/upgrade-insecure-requests/</a>).</p>
<p>However, tracker.ceph.com does not follow this practice, so if you miss typing the trailing "s" in "https", or a plain-text URL gets cached in your browser history, you end up regularly sending your password and session cookies unencrypted over the wire. Could HSTS, or at least CSP, be enabled on the Ceph tracker, and could it be submitted to the browser HSTS preload list (<a class="external" href="https://hstspreload.org">https://hstspreload.org</a>)?</p>
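<p>As an illustration (a minimal sketch, not the actual tracker.ceph.com configuration), these are the two response headers the request asks for, shown as a hypothetical WSGI wrapper:</p>

```python
# Hypothetical sketch: the response headers a server would add to enforce
# HTTPS. Header values follow the HSTS (RFC 6797) and CSP conventions;
# the WSGI wrapper itself is illustrative, not Redmine's actual setup.
SECURITY_HEADERS = {
    # Tell browsers to use HTTPS for the next year, including subdomains;
    # "preload" is required for submission to hstspreload.org.
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains; preload",
    # Ask browsers to rewrite http:// subresource requests to https://.
    "Content-Security-Policy": "upgrade-insecure-requests",
}

def add_security_headers(start_response):
    """Wrap a WSGI start_response so every reply carries the headers above."""
    def wrapped(status, headers, exc_info=None):
        return start_response(
            status, headers + list(SECURITY_HEADERS.items()), exc_info
        )
    return wrapped
```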
Bug #21714 (Resolved): Bad link on Block-storage home page
https://tracker.ceph.com/issues/21714
2017-10-07T12:49:59Z
Soumya M S
<p>On the page <a class="external" href="http://ceph.com/ceph-storage/block-storage/">http://ceph.com/ceph-storage/block-storage/</a> the link "Learn More" doesn't work. It points to <a class="external" href="http://docs.ceph.com/docs/master/rbd/rbd/">http://docs.ceph.com/docs/master/rbd/rbd/</a> but on clicking, it displays "404 not found".</p>
Bug #17702 (Resolved): docs.ceph.com is out of space
https://tracker.ceph.com/issues/17702
2016-10-26T00:48:56Z
Josh Durgin
<p>The ceph-docs jenkins job failed to rsync the result with:</p>
<p>No space left on device (28)</p>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-docs/7342/console">https://jenkins.ceph.com/job/ceph-docs/7342/console</a></p>
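<p>A pre-flight free-space check could make this failure mode louder and earlier. A minimal sketch (hypothetical, not part of the actual ceph-docs job):</p>

```python
import shutil

def enough_space(path, required_bytes):
    """Return True if the filesystem holding `path` has at least
    `required_bytes` free.

    A hypothetical pre-flight check a docs-publishing job could run before
    rsync, so it fails early with a clear message instead of aborting
    mid-transfer with errno 28 (ENOSPC, "No space left on device").
    """
    return shutil.disk_usage(path).free >= required_bytes

# Example: require 1 GiB of headroom at the destination before syncing
# (the path and threshold are illustrative assumptions).
# if not enough_space("/var/www/docs", 1 * 1024**3):
#     raise SystemExit("aborting sync: destination is low on space")
```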
Bug #17426 (Closed): CephFS: IO pauses for more than 40 seconds while running write-intensive IOs
https://tracker.ceph.com/issues/17426
2016-09-29T07:07:00Z
Parikshith B
Parikshith.B@sandisk.com
<p>CephFS environment:<br />1 MDS node<br />1 Ceph kernel filesystem client, Ubuntu 14.04 LTS, kernel version 4.4.0-36-generic</p>
<p>Test Steps:</p>
<p>1. Installed the cluster with 1 MDS and created one CephFS.<br />2. Extended the root Ceph file system to 3 clients by creating three different sub-directories and mounting one on each client.<br />3. Created 1000 files of 5 GB each using vdbench (threads=256, xfersize=4k, fileio=random, fileselect=seq).<br />4. Ran a write-intensive workload over these files with the following profile:
{code}<br />fsd=fsd1,anchor=/home/ems/cephfs_mnt,depth=1,width=1,files=1000,size=5G,openflags=o_sync<br />fwd=fwd2,fsd=fsd*,rdpct=0,xfersize=4k,fileio=random,threads=256,fileselect=seq<br />rd=rd2,fwd=fwd2,elapsed=300,interval=1,fwdrate=max,format=no
{code}<br />5. Writes pause after a while for slightly less than a minute. This timeout occurs many times over a long-duration run.
{code}<br />Timest Inter ReqstdOps mb/sec xfer<br />01:33.1 31 64192 250.7 4096<br />01:34.1 32 61601 240.6 4096<br />01:35.1 33 64745 252.9 4096<br />01:36.1 34 64670 252.6 4096<br />01:37.1 35 62773 245.2 4096<br />01:38.1 36 52801 206.2 4096<br />01:39.1 37 60966 238.1 4096<br />01:40.0 38 63795 249.2 4096<br />01:41.0 39 60914 237.9 4096<br />01:42.1 40 63908 249.6 4096<br />01:43.0 41 64171 250.6 4096<br />01:44.1 42 64085 250.3 4096<br />01:45.0 43 57086 222.9 4096<br />01:46.0 44 68964 269.3 4096<br />01:47.0 45 62038 242.3 4096<br />01:48.0 46 63627 248.5 4096<br />01:49.0 47 61936 241.9 4096<br />01:50.0 48 66454 259.5 4096<br />01:51.0 49 62026 242.2 4096<br />01:52.0 50 58361 227.9 4096<br />01:53.1 51 66363 259.2 4096<br />01:54.0 52 64689 252.7 4096<br />01:55.1 53 67096 262.1 4096<br />01:56.0 54 64224 250.8 4096<br />01:57.0 55 64018 250 4096<br />01:58.1 56 61333 239.5 4096<br />01:59.0 57 65359 255.3 4096<br />02:00.0 58 52828 206.3 4096<br />02:01.0 59 63845 249.3 4096<br />02:02.0 60 62656 244.7 4096</p>
<p>Timest Inter ReqstdOps mb/sec xfer<br />02:03.0 61 65246 254.8 4096<br />02:04.0 62 61839 241.5 4096<br />02:05.0 63 63338 247.4 4096<br />02:06.0 64 57838 225.9 4096<br />02:07.0 65 61039 238.4 4096<br />02:08.0 66 61246 239.2 4096<br />02:09.0 67 63624 248.5 4096<br />02:10.0 68 64887 253.4 4096<br />02:11.0 69 59726 233.3 4096<br />02:12.0 70 65632 256.3 4096<br />02:13.0 71 2714 10.6 4096<br />02:14.0 72 0 0 0<br />02:15.0 73 0 0 0<br />02:16.0 74 0 0 0<br />02:17.0 75 0 0 0<br />02:18.0 76 0 0 0<br />02:19.0 77 0 0 0<br />02:20.0 78 0 0 0<br />02:21.0 79 0 0 0<br />02:22.0 80 0 0 0<br />02:23.0 81 0 0 0<br />02:24.0 82 0 0 0<br />02:25.0 83 0 0 0<br />02:26.0 84 0 0 0<br />02:27.0 85 0 0 0<br />02:28.0 86 0 0 0<br />02:29.0 87 0 0 0<br />02:30.0 88 0 0 0<br />02:31.0 89 0 0 0<br />02:32.0 90 0 0 0</p>
<p>Timest Inter ReqstdOps mb/sec xfer<br />02:33.0 91 0 0 0<br />02:34.0 92 0 0 0<br />02:35.0 93 0 0 0<br />02:36.0 94 0 0 0<br />02:37.0 95 0 0 0<br />02:38.0 96 0 0 0<br />02:39.0 97 0 0 0<br />02:40.0 98 0 0 0<br />02:41.0 99 0 0 0<br />02:42.0 100 0 0 0<br />02:43.0 101 0 0 0<br />02:44.0 102 0 0 0<br />02:45.0 103 0 0 0<br />02:46.0 104 0 0 0<br />02:47.0 105 0 0 0<br />02:48.0 106 0 0 0<br />02:49.0 107 0 0 0<br />02:50.0 108 0 0 0<br />02:51.0 109 0 0 0<br />02:52.0 110 0 0 0<br />02:53.0 111 0 0 0<br />02:54.0 112 0 0 0<br />02:55.0 113 0 0 0<br />02:56.0 114 0 0 0<br />02:57.0 115 0 0 0<br />02:58.0 116 0 0 0<br />02:59.0 117 57906 226.2 4096<br />03:00.0 118 64699 252.7 4096<br />03:01.0 119 63075 246.3 4096<br />03:02.0 120 63674 248.7 4096</p>
<p>Timest Inter ReqstdOps mb/sec xfer<br />03:03.1 121 63221 246.9 4096<br />03:04.0 122 55609 217.2 4096<br />03:05.0 123 60163 235 4096<br />03:06.0 124 61651 240.8 4096<br />03:07.0 125 63223 246.9 4096<br />03:08.0 126 65191 254.6 4096<br />03:09.0 127 60244 235.3 4096<br />03:10.0 128 66336 259.1 4096<br />03:11.0 129 64951 253.7 4096<br />03:12.0 130 61045 238.4 4096<br />03:13.1 131 65566 256.1 4096<br />03:14.0 132 63936 249.7 4096</p>
<p>{code}</p>
<p>6. The MDS logs were clean.</p>
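<p>The pause in the interval report above can be located programmatically. A minimal sketch (not part of the original report; field positions are assumed from the "Timest Inter ReqstdOps mb/sec xfer" layout shown):</p>

```python
def stall_windows(lines, min_len=5):
    """Find runs of consecutive 1-second intervals with zero requested ops.

    `lines` are rows from a vdbench interval report such as
    "02:14.0 72 0 0 0" (timestamp, interval, requested ops, MB/s, xfer).
    Returns (first_interval, last_interval, length) for each run of at
    least `min_len` idle intervals; the ~46-interval run in the report
    above corresponds to the ~46-second IO pause.
    """
    runs, start, prev = [], None, None
    for line in lines:
        parts = line.split()
        if len(parts) < 3 or not parts[1].isdigit():
            continue  # skip header or malformed lines
        interval, ops = int(parts[1]), int(parts[2])
        if ops == 0:
            if start is None:
                start = interval
            prev = interval
        elif start is not None:
            if prev - start + 1 >= min_len:
                runs.append((start, prev, prev - start + 1))
            start = None
    if start is not None and prev - start + 1 >= min_len:
        runs.append((start, prev, prev - start + 1))
    return runs
```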
Bug #17097 (Resolved): mailing list archive links are broken
https://tracker.ceph.com/issues/17097
2016-08-22T22:52:18Z
Zack Cerza
<p><a class="external" href="http://ceph.com/resources/mailing-list-irc/">http://ceph.com/resources/mailing-list-irc/</a></p>
<p>Since GMANE died, we don't have working links to archives from the above page. A good alternative might be marc.info; an example link: <a class="external" href="http://marc.info/?l=ceph-devel">http://marc.info/?l=ceph-devel</a></p>
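<p>Regenerating those archive links is mechanical; a minimal sketch (a hypothetical helper, not existing ceph.com tooling), using the marc.info list-name addressing shown in the example link:</p>

```python
import urllib.parse

def marc_archive_url(list_name):
    """Build a marc.info archive URL for a mailing list.

    A hypothetical helper for regenerating the archive links on
    ceph.com/resources/mailing-list-irc/ after GMANE went away.
    marc.info addresses a list by name via the `l` query parameter,
    e.g. http://marc.info/?l=ceph-devel. Whether every Ceph list is
    mirrored there would need to be checked per list.
    """
    return "http://marc.info/?l=" + urllib.parse.quote(list_name)
```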
<p>P.S. if tracking ceph.com issues here is a bad idea, I can delete this new redmine project :)</p>