Ceph : Issues
https://tracker.ceph.com/
https://tracker.ceph.com/favicon.ico
2011-05-25T23:45:15Z
Ceph
Redmine
Linux kernel client - Tasks #1112 (Resolved): check all igrab at ceph-client,remove deadlock : sp...
https://tracker.ceph.com/issues/1112
2011-05-25T23:45:15Z
changping Wu
<p>Hi, the igrab() function already takes spin_lock(&inode->i_lock) internally, so code of the following form:</p>
<p>spin_lock(&inode->i_lock);<br />...<br />igrab(inode);<br />...<br />spin_unlock(&inode->i_lock);</p>
<p>deadlocks, because i_lock is acquired a second time while already held. All igrab() call sites in ceph-client should be checked for this pattern.</p>
<p>Jeff Wu</p>
<p>linux-2.6.39:<br /><a class="external" href="http://lxr.linux.no/#linux+v2.6.39/fs/inode.c#L1144">http://lxr.linux.no/#linux+v2.6.39/fs/inode.c#L1144</a></p>
<pre><code>1143
1144 struct inode *igrab(struct inode *inode)
1145 {
1146         spin_lock(&inode->i_lock);
1147         if (!(inode->i_state & (I_FREEING|I_WILL_FREE))) {
1148                 __iget(inode);
1149                 spin_unlock(&inode->i_lock);
1150         } else {
1151                 spin_unlock(&inode->i_lock);
1152                 /*
1153                  * Handle the case where s_op->clear_inode is not been
1154                  * called yet, and somebody is calling igrab
1155                  * while the inode is getting freed.
1156                  */
1157                 inode = NULL;
1158         }
1159         return inode;
1160 }
1161 EXPORT_SYMBOL(igrab);
</code></pre>
Linux kernel client - Bug #1096 (Resolved): LTP fsstress test always hang ,ceph 0.27.1+linux-2.6....
https://tracker.ceph.com/issues/1096
2011-05-18T02:46:24Z
changping Wu
<p>Hi,<br />I am running the LTP fsstress test against ceph 0.27.1 + linux-2.6.38.6 + ubuntu 10.10:</p>
<p>$modprobe libceph<br />$modprobe ceph<br />$mount -t ceph monaddr:6789:/ /mnt/ceph<br />$./fsstress -d /mnt/ceph/mdstest -f write=freq -l 100 -n 10000 -p 30 -v -S</p>
<p>At some point during the run, the fsstress process always hangs;<br />it looks like a deadlock, or like it is blocked from creating any further dirs/files.<br />It never resumes, and we have to press ctrl+c (^c) to exit it.</p>
<p>Jeff Wu</p>
<p>=================================<br />logs:<br />1. test case 1<br />.....................................<br />4/1685: mknod d4/d15/d7d/df4/d1c/d27/d34/d4b/d5f/dfb/d1fb/c26f 0<br />4/1686: mknod d4/d15/d7d/df4/d1c/d64/d6b/d1ff/c270 0<br />4/1687: unlink d4/d14/d100/dcb/d1c8/c1f1 0<br />4/1688: read d4/d15/d7d/df4/d1c/d27/d71/f1c5 [2619005,59869] 0<br />4/1689: mkdir d4/d15/d7d/df4/d1bb/d271 0</p>
<p>~~~ hangs at this step; aborted with ctrl+c (^c).</p>
<p>2. test case 2:<br />.........................<br /> 1/6748: creat d0/d33/d3f/d17c/d287/d9b3/f9e3 x:0 0 0<br /> 1/6749: link d0/d7/d18/d4f/d97/d126/d1e7/d93f/d68f/f3b0 d0/d7/d1dc/d346/d450/d9b/d179/f9e4 0<br /> 1/6750: rename d0/d7/d151/dae/d1ce/d377/f69e to d0/d7/d18/dec/d1e0/d23a/d298/d7b4/f9e5 0</p>
<pre><code>~~~ hangs at this step; aborted with ctrl+c (^c).</code></pre>
<p>3. test case 3:<br />................................<br /> 24/5902: dwrite d4/d11/d78/d8b/d8f/ded/d114/d194/d35/d1cb/d2fe/f426 [0,4194304] 0<br /> 24/5903: readlink d4/d11/d78/d8b/d8f/ded/d114/d153/d417/l760 0<br /> 24/5904: rmdir d4/d5c/d446 39</p>
<p>~~~ hangs at this step; aborted with ctrl+c (^c).</p>
Ceph - Bug #1095 (Closed): run "rados bench 10 seq -p data" print "error during benchmark: -5"
https://tracker.ceph.com/issues/1095
2011-05-17T23:12:22Z
changping Wu
<p>Hi,<br />ceph 0.27:<br />running "rados bench 10 seq -p data" sometimes prints "error during benchmark: -5".</p>
<p>=============================================================<br />logs:</p>
<p>1. rados bench 10 write -p data</p>
<pre><code>Maintaining 16 concurrent writes of 4194304 bytes for at least 10 seconds.
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
    0       0         0         0         0         0         -         0
    1      16        24         8   31.9944        32  0.989244  0.731578
    2      16        36        20   39.9943        48   1.87479  0.911121
    3      16        56        40   53.3266        80  0.986187  0.984835
    4      16        74        58   57.9931        72  0.680331  0.931541
    5      16        89        73   58.3932        60  0.445319  0.945591
    6      16       106        90   59.9931        68    1.2443  0.986629
    7      16       120       104   59.4219        56   1.34696  0.979426
    8      16       136       120   59.9933        64  0.909194  0.977059
    9      16       155       139    61.771        76  0.536061   0.97395
   10      16       172       156   62.3932        68  0.562165  0.968041
   11      10       173       163   59.2663        28   1.04106  0.980805
Total time run:        11.041616
Total writes made:     173
Write size:            4194304
Bandwidth (MB/sec):    62.672
Average Latency:       1.01859
Max latency:           2.4248
Min latency:           0.274832
</code></pre>
<p>2. rados bench 10 seq -p data</p>
<pre><code>  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
    0       0         0         0         0         0         -         0
    1      16        36        20    79.986        80  0.634338  0.514692
    2      16        55        39   77.9894        76  0.123429  0.598664
    3      16        76        60   79.9899        84  0.395291   0.62393
    4      16        97        81   80.9903        84  0.246663  0.665094
    5      16       119       103   82.3906        88   1.01186  0.682504
    6      16       139       123    81.991        80  0.835631  0.689855
    7      16       161       145   82.8482        88  0.340252  0.694499
read got -2
error during benchmark: -5
error 5: Input/output error
</code></pre>
Ceph - Bug #1029 (Rejected): ceph 0.27 run gceph, printf "Caught signal (Segmentation fault) " a...
https://tracker.ceph.com/issues/1029
2011-04-27T01:18:03Z
changping Wu
<p>Hi,<br />ceph 0.27:<br />I copied ceph.conf from a mon host into the client host's /etc/ceph folder,<br />then ran "gceph", which printed "Caught signal (Segmentation fault)":</p>
<p>Jeff.</p>
<p>logs:</p>
<p>root@cephhost:/etc/ceph# gceph</p>
<p>(gceph:25607): GLib-GObject-WARNING **: value "-13" of type `gint' is invalid or out of range for property `wrap-width' of type `gint'</p>
<p>(gceph:25607): GLib-GObject-WARNING **: value "-13" of type `gint' is invalid or out of range for property `width' of type `gint'</p>
<p>(gceph:25607): Gtk-CRITICAL **: gtk_list_store_get_value: assertion `VALID_ITER (iter, list_store)' failed</p>
<p>(gceph:25607): GLib-GObject-CRITICAL **: g_object_set_property: assertion `G_IS_VALUE (value)' failed</p>
<p>(gceph:25607): GLib-GObject-CRITICAL **: g_value_unset: assertion `G_IS_VALUE (value)' failed</p>
<p>(gceph:25607): Gtk-CRITICAL **: gtk_list_store_get_value: assertion `VALID_ITER (iter, list_store)' failed</p>
<p>(gceph:25607): GLib-GObject-CRITICAL **: g_object_set_property: assertion `G_IS_VALUE (value)' failed</p>
<p>(gceph:25607): GLib-GObject-CRITICAL **: g_value_unset: assertion `G_IS_VALUE (value)' failed</p>
<p>(gceph:25607): GLib-GObject-WARNING **: value "-12" of type `gint' is invalid or out of range for property `wrap-width' of type `gint'</p>
(gceph:25607): GLib-GObject-WARNING **: value "-12" of type `gint' is invalid or out of range for property `width' of type `gint'
<p>*** Caught signal (Segmentation fault) ***<br /> in thread 0x7f5ed013f900<br /> ceph version 0.27 (<a class="changeset" title="v0.27" href="https://tracker.ceph.com/projects/ceph/repository/revisions/793034c62c8e9ffab4af675ca97135fd1b193c9c">793034c62c8e9ffab4af675ca97135fd1b193c9c</a>)</p>
<pre><code> 1: gceph() [0x502c5e]
 2: (()+0xfb40) [0x7f5ecfb24b40]
 3: (()+0x10dec8) [0x7f5ecc105ec8]
 4: (()+0x10e4a4) [0x7f5ecc1064a4]
 5: (()+0x1d626) [0x7f5ecbd66626]
 6: (g_main_context_dispatch()+0x1f2) [0x7f5ece4a4342]
 7: (()+0x442a8) [0x7f5ece4a82a8]
 8: (g_main_loop_run()+0x195) [0x7f5ece4a87b5]
 9: (gtk_main()+0xa7) [0x7f5ecc12c3e7]
 10: (Gtk::Main::run(Gtk::Window&)+0x10b) [0x7f5ecf0c29ab]
 11: (run_gui(int, char**)+0x39d) [0x4969ed]
 12: (main()+0x2fc) [0x4806bc]
 13: (__libc_start_main()+0xfe) [0x7f5ecd960d8e]
 14: gceph() [0x446e49]
</code></pre>
RADOS - Bug #1017 (Closed): ceph 0.26 ,mkcephfs --crushmap crush.new ,wait for very long time,md...
https://tracker.ceph.com/issues/1017
2011-04-19T00:18:44Z
changping Wu
<p>Hi,<br />ceph 0.26 + btrfs + ubuntu 10.04 x86_64:<br />we want to use a custom crushmap with mkcephfs, but even after waiting a very long time, the mds is still in the "creating" state.</p>
<p>reproduction steps:<br />1. $mkcephfs -c /etc/ceph/ceph.conf -a -v -k adminkeyring --crushmap crush.new<br />2. $init-ceph -c /etc/ceph/ceph.conf -a -v start</p>
<p>root@ubuntu-mon0:/etc/ceph# ceph -s<br />2011-04-19 13:39:43.442162 pg v10: 1584 pgs: 1584 creating; 0 KB data, 448 KB used, 1200 GB / 1200 GB avail<br />2011-04-19 13:39:43.443851 mds e4: 1/1/1 up {0=up:creating}, 1 up:standby<br />2011-04-19 13:39:43.443899 osd e6: 6 osds: 6 up, 6 in<br />2011-04-19 13:39:43.443962 log 2011-04-19 13:38:42.159058 mon0 172.16.35.10:6789/0 10 : [INF] osd5 172.16.35.77:6803/6383 boot<br />2011-04-19 13:39:43.444026 mon e1: 3 mons at {0=172.16.35.10:6789/0,1=172.16.35.10:6790/0,2=172.16.35.10:6791/0}</p>
RADOS - Bug #1016 (Closed): ceph 0.26,crushmap change,mount fail.
https://tracker.ceph.com/issues/1016
2011-04-19T00:12:07Z
changping Wu
<p>Hi,</p>
<p>ceph 0.26: after changing the crushmap, mounting ceph from a client fails. ceph.conf, crush.new.txt and crush.origin.txt are attached.</p>
<p>It could take the following steps to reproduce this issue:</p>
<p>$mkcephfs -c /etc/ceph/ceph.conf -a -v -k /etc/ceph/adminkeyring<br />$init-ceph -c /etc/ceph/ceph.conf -a -v start<br />wait for the mds state to change to "mds e5: 1/1/1 up {0=up:active}, 1 up:standby",<br />then:</p>
<p>$crushtool -c crush.new.txt -o crush.new<br />$ceph osd setcrushmap -i crush.new</p>
<p>root@ubuntu-mon0:/etc/ceph/crushmap# ceph -s<br />2011-04-19 13:13:09.012541 pg v55: 1584 pgs: 1584 active+clean; 22 KB data, 27720 KB used, 1200 GB / 1200 GB avail<br />2011-04-19 13:13:09.014376 mds e5: 1/1/1 up {0=up:active}, 1 up:standby<br />2011-04-19 13:13:09.014404 osd e11: 6 osds: 6 up, 6 in<br />2011-04-19 13:13:09.014449 log 2011-04-19 22:12:25.099880 osd3 172.16.35.76:6803/18151 8 : [INF] 0.19 scrub ok<br />2011-04-19 13:13:09.014502 mon e1: 3 mons at {0=172.16.35.10:6789/0,1=172.16.35.10:6790/0,2=172.16.35.10:6791/0}</p>
<p>At a client,<br />with libceph.ko and ceph.ko loaded,<br />type:<br />$ sudo mount -t ceph 172.16.35.10:6789:/ /mnt/ceph/ -o mount_timeout=5<br />but it prints:<br />mount error 5 = Input/output error</p>
<p>Jeff Wu</p>
Ceph - Bug #1015 (Rejected): ceph 0.26, mkcephfs : ERROR: error creating empty object store
https://tracker.ceph.com/issues/1015
2011-04-18T23:51:25Z
changping Wu
<p>Hi,<br />if the osdN directory has not been created manually beforehand, mkcephfs fails.<br />1. ceph 0.26 + btrfs + ubuntu 10.04 x86_64 + ceph.conf (attached);<br />2. mkcephfs -c /etc/ceph/ceph.conf -a -v -k adminkeyring</p>
<p>3.logs:</p>
<p>root@ubuntu-mon0:/etc/ceph# mkcephfs -c /etc/ceph/ceph.conf -a -v -k adminkeyring <br />temp dir is /tmp/mkcephfs.bfucWGAKhm<br />preparing monmap in /tmp/mkcephfs.bfucWGAKhm/monmap<br />/usr/bin/monmaptool --create --clobber --add 0 172.16.35.10:6789 --add 1 172.16.35.10:6790 --add 2 172.16.35.10:6791 --print /tmp/mkcephfs.bfucWGAKhm/monmap<br />/usr/bin/monmaptool: monmap file /tmp/mkcephfs.bfucWGAKhm/monmap<br />/usr/bin/monmaptool: generated fsid be1fd1b7-e585-bcd9-7ada-c078540f67ae<br />epoch 1<br />fsid be1fd1b7-e585-bcd9-7ada-c078540f67ae<br />last_changed 2011-04-19 13:44:52.532136<br />created 2011-04-19 13:44:52.532136<br />0: 172.16.35.10:6789/0 mon.0<br />1: 172.16.35.10:6790/0 mon.1<br />2: 172.16.35.10:6791/0 mon.2<br />/usr/bin/monmaptool: writing epoch 1 to /tmp/mkcephfs.bfucWGAKhm/monmap (3 monitors)<br />/usr/bin/cconf -c /etc/ceph/ceph.conf -n osd.0 "user" <br />/usr/bin/cconf -c /etc/ceph/ceph.conf -n osd.0 "ssh path"
=== osd.0 === <br />pushing conf and monmap to ubuntu-osd0<br />--- ssh ubuntu-osd0 "cd /etc/ceph ; ulimit -c unlimited ; mkdir -p /tmp/mkcephfs.bfucWGAKhm" <br />--- ssh ubuntu-osd0 "cd /etc/ceph ; ulimit -c unlimited ; /sbin/mkcephfs -d /tmp/mkcephfs.bfucWGAKhm --init-daemon osd.0"
** WARNING: Ceph is still under heavy development, and is only suitable for **<br />** testing and review. Do not trust it with important data. **<br />2011-04-19 22:43:04.537607 7fa4cbe50720 ** ERROR: error creating empty object store in /mnt/data/osd0/osd0: error 2: No such file or directory<br />failed: 'ssh ubuntu-osd0 /sbin/mkcephfs -d /tmp/mkcephfs.bfucWGAKhm --init-daemon osd.0'<br />root@ubuntu-mon0:/etc/ceph#</p>
<p>Jeff Wu.</p>
Linux kernel client - Bug #909 (Can't reproduce): ceph-client+ceph v0.25.1,iozone test, "libceph:...
https://tracker.ceph.com/issues/909
2011-03-21T22:35:42Z
changping Wu
<p>Hi,<br />I am running an iozone test against<br />ceph v0.25.1 + ceph-client master, commit 61e062a18f10f57fb507f4d883e7d1fce898a8a7,<br />and the kernel prints many "osd timeout" messages, so performance is very slow. This issue is very easy to reproduce.</p>
<p>The steps to reproduce are below:</p>
<p>1. build a ceph 0.25.1 server with the attached ceph.conf + ext4/btrfs;<br />2. check out ceph-client commit 61e062a18f10f57fb507f4d883e7d1fce898a8a7, then make menuconfig && make -j10 && make modules_install && make install && reboot;<br />3. mount.ceph monaddr:6789:/ /mnt/ceph -o mount_timeout=10<br />4. run iozone:<br />$iozone -z -c -e -a -n 200M -g 2000M -i 0 -i 1 -i 2 -f /mnt/ceph/fio -Rb ./iozone.xls</p>
<p>------------------------<br />I note that this "osd timeout" issue exists in every version of ceph-client[-standalone] under stress testing;<br />it decreases R/W performance and sometimes makes the ceph client host hang.</p>
<p>I have some ideas about this issue; maybe they are useful for fixing the bug:</p>
<p>1) prolong the timeout;<br />2) add a timer for each R/W request, not per osd;<br />3) if the R/W load is too heavy, abort the requests:<br /> for instance, re-send timed-out requests, and if they still fail, abort all outstanding R/W requests,<br /> so that the ceph client system does not become very slow or hang.</p>
<p>Jeff Wu</p>
<p>-----------------------<br />logs:</p>
<p>ubuntu-osd1:~$ ceph -s<br />2011-03-22 21:07:00.516925 pg v3103: 1056 pgs: 1056 active+clean; 1001 MB data, 4940 MB used, 182 GB / 196 GB avail<br />2011-03-22 21:07:00.518335 mds e11: 1/1/1 up {0=up:active}, 1 up:standby<br />2011-03-22 21:07:00.518360 osd e44: 4 osds: 4 up, 4 in<br />2011-03-22 21:07:00.518398 log 2011-03-22 00:45:35.004304 osd3 172.16.10.66:6803/6152 89 : [INF] 3.1p3 scrub ok<br />2011-03-22 21:07:00.518458 class rbd (v1.3 [x86-64])<br />2011-03-22 21:07:00.518472 mon e1: 3 mons at {0=172.16.10.171:6789/0,1=172.16.10.171:6790/0,2=172.16.10.171:6791/0}</p>
<ol>
<li>tail -f /var/log/messages<br />Mar 22 10:46:56 ubuntu-client kernel: [ 948.758323] libceph: tid 115358 timed out on osd0, will reset osd<br />Mar 22 10:46:56 ubuntu-client kernel: [ 948.767389] libceph: tid 111238 timed out on osd3, will reset osd<br />Mar 22 10:47:56 ubuntu-client kernel: [ 1008.890012] libceph: tid 128526 timed out on osd3, will reset osd<br />Mar 22 10:47:56 ubuntu-client kernel: [ 1008.898780] libceph: tid 128841 timed out on osd0, will reset osd<br />Mar 22 10:47:56 ubuntu-client kernel: [ 1008.907657] libceph: tid 128637 timed out on osd1, will reset osd<br />Mar 22 10:47:56 ubuntu-client kernel: [ 1008.916531] libceph: tid 128929 timed out on osd2, will reset osd<br />Mar 22 10:48:56 ubuntu-client kernel: [ 1069.040017] libceph: tid 125017 timed out on osd2, will reset osd<br />Mar 22 10:48:56 ubuntu-client kernel: [ 1069.050206] libceph: tid 123203 timed out on osd1, will reset osd<br />Mar 22 10:48:56 ubuntu-client kernel: [ 1069.060196] libceph: tid 109631 timed out on osd0, will reset osd<br />Mar 22 10:48:56 ubuntu-client kernel: [ 1069.070018] libceph: tid 101673 timed out on osd3, will reset osd<br />Mar 22 10:49:56 ubuntu-client kernel: [ 1129.190013] libceph: tid 143305 timed out on osd3, will reset osd<br />Mar 22 10:49:56 ubuntu-client kernel: [ 1129.200096] libceph: tid 143505 timed out on osd0, will reset osd<br />Mar 22 10:49:56 ubuntu-client kernel: [ 1129.209906] libceph: tid 142874 timed out on osd1, will reset osd<br />Mar 22 10:49:56 ubuntu-client kernel: [ 1129.219750] libceph: tid 143543 timed out on osd2, will reset osd<br />Mar 22 10:50:57 ubuntu-client kernel: [ 1189.340010] libceph: tid 139507 timed out on osd2, will reset osd<br />Mar 22 10:50:57 ubuntu-client kernel: [ 1189.350361] libceph: tid 137335 timed out on osd1, will reset osd<br />Mar 22 10:50:57 ubuntu-client kernel: [ 1189.360568] libceph: tid 137986 timed out on osd3, will reset osd<br />Mar 22 10:50:57 ubuntu-client kernel: [ 1189.370585] libceph: 
tid 135340 timed out on osd0, will reset osd<br />Mar 22 10:51:57 ubuntu-client kernel: [ 1249.500010] libceph: tid 153039 timed out on osd0, will reset osd<br />Mar 22 10:51:57 ubuntu-client kernel: [ 1249.509849] libceph: tid 152511 timed out on osd3, will reset osd<br />Mar 22 10:51:57 ubuntu-client kernel: [ 1249.519782] libceph: tid 152689 timed out on osd1, will reset osd<br />Mar 22 10:51:57 ubuntu-client kernel: [ 1249.529640] libceph: tid 153132 timed out on osd2, will reset osd<br />Mar 22 10:52:57 ubuntu-client kernel: [ 1309.650012] libceph: tid 124807 timed out on osd2, will reset osd<br />Mar 22 10:52:57 ubuntu-client kernel: [ 1309.660397] libceph: tid 148584 timed out on osd1, will reset osd<br />Mar 22 10:52:57 ubuntu-client kernel: [ 1309.670513] libceph: tid 147318 timed out on osd3, will reset osd<br />Mar 22 10:52:57 ubuntu-client kernel: [ 1309.680504] libceph: tid 144286 timed out on osd0, will reset osd<br />Mar 22 10:53:57 ubuntu-client kernel: [ 1369.810011] libceph: tid 165429 timed out on osd0, will reset osd<br />Mar 22 10:53:57 ubuntu-client kernel: [ 1369.819820] libceph: tid 165246 timed out on osd3, will reset osd<br />Mar 22 10:53:57 ubuntu-client kernel: [ 1369.829895] libceph: tid 165310 timed out on osd1, will reset osd<br />Mar 22 10:53:57 ubuntu-client kernel: [ 1369.839895] libceph: tid 165454 timed out on osd2, will reset osd<br />Mar 22 10:54:57 ubuntu-client kernel: [ 1429.960012] libceph: tid 163748 timed out on osd2, will reset osd<br />Mar 22 10:54:57 ubuntu-client kernel: [ 1429.970315] libceph: tid 161754 timed out on osd1, will reset osd<br />Mar 22 10:54:57 ubuntu-client kernel: [ 1429.980509] libceph: tid 162641 timed out on osd3, will reset osd<br />Mar 22 10:54:57 ubuntu-client kernel: [ 1429.990558] libceph: tid 156104 timed out on osd0, will reset osd<br />Mar 22 10:55:57 ubuntu-client kernel: [ 1490.120012] libceph: tid 172157 timed out on osd0, will reset osd<br />Mar 22 10:55:57 ubuntu-client kernel: [ 
1490.129949] libceph: tid 171732 timed out on osd3, will reset osd<br />Mar 22 10:55:57 ubuntu-client kernel: [ 1490.139926] libceph: tid 171855 timed out on osd1, will reset osd<br />Mar 22 10:55:57 ubuntu-client kernel: [ 1490.149810] libceph: tid 172228 timed out on osd2, will reset osd<br />Mar 22 10:56:57 ubuntu-client kernel: [ 1550.270012] libceph: tid 167512 timed out on osd2, will reset osd<br />Mar 22 10:56:57 ubuntu-client kernel: [ 1550.279539] libceph: tid 160619 timed out on osd1, will reset osd<br />Mar 22 10:56:57 ubuntu-client kernel: [ 1550.289065] libceph: tid 167756 timed out on osd3, will reset osd<br />Mar 22 10:57:02 ubuntu-client kernel: [ 1555.300014] libceph: tid 172545 timed out on osd0, will reset osd<br />Mar 22 10:57:58 ubuntu-client kernel: [ 1610.420012] libceph: tid 175697 timed out on osd3, will reset osd<br />Mar 22 10:57:58 ubuntu-client kernel: [ 1610.428973] libceph: tid 175750 timed out on osd1, will reset osd<br />Mar 22 10:57:58 ubuntu-client kernel: [ 1610.438061] libceph: tid 175335 timed out on osd2, will reset osd<br />Mar 22 10:57:58 ubuntu-client kernel: [ 1611.011252] libceph: skipping osd2 172.16.10.67:6803 seq 1 expected 2<br />Mar 22 10:58:58 ubuntu-client kernel: [ 1670.560011] libceph: tid 175335 timed out on osd2, will reset osd<br />Mar 22 10:58:58 ubuntu-client kernel: [ 1670.568524] libceph: tid 154514 timed out on osd1, will reset osd<br />Mar 22 10:58:58 ubuntu-client kernel: [ 1670.577133] libceph: tid 155310 timed out on osd3, will reset osd<br />Mar 22 10:59:58 ubuntu-client kernel: [ 1730.700013] libceph: tid 134814 timed out on osd3, will reset osd<br />Mar 22 10:59:58 ubuntu-client kernel: [ 1730.707931] libceph: tid 145838 timed out on osd1, will reset osd<br />Mar 22 10:59:58 ubuntu-client kernel: [ 1730.716044] libceph: tid 107651 timed out on osd2, will reset osd<br />Mar 22 11:00:58 ubuntu-client kernel: [ 1790.840013] libceph: tid 105967 timed out on osd2, will reset osd<br />Mar 22 11:00:58 
ubuntu-client kernel: [ 1790.847502] libceph: tid 109902 timed out on osd1, will reset osd<br />Mar 22 11:00:58 ubuntu-client kernel: [ 1790.855090] libceph: tid 97628 timed out on osd3, will reset osd<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.685669] iozone_x86_64 D ffff880076524958 0 1057 1052 0x00000000<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.693141] ffff880076447cd8 0000000000000086 ffff880076447c38 ffffffff00000000<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.700991] 0000000000013c40 ffff8800765245c0 ffff880076524958 ffff880076447fd8<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.708830] ffff880076524960 0000000000013c40 ffff880076446010 0000000000013c40<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.716694] Call Trace:<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.719582] [<ffffffff810f8e40>] ? sync_page+0x0/0x60<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.725192] [<ffffffff8158b204>] io_schedule+0x44/0x60<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.730867] [<ffffffff810f8e85>] sync_page+0x45/0x60<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.736374] [<ffffffff8158b83f>] __wait_on_bit+0x5f/0x90<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.742237] [<ffffffff810f9053>] wait_on_page_bit+0x73/0x80<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.748360] [<ffffffff81082a20>] ? wake_bit_function+0x0/0x40<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.754677] [<ffffffff81104135>] ? 
pagevec_lookup_tag+0x25/0x40<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.761169] [<ffffffff810f94a3>] filemap_fdatawait_range+0x113/0x1c0<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.768090] [<ffffffff810f9650>] filemap_write_and_wait_range+0x70/0x80<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.775292] [<ffffffff8117a4ba>] vfs_fsync_range+0x5a/0x90<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.781370] [<ffffffff8117a55c>] vfs_fsync+0x1c/0x20<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.786924] [<ffffffff8117a59a>] do_fsync+0x3a/0x60<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.792410] [<ffffffff8117a5f0>] sys_fsync+0x10/0x20<br />Mar 22 11:01:13 ubuntu-client kernel: [ 1805.797971] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b<br />Mar 22 11:01:58 ubuntu-client kernel: [ 1850.980018] libceph: tid 102000 timed out on osd1, will reset osd<br />Mar 22 11:01:58 ubuntu-client kernel: [ 1850.987255] libceph: tid 101451 timed out on osd2, will reset osd<br />Mar 22 11:02:58 ubuntu-client kernel: [ 1911.110011] libceph: tid 90126 timed out on osd2, will reset osd<br />Mar 22 11:02:58 ubuntu-client kernel: [ 1911.116834] libceph: tid 96191 timed out on osd1, will reset osd<br />Mar 22 11:03:58 ubuntu-client kernel: [ 1971.240011] libceph: tid 91928 timed out on osd1, will reset osd<br />Mar 22 11:06:12 ubuntu-client kernel: [ 2105.160013] libceph: tid 221591 timed out on osd3, will reset osd<br />Mar 22 11:06:12 ubuntu-client kernel: [ 2105.168286] libceph: tid 222121 timed out on osd1, will reset osd<br />Mar 22 11:06:12 ubuntu-client kernel: [ 2105.176594] libceph: tid 223623 timed out on osd0, will reset osd<br />Mar 22 11:06:12 ubuntu-client kernel: [ 2105.184786] libceph: tid 225922 timed out on osd2, will reset osd<br />Mar 22 11:07:12 ubuntu-client kernel: [ 2165.310011] libceph: tid 239673 timed out on osd2, will reset osd<br />Mar 22 11:07:12 ubuntu-client kernel: [ 2165.318553] libceph: tid 239682 timed out on osd0, will reset 
osd<br />Mar 22 11:07:12 ubuntu-client kernel: [ 2165.327226] libceph: tid 239691 timed out on osd1, will reset osd<br />Mar 22 11:07:13 ubuntu-client kernel: [ 2165.335742] libceph: tid 239690 timed out on osd3, will reset osd<br />Mar 22 11:08:13 ubuntu-client kernel: [ 2225.460011] libceph: tid 245949 timed out on osd3, will reset osd<br />Mar 22 11:08:13 ubuntu-client kernel: [ 2225.468123] libceph: tid 245569 timed out on osd1, will reset osd<br />Mar 22 11:08:13 ubuntu-client kernel: [ 2225.476265] libceph: tid 245630 timed out on osd0, will reset osd<br />Mar 22 11:08:13 ubuntu-client kernel: [ 2225.484338] libceph: tid 245911 timed out on osd2, will reset osd<br />Mar 22 11:09:13 ubuntu-client kernel: [ 2285.610011] libceph: tid 247438 timed out on osd2, will reset osd<br />Mar 22 11:09:13 ubuntu-client kernel: [ 2285.618322] libceph: tid 247342 timed out on osd0, will reset osd<br />Mar 22 11:09:13 ubuntu-client kernel: [ 2285.626759] libceph: tid 247444 timed out on osd3, will reset osd<br />Mar 22 11:09:23 ubuntu-client kernel: [ 2295.650014] libceph: tid 247445 timed out on osd1, will reset osd<br />Mar 22 11:10:13 ubuntu-client kernel: [ 2345.750013] libceph: tid 258786 timed out on osd3, will reset osd<br />Mar 22 11:10:13 ubuntu-client kernel: [ 2345.758134] libceph: tid 257932 timed out on osd0, will reset osd<br />Mar 22 11:10:13 ubuntu-client kernel: [ 2345.766403] libceph: tid 258682 timed out on osd2, will reset osd<br />Mar 22 11:11:13 ubuntu-client kernel: [ 2405.890010] libceph: tid 253515 timed out on osd2, will reset osd<br />Mar 22 11:11:13 ubuntu-client kernel: [ 2405.897092] libceph: tid 250390 timed out on osd0, will reset osd<br />Mar 22 11:11:13 ubuntu-client kernel: [ 2405.904264] libceph: tid 253241 timed out on osd3, will reset osd<br />Mar 22 11:12:13 ubuntu-client kernel: [ 2466.030011] libceph: tid 236134 timed out on osd0, will reset osd<br />Mar 22 11:15:08 ubuntu-client kernel: [ 2640.610012] libceph: tid 283943 timed out on 
osd0, will reset osd<br />Mar 22 11:15:08 ubuntu-client kernel: [ 2640.617736] libceph: tid 284038 timed out on osd2, will reset osd<br />Mar 22 11:15:08 ubuntu-client kernel: [ 2640.625531] libceph: tid 285011 timed out on osd3, will reset osd<br />Mar 22 11:15:08 ubuntu-client kernel: [ 2640.633223] libceph: tid 285058 timed out on osd1, will reset osd<br />Mar 22 11:16:08 ubuntu-client kernel: [ 2700.760012] libceph: tid 298116 timed out on osd1, will reset osd<br />Mar 22 11:16:08 ubuntu-client kernel: [ 2700.767674] libceph: tid 297734 timed out on osd3, will reset osd<br />Mar 22 11:16:08 ubuntu-client kernel: [ 2700.775455] libceph: tid 297886 timed out on osd2, will reset osd<br />Mar 22 11:16:08 ubuntu-client kernel: [ 2700.783184] libceph: tid 298168 timed out on osd0, will reset osd<br />Mar 22 11:17:08 ubuntu-client kernel: [ 2760.910011] libceph: tid 301530 timed out on osd0, will reset osd<br />Mar 22 11:17:08 ubuntu-client kernel: [ 2760.916833] libceph: tid 301462 timed out on osd2, will reset osd<br />Mar 22 11:17:08 ubuntu-client kernel: [ 2760.923551] libceph: tid 301436 timed out on osd3, will reset osd<br />Mar 22 11:19:45 ubuntu-client kernel: [ 2917.590015] libceph: tid 318599 timed out on osd2, will reset osd<br />Mar 22 11:36:22 ubuntu-client kernel: [ 3915.230011] libceph: tid 539580 timed out on osd3, will reset osd<br />Mar 22 11:36:22 ubuntu-client kernel: [ 3915.241271] libceph: tid 541777 timed out on osd2, will reset osd<br />Mar 22 11:36:22 ubuntu-client kernel: [ 3915.252391] libceph: tid 542059 timed out on osd1, will reset osd<br />Mar 22 11:36:22 ubuntu-client kernel: [ 3915.263542] libceph: tid 542445 timed out on osd0, will reset osd<br />Mar 22 11:37:23 ubuntu-client kernel: [ 3975.390014] libceph: tid 595281 timed out on osd0, will reset osd<br />Mar 22 11:37:23 ubuntu-client kernel: [ 3975.401294] libceph: tid 595274 timed out on osd1, will reset osd<br />Mar 22 11:37:23 ubuntu-client kernel: [ 3975.412529] libceph: tid 
595280 timed out on osd2, will reset osd<br />Mar 22 11:37:23 ubuntu-client kernel: [ 3975.423621] libceph: tid 595194 timed out on osd3, will reset osd<br />Mar 22 11:38:23 ubuntu-client kernel: [ 4035.550011] libceph: tid 586690 timed out on osd3, will reset osd<br />Mar 22 11:38:23 ubuntu-client kernel: [ 4035.560594] libceph: tid 591425 timed out on osd2, will reset osd<br />Mar 22 11:38:23 ubuntu-client kernel: [ 4035.571146] libceph: tid 582250 timed out on osd1, will reset osd<br />Mar 22 11:38:23 ubuntu-client kernel: [ 4035.581742] libceph: tid 592116 timed out on osd0, will reset osd<br />Mar 22 11:39:23 ubuntu-client kernel: [ 4095.710012] libceph: tid 586787 timed out on osd0, will reset osd<br />Mar 22 11:39:23 ubuntu-client kernel: [ 4095.720957] libceph: tid 581408 timed out on osd1, will reset osd<br />Mar 22 11:39:23 ubuntu-client kernel: [ 4095.731729] libceph: tid 584244 timed out on osd2, will reset osd<br />Mar 22 11:39:23 ubuntu-client kernel: [ 4095.742416] libceph: tid 583587 timed out on osd3, will reset osd<br />Mar 22 11:40:23 ubuntu-client kernel: [ 4155.870013] libceph: tid 602053 timed out on osd3, will reset osd<br />Mar 22 11:40:23 ubuntu-client kernel: [ 4155.880454] libceph: tid 602097 timed out on osd2, will reset osd<br />Mar 22 11:40:23 ubuntu-client kernel: [ 4155.890709] libceph: tid 602174 timed out on osd1, will reset osd<br />Mar 22 11:40:23 ubuntu-client kernel: [ 4155.900978] libceph: tid 602204 timed out on osd0, will reset osd<br />Mar 22 11:41:23 ubuntu-client kernel: [ 4216.030011] libceph: tid 596132 timed out on osd0, will reset osd<br />Mar 22 11:41:23 ubuntu-client kernel: [ 4216.041005] libceph: tid 570851 timed out on osd1, will reset osd<br />Mar 22 11:41:23 ubuntu-client kernel: [ 4216.051747] libceph: tid 573826 timed out on osd2, will reset osd<br />.......................................................................................................<br />Mar 22 12:02:27 ubuntu-client kernel: [ 5479.390947] 
libceph: tid 740652 timed out on osd1, will reset osd<br />Mar 22 12:02:27 ubuntu-client kernel: [ 5479.401794] libceph: tid 733253 timed out on osd0, will reset osd<br />Mar 22 12:03:02 ubuntu-client kernel: [ 5514.480013] libceph: tid 747953 timed out on osd3, will reset osd<br />Mar 22 12:03:27 ubuntu-client kernel: [ 5539.540019] libceph: tid 748153 timed out on osd0, will reset osd<br />Mar 22 12:03:27 ubuntu-client kernel: [ 5539.553914] libceph: tid 748121 timed out on osd1, will reset osd<br />Mar 22 12:03:27 ubuntu-client kernel: [ 5539.564888] libceph: tid 748025 timed out on osd2, will reset osd<br />Mar 22 12:04:27 ubuntu-client kernel: [ 5599.690011] libceph: tid 749173 timed out on osd3, will reset osd<br />Mar 22 12:04:27 ubuntu-client kernel: [ 5599.700632] libceph: tid 752137 timed out on osd2, will reset osd<br />Mar 22 12:04:27 ubuntu-client kernel: [ 5599.711256] libceph: tid 752185 timed out on osd1, will reset osd<br />Mar 22 12:04:27 ubuntu-client kernel: [ 5599.721748] libceph: tid 752199 timed out on osd0, will reset osd<br />Mar 22 12:05:27 ubuntu-client kernel: [ 5659.850014] libceph: tid 753220 timed out on osd0, will reset osd<br />Mar 22 12:05:27 ubuntu-client kernel: [ 5659.863625] libceph: tid 753198 timed out on osd2, will reset osd<br />Mar 22 12:05:37 ubuntu-client kernel: [ 5669.890012] libceph: tid 753224 timed out on osd3, will reset osd<br />Mar 22 12:05:37 ubuntu-client kernel: [ 5669.900697] libceph: tid 753619 timed out on osd1, will reset osd<br />Mar 22 12:06:27 ubuntu-client kernel: [ 5720.010022] libceph: tid 764305 timed out on osd2, will reset osd<br />Mar 22 12:06:27 ubuntu-client kernel: [ 5720.020362] libceph: tid 764307 timed out on osd0, will reset osd<br />Mar 22 12:07:27 ubuntu-client kernel: [ 5780.150013] libceph: tid 762955 timed out on osd0, will reset osd<br />Mar 22 12:07:27 ubuntu-client kernel: [ 5780.162627] libceph: tid 755339 timed out on osd2, will reset osd<br />Mar 22 12:08:27 ubuntu-client 
kernel: [ 5840.290018] libceph: tid 759879 timed out on osd0, will reset osd<br />Mar 22 12:09:28 ubuntu-client kernel: [ 5900.420012] libceph: tid 756522 timed out on osd0, will reset osd<br />Mar 22 12:10:28 ubuntu-client kernel: [ 5960.550012] libceph: tid 749745 timed out on osd0, will reset osd<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.815595] iozone_x86_64 D ffff880076524958 0 1057 1052 0x00000000<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.823045] ffff880076447cd8 0000000000000086 ffff880076447c38 ffffffff00000000<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.830861] 0000000000013c40 ffff8800765245c0 ffff880076524958 ffff880076447fd8<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.838678] ffff880076524960 0000000000013c40 ffff880076446010 0000000000013c40<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.846519] Call Trace:<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.849382] [<ffffffff810f8e40>] ? sync_page+0x0/0x60<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.854956] [<ffffffff8158b204>] io_schedule+0x44/0x60<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.860615] [<ffffffff810f8e85>] sync_page+0x45/0x60<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.866094] [<ffffffff8158b83f>] __wait_on_bit+0x5f/0x90<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.871948] [<ffffffff810f9053>] wait_on_page_bit+0x73/0x80<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.878052] [<ffffffff81082a20>] ? wake_bit_function+0x0/0x40<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.884344] [<ffffffff81104135>] ? 
pagevec_lookup_tag+0x25/0x40<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.890811] [<ffffffff810f94a3>] filemap_fdatawait_range+0x113/0x1c0<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.897710] [<ffffffff810f9650>] filemap_write_and_wait_range+0x70/0x80<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.904901] [<ffffffff8117a4ba>] vfs_fsync_range+0x5a/0x90<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.910960] [<ffffffff8117a55c>] vfs_fsync+0x1c/0x20<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.916490] [<ffffffff8117a59a>] do_fsync+0x3a/0x60<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.921951] [<ffffffff8117a5f0>] sys_fsync+0x10/0x20<br />Mar 22 12:11:13 ubuntu-client kernel: [ 6005.927488] [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b<br />Mar 22 12:11:28 ubuntu-client kernel: [ 6020.680011] libceph: tid 720380 timed out on osd0, will reset osd<br />Mar 22 12:12:28 ubuntu-client kernel: [ 6080.810012] libceph: tid 711512 timed out on osd0, will reset osd</li>
</ol>
Ceph - Tasks #889 (Resolved): librbd.cc : more int ->uint64_t
https://tracker.ceph.com/issues/889
2011-03-15T02:08:05Z
changping Wu
<p>Hi,<br />1.<br />commit: 4ee75a881ec637e2b0c5b74b16b1e44ac710707c<br />Some parameters still need to be changed from int to uint64_t,<br />for example total_read and total_write.</p>
<p>In the read_iterate function, change<br /> "int total_read = 0;"<br />to<br /> "uint64_t total_read = 0;"</p>
<p>Otherwise, exporting an rbd image larger than 2 GB still fails.</p>
<p>I have listed some of the affected functions below.</p>
<hr />
<pre><code>int read_iterate(ImageCtx *ictx, uint64_t off, size_t len,
                 int (*cb)(uint64_t, size_t, const char *, void *),
                 void *arg)
{
  int r = ictx_check(ictx);
  if (r < 0)
    return r;
  ...
  int total_read = 0;
  ...
}

int aio_read(ImageCtx *ictx, uint64_t off, size_t len,
             char *buf,
             AioCompletion *c)
{
  ...
  int total_read = 0;
  ...
}

int write(ImageCtx *ictx, uint64_t off, size_t len, const char *buf)
{
  if (!len)
    return 0;

  int r = ictx_check(ictx);
  if (r < 0)
    return r;

  int total_write = 0;
  ..............................
}</code></pre>
<p>2. A suggestion: also change the return type of some functions from "int" to "int64_t".</p>
<p>For example, change:<br />int read_iterate(ImageCtx *ictx, uint64_t off, size_t len,<br /> int (*cb)(uint64_t, size_t, const char *, void *),<br /> void *arg)<br />to<br />int64_t read_iterate(ImageCtx *ictx, uint64_t off, size_t len,<br /> int (*cb)(uint64_t, size_t, const char *, void *),<br /> void *arg)</p>
<p>The same applies to read, write, aio_read, aio_write, etc.</p>
Ceph - Support #885 (Resolved): "librados" C API support get_fs_stats
https://tracker.ceph.com/issues/885
2011-03-14T00:19:50Z
changping Wu
<p>Hi ,</p>
<p>Could you add C API support for</p>
<p>int librados::RadosClient::get_fs_stats(ceph_statfs& stats)<br />?</p>
<p>This function is needed to obtain the "total used", "total avail", and "total space" values.</p>
<p>thx.</p>
<p>Jeff.</p>
Ceph - Bug #884 (Resolved): testrados: glibc detected *** /home/ceph/ceph-server/src/.libs/lt-tes...
https://tracker.ceph.com/issues/884
2011-03-14T00:00:15Z
changping Wu
<p>Hi ,<br />reproduce steps:</p>
<p>1. ceph version: master 34cf240d70d1992263e32931031b2ba6cd497f14<br />2. In testrados.c, set rados_conf_set(cl, "mon host", "172.16.10.10");<br />3. Run ./testrados; it then prints an invalid-pointer error.</p>
<p>4. The bug appears to be in src/config.cc, line 975:</p>
<pre><code>934 int md_config_t::
935 set_val(const char *key, const char *val)
936 {
.............................
955   case OPT_STR: {
956     char **p = (char**)opt->val_ptr;
957     free(p);
958     opt->val_ptr = strdup(val);
959     return 0;
960   }</code></pre>
===================================================================<br />jeff@cephhost:~/work/ceph/ceph-server$ ./src/testrados
<ul>
<li>glibc detected *** /home/ceph/ceph-server/src/.libs/lt-testrados: free(): invalid pointer: 0x00007f3dfef4d6c0 ***
======= Backtrace: =========<br />/lib/libc.so.6(+0x774b6)[0x7f3dfe8ab4b6]<br />/lib/libc.so.6(cfree+0x73)[0x7f3dfe8b1c83]<br />/home/ceph/ceph-server/src/.libs/librados.so.2(_ZN11md_config_t7set_valEPKcS1_+0x186)[0x7f3dfecfaf56]<br />/home/ceph/ceph-server/src/.libs/lt-testrados(main+0x68)[0x4013d8]<br />/lib/libc.so.6(__libc_start_main+0xfe)[0x7f3dfe852d8e]<br />/home/ceph/ceph-server/src/.libs/lt-testrados[0x4012a9]
======= Memory map: ========<br />00400000-00403000 r-xp 00000000 08:06 2369310 /home/ceph/ceph-server/src/.libs/lt-testrados<br />00602000-00603000 r--p 00002000 08:06 2369310 /home/ceph/ceph-server/src/.libs/lt-testrados<br />00603000-00604000 rw-p 00003000 08:06 2369310 /home/ceph/ceph-server/src/.libs/lt-testrados<br />0176b000-0178c000 rw-p 00000000 00:00 0 [heap]<br />7f3df8000000-7f3df8021000 rw-p 00000000 00:00 0 <br />7f3df8021000-7f3dfc000000 ---p 00000000 00:00 0 <br />7f3dfd5b6000-7f3dfd5cb000 r-xp 00000000 08:05 2359375 /lib/libgcc_s.so.1<br />7f3dfd5cb000-7f3dfd7ca000 ---p 00015000 08:05 2359375 /lib/libgcc_s.so.1<br />7f3dfd7ca000-7f3dfd7cb000 r--p 00014000 08:05 2359375 /lib/libgcc_s.so.1<br />7f3dfd7cb000-7f3dfd7cc000 rw-p 00015000 08:05 2359375 /lib/libgcc_s.so.1<br />7f3dfd7cc000-7f3dfd84e000 r-xp 00000000 08:05 2364627 /lib/libm-2.12.1.so<br />7f3dfd84e000-7f3dfda4d000 ---p 00082000 08:05 2364627 /lib/libm-2.12.1.so<br />7f3dfda4d000-7f3dfda4e000 r--p 00081000 08:05 2364627 /lib/libm-2.12.1.so<br />7f3dfda4e000-7f3dfda4f000 rw-p 00082000 08:05 2364627 /lib/libm-2.12.1.so<br />7f3dfda4f000-7f3dfdb37000 r-xp 00000000 08:05 1969787 /usr/lib/libstdc++.so.6.0.14<br />7f3dfdb37000-7f3dfdd36000 ---p 000e8000 08:05 1969787 /usr/lib/libstdc++.so.6.0.14<br />7f3dfdd36000-7f3dfdd3e000 r--p 000e7000 08:05 1969787 /usr/lib/libstdc++.so.6.0.14<br />7f3dfdd3e000-7f3dfdd40000 rw-p 000ef000 08:05 1969787 /usr/lib/libstdc++.so.6.0.14<br />7f3dfdd40000-7f3dfdd55000 rw-p 00000000 00:00 0 <br />7f3dfdd55000-7f3dfe1a1000 r-xp 00000000 08:05 1974719 /usr/lib/libcrypto++.so.8.0.0<br />7f3dfe1a1000-7f3dfe3a1000 ---p 0044c000 08:05 1974719 /usr/lib/libcrypto++.so.8.0.0<br />7f3dfe3a1000-7f3dfe409000 r--p 0044c000 08:05 1974719 /usr/lib/libcrypto++.so.8.0.0<br />7f3dfe409000-7f3dfe40d000 rw-p 004b4000 08:05 1974719 /usr/lib/libcrypto++.so.8.0.0<br />7f3dfe40d000-7f3dfe411000 rw-p 00000000 00:00 0 <br />7f3dfe411000-7f3dfe429000 r-xp 00000000 08:05 2364637 
/lib/libpthread-2.12.1.so<br />7f3dfe429000-7f3dfe628000 ---p 00018000 08:05 2364637 /lib/libpthread-2.12.1.so<br />7f3dfe628000-7f3dfe629000 r--p 00017000 08:05 2364637 /lib/libpthread-2.12.1.so<br />7f3dfe629000-7f3dfe62a000 rw-p 00018000 08:05 2364637 /lib/libpthread-2.12.1.so<br />7f3dfe62a000-7f3dfe62e000 rw-p 00000000 00:00 0 <br />7f3dfe62e000-7f3dfe633000 r-xp 00000000 08:06 2369091 /home/ceph/ceph-server/src/.libs/libcrush.so.1.0.0<br />7f3dfe633000-7f3dfe832000 ---p 00005000 08:06 2369091 /home/ceph/ceph-server/src/.libs/libcrush.so.1.0.0<br />7f3dfe832000-7f3dfe833000 r--p 00004000 08:06 2369091 /home/ceph/ceph-server/src/.libs/libcrush.so.1.0.0<br />7f3dfe833000-7f3dfe834000 rw-p 00005000 08:06 2369091 /home/ceph/ceph-server/src/.libs/libcrush.so.1.0.0<br />7f3dfe834000-7f3dfe9ae000 r-xp 00000000 08:05 2364623 /lib/libc-2.12.1.so<br />7f3dfe9ae000-7f3dfebad000 ---p 0017a000 08:05 2364623 /lib/libc-2.12.1.so<br />7f3dfebad000-7f3dfebb1000 r--p 00179000 08:05 2364623 /lib/libc-2.12.1.so<br />7f3dfebb1000-7f3dfebb2000 rw-p 0017d000 08:05 2364623 /lib/libc-2.12.1.so<br />7f3dfebb2000-7f3dfebb7000 rw-p 00000000 00:00 0 <br />7f3dfebb7000-7f3dfed3d000 r-xp 00000000 08:06 2368409 /home/ceph/ceph-server/src/.libs/librados.so.2.0.0<br />7f3dfed3d000-7f3dfef3d000 ---p 00186000 08:06 2368409 /home/ceph/ceph-server/src/.libs/librados.so.2.0.0<br />7f3dfef3d000-7f3dfef43000 r--p 00186000 08:06 2368409 /home/ceph/ceph-server/src/.libs/librados.so.2.0.0<br />7f3dfef43000-7f3dfef4d000 rw-p 0018c000 08:06 2368409 /home/ceph/ceph-server/src/.libs/librados.so.2.0.0<br />7f3dfef4d000-7f3dfef62000 rw-p 00000000 00:00 0 <br />7f3dfef62000-7f3dfef82000 r-xp 00000000 08:05 2359317 /lib/ld-2.12.1.so<br />7f3dff15b000-7f3dff162000 rw-p 00000000 00:00 0 <br />7f3dff17f000-7f3dff182000 rw-p 00000000 00:00 0 <br />7f3dff182000-7f3dff183000 r--p 00020000 08:05 2359317 /lib/ld-2.12.1.so<br />7f3dff183000-7f3dff184000 rw-p 00021000 08:05 2359317 /lib/ld-2.12.1.so<br 
/>7f3dff184000-7f3dff185000 rw-p 00000000 00:00 0 <br />7fff604bf000-7fff604e0000 rw-p 00000000 00:00 0 [stack]<br />7fff605ff000-7fff60600000 r-xp 00000000 00:00 0 [vdso]<br />ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]<br />aborted</li>
</ul>
Ceph - Bug #876 (Resolved): rbd export rbd image > 2GB, export error: Invalid argument
https://tracker.ceph.com/issues/876
2011-03-10T23:13:35Z
changping Wu
<p>Hi,<br />reproduce steps:<br />1) ceph v0.25.</p>
<p>2) Import a 10 GB image into the foo pool:<br />$ rbd ls -p foo<br />10GBB<br />10GBrbd<br />3) Export the 10 GB rbd image:</p>
<p>$rbd export 10GBB ./test/10GBBBBB -p foo</p>
<p>logs:</p>
<p>....................................<br />writing 4194304 bytes at ofs 2105540608<br />writing 4194304 bytes at ofs 2109734912<br />writing 4194304 bytes at ofs 2113929216<br />writing 4194304 bytes at ofs 2118123520<br />writing 4194304 bytes at ofs 2122317824<br />writing 4194304 bytes at ofs 2126512128<br />writing 4194304 bytes at ofs 2130706432<br />writing 4194304 bytes at ofs 2134900736<br />writing 4194304 bytes at ofs 2139095040<br />writing 4194304 bytes at ofs 2143289344<br />export error: Invalid argument</p>
<p>4) lseek only supports files smaller than 2 GB here.</p>
<p>Jeff,wu</p>
Linux kernel client - Bug #750 (Won't Fix): run "dd", printk " libceph: osd1 172.16.10.68:6805 so...
https://tracker.ceph.com/issues/750
2011-01-26T20:00:23Z
changping Wu
<p>Hi,<br />I cloned ceph-client-standalone.git (master-backport),<br />built it, and insmod'ed it.<br />ceph server: ceph 0.24.2</p>
<p>OS: linux-2.6.37+ (ceph-client.git unstable)<br />disk: SATA disk<br />ceph config: one mon, one mds, two osds, all on one machine.</p>
<p>Run:<br />dd if=/dev/zero of=/mnt/ceph/dddd bs=1M count=2048</p>
<p>Sometimes the kernel prints:</p>
<p>[ 1205.636693] libceph: osd1 172.16.10.68:6805 socket closed<br />and the timed-out requests then fail to be handled.</p>
Linux kernel client - Bug #742 (Won't Fix): ceph-client.git unstable , tiotest ,printk "timeout, ...
https://tracker.ceph.com/issues/742
2011-01-25T23:47:55Z
changping Wu
<p>Hi,<br />I cloned ceph-client.git and checked out the unstable branch,<br />built it on ubuntu 10.04, ran make install, rebooted,<br />and then ran the linux-2.6.37+ kernel.<br />System info:<br />1. ubuntu 10.04 server + linux-2.6.37+ (ceph-client.git unstable)<br />2. CPU: Intel(R) Core(TM)2 Quad CPU Q8400 @ 2.66GHz<br />3. mem: 4 GB<br />4. SATA disk: WDC WD3200AAKS-7 Rev:02.0<br />5. one mon, one mds, two osds</p>
<p>6. run tiotest example shell script:
=======================================<br />#!/bin/bash</p>
<p>i=0<br />while [ $i -le 100 ]<br />do<br />./tiotest -f 10 -t 30 -d /mnt/ceph -T -D 20 -r 100<br />let i+=1<br />done<br />exit 0</p>
<p>============<br />7.logs:</p>
<p>After the tiotest script has run for several minutes,<br />the kernel prints "timeout, reset osd",<br />then "mds0 caps stale", and then an oops.</p>
<p>Possibly there is an invalid spinlock use in __cap_is_valid(struct ceph_cap *cap).</p>
<p>The detail logs attached.</p>
<hr />
<p>[ 1475.050194] libceph: tid 57429 timed out on osd1, will reset osd<br />[ 1548.990035] ceph: mds0 caps stale<br />[ 1558.349986] INFO: rcu_sched_state detected stall on CPU 1 (t=6500 jiffies)</p>
<hr />
<p>[ 1346.330300] libceph: tid 50475 timed out on osd0, will reset osd<br />[ 1366.400033] handle_timeout:timeout<br />[ 1366.403740] libceph: tid 49617 timed out on osd0, will reset osd<br />[ 1412.640037] handle_timeout:timeout<br />[ 1434.920041] handle_timeout:timeout<br />[ 1454.960029] handle_timeout:timeout<br />[ 1454.963664] libceph: tid 55571 timed out on osd1, will reset osd<br />[ 1454.970235] libceph: tid 55675 timed out on osd0, will reset osd<br />[ 1475.040045] handle_timeout:timeout<br />[ 1475.043719] libceph: tid 57408 timed out on osd0, will reset osd<br />[ 1475.050194] libceph: tid 57429 timed out on osd1, will reset osd<br />[ 1548.990035] ceph: mds0 caps stale<br />[ 1558.349986] INFO: rcu_sched_state detected stall on CPU 1 (t=6500 jiffies)<br />[ 1558.350008] sending NMI to all CPUs:<br />[ 1558.350008] NMI backtrace for cpu 1<br />[ 1558.350008] CPU 1 <br />[ 1558.350008] Modules linked in: ceph libceph crc32c libcrc32c fbcon tileblit font bitblit softcursor i915 snd_hda_codec_analog drm_kms_helper drm snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_timer snd i2c_algo_bit i2c_core ppdev psmouse intel_agp video soundcore lp intel_gtt snd_page_alloc parport_pc shpchp output dell_wmi parport dcdbas serio_raw sparse_keymap usbhid r8169 mii hid ata_piix [last unloaded: libceph]<br />[ 1558.350008] <br />[ 1558.350008] Pid: 2783, comm: cosd Not tainted 2.6.37+ <a class="issue tracker-1 status-5 priority-4 priority-default closed" title="Bug: gpf in tcp_sendpage (Closed)" href="https://tracker.ceph.com/issues/1">#1</a> 0V4W66/OptiPlex 780 <br />[ 1558.350008] RIP: 0010:[<ffffffff812b76a7>] [<ffffffff812b76a7>] delay_tsc+0x27/0x80<br />[ 1558.350008] RSP: 0018:ffff8800dc403cd8 EFLAGS: 00000006<br />[ 1558.350008] RAX: 000003cf49baea28 RBX: 0000000000000001 RCX: 0000000049baea28<br />[ 1558.350008] RDX: 00000000000003cf RSI: 0000000000000000 RDI: 00000000002896a5<br />[ 1558.350008] RBP: ffff8800dc403cf8 R08: 0000000000000000 R09: 
0000000000000001<br />[ 1558.350008] R10: ffffffff8163b960 R11: 0000000000000001 R12: ffffffff81a21b80<br />[ 1558.350008] R13: 0000000000000001 R14: 00000000002896a5 R15: ffffffff8109c120<br />[ 1558.350008] FS: 00007f28daace700(0000) GS:ffff8800dc400000(0000) knlGS:0000000000000000<br />[ 1558.350008] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b<br />[ 1558.350008] CR2: 000000000f153000 CR3: 000000010ba17000 CR4: 00000000000406e0<br />[ 1558.350008] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000<br />[ 1558.350008] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400<br />[ 1558.350008] Process cosd (pid: 2783, threadinfo ffff8801118de000, task ffff88010b598000)<br />[ 1558.350008] Stack:<br />[ 1558.350008] 0000000000000001 ffffffff81a21b80 ffffffff81a21b80 ffff8800dc403e88<br />[ 1558.350008] ffff8800dc403d08 ffffffff812b7677 ffff8800dc403d28 ffffffff8102e895<br />[ 1558.350008] 0000000000000000 ffff8800dc5d0360 ffff8800dc403d78 ffffffff810e2de9<br />[ 1558.350008] Call Trace:<br />[ 1558.350008] <IRQ> <br />[ 1558.350008] [<ffffffff812b7677>] __const_udelay+0x47/0x50<br />[ 1558.350008] [<ffffffff8102e895>] arch_trigger_all_cpu_backtrace+0x55/0x70<br />[ 1558.350008] [<ffffffff810e2de9>] __rcu_pending+0x169/0x3b0<br />[ 1558.350008] [<ffffffff8109c120>] ? tick_sched_timer+0x0/0xc0<br />[ 1558.350008] [<ffffffff810e309c>] rcu_check_callbacks+0x6c/0x120<br />[ 1558.350008] [<ffffffff81076998>] update_process_times+0x48/0x90<br />[ 1558.350008] [<ffffffff8109c186>] tick_sched_timer+0x66/0xc0<br />[ 1558.350008] [<ffffffff8108f5d3>] __run_hrtimer+0x93/0x200<br />[ 1558.350008] [<ffffffff8108f9de>] hrtimer_interrupt+0xde/0x250<br />[ 1558.350008] [<ffffffff81091ebf>] ? local_clock+0x6f/0x80<br />[ 1558.350008] [<ffffffff810351d7>] hpet_interrupt_handler+0x17/0x40<br />[ 1558.350008] [<ffffffff810dcf05>] handle_IRQ_event+0x55/0x190<br />[ 1558.350008] [<ffffffff812c94fe>] ? 
do_raw_spin_unlock+0x5e/0xb0<br />[ 1558.350008] [<ffffffff810df636>] handle_edge_irq+0xd6/0x180<br />[ 1558.350008] [<ffffffff8100eef9>] handle_irq+0x49/0xa0<br />[ 1558.350008] [<ffffffff81595948>] do_IRQ+0x68/0xf0<br />[ 1558.350008] [<ffffffff8158db53>] ret_from_intr+0x0/0x16<br />[ 1558.350008] <EOI> <br />[ 1558.350008] [<ffffffff810a4a9c>] ? lock_acquire+0xcc/0x150<br />[ 1558.350008] [<ffffffffa03ecfb0>] ? __cap_is_valid+0x30/0xf0 [ceph]<br />[ 1558.350008] [<ffffffff8158cca6>] _raw_spin_lock+0x36/0x70<br />[ 1558.350008] [<ffffffffa03ecfb0>] ? __cap_is_valid+0x30/0xf0 [ceph]<br />[ 1558.350008] [<ffffffffa03ecfb0>] __cap_is_valid+0x30/0xf0 [ceph]<br />[ 1558.350008] [<ffffffffa03edcdc>] __ceph_caps_issued+0x5c/0x100 [ceph]<br />[ 1558.350008] [<ffffffffa03f05e6>] ceph_check_caps+0x146/0xc60 [ceph]<br />[ 1558.350008] [<ffffffff81013ae3>] ? native_sched_clock+0x13/0x60<br />[ 1558.350008] [<ffffffff81013949>] ? sched_clock+0x9/0x10<br />[ 1558.350008] [<ffffffff81013ae3>] ? native_sched_clock+0x13/0x60<br />[ 1558.350008] [<ffffffff81013949>] ? sched_clock+0x9/0x10<br />[ 1558.350008] [<ffffffff81091cd5>] ? sched_clock_local+0x25/0x90<br />[ 1558.350008] [<ffffffffa03f12d2>] ceph_flush_dirty_caps+0xd2/0x240 [ceph]<br />[ 1558.350008] [<ffffffff81197dc0>] ? sync_one_sb+0x0/0x30<br />[ 1558.350008] [<ffffffffa03dbb6f>] ceph_sync_fs+0x2f/0x160 [ceph]<br />[ 1558.350008] [<ffffffff81197d83>] __sync_filesystem+0x63/0xa0<br />[ 1558.350008] [<ffffffff81197de0>] sync_one_sb+0x20/0x30<br />[ 1558.350008] [<ffffffff8116f4b7>] iterate_supers+0x77/0xe0<br />[ 1558.350008] [<ffffffff81197e1f>] sys_sync+0x2f/0x70<br />[ 1558.350008] [<ffffffff8100c082>] system_call_fastpath+0x16/0x1b<br />[ 1558.350008] Code: 00 00 00 00 55 48 89 e5 41 56 41 55 41 54 53 0f 1f 44 00 00 65 44 8b 2c 25 40 dc 00 00 49 89 fe 66 66 90 0f ae e8 e8 c9 ce d5 ff <66> 90 4c 63 e0 eb 11 66 90 f3 90 65 8b 1c 25 40 dc 00 00 44 39 <br />[ 1558.350008] Call Trace:</p>
Ceph - Bug #674 (Can't reproduce): tiobench stress test , OSD timeout
https://tracker.ceph.com/issues/674
2010-12-27T01:45:44Z
changping Wu
<p>Hi,<br />We ran a multi-threaded stress test against ceph 0.23.1; the ceph client printk'ed osd timeouts.</p>
<p>1. test tool: tiobench-0.3.3 ,test command:<br />./tiobench-0.3.3/tiotest -t 4 -f 4096 -r 1000 -b 131072 -d /mnt/ceph -T</p>
<p>2. ceph server version: 0.23-1<br />3. ceph client: git from git://ceph.newdeam.net/git/ceph-client-standalone.git, unstable-backport.<br />4. ceph server hosts OS: ubuntu 10.04 server x86_64 ,kernel:2.6.32-21-server <br />5. ceph client host OS:ubuntu 10.04 server x86_64,kernel: 2.6.32-21-server <br />6. ceph config: ceph-none, two OSD server:osd0 and osd1 , one MDS server and one MON server</p>
<p>===========================================<br />1. tiobench log:</p>
<p>===================================================================================</p>
<p>Run #1: ./tiobench-0.3.3/tiotest -t 4 -f 4096 -r 1000 -b 131072 -d /mnt/ceph -T</p>
<p>Unit information</p>
<p>================</p>
<p>File size = megabytes</p>
<p>Blk Size = bytes</p>
<p>Rate = megabytes per second</p>
<p>CPU% = percentage of CPU used during the test</p>
<p>Latency = milliseconds</p>
<p>Lat% = percent of requests that took longer than X seconds</p>
<p>CPU Eff = Rate divided by CPU% - throughput per cpu load</p>
<pre><code>File Blk Num Avg Maximum Lat% Lat% CPU</code></pre>
<p>Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff</p>
<p>---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----</p>
<p>Sequential Reads</p>
<p>2.6.32-21-server 16384 4096 4 66.62 18.81% 0.227 251.11 0.00000 0.00000 354</p>
<p>2.6.32-21-server 16384 8192 4 65.77 16.16% 0.452 458.29 0.00000 0.00000 407</p>
<p>2.6.32-21-server 16384 16384 4 63.92 16.11% 0.968 559.74 0.00000 0.00000 397</p>
<p>2.6.32-21-server 16384 32768 4 64.61 14.39% 1.815 1046.43 0.00000 0.00000 449</p>
<p>2.6.32-21-server 16384 65536 4 66.26 15.32% 3.765 837.34 0.00000 0.00000 432</p>
<p>2.6.32-21-server 16384 13107 4 64.53 14.43% 7.566 1349.89 0.00000 0.00000 447</p>
<p>Random Reads</p>
<p>2.6.32-21-server 16384 4096 4 0.75 1.108% 20.405 106.68 0.00000 0.00000 68</p>
<p>2.6.32-21-server 16384 8192 4 1.66 0.797% 17.845 109.55 0.00000 0.00000 208</p>
<p>2.6.32-21-server 16384 16384 4 2.90 1.812% 20.977 333.91 0.00000 0.00000 160</p>
<p>2.6.32-21-server 16384 32768 4 6.25 1.849% 18.955 279.49 0.00000 0.00000 338</p>
<p>2.6.32-21-server 16384 65536 4 11.87 3.702% 20.782 439.90 0.00000 0.00000 321</p>
<p>2.6.32-21-server 16384 13107 4 22.14 6.199% 22.256 367.77 0.00000 0.00000 357</p>
<p>Sequential Writes</p>
<p>2.6.32-21-server 16384 4096 4 17.18 7.038% 0.862 14455.88 0.01473 0.00026 244</p>
<p>2.6.32-21-server 16384 8192 4 16.09 5.656% 1.831 18047.37 0.02990 0.00076 284</p>
<p>2.6.32-21-server 16384 16384 4 16.19 5.111% 3.671 13165.22 0.06485 0.00057 317</p>
<p>2.6.32-21-server 16384 32768 4 13.84 4.163% 8.497 24246.45 0.14687 0.00782 332</p>
<p>2.6.32-21-server 16384 65536 4 13.27 3.808% 17.584 24053.48 0.29259 0.01717 348</p>
<p>2.6.32-21-server 16384 13107 4 13.45 3.814% 35.383 23944.59 0.62485 0.03281 353</p>
<p>Random Writes</p>
<p>2.6.32-21-server 16384 4096 4 0.60 0.880% 0.220 330.88 0.00000 0.00000 68</p>
<p>2.6.32-21-server 16384 8192 4 1.00 0.669% 0.945 3338.01 0.02500 0.00000 149</p>
<p>2.6.32-21-server 16384 16384 4 1.41 0.699% 0.853 2497.44 0.02500 0.00000 202</p>
<p>2.6.32-21-server 16384 32768 4 2.61 0.814% 0.264 493.69 0.00000 0.00000 321</p>
<p>2.6.32-21-server 16384 65536 4 0.77 0.353% 1.176 1520.99 0.00000 0.00000 217</p>
<p>2.6.32-21-server 16384 13107 4 1.44 0.610% 0.838 1001.35 0.00000 0.00000 236</p>
<p>2. dmesg log:</p>
<p>========================================================</p>
<p>..............................................</p>
<p>..............................................</p>
<p>[75004.800179] INFO: task tiotest:2337 blocked for more than 120 seconds.</p>
<p>[75004.800213] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.</p>
<p>[75004.800240] tiotest D 00000000ffffffff 0 2337 2180 0x00000008</p>
<p>[75004.800243] ffff880116257cc8 0000000000000082 0000000000015bc0 0000000000015bc0</p>
<p>[75004.800246] ffff880115825f80 ffff880116257fd8 0000000000015bc0 ffff880115825bc0</p>
<p>[75004.800249] 0000000000015bc0 ffff880116257fd8 0000000000015bc0 ffff880115825f80</p>
<p>[75004.800252] Call Trace:</p>
<p>[75004.800254] [<ffffffff810f3460>] ? sync_page+0x0/0x50</p>
<p>[75004.800257] [<ffffffff815555c7>] io_schedule+0x47/0x70</p>
<p>[75004.800259] [<ffffffff810f349d>] sync_page+0x3d/0x50</p>
<p>[75004.800262] [<ffffffff81555bef>] __wait_on_bit+0x5f/0x90</p>
<p>[75004.800264] [<ffffffff810f3653>] wait_on_page_bit+0x73/0x80</p>
<p>[75004.800267] [<ffffffff81084fe0>] ? wake_bit_function+0x0/0x40</p>
<p>[75004.800269] [<ffffffff810fd9f5>] ? pagevec_lookup_tag+0x25/0x40</p>
<p>[75004.800272] [<ffffffff810f3ae5>] wait_on_page_writeback_range+0xf5/0x190</p>
<p>[75004.800275] [<ffffffff810f3cb8>] filemap_write_and_wait_range+0x78/0x90</p>
<p>[75004.800277] [<ffffffff8116a62e>] vfs_fsync_range+0x7e/0xe0</p>
<p>[75004.800280] [<ffffffff8116a6fd>] vfs_fsync+0x1d/0x20</p>
<p>[75004.800282] [<ffffffff8116a73e>] do_fsync+0x3e/0x60</p>
<p>[75004.800284] [<ffffffff8116a790>] sys_fsync+0x10/0x20</p>
<p>[75004.800286] [<ffffffff810131b2>] system_call_fastpath+0x16/0x1b</p>
<p>[75058.860051] libceph: tid 232244 timed out on osd0, will reset osd</p>
<p>[75058.861555] libceph: tid 232902 timed out on osd1, will reset osd</p>
<p>[75118.860035] libceph: tid 232596 timed out on osd0, will reset osd</p>
<p>[75118.860925] libceph: tid 233897 timed out on osd1, will reset osd</p>
<p>[75178.860021] libceph: tid 233467 timed out on osd0, will reset osd</p>
<p>[75846.350031] libceph: tid 280625 timed out on osd0, will reset osd</p>
<p>[75911.350030] libceph: tid 280786 timed out on osd0, will reset osd</p>
<p>[75986.350025] libceph: tid 281143 timed out on osd0, will reset osd</p>
<p>[76046.350037] libceph: tid 281253 timed out on osd0, will reset osd</p>
<p>[76106.350039] libceph: tid 281459 timed out on osd0, will reset osd</p>
<p>[76166.350038] libceph: tid 281661 timed out on osd0, will reset osd</p>
<p>[76226.350038] libceph: tid 281859 timed out on osd0, will reset osd</p>
<p>[76286.360026] libceph: tid 282082 timed out on osd0, will reset osd</p>
<p>[76346.360029] libceph: tid 282286 timed out on osd0, will reset osd</p>
<p>[76406.360025] libceph: tid 282478 timed out on osd0, will reset osd</p>
<p>[76466.360033] libceph: tid 282702 timed out on osd0, will reset osd</p>
<p>[76526.360029] libceph: tid 282899 timed out on osd0, will reset osd</p>
<p>[76586.360035] libceph: tid 283090 timed out on osd0, will reset osd</p>
<p>[76646.360033] libceph: tid 283304 timed out on osd0, will reset osd</p>
<p>[76754.770199] libceph: tid 283819 timed out on osd0, will reset osd</p>
<p>[76754.773797] libceph: tid 284035 timed out on osd1, will reset osd</p>
<p>[76814.770080] libceph: tid 284624 timed out on osd0, will reset osd</p>
<p>[76814.772596] libceph: tid 285401 timed out on osd1, will reset osd</p>
<p>[76874.770052] libceph: tid 285191 timed out on osd0, will reset osd</p>
<p>[76874.771749] libceph: tid 286271 timed out on osd1, will reset osd</p>
<p>[76934.770035] libceph: tid 285730 timed out on osd0, will reset osd</p>
<p>[76934.771057] libceph: tid 287155 timed out on osd1, will reset osd</p>
<p>[76994.770023] libceph: tid 286576 timed out on osd0, will reset osd</p>
<p>[77612.270030] libceph: tid 328771 timed out on osd0, will reset osd</p>
<p>-==========</p>
<p>1. Test tool attached; test command, for instance: ./tiobench.sh /mnt/ceph 4096</p>