Ceph : Issues (https://tracker.ceph.com/)
Ceph - Bug #11445 (Closed): admin client socket does not exist in /var/run/ceph/ for qemu
https://tracker.ceph.com/issues/11445 (2015-04-22, eyun <xueyun221@gmail.com>)
Hi, I found that the admin client socket does not exist in /var/run/ceph/ for the qemu process, even though I added the following configuration:

    admin_socket = /var/run/ceph/ceph-client.$pid.asok

I tested ceph 0.80.6 and 0.94.1 and the bug exists in both; because of it I cannot tell which configuration the qemu process is actually using.
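For reference, here is a minimal sketch of how the socket can be queried once it does appear; the [client] section placement and the PID 12345 are illustrative assumptions, not details from the report:

    # ceph.conf on the hypervisor (assumed placement)
    [client]
        admin_socket = /var/run/ceph/ceph-client.$pid.asok

    # with the qemu process running as, say, PID 12345:
    ceph --admin-daemon /var/run/ceph/ceph-client.12345.asok config show

One common cause of a missing socket is that qemu runs as an unprivileged user that cannot create files in /var/run/ceph/, so the directory's ownership and permissions are worth checking as well.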
Ceph - Bug #11309 (Rejected): load_pgs very slow when using filestore_min_sync_interval = 60 on btrfs
https://tracker.ceph.com/issues/11309 (2015-04-02, eyun <xueyun221@gmail.com>)

Hi, some nodes of my ceph 0.80.7 cluster use btrfs and the others use xfs, and we used the following config:

    filestore_min_sync_interval = 60
Here is the issue: when I restart the OSDs on btrfs, they all start up (reach the in+up state) very slowly. Once I enabled debug logging, I found that startup is blocked in load_pgs, and in that function the log reads: sync_entry waiting for another 59.999083 to reach min interval 60.000000.
It seems a lot of time is wasted waiting for the sync procedure; when I delete the filestore_min_sync_interval config, the OSDs start very fast.

But when I restart the OSDs on xfs, the issue does not occur: load_pgs runs very fast and no sync_entry waiting happens.

The OSD logs are attached:
    2015-04-02 09:10:16.780791 7f2e80b52700 10 filestore(/var/lib/ceph/osd/ceph-4) _finish_op 0x2a1bca0 seq 5057087 osr(default 0x2aa0b98)/0x2aa0b98
    2015-04-02 09:10:16.780831 7f2e8c2857a0 10 filestore(/var/lib/ceph/osd/ceph-4) sync_and_flush
    2015-04-02 09:10:16.780854 7f2e8c2857a0 10 filestore(/var/lib/ceph/osd/ceph-4) _flush_op_queue draining op tp
    2015-04-02 09:10:16.780857 7f2e8c2857a0 10 filestore(/var/lib/ceph/osd/ceph-4) _flush_op_queue waiting for apply finisher
    2015-04-02 09:10:16.780867 7f2e8c2857a0 10 filestore(/var/lib/ceph/osd/ceph-4) start_sync
    2015-04-02 09:10:16.780870 7f2e8c2857a0 10 filestore(/var/lib/ceph/osd/ceph-4) sync waiting
    2015-04-02 09:10:16.780873 7f2e83b58700 20 filestore(/var/lib/ceph/osd/ceph-4) sync_entry woke after 0.000917
    2015-04-02 09:10:16.780885 7f2e83b58700 20 filestore(/var/lib/ceph/osd/ceph-4) sync_entry waiting for another 59.999083 to reach min interval 60.000000
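A sketch of how one might confirm and adjust the interval on a live OSD; osd.4 matches the log above, while the admin-socket path assumes the default naming and the 0.01 value is believed to be the upstream default:

    # read the effective value through the OSD's admin socket
    ceph --admin-daemon /var/run/ceph/ceph-osd.4.asok config get filestore_min_sync_interval

    # lower it at runtime, without restarting the OSD
    ceph tell osd.4 injectargs '--filestore_min_sync_interval 0.01'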
Ceph - Bug #10546 (Resolved): ceph time check start round bug in monitor.cc
https://tracker.ceph.com/issues/10546 (2015-01-14, eyun <xueyun221@gmail.com>)

Hi all,

My ceph cluster reports a clock skew even after the time was synced 30 minutes ago, so I dug into the source code of ceph 0.80.6. I think the following code in Monitor::timecheck_start_round() in monitor.cc, lines 3160 to 3168, is very strange.
In my opinion, the marked comparison should be curr_time - timecheck_round_start < max; that is, if the elapsed time is less than max, keep the current round going, otherwise cancel the current round.
    double max = g_conf->mon_timecheck_interval*3;
    if (curr_time - timecheck_round_start > max) {   // <-- the comparison in question
      dout(10) << __func__ << " keep current round going" << dendl;
      goto out;
    } else {
      dout(10) << __func__
               << " finish current timecheck and start new" << dendl;
      timecheck_cancel_round();
    }
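Applying the suggestion, the block would read as follows; this is only a sketch of the proposed change, the actual upstream fix is the commit referenced at the end of this issue:

    double max = g_conf->mon_timecheck_interval*3;
    if (curr_time - timecheck_round_start < max) {   // '<' instead of '>'
      dout(10) << __func__ << " keep current round going" << dendl;
      goto out;
    } else {
      dout(10) << __func__
               << " finish current timecheck and start new" << dendl;
      timecheck_cancel_round();
    }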
Just my opinion; any reply is welcome!

Thanks very much.
Associated revision:

* hammer: 2e749599ac6e1060cf553b521761a93fafbf65bb ("mon: Monitor: fix timecheck rounds period Fixes: #10546 Backports: dumpling?,firefly,giant Sign...")
  https://tracker.ceph.com/projects/ceph/repository/revisions/2e749599ac6e1060cf553b521761a93fafbf65bb