Ceph : Issues (https://tracker.ceph.com/, feed updated 2017-09-08T04:26:05Z)
sepia - Support #21302 (Resolved): Sepia Lab Access Request
https://tracker.ceph.com/issues/21302
2017-09-08T04:26:05Z
Pan Liu <liupan1234@163.com>
<p>1) Do you just need VPN access or will you also be running teuthology jobs?<br />running teuthology jobs</p>
<p>2) Desired Username: <br />liupan</p>
<p>3) Alternate e-mail address(es) we can reach you at: <br /><a class="email" href="mailto:liupan1111@gmail.com">liupan1111@gmail.com</a></p>
<p>4) If you don't already have an established history of code contributions to Ceph, is there an existing community or core developer you've worked with who has reviewed your work and can vouch for your access request?<br /><a class="external" href="https://github.com/liupan1111">https://github.com/liupan1111</a></p>
<p style="padding-left:2em;">If you answered "No" to # 4, please answer the following (paste directly below the question to keep indentation):</p>
<p style="padding-left:2em;">4a) Paste a link to a Blueprint or planning doc of yours that was reviewed at a Ceph Developer Monthly.</p>
<p style="padding-left:2em;">4b) Paste a link to an accepted pull request for a major patch or feature.</p>
<p style="padding-left:2em;">4c) If applicable, include a link to the current project (planning doc, dev branch, or pull request) that you are looking to test.</p>
<p>5) Paste your SSH public key(s) between the <code>pre</code> tags<br /><pre>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDC3aWYoBDxRh13e+qver2xcmCtNsiVEt9NkN1cntdUxVbjfWgb+E/b54kPW456tswoVaYS+3xI5DYTPTN1tMhx8jnrgVKI2HhIYC3SO0+V+HNofzU1p7v5CfZxwyupXOgdfHmEWKOepPop8qWon8wZVUkj2t80w8Ifxh5gLVigGquc4kvMLyy6nBI0k6UGeRr9lSKmWLvzudWP70cWij3ynwxEeIuurU0HI9tDmjaVW5dJuu6IZHDhJrCpeUZBQIDqjH0pg1i1uXXwwiLI9NW8jWuALREDeMxAG4IgOZtoXdS7J9vVXHxKpa8ofb+qYNXI7B3OTLgjSS2ziiU1KEIr wanjun.lp@alibaba-inc.com</pre></p>
<p>6) Paste your hashed VPN credentials between the <code>pre</code> tags (Format: <code>user@hostname 22CharacterSalt 65CharacterHashedPassword</code>)<br /><pre>liupan@ceph-dev-2 9+Is4mIZgNkYyJLwHvSNOA 5a8fafc187d52041daf4365125692d4619fc557b75560913130c0596f83bbb77</pre></p>

RADOS - Bug #21263 (Resolved): when disk error happens, osd reports assertion failure without any...
https://tracker.ceph.com/issues/21263
2017-09-06T14:38:43Z
Pan Liu <liupan1234@163.com>
<p>I used fio+librbd to test one OSD (BlueStore) built on an NVMe SSD. After I pulled out this SSD, the OSD reported the assertion failure "assert(r >= 0)" with no further useful information. I tried FileStore with the same case; it reported the failed IO information and dumped assert(0 == "got unexpected error from io_getevents").</p>
<p>This part needs to be implemented in KernelDevice.cc the same way as in FileJournal.cc.</p>

Ceph - Bug #20749 (Resolved): the latency dumped by "ceph osd perf" is not real
https://tracker.ceph.com/issues/20749
2017-07-23T01:51:35Z
Pan Liu <liupan1234@163.com>
<p>When there is no IO in the cluster (or on a particular OSD), the latency dumped by "ceph osd perf" is still not zero.</p>

rbd - Bug #20426 (Resolved): some generic options can not be passed by rbd-nbd
https://tracker.ceph.com/issues/20426
2017-06-27T09:38:37Z
Pan Liu <liupan1234@163.com>
<p>#rbd-nbd --help<br /><pre>
Usage: rbd-nbd [options] map <image-or-snap-spec>  Map an image to nbd device
               unmap <device path>                 Unmap nbd device
               list-mapped                         List mapped nbd devices
Options:
  --device <device path>  Specify nbd device path
  --read-only             Map read-only
  --nbds_max <limit>      Override for module param nbds_max
  --max_part <limit>      Override for module param max_part
  --exclusive             Forbid writes by other clients

  --conf/-c FILE          read configuration from the given configuration file
  --id/-i ID              set ID portion of my name
  --name/-n TYPE.ID       set name
  --cluster NAME          set cluster name (default: ceph)
  --setuser USER          set uid to user or uid (and gid to user's gid)
  --setgroup GROUP        set gid to group or gid
  --version               show version and quit

  -d                      run in foreground, log to stderr
  -f                      run in foreground, log to usual location
  --debug_ms N            set message debug level (e.g. 1)
</pre></p>
<p>We can see --conf/-c printed in the help message. But when using it:<br />#rbd-nbd map image -c /etc/ceph/ceph.conf<br />rbd-nbd: unknown args: -c</p>

RADOS - Bug #19487 (Closed): "GLOBAL %RAW USED" of "ceph df" is not consistent with check_full_st...
https://tracker.ceph.com/issues/19487
2017-04-04T14:09:06Z
Pan Liu <liupan1234@163.com>
<p>1) Use vstart.sh to create a cluster, with option: osd failsafe full ratio = .46</p>
<p>2) Input "ceph df":<br />GLOBAL:<br />SIZE AVAIL RAW USED %RAW USED<br />439G 217G 199G 45.45</p>
<p>3) rbd create image --size 128</p>
<p>4) ceph -w:<br />2017-04-04 20:57:40.894659 [ERR] OSD full dropping all updates 51% full<br />2017-04-04 20:57:44.364939 [ERR] pgmap v31: 24 pgs: 24 active+clean: 2068 bytes data, 199 GB used, 217 GB / 439 GB avail</p>
<p>So ceph -w reports the cluster as full (51%), while ceph df shows only 45.45%.</p>

rbd - Bug #19347 (Won't Fix): cannot find /dev/nbd0 by using lsblk when map an image with size = 0
https://tracker.ceph.com/issues/19347
2017-03-22T08:23:10Z
Pan Liu <liupan1234@163.com>
<p>1) rbd allows the user to create an image with size zero:<br /> rbd create image --size 0<br />2) then run "rbd-nbd map image"; it displayed:<br /> /dev/nbd0</p>
<p>3) I could successfully list it by using rbd-nbd list-mapped:<br />id device pool image mon_host <br />0 /dev/nbd0 rbd image1 DEFAULT</p>
<p>4) But when using lsblk, the device cannot be found.</p>
<p>Given this, rbd-nbd shouldn't allow mapping an image with size zero.</p>

rbd - Bug #19108 (Resolved): rbd-nbd: prompt message when input nbds_max, and nbd module already ...
https://tracker.ceph.com/issues/19108
2017-02-28T12:58:18Z
Pan Liu <liupan1234@163.com>
<p>When the user specifies --nbds_max, rbd-nbd will try to load the nbd module and set the parameter nbds_max. But if the nbd module is already loaded, the new nbds_max will not take effect. A prompt message should be printed in that case.</p>

Ceph - Backport #18512 (Resolved): build/ops: compilation error when --with-radowsgw=no
https://tracker.ceph.com/issues/18512
2017-01-13T02:02:03Z
Pan Liu <liupan1234@163.com>
<p><a class="external" href="https://github.com/ceph/ceph/pull/12729">https://github.com/ceph/ceph/pull/12729</a></p> rbd - Cleanup #18186 (Resolved): add max_part and nbds_max options in rbd nbd map, in order to ke...https://tracker.ceph.com/issues/181862016-12-08T02:22:29ZPan Liuliupan1234@163.comrbd - Bug #18115 (Resolved): partition func should be enabled When load nbd.ko for rbd-nbdhttps://tracker.ceph.com/issues/181152016-12-02T01:00:40ZPan Liuliupan1234@163.com
<p>We are using rbd-nbd for our raw block device functionality. The default for max_part differs across kernel versions: some kernels set it to 0 (e.g. 4.2.8) while others use a different default (e.g. 3.19). If max_part is 0, the user cannot partition the device (fdisk, parted, ...).</p>
<p>I think for the rbd-nbd map use case, max_part should be 255 (the largest value supported by the kernel), so that the user can create partitions as needed.</p>

Ceph - Bug #18057 (Resolved): should output hb_peers, not hb_in
https://tracker.ceph.com/issues/18057
2016-11-29T03:02:56Z
Pan Liu <liupan1234@163.com>
<p>HB_IN should be replaced with HB_PEERS.</p>

Ceph - Bug #18004 (Resolved): heartbeat peers need to be updated when a new OSD added into an exi...
https://tracker.ceph.com/issues/18004
2016-11-23T01:44:39Z
Pan Liu <liupan1234@163.com>
<p>The case is:<br />There is a storage node with two OSDs. About one minute later, a new OSD joins. The monitor sees it as "up", but there is no heartbeat from the existing two OSDs to this new one. When we kill the new OSD, no OSD reports this to the monitor, so the monitor still treats it as "up". Then, if the user runs fio, IO may hang.</p>