Ceph : Issues
https://tracker.ceph.com/
2024-01-09T23:24:31Z
Ceph
Redmine
sepia - Support #63988 (Resolved): Sepia Lab Access Request
https://tracker.ceph.com/issues/63988
2024-01-09T23:24:31Z
Neha Ojha
nojha@redhat.com
<p>1) Do you just need VPN access or will you also be running teuthology jobs?</p>
<p>Will be running teuthology jobs as well</p>
<p>2) Desired Username: nojha</p>
<p>3) Alternate e-mail address(es) we can reach you at:</p>
<p>4) If you don't already have an established history of code contributions to Ceph, is there an existing community or core developer you've worked with who has reviewed your work and can vouch for your access request?</p>
<p style="padding-left:2em;">If you answered "No" to #4, please answer the following (paste directly below the question to keep indentation):</p>
<p style="padding-left:2em;">4a) Paste a link to a Blueprint or planning doc of yours that was reviewed at a Ceph Developer Monthly.</p>
<p style="padding-left:2em;">4b) Paste a link to an accepted pull request for a major patch or feature.</p>
<p style="padding-left:2em;">4c) If applicable, include a link to the current project (planning doc, dev branch, or pull request) that you are looking to test.</p>
<p>5) Paste your SSH public key(s) between the <code>pre</code> tags<br /><pre>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDfrBYCF5QzYdPuZ4poc9vFpNsnjtGQ4OtM03DUfNCdO5si+hslIvanCKj55e57HYtoWm3ZZQxTHQb7XnLJU+wvc39+mubG0+iGjYFVfiMl5f5xvM633ZLq6XNomn4Jg34WVqZPxbXvpPdVMP/lu0Z7gIx4Af3yXmvH5NKlUMcQ0MjiHI5uJxgmDQqQCLJ9kHTD07Bp2oBvjL+SwJYwmLW64Jo1TOJmQ4k3Lwx9mhMcHNN8Pfe9EX4QogusEkBjD2MEn8hFcBO3KfCTPXz+/y5yBE4i30edCB2KMjhcJhqi3DTOnA/13Qg08Rxqe/uCxIPwO4RdRge4Ngiol+wAh+f0zr8XZmH5gwZot9KHC231fbCsNMx8v0xYD601TEWasPbghkWkl8nt6zRIzZX0noox+8aYknN7OivifUCUuKdrMSp284JAXGf3FhezFeMUC5Zp7cxhziLBY5rIAWmRmN4py/2PzHVyLJaNJqtIHUHFonO7DOl5s3VhyKpUYEM2b+SaclT479jxoXIyz9ReNQX62XylC0bwwt7Jqj2GISldXH4a35DLy93rHi9vnowtUjUF5fWZgs65e7ZRZSW4FlzAjjcK1kRc57g/3JOx8KX50QTDMt3eTGYTQEyg09L5sB6Zqxmu0de+jHShcz/ubwkATy3azOdnzVI379pLckWqcw== nehaojha@Nehas-MacBook-Pro.local</pre></p>
<p>6) Paste your hashed VPN credentials between the <code>pre</code> tags (Format: <code>user@hostname 22CharacterSalt 65CharacterHashedPassword</code>)<br /><pre>nojha@nojha-mac lBjiQ41OZJqOiNXfndpNzbRRGVH27ipaWn4LB8FxQgh2swYcJT6QpBBTULNt2eJ7sa7UumCYfktjHY6dorlwVg</pre></p>
mgr - Backport #58980 (Resolved): reef: mgr/telemetry: perf histograms are not formatted in `all`...
https://tracker.ceph.com/issues/58980
2023-03-13T16:42:20Z
Neha Ojha
nojha@redhat.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/50481">https://github.com/ceph/ceph/pull/50481</a></p>
RADOS - Backport #58639 (Resolved): quincy: Mon fail to send pending metadata through MMgrUpdate ...
https://tracker.ceph.com/issues/58639
2023-02-03T19:08:07Z
Neha Ojha
nojha@redhat.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/49989">https://github.com/ceph/ceph/pull/49989</a></p>
RADOS - Backport #58638 (Resolved): pacific: Mon fail to send pending metadata through MMgrUpdate...
https://tracker.ceph.com/issues/58638
2023-02-03T19:07:51Z
Neha Ojha
nojha@redhat.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/49988">https://github.com/ceph/ceph/pull/49988</a></p>
RADOS - Backport #58637 (Resolved): pacific: osd/scrub: "scrub a chunk" requests are sent to the ...
https://tracker.ceph.com/issues/58637
2023-02-03T16:58:47Z
Neha Ojha
nojha@redhat.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/48544">https://github.com/ceph/ceph/pull/48544</a></p>
RADOS - Backport #58636 (Resolved): quincy: osd/scrub: "scrub a chunk" requests are sent to the w...
https://tracker.ceph.com/issues/58636
2023-02-03T16:58:40Z
Neha Ojha
nojha@redhat.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/48543">https://github.com/ceph/ceph/pull/48543</a></p>
RADOS - Backport #58338 (Resolved): quincy: mon-stretched_cluster: degraded stretched mode lead t...
https://tracker.ceph.com/issues/58338
2022-12-21T20:06:57Z
Neha Ojha
nojha@redhat.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/48802">https://github.com/ceph/ceph/pull/48802</a></p>
RADOS - Backport #58337 (Rejected): pacific: mon-stretched_cluster: degraded stretched mode lead ...
https://tracker.ceph.com/issues/58337
2022-12-21T20:06:49Z
Neha Ojha
nojha@redhat.com
<p>Original backport <a class="external" href="https://github.com/ceph/ceph/pull/48803">https://github.com/ceph/ceph/pull/48803</a> was reverted in <a class="external" href="https://github.com/ceph/ceph/pull/49412">https://github.com/ceph/ceph/pull/49412</a> due to <a class="external" href="https://tracker.ceph.com/issues/58239">https://tracker.ceph.com/issues/58239</a></p>
RADOS - Bug #55488 (New): ENOENT on clone on EC non-primary shard
https://tracker.ceph.com/issues/55488
2022-04-28T21:29:19Z
Neha Ojha
nojha@redhat.com
<pre>
-2632> 2022-04-28T20:32:11.034+0000 7f683a737700 10 osd.0 pg_epoch: 593 pg[3.3bs2( v 547'242 (0'0,547'242] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=2 lpr=591 pi=[548,591)/3 luod=0'0 crt=547'242 lcod 0'0 mlcod 0'0 active mbc={} ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] append_log log((0'0,547'242], crt=547'242) [593'243 (547'242) clone 3:de28928d:::smithi01883722-15:19e by unknown.0.0:0 2022-04-28T20:30:39.964912+0000 0 snaps [19a,199,193,190,18f,18a,182,17e] ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0,593'244 (547'242) modify 3:de28928d:::smithi01883722-15:head by client.4545.0:5158 2022-04-28T20:31:32.988871+0000 0 ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0]
-2631> 2022-04-28T20:32:11.034+0000 7f683a737700 10 osd.0 pg_epoch: 593 pg[3.3bs2( v 593'243 (0'0,593'243] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=2 lpr=591 pi=[548,591)/3 luod=0'0 lua=547'242 crt=547'242 lcod 0'0 mlcod 0'0 active mbc={} ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] add_log_entry 593'243 (547'242) clone 3:de28928d:::smithi01883722-15:19e by unknown.0.0:0 2022-04-28T20:30:39.964912+0000 0 snaps [19a,199,193,190,18f,18a,182,17e] ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0
-2630> 2022-04-28T20:32:11.034+0000 7f683a737700 10 osd.0 pg_epoch: 593 pg[3.3bs2( v 593'244 (0'0,593'244] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=2 lpr=591 pi=[548,591)/3 luod=0'0 lua=547'242 crt=547'242 lcod 0'0 mlcod 0'0 active mbc={} ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] add_log_entry 593'244 (547'242) modify 3:de28928d:::smithi01883722-15:head by client.4545.0:5158 2022-04-28T20:31:32.988871+0000 0 ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0
-2629> 2022-04-28T20:32:11.034+0000 7f683a737700 10 osd.0 pg_epoch: 593 pg[3.3bs2( v 593'244 (0'0,593'244] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=2 lpr=591 pi=[548,591)/3 luod=0'0 lua=547'242 crt=547'242 lcod 0'0 mlcod 0'0 active mbc={} ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] append_log approx pg log length = 244
-2628> 2022-04-28T20:32:11.034+0000 7f683a737700 10 osd.0 pg_epoch: 593 pg[3.3bs2( v 593'244 (0'0,593'244] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=2 lpr=591 pi=[548,591)/3 luod=0'0 lua=547'242 crt=547'242 lcod 0'0 mlcod 0'0 active mbc={} ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] append_log transaction_applied = 1
-2627> 2022-04-28T20:32:11.034+0000 7f683a737700 10 trim proposed trim_to = 0'0
-2626> 2022-04-28T20:32:11.034+0000 7f683a737700 6 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 593'243, trimmed: , trimmed_dups: , clear_divergent_priors: 0
-2625> 2022-04-28T20:32:11.034+0000 7f683a737700 10 bluestore(/var/lib/ceph/osd/ceph-0) queue_transactions ch 0x56373e000780 3.3bs2_head
-2624> 2022-04-28T20:32:11.034+0000 7f683a737700 20 bluestore(/var/lib/ceph/osd/ceph-0) _txc_create osr 0x56373de5c9a0 = 0x563749fe3c00 seq 11
-2623> 2022-04-28T20:32:11.034+0000 7f683a737700 20 bluestore(/var/lib/ceph/osd/ceph-0).collection(3.3bs2_head 0x56373e000780) get_onode oid 2#3:de28928d:::smithi01883722-15:head# key 0x828000000000000003DE28928D'!smithi01883722-15!='0xFFFFFFFFFFFFFFFEFFFFFFFFFFFFFFFF6F
-2622> 2022-04-28T20:32:11.034+0000 7f683a737700 20 bluestore(/var/lib/ceph/osd/ceph-0).collection(3.3bs2_head 0x56373e000780) r -2 v.len 0
-2621> 2022-04-28T20:32:11.034+0000 7f683a737700 10 bluestore(/var/lib/ceph/osd/ceph-0) _txc_add_transaction op 17 got ENOENT on 2#3:de28928d:::smithi01883722-15:head#
-2620> 2022-04-28T20:32:11.034+0000 7f683a737700 -1 bluestore(/var/lib/ceph/osd/ceph-0) _txc_add_transaction error (2) No such file or directory not handled on operation 17 (op 0, counting from 0)
-2619> 2022-04-28T20:32:11.034+0000 7f683a737700 -1 bluestore(/var/lib/ceph/osd/ceph-0) ENOENT on clone suggests osd bug
-2618> 2022-04-28T20:32:11.034+0000 7f683a737700 0 _dump_transaction transaction dump:
{
"ops": [
{
"op_num": 0,
"op_name": "clone",
"collection": "3.3bs2_head",
"src_oid": "2#3:de28928d:::smithi01883722-15:head#",
"dst_oid": "2#3:de28928d:::smithi01883722-15:19e#"
},
{
"op_num": 1,
"op_name": "rmattr",
"collection": "3.3bs2_head",
"oid": "2#3:de28928d:::smithi01883722-15:19e#",
"name": "snapset"
},
{
"op_num": 2,
"op_name": "setattrs",
"collection": "3.3bs2_head",
"oid": "2#3:de28928d:::smithi01883722-15:19e#",
"attr_lens": {
"_": 249
}
},
</pre>
<p>On the primary side<br /><pre>
2022-04-28T20:32:11.034+0000 7f123535b700 20 osd.7 pg_epoch: 593 pg[3.3bs0( v 547'242 lc 52'38 (0'0,547'242] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=0 lpr=591 pi=[548,591)/3 crt=547'242 mlcod 0'0 active+recovery_wait+degraded m=31 mbc={0={(0+1)=17},1={(1+0)=17},2={(1+1)=17}} trimq=114 ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] try_reads_to_commit: 3:de28928d:::smithi01883722-15:head,{}
2022-04-28T20:32:11.034+0000 7f123535b700 1 -- [v2:172.21.15.134:6826/77735,v1:172.21.15.134:6827/77735] --> [v2:172.21.15.18:6821/166234,v1:172.21.15.18:6823/166234] -- MOSDECSubOpWrite(3.3bs2 593/591 ECSubWrite(tid=1141, reqid=client.4545.0:5158, at_version=593'244, trim_to=0'0, roll_forward_to=0'0)) v2 -- 0x560ec8b17100 con 0x560ebf4a6800
2022-04-28T20:32:11.034+0000 7f123535b700 1 -- [v2:172.21.15.134:6826/77735,v1:172.21.15.134:6827/77735] --> [v2:172.21.15.18:6820/77740,v1:172.21.15.18:6822/77740] -- MOSDECSubOpWrite(3.3bs1 593/591 ECSubWrite(tid=1141, reqid=client.4545.0:5158, at_version=593'244, trim_to=0'0, roll_forward_to=0'0)) v2 -- 0x560eb71f9100 con 0x560ebf0e1400
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.add_oid 3:de28928d:::smithi01883722-15:19e 17e,182,18a,18f,190,193,199,19a
2022-04-28T20:32:11.034+0000 7f123535b700 15 bluestore(/var/lib/ceph/osd/ceph-7) omap_get_values meta oid #-1:c0371625:::snapmapper:0#
2022-04-28T20:32:11.034+0000 7f123535b700 20 _pin0x560eb6d4a000 #-1:c0371625:::snapmapper:0# pinned
2022-04-28T20:32:11.034+0000 7f123535b700 20 bluestore.onode(0x560eb6eccc80).flush flush done
2022-04-28T20:32:11.034+0000 7f123535b700 10 bluestore(/var/lib/ceph/osd/ceph-7) omap_get_values meta oid #-1:c0371625:::snapmapper:0# = 0
2022-04-28T20:32:11.034+0000 7f123535b700 20 _unpin0x560eb6d4a000 #-1:c0371625:::snapmapper:0# unpinned
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.get_snaps 3:de28928d:::smithi01883722-15:19e got.empty()
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.set_snaps 3:de28928d:::smithi01883722-15:19e 17e,182,18a,18f,190,193,199,19a
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.set_snaps set OBJ_.0_0000000000000003.B741941B.19e.smithi01883722-15..
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.add_oid set SNA_3_000000000000017E_.0_0000000000000003.B741941B.19e.smithi01883722-15..
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.add_oid set SNA_3_0000000000000182_.0_0000000000000003.B741941B.19e.smithi01883722-15..
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.add_oid set SNA_3_000000000000018A_.0_0000000000000003.B741941B.19e.smithi01883722-15..
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.add_oid set SNA_3_000000000000018F_.0_0000000000000003.B741941B.19e.smithi01883722-15..
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.add_oid set SNA_3_0000000000000190_.0_0000000000000003.B741941B.19e.smithi01883722-15..
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.add_oid set SNA_3_0000000000000193_.0_0000000000000003.B741941B.19e.smithi01883722-15..
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.add_oid set SNA_3_0000000000000199_.0_0000000000000003.B741941B.19e.smithi01883722-15..
2022-04-28T20:32:11.034+0000 7f123535b700 20 snap_mapper.add_oid set SNA_3_000000000000019A_.0_0000000000000003.B741941B.19e.smithi01883722-15..
2022-04-28T20:32:11.034+0000 7f123535b700 20 earliest_dup_version = 0
2022-04-28T20:32:11.034+0000 7f123535b700 20 trim 593'243 (547'242) clone 3:de28928d:::smithi01883722-15:19e by unknown.0.0:0 2022-04-28T20:30:39.964912+0000 0 snaps [19a,199,193,190,18f,18a,182,17e] ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0
2022-04-28T20:32:11.034+0000 7f123535b700 20 trim 593'244 (547'242) modify 3:de28928d:::smithi01883722-15:head by client.4545.0:5158 2022-04-28T20:31:32.988871+0000 0 ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0
2022-04-28T20:32:11.034+0000 7f123535b700 10 osd.7 pg_epoch: 593 pg[3.3bs0( v 547'242 lc 52'38 (0'0,547'242] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=0 lpr=591 pi=[548,591)/3 crt=547'242 mlcod 0'0 active+recovery_wait+degraded m=31 mbc={0={(0+1)=17},1={(1+0)=17},2={(1+1)=17}} trimq=114 ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] append_log log((0'0,547'242], crt=547'242) [593'243 (547'242) clone 3:de28928d:::smithi01883722-15:19e by unknown.0.0:0 2022-04-28T20:30:39.964912+0000 0 snaps [19a,199,193,190,18f,18a,182,17e] ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0,593'244 (547'242) modify 3:de28928d:::smithi01883722-15:head by client.4545.0:5158 2022-04-28T20:31:32.988871+0000 0 ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0]
2022-04-28T20:32:11.034+0000 7f123535b700 10 osd.7 pg_epoch: 593 pg[3.3bs0( v 593'243 lc 52'38 (0'0,593'243] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=0 lpr=591 pi=[548,591)/3 luod=547'242 lua=547'242 crt=547'242 mlcod 0'0 active+recovery_wait+degraded m=31 mbc={0={(0+1)=17},1={(1+0)=17},2={(1+1)=17}} trimq=114 ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] add_log_entry 593'243 (547'242) clone 3:de28928d:::smithi01883722-15:19e by unknown.0.0:0 2022-04-28T20:30:39.964912+0000 0 snaps [19a,199,193,190,18f,18a,182,17e] ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0
2022-04-28T20:32:11.034+0000 7f123535b700 10 osd.7 pg_epoch: 593 pg[3.3bs0( v 593'244 lc 52'38 (0'0,593'244] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=0 lpr=591 pi=[548,591)/3 luod=547'242 lua=547'242 crt=547'242 mlcod 0'0 active+recovery_wait+degraded m=31 mbc={0={(0+1)=17},1={(1+0)=17},2={(1+1)=17}} trimq=114 ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] add_log_entry 593'244 (547'242) modify 3:de28928d:::smithi01883722-15:head by client.4545.0:5158 2022-04-28T20:31:32.988871+0000 0 ObjectCleanRegions clean_offsets: [0~18446744073709551615], clean_omap: 1, new_object: 0
2022-04-28T20:32:11.034+0000 7f123535b700 10 osd.7 pg_epoch: 593 pg[3.3bs0( v 593'244 lc 52'38 (0'0,593'244] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=0 lpr=591 pi=[548,591)/3 luod=547'242 lua=547'242 crt=547'242 mlcod 0'0 active+recovery_wait+degraded m=31 mbc={0={(0+1)=17},1={(1+0)=17},2={(1+1)=17}} trimq=114 ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] append_log approx pg log length = 244
2022-04-28T20:32:11.034+0000 7f123535b700 10 osd.7 pg_epoch: 593 pg[3.3bs0( v 593'244 lc 52'38 (0'0,593'244] local-lis/les=591/592 n=11 ec=548/21 lis/c=591/548 les/c/f=592/549/0 sis=591) [7,2,0]p7(0) r=0 lpr=591 pi=[548,591)/3 luod=547'242 lua=547'242 crt=547'242 mlcod 0'0 active+recovery_wait+degraded m=31 mbc={0={(0+1)=17},1={(1+0)=17},2={(1+1)=17}} trimq=114 ps=[35~1,63~1,6b~1,6e~6,76~1,7a~1,7c~3,81~2,84~1,86~3,8a~4,90~3,95~1,97~1,99~1]] append_log transaction_applied = 1
2022-04-28T20:32:11.034+0000 7f123535b700 10 trim proposed trim_to = 0'0
2022-04-28T20:32:11.034+0000 7f123535b700 6 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 593'243, trimmed: , trimmed_dups: , clear_divergent_priors: 0
2022-04-28T20:32:11.035+0000 7f123535b700 10 bluestore(/var/lib/ceph/osd/ceph-7) queue_transactions ch 0x560ec8b6c1e0 3.3bs0_head
2022-04-28T20:32:11.035+0000 7f123535b700 20 bluestore(/var/lib/ceph/osd/ceph-7) _txc_create osr 0x560ec8d45760 = 0x560ec7aafc00 seq 22
2022-04-28T20:32:11.035+0000 7f123535b700 20 _pin0x560eb6e8e000 0#3:de28928d:::smithi01883722-15:head# pinned
2022-04-28T20:32:11.035+0000 7f123535b700 20 bluestore(/var/lib/ceph/osd/ceph-7).collection(3.3bs0_head 0x560ec8b6c1e0) get_onode oid 0#3:de28928d:::smithi01883722-15:19e# key 0x808000000000000003DE28928D'!smithi01883722-15!='0x000000000000019EFFFFFFFFFFFFFFFF6F
2022-04-28T20:32:11.035+0000 7f123535b700 20 bluestore(/var/lib/ceph/osd/ceph-7).collection(3.3bs0_head 0x560ec8b6c1e0) r -2 v.len 0
2022-04-28T20:32:11.035+0000 7f123535b700 20 bluestore.OnodeSpace(0x560ec8b6c320 in 0x560eb6e8e000) add 0#3:de28928d:::smithi01883722-15:19e# 0x560ecc02b680
2022-04-28T20:32:11.035+0000 7f123535b700 20 _add 0x560eb6e8e000 0#3:de28928d:::smithi01883722-15:19e# added, num=25
2022-04-28T20:32:11.035+0000 7f123535b700 15 bluestore(/var/lib/ceph/osd/ceph-7) _clone 3.3bs0_head 0#3:de28928d:::smithi01883722-15:head# -> 0#3:de28928d:::smithi01883722-15:19e#
2022-04-28T20:32:11.035+0000 7f123535b700 20 bluestore(/var/lib/ceph/osd/ceph-7) _assign_nid 3040
2022-04-28T20:32:11.035+0000 7f123535b700 20 bluestore.onode(0x560ed348f400).flush flush done
2022-04-28T20:32:11.035+0000 7f123535b700 15 bluestore(/var/lib/ceph/osd/ceph-7) _do_truncate 3.3bs0_head 0#3:de28928d:::smithi01883722-15:19e# 0x0
2022-04-28T20:32:11.035+0000 7f123535b700 15 bluestore(/var/lib/ceph/osd/ceph-7) _do_clone_range 3.3bs0_head 0#3:de28928d:::smithi01883722-15:head# -> 0#3:de28928d:::smithi01883722-15:19e# 0x0~114000 -> 0x0~114000
</pre></p>
<p>/a/nojha-2022-04-28_19:20:30-rados-pacific-distro-basic-smithi/6811513</p>
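The bluestore log above shows op 17 (a clone) failing with `r -2` because the clone source does not exist on this shard. A hypothetical Python sketch of that failure mode (illustrative names only, not the actual BlueStore C++ code):

```python
# Hypothetical sketch of why op 17 ("clone") fails with ENOENT: the
# transaction clones head -> 19e, but on this shard the head object
# was never written, so the source lookup returns -2 (ENOENT), which
# BlueStore treats as an OSD bug rather than a handled error.
import errno

class Store:
    def __init__(self):
        self.objects = {}

    def clone(self, src, dst):
        if src not in self.objects:
            return -errno.ENOENT      # the "r -2" seen in the log
        self.objects[dst] = self.objects[src]
        return 0

shard = Store()  # non-primary shard missing the head object
r = shard.clone("smithi01883722-15:head", "smithi01883722-15:19e")
print(r)  # -2
```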
bluestore - Bug #55358 (Resolved): os/bluestore: Always update the cursor position in AVL near-fi...
https://tracker.ceph.com/issues/55358
2022-04-18T22:53:39Z
Neha Ojha
nojha@redhat.com
<p>To backport <a class="external" href="https://github.com/ceph/ceph/pull/45884">https://github.com/ceph/ceph/pull/45884</a>.<br />Quincy backport <a class="external" href="https://github.com/ceph/ceph/pull/45885">https://github.com/ceph/ceph/pull/45885</a> has been merged.</p>
RADOS - Bug #54592 (Resolved): partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap dirty
https://tracker.ceph.com/issues/54592
2022-03-16T16:13:26Z
Neha Ojha
nojha@redhat.com
<p>All the OMAP write ops call mark_omap_dirty(), except CEPH_OSD_OP_OMAPRMKEYRANGE. This leads to:</p>
<p>1. incorrectly setting clean_omap<br />2. not doing omap recovery when it is actually required<br />3. PGs left in an inconsistent state after a scrub</p>
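The failure mode can be sketched in Python (hypothetical names, not the actual OSD C++ code): if one write op is missing from the set that marks omap dirty, the clean_omap flag survives the write and recovery later skips the omap data.

```python
# Hypothetical sketch of the bug: op handlers that mutate omap must
# flag it dirty, or partial recovery later skips the omap data.

class CleanRegions:
    """Tracks whether the omap portion of an object is known-clean."""
    def __init__(self):
        self.clean_omap = True

    def mark_omap_dirty(self):
        self.clean_omap = False

def apply_op(op, regions):
    # Buggy dispatch: OMAPRMKEYRANGE is absent from the dirtying set,
    # mirroring the C++ bug where that op skipped mark_omap_dirty().
    dirtying_ops = {"OMAPSETKEYS", "OMAPRMKEYS", "OMAPCLEAR"}
    if op in dirtying_ops:
        regions.mark_omap_dirty()

def needs_omap_recovery(regions):
    return not regions.clean_omap

regions = CleanRegions()
apply_op("OMAPRMKEYRANGE", regions)
print(needs_omap_recovery(regions))  # False: recovery is wrongly skipped
```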
Orchestrator - Bug #54132 (Resolved): ssh errors too verbose
https://tracker.ceph.com/issues/54132
2022-02-03T17:18:43Z
Neha Ojha
nojha@redhat.com
<p>Related to <a class="external" href="https://tracker.ceph.com/issues/53358">https://tracker.ceph.com/issues/53358</a></p>
<p>The health detail output on the gibba cluster ran to 14837 lines</p>
<pre>
HEALTH_WARN 1 hosts fail cephadm check
[WRN] CEPHADM_HOST_CHECK_FAILED: 1 hosts fail cephadm check
host gibba031 (172.21.2.131) failed check: Can't communicate with remote host `172.21.2.131`, possibly because python3 is not installed there. [Errno 113] Connect call failed ('172.21.2.131', 22)
Log: Opening SSH connection to 172.21.2.131, port 22
[conn=19, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80 [...............
00000010: 00 .
[conn=19, chan=448] Initial send window 0, packet size 32768
[conn=20, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80 [...............
00000010: 00 .
[conn=20, chan=448] Initial send window 0, packet size 32768
[conn=21, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80 [...............
00000010: 00 .
[conn=21, chan=448] Initial send window 0, packet size 32768
[conn=22, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80 [...............
00000010: 00 .
[conn=22, chan=448] Initial send window 0, packet size 32768
[conn=23, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80 [...............
00000010: 00 .
[conn=23, chan=448] Initial send window 0, packet size 32768
[conn=24, pktid=3460] Received MSG_CHANNEL_OPEN_CONFIRMATION (91), 17 bytes
00000000: 5b 00 00 01 c0 00 00 00 00 00 00 00 00 00 00 80 [...............
...
[conn=36, chan=447] Received 10977 data bytes
[conn=36, chan=447, pktid=3453] Received MSG_CHANNEL_DATA (94), 10 bytes
00000000: 5e 00 00 01 bf 00 00 00 01 0a ^.........
[conn=36, chan=447] Received 1 data byte
[conn=36, chan=447, pktid=3454] Received MSG_CHANNEL_REQUEST (98), 25 bytes
00000000: 62 00 00 01 bf 00 00 00 0b 65 78 69 74 2d 73 74 b........exit-st
00000010: 61 74 75 73 00 00 00 00 00 atus.....
[conn=36, chan=447] Received exit status 0
[conn=36, chan=447, pktid=3455] Received MSG_CHANNEL_EOF (96), 5 bytes
00000000: 60 00 00 01 bf `....
[conn=36, chan=447] Received EOF
[conn=36, chan=447, pktid=3456] Received MSG_CHANNEL_CLOSE (97), 5 bytes
00000000: 61 00 00 01 bf a....
[conn=36, chan=447] Received channel close
[conn=36, pktid=2715] Sent MSG_IGNORE (2), 5 bytes
00000000: 02 00 00 00 00 .....
[conn=36, chan=447, pktid=2716] Sent MSG_CHANNEL_CLOSE (97), 5 bytes
00000000: 61 00 00 00 00 a....
[conn=34, chan=447] Channel closed
[conn=36, chan=447] Channel closed
</pre>
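One way to address the verbosity (a minimal sketch with a hypothetical helper, not the actual cephadm code): keep the first line of the error, which carries the actual failure, and drop the protocol-level debug dump before it reaches health detail.

```python
# Hypothetical sketch: cap a multi-line SSH error before surfacing it
# in "ceph health detail", keeping the leading lines (the real failure)
# and summarizing the rest.

def summarize_error(log_text, max_lines=3):
    lines = log_text.splitlines()
    if len(lines) <= max_lines:
        return log_text
    kept = lines[:max_lines]
    return "\n".join(kept) + f"\n... ({len(lines) - max_lines} more lines truncated)"

raw = "Can't communicate with remote host `172.21.2.131`\n" + \
      "\n".join(f"[conn={i}] debug frame" for i in range(100))
print(summarize_error(raw))
```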
RADOS - Bug #53969 (Resolved): BufferList.rebuild_aligned_size_and_memory failure
https://tracker.ceph.com/issues/53969
2022-01-21T19:14:59Z
Neha Ojha
nojha@redhat.com
<pre>
[ RUN ] BufferList.rebuild_aligned_size_and_memory
../src/test/bufferlist.cc:1865: Failure
Value of: bl.rebuild_aligned(4096)
Actual: false
Expected: true
../src/test/bufferlist.cc:1866: Failure
Expected equality of these values:
bl.get_num_buffers()
Which is: 2
1
[ FAILED ] BufferList.rebuild_aligned_size_and_memory (0 ms)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/88914/consoleFull#14686218547725d52c-2930-43cb-b77a-ae0a919c2170">https://jenkins.ceph.com/job/ceph-pull-requests/88914/consoleFull#14686218547725d52c-2930-43cb-b77a-ae0a919c2170</a></p>
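What the failing assertions check can be sketched in Python (illustrative only; the real bufferlist code is C++ and also verifies memory alignment, not just lengths): after rebuild_aligned(4096), misaligned fragments should have been coalesced, so the test expects a single buffer.

```python
# Hypothetical sketch of the rebuild-aligned invariant: fragments whose
# sizes do not fall on the alignment boundary get rebuilt into one
# contiguous buffer, which is why the gtest expects get_num_buffers() == 1.

def rebuild_aligned(fragments, align):
    """Coalesce fragments whose lengths are not align-multiples."""
    if all(len(f) % align == 0 for f in fragments):
        return fragments          # already aligned, nothing to do
    return [b"".join(fragments)]  # rebuild into a single contiguous buffer

frags = [b"x" * 100, b"y" * 3996]  # neither piece is 4096-aligned
rebuilt = rebuild_aligned(frags, 4096)
print(len(rebuilt))  # 1 buffer, as the test expects
```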
mgr - Bug #53803 (Resolved): KeyError in _process_pg_summary
https://tracker.ceph.com/issues/53803
2022-01-07T16:09:43Z
Neha Ojha
nojha@redhat.com
<pre>
[ubuntu@gibba001 ~]$ sudo ceph crash info 2022-01-07T01:55:35.854871Z_838d1cff-1d9b-49f2-90a1-712fe376535e
{
"backtrace": [
" File \"/usr/share/ceph/mgr/progress/module.py\", line 715, in serve\n self._process_pg_summary()",
" File \"/usr/share/ceph/mgr/progress/module.py\", line 628, in _process_pg_summary\n ev = self._events[ev_id]",
"KeyError: '9eee6ef8-df5c-43dc-bc68-12450fb16c5e'"
],
"ceph_version": "17.0.0-9964-gf2313edc",
"crash_id": "2022-01-07T01:55:35.854871Z_838d1cff-1d9b-49f2-90a1-712fe376535e",
"entity_name": "mgr.gibba001.zptzqf",
"mgr_module": "progress",
"mgr_module_caller": "PyModuleRunner::serve",
"mgr_python_exception": "KeyError",
"os_id": "centos",
"os_name": "CentOS Linux",
"os_version": "8",
"os_version_id": "8",
"process_name": "ceph-mgr",
"stack_sig": "ffcb9e030f352f7bada217e06b4dcbd1bb696faf171b0c1913ec302dd70f1440",
"timestamp": "2022-01-07T01:55:35.854871Z",
"utsname_hostname": "gibba001",
"utsname_machine": "x86_64",
"utsname_release": "4.18.0-301.1.el8.x86_64",
"utsname_sysname": "Linux",
"utsname_version": "#1 SMP Tue Apr 13 16:24:22 UTC 2021"
}
</pre>
<p>gibba cluster running f2313edc67106699e6ab91f50fa91928e579f7ac</p>
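The crash is a direct dict lookup of an event id that is no longer in the module's event registry. A minimal Python reproduction of the pattern, with a defensive variant (the fix shown is a sketch, not necessarily what the progress module merged):

```python
# Minimal reproduction of the crash pattern: an event id can disappear
# from the registry between listing and lookup, so direct indexing
# raises KeyError, which tears down the mgr module's serve() thread.

events = {}  # event id -> event object, like the progress module's _events

def process_buggy(ev_id):
    return events[ev_id]     # KeyError if the event was completed/removed

def process_fixed(ev_id):
    ev = events.get(ev_id)   # tolerate a missing id instead of crashing
    if ev is None:
        return None
    return ev

print(process_fixed("9eee6ef8-df5c-43dc-bc68-12450fb16c5e"))  # None, no crash
```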
RADOS - Bug #53677 (Resolved): qa/tasks/backfill_toofull.py: AssertionError: 2.0 not in backfilling
https://tracker.ceph.com/issues/53677
2021-12-20T17:30:47Z
Neha Ojha
nojha@redhat.com
<pre>
2021-12-19T02:15:16.582 INFO:tasks.backfill_toofull:pg={'pgid': '2.0', 'version': "23'14551", 'reported_seq': 14906, 'reported_epoch': 33, 'state': 'active+undersized+degraded+remapped+backfill_toofull', 'last_fresh': '2021-12-19T02:14:54.380853+0000', 'last_change': '2021-12-19T02:14:54.380853+0000', 'last_active': '2021-12-19T02:14:54.380853+0000', 'last_peered': '2021-12-19T02:14:54.380853+0000', 'last_clean': '2021-12-19T01:34:28.459149+0000', 'last_became_active': '2021-12-19T01:35:54.242082+0000', 'last_became_peered': '2021-12-19T01:35:54.242082+0000', 'last_unstale': '2021-12-19T02:14:54.380853+0000', 'last_undegraded': '2021-12-19T01:35:54.225980+0000', 'last_fullsized': '2021-12-19T01:35:54.221427+0000', 'mapping_epoch': 28, 'log_start': "23'14469", 'ondisk_log_start': "23'14469", 'created': 15, 'last_epoch_clean': 16, 'parent': '0.0', 'parent_split_bits': 0, 'last_scrub': "0'0", 'last_scrub_stamp': '2021-12-19T01:33:15.714744+0000', 'last_deep_scrub': "0'0", 'last_deep_scrub_stamp': '2021-12-19T01:33:15.714744+0000', 'last_clean_scrub_stamp': '2021-12-19T01:33:15.714744+0000', 'log_size': 82, 'ondisk_log_size': 82, 'stats_invalid': False, 'dirty_stats_invalid': False, 'omap_stats_invalid': False, 'hitset_stats_invalid': False, 'hitset_bytes_stats_invalid': False, 'pin_stats_invalid': False, 'manifest_stats_invalid': False, 'snaptrimq_len': 0, 'last_scrub_duration': 0, 'scrub_schedule': 'periodic scrub scheduled @ 2021-12-20T03:48:01.242929+0000', 'stat_sum': {'num_bytes': 25794969624, 'num_objects': 6151, 'num_object_clones': 0, 'num_object_copies': 18453, 'num_objects_missing_on_primary': 0, 'num_objects_missing': 0, 'num_objects_degraded': 6151, 'num_objects_misplaced': 0, 'num_objects_unfound': 0, 'num_objects_dirty': 6151, 'num_whiteouts': 0, 'num_read': 1, 'num_read_kb': 1, 'num_write': 14551, 'num_write_kb': 42389506, 'num_scrub_errors': 0, 'num_shallow_scrub_errors': 0, 'num_deep_scrub_errors': 0, 'num_objects_recovered': 0, 'num_bytes_recovered': 
0, 'num_keys_recovered': 0, 'num_objects_omap': 0, 'num_objects_hit_set_archive': 0, 'num_bytes_hit_set_archive': 0, 'num_flush': 0, 'num_flush_kb': 0, 'num_evict': 0, 'num_evict_kb': 0, 'num_promote': 0, 'num_flush_mode_high': 0, 'num_flush_mode_low': 0, 'num_evict_mode_some': 0, 'num_evict_mode_full': 0, 'num_objects_pinned': 0, 'num_legacy_snapsets': 0, 'num_large_omap_objects': 0, 'num_objects_manifest': 0, 'num_omap_bytes': 0, 'num_omap_keys': 0, 'num_objects_repaired': 0}, 'up': [3, 1, 2], 'acting': [3, 2147483647, 2], 'avail_no_missing': ['3(0)', '2(2)'], 'object_location_counts': [{'shards': '2(2),3(0)', 'objects': 6151}], 'blocked_by': [], 'up_primary': 3, 'acting_primary': 3, 'purged_snaps': []}
2021-12-19T02:15:16.582 DEBUG:tasks.backfill_toofull:not backfilling
2021-12-19T02:15:16.583 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/run_tasks.py", line 91, in run_tasks
manager = run_one_task(taskname, ctx=ctx, config=config)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_95a7d4799b562f3bbb5ec66107094963abd62fa1/teuthology/run_tasks.py", line 70, in run_one_task
return task(**kwargs)
File "/home/teuthworker/src/github.com_ceph_ceph-c_9f128e94b494f6aacaa2c4dc4adc35fe6fbd0e12/qa/tasks/backfill_toofull.py", line 169, in task
wait_for_pg_state(manager, pgid, 'backfilling', target)
File "/home/teuthworker/src/github.com_ceph_ceph-c_9f128e94b494f6aacaa2c4dc4adc35fe6fbd0e12/qa/tasks/backfill_toofull.py", line 30, in wait_for_pg_state
assert False, '%s not in %s' % (pgid, state)
AssertionError: 2.0 not in backfilling
</pre>
<p>/a/yuriw-2021-12-18_18:14:24-rados-wip-yuriw-master-12.18.21-distro-default-smithi/6570206</p>
<p><a class="external" href="https://sentry.ceph.com/organizations/ceph/issues/15974/events/612b123bed3c427184a6a1e65e3799b6/events/?project=2">https://sentry.ceph.com/organizations/ceph/issues/15974/events/612b123bed3c427184a6a1e65e3799b6/events/?project=2</a></p>
<p>Mykola: could you please take a look?</p>
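For context, the traceback ends in the poll-until-state helper in qa/tasks/backfill_toofull.py, which asserts when the target pg state never appears before the deadline. A hedged sketch of that pattern (names and signature are illustrative, not the real teuthology API):

```python
# Sketch of the polling pattern behind the failure: repeatedly read the
# pg state and assert if the expected state never appears in time. Here
# the pg stays in backfill_toofull, so 'backfilling' is never observed.
import time

def wait_for_pg_state(get_state, pgid, state, timeout=2.0, interval=0.05):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if state in get_state(pgid):
            return
        time.sleep(interval)
    assert False, '%s not in %s' % (pgid, state)

try:
    wait_for_pg_state(
        lambda pg: 'active+undersized+degraded+remapped+backfill_toofull',
        '2.0', 'backfilling', timeout=0.3)
except AssertionError as e:
    print(e)  # 2.0 not in backfilling
```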