Activity
From 08/04/2018 to 09/02/2018
09/02/2018
- 05:07 PM Bug #23352: osd: segfaults under normal operation
- Phat Le Ton wrote:
> I've just seen the 12.2.8 release. Was your patch included in this release?
Yes. See https://tr...
- 04:35 PM Bug #23352: osd: segfaults under normal operation
- Brad Hubbard wrote:
> I've created a test package here based on 12.2.7 and including the one line patch above.
>
...
- 01:30 PM Backport #35068 (In Progress): mimic: deep scrub cannot find the bitrot if the object is cached
- 01:17 PM Backport #34532 (In Progress): mimic: force-create-pg broken
- 12:58 PM Backport #32106 (In Progress): luminous: object errors found in be_select_auth_object() aren't lo...
- 12:44 PM Backport #32108 (In Progress): mimic: object errors found in be_select_auth_object() aren't logge...
- 12:36 PM Backport #27213 (In Progress): mimic: libradosstriper conditional compile
- 12:29 PM Feature #22750: libradosstriper conditional compile
- https://github.com/ceph/ceph/pull/21983
- 12:26 PM Backport #27212 (In Progress): mimic: rpm: should change ceph-mgr package dependency from py-bcrypt ...
- 12:22 PM Backport #26910 (In Progress): luminous: PGLog.cc: saw valgrind issues while accessing complete_t...
- 12:18 PM Backport #26909 (In Progress): mimic: PGLog.cc: saw valgrind issues while accessing complete_to->...
- 12:08 PM Backport #26908 (In Progress): luminous: kv: MergeOperator name() returns string, and caller call...
- 12:06 PM Backport #26907 (In Progress): mimic: kv: MergeOperator name() returns string, and caller calls c...
- 12:04 PM Backport #25203 (In Progress): luminous: rados python bindings use prval from stack
- 12:03 PM Backport #25204 (In Progress): mimic: rados python bindings use prval from stack
- 12:00 PM Backport #25177 (In Progress): luminous: osd,mon: increase mon_max_pg_per_osd to 300
- 11:59 AM Backport #25176 (In Progress): mimic: osd,mon: increase mon_max_pg_per_osd to 300
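For reference, the option these backports raise the default of is an ordinary config option; a minimal ceph.conf fragment setting it explicitly (assuming the usual [global] section) would look like:

```ini
[global]
# upper bound on PGs per OSD that the monitors will allow before
# warning and blocking further PG creation
mon_max_pg_per_osd = 300
```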
- 11:52 AM Backport #25144 (In Progress): mimic: Automatically set expected_num_objects for new pools with >...
- 11:49 AM Backport #24992 (In Progress): mimic: valgrind-leaks.yaml: expected valgrind issues and found none
09/01/2018
- 08:49 PM Bug #22544 (Fix Under Review): objecter cannot resend split-dropped op when racing with con reset
- https://github.com/ceph/ceph/pull/23850
- 08:43 PM Bug #22544: objecter cannot resend split-dropped op when racing with con reset
- Here, it happened:...
- 07:20 AM Bug #21142: OSD crashes when loading pgs with "FAILED assert(interval.last > last)"
- Some steps tried to reproduce the bug:
1. Create a luminous cluster running in Kubernetes using hostNetwork and th...
08/31/2018
- 10:08 PM Bug #35076 (Resolved): mon: mgr options not parsed properly
- ...
- 05:17 PM Bug #35075 (New): copy-get stuck sending osd_op
- ...
- 11:07 AM Backport #35071 (Resolved): mimic: FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::p...
- https://github.com/ceph/ceph/pull/24918
- 11:06 AM Backport #35068 (Resolved): mimic: deep scrub cannot find the bitrot if the object is cached
- https://github.com/ceph/ceph/pull/23873
- 11:06 AM Backport #35067 (Resolved): luminous: deep scrub cannot find the bitrot if the object is cached
- https://github.com/ceph/ceph/pull/24802
- 08:53 AM Bug #34541 (Pending Backport): deep scrub cannot find the bitrot if the object is cached
- https://github.com/ceph/ceph/pull/23629
- 08:53 AM Bug #34541 (Resolved): deep scrub cannot find the bitrot if the object is cached
- quote from https://github.com/ceph/ceph/pull/23629
> Say a object who has data caches, but in a while later, cache...
08/30/2018
- 03:20 PM Backport #34532 (Resolved): mimic: force-create-pg broken
- https://github.com/ceph/ceph/pull/23872
- 01:53 PM Bug #26940 (Pending Backport): force-create-pg broken
- 12:06 PM Bug #34529 (Resolved): cbt tests in rados qa suite fails
- Seems http://drop.ceph.com/qa/cosbench-0.4.2.c3.1.zip is not reachable anymore....
- 05:10 AM Backport #26992 (In Progress): luminous: discover_all_missing() not always called during activating
- https://github.com/ceph/ceph/pull/23817
08/29/2018
- 09:51 PM Bug #25076 (Duplicate): MON crash when upgrading luminous v12.2.7 -> mimic v13.2.0 during ceph-fu...
- 09:29 PM Bug #34321 (New): OSD crash because of DBObjectMap.cc: 662: FAILED assert(state.legacy)
- Version: 12.2.7
The following crash is observed during normal operation of the cluster, so no particular steps to ...
- 08:08 PM Bug #27988: Warn if queue of scrubs ready to run exceeds some threshold
I want to fix 3 things here. First, user-submitted scrubs are queued as due to occur immediately, but overdue sc...
- 05:25 PM Bug #24612 (Pending Backport): FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::prune...
- 03:13 PM Bug #26994 (Resolved): test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) f...
08/28/2018
- 08:23 PM Bug #24033 (Resolved): rados: not all exceptions accept keyargs
- 08:22 PM Backport #25178 (Resolved): mimic: rados: not all exceptions accept keyargs
- 07:53 PM Backport #25178: mimic: rados: not all exceptions accept keyargs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23335
merged
- 12:42 PM Bug #33561 (New): PG repair doesn't start on an inconsistent group
- Version: 12.2.7
Issue timeline:
1. Deep-scrub discovered inconsistency in one group on a pool with 4 replicas - the ...
- 12:33 PM Bug #33420 (New): Forced deep-scrub doesn't start
- Version: 12.2.7
Issue timeline:
1. Cyclic deep-scrub discovered inconsistency:
2018-08-23 17:21:07.933458 osd....
- 11:11 AM Backport #32108 (Resolved): mimic: object errors found in be_select_auth_object() aren't logged t...
- https://github.com/ceph/ceph/pull/23870
- 11:11 AM Backport #32106 (Resolved): luminous: object errors found in be_select_auth_object() aren't logge...
- https://github.com/ceph/ceph/pull/23871
- 05:23 AM Bug #27988: Warn if queue of scrubs ready to run exceeds some threshold
- Talking with Sage, he believes there is already a warning status if you have scrubs that haven't run for more than 2x...
08/27/2018
- 09:21 PM Bug #20775 (Resolved): ceph_test_rados parameter error
- 07:55 PM Bug #25182: Upmaps forgotten after restarting OSDs
- I believe these log messages explain why the upmaps are being removed, but I'll attach the relevant section of the lo...
- 06:39 PM Bug #25182: Upmaps forgotten after restarting OSDs
- Bryan Stillwell wrote:
> What debugging logs would be helpful in figuring this out? I just restarted an OSD on my 1...
- 06:07 PM Bug #25182: Upmaps forgotten after restarting OSDs
- What debugging logs would be helpful in figuring this out? I just restarted an OSD on my 13.2.1-based cluster and al...
- 06:44 PM Bug #23576: osd: active+clean+inconsistent pg will not scrub or repair
- Created tracker https://tracker.ceph.com/issues/27988 to add warning about too many scrubs pending.
- 04:26 PM Bug #23576: osd: active+clean+inconsistent pg will not scrub or repair
- David Turner wrote:
> I came across this again as well and I did some more testing. As it turns out what resolved t...
- 04:26 PM Bug #23576: osd: active+clean+inconsistent pg will not scrub or repair
- I came across this again as well and I did some more testing. As it turns out what resolved this issue for me was inc...
- 01:33 PM Bug #23576: osd: active+clean+inconsistent pg will not scrub or repair
- Hi - we are still experiencing this issue on 12.2.7 (so latest Luminous version)...
- 06:43 PM Bug #27988 (Rejected): Warn if queue of scrubs ready to run exceeds some threshold
The sched_scrub_pg set could be scanned during a new insert and the number of scrubs that are ready to be run could...
- 05:18 PM Bug #27985 (Resolved): force-backfill sets forced_recovery instead of forced_backfill in 13.2.1
- I've noticed that using force-backfill in Mimic seems to be broken. It sets forced_recovery instead of forced_backfi...
- 04:17 AM Support #27203: osd down while bucket is deleting
- Actually, this issue still upsets me
-2> 2018-08-23 16:14:52.673287 7f3aeb536700 1 heartbeat_map is_healthy 'OS...
08/26/2018
- 12:50 PM Bug #24612: FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::prune_init()
- https://github.com/ceph/ceph/pull/23742
Currently missing: a reproducer. Reproducing may not be trivial because th...
08/25/2018
- 08:42 PM Bug #27363 (New): 'rbd rm' does not clean tiered pool completly
- mimic (13.2.1)
linux kernel: 4.18.3-1.el7.elrepo.x86_64
ceph osd crush rule create-replicated hddreplrule default...
- 05:26 PM Bug #27362 (New): Wrong erasure pool MAX AVAIL size calculation with technique=reed_sol_r6_op
- ...
- 05:53 AM Bug #24022: "ceph tell osd.x bench" writes resulting JSON to stderr instead of stdout.
- luminous backport https://github.com/ceph/ceph/pull/23680
08/24/2018
- 05:14 PM Bug #25084 (Resolved): Attempt to read object that can't be repaired loops forever
- 05:13 PM Bug #25108 (Pending Backport): object errors found in be_select_auth_object() aren't logged the same
- 05:12 PM Bug #24801: PG num_bytes becomes huge
So far with assert added to object_stat_sum_t::add() we saw this. Still not sure why the num_bytes is off.
<pr...
- 12:54 PM Bug #24612 (In Progress): FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::prune_init()
- 02:00 AM Backport #26931 (In Progress): mimic: scrub livelock
- https://github.com/ceph/ceph/pull/23722
08/23/2018
- 09:22 PM Backport #27213 (Resolved): mimic: libradosstriper conditional compile
- https://github.com/ceph/ceph/pull/23869
- 09:21 PM Backport #27212 (Resolved): mimic: rpm: should change ceph-mgr package dependency from py-bcrypt to ...
- https://github.com/ceph/ceph/pull/23868
- 09:20 PM Bug #25057 (Resolved): jewel->luminous: osdmap crc mismatch
- 09:20 PM Backport #25101 (Resolved): mimic: jewel->luminous: osdmap crc mismatch
- 11:31 AM Feature #22750 (Pending Backport): libradosstriper conditional compile
- 11:21 AM Feature #22750 (Resolved): libradosstriper conditional compile
- 11:28 AM Bug #27206 (Pending Backport): rpm: should change ceph-mgr package dependency from py-bcrypt to pyth...
- https://github.com/ceph/ceph/pull/23648
- 11:27 AM Bug #27206 (Resolved): rpm: should change ceph-mgr package dependency from py-bcrypt to python2-bcrypt
- The current dependency list of the ceph-mgr rpm package contains a py-bcrypt dependency, which conflicts with the python2-bcrypt needed for pyt...
- 11:23 AM Bug #26998 (Resolved): IOPS churn with "osd op queue" = "mclock_opclass" or "mclock_client"
- 08:19 AM Support #27203: osd down while bucket is deleting
- Format is ugly, my fault
- 07:59 AM Support #27203 (New): osd down while bucket is deleting
- My environment is
[tanweijie@gz-ceph-52-202 ~]$ ceph --version
ceph version 12.2.5 (cad919881333ac92274171586c827e0...
08/22/2018
- 10:20 PM Feature #26975: Rados level IO priority for OSD operations
- Do note that
1) "Messages" can already have priority, although its utility at this point is quite limited, it's not t...
- 09:32 PM Bug #26880 (Resolved): ceph-base debian package compiled on ubuntu/xenial has unmet runtime depen...
- 09:31 PM Backport #26881 (Resolved): mimic: ceph-base debian package compiled on ubuntu/xenial has unmet r...
- 09:19 PM Bug #26971: failed to become clean before timeout expired
- Looks like a PG is in active+undersized state. Maybe the balancer screwed up?
- 09:14 PM Backport #24359 (Resolved): mimic: osd: leaked Session on osd.7
- 09:00 PM Bug #24875 (Resolved): OSD: still returning EIO instead of recovering objects on checksum errors
- 09:00 PM Backport #25226 (Resolved): mimic: OSD: still returning EIO instead of recovering objects on chec...
- 08:46 PM Backport #25101: mimic: jewel->luminous: osdmap crc mismatch
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23226
merged
- 05:37 PM Bug #27053: qa: thrashosds: "[ERR] : 2.0 has 1 objects unfound and apparently lost"
- Similar failure seen in mimic: /a/yuriw-2018-08-21_23:27:39-rados-wip-yuri5-testing-2018-08-21-2033-mimic-distro-basi...
- 03:39 PM Bug #27053 (New): qa: thrashosds: "[ERR] : 2.0 has 1 objects unfound and apparently lost"
- This is for 12.2.8
Run: http://pulpito.ceph.com/yuriw-2018-08-21_16:17:40-rados-luminous-distro-basic-smithi/
Job...
- 05:26 PM Bug #27055 (New): mimic: FAILED assert((uint64_t)buf.st_size == expected) in SyntheticWorkloadSta...
- ...
- 08:51 AM Bug #24956: osd: parent process need to restart log service after fork, or ceph-osd will not work...
- PR:https://github.com/ceph/ceph/pull/23685
- 06:28 AM Bug #26994 (Fix Under Review): test_module_commands (tasks.mgr.test_module_selftest.TestModuleSel...
- https://github.com/ceph/ceph/pull/23681
- 03:45 AM Bug #23352 (Resolved): osd: segfaults under normal operation
- The patch is only relevant to the osds.
- 02:56 AM Bug #26998: IOPS churn with "osd op queue" = "mclock_opclass" or "mclock_client"
- 02:14 AM Bug #26998: IOPS churn with "osd op queue" = "mclock_opclass" or "mclock_client"
- - https://github.com/ceph/dmclock/pull/58
- https://github.com/ceph/ceph/pull/23643
- 02:13 AM Bug #26998 (Resolved): IOPS churn with "osd op queue" = "mclock_opclass" or "mclock_client"
- for more details on this issue, please refer to https://github.com/ceph/dmclock/pull/58 . in short, if "osd op queue"...
08/21/2018
- 08:22 PM Bug #25146 (In Progress): "rocksdb: Corruption: Can't access /000000.sst" in upgrade:mimic-x:para...
- Very early fix: https://github.com/rzarzynski/rocksdb/tree/wip-bug-25146.
The case appears more complicated as the...
- 07:58 PM Bug #26880: ceph-base debian package compiled on ubuntu/xenial has unmet runtime dependencies
- https://github.com/ceph/ceph/pull/23490 merged
- 07:30 PM Bug #26994: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) fails
- Something like this will probably fix it...
- 06:49 PM Bug #26994: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) fails
- Here's the culprit: hello isn't packaged so it can't announce its commands....
- 06:45 PM Bug #26994: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) fails
- The manager logs show all the modules except for `hello` being loaded...
- 05:55 PM Bug #26994: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) fails
- I can't reproduce this... it is as if the monitor has not received a summary of commands from the manager at the ...
- 04:39 PM Bug #26994 (Resolved): test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) f...
- in https://github.com/ceph/ceph/pull/23558/commits/00223d2364b5a6cc32eb5f83f5a642b5aef2c946 , hello is used for testi...
- 04:03 PM Backport #26992 (Resolved): luminous: discover_all_missing() not always called during activating
- https://github.com/ceph/ceph/pull/23817
- 04:01 PM Feature #26975: Rados level IO priority for OSD operations
- For "Rados level" I mean librados API at least, and implementation in OSD too.
- 03:59 PM Feature #26975 (New): Rados level IO priority for OSD operations
- What I mean:
Suppose a busy Ceph cluster.
Every OSD has many IO requests from clients in its queue. Today, all r...
- 12:56 AM Bug #26972 (Resolved): cluster [ERR] Error -2 reading object
http://qa-proxy.ceph.com/teuthology/dzafman-2018-08-17_08:14:49-rados-wip-zafman-testing4-distro-basic-smithi/29146...
- 12:42 AM Bug #26971 (Duplicate): failed to become clean before timeout expired
http://qa-proxy.ceph.com/teuthology/dzafman-2018-08-16_17:35:08-rados:thrash-wip-zafman-testing4-distro-basic-smith...
- 12:32 AM Bug #26970 (Resolved): src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
http://qa-proxy.ceph.com/teuthology/dzafman-2018-08-16_17:35:08-rados:thrash-wip-zafman-testing4-distro-basic-smith...
08/20/2018
- 11:19 PM Bug #22837 (Pending Backport): discover_all_missing() not always called during activating
Based on information from http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021512.html I'm marking ...
- 05:53 PM Feature #24232 (Fix Under Review): Add new command ceph mon status
08/19/2018
- 03:12 PM Feature #26948 (Resolved): librados: add a way to get a count of omap vals in an iterator
- https://github.com/ceph/ceph/pull/23593
- 01:58 PM Bug #24485: LibRadosTwoPoolsPP.ManifestUnset failure
- /a/kchai-2018-08-19_13:01:23-rados-wip-kefu-testing-2018-08-19-1812-distro-basic-mira/2925024/
08/17/2018
- 09:10 PM Backport #24359: mimic: osd: leaked Session on osd.7
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22339
merged
- 02:27 PM Bug #26958 (Resolved): osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()->get_log().get_...
- ...
- 09:36 AM Bug #26880 (Pending Backport): ceph-base debian package compiled on ubuntu/xenial has unmet runti...
- 03:20 AM Feature #26955: os/filestore: Add switch to turn on/off filestore dir splitting
- https://github.com/ceph/ceph/pull/23460
1. Refined HashIndex::must_split() to be more readable.
2. Introduced a h...
- 03:19 AM Feature #26955 (New): os/filestore: Add switch to turn on/off filestore dir splitting
- We had done pre-splitting and increased the split multiple, etc., at the beginning of building the cluster in order to reduce the ...
- 12:16 AM Bug #25108: object errors found in be_select_auth_object() aren't logged the same
08/16/2018
- 10:46 PM Backport #26870 (Resolved): mimic: osd: segfaults under normal operation
- 05:58 PM Bug #24612: FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::prune_init()
- /a/sage-2018-08-15_15:49:39-rados-wip-sage2-testing-2018-08-15-0731-distro-basic-smithi/2908178
08/15/2018
- 11:40 PM Bug #25084 (Fix Under Review): Attempt to read object that can't be repaired loops forever
- 11:35 PM Backport #25227 (Resolved): luminous: OSD: still returning EIO instead of recovering objects on c...
- 02:36 PM Feature #26948 (Resolved): librados: add a way to get a count of omap vals in an iterator
- We currently have functions like rados_read_op_omap_get_vals2 that hand back an iterator to a userland caller. There ...
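The feature request above asks for a count to accompany the iterator handed back to the caller. A toy Python sketch of that shape (the names `OmapIter` and `.size` are hypothetical, not the librados API):

```python
# Toy sketch: hand the caller an iterator over omap key/value pairs
# together with an up-front count, so a buffer can be sized without
# walking the iterator first. NOT the actual librados interface.

class OmapIter:
    def __init__(self, vals):
        self._items = list(vals.items())
        self.size = len(self._items)   # count is known before iteration

    def __iter__(self):
        return iter(self._items)

vals = {"key1": b"a", "key2": b"bb", "key3": b"ccc"}
it = OmapIter(vals)
buf_len = sum(len(v) for _, v in it)   # size a read buffer in one pass
```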
08/14/2018
- 10:43 PM Bug #24866: FAILED assert(0 == "past_interval start interval mismatch") in check_past_interval_bo...
- Generally yes, but I haven't been able to reproduce to test a solution. I take it this has happened to you?
I'm h...
- 01:34 PM Bug #24866: FAILED assert(0 == "past_interval start interval mismatch") in check_past_interval_bo...
- Guys, is there a way for an OSD to recover from this error?
- 09:58 PM Bug #26947 (Resolved): ENOENT on collection_move_rename from divergent activate
- ...
- 04:56 PM Bug #26940 (Fix Under Review): force-create-pg broken
- https://github.com/ceph/ceph/pull/23572
- 03:53 PM Bug #26940 (Resolved): force-create-pg broken
- This commit -
https://github.com/ceph/ceph/commit/7797ed67d2f9140b7eb9f182b06d04233e1e309c
has introduced regressio...
- 04:33 AM Backport #26908 (Need More Info): luminous: kv: MergeOperator name() returns string, and caller c...
- 04:33 AM Backport #26908 (In Progress): luminous: kv: MergeOperator name() returns string, and caller call...
- https://github.com/ceph/ceph/pull/23566
08/13/2018
- 06:46 PM Backport #26932 (Resolved): luminous: scrub livelock
- https://github.com/ceph/ceph/pull/24396 (initial backport)
https://github.com/ceph/ceph/pull/24659 (follow-on fix)
- 06:46 PM Backport #26931 (Resolved): mimic: scrub livelock
- https://github.com/ceph/ceph/pull/23722
- 06:01 PM Bug #26890 (Pending Backport): scrub livelock
- 07:38 AM Bug #20059: miscounting degraded objects
- Just adding another reference to #21803 here — this fix was meant to fix that issue as well, which it apparently did ...
- 03:14 AM Bug #23352: osd: segfaults under normal operation
- Brad Hubbard wrote:
> I've created a test package here based on 12.2.7 and including the one line patch above.
>
...
- 12:59 AM Feature #24232: Add new command ceph mon status
- PR: https://github.com/ceph/ceph/pull/23525
08/12/2018
- 10:32 PM Backport #26871 (Resolved): luminous: osd: segfaults under normal operation
- 09:16 PM Backport #26910 (Resolved): luminous: PGLog.cc: saw valgrind issues while accessing complete_to->...
- https://github.com/ceph/ceph/pull/23211
- 09:16 PM Backport #26909 (Resolved): mimic: PGLog.cc: saw valgrind issues while accessing complete_to->ver...
- https://github.com/ceph/ceph/pull/23403
- 09:16 PM Backport #26908 (Resolved): luminous: kv: MergeOperator name() returns string, and caller calls c...
- https://github.com/ceph/ceph/pull/23566
- 09:16 PM Backport #26907 (Resolved): mimic: kv: MergeOperator name() returns string, and caller calls c_st...
- https://github.com/ceph/ceph/pull/23865
- 08:38 PM Bug #21592: LibRadosCWriteOps.CmpExt got 0 instead of -4095-1
- /a/sage-2018-08-11_18:40:58-rados-wip-sage-testing-2018-08-11-1120-distro-basic-smithi/2893875...
08/10/2018
- 08:13 PM Bug #23352: osd: segfaults under normal operation
- https://github.com/ceph/ceph/pull/23459 merged
- 04:54 AM Bug #12615: Repair of Erasure Coded pool with an unrepairable object causes pg state to lose clea...
- In a replicated case in which all copies are bad, a rep_repair_primary_object() can cause loss of clean and ins...
- 04:44 AM Bug #25084: Attempt to read object that can't be repaired loops forever
- I don't think we should backport this change. In Luminous, and possibly in clusters upgraded to Mimic, there is a possibility that...
- 12:01 AM Bug #25084: Attempt to read object that can't be repaired loops forever
- https://github.com/ceph/ceph/pull/23518
- 02:54 AM Bug #26875 (Pending Backport): kv: MergeOperator name() returns string, and caller calls c_str() ...
- 02:47 AM Bug #24485: LibRadosTwoPoolsPP.ManifestUnset failure
- /a/kchai-2018-08-09_12:29:04-rados-wip-kefu-testing-2018-08-08-1144-distro-basic-smithi/2885459/
- 12:04 AM Bug #19753: Deny reservation if expected backfill size would put us over backfill_full_ratio
- https://github.com/ceph/ceph/pull/22797
08/09/2018
- 11:52 PM Bug #25084 (In Progress): Attempt to read object that can't be repaired loops forever
- What I actually ran into is that when do_read() fails because of the CRC mismatch, the recovery repair can pull from ...
- 07:56 PM Backport #24333 (In Progress): luminous: local_reserver double-reservation of backfilled pg
- PR: https://github.com/ceph/ceph/pull/23493
- 06:37 PM Feature #21366 (Resolved): tools/ceph-objectstore-tool: split filestore directories offline to ta...
- 06:37 PM Backport #24845 (Resolved): luminous: tools/ceph-objectstore-tool: split filestore directories of...
- 02:00 PM Bug #26891 (New): backfill reservation deadlock/stall
on backfill target:
- get backfill request, queue RequestBackfillPrio...
- 01:34 PM Bug #26890: scrub livelock
- https://github.com/ceph/ceph/pull/23512
- 01:32 PM Bug #26890 (Resolved): scrub livelock
- - both osds locally reserve a scrub slot
- both osds send a scrub schedule request
- both scrub requests are reject...
- 08:03 AM Bug #26880: ceph-base debian package compiled on ubuntu/xenial has unmet runtime dependencies
- Full info for ceph-base package:...
- 08:00 AM Bug #26880: ceph-base debian package compiled on ubuntu/xenial has unmet runtime dependencies
- Tried on fresh Ubuntu 16.04 vm to build Ceph packages for master branch, resulting .debs still depend on libstdc++6 (...
- 07:57 AM Bug #26880: ceph-base debian package compiled on ubuntu/xenial has unmet runtime dependencies
- As per Piotr Dałek, we can reproduce this issue on master even with the fix.
08/08/2018
- 09:42 PM Bug #25146: "rocksdb: Corruption: Can't access /000000.sst" in upgrade:mimic-x:parallel-master-di...
- I think we need to fix this sooner rather than later. My suggestion is to incorporate enough of the original rocksdb...
- 09:10 PM Bug #26878 (Closed): `osd destroy` command hangs
- NOTABUG. :)
Presumably will have to update the ceph-volume tests but the louder notification PR is well on its way t...
- 06:17 PM Bug #26878: `osd destroy` command hangs
- master PR https://github.com/ceph/ceph/pull/23492
- 12:03 PM Bug #26878 (Closed): `osd destroy` command hangs
- Running latest master without a manager daemon makes `osd destroy` commands hang.
ceph version 14.0.0-1906-g637bb2...
- 06:52 PM Feature #1126 (Rejected): crush: extend rule definition
- Actually, you can do the above: just set size=3 and you'll get 2 in the first rack and 1 in the second rack.
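A rule of the shape that comment describes, sketched in CRUSH rule syntax (the rule name and id here are made up for illustration):

```
rule two_racks {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step choose firstn 2 type rack
    step chooseleaf firstn 2 type host
    step emit
}
```

With pool size=3 the emitted candidate list (two OSDs per rack, two racks) is trimmed to three, giving two OSDs in the first rack chosen and one in the second.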
- 06:49 PM Feature #85 (Fix Under Review): osd: pg_num shrink
- https://github.com/ceph/ceph/pull/20469
- 06:33 PM Feature #84 (In Progress): mon: auto adjust pg_num as pool grows
- 05:16 PM Backport #24845: luminous: tools/ceph-objectstore-tool: split filestore directories offline to ta...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23418
merged
- 12:50 PM Backport #26881 (In Progress): mimic: ceph-base debian package compiled on ubuntu/xenial has unme...
- 12:44 PM Backport #26881 (Resolved): mimic: ceph-base debian package compiled on ubuntu/xenial has unmet r...
- https://github.com/ceph/ceph/pull/23490
- 12:44 PM Bug #26880 (Pending Backport): ceph-base debian package compiled on ubuntu/xenial has unmet runti...
- https://github.com/ceph/ceph/pull/22990
https://github.com/ceph/ceph/pull/23432
- 12:30 PM Bug #26880 (Resolved): ceph-base debian package compiled on ubuntu/xenial has unmet runtime depen...
- ...
- 08:34 AM Backport #26839 (In Progress): mimic: librados application's symbol could conflict with the libce...
- -https://github.com/ceph/ceph/pull/23484-
- 08:32 AM Backport #26840 (In Progress): luminous: librados application's symbol could conflict with the li...
- https://github.com/ceph/ceph/pull/23483
- 03:35 AM Bug #25209 (Resolved): cls/test_cls_numops.sh aborts
- 01:53 AM Bug #26875 (Fix Under Review): kv: MergeOperator name() returns string, and caller calls c_str() ...
- https://github.com/ceph/ceph/pull/23477
08/07/2018
- 11:06 PM Bug #23857: flush (manifest) vs async recovery causes out of order op
- /a/yuriw-2018-08-06_20:38:17-rados-wip_master_8_6_2018-distro-basic-smithi/2873966/
the order of events here:
<...
- 10:02 PM Bug #26875 (Resolved): kv: MergeOperator name() returns string, and caller calls c_str() on the t...
- On Tue, 7 Aug 2018, Réka Nikolett Kovács wrote:
> Hi,
>
> I am working on a bug finding tool that looks for a ...
- 07:26 PM Bug #21142: OSD crashes when loading pgs with "FAILED assert(interval.last > last)"
- ubuntu@mastercontroller01:~$ ceph -s
cluster:
id: dc00b525-7dca-435a-bfa6-c0b9b216e1f2
health: HEALT...
- 07:24 PM Bug #21142: OSD crashes when loading pgs with "FAILED assert(interval.last > last)"
- attaching new osd log.
- 06:12 PM Bug #21142: OSD crashes when loading pgs with "FAILED assert(interval.last > last)"
- We've encountered this again when we were adding a new OSD. Couldn't get the gdb as there was none installed and the ...
- 06:21 PM Bug #24866: FAILED assert(0 == "past_interval start interval mismatch") in check_past_interval_bo...
- attaching OSD log.
- 06:19 PM Bug #24866: FAILED assert(0 == "past_interval start interval mismatch") in check_past_interval_bo...
- We encountered this issue after trying out a patch for https://tracker.ceph.com/issues/21142.
Is it safe to bypas...
- 08:50 AM Bug #25108 (Fix Under Review): object errors found in be_select_auth_object() aren't logged the same
- https://github.com/ceph/ceph/pull/23376/
- 03:32 AM Bug #26868 (Pending Backport): PGLog.cc: saw valgrind issues while accessing complete_to->version
- 02:02 AM Backport #26871 (In Progress): luminous: osd: segfaults under normal operation
- https://github.com/ceph/ceph/pull/23459
- 01:25 AM Backport #26871 (Resolved): luminous: osd: segfaults under normal operation
- https://github.com/ceph/ceph/pull/23459
- 02:01 AM Backport #26870 (In Progress): mimic: osd: segfaults under normal operation
- https://github.com/ceph/ceph/pull/23458
- 01:24 AM Backport #26870 (Resolved): mimic: osd: segfaults under normal operation
- https://github.com/ceph/ceph/pull/23458
08/06/2018
- 08:30 PM Backport #24495: luminous: osd: segv in Session::have_backoff
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22729
merged
- 08:24 PM Backport #24501: luminous: osd: eternal stuck PG in 'unfound_recovery'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22546
merged
- 06:49 PM Bug #24613: luminous: rest/test.py fails with expected 200, got 400
- This one looks similar.
/a/yuriw-2018-08-03_19:54:05-rados-wip-yuri-testing-2018-08-03-1639-luminous-distro-basic-...
- 06:46 PM Bug #26868 (Fix Under Review): PGLog.cc: saw valgrind issues while accessing complete_to->version
- https://github.com/ceph/ceph/pull/23450
- 06:35 PM Bug #26868 (In Progress): PGLog.cc: saw valgrind issues while accessing complete_to->version
- 06:28 PM Bug #26868 (Resolved): PGLog.cc: saw valgrind issues while accessing complete_to->version
- This occurred during a rados run of https://tracker.ceph.com/issues/24988. This failure has not been seen on master o...
- 02:52 PM Bug #23352 (Pending Backport): osd: segfaults under normal operation
- 02:51 PM Bug #24875 (Pending Backport): OSD: still returning EIO instead of recovering objects on checksum...
08/04/2018
- 09:58 PM Bug #24174: PrimaryLogPG::try_flush_mark_clean mixplaced ctx release
- This was seen in luminous. Could this be related?...