Activity
From 08/21/2018 to 09/19/2018
09/19/2018
- 10:46 PM Subtask #36091 (Resolved): [rbd top] collect client perf stats when query is enabled
- The OSD's 'collect_perf_metrics' MgrClient callback should record whether or not the query is enabled/disabled and ma...
- 03:13 PM Bug #35974: Apparent export-diff/import-diff corruption
- @Patrick: Interesting find. If it truly is related to just that option, we will have to get a RADOS core team member ...
- 01:27 PM Bug #21143: bad RESETSESSION between OSDs?
- ...
- 12:35 PM Backport #35836 (In Progress): mimic: mon: mgr options not parse propertly
- https://github.com/ceph/ceph/pull/24176
- 09:56 AM Bug #35969 (Pending Backport): "symbol lookup error: ceph-osd: undefined symbol: _ZdaPvm" on cent...
09/18/2018
- 10:24 PM Bug #35682 (Resolved): 34164d55c839acd35bbb1be5279e3e23e3bec1fd broke the librados examples
- 07:56 PM Bug #36040: mon: Valgrind: mon (InvalidFree, InvalidWrite, InvalidRead)
- Also in Mimic: /ceph/teuthology-archive/yuriw-2018-09-13_19:40:54-fs-mimic-distro-basic-smithi/3018437/remote/smithi0...
- 05:39 PM Feature #24176: osd: add command to drop OSD cache
- Mohamad, any update on this?
- 04:20 PM Bug #36073 (In Progress): failed to recover before timeout expired -- premerge+peered PGs?
- https://github.com/ceph/ceph/pull/24064
https://github.com/ceph/ceph/pull/23985
- 03:24 PM Bug #36073 (Resolved): failed to recover before timeout expired -- premerge+peered PGs?
- Appeared between 93748a325cd8 ("Merge pull request #23944 from ceph/wip-s3a-update-mirror") and 5a3344f0e52c ("Merge ...
- 03:38 PM Bug #24485: LibRadosTwoPoolsPP.ManifestUnset failure
- /a/kchai-2018-09-18_07:16:16-rados-wip-kefu2-testing-2018-09-18-1224-distro-basic-smithi/3037527
- 03:11 PM Bug #22330: ec: src/common/interval_map.h: 161: FAILED assert(len > 0)
- Running the multimds:basic suite with --filter 'clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inl...
- 03:09 PM Bug #21931: osd: src/osd/ECBackend.cc: 2164: FAILED assert((offset + length) <= (range.first.get_...
- Running the multimds:basic suite with --filter 'clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inl...
- 01:08 AM Bug #35849 (Closed): mimic: test_envlibrados_for_rocksdb.sh: build failed with error: #endif with...
- sure.
- 12:28 AM Bug #35849: mimic: test_envlibrados_for_rocksdb.sh: build failed with error: #endif without #if
- Ah, I see what happened. The github.com/facebook/rocksdb/ was broken on the day these tasks failed. See https://githu...
09/17/2018
- 08:55 PM Bug #22329: mon: Valgrind: mon (Leak_DefinitelyLost, Leak_IndirectlyLost)
- See also #36040
- 08:38 PM Bug #22329: mon: Valgrind: mon (Leak_DefinitelyLost, Leak_IndirectlyLost)
- Still not seeing anything in RADOS runs AFAIK, but I did notice there might be some disparity in coverage....
>13:...
- 07:35 PM Bug #22329: mon: Valgrind: mon (Leak_DefinitelyLost, Leak_IndirectlyLost)
- -/ceph/teuthology-archive/pdonnell-2018-09-13_04:59:57-multimds-wip-pdonnell-testing-20180913.024004-distro-basic-smi...
- 08:54 PM Bug #36040 (New): mon: Valgrind: mon (InvalidFree, InvalidWrite, InvalidRead)
- From: /ceph/teuthology-archive/pdonnell-2018-09-13_04:59:57-multimds-wip-pdonnell-testing-20180913.024004-distro-basi...
- 02:24 PM Bug #35849: mimic: test_envlibrados_for_rocksdb.sh: build failed with error: #endif without #if
- Hey Brad,
I had reproduced it here: http://pulpito.ceph.com/nojha-2018-09-07_17:42:05-rados:singleton-mimic-distro...
- 10:28 AM Bug #35849: mimic: test_envlibrados_for_rocksdb.sh: build failed with error: #endif without #if
- Hey Neha,
Can you reproduce this?
I tried mimicking the job in a Bionic container and it builds correctly. I al...
- 07:53 AM Bug #35923 (Resolved): "ceph_assert(values.size() == 2)" in PG::peek_map_epoch()
- 06:26 AM Bug #35969 (Fix Under Review): "symbol lookup error: ceph-osd: undefined symbol: _ZdaPvm" on cent...
- 06:25 AM Bug #35969 (In Progress): "symbol lookup error: ceph-osd: undefined symbol: _ZdaPvm" on centos 7.4
- https://github.com/ceph/ceph/pull/24124
as suggested by Brad, we can just bump the BuildRequires of gperftools.
- 05:42 AM Bug #24835: osd daemon spontaneous segfault
- Soenke,
Could you upload a coredump for each of the different backtraces as well as details of your environment (t...
09/14/2018
- 08:07 PM Bug #24022 (Resolved): "ceph tell osd.x bench" writes resulting JSON to stderr instead of stdout.
- 08:06 PM Backport #35941 (Resolved): luminous: "ceph tell osd.x bench" writes resulting JSON to stderr ins...
- 04:47 PM Backport #35941: luminous: "ceph tell osd.x bench" writes resulting JSON to stderr instead of std...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23680
merged
- 08:05 PM Bug #23370 (Resolved): mgrc's ms_handle_reset races with send_pgstats()
- 08:05 PM Backport #23408 (Resolved): luminous: mgrc's ms_handle_reset races with send_pgstats()
- 04:46 PM Backport #23408: luminous: mgrc's ms_handle_reset races with send_pgstats()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23791
merged
- 08:04 PM Bug #25112 (Resolved): osd,mon: increase mon_max_pg_per_osd to 250
- 08:04 PM Backport #25177 (Resolved): luminous: osd,mon: increase mon_max_pg_per_osd to 300
- 04:45 PM Backport #25177: luminous: osd,mon: increase mon_max_pg_per_osd to 300
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23862
merged
- 08:03 PM Bug #25175 (Resolved): rados python bindings use prval from stack
- 08:03 PM Backport #25203 (Resolved): luminous: rados python bindings use prval from stack
- 04:45 PM Backport #25203: luminous: rados python bindings use prval from stack
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23864
merged
- 08:03 PM Bug #25108 (Resolved): object errors found in be_select_auth_object() aren't logged the same
- 08:02 PM Backport #32106 (Resolved): luminous: object errors found in be_select_auth_object() aren't logge...
- 04:44 PM Backport #32106: luminous: object errors found in be_select_auth_object() aren't logged the same
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23871
merged
- 06:43 PM Bug #35974: Apparent export-diff/import-diff corruption
- The exports between 3b and 3c were identical.
All the clients that are mounting the filesystems are currently using ...
- 02:35 PM Bug #35974 (Need More Info): Apparent export-diff/import-diff corruption
- @Patrick: were the resulting exports different between run 3b and 3c? The logs indicate that they read the same data ...
- 02:31 PM Bug #35969: "symbol lookup error: ceph-osd: undefined symbol: _ZdaPvm" on centos 7.4
- asked on ceph-{maintainers,users,developers} to see if we can drop the support of centos 7.4, turns out it's a no-go....
- 11:45 AM Bug #23431: OSD Segmentation fault in thread_name:safe_timer
- See #23352
The fix is in 12.2.8
- 11:42 AM Bug #23431: OSD Segmentation fault in thread_name:safe_timer
- Hi,
Same issue with ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
My OSDs ar...
- 10:53 AM Bug #35682: 34164d55c839acd35bbb1be5279e3e23e3bec1fd broke the librados examples
- Brad Hubbard wrote:
> Working on a teuthology task to do a test build of the examples as well.
@Brad: I already h...
- 04:28 AM Bug #35682 (In Progress): 34164d55c839acd35bbb1be5279e3e23e3bec1fd broke the librados examples
- 04:28 AM Bug #35682: 34164d55c839acd35bbb1be5279e3e23e3bec1fd broke the librados examples
- https://github.com/ceph/ceph/pull/24098
Working on a teuthology task to do a test build of the examples as well.
09/13/2018
- 08:51 PM Bug #35974: Apparent export-diff/import-diff corruption
- I have attached logs of the export-diffs run with "--debug-rbd=20" and "--debug-rados=20"
- 05:58 PM Bug #35974 (Need More Info): Apparent export-diff/import-diff corruption
- From the ML:
We utilize Ceph RBDs for our users' storage and need to keep data synchronized across data centres. F...
- 05:48 PM Backport #35942 (Resolved): mimic: "ceph tell osd.x bench" writes resulting JSON to stderr instea...
- 03:16 PM Backport #35942: mimic: "ceph tell osd.x bench" writes resulting JSON to stderr instead of stdout.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24041
merged
- 01:41 PM Bug #27363: 'rbd rm' does not clean tiered pool completly
- Moving this to the core team. This appears to be an issue w/ the cache tier. In my test, after removing the image, al...
- 11:51 AM Bug #21965: mon/MonClient.cc: 478: FAILED assert(authenticate_err == 0)
- hi, @sage, we encounter this assert after running the same qa case(workloads/rados_api_tests.yaml) for 30 times, but ...
- 11:34 AM Bug #35969: "symbol lookup error: ceph-osd: undefined symbol: _ZdaPvm" on centos 7.4
- this issue resembles #23653. both of them are related to new memory management APIs. #23653 was related to @aligned_a...
- 10:47 AM Bug #35969 (Resolved): "symbol lookup error: ceph-osd: undefined symbol: _ZdaPvm" on centos 7.4
- see /a/kchai-2018-09-13_01:57:49-ceph-disk-wip-fix-35906-distro-basic-ovh/3012294...
- 08:49 AM Bug #35808: ceph osd ok-to-stop result dosen't match the real situation
- xie xingguo wrote:
> I see you are using a pool min_size of 3, so no replicas is allowed to be offline and hence the...
- 08:46 AM Bug #35808: ceph osd ok-to-stop result dosen't match the real situation
- John Spray wrote:
> It's a little bit odd that the ok-to-stop command said 4 PGs, but you actually had 5 PGs go inco...
- 08:09 AM Documentation #35968: [doc][jewel] sync documentation "OSD Config Reference" default values with ...
- ...
- 08:09 AM Documentation #35968: [doc][jewel] sync documentation "OSD Config Reference" default values with ...
- OPTION(osd_map_cache_size, OPT_INT, 200)
OPTION(osd_scrub_during_recovery, OPT_BOOL, false) // Allow new scrubs to...
- 08:08 AM Documentation #35968 (Won't Fix): [doc][jewel] sync documentation "OSD Config Reference" default ...
- http://docs.ceph.com/docs/jewel/rados/configuration/osd-config-ref/
Change following default values to the one use...
- 08:05 AM Documentation #35967 (Resolved): [doc] sync documentation "OSD Config Reference" default values w...
- for:
http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
http://docs.ceph.com/docs/mimic/rados/con...
- 05:52 AM Backport #35964 (Resolved): mimic: RADOS: probably missing clone location for async_recovery_targets
- https://github.com/ceph/ceph/pull/24345
- 05:51 AM Bug #26875 (Resolved): kv: MergeOperator name() returns string, and caller calls c_str() on the t...
- 05:51 AM Backport #26908 (Resolved): luminous: kv: MergeOperator name() returns string, and caller calls c...
- 05:49 AM Backport #35963 (Resolved): mimic: choose_acting picked want > pool size
- https://github.com/ceph/ceph/pull/24344
- 05:49 AM Backport #35962 (Resolved): luminous: choose_acting picked want > pool size
- https://github.com/ceph/ceph/pull/24299
- 01:58 AM Bug #35546 (Pending Backport): RADOS: probably missing clone location for async_recovery_targets
- Changing status to Pending Backport to get the mimic backport tracker ticket opened for this.
09/12/2018
- 09:54 PM Backport #26908: luminous: kv: MergeOperator name() returns string, and caller calls c_str() on t...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23566
merged
- 09:05 PM Bug #35847: wrong cluster_network doesn't cause any errors and ends up using monitor network?
- Agreed, we should have a clear error when one of the networks does not work.
- 09:03 PM Bug #35924 (Pending Backport): choose_acting picked want > pool size
- 08:07 PM Bug #35542: Backfill and recovery should validate all checksums
- Sage Weil wrote:
> I'm unclear what checksum is not being checked. There is only *sometimes* a full object checksum...
- 03:34 PM Bug #35542: Backfill and recovery should validate all checksums
- I'm unclear what checksum is not being checked. There is only *sometimes* a full object checksum that we can validat...
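Sage's point about the *sometimes* digest can be illustrated with a toy check (a sketch only: `zlib.crc32` stands in for Ceph's crc32c, and the function name is hypothetical, not the OSD's actual backfill path):

```python
import zlib

def verify_pushed_object(data, expected_digest):
    # A full-object data digest is only recorded sometimes (e.g. set by a
    # deep scrub and invalidated by subsequent writes), so a missing
    # digest cannot be treated as a validation failure.
    if expected_digest is None:
        return True
    return zlib.crc32(data) == expected_digest

payload = b"rbd object payload"
assert verify_pushed_object(payload, zlib.crc32(payload))
assert verify_pushed_object(payload, None)  # no stored digest: must accept
assert not verify_pushed_object(payload, (zlib.crc32(payload) + 1) & 0xFFFFFFFF)
```

This is why "validate all checksums" is not straightforward: the digest a recovering OSD could check simply does not exist for recently written objects.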
- 04:12 PM Bug #26868 (Resolved): PGLog.cc: saw valgrind issues while accessing complete_to->version
- 04:12 PM Backport #26910 (Resolved): luminous: PGLog.cc: saw valgrind issues while accessing complete_to->...
- 03:24 PM Backport #26910: luminous: PGLog.cc: saw valgrind issues while accessing complete_to->version
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23211
merged
- 04:11 PM Bug #25198 (Resolved): FAILED assert(trim_to <= info.last_complete) in PGLog::trim()
- 04:11 PM Backport #25199 (Resolved): luminous: FAILED assert(trim_to <= info.last_complete) in PGLog::trim()
- 03:24 PM Backport #25199: luminous: FAILED assert(trim_to <= info.last_complete) in PGLog::trim()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23211
merged
- 04:11 PM Bug #25184 (Resolved): osd/PGLog.cc: use lgeneric_subdout instead of generic_dout
- 04:11 PM Backport #25219 (Resolved): luminous: osd/PGLog.cc: use lgeneric_subdout instead of generic_dout
- 03:24 PM Backport #25219: luminous: osd/PGLog.cc: use lgeneric_subdout instead of generic_dout
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23211
merged
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
- 04:11 PM Feature #23979 (Resolved): Limit pg log length during recovery/backfill so that we don't run out ...
- 04:10 PM Backport #24988 (Resolved): luminous: Limit pg log length during recovery/backfill so that we don...
- 03:24 PM Backport #24988: luminous: Limit pg log length during recovery/backfill so that we don't run out ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23211
merged
- 03:58 PM Bug #35955 (Resolved): ceph-objectstore-tool past_intervals broken
- ...
- 02:28 PM Bug #23879: test_mon_osdmap_prune.sh fails
- ...
- 02:22 PM Bug #20086: LibRadosLockECPP.LockSharedDurPP gets EEXIST
- http://pulpito.ceph.com/joshd-2018-09-12_06:44:56-rados-wip-luminous-cache-autotune-distro-basic-smithi/3010389/
<...
- 02:04 PM Bug #35947: mon_status doesn't populate outside_quorum when some mons are down
- @sage: indeed :-).
Maybe rename the original use for "outside_quorum" to "outside_election" or something similar t... - 01:48 PM Bug #35947: mon_status doesn't populate outside_quorum when some mons are down
- We could add a new field for monitors that are... not part of the quorum, but I'm not sure what I'd call it if not "o...
- 01:29 PM Bug #35947: mon_status doesn't populate outside_quorum when some mons are down
- Yes, `outside quorum` is solely used to track which monitors are outside of the quorum during an election; once the e...
- 12:52 PM Bug #35947: mon_status doesn't populate outside_quorum when some mons are down
- Let me see if I get this right.
'After a successful election, `outside_quorum` is cleared."
^^ Do I understand ...
- 12:45 PM Bug #35947: mon_status doesn't populate outside_quorum when some mons are down
- `outside quorum` does not pertain to down monitors. We may change that if people think it's more obvious, but the mai...
- 11:50 AM Bug #35947: mon_status doesn't populate outside_quorum when some mons are down
- ceph mon_status -f json | jq '.outside_quorum'
[]
^^ HEALTH_OK
ceph mon_status -f json | jq '.outside_quorum'
...
- 10:53 AM Bug #35947: mon_status doesn't populate outside_quorum when some mons are down
- The structure in question is the mon_status output, so it would be useful if you could look at the output of the mon_...
- 08:05 AM Bug #35947 (New): mon_status doesn't populate outside_quorum when some mons are down
- I noticed the "mon_outside_quorum' metric always returns "0", despite if there are mons outside quorum or not:
cep...
- 02:02 PM Bug #35923 (Fix Under Review): "ceph_assert(values.size() == 2)" in PG::peek_map_epoch()
- https://github.com/ceph/ceph/pull/24061
- 01:58 PM Bug #35923 (In Progress): "ceph_assert(values.size() == 2)" in PG::peek_map_epoch()
- This is fall-out from merge vs delete pg resurrection:...
- 05:22 AM Bug #35682: 34164d55c839acd35bbb1be5279e3e23e3bec1fd broke the librados examples
- Thanks John, That's one of the options I'm looking into.
09/11/2018
- 08:05 PM Backport #35942 (In Progress): mimic: "ceph tell osd.x bench" writes resulting JSON to stderr ins...
- 07:17 PM Backport #35942 (Resolved): mimic: "ceph tell osd.x bench" writes resulting JSON to stderr instea...
- https://github.com/ceph/ceph/pull/24041
- 08:01 PM Backport #35941 (In Progress): luminous: "ceph tell osd.x bench" writes resulting JSON to stderr ...
- 07:17 PM Backport #35941 (Resolved): luminous: "ceph tell osd.x bench" writes resulting JSON to stderr ins...
- https://github.com/ceph/ceph/pull/23680
- 04:30 PM Bug #24022 (Pending Backport): "ceph tell osd.x bench" writes resulting JSON to stderr instead of...
- 04:12 PM Bug #35924 (Fix Under Review): choose_acting picked want > pool size
- https://github.com/ceph/ceph/pull/24035
- 02:24 PM Bug #35924 (Resolved): choose_acting picked want > pool size
- ...
- 03:58 PM Bug #20694: osd/ReplicatedBackend.cc: 1417: FAILED assert(get_parent()->get_log().get_log().obje...
- Seen in mimic: /a/yuriw-2018-09-10_16:59:58-rados-wip-yuri-testing-2018-09-10-1525-mimic-distro-basic-smithi/3002608/
- 12:57 PM Bug #35923: "ceph_assert(values.size() == 2)" in PG::peek_map_epoch()
- #10629 has the same backtrace.
- 12:55 PM Bug #35923 (Resolved): "ceph_assert(values.size() == 2)" in PG::peek_map_epoch()
- now, there are two keys to check:...
- 12:27 PM Bug #35833 (Resolved): error: 'unique_ptr' in namespace 'std' does not name a type when compiling...
- 12:24 PM Feature #35544 (Resolved): "osd df" should show OSD state
- 12:19 PM Bug #35682: 34164d55c839acd35bbb1be5279e3e23e3bec1fd broke the librados examples
- I'm seeing the same thing.
I'm guessing that this is happening because the include of assert.h in buffer.h is pick...
- 12:05 PM Bug #23879: test_mon_osdmap_prune.sh fails
- /a/kchai-2018-09-11_09:51:05-rados-wip-kefu-testing-2018-09-10-1219-distro-basic-mira/3005452/teuthology.log
<pr...
- 10:45 AM Bug #35808: ceph osd ok-to-stop result dosen't match the real situation
- It's a little bit odd that the ok-to-stop command said 4 PGs, but you actually had 5 PGs go incomplete, but basically...
- 09:03 AM Bug #35808: ceph osd ok-to-stop result dosen't match the real situation
- I see you are using a pool min_size of 3, so no replica is allowed to be offline, and hence the result is expected?
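The rule behind this answer can be sketched in a few lines (a simplified illustration of the availability constraint, not Ceph's actual ok-to-stop implementation, which walks the PG map; the function name is hypothetical):

```python
def ok_to_stop(pool_size, min_size, osds_to_stop=1):
    # A PG keeps serving I/O only while at least min_size replicas remain
    # up, so stopping OSDs is "ok" only if the survivors still meet that
    # floor.
    return pool_size - osds_to_stop >= min_size

# With min_size == size (3/3), stopping any single OSD would drop PGs
# below min_size, so refusing is the expected result.
assert not ok_to_stop(pool_size=3, min_size=3)
# With the more usual min_size of 2, stopping one OSD is safe,
# but stopping two at once is not.
assert ok_to_stop(pool_size=3, min_size=2)
assert not ok_to_stop(pool_size=3, min_size=2, osds_to_stop=2)
```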
09/10/2018
- 09:00 PM Bug #35845: osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- The test code that needs to be fixed is only present in Mimic and master.
- 03:24 PM Bug #35845: osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- https://github.com/ceph/ceph/pull/24013
- 03:36 PM Backport #35909 (Resolved): mimic: osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- https://github.com/ceph/ceph/pull/24017
09/09/2018
- 08:15 AM Tasks #25186 (In Progress): setup repo for building dependencies like boost, rocksdb, which are n...
- https://github.com/ceph/ceph/pull/23995
09/08/2018
- 05:05 PM Bug #24975 (Resolved): valgrind-leaks.yaml: expected valgrind issues and found none
- 05:05 PM Backport #24992 (Resolved): mimic: valgrind-leaks.yaml: expected valgrind issues and found none
- 03:33 PM Backport #24992: mimic: valgrind-leaks.yaml: expected valgrind issues and found none
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23744
merged
- 06:31 AM Bug #35845: osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- Adding jewel because we are seeing an "osd-scrub-repair.sh" make check issue in jewel (not sure if it's this same iss...
- 02:56 AM Bug #35845: osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- It turns out this is just a difference in the iterator for the function throwing the exception....
- 01:51 AM Bug #35546 (Resolved): RADOS: probably missing clone location for async_recovery_targets
09/07/2018
- 11:40 PM Bug #35833 (In Progress): error: 'unique_ptr' in namespace 'std' does not name a type when compil...
- https://github.com/ceph/ceph/pull/23992
- 07:28 AM Bug #35833 (Resolved): error: 'unique_ptr' in namespace 'std' does not name a type when compiling...
- We should be able to compile a librados client program, such as examples/librados/hello_world.cc, on a system with li...
- 11:26 PM Bug #35845: osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- -https://github.com/ceph/ceph/pull/23991-
- 11:15 PM Bug #35845 (In Progress): osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- 06:43 PM Bug #35845: osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- This must be caused by differences in the grep command on different distributions. It passes sometimes including o...
- 04:39 PM Bug #35845 (Resolved): osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- ...
- 11:09 PM Backport #35855 (Resolved): mimic: should remove mentioning of "scrubq" in ceph(8) manpage
- https://github.com/ceph/ceph/pull/24210
- 11:09 PM Backport #35854 (Resolved): luminous: should remove mentioning of "scrubq" in ceph(8) manpage
- https://github.com/ceph/ceph/pull/24211
- 09:51 PM Feature #85: osd: pg_num shrink
- Yeah, merged now!
- 07:38 PM Feature #85: osd: pg_num shrink
- Sage, were you going to merge https://github.com/ceph/ceph/pull/20469 ?
- 06:55 PM Feature #85 (Resolved): osd: pg_num shrink
- \o/
- 08:54 PM Bug #24801: PG num_bytes becomes huge
- Fix is included in pull request https://github.com/ceph/ceph/pull/22797
- 06:57 PM Bug #22165 (Resolved): split pg not actually created, gets stuck in state unknown
- by commit fdfc5c64
- 06:56 PM Bug #26970 (Resolved): src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
- 06:05 PM Bug #35849 (Closed): mimic: test_envlibrados_for_rocksdb.sh: build failed with error: #endif with...
- ...
- 05:28 PM Bug #35847 (Resolved): wrong cluster_network doesn't cause any errors and ends up using monitor n...
- 1) set any random valid cluster network eg: cluster_network: 17.20.20.0/24
2) setup cluster, notice the cluster com...
- 02:10 PM Bug #20694: osd/ReplicatedBackend.cc: 1417: FAILED assert(get_parent()->get_log().get_log().obje...
- /a/sage-2018-09-06_16:02:58-rados-wip-sage-testing-2018-09-05-1559-distro-basic-smithi/2985475
- 12:37 PM Backport #35067 (In Progress): luminous: deep scrub cannot find the bitrot if the object is cached
- -https://github.com/ceph/ceph/pull/23980-
- 11:12 AM Bug #35813 (Pending Backport): should remove mentioning of "scrubq" in ceph(8) manpage
- 10:22 AM Backport #35844 (Resolved): luminous: objecter cannot resend split-dropped op when racing with co...
- https://github.com/ceph/ceph/pull/24188
- 10:22 AM Backport #35843 (Resolved): mimic: objecter cannot resend split-dropped op when racing with con r...
- https://github.com/ceph/ceph/pull/24970
- 10:20 AM Backport #35836 (Resolved): mimic: mon: mgr options not parse propertly
- https://github.com/ceph/ceph/pull/24176
09/06/2018
- 09:21 PM Support #27203: osd down while bucket is deleting
- The heartbeat timing out like that means the OSD is overloaded - in particular delete operations for RGW can overwhel...
- 02:11 PM Bug #35813 (Fix Under Review): should remove mentioning of "scrubq" in ceph(8) manpage
- 02:09 PM Bug #35813 (Resolved): should remove mentioning of "scrubq" in ceph(8) manpage
- https://github.com/ceph/ceph/pull/23959
- 01:59 PM Bug #35076 (Pending Backport): mon: mgr options not parse propertly
- 11:16 AM Bug #27206 (Resolved): rpm: should change ceph-mgr package depency from py-bcrypt to python2-bcrypt
- 11:15 AM Backport #27212 (Resolved): mimic: rpm: should change ceph-mgr package depency from py-bcrypt to ...
- 09:57 AM Bug #35810 (Can't reproduce): FAILED assert(entries.begin()->version > info.last_update)
- ...
- 09:01 AM Bug #35808 (Rejected): ceph osd ok-to-stop result dosen't match the real situation
- The cluster is in healthy status, when I tried to run ceph osd ok-to-stop 0 it returns...
- 06:41 AM Backport #25144 (Resolved): mimic: Automatically set expected_num_objects for new pools with >=10...
- 06:41 AM Feature #22750 (Resolved): libradosstriper conditional compile
- w00t!
- 06:40 AM Backport #27213 (Resolved): mimic: libradosstriper conditional compile
- 06:37 AM Backport #32108 (Resolved): mimic: object errors found in be_select_auth_object() aren't logged t...
- 06:26 AM Bug #26940 (Resolved): force-create-pg broken
- 06:26 AM Backport #34532 (Resolved): mimic: force-create-pg broken
- 06:08 AM Backport #35068 (Resolved): mimic: deep scrub cannot find the bitrot if the object is cached
- 06:06 AM Backport #26907 (Resolved): mimic: kv: MergeOperator name() returns string, and caller calls c_st...
- 05:51 AM Backport #26909 (Resolved): mimic: PGLog.cc: saw valgrind issues while accessing complete_to->ver...
- 05:50 AM Backport #25220 (Resolved): mimic: osd/PGLog.cc: use lgeneric_subdout instead of generic_dout
- 05:50 AM Backport #25200 (Resolved): mimic: FAILED assert(trim_to <= info.last_complete) in PGLog::trim()
- 05:50 AM Backport #24989 (Resolved): mimic: Limit pg log length during recovery/backfill so that we don't ...
- 05:00 AM Bug #25153: output format is invalid of the crush tree json dumper
- New commit to solve the review problems: https://github.com/ceph/ceph/pull/23319/commits/fa1056cfc32ce3bf932d7c71f281...
- 02:36 AM Bug #27988 (In Progress): Warn if queue of scrubs ready to run exceeds some threshold
- https://github.com/ceph/ceph/pull/23848
- 12:33 AM Bug #22544 (Pending Backport): objecter cannot resend split-dropped op when racing with con reset
09/05/2018
- 09:52 PM Backport #25144: mimic: Automatically set expected_num_objects for new pools with >=100 PGs per OSD
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23860
merged
- 09:50 PM Backport #27213: mimic: libradosstriper conditional compile
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23869
merged
- 09:43 PM Backport #26931 (Resolved): mimic: scrub livelock
- 09:42 PM Backport #25176 (Resolved): mimic: osd,mon: increase mon_max_pg_per_osd to 300
- 09:42 PM Backport #25204 (Resolved): mimic: rados python bindings use prval from stack
- 09:39 PM Backport #32108: mimic: object errors found in be_select_auth_object() aren't logged the same
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23870
merged
Reviewed-by: David Zafman <dzafman@redhat.com>
- 09:38 PM Backport #34532: mimic: force-create-pg broken
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23872
merged
- 09:37 PM Backport #35068: mimic: deep scrub cannot find the bitrot if the object is cached
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23873
merged
- 09:32 PM Backport #26909: mimic: PGLog.cc: saw valgrind issues while accessing complete_to->version
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23403
merged
- 09:32 PM Backport #25220: mimic: osd/PGLog.cc: use lgeneric_subdout instead of generic_dout
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23403
merged
- 09:32 PM Backport #25200: mimic: FAILED assert(trim_to <= info.last_complete) in PGLog::trim()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23403
merged
- 09:32 PM Backport #24989: mimic: Limit pg log length during recovery/backfill so that we don't run out of ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23403
merged
- 09:24 PM Backport #26907: mimic: kv: MergeOperator name() returns string, and caller calls c_str() on the ...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23865
merged
- 10:33 AM Feature #35687 (New): rgw: storing and reading total usage data to construct rgw service monitor ...
- There are problems for the current rgw usage data storing and reading implementation:
1. The usage data will be ac...
- 05:08 AM Bug #35682 (Resolved): 34164d55c839acd35bbb1be5279e3e23e3bec1fd broke the librados examples
- ...
- 12:58 AM Bug #35546 (Resolved): RADOS: probably missing clone location for async_recovery_targets
- https://github.com/ceph/ceph/pull/23895
09/04/2018
- 11:33 PM Feature #35545: mon: show warning when running with an even number of mons
- https://github.com/ceph/ceph/pull/23922
- 11:16 PM Feature #35545 (New): mon: show warning when running with an even number of mons
- People seem to like configuring clusters with 4 monitors for some reason. I've seen this more than once in the wild.
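A quick way to see why the warning is warranted: a Paxos quorum needs a strict majority, so a fourth monitor adds no fault tolerance over three; it only adds one more thing that can fail. A minimal sketch (not Ceph code):

```python
def failures_tolerated(num_mons):
    # A quorum requires a strict majority: floor(n / 2) + 1 monitors.
    quorum = num_mons // 2 + 1
    return num_mons - quorum

# Four monitors tolerate exactly as many failures as three; five is the
# next count that actually improves availability.
assert failures_tolerated(3) == 1
assert failures_tolerated(4) == 1
assert failures_tolerated(5) == 2
```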
- 09:48 PM Feature #35544: "osd df" should show OSD state
- Implementation is here: https://github.com/ceph/ceph/pull/23921
- 09:31 PM Feature #35544 (Resolved): "osd df" should show OSD state
- It's mildly irritating that "osd df (tree)" doesn't show the OSD status while "osd tree" does.
- 06:40 PM Bug #35542: Backfill and recovery should validate all checksums
- Nope, 12.2.6 was the one that didn't handle checksums properly. So this looks like a real issue, although I think we ...
- 06:27 PM Bug #35542: Backfill and recovery should validate all checksums
- Oh, this may just be 12.2.5 being broken? In which case we can close.
- 06:27 PM Bug #35542 (Won't Fix): Backfill and recovery should validate all checksums
- From the thread "Copying without crc check when peering may lack reliability" on ceph-devel, it appears that backfill...
- 04:45 PM Feature #19944 (Rejected): [RFE]: add option/support config persistence with ceph tell command
- This seems to be addressed by the centralized config introduced in mimic.
- 01:49 PM Bug #21557: osd.6 found snap mapper error on pg 2.0 oid 2:0e781f33:::smithi14431805-379 ... :187 ...
- Another one: ...
- 12:09 PM Bug #34529 (Resolved): cbt tests in rados qa suite fails
- This was a result of http://status.sepia.ceph.com/incident/3676
dmick restarted the VM
- 11:38 AM Tasks #25186: setup repo for building dependencies like boost, rocksdb, which are not provided by...
- for building ceph-libboost, use https://github.com/tchaikov/boost...
09/02/2018
- 05:07 PM Bug #23352: osd: segfaults under normal operation
- Phat Le Ton wrote:
> I've just seen 12.2.8 release, Was your patch included in this release ?
Yes. See https://tr...
- 04:35 PM Bug #23352: osd: segfaults under normal operation
- Brad Hubbard wrote:
> I've created a test package here based on 12.2.7 and including the one line patch above.
>
...
- 01:30 PM Backport #35068 (In Progress): mimic: deep scrub cannot find the bitrot if the object is cached
- 01:17 PM Backport #34532 (In Progress): mimic: force-create-pg broken
- 12:58 PM Backport #32106 (In Progress): luminous: object errors found in be_select_auth_object() aren't lo...
- 12:44 PM Backport #32108 (In Progress): mimic: object errors found in be_select_auth_object() aren't logge...
- 12:36 PM Backport #27213 (In Progress): mimic: libradosstriper conditional compile
- 12:29 PM Feature #22750: libradosstriper conditional compile
- https://github.com/ceph/ceph/pull/21983
- 12:26 PM Backport #27212 (In Progress): mimic: rpm: should change ceph-mgr package depency from py-bcrypt ...
- 12:22 PM Backport #26910 (In Progress): luminous: PGLog.cc: saw valgrind issues while accessing complete_t...
- 12:18 PM Backport #26909 (In Progress): mimic: PGLog.cc: saw valgrind issues while accessing complete_to->...
- 12:08 PM Backport #26908 (In Progress): luminous: kv: MergeOperator name() returns string, and caller call...
- 12:06 PM Backport #26907 (In Progress): mimic: kv: MergeOperator name() returns string, and caller calls c...
- 12:04 PM Backport #25203 (In Progress): luminous: rados python bindings use prval from stack
- 12:03 PM Backport #25204 (In Progress): mimic: rados python bindings use prval from stack
- 12:00 PM Backport #25177 (In Progress): luminous: osd,mon: increase mon_max_pg_per_osd to 300
- 11:59 AM Backport #25176 (In Progress): mimic: osd,mon: increase mon_max_pg_per_osd to 300
- 11:52 AM Backport #25144 (In Progress): mimic: Automatically set expected_num_objects for new pools with >...
- 11:49 AM Backport #24992 (In Progress): mimic: valgrind-leaks.yaml: expected valgrind issues and found none
09/01/2018
- 08:49 PM Bug #22544 (Fix Under Review): objecter cannot resend split-dropped op when racing with con reset
- https://github.com/ceph/ceph/pull/23850
- 08:43 PM Bug #22544: objecter cannot resend split-dropped op when racing with con reset
- Here, it happened:...
- 07:20 AM Bug #21142: OSD crashes when loading pgs with "FAILED assert(interval.last > last)"
- Steps tried to reproduce the bug:
1. Create a luminous cluster running in Kubernetes using hostNetwork and th...
08/31/2018
- 10:08 PM Bug #35076 (Resolved): mon: mgr options not parse propertly
- ...
- 05:17 PM Bug #35075 (New): copy-get stuck sending osd_op
- ...
- 11:07 AM Backport #35071 (Resolved): mimic: FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::p...
- https://github.com/ceph/ceph/pull/24918
- 11:06 AM Backport #35068 (Resolved): mimic: deep scrub cannot find the bitrot if the object is cached
- https://github.com/ceph/ceph/pull/23873
- 11:06 AM Backport #35067 (Resolved): luminous: deep scrub cannot find the bitrot if the object is cached
- https://github.com/ceph/ceph/pull/24802
- 08:53 AM Bug #34541 (Pending Backport): deep scrub cannot find the bitrot if the object is cached
- https://github.com/ceph/ceph/pull/23629
- 08:53 AM Bug #34541 (Resolved): deep scrub cannot find the bitrot if the object is cached
- quote from https://github.com/ceph/ceph/pull/23629
> Say an object has data cached, but a while later, cache...
08/30/2018
- 03:20 PM Backport #34532 (Resolved): mimic: force-create-pg broken
- https://github.com/ceph/ceph/pull/23872
- 01:53 PM Bug #26940 (Pending Backport): force-create-pg broken
- 12:06 PM Bug #34529 (Resolved): cbt tests in rados qa suite fails
- It seems http://drop.ceph.com/qa/cosbench-0.4.2.c3.1.zip is not reachable anymore....
- 05:10 AM Backport #26992 (In Progress): luminous: discover_all_missing() not always called during activating
- https://github.com/ceph/ceph/pull/23817
08/29/2018
- 09:51 PM Bug #25076 (Duplicate): MON crash when upgrading luminous v12.2.7 -> mimic v13.2.0 during ceph-fu...
- 09:29 PM Bug #34321 (New): OSD crash because of DBObjectMap.cc: 662: FAILED assert(state.legacy)
- Version: 12.2.7
The following crash is observed during normal operation of the cluster, so no particular steps to ...
- 08:08 PM Bug #27988: Warn if queue of scrubs ready to run exceeds some threshold
I want to fix three things here. First, user-submitted scrubs are queued as due to occur immediately, but overdue sc...
- 05:25 PM Bug #24612 (Pending Backport): FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::prune...
- 03:13 PM Bug #26994 (Resolved): test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) f...
08/28/2018
- 08:23 PM Bug #24033 (Resolved): rados: not all exceptions accept keyargs
- 08:22 PM Backport #25178 (Resolved): mimic: rados: not all exceptions accept keyargs
- 07:53 PM Backport #25178: mimic: rados: not all exceptions accept keyargs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23335
merged
- 12:42 PM Bug #33561 (New): PG repair doesn't start on an inconsistent group
- Version: 12.2.7
Issue timeline:
1. Deep-scrub discovered inconsistency in one group on a pool with 4 replicas - the ...
- 12:33 PM Bug #33420 (New): Forced deep-scrub doesn't start
- Version: 12.2.7
Issue timeline:
1. Cyclic deep-scrub discovered inconsistency:
2018-08-23 17:21:07.933458 osd....
- 11:11 AM Backport #32108 (Resolved): mimic: object errors found in be_select_auth_object() aren't logged t...
- https://github.com/ceph/ceph/pull/23870
- 11:11 AM Backport #32106 (Resolved): luminous: object errors found in be_select_auth_object() aren't logge...
- https://github.com/ceph/ceph/pull/23871
- 05:23 AM Bug #27988: Warn if queue of scrubs ready to run exceeds some threshold
- Talking with Sage, he believes there is already a warning status if you have scrubs that haven't run for more than 2x...
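The threshold check proposed in this ticket could be sketched as follows. This is a hypothetical Python illustration, not Ceph's actual C++ scrub-scheduling code; the names `count_ready_scrubs`, `sched_times`, and `warn_threshold` are invented here for clarity:

```python
# Hypothetical sketch: scan the scheduled-scrub queue and warn when the
# number of scrubs that are already due exceeds a threshold.  In Ceph the
# queue would be the OSD's sched_scrub_pg set; here it is just a list of
# scheduled timestamps.
def count_ready_scrubs(sched_times, now):
    """Count queued scrubs whose scheduled time has already passed."""
    return sum(1 for t in sched_times if t <= now)

def maybe_warn(sched_times, now, warn_threshold=10):
    """Return a warning string if too many scrubs are ready to run."""
    ready = count_ready_scrubs(sched_times, now)
    if ready > warn_threshold:
        return "HEALTH_WARN: %d scrubs ready to run (threshold %d)" % (
            ready, warn_threshold)
    return None
```

The scan is cheap enough to run on each insert, which matches the "scanned during a new insert" idea mentioned later in this ticket's comments.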
08/27/2018
- 09:21 PM Bug #20775 (Resolved): ceph_test_rados parameter error
- 07:55 PM Bug #25182: Upmaps forgotten after restarting OSDs
- I believe these log messages explain why the upmaps are being removed, but I'll attach the relevant section of the lo...
- 06:39 PM Bug #25182: Upmaps forgotten after restarting OSDs
- Bryan Stillwell wrote:
> What debugging logs would be helpful in figuring this out? I just restarted an OSD on my 1...
- 06:07 PM Bug #25182: Upmaps forgotten after restarting OSDs
- What debugging logs would be helpful in figuring this out? I just restarted an OSD on my 13.2.1-based cluster and al...
- 06:44 PM Bug #23576: osd: active+clean+inconsistent pg will not scrub or repair
- Created tracker https://tracker.ceph.com/issues/27988 to add warning about too many scrubs pending.
- 04:26 PM Bug #23576: osd: active+clean+inconsistent pg will not scrub or repair
- David Turner wrote:
> I came across this again as well and I did some more testing. As it turns out what resolved t...
- 04:26 PM Bug #23576: osd: active+clean+inconsistent pg will not scrub or repair
- I came across this again as well and did some more testing. As it turns out, what resolved this issue for me was inc...
- 01:33 PM Bug #23576: osd: active+clean+inconsistent pg will not scrub or repair
- Hi - we are still experiencing this issue on 12.2.7 (so latest Luminous version)...
- 06:43 PM Bug #27988 (Rejected): Warn if queue of scrubs ready to run exceeds some threshold
The sched_scrub_pg set could be scanned during a new insert and the number of scrubs that are ready to be run could...
- 05:18 PM Bug #27985 (Resolved): force-backfill sets forced_recovery instead of forced_backfill in 13.2.1
- I've noticed that using force-backfill in Mimic seems to be broken. It sets forced_recovery instead of forced_backfi...
- 04:17 AM Support #27203: osd down while bucket is deleting
- Actually, this issue still upsets me
-2> 2018-08-23 16:14:52.673287 7f3aeb536700 1 heartbeat_map is_healthy 'OS...
08/26/2018
- 12:50 PM Bug #24612: FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::prune_init()
- https://github.com/ceph/ceph/pull/23742
Currently missing: a reproducer. Reproducing may not be trivial because th...
08/25/2018
- 08:42 PM Bug #27363 (New): 'rbd rm' does not clean tiered pool completely
- mimic (13.2.1)
linux kernel: 4.18.3-1.el7.elrepo.x86_64
ceph osd crush rule create-replicated hddreplrule default...
- 05:26 PM Bug #27362 (New): Wrong erasure pool MAX AVAIL size calculation with technique=reed_sol_r6_op
- ...
- 05:53 AM Bug #24022: "ceph tell osd.x bench" writes resulting JSON to stderr instead of stdout.
- luminous backport https://github.com/ceph/ceph/pull/23680
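Until the backport lands, callers that want the bench JSON can work around the bug by reading stderr instead of stdout. The sketch below uses a Python stand-in subprocess to mimic the buggy behavior; the stand-in command and the `bytes_per_sec` field are illustrative assumptions, not the exact output format of `ceph tell osd.x bench`:

```python
import json
import subprocess
import sys

# Stand-in for the buggy command: it emits its JSON report on stderr,
# mimicking "ceph tell osd.x bench" before the fix.  In real use you
# would invoke the ceph CLI instead of this fake process.
fake_bench = [sys.executable, "-c",
              "import sys, json; json.dump({'bytes_per_sec': 123}, sys.stderr)"]

proc = subprocess.run(fake_bench, capture_output=True, text=True)

# Workaround: parse stderr, where the JSON incorrectly lands.
report = json.loads(proc.stderr)
print(report["bytes_per_sec"])
```

Once the fix is in, the same parse should be done on `proc.stdout` instead.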
08/24/2018
- 05:14 PM Bug #25084 (Resolved): Attempt to read object that can't be repaired loops forever
- 05:13 PM Bug #25108 (Pending Backport): object errors found in be_select_auth_object() aren't logged the same
- 05:12 PM Bug #24801: PG num_bytes becomes huge
So far, with an assert added to object_stat_sum_t::add(), we saw this. Still not sure why num_bytes is off.
<pr...
- 12:54 PM Bug #24612 (In Progress): FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::prune_init()
- 02:00 AM Backport #26931 (In Progress): mimic: scrub livelock
- https://github.com/ceph/ceph/pull/23722
08/23/2018
- 09:22 PM Backport #27213 (Resolved): mimic: libradosstriper conditional compile
- https://github.com/ceph/ceph/pull/23869
- 09:21 PM Backport #27212 (Resolved): mimic: rpm: should change ceph-mgr package depency from py-bcrypt to ...
- https://github.com/ceph/ceph/pull/23868
- 09:20 PM Bug #25057 (Resolved): jewel->luminous: osdmap crc mismatch
- 09:20 PM Backport #25101 (Resolved): mimic: jewel->luminous: osdmap crc mismatch
- 11:31 AM Feature #22750 (Pending Backport): libradosstriper conditional compile
- 11:21 AM Feature #22750 (Resolved): libradosstriper conditional compile
- 11:28 AM Bug #27206 (Pending Backport): rpm: should change ceph-mgr package depency from py-bcrypt to pyth...
- https://github.com/ceph/ceph/pull/23648
- 11:27 AM Bug #27206 (Resolved): rpm: should change ceph-mgr package depency from py-bcrypt to python2-bcrypt
- The current dependency list of the ceph-mgr rpm package contains py-bcrypt, which conflicts with the python2-bcrypt needed for pyt...
- 11:23 AM Bug #26998 (Resolved): IOPS churn with "osd op queue" = "mclock_opclass" or "mclock_client"
- 08:19 AM Support #27203: osd down while bucket is deleting
- Format is ugly, my fault
- 07:59 AM Support #27203 (New): osd down while bucket is deleting
- My environment is
[tanweijie@gz-ceph-52-202 ~]$ ceph --version
ceph version 12.2.5 (cad919881333ac92274171586c827e0...
08/22/2018
- 10:20 PM Feature #26975: Rados level IO priority for OSD operations
- Do note that
1) "Messages" can already have priority; although its utility at this point is quite limited, it's not t...
- 09:32 PM Bug #26880 (Resolved): ceph-base debian package compiled on ubuntu/xenial has unmet runtime depen...
- 09:31 PM Backport #26881 (Resolved): mimic: ceph-base debian package compiled on ubuntu/xenial has unmet r...
- 09:19 PM Bug #26971: failed to become clean before timeout expired
- Looks like a PG is in the active+undersized state. Maybe the balancer screwed up?
- 09:14 PM Backport #24359 (Resolved): mimic: osd: leaked Session on osd.7
- 09:00 PM Bug #24875 (Resolved): OSD: still returning EIO instead of recovering objects on checksum errors
- 09:00 PM Backport #25226 (Resolved): mimic: OSD: still returning EIO instead of recovering objects on chec...
- 08:46 PM Backport #25101: mimic: jewel->luminous: osdmap crc mismatch
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23226
merged
- 05:37 PM Bug #27053: qa: thrashosds: "[ERR] : 2.0 has 1 objects unfound and apparently lost"
- Similar failure seen in mimic: /a/yuriw-2018-08-21_23:27:39-rados-wip-yuri5-testing-2018-08-21-2033-mimic-distro-basi...
- 03:39 PM Bug #27053 (New): qa: thrashosds: "[ERR] : 2.0 has 1 objects unfound and apparently lost"
- This is for 12.2.8
Run: http://pulpito.ceph.com/yuriw-2018-08-21_16:17:40-rados-luminous-distro-basic-smithi/
Job...
- 05:26 PM Bug #27055 (New): mimic: FAILED assert((uint64_t)buf.st_size == expected) in SyntheticWorkloadSta...
- ...
- 08:51 AM Bug #24956: osd: parent process need to restart log service after fork, or ceph-osd will not work...
- PR:https://github.com/ceph/ceph/pull/23685
- 06:28 AM Bug #26994 (Fix Under Review): test_module_commands (tasks.mgr.test_module_selftest.TestModuleSel...
- https://github.com/ceph/ceph/pull/23681
- 03:45 AM Bug #23352 (Resolved): osd: segfaults under normal operation
- The patch is only relevant to the OSDs.
- 02:56 AM Bug #26998: IOPS churn with "osd op queue" = "mclock_opclass" or "mclock_client"
- 02:14 AM Bug #26998: IOPS churn with "osd op queue" = "mclock_opclass" or "mclock_client"
- - https://github.com/ceph/dmclock/pull/58
- https://github.com/ceph/ceph/pull/23643
- 02:13 AM Bug #26998 (Resolved): IOPS churn with "osd op queue" = "mclock_opclass" or "mclock_client"
- For more details on this issue, please refer to https://github.com/ceph/dmclock/pull/58. In short, if "osd op queue"...
08/21/2018
- 08:22 PM Bug #25146 (In Progress): "rocksdb: Corruption: Can't access /000000.sst" in upgrade:mimic-x:para...
- Very early fix: https://github.com/rzarzynski/rocksdb/tree/wip-bug-25146.
The case appears more complicated as the...
- 07:58 PM Bug #26880: ceph-base debian package compiled on ubuntu/xenial has unmet runtime dependencies
- https://github.com/ceph/ceph/pull/23490 merged
- 07:30 PM Bug #26994: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) fails
- Something like this will probably fix it...
- 06:49 PM Bug #26994: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) fails
- Here's the culprit: hello isn't packaged so it can't announce its commands....
- 06:45 PM Bug #26994: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) fails
- The manager logs show all the modules except for `hello` being loaded...
- 05:55 PM Bug #26994: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) fails
- I can't reproduce this... it is as if the monitor has not received a summary of commands from the manager at the ...
- 04:39 PM Bug #26994 (Resolved): test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest) f...
- in https://github.com/ceph/ceph/pull/23558/commits/00223d2364b5a6cc32eb5f83f5a642b5aef2c946 , hello is used for testi...
- 04:03 PM Backport #26992 (Resolved): luminous: discover_all_missing() not always called during activating
- https://github.com/ceph/ceph/pull/23817
- 04:01 PM Feature #26975: Rados level IO priority for OSD operations
- For "Rados level" I mean librados API at least, and implementation in OSD too.
- 03:59 PM Feature #26975 (New): Rados level IO priority for OSD operations
- What I mean:
Suppose busy Ceph cluster.
Every OSD has many IO requests from clients in its queue. Today, all r...
- 12:56 AM Bug #26972 (Resolved): cluster [ERR] Error -2 reading object
http://qa-proxy.ceph.com/teuthology/dzafman-2018-08-17_08:14:49-rados-wip-zafman-testing4-distro-basic-smithi/29146...
- 12:42 AM Bug #26971 (Duplicate): failed to become clean before timeout expired
http://qa-proxy.ceph.com/teuthology/dzafman-2018-08-16_17:35:08-rados:thrash-wip-zafman-testing4-distro-basic-smith...
- 12:32 AM Bug #26970 (Resolved): src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
http://qa-proxy.ceph.com/teuthology/dzafman-2018-08-16_17:35:08-rados:thrash-wip-zafman-testing4-distro-basic-smith...