Activity
From 12/03/2018 to 01/01/2019
01/01/2019
- 08:31 PM Bug #23145: OSD crashes during recovery of EC pg
- Peter Woodman wrote:
> This time:
> [...]
>
> I'll see what I can do re. debug osd logs.
That is to say, I'm ...
- 07:45 PM Bug #23145: OSD crashes during recovery of EC pg
- This time:...
- 07:42 PM Bug #23145: OSD crashes during recovery of EC pg
- Hey, I've hit this once again- this time, though, the disk write cache was disabled, so the back-in-time explanation ...
- 04:04 PM Bug #37776 (Pending Backport): workunits/rados/test_health_warnings.sh fails with <9 osds down
- 03:49 PM Bug #37751 (Resolved): handle_conf_change crash in osd
- 03:12 PM Bug #21557: osd.6 found snap mapper error on pg 2.0 oid 2:0e781f33:::smithi14431805-379 ... :187 ...
- /a/sage-2019-01-01_04:27:00-rados-wip-sage-testing-2018-12-31-1546-distro-basic-smithi/3410885...
- 03:04 PM Bug #20798: LibRadosLockECPP.LockExclusiveDurPP gets EEXIST
- I'm guessing this is the same......
- 02:59 PM Bug #18749: OSD: allow EC PGs to do recovery below min_size
- /a/sage-2019-01-01_04:27:00-rados-wip-sage-testing-2018-12-31-1546-distro-basic-smithi/3410708
- 04:30 AM Bug #37511 (Resolved): merge target placeholder may get wrong PastIntervals from source
- 04:30 AM Bug #37774 (Resolved): bad op 7
- 02:24 AM Bug #37777 (Closed): OSD dies on assert triggered by a specific other OSD joining the cluster
- Short description: In a cluster with 44 OSDs, osd.8 will always assert and die if osd.7 is part of or joins the clus...
12/31/2018
- 05:18 PM Bug #37776 (Fix Under Review): workunits/rados/test_health_warnings.sh fails with <9 osds down
- https://github.com/ceph/ceph/pull/25732
- 05:17 PM Bug #37776 (Resolved): workunits/rados/test_health_warnings.sh fails with <9 osds down
- ...
- 05:05 PM Bug #37775 (Fix Under Review): some pg_created messages not sent to mon
- https://github.com/ceph/ceph/pull/25731
- 04:43 PM Bug #37775: some pg_created messages not sent to mon
- how about,
- if the pool CREATING flag is set, we queue a 'created' message when the pg peers
- osd tracks pending cre...
- 04:38 PM Bug #37775 (Resolved): some pg_created messages not sent to mon
- mon doesn't get pg_created for two pgs. CREATING flag is never removed, job fails with a final scrub timeout
/a/s...
- 04:56 PM Bug #24601 (Pending Backport): FAILED assert(is_up(osd)) in OSDMap::get_inst(int)
- 03:10 PM Bug #37774 (Fix Under Review): bad op 7
- https://github.com/ceph/ceph/pull/25730
- 02:55 PM Bug #37774: bad op 7
- I am inclined to revert this change unless we guard it with a feature bit.
- 02:53 PM Bug #37774: bad op 7
- osd w/o https://github.com/ceph/ceph/pull/22385 does not understand this op.
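The two comments above describe gating the new op behind a feature bit so that pre-upgrade OSDs never receive an op code they cannot decode. A minimal, hypothetical sketch of that pattern (the bit value, names, and `choose_op` helper are invented for illustration; real feature bits live in Ceph's `include/ceph_features.h`, and this is not the actual encoding):

```cpp
#include <cstdint>

// Hypothetical feature bit advertised by peers that understand the new op.
constexpr uint64_t FEATURE_OP7 = 1ull << 7;

enum class WireOp { LegacyWrite, Op7 };

// Only emit the new op when the peer has advertised support for it,
// falling back to the legacy encoding for older daemons.
WireOp choose_op(uint64_t peer_features) {
  return (peer_features & FEATURE_OP7) ? WireOp::Op7 : WireOp::LegacyWrite;
}
```

With a guard like this, a mixed-version cluster keeps working: upgraded peers get the new op, everyone else gets the legacy one.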
- 02:50 PM Bug #37774 (Resolved): bad op 7
- ...
- 01:58 PM Bug #37766 (Fix Under Review): rados_shutdown hang forever in ~objecter()
- https://github.com/ceph/ceph/pull/25714
12/30/2018
- 04:10 PM Bug #37772 (New): unittest_seastar_messenger fails with debug build
- ...
- 02:09 PM Bug #37751: handle_conf_change crash in osd
- we started to guard @handle_conf_change()@ since aad318abc9a680d68aab96b051fb7457c8f7feac.
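The guard mentioned in that commit can be illustrated with a small hypothetical sketch (invented names, not Ceph's actual `md_config_obs_t` observer API): a config-change handler that only acts on option keys it has registered for, so notifications for unrelated options can never reach OSD-specific code and crash it.

```cpp
#include <set>
#include <string>

// Hypothetical observer: it tracks the option keys it cares about and
// ignores change notifications for everything else.
struct OsdConfigObserver {
  std::set<std::string> tracked{"osd_max_backfills", "osd_recovery_sleep"};
  int changes_applied = 0;

  // Guarded handler: a changed key is acted on only if it is tracked.
  void handle_conf_change(const std::set<std::string>& changed) {
    for (const auto& key : changed) {
      if (tracked.count(key))
        ++changes_applied;  // stand-in for re-reading the option value
    }
  }
};
```

The point of the guard is the `tracked.count(key)` check: without it, every option change in the cluster would fall through into the handler body.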
- 02:06 PM Bug #37751 (Fix Under Review): handle_conf_change crash in osd
- https://github.com/ceph/ceph/pull/25726
12/29/2018
- 04:30 PM Backport #37690 (Need More Info): luminous: ceph-objectstore-tool: Add HashInfo to object dump ou...
- While backporting changes related to tracker 37597, found the following compilation errors:
/home/jenkins-build/b...
- 02:19 PM Backport #37689 (Need More Info): mimic: ceph-objectstore-tool: Add HashInfo to object dump output
- While backporting changes related to tracker 37597, getting a "cbegin not found" compilation error:
/home/jenkins-bui...
12/28/2018
- 06:23 PM Backport #37690 (In Progress): luminous: ceph-objectstore-tool: Add HashInfo to object dump output
- 06:17 PM Backport #37689 (In Progress): mimic: ceph-objectstore-tool: Add HashInfo to object dump output
- 03:25 PM Bug #24531: Mimic MONs have slow/long running ops
- I just hit this on a 13.2.1 single-host cluster with 1 mon and 8 OSDs. The log is basically identical to the one Wido...
12/27/2018
- 06:54 PM Bug #37768 (Duplicate): mon gets stuck op for failing OSDs
- @6 slow ops, oldest one blocked for 736706 sec, mon.rofl has slow ops@
I have several slow monitor ops that were t...
- 11:16 AM Bug #37747: slow requests are being shown on Luminous version while using bluestore, and cluster ...
- Well, we do not see any traffic related to this bug, so just updating to reflect current trials
1. we did try to e...
- 08:41 AM Bug #37766 (Resolved): rados_shutdown hang forever in ~objecter()
- we use tbd to do some tests and then shut down our client; it hangs for a long time and never proceeds.
it lo...
12/26/2018
- 05:57 AM Bug #37764 (Fix Under Review): doc: Fix Create a Cluster url in Running Multiple Clusters
- 05:48 AM Bug #37764 (Resolved): doc: Fix Create a Cluster url in Running Multiple Clusters
- http://docs.ceph.com/docs/master/rados/configuration/common/#running-multiple-clusters
- 05:13 AM Feature #36737: Allow multi instances of "make tests" on the same machine
- partial fix: https://github.com/ceph/ceph/pull/25704
we also need to move the venv directories to "./build".
12/24/2018
- 02:25 PM Bug #37752 (Duplicate): pool stuck with 'creating' flag set
- ...
- 02:19 PM Bug #37751 (Resolved): handle_conf_change crash in osd
- ...
- 10:33 AM Bug #37747 (New): slow requests are being shown on Luminous version while using bluestore, and cl...
- Hi
we are seeing a regression in Luminous BlueStore compared to the Jewel FileStore version
while the capacity of the...
12/23/2018
12/22/2018
- 02:33 AM Bug #36525 (Fix Under Review): osd-scrub-snaps.sh failure
- https://github.com/ceph/ceph/pull/25675
- 01:58 AM Bug #36525 (In Progress): osd-scrub-snaps.sh failure
12/21/2018
- 07:32 PM Bug #36525: osd-scrub-snaps.sh failure
I see that qa/tasks/workunit.py does set run.Raw('CEPH_CLI_TEST_DUP_COMMAND=1'). The interesting thing is that t...
- 06:00 AM Bug #36525: osd-scrub-snaps.sh failure
- I am not able to reproduce this issue locally after running "../qa/run-standalone.sh osd-scrub-snaps.sh TEST_scrub_sn...
- 04:54 AM Bug #36525: osd-scrub-snaps.sh failure
Reproduced in 1 of 20 jobs after running teuthology-suite --machine-type smithi --suite rados --ceph wip-zafman-tes...
- 04:07 AM Bug #36525: osd-scrub-snaps.sh failure
The test script does a single "ceph pg scrub 1.0". It shows the duplicate scrub:...
- 05:25 PM Bug #37542 (Fix Under Review): nvme partitions aren't mapped back to device
- https://github.com/ceph/ceph/pull/25672
- 03:52 PM Bug #37714 (Resolved): test_dump_pgstate_history: Can't find expected values in history object, f...
- 01:28 AM Bug #37714 (Fix Under Review): test_dump_pgstate_history: Can't find expected values in history o...
- 01:27 AM Bug #37714: test_dump_pgstate_history: Can't find expected values in history object, failing
- Seems to be related to http://tracker.ceph.com/issues/37706.
Fixed by https://github.com/ceph/ceph/pull/25632
Tes...
- 03:49 PM Bug #37706 (Resolved): list-inconsistent-pg fails with EINVAL
- 05:00 AM Bug #26972 (Resolved): cluster [ERR] Error -2 reading object
- 04:56 AM Bug #25194 (Can't reproduce): Negative stats found by deep-scrub
- 01:14 AM Bug #36546: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
teuthology-suite --machine-type smithi --suite rados --ceph wip-zafman-testing --filter scrub.yaml --num 20
/a/d...
- 01:02 AM Subtask #37732 (New): qa/suites/rados/thrash-erasure-code*: coverage review tasks
- - Leveldb mons no longer relevant
- Shec should symlink thrashers dir to get newer thrashing
- Balancer, backoff, p...
- 12:18 AM Subtask #37731: upgrade/luminous-x - add "require-osd-release nautilus" and clean up
- looks like both luminous-x and mimic-x need to be updated. in each of {parallel, stress-split*}, we need a symlink t...
- 12:11 AM Subtask #37731 (Resolved): upgrade/luminous-x - add "require-osd-release nautilus" and clean up
- All we need to add:
- exec:
    osd.0:
      - ceph osd require-osd-release nautilus
      - ceph osd set-require-min-compat-cl...
- 12:09 AM Subtask #37730 (New): qa/suites/rados/multimon: coverage review tasks
- - could add more rados workloads
- some redundancy with monthrash, which has a 9-mon cluster
- mon_seesaw may be a...
12/20/2018
- 11:08 PM Bug #37714: test_dump_pgstate_history: Can't find expected values in history object, failing
- This one is pretty reproducible on master with the filter 'all/admin_socket_output.yaml rados.yaml supported-random-d...
- 07:28 PM Bug #37264 (In Progress): scrub warning check incorrectly uses mon scrub interval
- 05:09 PM Bug #23145: OSD crashes during recovery of EC pg
- I've seen this on 12.2.5 and 12.2.10. I unfortunately can't offer any further logs files :/
Just to confirm that t... - 04:12 PM Bug #37511 (Fix Under Review): merge target placeholder may get wrong PastIntervals from source
- https://github.com/ceph/ceph/pull/25652
- 11:47 AM Backport #37688 (Need More Info): mimic: Command failed on smithi191 with status 1: '\n sudo yum ...
- 11:47 AM Backport #37687 (Need More Info): luminous: Command failed on smithi191 with status 1: '\n sudo y...
- 10:50 AM Bug #37720: Ceph-osd halts when enabling SPDK
- Please review the correction https://github.com/ceph/ceph/pull/25646
- 10:11 AM Bug #37720: Ceph-osd halts when enabling SPDK
- I'm working on the issue.
- 10:11 AM Bug #37720 (Resolved): Ceph-osd halts when enabling SPDK
- When setting up a development Ceph cluster with SPDK enabled, observed that ceph-osd halts on the aarch64 platform and asserts on the x86 p...
- 05:47 AM Bug #36405: unittest_seastar_messenger failure on ARM
- from dmesg:...
- 01:20 AM Bug #37718 (Rejected): ceph-osdomap-tool crashes
Rebuilding the binary fixed the problem. It looked like a library incompatibility because safe_to_start_threads sh...
- 12:38 AM Bug #37718 (Rejected): ceph-osdomap-tool crashes
$ ../qa/run-standalone.sh "osd-scrub-snaps.sh TEST_scrub_snaps"
...
../qa/standalone/scrub/osd-scrub-snaps.sh:100...
12/19/2018
- 10:53 PM Bug #37705 (Closed): list-inconsistent-pg fails with EINVAL
- 10:07 PM Bug #37583 (Resolved): mix luminous + master mons break ceph cli
- 06:18 PM Backport #37688: mimic: Command failed on smithi191 with status 1: '\n sudo yum -y install ceph-r...
- Let's hold off on this backport, since the original issue has not been resolved yet.
- 06:18 PM Backport #37687: luminous: Command failed on smithi191 with status 1: '\n sudo yum -y install cep...
- Let's hold off on this backport, since the original issue has not been resolved yet.
- 06:06 PM Bug #37654: FAILED ceph_assert(info.history.same_interval_since != 0) in PG::start_peering_interv...
- /a/nojha-2018-12-19_01:41:09-rados-master-distro-basic-smithi/3375485/
- 06:03 PM Bug #37716 (New): failed to recover before timeout expired due to pgs going into backfill_toofull
- ...
- 05:38 PM Bug #37673 (Won't Fix): latency between "initiated" and "queued_for_pg"
- After testing locally, I think this is expected behavior.
test settings:
* docker container osd-host.alpha, 172...
- 05:36 PM Bug #37714 (Resolved): test_dump_pgstate_history: Can't find expected values in history object, f...
- ...
- 11:35 AM Bug #37706 (Fix Under Review): list-inconsistent-pg fails with EINVAL
- https://github.com/ceph/ceph/pull/25632
- 02:55 AM Bug #37706: list-inconsistent-pg fails with EINVAL
I was suspicious of https://github.com/ceph/ceph/pull/23298, so I reverted all 74 commits and the problem wouldn't ...
- 05:52 AM Bug #20874: osd/PGLog.h: 1386: FAILED assert(miter == missing.get_items().end() || (miter->second...
- Hit this again:
http://pulpito.ceph.com/xxg-2018-12-19_01:25:39-rados:thrash-wip-no-upmap-for-merge-distro-basic-s...
12/18/2018
- 07:10 PM Bug #37706: list-inconsistent-pg fails with EINVAL
- ...
- 07:04 PM Bug #37706 (Resolved): list-inconsistent-pg fails with EINVAL
Seen in run-standalone.sh runs of osd-scrub-snaps.sh and osd-scrub-repair.sh:...
- 06:56 PM Bug #37705 (Closed): list-inconsistent-pg fails with EINVAL
- 11:26 AM Backport #37698 (In Progress): mimic: osd_memory_target: failed assert when options mismatch
- 11:10 AM Backport #37698 (Resolved): mimic: osd_memory_target: failed assert when options mismatch
- https://github.com/ceph/ceph/pull/25605
- 11:25 AM Backport #37697 (In Progress): luminous: osd_memory_target: failed assert when options mismatch
- 11:10 AM Backport #37697 (Resolved): luminous: osd_memory_target: failed assert when options mismatch
- https://github.com/ceph/ceph/pull/25604
- 11:23 AM Backport #37686 (In Progress): mimic: list-inconsistent-obj output truncated, causing osd-scrub-r...
- 11:09 AM Backport #37686 (Resolved): mimic: list-inconsistent-obj output truncated, causing osd-scrub-repa...
- https://github.com/ceph/ceph/pull/25603
- 11:09 AM Backport #37690 (Resolved): luminous: ceph-objectstore-tool: Add HashInfo to object dump output
- https://github.com/ceph/ceph/pull/25722
- 11:09 AM Backport #37689 (Resolved): mimic: ceph-objectstore-tool: Add HashInfo to object dump output
- https://github.com/ceph/ceph/pull/25721
- 11:09 AM Backport #37688 (Resolved): mimic: Command failed on smithi191 with status 1: '\n sudo yum -y ins...
- https://github.com/ceph/ceph/pull/26201
- 11:09 AM Backport #37687 (Rejected): luminous: Command failed on smithi191 with status 1: '\n sudo yum -y ...
- 03:58 AM Bug #37511: merge target placeholder may get wrong PastIntervals from source
- /a/sage-2018-12-17_17:34:16-rados-wip-sage2-testing-2018-12-17-0911-distro-basic-smithi/3372061
- 02:01 AM Bug #37679 (Fix Under Review): osd: pull object from the shard who missing it
- FAILED assert(get_parent()->get_log().get_log().objects.count(soid) && (get_parent()->get_log().get_log().objects.fin...
- 12:39 AM Bug #37507 (Pending Backport): osd_memory_target: failed assert when options mismatch
12/17/2018
- 10:28 PM Feature #37597 (Pending Backport): ceph-objectstore-tool: Add HashInfo to object dump output
- 10:27 PM Bug #37653 (Pending Backport): list-inconsistent-obj output truncated, causing osd-scrub-repair.s...
- https://github.com/ceph/ceph/pull/25548
- 09:34 PM Bug #37656: FileStore::_do_transaction() crashed with error 17 (merge collection vs osd restart)
- /a/dzafman-2018-12-14_11:02:20-rados-wip-zafman-testing-distro-basic-smithi/3362534
- 07:54 PM Bug #36497: FAILED ceph_assert(can_write == WriteStatus::NOWRITE) in ProtocolV1::replace()
- /a/dzafman-2018-12-14_11:02:20-rados-wip-zafman-testing-distro-basic-smithi/3362409
- 07:53 PM Bug #20694: osd/ReplicatedBackend.cc: 1417: FAILED assert(get_parent()->get_log().get_log().obje...
- /a/dzafman-2018-12-14_11:02:20-rados-wip-zafman-testing-distro-basic-smithi/3362388
- 04:52 PM Bug #36686: osd: pg log hard limit can cause crash during upgrade
- Oliver Freyermuth wrote:
> Let me extend that question with:
> What's the clean upgrade path for those on 12.2.8 or...
- 04:27 PM Bug #36686: osd: pg log hard limit can cause crash during upgrade
- Let me extend that question with:
What's the clean upgrade path for those on 12.2.8 or 12.2.10 (and wanting to upgra...
- 04:23 PM Bug #36686: osd: pg log hard limit can cause crash during upgrade
- Nathan Cutler wrote:
> Alexander Morozov wrote:
> > Any ETA for the fix?
>
> Did you mean ETA for 12.2.10? Lumin...
- 11:43 AM Bug #36686: osd: pg log hard limit can cause crash during upgrade
- Alexander Morozov wrote:
> Any ETA for the fix?
Did you mean ETA for 12.2.10? Luminous v12.2.10 was released on N...
- 06:56 AM Bug #36686: osd: pg log hard limit can cause crash during upgrade
- Nathan Cutler wrote:
> Neha, 12.2.9 has already been cut, so we'll need to expedite 12.2.10 to push the revert out t...
- 03:48 PM Bug #37673 (Won't Fix): latency between "initiated" and "queued_for_pg"
- * 6 osd cluster
* separated cluster network on eth0, and public network on eth1.
* a rados client accessing from pu...
- 02:59 PM Bug #37671 (Resolved): race between split and pg create
- ...
- 02:44 PM Bug #37507: osd_memory_target: failed assert when options mismatch
- merged https://github.com/ceph/ceph/pull/25421
- 01:23 PM Bug #36515: config options: 'services' field is empty for many config options
- Some of the config option 'services' fields have been addressed by https://github.com/ceph/ceph/pull/25456
- 01:10 PM Bug #25211 (Resolved): bug in PerfCounters
- 01:08 PM Bug #36709 (Closed): OSD stuck while flushing rocksdb WAL
- 12:50 PM Bug #36709: OSD stuck while flushing rocksdb WAL
- Thanks for your answers, that was helpful info.
It looks like an aacraid module v.1.2.1.50877 issue. IO requests stucke...
12/14/2018
- 01:06 PM Bug #37665 (Fix Under Review): ceph-objectstore-tool export from luminous, import to master clear...
- turns out the upgrade suite already turns import/export tool tests off... let's just do the same.
https://github.com...
- 12:57 PM Bug #37665: ceph-objectstore-tool export from luminous, import to master clears same_interval_since
- I'm thinking we should make ceph-objectstore-tool refuse to use an export from an older major release (without, say, ...
- 12:56 PM Bug #37665 (Resolved): ceph-objectstore-tool export from luminous, import to master clears same_i...
- on luminous exporting osd.3, pg last seen as...
- 07:25 AM Bug #25174: osd: assert failure with FAILED assert(repop_queue.front() == repop) In function 'vo...
- seen again here: http://qa-proxy.ceph.com/teuthology/yuriw-2018-12-12_21:15:36-kcephfs-wip-yuri5-testing-2018-12-12-1...
- 04:14 AM Cleanup #37662 (In Progress): Review-RADOS suite
- Master tracker for associated works arising from the RADOS teuthology suite review.
Attach related trackers for as...
12/13/2018
- 10:26 PM Bug #37656 (Triaged): FileStore::_do_transaction() crashed with error 17 (merge collection vs osd...
- ...
- 10:02 PM Bug #37654 (Resolved): FAILED ceph_assert(info.history.same_interval_since != 0) in PG::start_pee...
- ...
- 09:49 PM Bug #37653: list-inconsistent-obj output truncated, causing osd-scrub-repair.sh failure
The commit 873655062de03fbeda7053eaf34eab5a7644e1d1 from https://github.com/ceph/ceph/pull/24229 exposed a bug in...
- 09:36 PM Bug #37653 (Resolved): list-inconsistent-obj output truncated, causing osd-scrub-repair.sh failure
This bug causes a diff to be detected because of missing entries. It would have been nice if the decode failure w...
- 03:44 PM Feature #21073: mgr: ceph/rgw: show hostnames and ports in ceph -s status output
- The port info is in servicemap under frontend_config, though I agree it is specific enough and probably doesn't warran...
- 03:19 PM Feature #21073: mgr: ceph/rgw: show hostnames and ports in ceph -s status output
- https://github.com/ceph/ceph/pull/25540
This patch will show the service's id, but not the port. For the rgw examp...
- 01:11 PM Bug #37439: Degraded PG does not discover remapped data on originating OSD
- Tested on a 5-node cluster with 20 OSDs and 14 3-replica pools.
Here's the log file (level 20) of OSD 18, which is...
- 12:07 PM Bug #37439: Degraded PG does not discover remapped data on originating OSD
- please please let us edit issues and comments...
-I made a mistake in the above post: *please ignore* the @ceph os...
- 11:13 AM Bug #37439: Degraded PG does not discover remapped data on originating OSD
- Easy steps to reproduce seem to be:
* Have a healthy cluster
* @ceph osd set pause # make sure no writes me...
- 12:56 PM Bug #20798: LibRadosLockECPP.LockExclusiveDurPP gets EEXIST
- /a/sage-2018-12-12_23:36:13-rados-wip-sage2-testing-2018-12-12-1435-distro-basic-smithi/3335654
- 12:31 PM Bug #37640 (Resolved): Can not rollback from 12.2.1 to 12.2.0 for CEPH_MON_FEATURE_INCOMPAT_LUMINOUS
- Resolving as requested.
- 10:20 AM Bug #37640: Can not rollback from 12.2.1 to 12.2.0 for CEPH_MON_FEATURE_INCOMPAT_LUMINOUS
- liuzhong chen wrote:
> I cannot find the ceph-mon project in the issue, so I added it in the ceph-mgr column. If it is wron...
- 03:13 AM Bug #37640: Can not rollback from 12.2.1 to 12.2.0 for CEPH_MON_FEATURE_INCOMPAT_LUMINOUS
- I cannot find the ceph-mon project in the issue, so I added it in the ceph-mgr column. If it is wrong, please move it right p...
- 03:11 AM Bug #37640 (Resolved): Can not rollback from 12.2.1 to 12.2.0 for CEPH_MON_FEATURE_INCOMPAT_LUMINOUS
- As 12.2.1 and higher versions have the mon feature CEPH_MON_FEATURE_INCOMPAT_LUMINOUS, we cannot roll back from 12.2.1 to 12...
- 09:25 AM Bug #36725: luminous: Apparent Memory Leak in OSD
- I made dumps while tuning the osd_memory_target value. Perhaps this data will be useful in the future....
- 09:11 AM Bug #36709: OSD stuck while flushing rocksdb WAL
- iostat -xtd 1 output most of time when the problem occurs:...
- 04:12 AM Bug #37618 (Pending Backport): Command failed on smithi191 with status 1: '\n sudo yum -y install...
12/12/2018
- 10:29 PM Bug #36709: OSD stuck while flushing rocksdb WAL
- The backtrace that's attached shows the kv_sync_thread waiting for I/O to complete from the block device:...
- 01:40 PM Bug #36709: OSD stuck while flushing rocksdb WAL
- I've finally reproduced this behavior (as I hope).
Our staging cluster:
3 nodes with 22 OSDs on SSDs,
Kernel 4.1...
- 10:19 PM Bug #37326: Daily inconsistent objects
- The ceph-users list may be able to help debug this faster - it could be many things in the hw/sw stack.
- 10:13 PM Bug #37593 (Fix Under Review): ec pool lost data due to snap clone
- 10:06 PM Bug #36725 (Closed): luminous: Apparent Memory Leak in OSD
- 09:03 PM Backport #37341 (Resolved): luminous: doc: Add bluestore memory autotuning docs
- 09:01 PM Backport #37341: luminous: doc: Add bluestore memory autotuning docs
- Note this is a follow-up on https://github.com/ceph/ceph/pull/24065
- 08:30 PM Backport #37343 (In Progress): luminous: Prioritize user specified scrubs
- 08:29 PM Bug #37583: mix luminous + master mons break ceph cli
- https://github.com/ceph/ceph/pull/25470
- 08:25 PM Backport #37342 (In Progress): mimic: Prioritize user specified scrubs
- 05:20 PM Bug #37264: scrub warning check incorrectly uses mon scrub interval
- https://github.com/ceph/ceph/pull/25112
- 05:15 PM Feature #37597: ceph-objectstore-tool: Add HashInfo to object dump output
- https://github.com/ceph/ceph/pull/25483
- 02:43 PM Bug #36040: mon: Valgrind: mon (InvalidFree, InvalidWrite, InvalidRead)
- seen again here: http://qa-proxy.ceph.com/teuthology/yuriw-2018-12-10_20:44:09-fs-wip-yuri4-testing-2018-12-10-1710-m...
- 01:52 PM Backport #36729 (In Progress): mimic: Add support for osd_delete_sleep configuration value
- 03:33 AM Bug #37618 (Fix Under Review): Command failed on smithi191 with status 1: '\n sudo yum -y install...
- change on teuthology side
- https://github.com/ceph/teuthology/pull/1244
change on ceph side
- https://githu...
- 03:32 AM Bug #37618 (Resolved): Command failed on smithi191 with status 1: '\n sudo yum -y install ceph-ra...
- librados2 and librbd1 are installed as a dependency of qemu-kvm.
qemu-kvm is installed by ceph-cm-ansible, see [1].
...
12/11/2018
- 11:07 PM Bug #36725: luminous: Apparent Memory Leak in OSD
- Konstantin: thanks for pointing that out. that looks like the issue. Both OSD servers have 8GB RAM total, each run...
- 03:55 PM Feature #37597 (Resolved): ceph-objectstore-tool: Add HashInfo to object dump output
- 03:08 PM Bug #37507: osd_memory_target: failed assert when options mismatch
- Hi Mark,
You got it: 1105322466 boots, and 1105322465 crashes with the above trace.
Cheers, Dan
- 12:18 PM Bug #37593 (Resolved): ec pool lost data due to snap clone
- the wrong process is posted in https://github.com/ceph/ceph/pull/25490
- 07:56 AM Bug #37452 (Resolved): FAILED ceph_assert(prealloc_left == (int64_t)need)
- thanks Igor!
- 02:03 AM Bug #24615 (Resolved): error message for 'unable to find any IP address' not shown
- 02:02 AM Bug #24615: error message for 'unable to find any IP address' not shown
- Thanks Francois, I'll close the ticket.
- 01:22 AM Bug #24615: error message for 'unable to find any IP address' not shown
- Hi Victor Denisov,
First, really sorry for my late answer (I was a little busy).
In fact, I have tested again w...
12/10/2018
- 11:18 PM Bug #37507: osd_memory_target: failed assert when options mismatch
- Hi Folks,
I'm guessing this is related to https://github.com/ceph/ceph/pull/25421. Basically a stupid uint64_t bug...
- 10:37 PM Bug #37507: osd_memory_target: failed assert when options mismatch
- Thoughts, Mark?
- 10:13 PM Feature #37500: ceph status/health hang when they could give helpful hints
- Hmm, perhaps we could fall back to outputting other commands when connections to the monitor seem to be hanging, as t...
- 06:18 PM Bug #37583 (Fix Under Review): mix luminous + master mons break ceph cli
12/09/2018
- 06:19 PM Bug #37583 (Resolved): mix luminous + master mons break ceph cli
- both luminous and ceph cli fail intermittently, depending on which mon they connect to....
- 06:07 PM Bug #37582 (New): luminous: ceph -s client gets all mgrmaps
- ...
- 04:59 PM Bug #36748: ms_deliver_verify_authorizer no AuthAuthorizeHandler found for protocol 0
- /a/kchai-2018-12-09_00:37:50-rados-wip-kefu2-testing-2018-12-09-0002-distro-basic-smithi/3318960
- 04:48 AM Bug #36725: luminous: Apparent Memory Leak in OSD
- John, are you aware of the new 12.2.9 options osd_memory_target and bluestore_cache_autotune?
You should try ...
12/08/2018
- 08:30 PM Bug #37542: nvme partitions aren't mapped back to device
- Hrm, it looks like the code in question is...
- 02:51 PM Bug #36725: luminous: Apparent Memory Leak in OSD
- I have the same problem.
- 01:44 PM Bug #37507: osd_memory_target: failed assert when options mismatch
- ...
- 03:46 AM Bug #20491 (New): objecter leaked OSDMap in handle_osd_map
- ...
12/07/2018
- 09:00 AM Bug #24601: FAILED assert(is_up(osd)) in OSDMap::get_inst(int)
- https://github.com/ceph/ceph/pull/25437
- 03:36 AM Bug #37542 (Resolved): nvme partitions aren't mapped back to device
- ...
- 02:32 AM Bug #17257: ceph_test_rados_api_lock fails LibRadosLockPP.LockExclusiveDurPP
- ...
12/06/2018
- 11:22 PM Bug #37525 (Resolved): unprime_split_children may discard query
- 06:26 PM Bug #37439: Degraded PG does not discover remapped data on originating OSD
- See also the ceph-devel mailing list thread "Degraded PG does not discover remapped data on originating OSD".
- 12:03 PM Bug #37532: mon: expected_num_objects warning triggers on bluestore-only setups
- 11:49 AM Bug #37532: mon: expected_num_objects warning triggers on bluestore-only setups
- I don't think it's wise to simply remove the code because filestore is no longer the default. We need to consider exi...
12/05/2018
- 11:28 PM Bug #37532: mon: expected_num_objects warning triggers on bluestore-only setups
- https://github.com/ceph/ceph/pull/25417
- 11:11 PM Bug #37532 (Resolved): mon: expected_num_objects warning triggers on bluestore-only setups
- Follow up for the mailing list thread http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-December/031711.html
...
12/04/2018
- 07:59 PM Bug #37512 (Resolved): ready_to_merge message lost
- 04:10 AM Bug #37512: ready_to_merge message lost
- https://github.com/ceph/ceph/pull/25388
- 04:09 AM Bug #37512 (Resolved): ready_to_merge message lost
- /a/sage-2018-12-03_17:39:26-rados-wip-sage2-testing-2018-12-03-0942-distro-basic-smithi/3304262...
- 07:58 PM Bug #37525 (Fix Under Review): unprime_split_children may discard query
- https://github.com/ceph/ceph/pull/25399
- 07:52 PM Bug #37525 (Resolved): unprime_split_children may discard query
- ...
- 09:48 AM Bug #23031: FAILED assert(!parent->get_log().get_missing().is_missing(soid))
- https://github.com/ceph/ceph/pull/25219
- 03:49 AM Bug #37511: merge target placeholder may get wrong PastIntervals from source
- I think the fix is to just bite the bullet and put PastIntervals at decrement time in the pg_info_t, along with the o...
- 03:49 AM Bug #37511 (Resolved): merge target placeholder may get wrong PastIntervals from source
- ...
- 03:25 AM Bug #37509: require past_interval bounds mismatch due to osd oldest_map
- I don't think the superblock.oldest_map should be a factor in this calculation. I suspect it is in there to deal wit...
- 03:24 AM Bug #37509 (Can't reproduce): require past_interval bounds mismatch due to osd oldest_map
- ...
12/03/2018
- 11:33 PM Backport #37496 (In Progress): mimic: OSD mkfs might assert when working agains bluestore disk th...
- https://github.com/ceph/ceph/pull/25385
- 04:02 PM Bug #37507 (Resolved): osd_memory_target: failed assert when options mismatch
- We tried setting osd_memory_target to 1GB and this results in the following assertion early after startup:...