Activity
From 05/09/2019 to 06/07/2019
06/07/2019
- 03:26 AM Bug #40198 (In Progress): Setting noscrub causing extraneous deep scrubs
- 03:00 AM Bug #40198 (Resolved): Setting noscrub causing extraneous deep scrubs
ceph osd set noscrub
Wait 1 day or ceph --admin-daemon primary.asok trigger_scrub PGID...
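A hedged sketch of the reproduction steps quoted above, wrapped in Python subprocess calls. The admin-socket path and PG id are illustrative placeholders, not values from the report:

    import subprocess

    def ceph(*args):
        # Thin wrapper around the ceph CLI; raises if the command fails.
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    # Per the report: set the noscrub flag, then force a scrub on one PG
    # through the primary OSD's admin socket instead of waiting a day.
    ceph("osd", "set", "noscrub")
    ceph("--admin-daemon", "/var/run/ceph/primary.asok",  # placeholder asok path
         "trigger_scrub", "1.0")                          # placeholder PG id
    # Reported symptom: with noscrub set, this leads to extraneous deep scrubs.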
06/06/2019
- 07:43 PM Bug #40193 (Duplicate): Changing pg_num and other pool settings are ignored
- ...
- 05:16 PM Bug #36739: ENOENT in collection_move_rename on EC backfill target
- https://github.com/ceph/ceph/pull/27015 (more complete fix) merged
- 05:15 PM Bug #20491 (Resolved): objecter leaked OSDMap in handle_osd_map
- 03:52 PM Backport #40192 (Resolved): nautilus: Rados.get_fsid() returning bytes in python3
- https://github.com/ceph/ceph/pull/28476
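A minimal workaround sketch for the bytes-vs-str behavior this backport addresses, assuming the standard python3 rados bindings; normalizing the return value keeps callers working on both fixed and unfixed builds:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    fsid = cluster.get_fsid()
    if isinstance(fsid, bytes):   # affected builds return bytes under python3
        fsid = fsid.decode('utf-8')
    print(fsid)
    cluster.shutdown()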
- 12:22 AM Bug #39997: not able to create osd keyring
- I am using ceph 13.2 (mimic)
06/05/2019
- 09:46 PM Backport #40180 (Resolved): nautilus: qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
- https://github.com/ceph/ceph/pull/29252
- 09:46 PM Backport #40179 (Resolved): mimic: qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
- https://github.com/ceph/ceph/pull/29251
- 09:43 PM Bug #40078 (Pending Backport): qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
- 09:11 PM Bug #40078 (In Progress): qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
- 09:23 PM Bug #39665 (Fix Under Review): kstore: memory may leak on KStore::_do_read_stripe
- 09:19 PM Bug #39997: not able to create osd keyring
- This question is more relevant to the ceph-users mailing list, perhaps with more information about which version you ...
- 09:03 PM Bug #40081 (In Progress): mon: luminous crash attempting to decode maps after nautilus quorum has...
- 07:55 PM Backport #39476: nautilus: segv in fgets() in collect_sys_info reading /proc/cpuinfo
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28141
merged
- 03:22 PM Bug #40112 (Pending Backport): mon: rados/multimon tests fail with clock skew
- 03:17 AM Bug #38403 (Fix Under Review): osd: leaked from OSDMap::apply_incremental
- ...
- 02:58 AM Bug #38403: osd: leaked from OSDMap::apply_incremental
- /a/kchai-2019-06-04_14:23:17-rados-wip-kefu-testing-2019-06-01-2346-distro-basic-smithi/4004812/
- 02:06 AM Bug #40154: nautilus: failed to become clean before timeout expired
With osd_max_backfills defaulting to 1 and all recovery targeting OSD.1, all recovery is waiting behind PG 2.a to finis...
06/04/2019
- 08:28 PM Bug #40154 (New): nautilus: failed to become clean before timeout expired
- ...
- 02:58 AM Bug #36405 (Resolved): unittest_seastar_messenger failure on ARM
- 02:55 AM Bug #39997: not able to create osd keyring
- when I tried to run the following command:
osd create <uuid> <osd no> --no-mon-config
the keyring is generated
Cou...
- 01:58 AM Bug #40119 (New): api_tier_pp hung causing a dead job
http://pulpito.ceph.com/dzafman-2019-05-31_07:47:29-rados-wip-zafman-testing-distro-basic-smithi/3992631...
06/03/2019
- 09:06 PM Support #40103: ceph monitor cannot start
- The ceph-users@ceph.com mailing list is a more reliable way to get help on issues like this. Looks like the OSDMap ha...
- 09:00 PM Bug #40117 (Duplicate): PG stuck in WaitActingChange
- osd.9 requests a switch to acting set=[5] from [9,5] which never shows up. The teuthology test hangs waiting for tha...
- 08:44 PM Bug #39282 (Resolved): EIO from process_copy_chunk_manifest
- 03:48 PM Bug #40112 (Resolved): mon: rados/multimon tests fail with clock skew
- See
http://pulpito.ceph.com/sage-2019-05-30_21:14:09-rados:multimon-master-distro-basic-smithi/
or
http://p...
- 02:06 PM Bug #39115: ceph pg repair doesn't fix itself if osd is bluestore
- See #39116 for the stack trace.
I initially thought that this and the other issue were two separate problems. How...
06/01/2019
- 10:23 AM Backport #38850 (Resolved): upgrade: 1 nautilus mon + 1 luminous mon can't automatically form quorum
- 12:36 AM Bug #39175: RGW DELETE calls partially missed shortly after OSD startup
- I'd say the odds are high migrating the bucket indexes to bluestore would fix it - the omap structure there is very s...
05/31/2019
- 10:32 PM Bug #39115: ceph pg repair doesn't fix itself if osd is bluestore
- Since OSDs are crashing, we should get stack traces out of the logs (e.g. osd.9). Per http://tracker.ceph.com/issues/39...
- 08:19 PM Backport #38850: upgrade: 1 nautilus mon + 1 luminous mon can't automatically form quorum
- Joao Eduardo Luis wrote:
> backport PR to nautilus: https://github.com/ceph/ceph/pull/28262
merged
- 07:38 PM Bug #39175: RGW DELETE calls partially missed shortly after OSD startup
- We are planning on migrating all of our clusters to BlueStore, but that's going to take the rest of the year. We cou...
- 07:08 PM Support #40103 (New): ceph monitor cannot start
- I have a ceph cluster running over 2 years and the monitor began crash since yesterday. I had some flapping OSDs up a...
- 01:11 AM Bug #40073 (In Progress): PG scrub stamps reset to 0.000000
05/30/2019
- 10:49 PM Bug #39175: RGW DELETE calls partially missed shortly after OSD startup
- Ok, so this is a different bug then. Any chance you're planning on migrating to bluestore with part of one of the pro...
- 09:39 PM Bug #39175: RGW DELETE calls partially missed shortly after OSD startup
- Hey Josh,
We backfilled onto the SSDs by creating a new crush rule which just uses the ssd class and switching the...
- 02:49 PM Bug #40081: mon: luminous crash attempting to decode maps after nautilus quorum has been formed
- -https://github.com/ceph/ceph/pull/28323- (closed; see Pull Request ID field for the real PR)
This actually has us...
- 10:39 AM Bug #40081 (Closed): mon: luminous crash attempting to decode maps after nautilus quorum has been...
- While upgrading, we found a rather annoying corner case:
Assuming we start with 3 luminous ceph-mon, upgrading fro...
- 01:48 PM Backport #40084 (Resolved): nautilus: osd: Better error message when OSD count is less than osd_p...
- https://github.com/ceph/ceph/pull/29992
- 01:47 PM Backport #40083 (Resolved): mimic: osd: Better error message when OSD count is less than osd_pool...
- https://github.com/ceph/ceph/pull/30180
- 01:47 PM Backport #40082 (Resolved): luminous: osd: Better error message when OSD count is less than osd_p...
- https://github.com/ceph/ceph/pull/30298
- 01:29 PM Feature #38617 (Pending Backport): osd: Better error message when OSD count is less than osd_pool...
- 09:11 AM Backport #39699 (Resolved): nautilus: OSD down on snaptrim.
- 05:00 AM Bug #23387 (Resolved): Building Ceph on armhf fails due to out-of-memory
- I am resolving this issue, as quite a few (probably all) of the issues noted by Louwrentius have been addressed by Daniel...
- 12:48 AM Bug #39723 (Duplicate): osd: valgrind Leak_DefinitelyLost
05/29/2019
- 10:34 PM Bug #39175: RGW DELETE calls partially missed shortly after OSD startup
- Hey Bryan, Neha's out this week. I'd like to verify whether this could be the same bug we'd seen before (http://track...
- 10:07 PM Backport #39699: nautilus: OSD down on snaptrim.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28203
merged
- 09:42 PM Bug #38827 (Fix Under Review): valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWir...
- https://github.com/ceph/ceph/pull/28305
- 02:54 PM Bug #38827: valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxHandler::authent...
- Second run (on slightly amended branch): http://pulpito.front.sepia.ceph.com/rzarzynski-2019-05-29_13:08:09-rgw-wip-b...
- 09:41 PM Bug #39723 (Fix Under Review): osd: valgrind Leak_DefinitelyLost
- 09:24 PM Bug #39723: osd: valgrind Leak_DefinitelyLost
- Okay, simple osdmap pointer assignment snafu. Working on a quick PR.
- 09:36 PM Bug #40073: PG scrub stamps reset to 0.000000
When auto repair is enabled, a bug causes a regular scrub to reset time stamps, which is only intended to happen when...
- 07:49 PM Bug #40073: PG scrub stamps reset to 0.000000
- The similarity to #40066 is so striking I just had to mention it and create a "Relates to" link.
- 06:47 PM Bug #40073: PG scrub stamps reset to 0.000000
- A full pg query:...
- 06:47 PM Bug #40073 (Resolved): PG scrub stamps reset to 0.000000
- From Ceph-users, https://www.spinics.net/lists/ceph-users/msg52869.html
After upgrading from 14.2.0 to 14.2.1, I'v...
- 09:02 PM Bug #40078 (Resolved): qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
yuriw-2019-05-16_23:32:37-rados-mimic_v13.2.6_QE-distro-basic-smithi/3959865
Command failed (workunit test scrub...
- 05:39 PM Bug #40070: mon/OSDMonitor: target_size_bytes integer overflow
- This worked fine for me in an earlier version of this cluster, which was running 14.2.0. But it's possible things oth...
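As a hedged illustration of the class of bug reported in this issue (not the actual OSDMonitor code): pure Python integers do not overflow, but the same value pushed through signed 64-bit arithmetic wraps negative:

    def as_int64(x):
        # Reinterpret an unbounded Python int as a C-style signed int64.
        x &= (1 << 64) - 1
        return x - (1 << 64) if x >= (1 << 63) else x

    # Hypothetical example: a very large target_size_bytes scaled by a
    # factor can exceed 2**63 - 1 and wrap negative in int64 math.
    target_size_bytes = 6 * 2**60           # an extreme, illustrative value
    print(as_int64(target_size_bytes * 4))  # negative: the overflow symptom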
- 05:37 PM Bug #40070 (Rejected): mon/OSDMonitor: target_size_bytes integer overflow
- Nautilus 14.2.1 on Ubuntu 18.04 LTS, kernel 4.18 (HWE)
It appears that the "target_size_bytes" setting has an inte...
- 11:26 AM Backport #39375 (Resolved): nautilus: ceph tell osd.xx bench help : gives wrong help
- 11:25 AM Backport #39421 (Resolved): nautilus: Don't mark removed osds in when running "ceph osd in any|al...
- 11:25 AM Backport #39721 (Resolved): nautilus: short pg log+nautilus-p2p-stress-split: "Error: finished ti...
- 11:25 AM Bug #39441 (Resolved): osd acting cycle
- 11:25 AM Backport #39512 (Resolved): nautilus: osd acting cycle
- 11:24 AM Backport #39514 (Resolved): nautilus: osd: segv in _preboot -> heartbeat
- 11:24 AM Backport #39519 (Resolved): nautilus: snaps missing in mapper, should be: ca was r -2...repaired
- 11:23 AM Backport #39539 (Resolved): nautilus: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()-...
- 11:23 AM Backport #39043 (Resolved): nautilus: osd/PGLog: preserve original_crt to check rollbackability
- 11:21 AM Backport #39432 (Resolved): nautilus: Degraded PG does not discover remapped data on originating OSD
- 11:21 AM Bug #39263 (Resolved): rados/upgrade/nautilus-x-singleton: mon.c@1(electing).elector(11) Shutting...
- 11:21 AM Backport #39419 (Resolved): nautilus: rados/upgrade/nautilus-x-singleton: mon.c@1(electing).elect...
- 11:18 AM Backport #39219 (Resolved): nautilus: osd: FAILED ceph_assert(attrs || !pg_log.get_missing().is_m...
05/28/2019
- 08:58 PM Bug #38827: valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxHandler::authent...
- Scheduled a resurrected run for validation: http://pulpito.front.sepia.ceph.com/rzarzynski-2019-05-28_20:56:45-rgw-wi...
- 05:43 PM Bug #38827: valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxHandler::authent...
- Changeset: https://github.com/ceph/ceph/compare/master...rzarzynski:wip-bug-38827.
- 05:13 PM Bug #38827: valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxHandler::authent...
- This bug looks like a duplicate of http://tracker.ceph.com/issues/39449, which has been addressed with a pair ...
- 04:10 PM Backport #39375: nautilus: ceph tell osd.xx bench help : gives wrong help
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28035
merged
- 04:10 PM Backport #39421: nautilus: Don't mark removed osds in when running "ceph osd in any|all|*"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28072
merged
- 04:09 PM Backport #39721: nautilus: short pg log+nautilus-p2p-stress-split: "Error: finished tid 3 when la...
- David Zafman wrote:
> https://github.com/ceph/ceph/pull/28088
merged
- 04:08 PM Backport #39512: nautilus: osd acting cycle
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28160
merged
- 04:08 PM Backport #39514: nautilus: osd: segv in _preboot -> heartbeat
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28164
merged
- 04:07 PM Backport #39519: nautilus: snaps missing in mapper, should be: ca was r -2...repaired
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28205
merged
- 04:07 PM Backport #39539: nautilus: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()->get_log()....
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28219
merged
- 04:06 PM Backport #39043: nautilus: osd/PGLog: preserve original_crt to check rollbackability
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27632
merged
- 04:04 PM Backport #39432: nautilus: Degraded PG does not discover remapped data on originating OSD
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27744
merged
- 04:03 PM Backport #39419: nautilus: rados/upgrade/nautilus-x-singleton: mon.c@1(electing).elector(11) Shut...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27771
merged
- 04:03 PM Backport #39219: nautilus: osd: FAILED ceph_assert(attrs || !pg_log.get_missing().is_missing(soid...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27839
merged
- 03:29 PM Bug #39449 (Resolved): Uninit in EVP_DecryptFinal_ex on ceph::crypto::onwire::AES128GCM_OnWireRxH...
- This has been backported with:
* https://github.com/ceph/ceph/pull/27320,
* https://github.com/ceph/ceph/pull/27321...
- 10:18 AM Bug #40029: ceph-mon: Caught signal (Aborted) in (CrushWrapper::update_choose_args(CephContext*)+...
- I removed osd.0 and osd.1 from host-247, and re-ran deployment of osds to host-371. Both got added successfully.
...
- 09:44 AM Bug #40029: ceph-mon: Caught signal (Aborted) in (CrushWrapper::update_choose_args(CephContext*)+...
- Attached logs of primary monitor with:
debug mon 10
debug ms 1
Started prior to osd-57 being added, and stopped ...
- 09:41 AM Backport #38850: upgrade: 1 nautilus mon + 1 luminous mon can't automatically form quorum
- backport PR to nautilus: https://github.com/ceph/ceph/pull/28262
- 03:48 AM Bug #40035 (New): smoke.sh failing in jenkins "make check" test randomly
- ...
- 02:40 AM Backport #39538 (In Progress): mimic: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()-...
- https://github.com/ceph/ceph/pull/28259
05/27/2019
- 04:22 PM Bug #40029: ceph-mon: Caught signal (Aborted) in (CrushWrapper::update_choose_args(CephContext*)+...
- Happens on any host I create osd.57 on.
- 03:53 PM Bug #40029: ceph-mon: Caught signal (Aborted) in (CrushWrapper::update_choose_args(CephContext*)+...
- Recreating the OSDs, it seems that the monitors consistently crash when creating osd.57. And they consistently recov...
- 03:21 PM Bug #40029 (Resolved): ceph-mon: Caught signal (Aborted) in (CrushWrapper::update_choose_args(Cep...
- When adding a new osd, all primary monitors crashed....
05/25/2019
- 08:44 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- Note: I've only seen this in a relatively busy and full environment with quite a few backfills going on.
- 08:21 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- Thanks for the PR.
The problem itself seems to be caused as follows:
- A backfill starts to a set of osds
- One of th...
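A minimal, hypothetical sketch of the estimation idea discussed in this thread (names and ratio are illustrative, not the Ceph implementation): flag backfill_toofull from the projected post-backfill utilization, not from current usage alone:

    def backfill_toofull(used_bytes, total_bytes, estimated_backfill_bytes,
                         backfillfull_ratio=0.90):
        # Projected utilization once the backfill's final space is reserved.
        projected = (used_bytes + estimated_backfill_bytes) / total_bytes
        return projected > backfillfull_ratio

    # Example: a 4 TiB OSD at 72% with 200 GiB of incoming backfill is fine.
    TiB, GiB = 2**40, 2**30
    print(backfill_toofull(0.72 * 4 * TiB, 4 * TiB, 200 * GiB))  # False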
05/24/2019
- 08:23 PM Bug #20491 (Fix Under Review): objecter leaked OSDMap in handle_osd_map
- https://github.com/ceph/ceph/pull/28242
I think we shouldn't backport the fix, as it might upset misbehaved (unloc...
- 08:20 PM Bug #20491 (In Progress): objecter leaked OSDMap in handle_osd_map
- ...
- 08:45 AM Bug #36405: unittest_seastar_messenger failure on ARM
- Another one:...
- 12:27 AM Backport #39518 (In Progress): mimic: snaps missing in mapper, should be: ca was r -2...repaired
- https://github.com/ceph/ceph/pull/28232
05/23/2019
- 10:30 PM Bug #39175: RGW DELETE calls partially missed shortly after OSD startup
- Neha,
It was great meeting with you in Barcelona! I can't remember everything you wanted me to gather, but here's...
- 06:49 AM Backport #39513 (In Progress): mimic: osd: segv in _preboot -> heartbeat
- https://github.com/ceph/ceph/pull/28220
- 03:33 AM Backport #39539 (In Progress): nautilus: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent...
- https://github.com/ceph/ceph/pull/28219
- 12:47 AM Bug #18643: SnapTrimmer: inconsistencies may lead to snaptrimmer hang
- Do we still need to fix something here? https://github.com/ceph/ceph/pull/15635 at least sets a pg to snaptrim_error...
05/22/2019
- 10:00 PM Bug #39555 (In Progress): backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- 06:01 AM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- The pull request https://github.com/ceph/ceph/pull/28204 generates a warning with a better message.
health: HE...
- 02:27 PM Bug #40000 (New): osds do not bound xattrs and/or aggregate xattr data in pg log
- Currently we are having our cluster in an HEALTH_ERR state with 4 PGs inactive (3 of which are "peering" and 4th is "...
- 02:11 PM Bug #39978: Adding OSD to Luminous Cluster will crash the active mon
- Indeed the issue is related to adding a new host to the crush map.
I fixed it by manually adding the host to the cru...
- 09:32 AM Bug #39997 (New): not able to create osd keyring
- I have set up 1 mon and 1 mgr and two osds in a single node.
When I try to create the osd keyring via the following command:...
- 07:07 AM Backport #39475 (In Progress): mimic: segv in fgets() in collect_sys_info reading /proc/cpuinfo
- https://github.com/ceph/ceph/pull/28206
- 07:06 AM Backport #39519 (In Progress): nautilus: snaps missing in mapper, should be: ca was r -2...repaired
- https://github.com/ceph/ceph/pull/28205
- 06:50 AM Bug #24531: Mimic MONs have slow/long running ops
- Joao sent this as a possible fix: https://github.com/ceph/ceph/pull/28177
- 06:41 AM Bug #24531: Mimic MONs have slow/long running ops
- The attached file is the three mons' dump_historic_slow_ops output.
I deployed v13.2.5 ceph by rook in a kubernetes cluster, I...
05/21/2019
- 10:41 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- I know that, but the issue doesn't occur with those osds. The issue occurs with the ssds (checked with `ceph pg ls ba...
- 10:36 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- Rene, you have what looks more like an expected situation. With some OSDs showing as high as 72% utilization, a big ...
- 09:36 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- Here I can reproduce the issue with the ssd class.
We're in the process of reinstalling/redeploying (one host with a...
- 09:21 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- Seeing a "ceph osd df" like Alex provided is helpful in determining what is going on. Looking at it repeatedly while...
- 05:06 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
Erik:
New code that estimates and reserves the final backfill space requirement is not present in v13.2.5. It isn't ...
- 09:05 PM Backport #39699 (In Progress): nautilus: OSD down on snaptrim.
- 05:57 PM Backport #39698 (In Progress): mimic: OSD down on snaptrim.
- 05:51 PM Backport #38341 (In Progress): mimic: pg stuck in backfill_wait with plenty of disk space
- 02:20 PM Bug #24531: Mimic MONs have slow/long running ops
- Same problem as Dan van der Ster, on a v13.2.5 cluster five hours ago.
I restarted osd.0 when the monitor logs showed oldes...
- 03:03 AM Backport #39516 (In Progress): nautilus: osd-backfill-space.sh test failed in TEST_backfill_multi...
- https://github.com/ceph/ceph/pull/28187
05/20/2019
- 11:47 PM Backport #39719 (In Progress): luminous: short pg log+nautilus-p2p-stress-split: "Error: finished...
- 01:38 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- Just to chime in: We too have seen this (on 13.2.5) on OSDs that are only 10-20% full.
It always (magically) clear...
- 10:38 AM Bug #39978 (Duplicate): Adding OSD to Luminous Cluster will crash the active mon
- I recently upgraded my cluster to Luminous v12.2.11. While adding a new OSD the active monitor crashes (attempt to fr...
- 07:34 AM Bug #39972 (Fix Under Review): librados 'buffer::create' and related functions are not exported i...
- 06:51 AM Bug #39972 (Resolved): librados 'buffer::create' and related functions are not exported in C++ API
- Currently, there is no way to create any 'buffer::raw' objects since they are no longer exposed (since Nautilus) via ...
- 02:45 AM Backport #39514 (In Progress): nautilus: osd: segv in _preboot -> heartbeat
- https://github.com/ceph/ceph/pull/28164
05/18/2019
- 10:04 AM Feature #39966 (New): mon: allow log messages to be throttled and/or force trimming
- If some daemon is sending a lot of cluster log messages, we need a way to
- throttle, filter, or block them
- for...
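One possible shape of such a throttle, as a hedged Python sketch (a simple token bucket illustrating the "throttle" option from the list above; not a proposed Ceph design):

    import time

    class LogThrottle:
        # Token bucket: allow at most `rate` messages/sec, with bursts
        # of up to `burst` messages.
        def __init__(self, rate, burst):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False   # message would be dropped or deferred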
05/17/2019
- 06:41 AM Bug #39956: OSD:Cancel copy op causes memory leak
- If two clients access the same snap object at the same time and the object needs to be promoted, then before the promote is c...
- 02:46 AM Bug #39956 (New): OSD:Cancel copy op causes memory leak
- ceph version 12.2.7
==00:00:06:00.712 3722687== 15,237,248 (2,770,560 direct, 12,466,688 indirect) bytes in 3,848 ...
- 03:13 AM Backport #39512 (In Progress): nautilus: osd acting cycle
- https://github.com/ceph/ceph/pull/28160
05/16/2019
- 04:21 PM Bug #38827: valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxHandler::authent...
- The RGW verify suite has commented out the lines running valgrind on the mon.
https://github.com/ceph/ceph/pull/2815...
- 11:31 AM Backport #38850: upgrade: 1 nautilus mon + 1 luminous mon can't automatically form quorum
- I have been working on it, able to reproduce, just unable yet to pin down the cause.
Reproducing basically takes t...
- 02:00 AM Backport #39422 (In Progress): mimic: Don't mark removed osds in when running "ceph osd in any|al...
- https://github.com/ceph/ceph/pull/28142
- 01:41 AM Backport #39476 (In Progress): nautilus: segv in fgets() in collect_sys_info reading /proc/cpuinfo
- https://github.com/ceph/ceph/pull/28141
05/15/2019
- 03:50 PM Backport #39373 (In Progress): luminous: ceph tell osd.xx bench help : gives wrong help
- 03:44 PM Backport #38750 (In Progress): luminous: should report EINVAL in ErasureCode::parse() if m<=0
- 03:36 PM Backport #38880 (In Progress): luminous: ENOENT in collection_move_rename on EC backfill target
- 06:59 AM Backport #39374 (In Progress): mimic: ceph tell osd.xx bench help : gives wrong help
- https://github.com/ceph/ceph/pull/28097
05/14/2019
- 11:49 AM Backport #39720 (In Progress): mimic: short pg log+nautilus-p2p-stress-split: "Error: finished ti...
- 11:48 AM Backport #39721 (In Progress): nautilus: short pg log+nautilus-p2p-stress-split: "Error: finished...
- 11:41 AM Backport #39744 (Resolved): mimic: mon: "FAILED assert(pending_finishers.empty())" when paxos res...
- https://github.com/ceph/ceph/pull/28540
- 11:41 AM Backport #39743 (Resolved): nautilus: mon: "FAILED assert(pending_finishers.empty())" when paxos ...
- https://github.com/ceph/ceph/pull/28528
- 11:40 AM Backport #39738 (Resolved): nautilus: Binary data in OSD log from "CRC header" message
- https://github.com/ceph/ceph/pull/28504
- 11:40 AM Backport #39737 (Resolved): mimic: Binary data in OSD log from "CRC header" message
- https://github.com/ceph/ceph/pull/28503
05/13/2019
- 11:15 PM Bug #39723 (Duplicate): osd: valgrind Leak_DefinitelyLost
- ...
- 10:17 PM Backport #39721 (Resolved): nautilus: short pg log+nautilus-p2p-stress-split: "Error: finished ti...
- https://github.com/ceph/ceph/pull/28088
- 10:16 PM Backport #39720 (Resolved): mimic: short pg log+nautilus-p2p-stress-split: "Error: finished tid 3...
- https://github.com/ceph/ceph/pull/28089
- 10:16 PM Backport #39719 (Resolved): luminous: short pg log+nautilus-p2p-stress-split: "Error: finished ti...
- https://github.com/ceph/ceph/pull/28185
- 08:21 PM Bug #39304 (Pending Backport): short pg log+nautilus-p2p-stress-split: "Error: finished tid 3 whe...
- 08:09 PM Bug #39304 (Resolved): short pg log+nautilus-p2p-stress-split: "Error: finished tid 3 when last_a...
- 08:08 PM Bug #39582 (Pending Backport): Binary data in OSD log from "CRC header" message
- 03:31 AM Bug #39665: kstore: memory may leak on KStore::_do_read_stripe
- https://github.com/ceph/ceph/pull/28056
- 02:27 AM Backport #39421 (In Progress): nautilus: Don't mark removed osds in when running "ceph osd in any...
- https://github.com/ceph/ceph/pull/28072
05/12/2019
- 12:24 AM Bug #24974: Segmentation fault in tcmalloc::ThreadCache::ReleaseToCentralCache()
- dzafman-2019-05-09_20:06:24-rados-wip-zafman-testing-distro-basic-smithi/3943901...
05/11/2019
- 03:43 PM Backport #39205: nautilus: osd: leaked pg refs on shutdown
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27803
merged
05/10/2019
- 09:27 PM Bug #39175: RGW DELETE calls partially missed shortly after OSD startup
- The bucket was created on 2017-01-26 while the cluster was running the 0.94.3 (Hammer) release. Also the cluster has...
- 09:22 PM Bug #39484 (Pending Backport): mon: "FAILED assert(pending_finishers.empty())" when paxos restart
- 09:13 PM Backport #38881 (Resolved): nautilus: ENOENT in collection_move_rename on EC backfill target
- 03:19 PM Backport #38881: nautilus: ENOENT in collection_move_rename on EC backfill target
- Neha Ojha wrote:
> https://github.com/ceph/ceph/pull/27654
merged
- 09:12 PM Backport #39504 (Resolved): nautilus: Give recovery for inactive PGs a higher priority
- 03:18 PM Backport #39504: nautilus: Give recovery for inactive PGs a higher priority
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27854
merged
- 05:32 PM Bug #38345: mon: segv in MonOpRequest::~MonOpRequest OpHistory::cleanup
- /a/nojha-2019-05-10_00:33:57-upgrade-wip-parial-recovery-2019-05-09-distro-basic-smithi/3943156/
- 12:33 PM Backport #39206 (Resolved): mimic: osd: leaked pg refs on shutdown
- 12:23 PM Backport #39220 (Resolved): mimic: osd: FAILED ceph_assert(attrs || !pg_log.get_missing().is_miss...
- 12:23 PM Backport #38443 (Resolved): mimic: osd-markdown.sh can fail with CLI_DUP_COMMAND=1
- 12:22 PM Backport #38879 (Resolved): mimic: ENOENT in collection_move_rename on EC backfill target
- 12:13 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- I'm having this issue on Nautilus (14.2.1) but there is no way OSDs can be full as I'm not using more than 6.8% raw s...
- 11:00 AM Backport #39700 (Resolved): nautilus: [RFE] If the nodeep-scrub/noscrub flags are set in pools in...
- https://github.com/ceph/ceph/pull/29991
- 11:00 AM Backport #39699 (Resolved): nautilus: OSD down on snaptrim.
- https://github.com/ceph/ceph/pull/28203
- 11:00 AM Backport #39698 (Resolved): mimic: OSD down on snaptrim.
- https://github.com/ceph/ceph/pull/28202
- 10:59 AM Backport #39694 (Rejected): luminous: _txc_add_transaction error (39) Directory not empty not han...
- 10:59 AM Backport #39693 (Resolved): nautilus: _txc_add_transaction error (39) Directory not empty not han...
- https://github.com/ceph/ceph/pull/29115
- 10:58 AM Backport #39692 (Resolved): mimic: _txc_add_transaction error (39) Directory not empty not handle...
- https://github.com/ceph/ceph/pull/29217
- 10:56 AM Backport #39682 (Resolved): nautilus: filestore pre-split may not split enough directories
- https://github.com/ceph/ceph/pull/29988
- 10:56 AM Backport #39681 (Rejected): luminous: filestore pre-split may not split enough directories
- 08:29 AM Bug #39665 (Resolved): kstore: memory may leak on KStore::_do_read_stripe
- While testing kstore, we found that memory leaks when executing read ops. The root cause is that when executing read ops, the in-...
- 04:02 AM Bug #39661: kstore: memory may leak on KStore::_do_read_stripe
- There is no need to cache the in-flight stripes during the read process; we can just discard them on read ops.
- 03:53 AM Bug #39661 (New): kstore: memory may leak on KStore::_do_read_stripe
- While testing kstore, we found that memory leaks when executing read ops. The root cause is that when executing read ops, the in-...
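An illustrative Python sketch of the pattern described above (the real code is C++ in KStore; names here are hypothetical): inserting every stripe read into an unbounded cache leaks, while serving the read without caching does not:

    stripe_cache = {}   # long-lived, never evicted

    def do_read_stripe_leaky(stripe_id, read_from_db):
        # Leak pattern: every read inserts into the cache and nothing
        # ever removes it, so a read-only workload grows memory forever.
        if stripe_id not in stripe_cache:
            stripe_cache[stripe_id] = read_from_db(stripe_id)
        return stripe_cache[stripe_id]

    def do_read_stripe_fixed(stripe_id, read_from_db):
        # The fix described above: serve the read and discard the stripe,
        # consulting the cache only if something else already populated it.
        cached = stripe_cache.get(stripe_id)
        return cached if cached is not None else read_from_db(stripe_id)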
- 03:34 AM Bug #39636 (Resolved): osd: PeeringState valgrind error UninitCondition
- 03:34 AM Bug #39636 (Fix Under Review): osd: PeeringState valgrind error UninitCondition
- 12:16 AM Bug #39659 (New): FAILED ceph_assert(info.history.same_interval_since != 0)
- http://pulpito.ceph.com/sjust-2019-05-09_13:40:11-smoke-sjust-wip-peering-state-cleanup-distro-basic-smithi/3942704/ ...
05/09/2019
- 11:38 PM Bug #38893: RuntimeError: expected MON_CLOCK_SKEW but got none
- rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados...
- 04:22 PM Bug #38827: valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxHandler::authent...
- Is this being actively worked on?
How close are we to a fix on this?
I would like to make this a high priority ...
- 03:48 PM Backport #39206: mimic: osd: leaked pg refs on shutdown
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27938
merged
- 02:22 PM Bug #39175: RGW DELETE calls partially missed shortly after OSD startup
- The cluster itself followed this upgrade path:
0.94.10 -> 10.2.10 -> 12.2.5 -> 12.2.8
We will look into the histo...
- 12:12 AM Bug #39175: RGW DELETE calls partially missed shortly after OSD startup
- Hi Wes,
Can you check if the bucket on which you are seeing issues, was present in a jewel cluster. We have seen s... - 07:14 AM Bug #39390 (Pending Backport): filestore pre-split may not split enough directories
- 07:10 AM Bug #38124: OSD down on snaptrim.
- Greg Farnum wrote:
> No ETA; it'll have to wend its way through the backports process. I don't think any releases ar...
- 03:40 AM Bug #39636: osd: PeeringState valgrind error UninitCondition
- Found it, testing.
- 03:05 AM Backport #39375 (In Progress): nautilus: ceph tell osd.xx bench help : gives wrong help
- https://github.com/ceph/ceph/pull/28035
- 02:05 AM Bug #21174 (Rejected): OSD crash: 903: FAILED assert(objiter->second->version > last_divergent_up...
- I'm closing this bug. The hardware configuration must keep data that has been sync'ed to disk safe. This requires th...
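For context, a minimal Python example of what "sync'ed to disk" means in that comment (the filename is a hypothetical placeholder); the durability assumption is that data survives power loss only after an explicit flush such as fsync, and only if the hardware honors it:

    import os

    fd = os.open("journal.bin", os.O_WRONLY | os.O_CREAT, 0o600)
    os.write(fd, b"committed record\n")
    os.fsync(fd)   # ask the OS, and through it the device, to persist the data
    os.close(fd)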
- 01:49 AM Documentation #39011 (Resolved): Document how get_recovery_priority() and get_backfill_priority()...
- 01:49 AM Bug #39304 (Fix Under Review): short pg log+nautilus-p2p-stress-split: "Error: finished tid 3 whe...
- 12:31 AM Bug #23145: OSD crashes during recovery of EC pg
- FWIW, I'm running into this, too (on Nautilus). I've got 2 OSDs in this situation. Let me know if you want any debug...