Activity
From 11/05/2018 to 12/04/2018
12/04/2018
- 07:59 PM Bug #37512 (Resolved): ready_to_merge message lost
- 04:10 AM Bug #37512: ready_to_merge message lost
- https://github.com/ceph/ceph/pull/25388
- 04:09 AM Bug #37512 (Resolved): ready_to_merge message lost
- //a/sage-2018-12-03_17:39:26-rados-wip-sage2-testing-2018-12-03-0942-distro-basic-smithi/3304262...
- 07:58 PM Bug #37525 (Fix Under Review): unprime_split_children may discard query
- https://github.com/ceph/ceph/pull/25399
- 07:52 PM Bug #37525 (Resolved): unprime_split_children may discard query
- ...
- 09:48 AM Bug #23031: FAILED assert(!parent->get_log().get_missing().is_missing(soid))
- https://github.com/ceph/ceph/pull/25219
- 03:49 AM Bug #37511: merge target placeholder may get wrong PastIntervals from source
- I think the fix is to just bite the bullet and put PastIntervals at decrement time in the pg_info_t, along with the o...
- 03:49 AM Bug #37511 (Resolved): merge target placeholder may get wrong PastIntervals from source
- ...
- 03:25 AM Bug #37509: require past_interval bounds mismatch due to osd oldest_map
- I don't think the superblock.oldest_map should be a factor in this calculation. I suspect it is in there to deal wit...
- 03:24 AM Bug #37509 (Can't reproduce): require past_interval bounds mismatch due to osd oldest_map
- ...
12/03/2018
- 11:33 PM Backport #37496 (In Progress): mimic: OSD mkfs might assert when working agains bluestore disk th...
- https://github.com/ceph/ceph/pull/25385
- 04:02 PM Bug #37507 (Resolved): osd_memory_target: failed assert when options mismatch
- We tried setting osd_memory_target to 1GB and this results in the following assertion early after startup:...
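For reference, a setting like this can be applied at runtime roughly as follows (a sketch only; the 1 GiB value and the injectargs route are illustrative, not taken from the report):
    # set osd_memory_target to 1 GiB (value is in bytes) on all OSDs
    ceph tell osd.* injectargs '--osd_memory_target 1073741824'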
12/02/2018
- 07:30 PM Bug #36725: luminous: Apparent Memory Leak in OSD
- Upgraded one OSD server to 12.2.10: Same symptom observed. See attached. Two OSD daemons use up all physical memory...
12/01/2018
- 07:14 PM Feature #37500 (New): ceph status/health hang when they could give helpful hints
- Today I had an incident with my Ceph cluster that took down my infrastructure.
I am running Ceph(FS) 13.2.2 on Lin...
- 06:42 AM Backport #37496 (Resolved): mimic: OSD mkfs might assert when working agains bluestore disk that ...
- https://github.com/ceph/ceph/pull/25385
11/30/2018
- 07:56 PM Bug #37404: OSD mkfs might assert when working agains bluestore disk that already has a superblock
- Finally merged within
https://github.com/ceph/ceph/pull/25308
- 07:20 PM Bug #37404 (Pending Backport): OSD mkfs might assert when working agains bluestore disk that alre...
- 08:51 AM Bug #37452: FAILED ceph_assert(prealloc_left == (int64_t)need)
- Kefu Chai wrote:
> but in the meantime, can we have a more user-friendly error message in this case? I can hardly t...
- 08:11 AM Bug #37452 (New): FAILED ceph_assert(prealloc_left == (int64_t)need)
- I'd like to keep this open as a usability issue.
- 08:09 AM Bug #37452 (Rejected): FAILED ceph_assert(prealloc_left == (int64_t)need)
- 02:51 AM Bug #37452: FAILED ceph_assert(prealloc_left == (int64_t)need)
- Igor, thanks for looking into it.
So before the OSD crashed, we had allocated 8.79 G out of 10G, and the free spac...
- 07:33 AM Bug #24909 (Resolved): RBD client IOPS pool stats are incorrect (2x higher; includes IO hints as ...
- 07:33 AM Backport #36556 (Resolved): luminous: RBD client IOPS pool stats are incorrect (2x higher; includ...
- 06:24 AM Bug #24587 (Resolved): librados api aio tests race condition
- 06:22 AM Backport #36646 (Resolved): luminous: librados api aio tests race condition
- 06:21 AM Bug #36602 (Resolved): osd: race condition opening heartbeat connection
- 06:21 AM Backport #36636 (Resolved): luminous: osd: race condition opening heartbeat connection
- 06:18 AM Bug #36406 (Resolved): Cache-tier forward mode hang in luminous (again)
- 06:18 AM Backport #36657 (Resolved): luminous: Cache-tier forward mode hang in luminous (again)
11/29/2018
- 10:10 AM Bug #37452: FAILED ceph_assert(prealloc_left == (int64_t)need)
- The most probable root cause for the issue is the lack of free space at BlueStore main device. It's 10GB by default a...
- 05:46 AM Bug #37452 (Resolved): FAILED ceph_assert(prealloc_left == (int64_t)need)
- ...
- 09:58 AM Bug #37439: Degraded PG does not discover remapped data on originating OSD
- In the second scenario, the cluster was completely healthy before new disks were added. My guess is that non-remapped...
- 09:13 AM Backport #36321 (Resolved): luminous: Add support for osd_delete_sleep configuration value
- 01:09 AM Backport #36321: luminous: Add support for osd_delete_sleep configuration value
- Vikhyat Umrao wrote:
> https://github.com/ceph/ceph/pull/24501
merged
- 09:10 AM Backport #36630 (Resolved): luminous: potential deadlock in PG::_scan_snaps when repairing snap m...
- 01:05 AM Backport #36630: luminous: potential deadlock in PG::_scan_snaps when repairing snap mapper
- https://github.com/ceph/ceph/pull/24833 merged
- 06:21 AM Bug #36177 (Resolved): rados rm --force-full is blocked when cluster is in full status
- 06:20 AM Backport #36436 (Resolved): luminous: rados rm --force-full is blocked when cluster is in full st...
- 01:03 AM Backport #36436: luminous: rados rm --force-full is blocked when cluster is in full status
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25018
merged
- 01:17 AM Backport #36556: luminous: RBD client IOPS pool stats are incorrect (2x higher; includes IO hints...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25025
merged
- 01:16 AM Backport #36646: luminous: librados api aio tests race condition
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/25028
merged
- 01:14 AM Backport #36636: luminous: osd: race condition opening heartbeat connection
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/25035
merged
- 01:14 AM Backport #36657: luminous: Cache-tier forward mode hang in luminous (again)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25074
merged
11/28/2018
- 10:06 PM Bug #37439: Degraded PG does not discover remapped data on originating OSD
- The first scenario definitely looks like an issue; perhaps we are improperly filtering for out rather than down durin...
- 02:07 PM Bug #37439: Degraded PG does not discover remapped data on originating OSD
- As I can't edit the post...
To clarify: With *missing* I mean the parts of the erasure coded object so the object ... - 02:00 PM Bug #37439 (Resolved): Degraded PG does not discover remapped data on originating OSD
- There seems to be an issue that an OSD is not queried for *missing objects* that were *remapped*, but the OSD for thi...
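As a diagnostic aid (hypothetical example; the PG id 2.7 is made up), the peering state and the set of OSDs a PG has probed can be inspected with:
    # list PGs currently stuck degraded
    ceph pg dump_stuck degraded
    # show peering details for one PG, including which OSDs were probed/queried
    ceph pg 2.7 query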
- 05:22 PM Backport #37437: mimic: crushtool: add --reclassify operation to convert legacy crush maps to use...
- h3. original description
The functionality has been added to master (nautilus) [1]. It would be nice to backport t...
- 04:03 PM Backport #37437: mimic: crushtool: add --reclassify operation to convert legacy crush maps to use...
- PR: https://github.com/ceph/ceph/pull/25306
- 01:39 PM Backport #37437 (Resolved): mimic: crushtool: add --reclassify operation to convert legacy crush ...
- https://github.com/ceph/ceph/pull/25306
- 05:21 PM Backport #37438: luminous: crushtool: add --reclassify operation to convert legacy crush maps to ...
- h3. original description
The functionality has been added to master (nautilus) [1]. It would be nice to backport t...
- 04:02 PM Backport #37438: luminous: crushtool: add --reclassify operation to convert legacy cru...
- PR: https://github.com/ceph/ceph/pull/25307
- 01:41 PM Backport #37438 (Resolved): luminous: crushtool: add --reclassify operation to convert legacy cru...
- https://github.com/ceph/ceph/pull/25307
- 05:20 PM Bug #37443 (Resolved): crushtool: add --reclassify operation to convert legacy crush maps to use ...
- The functionality has been added to master (nautilus) [1]. It would be nice to backport this.
[1] https://github.c...
- 05:09 AM Bug #36732 (Resolved): tools/rados: fix segmentation fault
11/27/2018
- 08:40 PM Backport #36321 (In Progress): luminous: Add support for osd_delete_sleep configuration value
- 08:39 PM Backport #36321: luminous: Add support for osd_delete_sleep configuration value
- h3. original description
[RFE] Introduce an option or flag to throttle the pg deletion process
https://bugzilla.r...
- 07:45 PM Bug #36250: ceph-osd process crashing
- I believe this issue was due to a malfunctioning ceph-fuse client, although I don't have data to back that up as it w...
- 06:02 PM Fix #37410 (Duplicate): change default osd_objectstore to bluestore
- duplicate of #36494
- 05:53 PM Fix #37410 (Fix Under Review): change default osd_objectstore to bluestore
- https://github.com/ceph/ceph/pull/25288
- 05:38 PM Fix #37410 (Duplicate): change default osd_objectstore to bluestore
- This way, the mon and associated tools know what the default actually is on the cluster.
- 06:01 PM Bug #36494: Change osd_objectstore default to bluestore
- Can you set this for backport to mimic and luminous?
- 03:30 PM Backport #37341 (In Progress): luminous: doc: Add bluestore memory autotuning docs
- 03:26 PM Backport #37340 (In Progress): mimic: doc: Add bluestore memory autotuning docs
- 02:27 PM Bug #36525: osd-scrub-snaps.sh failure
- /a/kchai-2018-11-27_11:44:27-rados-wip-kefu2-testing-2018-11-27-1724-distro-basic-smithi/3285226/teuthology.log
- 11:45 AM Bug #37404 (Fix Under Review): OSD mkfs might assert when working agains bluestore disk that alre...
- https://github.com/ceph/ceph/pull/25281/files
- 11:04 AM Bug #37404 (In Progress): OSD mkfs might assert when working agains bluestore disk that already h...
- 11:01 AM Bug #37404 (Resolved): OSD mkfs might assert when working agains bluestore disk that already has ...
- One might face an assert on a collection's release which happens
after store destruction. For now it is observable in some qa...
11/26/2018
- 11:49 PM Bug #24612 (Resolved): FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::prune_init()
- 11:49 PM Backport #35071 (Resolved): mimic: FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::p...
- 08:56 PM Backport #35071: mimic: FAILED assert(osdmap_manifest.pinned.empty()) in OSDMonitor::prune_init()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24918
merged
- 11:48 PM Bug #22544 (Resolved): objecter cannot resend split-dropped op when racing with con reset
- 11:48 PM Backport #35843 (Resolved): mimic: objecter cannot resend split-dropped op when racing with con r...
- 08:55 PM Backport #35843: mimic: objecter cannot resend split-dropped op when racing with con reset
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24970
merged
- 11:48 PM Bug #36358 (Resolved): Interactive mode CLI prints no output since Mimic
- 11:47 PM Backport #36432 (Resolved): mimic: Interactive mode CLI prints no output since Mimic
- 08:54 PM Backport #36432: mimic: Interactive mode CLI prints no output since Mimic
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24971
merged
- 11:47 PM Backport #36433 (Resolved): mimic: monstore tool rebuild does not generate creating_pgs
- 08:54 PM Backport #36433: mimic: monstore tool rebuild does not generate creating_pgs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25016
merged
- 11:46 PM Backport #36435 (Resolved): mimic: rados rm --force-full is blocked when cluster is in full status
- 08:53 PM Backport #36435: mimic: rados rm --force-full is blocked when cluster is in full status
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25017
merged
- 11:45 PM Backport #36505 (Resolved): mimic: mon osdmap cash too small during upgrade to mimic
- 08:53 PM Backport #36505: mimic: mon osdmap cash too small during upgrade to mimic
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25019
merged
- 11:44 PM Backport #36557 (Resolved): mimic: RBD client IOPS pool stats are incorrect (2x higher; includes ...
- 08:52 PM Backport #36557: mimic: RBD client IOPS pool stats are incorrect (2x higher; includes IO hints as...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25024
merged
- 11:44 PM Backport #36637 (Resolved): mimic: osd: race condition opening heartbeat connection
- 08:51 PM Backport #36637: mimic: osd: race condition opening heartbeat connection
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/25026
merged
- 11:43 PM Backport #36647 (Resolved): mimic: librados api aio tests race condition
- 08:51 PM Backport #36647: mimic: librados api aio tests race condition
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/25027
merged
- 11:40 PM Backport #36658 (Resolved): mimic: Cache-tier forward mode hang in luminous (again)
- 08:48 PM Backport #36658: mimic: Cache-tier forward mode hang in luminous (again)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25075
merged
- 08:45 PM Bug #37393 (Resolved): mimic: osd-backfill-stats.sh fails in rados/standalone/osd.yaml
- Run: http://pulpito.front.sepia.ceph.com/yuriw-2018-11-21_22:16:20-rados-wip-yuri5-testing-2018-11-21-1510-mimic-dist...
11/25/2018
- 09:56 AM Bug #37326: Daily inconsistent objects
- Does anyone have any ideas?
11/23/2018
- 04:52 PM Bug #22597: "sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0'" fails in upgrade test
- The problematic chown was introduced in mimic, so backporting only that far back.
See https://github.com/ceph/ceph...
- 02:34 AM Backport #37288 (In Progress): mimic: "sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0'" fails i...
- https://github.com/ceph/ceph/pull/25227
11/22/2018
- 05:19 PM Backport #37273 (Resolved): mimic: debian: packaging need to reflect move of /etc/bash_completion...
- 04:46 PM Backport #37273: mimic: debian: packaging need to reflect move of /etc/bash_completion.d/radosgw-...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25115
merged
- 07:32 AM Bug #36767: OSD: unrecoverable heartbeat connections
- see also: https://tracker.ceph.com/issues/36175
11/21/2018
- 08:25 AM Backport #37340 (Need More Info): mimic: doc: Add bluestore memory autotuning docs
- 07:19 AM Bug #37326: Daily inconsistent objects
- It happens on different disks, even on different host nodes.
- 06:40 AM Bug #24676: FreeBSD/Linux integration - monitor map with wrong sa_family
- Hello,
Just tested this and received the same "NetHandler create_socket couldn't create socket (97) Address family...
11/20/2018
- 09:42 PM Bug #36725: luminous: Apparent Memory Leak in OSD
- Upgraded one OSD server to 12.2.9. Clean reboot. Generating hourly report on memory and mempools. Three examples a...
- 09:10 PM Backport #37340: mimic: doc: Add bluestore memory autotuning docs
- This is blocked by mimic version of https://github.com/ceph/ceph/pull/24065
- 07:54 PM Backport #37340 (Resolved): mimic: doc: Add bluestore memory autotuning docs
- https://github.com/ceph/ceph/pull/25283
- 07:54 PM Backport #37343 (Resolved): luminous: Prioritize user specified scrubs
- https://github.com/ceph/ceph/pull/25514
- 07:54 PM Backport #37342 (Resolved): mimic: Prioritize user specified scrubs
- https://github.com/ceph/ceph/pull/25513
- 07:54 PM Backport #37341 (Resolved): luminous: doc: Add bluestore memory autotuning docs
- https://github.com/ceph/ceph/pull/25284
- 11:01 AM Bug #37289: Issue with overfilled OSD for cache-tier pools
- Without cache tiering everything is good.
After reaching 95% utilization of OSD for my replicated pool (without...
11/19/2018
- 10:57 PM Bug #36667: OSD object_map sync returned error
- This might also indicate that something screwed up the file permissions or ownership in /var/lib/ceph/osd/ceph-10. Maybe ...
- 10:56 PM Bug #36709 (Need More Info): OSD stuck while flushing rocksdb WAL
- I'm not sure rocksdb is what's stuck... can you dump 'ceph daemon osd.NNN ops' to see what state the operations a...
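For example (OSD id 12 is hypothetical):
    # show in-flight operations and the state each one is in
    ceph daemon osd.12 ops
    # recently completed (and slow) operations can also be useful
    ceph daemon osd.12 dump_historic_ops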
- 10:54 PM Bug #37264: scrub warning check incorrectly uses mon scrub interval
- You should be able to get the pool info out of the monitor's OSDMap, if that was a question... :)
- 10:51 PM Bug #37289: Issue with overfilled OSD for cache-tier pools
- I think the first question to answer is if this can be reproduced without cache tiering. It's not immediately clear ...
- 10:48 PM Bug #37326 (Need More Info): Daily inconsistent objects
- Is this happening on the same disk all the time, or the same node? If so, that suggests a piece of hardware (e.g. con...
- 10:31 AM Bug #37326 (Need More Info): Daily inconsistent objects
- We have many Ceph mimic 13.2.1 installations with a similar configuration on Ubuntu, but on one of them we get inconsiste...
- 10:48 PM Bug #36304 (Can't reproduce): FAILED ceph_assert(p != pg_slots.end()) in OSDShard::register_and_w...
- I'm guessing this was fixed by 450f337d6fd048c8c95a0ec0dec0d97f5474922e
- 10:43 PM Bug #36598: osd: "bluestore(/var/lib/ceph/osd/ceph-6) ENOENT on clone suggests osd bug"
- Sage thinks this might also be #36739.
- 10:40 PM Bug #36686 (In Progress): osd: pg log hard limit can cause crash during upgrade
- 10:40 PM Bug #36725 (Need More Info): luminous: Apparent Memory Leak in OSD
- Can you dump the mempools (ceph daemon osd.NNN dump_mempools) several times over the growth of the process so we can ...
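A simple way to collect that over time (a sketch; the OSD id 12 and the hourly interval are just examples):
    # append a timestamped mempool dump every hour while the process grows
    while true; do
      date >> /tmp/osd.12-mempools.log
      ceph daemon osd.12 dump_mempools >> /tmp/osd.12-mempools.log
      sleep 3600
    done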
- 07:15 PM Bug #37269 (Pending Backport): Prioritize user specified scrubs
- 04:47 PM Bug #37329 (Pending Backport): doc: Add bluestore memory autotuning docs
- 04:44 PM Bug #37329 (Resolved): doc: Add bluestore memory autotuning docs
- https://github.com/ceph/ceph/pull/25069
11/17/2018
- 03:45 AM Bug #37299 (New): ceph-disk: ceph osd start failed: Command '['/usr/bin/systemctl', 'disable', 'c...
- Please see the details at:
https://bugzilla.redhat.com/show_bug.cgi?id=1649208#c0
11/16/2018
- 12:47 PM Bug #37289 (New): Issue with overfilled OSD for cache-tier pools
- We have a bad issue in our ceph cluster.
Centos 7.5 (3.10.0-862.3.2.el7.x86_64)
Luminous 12.2.5, bluestore OSDs, us...
- 11:35 AM Backport #37288 (Resolved): mimic: "sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0'" fails in u...
- https://github.com/ceph/ceph/pull/25227
- 10:34 AM Bug #16500 (Resolved): ceph_erasure_code_benchmark parameter checking error for LRC plugin
- 06:22 AM Bug #22597 (Pending Backport): "sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0'" fails in upgra...
- 04:53 AM Bug #36767 (Fix Under Review): OSD: unrecoverable heartbeat connections
- 02:53 AM Feature #23493: config: strip/escape single-quotes in values when setting them via conf file/assi...
- Joao,
Could you take a look at https://github.com/ceph/ceph/pull/20610 and see whether you consider it something t...
- 01:59 AM Bug #37264: scrub warning check incorrectly uses mon scrub interval
- The scrub warning also doesn't consider the pool-specific scrub interval if specified. The scrub code gets the p...
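For reference, per-pool scrub intervals can be set like this (the pool name 'rbd' and the values, in seconds, are only examples):
    # per-pool overrides that the warning check should also take into account
    ceph osd pool set rbd scrub_min_interval 86400
    ceph osd pool set rbd scrub_max_interval 604800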
11/15/2018
- 01:16 PM Bug #25146 (Resolved): "rocksdb: Corruption: Can't access /000000.sst" in upgrade:mimic-x:paralle...
- 11:36 AM Backport #37273 (In Progress): mimic: debian: packaging need to reflect move of /etc/bash_complet...
- 10:47 AM Backport #37273: mimic: debian: packaging need to reflect move of /etc/bash_completion.d/radosgw-...
- PR with this backport is https://github.com/ceph/ceph/pull/25115
- 09:44 AM Backport #37273 (Resolved): mimic: debian: packaging need to reflect move of /etc/bash_completion...
- https://github.com/ceph/ceph/pull/25115
- 10:36 AM Backport #37274 (In Progress): luminous: debian: packaging need to reflect move of /etc/bash_comp...
- 09:45 AM Backport #37274 (Resolved): luminous: debian: packaging need to reflect move of /etc/bash_complet...
- https://github.com/ceph/ceph/pull/24997
- 09:38 AM Bug #36725: luminous: Apparent Memory Leak in OSD
- raising priority since this might be a regression in 12.2.9
- 06:31 AM Bug #36741 (Pending Backport): debian: packaging need to reflect move of /etc/bash_completion.d/r...
- https://github.com/ceph/ceph/pull/24996
- 06:20 AM Bug #37269 (Resolved): Prioritize user specified scrubs
- When scrubs start backing up and a user asks for a scrub, it doesn't get priority compared to overdue scrubs. The...
- 06:14 AM Bug #37264 (Resolved): scrub warning check incorrectly uses mon scrub interval
- When checking mon_warn_not_scrubbed, the mon_scrub_interval is used instead of osd_scrub_max_interval.
11/14/2018
- 08:01 PM Bug #36725: luminous: Apparent Memory Leak in OSD
- Note: Downgrading both OSD servers to v12.2.8 returned memory usage to normal.
- 11:43 AM Backport #36636: luminous: osd: race condition opening heartbeat connection
- std::lock_guard is a C++11 feature: https://en.cppreference.com/w/cpp/header/mutex
11/13/2018
- 02:23 PM Backport #36658 (In Progress): mimic: Cache-tier forward mode hang in luminous (again)
- 02:15 PM Backport #36657 (In Progress): luminous: Cache-tier forward mode hang in luminous (again)
- 11:57 AM Bug #36388: osd: "out of order op"
- This looks like the dup op entries were exceeded so the op was not detected as a dup. Perhaps we should increase the ...
- 04:55 AM Bug #25146: "rocksdb: Corruption: Can't access /000000.sst" in upgrade:mimic-x:parallel-master-di...
- https://github.com/ceph/ceph/pull/25070
11/12/2018
- 03:41 PM Bug #36767: OSD: unrecoverable heartbeat connections
- Pull request:
https://github.com/ceph/ceph/pull/25061
- 03:09 PM Bug #36767 (Fix Under Review): OSD: unrecoverable heartbeat connections
- There are several unrecoverable heartbeat connections according to logs.
They usually appear after problems/reprodu...
- 07:05 AM Bug #36758 (Duplicate): aborts in rocksdb::TableFileName() in mimic-x upgrade test suite
- 05:26 AM Bug #36758: aborts in rocksdb::TableFileName() in mimic-x upgrade test suite
- I think it's a dup of #25146
- 02:57 AM Bug #16500 (Fix Under Review): ceph_erasure_code_benchmark parameter checking error for LRC plugin
- https://github.com/ceph/ceph/pull/25046
11/10/2018
- 10:01 PM Bug #36758: aborts in rocksdb::TableFileName() in mimic-x upgrade test suite
- Marking it "urgent", as it can be consistently reproduced, and it renders the cluster unusable after upgrading from...
- 06:11 PM Bug #36758 (Duplicate): aborts in rocksdb::TableFileName() in mimic-x upgrade test suite
- ...
- 02:33 PM Backport #36636 (In Progress): luminous: osd: race condition opening heartbeat connection
- 11:46 AM Backport #36636 (Need More Info): luminous: osd: race condition opening heartbeat connection
- The master commit uses std::lock_guard, which is a C++17-ism, and this makes the backport non-trivial (?)
- 12:42 PM Subtask #36091 (Resolved): [rbd top] collect client perf stats when query is enabled
- *PR*: https://github.com/ceph/ceph/pull/24265
- 11:56 AM Backport #36646 (In Progress): luminous: librados api aio tests race condition
- 11:52 AM Backport #36647 (In Progress): mimic: librados api aio tests race condition
- 11:40 AM Backport #36637 (In Progress): mimic: osd: race condition opening heartbeat connection
- 11:38 AM Backport #36556 (In Progress): luminous: RBD client IOPS pool stats are incorrect (2x higher; inc...
- 11:37 AM Backport #36557 (In Progress): mimic: RBD client IOPS pool stats are incorrect (2x higher; includ...
- 10:19 AM Backport #36506 (In Progress): luminous: mon osdmap cash too small during upgrade to mimic
- 10:05 AM Backport #36505 (In Progress): mimic: mon osdmap cash too small during upgrade to mimic
- 09:59 AM Backport #36436 (In Progress): luminous: rados rm --force-full is blocked when cluster is in full...
- 09:54 AM Backport #36435 (In Progress): mimic: rados rm --force-full is blocked when cluster is in full st...
- 09:02 AM Backport #36433 (In Progress): mimic: monstore tool rebuild does not generate creating_pgs
11/09/2018
- 10:08 PM Bug #36667: OSD object_map sync returned error
- Check dmesg for hardware errors; this is leveldb/rocksdb returning an error while writing to disk. You may want to ask the ...
- 10:05 PM Bug #36677 (Resolved): /usr/include/rados/buffer.h:657:61: error: expected ',' before ')' token
- 10:05 PM Bug #36732 (Fix Under Review): tools/rados: fix segmentation fault
- https://github.com/ceph/ceph/pull/24990
- 08:55 PM Bug #36610 (Resolved): filestore merge collection replay problem
- 08:54 PM Bug #36748 (New): ms_deliver_verify_authorizer no AuthAuthorizeHandler found for protocol 0
- ...
- 05:18 PM Bug #36746 (New): Ignore osd_find_best_info_ignore_history_les for erasure-coded PGs
- The only case that osd_find_best_info_ignore_history_les would work for erasure coded pools is if an interval didn'...
- 09:29 AM Bug #36741 (Resolved): debian: packaging need to reflect move of /etc/bash_completion.d/radosgw-a...
- Hi,
Between version 12.0.2 and 12.0.3, the file /etc/bash_completion.d/radosgw-admin moved from the radosgw packag...
11/08/2018
- 11:34 PM Bug #36739: ENOENT in collection_move_rename on EC backfill target
- we create a gen object normally, on a backfill target,...
- 10:25 PM Bug #36739: ENOENT in collection_move_rename on EC backfill target
- 10:24 PM Bug #36739 (Resolved): ENOENT in collection_move_rename on EC backfill target
- ...
- 09:13 PM Feature #36737: Allow multi instances of "make tests" on the same machine
- @Kefu pls take a look, IIRC you mentioned that this may not be a big effort.
- 09:12 PM Feature #36737 (Resolved): Allow multi instances of "make tests" on the same machine
- Currently it's only possible to run `...make; make tests -j8; ctest ...` on the same machine.
Please consider chan...
- 10:02 AM Bug #36732 (Resolved): tools/rados: fix segmentation fault
- When connected to a ceph cluster, calling exit(1) directly will
cause a finisher thread segmentation fault as follo...
11/07/2018
- 11:37 PM Feature #24917: Gracefully deal with upgrades when bluestore skipping of data_digest becomes active
- Josh, this code needs to be written. It needs a feature bit AND a mon flag that can only be set when all OSDs are ...
- 10:07 PM Backport #36729 (Resolved): mimic: Add support for osd_delete_sleep configuration value
- https://github.com/ceph/ceph/pull/25507
- 10:06 PM Feature #36474 (Pending Backport): Add support for osd_delete_sleep configuration value
- 04:40 PM Bug #36686: osd: pg log hard limit can cause crash during upgrade
- Tests added:
https://github.com/ceph/ceph/pull/24954
https://github.com/ceph/ceph/pull/24938
- 04:27 PM Bug #36725 (Closed): luminous: Apparent Memory Leak in OSD
- Since the last update (late October), we've been experiencing an apparent memory leak in the OSD process on two ceph servers in small ...
- 11:44 AM Backport #36432 (In Progress): mimic: Interactive mode CLI prints no output since Mimic
- 11:42 AM Backport #35843 (In Progress): mimic: objecter cannot resend split-dropped op when racing with co...
11/06/2018
- 01:22 PM Bug #20798: LibRadosLockECPP.LockExclusiveDurPP gets EEXIST
- /a/sage-2018-11-05_22:04:25-rados-wip-sage3-testing-2018-11-05-1406-distro-basic-smithi/3227352
- 11:54 AM Support #36326: Huge traffic spike and assert(is_primary())
- Thanks for the answer! It looks like traffic spike was caused by another issue: ceph-mon's db grows up to 15GB and it...
- 10:07 AM Bug #36709 (Closed): OSD stuck while flushing rocksdb WAL
- Hi all,
We use:
ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)
Clients work on:
...
- 01:30 AM Bug #36686: osd: pg log hard limit can cause crash during upgrade
- Quoting my reply to ceph-devel for reference:
"Nathan, I don't think we want to revert it for 13.2.2.
This is b...
11/05/2018
- 10:42 PM Bug #22902 (Resolved): src/osd/PG.cc: 6455: FAILED assert(0 == "we got a bad state machine event")
- 10:32 PM Bug #36686: osd: pg log hard limit can cause crash during upgrade
- So, the luminous revert was merged. Neha, will there be a mimic revert as well? Since the pg hard limit patches are p...
- 10:13 PM Bug #36686: osd: pg log hard limit can cause crash during upgrade
- https://github.com/ceph/ceph/pull/24903 merged
- 10:28 PM Bug #36508 (Resolved): gperftools-libs-2.6.1-1 or newer required for binaries linked against corr...
- 10:28 PM Backport #36552 (Resolved): luminous: gperftools-libs-2.6.1-1 or newer required for binaries link...
- 10:10 PM Backport #36552: luminous: gperftools-libs-2.6.1-1 or newer required for binaries linked against ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24706
merged
- 10:25 PM Bug #34541 (Resolved): deep scrub cannot find the bitrot if the object is cached
- 10:25 PM Backport #35067 (Resolved): luminous: deep scrub cannot find the bitrot if the object is cached
- 10:08 PM Backport #35067: luminous: deep scrub cannot find the bitrot if the object is cached
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24802
merged
- 10:18 PM Backport #36678 (Resolved): luminous: src/osd/PG.cc: 6455: FAILED assert(0 == "we got a bad state...
- 05:20 PM Feature #24917: Gracefully deal with upgrades when bluestore skipping of data_digest becomes active
- Let's include this with any other feature bit addition.
- 01:30 PM Support #36614: Cluster uses substantially more space after rebalance (erasure codes)
- > I suspect it shouldn't.
But it does exactly that.
> That's will only re-copy the data to the HEAD revision.
...