Activity

From 11/15/2021 to 12/14/2021

12/14/2021

10:02 PM Bug #50042: rados/test.sh: api_watch_notify failures
... Neha Ojha
09:56 PM Bug #49524: ceph_test_rados_delete_pools_parallel didn't start
... Neha Ojha
12:31 PM Bug #50657 (Resolved): smart query on monitors
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
12:29 PM Bug #52583 (Resolved): partial recovery become whole object recovery after restart osd
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
12:23 PM Backport #52450 (Resolved): pacific: smart query on monitors
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44164
m...
Loïc Dachary
12:22 PM Backport #52451 (Resolved): octopus: smart query on monitors
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44177
m...
Loïc Dachary
12:20 PM Backport #51149 (Resolved): octopus: When read failed, ret can not take as data len, in FillInVer...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44174
m...
Loïc Dachary
12:20 PM Backport #51171 (Resolved): octopus: regression in ceph daemonperf command output, osd columns ar...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44176
m...
Loïc Dachary
12:20 PM Backport #52710 (Resolved): octopus: partial recovery become whole object recovery after restart osd
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44165
m...
Loïc Dachary
12:20 PM Backport #53389 (Resolved): octopus: pg-temp entries are not cleared for PGs that no longer exist
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/44097
m...
Loïc Dachary
08:37 AM Bug #53600 (Rejected): Crash in MOSDPGLog::encode_payload
3 OSDs crashed on the gibba cluster. All the OSDs were part of the gibba045 node.
*Observations:*
- osd.15 and os...
Sridhar Seshasayee
01:22 AM Bug #53584: FAILED ceph_assert(pop.data.length() == sinfo.aligned_logical_offset_to_chunk_offset(...
Neha Ojha wrote:
> ..., it seems like you have "enough copies available" to remove the problematic OSD but we won't ...
玮文 胡

12/13/2021

10:56 PM Bug #52416 (Fix Under Review): devices: mon devices appear empty when scraping SMART metrics
Neha Ojha
10:48 PM Bug #53575 (Rejected): Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
We could suppress this, but since it is not coming from the Ceph code, we are rejecting it. Neha Ojha
10:41 PM Bug #53584 (Need More Info): FAILED ceph_assert(pop.data.length() == sinfo.aligned_logical_offset...
Can you provide OSD logs for the PG that is crashing (from all the shards)? From the error logs, it seems like you ha... Neha Ojha
10:08 AM Bug #53593: RBD cloned image is slow in 4k write with "waiting for rw locks"
[Observed Poor Performance]
On an RBD image, we found that the 4k write IOPS is much lower than expected.
I understood th...
Cuicui Zhao
10:05 AM Bug #53593 (Pending Backport): RBD cloned image is slow in 4k write with "waiting for rw locks"
[Observed Poor Performance]
On an RBD image, we found that the 4k write IOPS is much lower than expected.
I understoo...
Cuicui Zhao

12/12/2021

01:39 PM Bug #53586 (New): rocksdb: build error with rocksdb-6.25.x
Here we go again, same bug as in #52415; affects all attempts to build ceph-16.2.7 against rocksdb-6.25-*
Cheers,
...
chris denice
08:49 AM Bug #53584 (Need More Info): FAILED ceph_assert(pop.data.length() == sinfo.aligned_logical_offset...
... 玮文 胡

12/11/2021

04:15 PM Backport #51149: octopus: When read failed, ret can not take as data len, in FillInVerifyExtent
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44174
merged
Yuri Weinstein

12/10/2021

11:46 PM Backport #51171: octopus: regression in ceph daemonperf command output, osd columns aren't visibl...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44176
merged
Yuri Weinstein
11:43 PM Backport #52710: octopus: partial recovery become whole object recovery after restart osd
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44165
merged
Yuri Weinstein
11:43 PM Backport #53389: octopus: pg-temp entries are not cleared for PGs that no longer exist
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44097
merged
Yuri Weinstein
09:16 PM Bug #53516 (Fix Under Review): Disable health warning when autoscaler is on
Neha Ojha
06:03 PM Bug #52621: cephx: verify_authorizer could not decrypt ticket info: error: bad magic in decode_de...
... Neha Ojha

12/09/2021

11:06 PM Bug #52136: Valgrind reports memory "Leak_DefinitelyLost" errors.
/a/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi/6553724/ ----> osd.1.log.gz Laura Flores
09:38 PM Bug #53575 (Resolved): Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
Found in /a/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi/6553724
The fol...
Laura Flores
04:32 PM Backport #53549 (In Progress): nautilus: [RFE] Provide warning when the 'require-osd-release' fla...
Sridhar Seshasayee
01:43 PM Backport #53550 (In Progress): octopus: [RFE] Provide warning when the 'require-osd-release' flag...
Sridhar Seshasayee
12:53 PM Backport #53551 (In Progress): pacific: [RFE] Provide warning when the 'require-osd-release' flag...
Sridhar Seshasayee

12/08/2021

09:15 PM Backport #53551 (Resolved): pacific: [RFE] Provide warning when the 'require-osd-release' flag do...
https://github.com/ceph/ceph/pull/44259 Backport Bot
09:15 PM Backport #53550 (Resolved): octopus: [RFE] Provide warning when the 'require-osd-release' flag do...
https://github.com/ceph/ceph/pull/44260 Backport Bot
09:15 PM Backport #53549 (Rejected): nautilus: [RFE] Provide warning when the 'require-osd-release' flag d...
https://github.com/ceph/ceph/pull/44263 Backport Bot
09:13 PM Feature #51984 (Pending Backport): [RFE] Provide warning when the 'require-osd-release' flag does...
Neha Ojha
07:08 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
/a/yuriw-2021-12-07_16:04:59-rados-wip-yuri5-testing-2021-12-06-1619-distro-default-smithi/6551120
pg map right be...
Neha Ojha
06:49 PM Bug #53544 (New): src/test/osd/RadosModel.h: ceph_abort_msg("racing read got wrong version") in t...
... Neha Ojha
03:30 PM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
/a/yuriw-2021-12-07_16:02:55-rados-wip-yuri11-testing-2021-12-06-1619-distro-default-smithi/6550873 Sridhar Seshasayee
12:15 PM Backport #53535 (Resolved): pacific: mon: mgrstatmonitor spams mgr with service_map
https://github.com/ceph/ceph/pull/44721 Backport Bot
12:15 PM Backport #53534 (Resolved): octopus: mon: mgrstatmonitor spams mgr with service_map
https://github.com/ceph/ceph/pull/44722 Backport Bot
12:10 PM Bug #53479 (Pending Backport): mon: mgrstatmonitor spams mgr with service_map
Sage Weil

12/07/2021

09:27 PM Bug #53516 (Resolved): Disable health warning when autoscaler is on
the command:
ceph health detail
displays a warning when a pool has many more objects per pg than other pools. Thi...
Christopher Hoffman
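Until the fix lands, the warning described above can be inspected and temporarily silenced with the health-mute facility. This is a sketch against a live cluster; the MANY_OBJECTS_PER_PG health code is an assumption here — copy the exact code from your own `ceph health detail` output:

```shell
# Show the full warning text (per the report, a pool with many more
# objects per PG than its peers trips this even while the autoscaler is on):
ceph health detail

# Mute the check for a limited time instead of acting on it; the code
# name below is an assumption -- use the one your cluster reports:
ceph health mute MANY_OBJECTS_PER_PG 1w
```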

12/06/2021

10:05 PM Backport #53507 (Duplicate): pacific: ceph -s mon quorum age negative number
Backport Bot
10:03 PM Bug #53306 (Pending Backport): ceph -s mon quorum age negative number
Needs to be included in https://github.com/ceph/ceph/pull/43698 Neha Ojha
08:42 PM Backport #52450: pacific: smart query on monitors
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44164
merged
Yuri Weinstein
06:13 PM Bug #53506 (Fix Under Review): mon: frequent cpu_tp had timed out messages
Sage Weil
06:06 PM Bug #53506 (Closed): mon: frequent cpu_tp had timed out messages
... Sage Weil
11:06 AM Bug #52416: devices: mon devices appear empty when scraping SMART metrics
If `ceph-mon` runs as a systemd unit, check if `PrivateDevices=yes` in `/lib/systemd/system/ceph-mon@.service`; if so... Benoît Knecht
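The check described above can be scripted; the drop-in below is a hypothetical workaround (relaxing systemd sandboxing has security implications), not an upstream fix:

```shell
# Does the mon unit hide the host's /dev from the daemon?
grep -n 'PrivateDevices' /lib/systemd/system/ceph-mon@.service

# If it prints PrivateDevices=yes, a drop-in override can relax it:
sudo mkdir -p /etc/systemd/system/ceph-mon@.service.d
printf '[Service]\nPrivateDevices=no\n' | \
    sudo tee /etc/systemd/system/ceph-mon@.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ceph-mon.target
```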
10:30 AM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
Ist Gab wrote:
> Igor Fedotov wrote:
> > …
>
> Igor, do you think if we put a super fast 2-4TB write optimized n...
Igor Fedotov
09:14 AM Bug #52189: crash in AsyncConnection::maybe_start_delay_thread()
Neha Ojha wrote:
> We'll need more information to debug a crash like this.
@Neha, we observed another one of the...
Christian Rohmann
08:49 AM Bug #51307: LibRadosWatchNotify.Watch2Delete fails
/a/yuriw-2021-12-03_15:27:18-rados-wip-yuri11-testing-2021-12-02-1451-distro-default-smithi/6542889... Sridhar Seshasayee
08:25 AM Bug #53500: rte_eal_init fail will waiting forever
r
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /...
chunsong feng
08:20 AM Bug #53500 (New): rte_eal_init fail will waiting forever
The rte_eal_init returns a failure message and does not wake up the waiting msgr-worker thread. As a result, the wait... chunsong feng

12/03/2021

09:02 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
Igor Fedotov wrote:
> …
Igor, do you think if we put a super fast 2-4TB write optimized nvme in front of each 15....
Ist Gab
01:16 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
Ist Gab wrote:
> Igor Fedotov wrote:
>
> > Right - PG removal/moving are the primary cause of bulk data removals....
Igor Fedotov
12:43 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
Igor Fedotov wrote:
> Right - PG removal/moving are the primary cause of bulk data removals. We're working on impr...
Ist Gab
12:39 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
Igor Fedotov wrote:
> So if compaction provides some relief (at least temporarily) - I would suggest running periodi...
Ist Gab
12:31 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
Ist Gab wrote:
> Most likely this is related to this pg delete/movement things because after the pg increase the c...
Igor Fedotov
12:12 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
Igor Fedotov wrote:
> In my opinion this issue is caused by a well-known problem with RocksDB performance degradatio...
Ist Gab
11:12 AM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
In my opinion this issue is caused by a well-known problem with RocksDB performance degradation after bulk data remov... Igor Fedotov
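The periodic compaction suggested in this thread can be triggered from the CLI. A sketch against a live cluster; the OSD id is a placeholder, and compaction briefly adds I/O load:

```shell
# Compact one OSD's RocksDB while it is running:
ceph tell osd.12 compact

# Or sweep the whole cluster (doing a few OSDs at a time is gentler):
ceph tell osd.* compact
```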
05:05 PM Backport #53486 (In Progress): pacific: LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
Neha Ojha
01:19 PM Backport #53486: pacific: LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
https://github.com/ceph/ceph/pull/44202 Myoungwon Oh
12:25 PM Backport #53486 (Resolved): pacific: LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
https://github.com/ceph/ceph/pull/44202 Backport Bot
12:20 PM Bug #52872 (Pending Backport): LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
Myoungwon Oh
12:20 PM Bug #53485 (Fix Under Review): monstore: logm entries are not garbage collected
We had to run a Ceph cluster for a while with a damaged CephFS that has since been deleted. We suspect this was the culp... Daniel Poelzleithner
01:56 AM Bug #53481 (New): rte_exit can't exit when call it in dpdk thread

(gdb) info thr
Id Target Id Frame
* 1 Thread 0xfffc1ba26100 (LW...
chunsong feng

12/02/2021

11:36 PM Backport #53480 (Resolved): pacific: Segmentation fault under Pacific 16.2.1 when using a custom ...
https://github.com/ceph/ceph/pull/44897 Backport Bot
11:33 PM Bug #50659 (Pending Backport): Segmentation fault under Pacific 16.2.1 when using a custom crush ...
Neha Ojha
11:31 PM Bug #52872: LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
Myoungwon Oh: should we backport this? please update the status accordingly. Neha Ojha
11:14 PM Bug #53479 (Fix Under Review): mon: mgrstatmonitor spams mgr with service_map
Sage Weil
10:46 PM Bug #53479 (Pending Backport): mon: mgrstatmonitor spams mgr with service_map
... Sage Weil
08:39 PM Bug #53138: cluster [WRN] Health check failed: Degraded data redundancy: 3/1164 objects degrade...
@Neha I am seeing these failures more than usual; maybe we have a performance regression. If not, can we inc... Deepika Upadhyay
08:34 PM Backport #50274 (In Progress): pacific: FAILED ceph_assert(attrs || !recovery_state.get_pg_log()....
Neha Ojha
08:20 PM Bug #51652: heartbeat timeouts on filestore OSDs while deleting objects in upgrade:pacific-p2p-pa...
/a/yuriw-2021-11-28_15:43:54-upgrade:pacific-p2p-pacific-16.2.7_RC1-distro-default-smithi/6531998 Neha Ojha
02:26 PM Support #51609: OSD refuses to start (OOMK) due to pg split
Tor Martin Ølberg wrote:
> Tor Martin Ølberg wrote:
> > After an upgrade to 15.2.13 from 15.2.4 my small home lab c...
Igor Dell
07:28 AM Bug #50192: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soi...
https://github.com/ceph/ceph/pull/44181 Myoungwon Oh

12/01/2021

08:57 PM Bug #53454 (New): nautilus: MInfoRec in Started/ToDelete/WaitDeleteReseved causes state machine c...
... Neha Ojha
08:24 PM Backport #52451 (In Progress): octopus: smart query on monitors
Cory Snyder
08:14 PM Backport #51171 (In Progress): octopus: regression in ceph daemonperf command output, osd columns...
Cory Snyder
08:14 PM Backport #51172 (In Progress): pacific: regression in ceph daemonperf command output, osd columns...
Cory Snyder
08:12 PM Backport #51149 (In Progress): octopus: When read failed, ret can not take as data len, in FillIn...
Cory Snyder
08:12 PM Backport #51150 (In Progress): pacific: When read failed, ret can not take as data len, in FillIn...
Cory Snyder
07:38 PM Backport #52710 (In Progress): octopus: partial recovery become whole object recovery after resta...
Cory Snyder
07:05 PM Backport #52450 (In Progress): pacific: smart query on monitors
Cory Snyder
06:21 PM Bug #52261: OSD takes all memory and crashes, after pg_num increase
Aldo Briessmann wrote:
> Hi, same issue here on a cluster with ceph 16.2.4-r2 on Gentoo. Moving the cluster with the...
Neha Ojha
06:16 PM Bug #52261: OSD takes all memory and crashes, after pg_num increase
Hi, same issue here on a cluster with ceph 16.2.4-r2 on Gentoo. Moving the cluster with the in-progress PG split to 1... Aldo Briessmann
02:30 AM Bug #50192: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soi...
Needs a pacific backport, showed up in pacific... Neha Ojha

11/30/2021

03:45 AM Support #53432 (Resolved): How to use and optimize ceph dpdk
Write a Ceph DPDK enabling guide and place it in doc/dev. The document should cover the following:
1. Compilati...
chunsong feng

11/29/2021

11:19 AM Bug #53237 (Resolved): mon: stretch mode blocks kernel clients from connecting
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
11:19 AM Bug #53258 (Resolved): mon: should always display disallowed leaders when set
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Loïc Dachary
11:17 AM Backport #53259 (Resolved): pacific: mon: should always display disallowed leaders when set
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43972
m...
Loïc Dachary
11:17 AM Backport #53239 (Resolved): pacific: mon: stretch mode blocks kernel clients from connecting
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43971
m...
Loïc Dachary

11/26/2021

10:54 AM Bug #52867 (New): pick_address.cc prints: unable to find any IPv4 address in networks 'fd00:fd00:...
moving over to rados Sebastian Wagner

11/24/2021

05:29 PM Bug #53308: pg-temp entries are not cleared for PGs that no longer exist
That makes sense to me, thanks Neha! Cory Snyder
05:15 PM Bug #53308 (Pending Backport): pg-temp entries are not cleared for PGs that no longer exist
Cory, I am marking this for backport to octopus and pacific; does that make sense to you? Neha Ojha
05:29 PM Backport #53389 (In Progress): octopus: pg-temp entries are not cleared for PGs that no longer exist
Cory Snyder
05:20 PM Backport #53389 (Resolved): octopus: pg-temp entries are not cleared for PGs that no longer exist
https://github.com/ceph/ceph/pull/44097 Backport Bot
05:29 PM Backport #53388 (In Progress): pacific: pg-temp entries are not cleared for PGs that no longer exist
Cory Snyder
05:20 PM Backport #53388 (Resolved): pacific: pg-temp entries are not cleared for PGs that no longer exist
https://github.com/ceph/ceph/pull/44096 Backport Bot
03:50 PM Feature #51984 (Fix Under Review): [RFE] Provide warning when the 'require-osd-release' flag does...
Sridhar Seshasayee

11/23/2021

01:53 PM Bug #44286: Cache tiering shows unfound objects after OSD reboots
Update: Also happens with 16.2.5 :-( Jan-Philipp Litza
01:16 PM Bug #52948: osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
New instance seen in below pacific run:
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-20_20:20:29-fs-wip-yuri6...
Kotresh Hiremath Ravishankar
10:54 AM Bug #51945: qa/workunits/mon/caps.sh: Error: Expected return 13, got 0
Seems to be the same problem in:
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-20_18:00:22-rados-wip-yuri6-testi...
Ronen Friedman
07:40 AM Bug #39150: mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
/a/yuriw-2021-11-20_18:01:41-rados-wip-yuri8-testing-2021-11-20-0807-distro-basic-smithi/6516396 Aishwarya Mathuria

11/22/2021

08:29 PM Feature #21579 (Resolved): [RFE] Stop OSD's removal if the OSD's are part of inactive PGs
Vikhyat Umrao
07:11 PM Feature #51984: [RFE] Provide warning when the 'require-osd-release' flag does not match current ...
I am providing the history of PRs and commits that resulted in
the loss/removal of the checks for 'require-osd-relea...
Sridhar Seshasayee
06:45 PM Bug #53306 (Fix Under Review): ceph -s mon quorum age negative number
Sage Weil

11/20/2021

01:41 AM Bug #53349 (New): stat_sum.num_bytes of pool is incorrect when randomly writing small IOs to the ...
In a test, I found that when random writes with an IO size of 512B are performed on the RBD, the pool's stat_sum.num_... mingpo li
12:06 AM Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
/a/ksirivad-2021-11-19_19:14:07-rados-wip-autoscale-profile-scale-up-default-distro-basic-smithi/6514251 Neha Ojha

11/19/2021

06:23 PM Bug #53342 (New): Exiting scrub checking -- not all pgs scrubbed
... Neha Ojha
04:31 PM Backport #53340 (New): pacific: osd/scrub: OSD crashes at PG removal
Backport Bot
04:30 PM Backport #53339 (Resolved): pacific: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<cons...
https://github.com/ceph/ceph/pull/46767 Backport Bot
04:30 PM Backport #53338 (Resolved): pacific: osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(...
Backport Bot
04:29 PM Bug #51843 (Pending Backport): osd/scrub: OSD crashes at PG removal
Neha Ojha
04:28 PM Bug #51942 (Pending Backport): src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<const NotA...
Neha Ojha
04:27 PM Bug #52012 (Pending Backport): osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_...
Neha Ojha
03:46 AM Bug #53330 (New): ceph client request connection with an old invalid key.
We have a production ceph cluster with 3 mons and 516 osds.
Ceph version: 14.2.8
CPU: Intel(R) Xeon(R) Gold 5218
...
wencong wan
01:20 AM Bug #53329 (Duplicate): Set osd_fast_shutdown_notify_mon=true by default
Neha Ojha
01:18 AM Bug #53328 (Fix Under Review): osd_fast_shutdown_notify_mon option should be true by default
Neha Ojha

11/18/2021

11:10 PM Bug #53329 (Duplicate): Set osd_fast_shutdown_notify_mon=true by default
This option was introduced in https://github.com/ceph/ceph/pull/38909, but was set false by default. There is a lot o... Neha Ojha
09:30 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
Ist Gab wrote:
> Neha Ojha wrote:
> > Set osd_delete_sleep to 2 secs and go higher if this does not help. Setting o...
Neha Ojha
09:24 PM Bug #53328: osd_fast_shutdown_notify_mon option should be true by default
Pull request ID: 44016 Satoru Takeuchi
09:14 PM Bug #53328 (Duplicate): osd_fast_shutdown_notify_mon option should be true by default
The osd_fast_shutdown_notify_mon option is false by default, so users suffer
from error log floods, slow ops, and the lon...
Satoru Takeuchi
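Until the default changes, the option named in the report can be flipped cluster-wide. A sketch against a live cluster:

```shell
# Enable the notification so the mon hears about a fast shutdown directly
# instead of waiting out the heartbeat grace period:
ceph config set osd osd_fast_shutdown_notify_mon true

# Verify what a given OSD will actually use:
ceph config get osd.0 osd_fast_shutdown_notify_mon
```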
09:22 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
Tobias Urdin wrote:
> After upgrading osd.107 to 15.2.15 and waiting 2 hours for it to recover 3,000 objects in a si...
Neha Ojha
09:11 PM Bug #53327 (Resolved): osd: osd_fast_shutdown_notify_mon not quite right and enable osd_fast_shut...
- it should send MOSDMarkMeDead not MarkMeDown
- we must confirm that we set a flag (preparing to stop?) that makes ...
Sage Weil
08:57 PM Bug #53326 (Fix Under Review): pgs wait for read lease after osd start
Sage Weil
08:28 PM Bug #53326 (Resolved): pgs wait for read lease after osd start
- pg is healthy
- primary osd stops
- wait for things to settle
- restart primary
- pg goes into WAIT state
Th...
Sage Weil
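The reproduction steps above can be sketched as a shell sequence (hypothetical OSD id; run on a test cluster only):

```shell
ceph pg ls-by-osd osd.3 | head          # pick a healthy PG whose primary is osd.3
sudo systemctl stop ceph-osd@3          # stop the primary osd
sleep 60                                # wait for things to settle
sudo systemctl start ceph-osd@3         # restart the primary
ceph pg dump pgs_brief | grep -i wait   # PGs linger waiting for the read lease
```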
08:08 PM Bug #51942: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<const NotActive*>())
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{defa... Neha Ojha
05:58 PM Bug #48298: hitting mon_max_pg_per_osd right after creating OSD, then decreases slowly
still encountering this on ceph octopus 15.2.15 :(
please add the HEALTH_ERROR when the limit is hit, then one at least...
Jonas Jelten
12:51 PM Bug #53316 (New): qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
The warning is seen in following teuthology run
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-17_19:02:43-fs-w...
Kotresh Hiremath Ravishankar
03:05 AM Feature #52424 (Fix Under Review): [RFE] Limit slow request details to mgr log
Prashant D

11/17/2021

06:14 PM Bug #53308 (Resolved): pg-temp entries are not cleared for PGs that no longer exist
When scaling down pg_num while it was in the process of scaling up, we consistently end up with stuck pg-temp entries... Cory Snyder
04:59 PM Bug #53306 (Resolved): ceph -s mon quorum age negative number
... Sage Weil
06:24 AM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
Seen in this pacific run as well.
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testi...
Kotresh Hiremath Ravishankar

11/16/2021

10:15 PM Bug #50659 (Fix Under Review): Segmentation fault under Pacific 16.2.1 when using a custom crush ...
Neha Ojha
03:12 PM Bug #50659: Segmentation fault under Pacific 16.2.1 when using a custom crush location hook
Thank you for this fix. It is very much appreciated. Andrew Davidoff
08:25 PM Bug #53295 (New): Leak_DefinitelyLost PrimaryLogPG::do_proxy_chunked_read()
... Neha Ojha
08:20 PM Bug #53294 (Pending Backport): rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuri...
... Neha Ojha
07:14 PM Bug #52867: pick_address.cc prints: unable to find any IPv4 address in networks 'fd00:fd00:fd00:3...
Kefu Chai wrote:
> @John,
>
> per the logging message pasted at http://ix.io/3B1y
>
>
> [...]
>
> it seem...
John Fulton
06:27 PM Backport #53259 (In Progress): pacific: mon: should always display disallowed leaders when set
Greg Farnum
06:26 PM Bug #53258 (Pending Backport): mon: should always display disallowed leaders when set
Greg Farnum
06:25 PM Bug #53237 (Pending Backport): mon: stretch mode blocks kernel clients from connecting
Greg Farnum
06:24 PM Backport #53239 (In Progress): pacific: mon: stretch mode blocks kernel clients from connecting
Greg Farnum
07:21 AM Backport #52936 (Resolved): pacific: Primary OSD crash caused corrupted object and further crashe...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43544
m...
Loïc Dachary
07:21 AM Backport #52868: stretch mode: allow users to change the tiebreaker monitor
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43457
m...
Loïc Dachary
12:32 AM Bug #53240 (Fix Under Review): full-object read crc is mismatch, because truncate modify oi.size ...
Neha Ojha

11/15/2021

03:27 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
After upgrading osd.107 to 15.2.15 and waiting 2 hours for it to recover 3,000 objects in a single PG it crashed agai... Tobias Urdin
03:19 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
... Tobias Urdin
01:15 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
This is still an issue and it repeatedly hits this during recovery when upgrading the cluster where some (already upg... Tobias Urdin
02:18 AM Bug #53219: LibRadosTwoPoolsPP.ManifestRollbackRefcount failure
Calculating the reference count on a manifest snapshotted object requires correct refcount information. So, the current unittes... Myoungwon Oh
 
