Activity
From 11/10/2021 to 12/09/2021
12/09/2021
- 11:06 PM Bug #52136: Valgrind reports memory "Leak_DefinitelyLost" errors.
- /a/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi/6553724/ ----> osd.1.log.gz
- 09:38 PM Bug #53575 (Resolved): Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- Found in /a/yuriw-2021-12-09_00:18:57-rados-wip-yuri-testing-2021-12-08-1336-distro-default-smithi/6553724
The fol...
- 04:32 PM Backport #53549 (In Progress): nautilus: [RFE] Provide warning when the 'require-osd-release' fla...
- 01:43 PM Backport #53550 (In Progress): octopus: [RFE] Provide warning when the 'require-osd-release' flag...
- 12:53 PM Backport #53551 (In Progress): pacific: [RFE] Provide warning when the 'require-osd-release' flag...
12/08/2021
- 09:15 PM Backport #53551 (Resolved): pacific: [RFE] Provide warning when the 'require-osd-release' flag do...
- https://github.com/ceph/ceph/pull/44259
- 09:15 PM Backport #53550 (Resolved): octopus: [RFE] Provide warning when the 'require-osd-release' flag do...
- https://github.com/ceph/ceph/pull/44260
- 09:15 PM Backport #53549 (Rejected): nautilus: [RFE] Provide warning when the 'require-osd-release' flag d...
- https://github.com/ceph/ceph/pull/44263
- 09:13 PM Feature #51984 (Pending Backport): [RFE] Provide warning when the 'require-osd-release' flag does...
- 07:08 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
- /a/yuriw-2021-12-07_16:04:59-rados-wip-yuri5-testing-2021-12-06-1619-distro-default-smithi/6551120
pg map right be...
- 06:49 PM Bug #53544 (New): src/test/osd/RadosModel.h: ceph_abort_msg("racing read got wrong version") in t...
- ...
- 03:30 PM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
- /a/yuriw-2021-12-07_16:02:55-rados-wip-yuri11-testing-2021-12-06-1619-distro-default-smithi/6550873
- 12:15 PM Backport #53535 (Resolved): pacific: mon: mgrstatmonitor spams mgr with service_map
- https://github.com/ceph/ceph/pull/44721
- 12:15 PM Backport #53534 (Resolved): octopus: mon: mgrstatmonitor spams mgr with service_map
- https://github.com/ceph/ceph/pull/44722
- 12:10 PM Bug #53479 (Pending Backport): mon: mgrstatmonitor spams mgr with service_map
12/07/2021
- 09:27 PM Bug #53516 (Resolved): Disable health warning when autoscaler is on
- the command:
ceph health detail
displays a warning when a pool has many more objects per pg than other pools. Thi...
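For context on the entry above, a minimal sketch of how this warning can be inspected and relaxed today; the MANY_OBJECTS_PER_PG health check and the mon_pg_warn_max_object_skew option already exist, but the threshold value shown is only illustrative, not a recommendation:
    # show the warning (MANY_OBJECTS_PER_PG) with details
    ceph health detail
    # relax the object-per-pg skew check; setting it to 0 disables the warning entirely
    ceph config set global mon_pg_warn_max_object_skew 20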
12/06/2021
- 10:05 PM Backport #53507 (Duplicate): pacific: ceph -s mon quorum age negative number
- 10:03 PM Bug #53306 (Pending Backport): ceph -s mon quorum age negative number
- Needs to be included in https://github.com/ceph/ceph/pull/43698
- 08:42 PM Backport #52450: pacific: smart query on monitors
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44164
merged
- 06:13 PM Bug #53506 (Fix Under Review): mon: frequent cpu_tp had timed out messages
- 06:06 PM Bug #53506 (Closed): mon: frequent cpu_tp had timed out messages
- ...
- 11:06 AM Bug #52416: devices: mon devices appear empty when scraping SMART metrics
- If `ceph-mon` runs as a systemd unit, check if `PrivateDevices=yes` in `/lib/systemd/system/ceph-mon@.service`; if so...
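A possible workaround sketch for the truncated comment above (an assumption, since the original text is cut off): drop the PrivateDevices sandboxing for the mon unit via a systemd override so the daemon can see /dev when scraping SMART metrics, then restart it. Note that this weakens the unit's isolation.
    # create a drop-in override for the ceph-mon unit (mon id is a placeholder)
    sudo mkdir -p /etc/systemd/system/ceph-mon@.service.d
    printf '[Service]\nPrivateDevices=no\n' | sudo tee /etc/systemd/system/ceph-mon@.service.d/override.conf
    sudo systemctl daemon-reload
    sudo systemctl restart ceph-mon@$(hostname -s)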
- 10:30 AM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Ist Gab wrote:
> Igor Fedotov wrote:
> > …
>
> Igor, do you think if we put a super fast 2-4TB write optimized n...
- 09:14 AM Bug #52189: crash in AsyncConnection::maybe_start_delay_thread()
- Neha Ojha wrote:
> We'll need more information to debug a crash like this.
@Neha, we observed another one of the...
- 08:49 AM Bug #51307: LibRadosWatchNotify.Watch2Delete fails
- /a/yuriw-2021-12-03_15:27:18-rados-wip-yuri11-testing-2021-12-02-1451-distro-default-smithi/6542889...
- 08:25 AM Bug #53500: rte_eal_init fail will waiting forever
- r
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /...
- 08:20 AM Bug #53500 (New): rte_eal_init fail will waiting forever
- The rte_eal_init returns a failure message and does not wake up the waiting msgr-worker thread. As a result, the wait...
12/03/2021
- 09:02 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Igor Fedotov wrote:
> …
Igor, do you think if we put a super fast 2-4TB write optimized nvme in front of each 15....
- 01:16 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Ist Gab wrote:
> Igor Fedotov wrote:
>
> > Right - PG removal/moving are the primary cause of bulk data removals....
- 12:43 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Igor Fedotov wrote:
> Right - PG removal/moving are the primary cause of bulk data removals. We're working on impr...
- 12:39 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Igor Fedotov wrote:
> So if compaction provides some relief (at least temporarily) - I would suggest running periodi...
- 12:31 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Ist Gab wrote:
> Most likely this is related to this pg delete/movement things because after the pg increase the c...
- 12:12 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Igor Fedotov wrote:
> In my opinion this issue is caused by a well-known problem with RocksDB performance degradatio...
- 11:12 AM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- In my opinion this issue is caused by a well-known problem with RocksDB performance degradation after bulk data remov...
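A sketch of the periodic compaction workaround discussed in this thread; `ceph tell osd.<id> compact` and offline compaction with ceph-kvstore-tool both exist, and the OSD id 12 is only a placeholder:
    # online: ask a running OSD to compact its RocksDB
    ceph tell osd.12 compact
    # offline alternative: stop the OSD, compact its store, start it again
    systemctl stop ceph-osd@12
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact
    systemctl start ceph-osd@12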
- 05:05 PM Backport #53486 (In Progress): pacific: LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
- 01:19 PM Backport #53486: pacific: LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
- https://github.com/ceph/ceph/pull/44202
- 12:25 PM Backport #53486 (Resolved): pacific: LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
- https://github.com/ceph/ceph/pull/44202
- 12:20 PM Bug #52872 (Pending Backport): LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
- 12:20 PM Bug #53485 (Fix Under Review): monstore: logm entries are not garbage collected
- We had to run a ceph cluster for a while with a damaged cephfs that has since been deleted. We suspect this was the culp...
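A hedged way to observe the symptom described above: watch whether the monitor store keeps growing and whether manual compaction reclaims space. The store path is the default layout and the mon id is a placeholder; if logm entries are never trimmed, the store grows back even after compaction.
    # size of the monitor's on-disk store
    du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db
    # manual compaction of the mon store
    ceph tell mon.$(hostname -s) compact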
- 01:56 AM Bug #53481 (New): rte_exit can't exit when call it in dpdk thread
(gdb) info thr
Id Target Id Frame
* 1 Thread 0xfffc1ba26100 (LW...
12/02/2021
- 11:36 PM Backport #53480 (Resolved): pacific: Segmentation fault under Pacific 16.2.1 when using a custom ...
- https://github.com/ceph/ceph/pull/44897
- 11:33 PM Bug #50659 (Pending Backport): Segmentation fault under Pacific 16.2.1 when using a custom crush ...
- 11:31 PM Bug #52872: LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
- Myoungwon Oh: should we backport this? please update the status accordingly.
- 11:14 PM Bug #53479 (Fix Under Review): mon: mgrstatmonitor spams mgr with service_map
- 10:46 PM Bug #53479 (Pending Backport): mon: mgrstatmonitor spams mgr with service_map
- ...
- 08:39 PM Bug #53138: cluster [WRN] Health check failed: Degraded data redundancy: 3/1164 objects degrade...
- @Neha I am seeing these failures more often than usual; maybe we have a performance regression. If not, can we inc...
- 08:34 PM Backport #50274 (In Progress): pacific: FAILED ceph_assert(attrs || !recovery_state.get_pg_log()....
- 08:20 PM Bug #51652: heartbeat timeouts on filestore OSDs while deleting objects in upgrade:pacific-p2p-pa...
- /a/yuriw-2021-11-28_15:43:54-upgrade:pacific-p2p-pacific-16.2.7_RC1-distro-default-smithi/6531998
- 02:26 PM Support #51609: OSD refuses to start (OOMK) due to pg split
- Tor Martin Ølberg wrote:
> Tor Martin Ølberg wrote:
> > After an upgrade to 15.2.13 from 15.2.4 my small home lab c...
- 07:28 AM Bug #50192: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soi...
- https://github.com/ceph/ceph/pull/44181
12/01/2021
- 08:57 PM Bug #53454 (New): nautilus: MInfoRec in Started/ToDelete/WaitDeleteReseved causes state machine c...
- ...
- 08:24 PM Backport #52451 (In Progress): octopus: smart query on monitors
- 08:14 PM Backport #51171 (In Progress): octopus: regression in ceph daemonperf command output, osd columns...
- 08:14 PM Backport #51172 (In Progress): pacific: regression in ceph daemonperf command output, osd columns...
- 08:12 PM Backport #51149 (In Progress): octopus: When read failed, ret can not take as data len, in FillIn...
- 08:12 PM Backport #51150 (In Progress): pacific: When read failed, ret can not take as data len, in FillIn...
- 07:38 PM Backport #52710 (In Progress): octopus: partial recovery become whole object recovery after resta...
- 07:05 PM Backport #52450 (In Progress): pacific: smart query on monitors
- 06:21 PM Bug #52261: OSD takes all memory and crashes, after pg_num increase
- Aldo Briessmann wrote:
> Hi, same issue here on a cluster with ceph 16.2.4-r2 on Gentoo. Moving the cluster with the...
- 06:16 PM Bug #52261: OSD takes all memory and crashes, after pg_num increase
- Hi, same issue here on a cluster with ceph 16.2.4-r2 on Gentoo. Moving the cluster with the in-progress PG split to 1...
- 02:30 AM Bug #50192: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soi...
- Needs a pacific backport, showed up in pacific...
11/30/2021
- 03:45 AM Support #53432 (Resolved): How to use and optimize ceph dpdk
- Write a Ceph DPDK enabling guide and place it in doc/dev. The document covers the following:
1. Compilati...
11/29/2021
- 11:19 AM Bug #53237 (Resolved): mon: stretch mode blocks kernel clients from connecting
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:19 AM Bug #53258 (Resolved): mon: should always display disallowed leaders when set
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:17 AM Backport #53259 (Resolved): pacific: mon: should always display disallowed leaders when set
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43972
m...
- 11:17 AM Backport #53239 (Resolved): pacific: mon: stretch mode blocks kernel clients from connecting
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43971
m...
11/26/2021
- 10:54 AM Bug #52867 (New): pick_address.cc prints: unable to find any IPv4 address in networks 'fd00:fd00:...
- moving over to rados
11/24/2021
- 05:29 PM Bug #53308: pg-temp entries are not cleared for PGs that no longer exist
- That makes sense to me, thanks Neha!
- 05:15 PM Bug #53308 (Pending Backport): pg-temp entries are not cleared for PGs that no longer exist
- Cory, I am marking this for backport to octopus and pacific, makes sense to you?
- 05:29 PM Backport #53389 (In Progress): octopus: pg-temp entries are not cleared for PGs that no longer exist
- 05:20 PM Backport #53389 (Resolved): octopus: pg-temp entries are not cleared for PGs that no longer exist
- https://github.com/ceph/ceph/pull/44097
- 05:29 PM Backport #53388 (In Progress): pacific: pg-temp entries are not cleared for PGs that no longer exist
- 05:20 PM Backport #53388 (Resolved): pacific: pg-temp entries are not cleared for PGs that no longer exist
- https://github.com/ceph/ceph/pull/44096
- 03:50 PM Feature #51984 (Fix Under Review): [RFE] Provide warning when the 'require-osd-release' flag does...
11/23/2021
- 01:53 PM Bug #44286: Cache tiering shows unfound objects after OSD reboots
- Update: Also happens with 16.2.5 :-(
- 01:16 PM Bug #52948: osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
- New instance seen in the pacific run below:
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-20_20:20:29-fs-wip-yuri6...
- 10:54 AM Bug #51945: qa/workunits/mon/caps.sh: Error: Expected return 13, got 0
- Seems to be the same problem in:
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-20_18:00:22-rados-wip-yuri6-testi...
- 07:40 AM Bug #39150: mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
- /a/yuriw-2021-11-20_18:01:41-rados-wip-yuri8-testing-2021-11-20-0807-distro-basic-smithi/6516396
11/22/2021
- 08:29 PM Feature #21579 (Resolved): [RFE] Stop OSD's removal if the OSD's are part of inactive PGs
- 07:11 PM Feature #51984: [RFE] Provide warning when the 'require-osd-release' flag does not match current ...
- I am providing the history of PRs and commits that resulted in
the loss/removal of the checks for 'require-osd-relea...
- 06:45 PM Bug #53306 (Fix Under Review): ceph -s mon quorum age negative number
11/20/2021
- 01:41 AM Bug #53349 (New): stat_sum.num_bytes of pool is incorrect when randomly writing small IOs to the ...
- In a test, I found that when random writes with an IO size of 512B are performed on the rbd, the pool's stat_sum.num_...
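A sketch of a reproduction along the lines described, assuming an existing RBD image (the pool and image names are placeholders); rbd bench supports these flags:
    # 512-byte random writes against an existing image
    rbd bench --io-type write --io-size 512 --io-pattern rand --io-total 1G rbdpool/testimg
    # compare the reported pool usage with what was actually written
    ceph df detail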
- 12:06 AM Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
- /a/ksirivad-2021-11-19_19:14:07-rados-wip-autoscale-profile-scale-up-default-distro-basic-smithi/6514251
11/19/2021
- 06:23 PM Bug #53342 (New): Exiting scrub checking -- not all pgs scrubbed
- ...
- 04:31 PM Backport #53340 (New): pacific: osd/scrub: OSD crashes at PG removal
- 04:30 PM Backport #53339 (Resolved): pacific: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<cons...
- https://github.com/ceph/ceph/pull/46767
- 04:30 PM Backport #53338 (New): pacific: osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state...
- 04:29 PM Bug #51843 (Pending Backport): osd/scrub: OSD crashes at PG removal
- 04:28 PM Bug #51942 (Pending Backport): src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<const NotA...
- 04:27 PM Bug #52012 (Pending Backport): osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_...
- 03:46 AM Bug #53330 (New): ceph client request connection with an old invalid key.
- We have a production ceph cluster with 3 mons and 516 osds.
Ceph version: 14.2.8
CPU: Intel(R) Xeon(R) Gold 5218
... - 01:20 AM Bug #53329 (Duplicate): Set osd_fast_shutdown_notify_mon=true by default
- 01:18 AM Bug #53328 (Fix Under Review): osd_fast_shutdown_notify_mon option should be true by default
11/18/2021
- 11:10 PM Bug #53329 (Duplicate): Set osd_fast_shutdown_notify_mon=true by default
- This option was introduced in https://github.com/ceph/ceph/pull/38909, but was set false by default. There is a lot o...
- 09:30 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Ist Gab wrote:
> Neha Ojha wrote:
> > Set osd_delete_sleep to 2 secs and go higher if this does not help. Setting o...
- 09:24 PM Bug #53328: osd_fast_shutdown_notify_mon option should be true by default
- Pull request ID: 44016
- 09:14 PM Bug #53328 (Duplicate): osd_fast_shutdown_notify_mon option should be true by default
- The osd_fast_shutdown_notify_mon option is false by default, so users suffer
from error log flood, slow ops, and the lon...
- 09:22 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- Tobias Urdin wrote:
> After upgrading osd.107 to 15.5.15 and waiting 2 hours for it to recover 3,000 objects in a si...
- 09:11 PM Bug #53327 (Resolved): osd: osd_fast_shutdown_notify_mon not quite right and enable osd_fast_shut...
- - it should send MOSDMarkMeDead not MarkMeDown
- we must confirm that we set a flag (preparing to stop?) that makes ...
- 08:57 PM Bug #53326 (Fix Under Review): pgs wait for read lease after osd start
- 08:28 PM Bug #53326 (Resolved): pgs wait for read lease after osd start
- - pg is healthy
- primary osd stops
- wait for things to settle
- restart primary
- pg goes into WAIT state
Th...
- 08:08 PM Bug #51942: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<const NotActive*>())
- rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{defa...
- 05:58 PM Bug #48298: hitting mon_max_pg_per_osd right after creating OSD, then decreases slowly
- still encountering on ceph octopus 15.2.15 :(
please add the HEALTH_ERROR when the limit is hit, then one at least...
- 12:51 PM Bug #53316 (New): qa: (smithi150) slow request osd_op, currently waiting for sub ops warning
- The warning is seen in the following teuthology run:
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-17_19:02:43-fs-w...
- 03:05 AM Feature #52424 (Fix Under Review): [RFE] Limit slow request details to mgr log
11/17/2021
- 06:14 PM Bug #53308 (Resolved): pg-temp entries are not cleared for PGs that no longer exist
- When scaling down pg_num while it was in the process of scaling up, we consistently end up with stuck pg-temp entries...
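A minimal reproduction sketch matching the description above, assuming a pool named testpool (the name and pg_num values are placeholders): scale pg_num up, scale it back down before the split finishes, then look for leftover pg-temp mappings.
    ceph osd pool set testpool pg_num 128
    ceph osd pool set testpool pg_num 32
    # stale pg-temp entries for PGs that no longer exist show up here
    ceph osd dump | grep pg_temp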
- 04:59 PM Bug #53306 (Resolved): ceph -s mon quorum age negative number
- ...
- 06:24 AM Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- Seen in this pacific run as well.
http://pulpito.front.sepia.ceph.com/yuriw-2021-11-12_00:33:28-fs-wip-yuri7-testi...
11/16/2021
- 10:15 PM Bug #50659 (Fix Under Review): Segmentation fault under Pacific 16.2.1 when using a custom crush ...
- 03:12 PM Bug #50659: Segmentation fault under Pacific 16.2.1 when using a custom crush location hook
- Thank you for this fix. It is very much appreciated.
- 08:25 PM Bug #53295 (New): Leak_DefinitelyLost PrimaryLogPG::do_proxy_chunked_read()
- ...
- 08:20 PM Bug #53294 (Pending Backport): rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuri...
- ...
- 07:14 PM Bug #52867: pick_address.cc prints: unable to find any IPv4 address in networks 'fd00:fd00:fd00:3...
- Kefu Chai wrote:
> @John,
>
> per the logging message pasted at http://ix.io/3B1y
>
>
> [...]
>
> it seem...
- 06:27 PM Backport #53259 (In Progress): pacific: mon: should always display disallowed leaders when set
- 06:26 PM Bug #53258 (Pending Backport): mon: should always display disallowed leaders when set
- 06:25 PM Bug #53237 (Pending Backport): mon: stretch mode blocks kernel clients from connecting
- 06:24 PM Backport #53239 (In Progress): pacific: mon: stretch mode blocks kernel clients from connecting
- 07:21 AM Backport #52936 (Resolved): pacific: Primary OSD crash caused corrupted object and further crashe...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43544
m...
- 07:21 AM Backport #52868: stretch mode: allow users to change the tiebreaker monitor
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/43457
m...
- 12:32 AM Bug #53240 (Fix Under Review): full-object read crc is mismatch, because truncate modify oi.size ...
11/15/2021
- 03:27 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- After upgrading osd.107 to 15.5.15 and waiting 2 hours for it to recover 3,000 objects in a single PG it crashed agai...
- 03:19 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- ...
- 01:15 PM Bug #50608: ceph_assert(is_primary()) in PrimaryLogPG::on_local_recover
- This is still an issue and it repeatedly hits this during recovery when upgrading the cluster where some (already upg...
- 02:18 AM Bug #53219: LibRadosTwoPoolsPP.ManifestRollbackRefcount failure
- Calculating reference count on manifest snapshotted object requires correct refcount information. So, current unittes...
11/14/2021
- 09:59 PM Bug #52901: osd/scrub: setting then clearing noscrub may lock a PG in 'scrubbing' state
- A test to detect this specific bug was pushed as PR 43919
- 08:47 AM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Neha Ojha wrote:
> Set osd_delete_sleep to 2 secs and go higher if this does not help. Setting osd_delete_sleep take...
11/13/2021
- 08:01 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Ist Gab wrote:
> Neha Ojha wrote:
> > Can you try to set a higher value of "osd delete sleep" and see if that helps...
- 05:50 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Neha Ojha wrote:
> Can you try to set a higher value of "osd delete sleep" and see if that helps?
Which one speci...
11/12/2021
- 11:11 PM Backport #53259 (Resolved): pacific: mon: should always display disallowed leaders when set
- https://github.com/ceph/ceph/pull/43972
- 11:10 PM Bug #53258 (Resolved): mon: should always display disallowed leaders when set
- I made some usability improvements in https://github.com/ceph/ceph/pull/43373, but accidentally switched things so th...
- 11:08 PM Backport #53238 (Rejected): octopus: mon: stretch mode blocks kernel clients from connecting
- Apparently I sometimes fail at sorting alphanumerically?
- 06:57 PM Bug #51942: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<const NotActive*>())
- Ronen, let's prioritize this.
- 06:56 PM Bug #48909 (Duplicate): clog slow request overwhelm monitors
- 06:51 PM Bug #53138 (Triaged): cluster [WRN] Health check failed: Degraded data redundancy: 3/1164 objec...
- This warning comes up because there are PGs recovering, probably because the test is injecting failures - we can igno...
- 06:46 PM Bug #52969: use "ceph df" command found pool max avail increase when there are degraded objects i...
- minghang zhao wrote:
> My solution is to add a function del_down_out_osd() to PGMap::get_rule_avail() to calculate t...
- 06:43 PM Bug #53142: OSD crash in PG::do_delete_work when increasing PGs
- Can you try to set a higher value of "osd delete sleep" and see if that helps?
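For reference, a sketch of that suggestion using the value mentioned in a follow-up comment (2 seconds); osd_delete_sleep is an existing option and can be changed at runtime:
    # persist the setting for all OSDs
    ceph config set osd osd_delete_sleep 2
    # or inject it into running OSDs without persisting
    ceph tell osd.* injectargs '--osd_delete_sleep 2'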
- 06:29 PM Bug #53190: counter num_read_kb is going down
- This can happen for many such counters in a distributed system like ceph, where these values are not tre...
- 06:26 PM Bug #52901 (Resolved): osd/scrub: setting then clearing noscrub may lock a PG in 'scrubbing' state
- 06:20 PM Bug #52503: cli_generic.sh: slow ops when trying rand write on cache pools
- Deepika Upadhyay wrote:
> /ceph/teuthology-archive/ideepika-2021-11-02_12:33:30-rbd-wip-ssd-cache-testing-distro-bas...
- 06:17 PM Bug #53219: LibRadosTwoPoolsPP.ManifestRollbackRefcount failure
- Myoungwon Oh wrote:
> I think this is the same issue as https://tracker.ceph.com/issues/52872.
> Recovery takes alm...
- 07:28 AM Bug #53219: LibRadosTwoPoolsPP.ManifestRollbackRefcount failure
- I think this is the same issue as https://tracker.ceph.com/issues/52872.
Recovery takes almost 8 minutes even if cur...
- 05:58 PM Bug #53251 (New): compiler warning about deprecated fmt::format_to()
- ...
- 03:49 PM Bug #52948: osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
- http://qa-proxy.ceph.com/teuthology/ideepika-2021-11-12_08:56:59-rbd-wip-deepika-testing-2021-11-12-1203-distro-basic...
- 06:31 AM Bug #53240: full-object read crc is mismatch, because truncate modify oi.size and forget to clear...
- my ceph version is nautilus 14.2.5
- 04:08 AM Bug #53240: full-object read crc is mismatch, because truncate modify oi.size and forget to clear...
- https://github.com/ceph/ceph/pull/43902
- 03:27 AM Bug #53240: full-object read crc is mismatch, because truncate modify oi.size and forget to clear...
- The object oi.size should be 4194304, but it is actually 4063232.
The object data_digest is 0xffffffff, but read crc...
- 02:56 AM Bug #53240 (Fix Under Review): full-object read crc is mismatch, because truncate modify oi.size ...
- I use 100 threads to dd on multiple files under the directory, so the same file can be truncated at any time.
When d...
- 05:53 AM Cleanup #52754: windows warnings
- @Laura, they appear in windows shaman builds; anyone can take a look at the latest windows builds available here http...
11/11/2021
- 09:11 PM Bug #52867: pick_address.cc prints: unable to find any IPv4 address in networks 'fd00:fd00:fd00:3...
- John Fulton wrote:
> As per comment #3 I was on the right path but I should have set an OSD setting, not a mon setti...
- 09:10 PM Bug #52867 (Need More Info): pick_address.cc prints: unable to find any IPv4 address in networks ...
- 08:40 PM Backport #53239 (Resolved): pacific: mon: stretch mode blocks kernel clients from connecting
- https://github.com/ceph/ceph/pull/43971
- 08:40 PM Backport #53238 (Rejected): octopus: mon: stretch mode blocks kernel clients from connecting
- This was reported by Red Hat at https://bugzilla.redhat.com/show_bug.cgi?id=2022190
> [66873.543382] libceph: got ...
- 08:30 PM Bug #53237 (Resolved): mon: stretch mode blocks kernel clients from connecting
- This was reported by Red Hat at https://bugzilla.redhat.com/show_bug.cgi?id=2022190
> [66873.543382] libceph: got ...
- 07:48 PM Cleanup #52754: windows warnings
- Deepika, the link is 404 now. Is there a way that we could preserve the Jenkins output and provide a different link?
- 03:26 PM Bug #52948: osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
- Analysis of logs from JobID: 6443924
osd.3 did not get initialized while the "ceph" teuthology task was running. As...
- 12:32 AM Bug #53219: LibRadosTwoPoolsPP.ManifestRollbackRefcount failure
- I'll take a look
11/10/2021