Activity

From 02/23/2022 to 03/24/2022

03/24/2022

09:06 PM Bug #51945: qa/workunits/mon/caps.sh: Error: Expected return 13, got 0
/a/yuriw-2022-03-11_00:13:58-rados-wip-yuri11-testing-2022-03-10-1443-octopus-distro-default-smithi/6730807... Laura Flores
08:17 PM Backport #54612: quincy: Add snaptrim stats to the existing PG stats.
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45524
merged
Reviewed-by: Neha Ojha <nojha@redhat.com>
Yuri Weinstein
07:13 PM Bug #53663: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools
I think this can be marked as a duplicate of 54592. Vikhyat Umrao
07:12 PM Bug #53663: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools
Neha Ojha wrote:
> Christian Rohmann wrote:
> > This issue is still present, and also with 15.2.16.
> >
> > I ju...
Vikhyat Umrao
06:04 PM Bug #53663: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools
Christian Rohmann wrote:
> This issue is still present, and also with 15.2.16.
>
> I just observed that after a s...
Neha Ojha
06:24 PM Bug #54296: OSDs using too much memory
Ruben Kerkhof wrote:
> > Excellent idea! I'll ask the customer and get back with the results.
>
> We restarted th...
Dan van der Ster
10:48 AM Bug #54296: OSDs using too much memory

> Excellent idea! I'll ask the customer and get back with the results.
We restarted the OSDs on a single node wi...
Ruben Kerkhof
05:19 PM Bug #52026 (In Progress): osd: pgs went back into snaptrim state after osd restart
Josh Durgin
01:00 PM Bug #52026: osd: pgs went back into snaptrim state after osd restart
Good news: the root cause was identifiable from logs in a test environment with snapshot-based mirroring enabled.
...
Josh Durgin
05:16 PM Backport #50893: pacific: osd/PrimaryLogPG.cc: FAILED ceph_assert(attrs || !recovery_state.get_pg...
/a/yuriw-2022-03-23_14:51:02-rados-wip-yuri4-testing-2022-03-21-1648-pacific-distro-default-smithi/6755998 Aishwarya Mathuria
04:43 PM Bug #50042: rados/test.sh: api_watch_notify failures
/a/yuriw-2022-03-24_14:35:45-rados-wip-yuri7-testing-2022-03-23-1332-quincy-distro-default-smithi/6757986... Laura Flores
03:28 PM Backport #55047 (In Progress): quincy: rados/test.sh hangs while running LibRadosTwoPoolsPP.Manif...
Laura Flores
03:20 PM Backport #55047 (Resolved): quincy: rados/test.sh hangs while running LibRadosTwoPoolsPP.Manifest...
https://github.com/ceph/ceph/pull/45624 Backport Bot
03:16 PM Bug #53855 (Pending Backport): rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlush...
Laura Flores
03:16 PM Bug #53855: rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount
/a/yuriw-2022-03-24_01:58:56-rados-wip-yuri7-testing-2022-03-23-1332-quincy-distro-default-smithi/6756757 Laura Flores
09:44 AM Backport #55021 (In Progress): quincy: LibRadosWatchNotify.Watch2Delete fails
Nitzan Mordechai
04:27 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
Reproduces on master. nikhil kshirsagar
02:02 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
Thanks for the comment Radoslaw!
This reproduces on master as well. Unfortunately I do not seem to have permission...
nikhil kshirsagar
01:50 AM Bug #53924: EC PG stuck recovery_unfound+undersized+degraded+remapped+peered
... jianwei zhang
01:44 AM Bug #53924: EC PG stuck recovery_unfound+undersized+degraded+remapped+peered
ceph v15.2.13 allow_ec_overwrite jianwei zhang
01:44 AM Bug #53924: EC PG stuck recovery_unfound+undersized+degraded+remapped+peered
ceph pg 3.356 mark_unfound_lost revert
this command is not supported for EC,
so EC must guarantee its own consist...
jianwei zhang
01:42 AM Bug #53924: EC PG stuck recovery_unfound+undersized+degraded+remapped+peered
Given the current abnormal case, is it reasonable to suspect that EC cannot guarantee its own consistency? jianwei zhang
01:39 AM Bug #53924: EC PG stuck recovery_unfound+undersized+degraded+remapped+peered
Neha Ojha wrote:
> jianwei zhang wrote:
> > Neha Ojha wrote:
> > > jianwei zhang wrote:
> > > > 1711'7107 : s0/1/...
jianwei zhang

03/23/2022

06:43 PM Bug #54556 (Won't Fix): Pools are wrongly reported to have non-power-of-two pg_num after update
Radoslaw Zarzynski
06:42 PM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
Could you please send a PR for this? Radoslaw Zarzynski
06:41 PM Bug #54558 (In Progress): malformed json in a Ceph RESTful API call can stop all ceph-mon services
Radoslaw Zarzynski
06:39 PM Bug #54994 (Fix Under Review): osd: add scrub duration for scrubs after recovery
Neha Ojha
06:27 PM Bug #55001: rados/test.sh: Early exit right after LibRados global tests complete
I think we can't be sure the timeout is because of @api_tier_pp@. To be conclusive we need to check who had the PID @... Radoslaw Zarzynski
06:21 PM Bug #53924: EC PG stuck recovery_unfound+undersized+degraded+remapped+peered
jianwei zhang wrote:
> Neha Ojha wrote:
> > jianwei zhang wrote:
> > > 1711'7107 : s0/1/2/3/4/5 all have it, so the write can proceed on all of them
> > > 17...
Neha Ojha
06:10 PM Bug #52026: osd: pgs went back into snaptrim state after osd restart
Could you please collect the logs exactly as described in comment #14 (https://tracker.ceph.com/issues/52026#no... Radoslaw Zarzynski
06:00 PM Bug #46847: Loss of placement information on OSD reboot
Please note that min_size is set to k (6), so little redundancy is guaranteed.
Anyway, are you able to reproduce...
Radoslaw Zarzynski
05:47 PM Bug #53663: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools
Vikhyat Umrao wrote:
> Christian Rohmann wrote:
> > This issue is still present, and also with 15.2.16.
> >
> > ...
Neha Ojha
03:56 PM Bug #53663: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools
Christian Rohmann wrote:
> This issue is still present, and also with 15.2.16.
>
> I just observed that after a s...
Vikhyat Umrao
01:06 PM Bug #53663: Random scrub errors (omap_digest_mismatch) on pgs of RADOSGW metadata pools
This issue is still present, and also with 15.2.16.
I just observed that after a series of machine reboots due to ...
Christian Rohmann
05:42 PM Bug #49689 (Need More Info): osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mi...
Radoslaw Zarzynski
05:41 PM Bug #50089: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing number of monitors i...
https://github.com/ceph/ceph/pull/44993 was closed Neha Ojha
05:40 PM Bug #50089 (In Progress): mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing number...
Radoslaw Zarzynski
05:34 PM Bug #53923 (Can't reproduce): [Upgrade] mgr FAILED to decode MSG_PGSTATS
Neha Ojha
05:18 PM Bug #54552 (Resolved): ceph windows test hanging quincy backport PRs
Merged Kamoltat (Junior) Sirivadhna
04:16 PM Backport #55021 (Resolved): quincy: LibRadosWatchNotify.Watch2Delete fails
https://github.com/ceph/ceph/pull/45616 Backport Bot
04:15 PM Backport #55019 (In Progress): octopus: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark ...
Vikhyat Umrao
03:55 PM Backport #55019 (Resolved): octopus: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark oma...
https://github.com/ceph/ceph/pull/45593 Backport Bot
04:14 PM Bug #51307 (Pending Backport): LibRadosWatchNotify.Watch2Delete fails
Neha Ojha
04:09 PM Backport #55018 (In Progress): quincy: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark o...
Vikhyat Umrao
03:55 PM Backport #55018 (Resolved): quincy: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap...
https://github.com/ceph/ceph/pull/45592 Backport Bot
04:04 PM Backport #55020 (In Progress): pacific: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark ...
Vikhyat Umrao
03:55 PM Backport #55020 (Resolved): pacific: partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark oma...
https://github.com/ceph/ceph/pull/45591 Backport Bot
03:54 PM Bug #54592 (Pending Backport): partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap dirty
Vikhyat Umrao
03:21 PM Backport #55012 (In Progress): octopus: librados: check latest osdmap on ENOENT in pool_reverse_l...
Ilya Dryomov
03:15 PM Backport #55012 (Resolved): octopus: librados: check latest osdmap on ENOENT in pool_reverse_look...
https://github.com/ceph/ceph/pull/45587 Backport Bot
03:19 PM Backport #55013 (In Progress): pacific: librados: check latest osdmap on ENOENT in pool_reverse_l...
Ilya Dryomov
03:15 PM Backport #55013 (Resolved): pacific: librados: check latest osdmap on ENOENT in pool_reverse_look...
https://github.com/ceph/ceph/pull/45586 Backport Bot
03:11 PM Bug #54593 (Pending Backport): librados: check latest osdmap on ENOENT in pool_reverse_lookup()
Ilya Dryomov
02:41 PM Bug #55009 (Fix Under Review): Scrubbing exits due to error reading object head
Description: rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-ove... Laura Flores

03/22/2022

11:44 PM Backport #54567: pacific: mon/MonCommands.h: target_size_ratio range is incorrect
Kamoltat Sirivadhna wrote:
> https://github.com/ceph/ceph/pull/45397/commits
merged
Yuri Weinstein
11:42 PM Backport #54466: pacific: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely clears the sn...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45323
merged
Yuri Weinstein
11:41 PM Backport #52078: pacific: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45319
merged
Yuri Weinstein
11:38 PM Backport #53644: pacific: Disable health warning when autoscaler is on
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45152
merged
Yuri Weinstein
10:30 PM Bug #55001 (Resolved): rados/test.sh: Early exit right after LibRados global tests complete
This failure was previously tracked in issue #50042, but it has come up enough that it warrants its own Tracker. See ... Laura Flores
10:24 PM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
/a/yuriw-2022-03-19_14:37:23-rados-wip-yuri6-testing-2022-03-18-1104-distro-default-smithi/6746705 Laura Flores
10:21 PM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
/a/yuriw-2022-03-19_14:37:23-rados-wip-yuri6-testing-2022-03-18-1104-distro-default-smithi/6746760 Laura Flores
10:15 PM Bug #51904: test_pool_min_size:AssertionError:wait_for_clean:failed before timeout expired due to...
/a/yuriw-2022-03-19_14:37:23-rados-wip-yuri6-testing-2022-03-18-1104-distro-default-smithi/6746893... Laura Flores
06:24 PM Bug #54994 (Resolved): osd: add scrub duration for scrubs after recovery
Scrub duration is being measured for scheduled scrubs and user-requested scrubs. It should be measured for scrubs hap... Aishwarya Mathuria
04:52 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
I've tested the above patch, and it seems to be working as intended.... nikhil kshirsagar
04:13 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
The earlier patch fails to compile, so I have changed it to the following, which does build.... nikhil kshirsagar

03/21/2022

11:30 PM Bug #53729 (Fix Under Review): ceph-osd takes all memory before oom on boot
Neha Ojha
08:24 PM Bug #54529 (Duplicate): mon/mon-bind.sh: Failure due to cores found
Laura Flores
06:58 PM Bug #54529: mon/mon-bind.sh: Failure due to cores found
Expected output from a successful run:
/a/yuriw-2022-03-18_00:42:20-rados-wip-yuri6-testing-2022-03-17-1547-distro...
Laura Flores
05:01 PM Bug #54529: mon/mon-bind.sh: Failure due to cores found
Happened in quincy:
/a/yuriw-2022-03-19_14:39:53-rados-quincy-distro-default-smithi/6747175
The first occurrence ...
Laura Flores
08:23 PM Bug #50089: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing number of monitors i...
/a/teuthology-2022-01-09_07:01:02-rados-master-distro-default-smithi/6604561
/a/yuriw-2022-03-10_02:41:10-rados-wip-...
Laura Flores
03:44 PM Backport #54569 (Resolved): quincy: mon/MonCommands.h: target_size_ratio range is incorrect
Kamoltat (Junior) Sirivadhna
03:12 PM Bug #54556: Pools are wrongly reported to have non-power-of-two pg_num after update
Well, it turns out (as a non-native English speaker) I confused "power of two" with "divisible by two".
So my pg num...
Martin H.
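The confusion above is worth spelling out: every power of two is divisible by two, but not vice versa. A minimal sketch of the distinction (the helper name and sample pg_num values are illustrative, not Ceph code):

```python
def is_power_of_two(n: int) -> bool:
    """True iff n is a positive power of two (1, 2, 4, 8, ...).

    A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
    """
    return n > 0 and (n & (n - 1)) == 0

# 250 is divisible by two but is NOT a power of two, which is why
# the health warning fires for such a pg_num.
for pg_num in (128, 256, 250, 6):
    print(pg_num, is_power_of_two(pg_num))
```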
12:43 PM Bug #54556: Pools are wrongly reported to have non-power-of-two pg_num after update
@Martin H, can you please attach the monitor log? Nitzan Mordechai
03:10 PM Bug #54517 (Duplicate): scrub/osd-scrub-snaps.sh: TEST FAILED WITH 1 ERRORS
Laura Flores
12:52 PM Feature #54564: Changes to auth_allow_insecure_global_id_reclaim are not in the audit log
Neha - changing component to RADOS. Venky Shankar
05:28 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
... nikhil kshirsagar
04:55 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
Discussed with Brad Hubbard. It seems the caps should be a list. For eg, this json works fine,... nikhil kshirsagar
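The shape difference discussed above can be sketched as follows; the entity name and cap strings are hypothetical placeholders, and only the list-vs-string type of "caps" is the point:

```python
import json

# Malformed: "caps" given as a single string rather than a list.
malformed = {"entity": "client.foo", "caps": "mon allow r"}

# Well-formed: "caps" as a list of strings, which is the shape
# the comment above says the API expects.
well_formed = {"entity": "client.foo", "caps": ["mon", "allow r"]}

for body in (malformed, well_formed):
    print(json.dumps(body), "caps is list:", isinstance(body["caps"], list))
```

A defensive handler would validate this shape (e.g. with isinstance) and reject the request instead of asserting, which is the crux of the reported mon crash.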
03:30 AM Bug #54548: mon hang when run ceph -s command after execute "ceph osd in osd.<x>" command
PR https://github.com/ceph/ceph/pull/41098 has resolved it; the fix was also released in 14.2.22.
I can set "ceph ...
yite gu

03/19/2022

04:48 AM Backport #54612 (In Progress): quincy: Add snaptrim stats to the existing PG stats.
Sridhar Seshasayee
01:30 AM Bug #54962 (New): crash: int CrushWrapper::swap_bucket(ceph::common::CephContext*, int, int): ass...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=03e7a16b7fcb47f7c4cd3fc3...
Telemetry Bot
01:29 AM Bug #54958 (New): crash: PrimaryLogPG::cancel_copy_ops(bool, std::vector<unsigned long, std::allo...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=741586ba640a7fad3ce157d2...
Telemetry Bot
01:29 AM Bug #54953 (New): crash: tc_new()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7f06040865e59b62efdb4c0a...
Telemetry Bot
01:29 AM Bug #54950 (New): crash: tc_new()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4111935eeb4367c568ab82bd...
Telemetry Bot
01:29 AM Bug #54949 (New): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStore::Col...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=58efbfb4bb56b31b884c4c1a...
Telemetry Bot
01:29 AM Bug #54946 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=634b1dfe837d110e60da688a...
Telemetry Bot
01:29 AM Bug #54942 (New): crash: rocksdb::VersionStorageInfo::AddFile(int, rocksdb::FileMetaData*, rocksd...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5008585126e5e071e1af8206...
Telemetry Bot
01:29 AM Bug #54941 (New): crash: virtual bool PrimaryLogPG::should_send_op(pg_shard_t, const hobject_t&):...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=eab0bdb43886dd0f28c269cc...
Telemetry Bot
01:29 AM Bug #54937 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=32a296fd650ca974d5a78f42...
Telemetry Bot
01:29 AM Bug #54932 (New): crash: PeeringState::proc_lease(pg_lease_t const&)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=941c3ef0cf8aaa22e08d1848...
Telemetry Bot
01:29 AM Bug #54931 (New): crash: virtual void MDSMonitor::update_from_paxos(bool*): assert(version > get_...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f682a35739f971a7aebd3ac0...
Telemetry Bot
01:29 AM Bug #54930 (New): crash: virtual void MDSMonitor::update_from_paxos(bool*): assert(version > get_...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7b3d65fe31144a4fab5f8e64...
Telemetry Bot
01:28 AM Bug #54928 (New): crash: tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::Free...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0882dd6ebdd933851d4e8980...
Telemetry Bot
01:28 AM Bug #54924 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=641db79530eeefadf1f68396...
Telemetry Bot
01:28 AM Bug #54923 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c73c9d35117af1b88a917121...
Telemetry Bot
01:28 AM Bug #54922 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d5d8cacb598d621b6334806c...
Telemetry Bot
01:28 AM Bug #54910 (New): crash: virtual bool PrimaryLogPG::should_send_op(pg_shard_t, const hobject_t&):...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d98ac62ec9dcb81a50454bd7...
Telemetry Bot
01:28 AM Bug #54909 (New): crash: virtual bool PrimaryLogPG::should_send_op(pg_shard_t, const hobject_t&):...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f1643748e02927cbf18a539e...
Telemetry Bot
01:28 AM Bug #54907 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c3089ee5db7d70037de368f8...
Telemetry Bot
01:28 AM Bug #54906 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bd6d5369437a6703c6565c8a...
Telemetry Bot
01:28 AM Bug #54905 (New): crash: bool HealthMonitor::check_member_health(): assert(store_size > 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=351e7c0f971ada2034d0195b...
Telemetry Bot
01:28 AM Bug #54904 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b690952069c4903b5ac70f98...
Telemetry Bot
01:28 AM Bug #54903 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=537ff7983061e0ac7458973a...
Telemetry Bot
01:28 AM Bug #54901 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b751fce9ec24163f36479651...
Telemetry Bot
01:28 AM Bug #54900 (New): crash: void PGLog::merge_log(pg_info_t&, pg_log_t&&, pg_shard_t, pg_info_t&, PG...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e058bd661dd1e8d6158cec74...
Telemetry Bot
01:28 AM Bug #54899 (New): crash: pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f141982a26da5f99e5b010e4...
Telemetry Bot
01:27 AM Bug #54897 (New): crash: SnapSet::~SnapSet()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1bc781e05e382dd8bfe7c85f...
Telemetry Bot
01:27 AM Bug #54887 (New): crash: pg_vector_string(std::vector<int, std::allocator<int> > const&)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=18e53780107a1a4d4c47440e...
Telemetry Bot
01:27 AM Bug #54885 (New): crash: PgScrubber::build_scrub_map_chunk(ScrubMap&, ScrubMapBuilder&, hobject_t...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=14488b6d58800a16a1492258...
Telemetry Bot
01:27 AM Bug #54883 (New): crash: SpinLock::SpinLoop()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a62336b3bacaef402899c2db...
Telemetry Bot
01:27 AM Bug #54882 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=de50da4d10ffe0e501662bd5...
Telemetry Bot
01:27 AM Bug #54881 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c30b020bf0bc6c2b6b0471dd...
Telemetry Bot
01:27 AM Bug #54880 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e866241c55b94d1db77d2b71...
Telemetry Bot
01:26 AM Bug #54864 (New): crash: __libc_malloc()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9965b220730849b743a9419a...
Telemetry Bot
01:26 AM Bug #54860 (New): crash: ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2cf074fce60968d10cf531aa...
Telemetry Bot
01:26 AM Bug #54859 (New): crash: PrimaryLogPG::do_osd_ops(PrimaryLogPG::OpContext*, std::vector<OSDOp, st...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c581dc43a46a97efea4c59f1...
Telemetry Bot
01:26 AM Bug #54857 (New): crash: void interval_set<T, C>::insert(T, T, T*, T*) [with T = snapid_t; C = st...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c50013175c3f057a4b6153f5...
Telemetry Bot
01:26 AM Bug #54855 (New): crash: std::_Rb_tree_increment(std::_Rb_tree_node_base const*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=bd3669100d83ac07ea4ee3f1...
Telemetry Bot
01:26 AM Bug #54851 (New): crash: boost::statechart::simple_state<PeeringState::Deleting, PeeringState::To...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9159e25232543012d54387b...
Telemetry Bot
01:26 AM Bug #54849 (New): crash: PgScrubber::build_scrub_map_chunk(ScrubMap&, ScrubMapBuilder&, hobject_t...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=670d40d1669b94a19bb006d4...
Telemetry Bot
01:26 AM Bug #54845 (New): crash: eversion_t::get_key_name() const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a7e0d9a8fe0203db17b93f02...
Telemetry Bot
01:25 AM Bug #54842 (New): crash: ceph::buffer::ptr::append(char const*, unsigned int)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e902522d127801a9d19f96b8...
Telemetry Bot
01:25 AM Bug #54839 (New): crash: PushOp::encode(ceph::buffer::list&, unsigned long) const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c1d79a2b76cb4a1c05bf5628...
Telemetry Bot
01:25 AM Bug #54832 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4cdeaf8c5fb4f7ea8ec9133c...
Telemetry Bot
01:25 AM Bug #54831 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=80078d4d0d62906a1043c235...
Telemetry Bot
01:25 AM Bug #54830 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8625a9fe17dddf626242a326...
Telemetry Bot
01:25 AM Bug #54829 (New): crash: void OSDMap::check_health(ceph::common::CephContext*, health_check_map_t...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0bd94e8d39bae181e6968ee7...
Telemetry Bot
01:25 AM Bug #54828 (New): crash: syscall()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=42cccf39ffbef722d9a81b58...
Telemetry Bot
01:25 AM Bug #54821 (New): crash: PGLog::IndexedLog::index(unsigned short) const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b19a39f40e44343f0daad52a...
Telemetry Bot
01:25 AM Bug #54817 (New): crash: __cxa_deleted_virtual()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e7f5b237d94ce1d6641fe401...
Telemetry Bot
01:24 AM Bug #54808 (New): crash: std::__cxx11::string MonMap::get_name(unsigned int) const: assert(n < ra...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c6b2ab79aaa1d60546e6d5bc...
Telemetry Bot
01:24 AM Bug #54803 (New): crash: PeeringState::activate(ceph::os::Transaction&, unsigned int, PeeringCtxW...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0315dd7c5a1db88b730dbc2b...
Telemetry Bot
01:24 AM Bug #54801 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2cd96f252c64835054df1981...
Telemetry Bot
01:24 AM Bug #54799 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=afb9e616d10e07b819f10d1a...
Telemetry Bot
01:24 AM Bug #54797 (New): crash: non-virtual thunk to PrimaryLogPG::log_operation(std::vector<pg_log_entr...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a08c98b41a368b5fa42a4310...
Telemetry Bot
01:24 AM Bug #54787 (New): crash: ceph::os::Transaction::encode(ceph::buffer::list&) const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9e0ce013f4d8527b57b0cdce...
Telemetry Bot
01:23 AM Bug #54785 (New): crash: SubProcess::add_cmd_arg(char const*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=45ccaf55240c9b22439fc1cc...
Telemetry Bot
01:23 AM Bug #54781 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1a0e97c526b2a62fcbcc2dff...
Telemetry Bot
01:23 AM Bug #54777 (New): crash: PGLog::IndexedLog::index(unsigned short) const

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=76afc4b185078394810d946e...
Telemetry Bot
01:23 AM Bug #54776 (New): crash: std::__detail::_Map_base<hobject_t, std::pair<hobject_t const, pg_log_en...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b1c2a89917ac52f780ca7c8f...
Telemetry Bot
01:23 AM Bug #54773 (New): crash: void MonMap::add(const mon_info_t&): assert(addr_mons.count(a) == 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f8fb85f12a0ce85f8ed314e9...
Telemetry Bot
01:23 AM Bug #54772 (New): crash: pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9937a31bcd5d61d0d751a44f...
Telemetry Bot
01:23 AM Bug #54771 (New): crash: ceph::common::PerfCounters::set(int, unsigned long)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0d526134e18c9bc3b72eee21...
Telemetry Bot
01:23 AM Bug #54763 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b1e9a423ef890d9a39c5274b...
Telemetry Bot
01:22 AM Bug #54759 (New): crash: bool HealthMonitor::check_member_health(): assert(store_size > 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8daed7dbe2a4db7e174d5c13...
Telemetry Bot
01:22 AM Bug #54758 (New): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: assert(m ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=302a7b8cd1d593aae015628e...
Telemetry Bot
01:22 AM Bug #54754 (New): crash: void ECBackend::handle_recovery_read_complete(const hobject_t&, boost::t...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d0ccae3e83c00a7b1addaea8...
Telemetry Bot
01:22 AM Bug #54750 (New): crash: PeeringState::Crashed::Crashed(boost::statechart::state<PeeringState::Cr...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7bd9bb3fd0a5e31dc4970209...
Telemetry Bot
01:22 AM Bug #54745 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=960282fac2ce57b510042e6d...
Telemetry Bot
01:22 AM Bug #54744 (New): crash: void MonMap::add(const mon_info_t&): assert(addr_mons.count(a) == 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0301792b024ebcd170453531...
Telemetry Bot
01:22 AM Bug #54738 (New): crash: PG::prepare_write(pg_info_t&, pg_info_t&, PastIntervals&, PGLog&, bool, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=059dd3036d8a88e5f9c996ac...
Telemetry Bot
01:22 AM Bug #54737 (New): crash: pg_log_entry_t::pg_log_entry_t(pg_log_entry_t const&)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ebb22c1ff58e23579677eb56...
Telemetry Bot
01:22 AM Bug #54736 (New): crash: unsigned long const md_config_t::get_val<unsigned long>(ConfigValues con...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f7ef0aea5b23f12a6fd59930...
Telemetry Bot
01:22 AM Bug #54735 (New): crash: std::_List_iterator<pg_log_dup_t> std::list<pg_log_dup_t, mempool::pool_...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6cde73061c41e3b7fe3292c9...
Telemetry Bot
01:21 AM Bug #54726 (New): crash: rocksdb::ColumnFamilySet::~ColumnFamilySet()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=893942c770e4ec22a176fc22...
Telemetry Bot
01:21 AM Bug #54721 (New): crash: int fork_function(int, std::ostream&, std::function<signed char()>): ass...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ad3dbfcb12aa47eb6047ffbe...
Telemetry Bot
01:21 AM Bug #54720 (New): crash: const MDSMap::mds_info_t* FSMap::find_replacement_for(mds_role_t) const:...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=020e85155080c1db7172bb34...
Telemetry Bot
01:21 AM Bug #54718 (New): crash: int fork_function(int, std::ostream&, std::function<signed char()>): ass...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=15aa4d537d26feab1aba3461...
Telemetry Bot
01:21 AM Bug #54717 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ed4a47210646fdd0a23da601...
Telemetry Bot
01:21 AM Bug #54712 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=95b64a45f3fe10099294cddc...
Telemetry Bot
01:20 AM Bug #54710 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1b1acff7bf0f75c4d493e453...
Telemetry Bot
01:20 AM Bug #54709 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=20e436f4d417d0b2e865fe08...
Telemetry Bot
01:20 AM Bug #54708 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=739786197cebeeca2801b9e3...
Telemetry Bot
01:20 AM Bug #54706 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=875f72c2b1e199134aa9060f...
Telemetry Bot
01:20 AM Bug #54705 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=63d0ec2a6a196d5b5b62fe47...
Telemetry Bot
01:20 AM Bug #54704 (New): crash: base::internal::SpinLockDelay(int volatile*, int, int)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6973e13653e7538a798df4e2...
Telemetry Bot
01:19 AM Bug #54692 (New): crash: virtual bool PrimaryLogPG::should_send_op(pg_shard_t, const hobject_t&):...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a4e5858aaf2bc217dbba022e...
Telemetry Bot
01:19 AM Bug #54691 (New): crash: int fork_function(int, std::ostream&, std::function<signed char()>): ass...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2d6c1ccf0ade9c48e3bc400e...
Telemetry Bot
01:19 AM Bug #54689 (New): crash: void Monitor::sync_timeout(): assert(state == STATE_SYNCHRONIZING)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=66b2fb051c34a89f2f2a3603...
Telemetry Bot
01:19 AM Bug #54688 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8a0477583bed66321f639af8...
Telemetry Bot
01:19 AM Bug #54687 (New): crash: rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&, rocksdb::Sli...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b3ad42ec5dc17d796d061199...
Telemetry Bot
01:19 AM Bug #54682 (New): crash: void ReplicatedBackend::_do_push(OpRequestRef): abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c0b08204690d3af94bdee5cf...
Telemetry Bot
01:19 AM Bug #54678 (New): crash: void Scrub::ScrubMachine::assert_not_active() const: assert(state_cast<c...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f8a211da681da1852304df01...
Telemetry Bot
01:18 AM Bug #54669 (New): crash: uint64_t PrimaryLogPG::recover_replicas(uint64_t, ThreadPool::TPHandle&,...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=66ff79cee22ea4d9e82d046e...
Telemetry Bot
01:18 AM Bug #54668 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=37a1b04b7d92ef8ccae3a231...
Telemetry Bot
01:18 AM Bug #54656 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=933a7566e28d7001a9494d78...
Telemetry Bot
01:18 AM Bug #54652 (New): crash: librados::IoCtx::watch2(std::basic_string<char, std::char_traits<char>, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3200d614584c24ebe820a1f8...
Telemetry Bot
01:17 AM Bug #40777: hit assert in AuthMonitor::update_from_paxos

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2fba2ef6afdb444f87f1a9c17...
Telemetry Bot
01:17 AM Bug #54647 (New): crash: __pthread_mutex_lock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e8e60bf016a941ff8ee5c687...
Telemetry Bot
01:17 AM Bug #54646 (New): crash: void Monitor::win_standalone_election(): assert(rank == 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=558057b54ae471b046589a96...
Telemetry Bot
01:17 AM Bug #54641 (New): crash: virtual void PrimaryLogPG::on_local_recover(const hobject_t&, const Obje...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5ac165327d103d8ee1bc7a08...
Telemetry Bot
01:17 AM Bug #54632 (New): crash: RGWSI_Notify::unwatch(RGWSI_RADOS::Obj&, unsigned long)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=111109f978100da59232ead2...
Telemetry Bot
01:16 AM Bug #54631 (New): crash: void FSMap::sanity(bool) const: assert(info.compat.writeable(fs->mds_map...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8c73f593698d04bd8bcf913d...
Telemetry Bot
01:16 AM Bug #54630 (New): crash: void LogMonitor::_create_sub_incremental(MLog*, int, version_t): assert(...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3773cfd150902f2b0be87235...
Telemetry Bot
01:16 AM Bug #54629 (New): crash: void Paxos::commit_proposal(): assert(mon.is_leader())

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3015bc1ec993404897ed9453...
Telemetry Bot
01:16 AM Bug #54626 (New): crash: int BlueFS::_flush_range(BlueFS::FileWriter*, uint64_t, uint64_t): abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e4f31317c5a5fa9fe9de5e53...
Telemetry Bot

03/18/2022

05:40 PM Backport #54601: quincy: Add scrub_duration to pg dump json format
Aishwarya Mathuria wrote:
> https://github.com/ceph/ceph/pull/45471
merged
Yuri Weinstein
05:39 PM Backport #54569: quincy: mon/MonCommands.h: target_size_ratio range is incorrect
Kamoltat Sirivadhna wrote:
> https://github.com/ceph/ceph/pull/45396
merged
Yuri Weinstein
05:38 PM Backport #54527: quincy: cephadm upgrade pacific to quincy autoscaler is scaling pgs from 32 -> 3...
Kamoltat Sirivadhna wrote:
> https://github.com/ceph/ceph/pull/45363
merged
Yuri Weinstein
05:04 PM Bug #54617 (Fix Under Review): mgr/osd beacon timeout under high mon_command load
Neha Ojha
02:10 PM Bug #54617 (Fix Under Review): mgr/osd beacon timeout under high mon_command load
When mons handle a large number of mon_commands, an osdbeacon|mgrbeacon sent to a peon mon
will wait for a long time with...
wencong wan
03:19 PM Documentation #54619 (Resolved): Doc: Improve mClock config reference documentation
Sridhar Seshasayee

03/17/2022

10:46 PM Backport #54526: pacific: cephadm upgrade pacific to quincy autoscaler is scaling pgs from 32 -> ...
https://github.com/ceph/ceph/pull/45173 Kamoltat (Junior) Sirivadhna
09:10 PM Backport #54614 (Resolved): quincy: support truncation sequences in sparse reads
https://github.com/ceph/ceph/pull/45736 Backport Bot
09:08 PM Feature #54280 (Pending Backport): support truncation sequences in sparse reads
As discussed with Greg on IRC. Neha Ojha
07:10 PM Backport #54612 (Resolved): quincy: Add snaptrim stats to the existing PG stats.
https://github.com/ceph/ceph/pull/45524 Backport Bot
07:07 PM Fix #54565 (Pending Backport): Add snaptrim stats to the existing PG stats.
Neha Ojha
06:03 PM Bug #53729 (New): ceph-osd takes all memory before oom on boot
Neha Ojha
03:45 PM Bug #54611 (Fix Under Review): prometheus metrics shows incorrect ceph version for upgraded ceph ...
Prashant D
03:38 PM Bug #54611 (Resolved): prometheus metrics shows incorrect ceph version for upgraded ceph daemon
If the cluster is partially upgraded, then on a particular host:
In a bare-metal environment, all daemons on the host show sa...
Prashant D
12:59 PM Bug #51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC back...
/a/yuriw-2022-03-16_20:38:07-rados-wip-yuri3-testing-2022-03-16-1030-distro-default-smithi/6739311 Sridhar Seshasayee
12:12 PM Bug #54516: mon/config.sh: unrecognized config option 'debug asok'
PR https://github.com/ceph/ceph/pull/44656 added the test along with the code to support whitespace checking.
Nitzan Mordechai
11:10 AM Bug #54515: mon/health-mute.sh: TEST_mute: return 1 (HEALTH WARN 3 mgr modules have failed depend...
/a/kchai-2022-03-17_05:18:57-rados-wip-cxx20-fixes-core-kefu-distro-default-smithi/6740941/ Kefu Chai
03:44 AM Backport #54601 (In Progress): quincy: Add scrub_duration to pg dump json format
Aishwarya Mathuria
03:43 AM Backport #54601 (New): quincy: Add scrub_duration to pg dump json format
Aishwarya Mathuria
03:12 AM Backport #54601 (Resolved): quincy: Add scrub_duration to pg dump json format
https://github.com/ceph/ceph/pull/45471 Aishwarya Mathuria
03:15 AM Backport #54602 (Duplicate): quincy: Add scrub_duration to pg dump json format
Backport Bot
03:10 AM Feature #54600 (Pending Backport): Add scrub_duration to pg dump json format
Aishwarya Mathuria
03:03 AM Feature #54600 (Resolved): Add scrub_duration to pg dump json format
Addition of a scrub_duration field that shows how long the scrub/deep-scrub of a pg took in seconds.
This field will...
Aishwarya Mathuria
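The new field can be read out of the JSON PG dump. A minimal sketch; the sample payload below is invented for illustration and the exact nesting in a real `ceph pg dump pgs --format json` may differ by release:

```python
import json

# Invented excerpt of a PG dump; a real dump has many more fields per PG.
sample = '''
{"pg_stats": [
  {"pgid": "1.0", "scrub_duration": 2.5},
  {"pgid": "1.1", "scrub_duration": 0.8}
]}
'''

def scrub_durations(dump_json):
    """Map pgid -> scrub_duration (seconds) from a PG dump JSON string."""
    stats = json.loads(dump_json)["pg_stats"]
    return {pg["pgid"]: pg["scrub_duration"] for pg in stats}

print(scrub_durations(sample))
```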
02:15 AM Bug #54576: cache tier set proxy failed
According to the description in the official documentation:
REMOVING A WRITEBACK CACHE
Since a writeback cache m...
changzhi tan
12:17 AM Bug #54599 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

*New crash events were reported via Telemetry with newer versions (['16.2.6']) than encountered in Tracker (16.2.5)...
Telemetry Bot
12:16 AM Bug #54598 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

*New crash events were reported via Telemetry with newer versions (['16.2.6', '16.2.7']) than encountered in Tracke...
Telemetry Bot
12:16 AM Bug #54597 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

*New crash events were reported via Telemetry with newer versions (['16.2.6', '16.2.7']) than encountered in Tracke...
Telemetry Bot
12:16 AM Bug #54596 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

*New crash events were reported via Telemetry with newer versions (['16.2.6']) than encountered in Tracker (16.2.5)...
Telemetry Bot
12:16 AM Bug #54595 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

*New crash events were reported via Telemetry with newer versions (['16.2.6', '16.2.7']) than encountered in Tracke...
Telemetry Bot
12:16 AM Bug #54594 (New): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef): a...

*New crash events were reported via Telemetry with newer versions (['16.2.6', '16.2.7']) than encountered in Tracke...
Telemetry Bot
12:16 AM Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f582692869a94580abf07e669...
Telemetry Bot
12:15 AM Bug #50089: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing number of monitors i...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=96b49c839d59492286f04a76e...
Telemetry Bot
12:15 AM Bug #49689: osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch") start

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3a50bb9444331ec2b94a68f89...
Telemetry Bot
12:14 AM Bug #36304: FAILED ceph_assert(p != pg_slots.end()) in OSDShard::register_and_wake_split_child(PG*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=86176dad44ae51d3e7de7eac8...
Telemetry Bot

03/16/2022

08:17 PM Bug #54593 (Fix Under Review): librados: check latest osdmap on ENOENT in pool_reverse_lookup()
Ilya Dryomov
08:12 PM Bug #54593 (Resolved): librados: check latest osdmap on ENOENT in pool_reverse_lookup()
Need to avoid spurious ENOENT errors from rados_pool_reverse_lookup() and Rados::pool_reverse_lookup() caused by an o... Ilya Dryomov
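The intended behaviour can be sketched generically: on ENOENT, fetch the latest osdmap once and retry the lookup, so a lookup racing with pool creation does not return a spurious error. `lookup` and `refresh` below are hypothetical stand-ins for librados internals, not real API calls:

```python
import errno

def reverse_lookup_with_refresh(lookup, refresh, pool_id):
    """Generic retry sketch: on ENOENT, refresh the osdmap once and retry.
    `lookup` and `refresh` are hypothetical callables for illustration."""
    try:
        return lookup(pool_id)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
        refresh()  # fetch the latest osdmap
        return lookup(pool_id)

# Fake stand-ins: the first lookup misses; a refresh makes the pool visible.
state = {"fresh": False}

def fake_refresh():
    state["fresh"] = True

def fake_lookup(pool_id):
    if not state["fresh"]:
        raise OSError(errno.ENOENT, "no such pool")
    return "mypool"

print(reverse_lookup_with_refresh(fake_lookup, fake_refresh, 7))
```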
07:24 PM Bug #43915 (New): leaked Session (alloc from OSD::ms_handle_authentication)
/a//yuriw-2022-03-14_18:49:17-rados-wip-yuri2-testing-2022-03-14-0946-quincy-distro-default-smithi/6736781/remote/smi... Neha Ojha
07:12 PM Fix #54565 (Fix Under Review): Add snaptrim stats to the existing PG stats.
Neha Ojha
07:01 PM Bug #54592 (Fix Under Review): partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap dirty
Neha Ojha
04:13 PM Bug #54592 (Resolved): partial recovery: CEPH_OSD_OP_OMAPRMKEYRANGE should mark omap dirty
All the OMAP write ops call mark_omap_dirty(), except CEPH_OSD_OP_OMAPRMKEYRANGE. This leads to:
1. incorrectly setting...
Neha Ojha
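The description above can be illustrated with a toy model (plain Python, not Ceph code): recovery consults a per-object dirty flag, so any omap-mutating op that forgets to set it leaves recovery unaware of the change. The op names mirror the CEPH_OSD_OP_* constants, but the logic is invented for illustration:

```python
# Toy model: each omap-mutating op should set a per-object "omap dirty"
# flag so recovery knows it must copy omap data.
OMAP_WRITE_OPS = {"OMAPSETKEYS", "OMAPRMKEYS", "OMAPCLEAR", "OMAPRMKEYRANGE"}

def omap_dirty_after(ops, marks_dirty):
    """Return True if recovery would consider omap dirty after `ops`."""
    return any(op in marks_dirty for op in ops)

# Before the fix, OMAPRMKEYRANGE was missing from the set that marks dirty,
# so a key-range removal left the object looking clean to recovery:
buggy = OMAP_WRITE_OPS - {"OMAPRMKEYRANGE"}
print(omap_dirty_after(["OMAPRMKEYRANGE"], buggy))
print(omap_dirty_after(["OMAPRMKEYRANGE"], OMAP_WRITE_OPS))
```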
06:57 PM Bug #53729: ceph-osd takes all memory before oom on boot
Thanks Mykola and Dan, this shows exactly what the issue is! There are dup entries with a version higher than the log... Josh Durgin
04:24 PM Bug #53729: ceph-osd takes all memory before oom on boot
Dan van der Ster wrote:
> Mykola, to help understand the issue better, could you please attach the entire --op log...
Mykola Golub
03:21 PM Bug #53729: ceph-osd takes all memory before oom on boot
Mykola Golub wrote:
> Just to confirm Dan's findings, I am attaching the output of the command Dan asked me to run (...
Dan van der Ster
02:24 PM Bug #53729: ceph-osd takes all memory before oom on boot
Just to confirm Dan's findings, I am attaching the output of the command Dan asked me to run (number of pg log and pg... Mykola Golub
12:31 PM Bug #53729: ceph-osd takes all memory before oom on boot
Updating here based on further investigations offline with Guillaume.
In his logs we saw bluestore iterating throu...
Dan van der Ster
08:48 AM Bug #53729: ceph-osd takes all memory before oom on boot
Dan van der Ster wrote:
> Guillame is it possible to catch this exact situation with debug_bluestore=20, i.e
>
> ...
Guillaume Fenollar
10:59 AM Feature #54580 (New): common/options: add FLAG_SECURE to Ceph options
h3. Context
It has been reported by several users that @ceph config dump@ and @ceph config-key dump@ may expose se...
Ernesto Puerta
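A hypothetical sketch of the proposed behaviour: options whose flags include the (proposed) secure flag are masked before a config dump is shown. The option names and masking rule here are assumptions for illustration only:

```python
# Hypothetical sketch of FLAG_SECURE handling: mask the values of options
# flagged "secure" in config-dump-style output.
def redacted_dump(options):
    """options: name -> (value, set of flags). Secure values are masked."""
    return {
        name: ("*" * 8 if "secure" in flags else value)
        for name, (value, flags) in options.items()
    }

opts = {
    "mgr/dashboard/password": ("hunter2", {"secure"}),  # invented example
    "osd_max_backfills": ("1", set()),
}
print(redacted_dump(opts))
```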
10:02 AM Bug #54576 (Pending Backport): cache tier set proxy failed
When I set the proxy mode to remove a writeback cache according to the official documentation [[https://docs.ceph.com... changzhi tan
05:31 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
Reproduces without restful api too, ... nikhil kshirsagar
05:23 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
Moving to RADOS since it can be reproduced from a python script which takes the restful API out of the picture. Brad Hubbard
04:53 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
Moving to restful module so it can be triaged appropriately. Brad Hubbard
04:23 AM Bug #50089: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing number of monitors i...
Update:
We ran and analyzed a total of 3 runs:
Local vstart stopping the monitors before removing the monitors...
Kamoltat (Junior) Sirivadhna

03/15/2022

06:27 PM Backport #54569 (In Progress): quincy: mon/MonCommands.h: target_size_ratio range is incorrect
Kamoltat (Junior) Sirivadhna
06:01 PM Backport #54569: quincy: mon/MonCommands.h: target_size_ratio range is incorrect
https://github.com/ceph/ceph/pull/45396 Kamoltat (Junior) Sirivadhna
05:45 PM Backport #54569 (Resolved): quincy: mon/MonCommands.h: target_size_ratio range is incorrect
Backport Bot
06:27 PM Backport #54567 (In Progress): pacific: mon/MonCommands.h: target_size_ratio range is incorrect
Kamoltat (Junior) Sirivadhna
05:59 PM Backport #54567: pacific: mon/MonCommands.h: target_size_ratio range is incorrect
https://github.com/ceph/ceph/pull/45397/commits Kamoltat (Junior) Sirivadhna
05:45 PM Backport #54567 (Resolved): pacific: mon/MonCommands.h: target_size_ratio range is incorrect
Backport Bot
06:26 PM Backport #54568 (In Progress): octopus: mon/MonCommands.h: target_size_ratio range is incorrect
https://github.com/ceph/ceph/pull/45398 Kamoltat (Junior) Sirivadhna
05:45 PM Backport #54568 (Resolved): octopus: mon/MonCommands.h: target_size_ratio range is incorrect
Backport Bot
06:00 PM Backport #54570 (Rejected): quincy: mon/MonCommands.h: target_size_ratio range is incorrect
Kamoltat (Junior) Sirivadhna
05:47 PM Backport #54570 (Rejected): quincy: mon/MonCommands.h: target_size_ratio range is incorrect
Currently if we give `target_size_ratio` a value more than 1.0 using the command: `ceph osd pool create <pool-name> -... Kamoltat (Junior) Sirivadhna
05:44 PM Bug #54316 (Pending Backport): mon/MonCommands.h: target_size_ratio range is incorrect
Kamoltat (Junior) Sirivadhna
01:47 PM Fix #54565 (Resolved): Add snaptrim stats to the existing PG stats.
On a per-PG basis, add the following snaptrim stats:
- objects trimmed
- the time duration for the snaptrim
Sridhar Seshasayee
01:35 PM Feature #54564 (New): Changes to auth_allow_insecure_global_id_reclaim are not in the audit log
I expect that all setting changes will show up in the audit log (based on https://access.redhat.com/documentation/en-... Javier Kohen
12:08 PM Bug #53729: ceph-osd takes all memory before oom on boot
Guillaume Fenollar wrote:
> Dan van der Ster wrote:
> > Guillaume Fenollar wrote:
> > > Dan van der Ster wrote:
>...
Dan van der Ster
08:05 AM Bug #53729: ceph-osd takes all memory before oom on boot
FYI, in our case I described in [1] and following comments (15 osd cluster, after changing pg_num from 526 to 1026 os... Mykola Golub
11:13 AM Bug #54296: OSDs using too much memory
Dan van der Ster wrote:
> Hi Ruben, Did you make any more progress on this?
Hi Dan, I missed your update, sorry. ...
Ruben Kerkhof
09:47 AM Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
The bad json data that crashed the mon is pasted below..... nikhil kshirsagar
03:58 AM Bug #54558 (Resolved): malformed json in a Ceph RESTful API call can stop all ceph-mon services
When sent with curl from the CLI, an HTTP request containing malformed JSON data for creating a user and defining capabilities caus... nikhil kshirsagar
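Until the mon-side fix lands, callers can defend themselves by validating the payload before sending it. A minimal sketch; the helper name is invented, and the `prefix` requirement mirrors the usual mon command shape:

```python
import json

def validated_command(payload):
    """Parse `payload` and fail early instead of shipping bad JSON to a mon."""
    try:
        cmd = json.loads(payload)
    except json.JSONDecodeError as e:
        raise ValueError(f"refusing to send malformed JSON: {e}") from None
    if not isinstance(cmd, dict) or "prefix" not in cmd:
        raise ValueError("mon commands must be a JSON object with a 'prefix'")
    return cmd

print(validated_command('{"prefix": "auth get-or-create", "entity": "client.foo"}'))
```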
09:09 AM Bug #51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC back...
/a/yuriw-2022-03-14_18:47:44-rados-wip-yuri3-testing-2022-03-14-0946-distro-default-smithi/6736449 Aishwarya Mathuria
07:39 AM Bug #54548: mon hang when run ceph -s command after execute "ceph osd in osd.<x>" command
I see progress value is:... yite gu

03/14/2022

08:24 PM Bug #54556 (Won't Fix): Pools are wrongly reported to have non-power-of-two pg_num after update
We just updated our cluster from 14.2.1 to 14.2.22. Now (in addition to a few more) a new warning appears which we h... Martin H.
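For reference, the health check behind this warning reduces to a power-of-two test on each pool's pg_num; a minimal sketch of the bit trick (not Ceph's actual code):

```python
def is_power_of_two(n):
    """True when n is a positive power of two (1, 2, 4, 8, ...)."""
    return n > 0 and (n & (n - 1)) == 0

# pg_num values that would trigger the non-power-of-two warning:
print([n for n in (32, 48, 64, 100, 128) if not is_power_of_two(n)])
```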
05:21 PM Bug #54552 (Fix Under Review): ceph windows test hanging quincy backport PRs
Kamoltat (Junior) Sirivadhna
03:42 PM Bug #54552 (Resolved): ceph windows test hanging quincy backport PRs
... Kamoltat (Junior) Sirivadhna
02:21 PM Backport #54526 (In Progress): pacific: cephadm upgrade pacific to quincy autoscaler is scaling p...
Kamoltat (Junior) Sirivadhna
02:20 PM Backport #54526: pacific: cephadm upgrade pacific to quincy autoscaler is scaling pgs from 32 -> ...
https://github.com/ceph/ceph/pull/45364 Kamoltat (Junior) Sirivadhna
02:21 PM Backport #54527 (In Progress): quincy: cephadm upgrade pacific to quincy autoscaler is scaling pg...
Kamoltat (Junior) Sirivadhna
02:21 PM Backport #54527: quincy: cephadm upgrade pacific to quincy autoscaler is scaling pgs from 32 -> 3...
https://github.com/ceph/ceph/pull/45363 Kamoltat (Junior) Sirivadhna
02:10 PM Bug #46847: Loss of placement information on OSD reboot
Oh also ceph pg repeer has not totally worked. I have a single object remaining unfound. ... Malcolm Haak
04:02 AM Bug #46847: Loss of placement information on OSD reboot
Neha Ojha wrote:
> Frank Schilder wrote:
> > Could somebody please set the status back to open and Affected Version...
Malcolm Haak
01:05 PM Bug #54548 (Won't Fix): mon hang when run ceph -s command after execute "ceph osd in osd.<x>" com...
1. run command "ceph osd in osd.<x>"
2. run command "ceph -s". I want to see the progress, but "ceph -s" hangs at this ti...
yite gu

03/13/2022

09:04 AM Bug #51307 (Fix Under Review): LibRadosWatchNotify.Watch2Delete fails
https://github.com/ceph/ceph/pull/45366 Nitzan Mordechai
08:34 AM Bug #51307: LibRadosWatchNotify.Watch2Delete fails
In that case it was not a socket failure injection, it was:
2022-02-16T09:56:22.598+0000 15af4700 1 -- [v2:172.21.1...
Nitzan Mordechai

03/11/2022

01:02 PM Bug #53729: ceph-osd takes all memory before oom on boot
Dan van der Ster wrote:
> Guillaume Fenollar wrote:
> > Dan van der Ster wrote:
> > > Could you revert that and tr...
Guillaume Fenollar
08:22 AM Bug #53729: ceph-osd takes all memory before oom on boot
Guillaume Fenollar wrote:
> Dan van der Ster wrote:
> > Could you revert that and try running
> >
> > ceph-osd -...
Dan van der Ster
07:09 AM Bug #53729: ceph-osd takes all memory before oom on boot
Dan van der Ster wrote:
> Could you revert that and try running
>
> ceph-osd --debug_ms=1 --debug_osd=20 --debug_...
Guillaume Fenollar
06:47 AM Bug #53729: ceph-osd takes all memory before oom on boot
Guillaume Fenollar wrote:
> See that it reaches 14GB of RAM in 90 seconds approx and starts writing while crashing (...
Dan van der Ster
03:09 AM Bug #53729: ceph-osd takes all memory before oom on boot
Dan van der Ster wrote:
> > Can you somehow annotate the usage over time in the log?
>
> Could you please also se...
Guillaume Fenollar
03:02 AM Bug #53729: ceph-osd takes all memory before oom on boot
Mykola Golub wrote:
> Mykola Golub wrote:
>
> > pool 2 'ssd' replicated size 3 min_size 2 crush_rule 0 object_has...
Neha Ojha
09:54 AM Bug #52026: osd: pgs went back into snaptrim state after osd restart
We are having the same issue with ceph 15.2.13. We take RBD snapshots that get deleted after 3 days.
The problem ge...
Jack Y
03:19 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
... jianwei zhang

03/10/2022

11:58 PM Bug #54516: mon/config.sh: unrecognized config option 'debug asok'
This was the first occurrence of this test failure according to the Sentry history (March 5th 2022), and it has since... Laura Flores
03:10 PM Bug #54516 (Won't Fix): mon/config.sh: unrecognized config option 'debug asok'
/a/yuriw-2022-03-04_21:56:41-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smithi/6721689... Kamoltat (Junior) Sirivadhna
11:46 PM Bug #54521: daemon: Error while waiting for process to exit
This looks a lot like a valgrind failure, but there were unfortunately no osd logs collected.... Laura Flores
03:35 PM Bug #54521 (Need More Info): daemon: Error while waiting for process to exit
This causes dead job: hit max job timeout
/a/yuriw-2022-03-04_21:56:41-rados-wip-yuri4-testing-2022-03-03-1448-dis...
Kamoltat (Junior) Sirivadhna
11:30 PM Bug #54529: mon/mon-bind.sh: Failure due to cores found
"Failure due to cores found" means that there is a coredump, and indeed there is a crash. Did we merge something rece... Neha Ojha
11:17 PM Bug #54529 (Duplicate): mon/mon-bind.sh: Failure due to cores found
Looks like this failed due to external connection issues, but I'll log it for documentation.
/a/teuthology-2022-01...
Laura Flores
11:30 PM Bug #54517: scrub/osd-scrub-snaps.sh: TEST FAILED WITH 1 ERRORS
Ronen this looks a lot like https://tracker.ceph.com/issues/54458, just with a slightly different output. Can you che... Laura Flores
03:18 PM Bug #54517 (Duplicate): scrub/osd-scrub-snaps.sh: TEST FAILED WITH 1 ERRORS
/a/teuthology-archive/yuriw-2022-03-04_21:56:41-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smithi/6721751... Kamoltat (Junior) Sirivadhna
10:52 PM Bug #54296: OSDs using too much memory
Hi Ruben, Did you make any more progress on this?
I'm going through all the osd pglog memory usage tickets, and it...
Dan van der Ster
10:21 PM Bug #53729: ceph-osd takes all memory before oom on boot
> Can you somehow annotate the usage over time in the log?
Could you please also set debug_prioritycache=5 -- this...
Dan van der Ster
09:35 PM Bug #53729: ceph-osd takes all memory before oom on boot
Guillaume Fenollar wrote:
> Neha Ojha wrote:
> > Can anyone provide osd logs with debug_osd=20,debug_ms=1 for OSDs ...
Dan van der Ster
05:23 AM Bug #53729: ceph-osd takes all memory before oom on boot
Mykola Golub wrote:
> pool 2 'ssd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1024 pgp_...
Mykola Golub
05:16 AM Bug #53729: ceph-osd takes all memory before oom on boot
Neha Ojha wrote:
> Can anyone provide osd logs with debug_osd=20,debug_ms=1 for OSDs that are hitting OOM?
I just...
Guillaume Fenollar
10:00 PM Backport #54527 (Resolved): quincy: cephadm upgrade pacific to quincy autoscaler is scaling pgs f...
Backport Bot
10:00 PM Backport #54526 (Resolved): pacific: cephadm upgrade pacific to quincy autoscaler is scaling pgs ...
Backport Bot
09:57 PM Bug #54263 (Pending Backport): cephadm upgrade pacific to quincy autoscaler is scaling pgs from 3...
Kamoltat (Junior) Sirivadhna
09:15 PM Feature #54525 (New): osd/mon: log memory usage during tick
The MDS has a nice feature: it prints out the RSS and other memory stats every couple of seconds at debug level 2.
...
Dan van der Ster
06:13 PM Bug #54507 (Duplicate): workunit test cls/test_cls_rgw: Manager failed: thrashosds
Laura Flores
03:28 PM Bug #51846: rados/test.sh: LibRadosList.ListObjectsCursor did not complete.
/a/yuriw-2022-03-04_21:56:41-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smithi/6721371
/a/yuriw-2022-03-...
Kamoltat (Junior) Sirivadhna
03:00 PM Bug #54515 (New): mon/health-mute.sh: TEST_mute: return 1 (HEALTH WARN 3 mgr modules have failed ...
/a/yuriw-2022-03-04_21:56:41-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smithi/6721547... Kamoltat (Junior) Sirivadhna
02:48 PM Bug #45423: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
/a/yuriw-2022-03-04_21:56:41-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smithi/6721464 Kamoltat (Junior) Sirivadhna
10:50 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
Neha Ojha wrote:
> jianwei zhang wrote:
> > 1711'7107 : all of s0/1/2/3/4/5 have it, so the write can proceed
> > 1715'7108 : s0/2/3/5 satisfy k=4, so...
jianwei zhang
01:57 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
Neha Ojha wrote:
> jianwei zhang wrote:
> > 1711'7107 : all of s0/1/2/3/4/5 have it, so the write can proceed
> > 1715'7108 : s0/2/3/5 satisfy k=4, so...
jianwei zhang
04:59 AM Bug #45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_ra...
/a/yuriw-2022-03-04_21:56:41-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smithi/6721329 Kamoltat (Junior) Sirivadhna
04:32 AM Bug #54511 (Resolved): test_pool_min_size: AssertionError: not clean before minsize thrashing starts
/a/yuriw-2022-03-04_00:56:58-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smithi/6719015... Kamoltat (Junior) Sirivadhna
04:15 AM Bug #53767: qa/workunits/cls/test_cls_2pc_queue.sh: killing an osd during thrashing causes timeout
/a/yuriw-2022-03-04_00:56:58-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smithi/6718855 Kamoltat (Junior) Sirivadhna
01:48 AM Bug #51627: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soi...
https://tracker.ceph.com/issues/54509 Myoungwon Oh
01:47 AM Bug #54509: FAILED ceph_assert due to issue manifest API to the original object
https://github.com/ceph/ceph/pull/45137 Myoungwon Oh
01:47 AM Bug #54509 (Resolved): FAILED ceph_assert due to issue manifest API to the original object
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x55f1f3750606]
2: ceph-osd(+0x5b...
Myoungwon Oh

03/09/2022

09:44 PM Bug #54507 (Duplicate): workunit test cls/test_cls_rgw: Manager failed: thrashosds
/a/yuriw-2022-03-04_00:56:58-rados-wip-yuri4-testing-2022-03-03-1448-distro-default-smithi/6718934... Kamoltat (Junior) Sirivadhna
08:23 PM Bug #52535: monitor crashes after an OSD got destroyed: OSDMap.cc: 5686: FAILED ceph_assert(num_d...
Hello Radoslaw,
thank you for your response!
About two weeks ago I did first remove and then add 6 OSDs. I did no...
Sebastian Mazza
07:35 PM Bug #52535: monitor crashes after an OSD got destroyed: OSDMap.cc: 5686: FAILED ceph_assert(num_d...
Neha has made an interesting observation about the occurrences among different versions.
http://telemetry.front.se...
Radoslaw Zarzynski
07:32 PM Bug #52535: monitor crashes after an OSD got destroyed: OSDMap.cc: 5686: FAILED ceph_assert(num_d...
Hello Sebastian!
Was there any change about the OSD count? I mean particularly OSD removal.
Radoslaw Zarzynski
01:10 AM Bug #52535: monitor crashes after an OSD got destroyed: OSDMap.cc: 5686: FAILED ceph_assert(num_d...
I faced the same problem with ceph version 16.2.6. It occurred after shutting down all 3 physical servers of the clus... Sebastian Mazza
08:16 PM Backport #54506 (In Progress): quincy: doc/rados/operations/placement-groups/#automated-scaling: ...
https://github.com/ceph/ceph/pull/45321 Kamoltat (Junior) Sirivadhna
07:50 PM Backport #54506 (Resolved): quincy: doc/rados/operations/placement-groups/#automated-scaling: --b...
Backport Bot
08:15 PM Backport #54505 (In Progress): pacific: doc/rados/operations/placement-groups/#automated-scaling:...
https://github.com/ceph/ceph/pull/45328 Kamoltat (Junior) Sirivadhna
07:50 PM Backport #54505 (Resolved): pacific: doc/rados/operations/placement-groups/#automated-scaling: --...
Backport Bot
07:59 PM Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
If this is easily reproducible could you please provide us with logs of replicas for the failing PG? It can be figure... Radoslaw Zarzynski
07:55 PM Bug #51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC back...
More for my own reference, but it's clear that the rw_manager problem occurs here in the PrimaryLogPG code when prepp... Laura Flores
07:47 PM Bug #54485 (Pending Backport): doc/rados/operations/placement-groups/#automated-scaling: --bulk i...
Neha Ojha
07:46 PM Bug #51307 (In Progress): LibRadosWatchNotify.Watch2Delete fails
Radoslaw Zarzynski
07:36 PM Bug #53729: ceph-osd takes all memory before oom on boot
Neha Ojha wrote:
> Can you share the output of "ceph osd dump"? I suspect that though you may have disabled the au...
Mykola Golub
07:31 PM Bug #53729: ceph-osd takes all memory before oom on boot
Neha Ojha wrote:
> Can anyone provide osd logs with debug_osd=20,debug_ms=1 for OSDs that are hitting OOM?
I uplo...
Mykola Golub
06:58 PM Bug #53729: ceph-osd takes all memory before oom on boot
Can anyone provide osd logs with debug_osd=20,debug_ms=1 for OSDs that are hitting OOM? Neha Ojha
06:45 PM Bug #53729: ceph-osd takes all memory before oom on boot
Mykola Golub wrote:
> We seem to observe a similar issue (16.2.7). On a pool with autoscale disabled pg num was chan...
Neha Ojha
05:56 PM Bug #53729: ceph-osd takes all memory before oom on boot
We seem to observe a similar issue (16.2.7). On a pool with autoscale disabled pg num was changed from 256 to 1024. A... Mykola Golub
07:13 PM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
jianwei zhang wrote:
> 1711'7107 : all of s0/1/2/3/4/5 have it, so the write can proceed
> 1715'7108 : s0/2/3/5 satisfy k=4, so the write can proceed
> 1715'7109 : s0/2...
Neha Ojha
06:43 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
ceph v15.2.13 tag jianwei zhang
05:42 AM Bug #53924: EC PG stuckrecovery_unfound+undersized+degraded+remapped+peered
[root@node1 ceph]# zcat ceph.client.log-20220308.gz|grep 202000000034931.0000001a
2022-03-08T03:12:25.531+0800 7f484...
jianwei zhang
05:37 AM Bug #53924: EC PG stuck recovery_unfound+undersized+degraded+remapped+peered
1711'7107 : s0/1/2/3/4/5 all have it, so the write can proceed
1715'7108 : s0/2/3/5 satisfies k=4, so the write can proceed
1715'7109 : s0/2/3/5 satisfies k=4, so the write can proceed
1715'71...
jianwei zhang
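The k-of-n rule the log excerpt above illustrates can be sketched as follows. This is a hedged illustration, not Ceph source: for an EC pool with k=4 data shards (here shards s0..s5, i.e. k=4, m=2), a write at a given epoch'version is durable only if at least k shards of the acting set record it; the function name is hypothetical.

```python
K = 4  # data shards required (k in the EC profile)

def write_can_proceed(shards_present, k=K):
    """Return True if enough distinct shards saw the write to satisfy k."""
    return len(set(shards_present)) >= k

# 1711'7107: all of s0..s5 have it -> the write can proceed
print(write_can_proceed(["s0", "s1", "s2", "s3", "s4", "s5"]))  # True
# 1715'7108: s0/2/3/5 present, exactly k=4 -> the write can proceed
print(write_can_proceed(["s0", "s2", "s3", "s5"]))  # True
# fewer than k shards -> the write cannot be made durable
print(write_can_proceed(["s0", "s2", "s3"]))  # False
```

When a later peering round can only assemble fewer than k shards that recorded such a version, the objects written at it become unfound, which is consistent with the recovery_unfound state reported here.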
05:35 AM Bug #53924: EC PG stuck recovery_unfound+undersized+degraded+remapped+peered
I had a similar problem with pg recovery_unfound ... jianwei zhang
07:06 PM Bug #50042: rados/test.sh: api_watch_notify failures
Let's use this tracker to track all the watch notify failures. For other api test failures, let's open new trackers. ... Neha Ojha
03:36 PM Bug #50042: rados/test.sh: api_watch_notify failures
Found a case of https://tracker.ceph.com/issues/45423 in master, which had a fix that was merged. Seems like it's pop... Laura Flores
05:06 PM Backport #54468 (In Progress): octopus: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely...
Laura Flores
04:56 PM Backport #54466 (In Progress): pacific: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely...
Laura Flores
04:42 PM Backport #54467 (In Progress): quincy: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely ...
Laura Flores
04:39 PM Backport #53659 (Resolved): pacific: mon: "FAILED ceph_assert(session_map.sessions.empty())" when...
https://github.com/ceph/ceph/pull/44543 has been merged. Laura Flores
04:36 PM Backport #53978 (Resolved): quincy: [RFE] Limit slow request details to mgr log
Laura Flores
04:36 PM Backport #53388 (Resolved): pacific: pg-temp entries are not cleared for PGs that no longer exist
Laura Flores
04:36 PM Backport #51150 (Resolved): pacific: When read failed, ret can not take as data len, in FillInVer...
Laura Flores
04:35 PM Backport #53486 (Resolved): pacific: LibRadosTwoPoolsPP.ManifestSnapRefcount Failure.
Laura Flores
04:35 PM Backport #53702 (Resolved): pacific: qa/tasks/backfill_toofull.py: AssertionError: 2.0 not in bac...
Laura Flores
04:33 PM Backport #53942 (Resolved): pacific: mon: all mon daemon always crash after rm pool
Laura Flores
04:33 PM Backport #53535 (Resolved): pacific: mon: mgrstatmonitor spams mgr with service_map
Laura Flores
04:32 PM Backport #53718 (Resolved): pacific: mon: frequent cpu_tp had timed out messages
Laura Flores
04:28 PM Backport #53480 (Resolved): pacific: Segmentation fault under Pacific 16.2.1 when using a custom ...
Laura Flores
04:12 PM Backport #52077 (In Progress): octopus: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
Laura Flores
04:11 PM Backport #52078 (In Progress): pacific: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
Laura Flores
03:33 PM Bug #45423: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
Seems like this may have come back:
/a/dgalloway-2022-03-09_02:34:58-rados-wip-45272-distro-basic-smithi/6727572
Laura Flores

03/08/2022

06:52 PM Bug #51627: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soi...
Myoungwon Oh wrote:
> The error message looks similar to before, but the cause is different from the prior case.
...
Neha Ojha
10:00 AM Bug #54489 (New): mon: ops get stuck in "resend forwarded message to leader"
I hit this bug "BUG #22114":https://tracker.ceph.com/issues/22114#change-211414 in octopus.
"description": "log(2 ...
Jiaxing Fan
09:02 AM Bug #51307: LibRadosWatchNotify.Watch2Delete fails
Laura Flores wrote:
> /a/yuriw-2022-02-16_00:25:26-rados-wip-yuri-testing-2022-02-15-1431-distro-default-smithi/6687...
Nitzan Mordechai

03/07/2022

03:00 PM Bug #54485 (Fix Under Review): doc/rados/operations/placement-groups/#automated-scaling: --bulk i...
Kamoltat (Junior) Sirivadhna
02:45 PM Bug #54485 (Resolved): doc/rados/operations/placement-groups/#automated-scaling: --bulk invalid c...
Command for creating a pool
was: `ceph osd create test_pool --bulk`
should be: `ceph osd pool create test_pool ...
Kamoltat (Junior) Sirivadhna

03/06/2022

08:42 PM Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
Just got this again: after doing maintenance on another node, this OSD crashed during recovery.
-1> 2022-03-06T...
Tobias Urdin

03/04/2022

07:20 PM Backport #54232 (Resolved): pacific: devices: mon devices appear empty when scraping SMART metrics
Yaarit Hatuka
06:40 PM Backport #54232: pacific: devices: mon devices appear empty when scraping SMART metrics
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44959
merged
Yuri Weinstein
04:51 PM Bug #50042: rados/test.sh: api_watch_notify failures
/a/yuriw-2022-03-01_17:45:51-rados-wip-yuri3-testing-2022-02-28-0757-pacific-distro-default-smithi/6714656... Laura Flores
01:39 PM Bug #53729: ceph-osd takes all memory before oom on boot
BTW I'm using Ceph 15.2.16 Guillaume Fenollar
03:13 AM Bug #53729: ceph-osd takes all memory before oom on boot
Hi everyone,
I've been having this issue as well for several weeks. Sometimes situations stabilize by themselves, sometim...
Guillaume Fenollar

03/03/2022

06:54 PM Bug #54458 (Resolved): osd-scrub-snaps.sh: TEST_scrub_snaps failed due to malformed log message
Neha Ojha
08:10 AM Bug #54458 (Fix Under Review): osd-scrub-snaps.sh: TEST_scrub_snaps failed due to malformed log m...
Ronen Friedman
07:47 AM Bug #54458 (Resolved): osd-scrub-snaps.sh: TEST_scrub_snaps failed due to malformed log message
(created by PR #44941)
the test expects the following line:
"...found snap mapper error on pg 1.0 oid 1:461f8b5e:...
Ronen Friedman
06:15 PM Bug #45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_ra...
/a/yuriw-2022-03-01_17:45:51-rados-wip-yuri3-testing-2022-02-28-0757-pacific-distro-default-smithi/6714724 Laura Flores
06:13 PM Bug #50042: rados/test.sh: api_watch_notify failures
/a/yuriw-2022-03-01_17:45:51-rados-wip-yuri3-testing-2022-02-28-0757-pacific-distro-default-smithi/6714863... Laura Flores
06:09 PM Bug #53294 (Duplicate): rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
Marking this one as the duplicate because the other Tracker has the PR attached to it. Laura Flores
06:02 PM Bug #47838: mon/test_mon_osdmap_prune.sh: first_pinned != trim_to
/a/yuriw-2022-03-01_17:45:51-rados-wip-yuri3-testing-2022-02-28-0757-pacific-distro-default-smithi/6714654 Laura Flores
05:55 PM Backport #54468 (Resolved): octopus: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely cl...
https://github.com/ceph/ceph/pull/45324 Backport Bot
05:55 PM Backport #54467 (Resolved): quincy: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely cle...
https://github.com/ceph/ceph/pull/45322 Backport Bot
05:55 PM Backport #54466 (Resolved): pacific: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely cl...
https://github.com/ceph/ceph/pull/45323 Backport Bot
05:54 PM Bug #54396 (Pending Backport): Setting osd_pg_max_concurrent_snap_trims to 0 prematurely clears t...
Neha Ojha
05:48 PM Bug #54396 (Resolved): Setting osd_pg_max_concurrent_snap_trims to 0 prematurely clears the snapt...
Laura Flores
03:59 PM Bug #53855 (Resolved): rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount
Laura Flores
03:09 PM Bug #50659: Segmentation fault under Pacific 16.2.1 when using a custom crush location hook
This seems to be a pretty high priority issue, we just hit it upgrading from nautilus to 16.2.7 on a cluster with 100... Wyllys Ingersoll
09:50 AM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
I hit this bug in octopus.
"description": "log(2 entries from seq 1 at 2021-12-20T10:43:38.225243+0800...
Jiaxing Fan
01:35 AM Bug #52319: LibRadosWatchNotify.WatchNotify2 fails
This is a bit different from #47719. In that case we got ENOENT when we expected ENOTCONN, but in the case of this... Brad Hubbard
12:17 AM Bug #52319: LibRadosWatchNotify.WatchNotify2 fails
Thanks Laura and Radek. Let me take another look at this. Brad Hubbard

03/02/2022

11:14 PM Bug #54263: cephadm upgrade pacific to quincy autoscaler is scaling pgs from 32 -> 32768 for ceph...
Update:
After recreating the problem by tweaking the upgrade/pacific-x/parallel suite and adding additional logs, ...
Kamoltat (Junior) Sirivadhna
06:46 PM Bug #54263 (Fix Under Review): cephadm upgrade pacific to quincy autoscaler is scaling pgs from 3...
Neha Ojha
12:38 AM Bug #54263 (In Progress): cephadm upgrade pacific to quincy autoscaler is scaling pgs from 32 -> ...
Vikhyat Umrao
11:04 PM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
/a/yuriw-2022-03-01_22:42:19-rados-wip-yuri4-testing-2022-03-01-1206-distro-default-smithi/6715365 Laura Flores
09:51 PM Backport #54412 (Rejected): pacific:osd:add pg_num_max value
We don't need the backport in pacific at the moment; we might do it in the future, though. Kamoltat (Junior) Sirivadhna
07:20 PM Bug #54210 (Resolved): pacific: mon/pg_autoscaler.sh: echo failed on "bash -c 'ceph osd pool get ...
Radoslaw Zarzynski
07:16 PM Bug #52136: Valgrind reports memory "Leak_DefinitelyLost" errors.
Let's add it to qa/valgrind.supp to suppress this error, based on Adam's comment https://tracker.ceph.com/issues/5213... Neha Ojha
07:00 PM Bug #52319: LibRadosWatchNotify.WatchNotify2 fails
Added a related one (hypothesis: same issue in multiple places, one of them already fix by Brad). Radoslaw Zarzynski

03/01/2022

11:22 PM Bug #51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC back...
I'm guessing that the problem involves pgs that are stuck in the `active+recovering+undersized+remapped` state (or `a... Laura Flores
05:26 PM Bug #51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC back...
/a/yuriw-2022-02-15_16:22:25-rados-wip-yuri6-testing-2022-02-14-1456-distro-default-smithi/6685233... Laura Flores
08:11 PM Bug #54438: test/objectstore/store_test.cc: FAILED ceph_assert(bl_eq(state->contents[noid].data, ...
/a/benhanokh-2021-08-04_06:12:22-rados-wip_gbenhano_ncbz-distro-basic-smithi/6310791/ Neha Ojha
05:40 PM Bug #54438 (New): test/objectstore/store_test.cc: FAILED ceph_assert(bl_eq(state->contents[noid]....
/a/yuriw-2022-02-15_16:22:25-rados-wip-yuri6-testing-2022-02-14-1456-distro-default-smithi/6685291... Laura Flores
06:13 PM Bug #52319: LibRadosWatchNotify.WatchNotify2 fails
I linked a related issue that looks very similar to this failure, except with a slightly different LibRadosWatchNotif... Laura Flores
06:11 PM Bug #54439 (New): LibRadosWatchNotify.WatchNotify2Multi fails
/a/yuriw-2022-02-28_21:23:00-rados-wip-yuri-testing-2022-02-28-0823-quincy-distro-default-smithi/6711961... Laura Flores
05:21 PM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
/a/yuriw-2022-02-15_16:22:25-rados-wip-yuri6-testing-2022-02-14-1456-distro-default-smithi/6685226 Laura Flores

02/28/2022

09:28 PM Backport #54082 (Resolved): pacific: mon: osd pool create <pool-name> with --bulk flag
Kamoltat (Junior) Sirivadhna
06:53 PM Bug #50842: pacific: recovery does not complete because of rw_manager lock not being released
I recovered logs from a scenario that looks very similar.
See the full result of `zcat /a/yuriw-2022-02-17_22:49:5...
Laura Flores
11:34 AM Bug #54423 (New): osd/scrub: bogus DigestUpdate events are created, logged and (hopefully) rejected
A mishandling of the counter of "the digest-updates we are waiting for, before finishing
with this scrubbed chunk" c...
Ronen Friedman

02/25/2022

10:57 PM Backport #53480: pacific: Segmentation fault under Pacific 16.2.1 when using a custom crush locat...
Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44897
merged
Yuri Weinstein
10:56 PM Backport #54082: pacific: mon: osd pool create <pool-name> with --bulk flag
Kamoltat Sirivadhna wrote:
> pull request: https://github.com/ceph/ceph/pull/44847
merged
Yuri Weinstein
09:59 PM Backport #54412 (Rejected): pacific:osd:add pg_num_max value
https://github.com/ceph/ceph/pull/45173 Kamoltat (Junior) Sirivadhna
05:55 PM Bug #50042: rados/test.sh: api_watch_notify failures
/a/yuriw-2022-02-24_22:04:22-rados-wip-yuri7-testing-2022-02-17-0852-pacific-distro-default-smithi/6704772... Laura Flores
05:54 AM Bug #54364 (Resolved): The built-in osd bench test shows inflated results.
Sridhar Seshasayee
05:54 AM Backport #54393 (Resolved): quincy: The built-in osd bench test shows inflated results.
Sridhar Seshasayee

02/24/2022

10:45 PM Backport #54386: octopus: [RFE] Limit slow request details to mgr log
please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/45154
ceph-backport.sh versi...
Ponnuvel P
07:43 PM Bug #52136: Valgrind reports memory "Leak_DefinitelyLost" errors.
/a/sseshasa-2022-02-24_11:27:07-rados-wip-45118-45121-quincy-testing-distro-default-smithi/6704275/remote/smithi174/l... Laura Flores
07:19 PM Bug #53294: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlushDuringFlush
/a/sseshasa-2022-02-24_11:27:07-rados-wip-45118-45121-quincy-testing-distro-default-smithi/6704402... Laura Flores
06:36 PM Bug #54368 (Duplicate): ModuleNotFoundError: No module named 'tasks.cephadm'
Neha Ojha
05:51 PM Backport #53644 (In Progress): pacific: Disable health warning when autoscaler is on
Christopher Hoffman
03:33 PM Backport #53551 (Resolved): pacific: [RFE] Provide warning when the 'require-osd-release' flag do...
Sridhar Seshasayee
08:56 AM Bug #54396: Setting osd_pg_max_concurrent_snap_trims to 0 prematurely clears the snaptrim queue
More context:... Dan van der Ster
08:44 AM Bug #54396 (Fix Under Review): Setting osd_pg_max_concurrent_snap_trims to 0 prematurely clears t...
Dan van der Ster
08:41 AM Bug #54396 (Resolved): Setting osd_pg_max_concurrent_snap_trims to 0 prematurely clears the snapt...
See https://www.spinics.net/lists/ceph-users/msg71061.html... Dan van der Ster
08:38 AM Backport #54393 (Resolved): quincy: The built-in osd bench test shows inflated results.
https://github.com/ceph/ceph/pull/45141 Sridhar Seshasayee
08:37 AM Bug #54364 (Pending Backport): The built-in osd bench test shows inflated results.
Sridhar Seshasayee
02:45 AM Bug #51627: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soi...
The error message looks similar to before, but the cause is different from the prior case.
Anyway, I posted the f...
Myoungwon Oh

02/23/2022

05:32 PM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
Happened in a dead job.
/a/yuriw-2022-02-21_15:40:41-rados-wip-yuri4-testing-2022-02-18-0800-distro-default-smithi/6...
Laura Flores
05:16 PM Bug #51627: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_missing(soi...
Happened again. Could this be a new occurrence?
/a/yuriw-2022-02-21_15:40:41-rados-wip-yuri4-testing-2022-02-18-0800...
Laura Flores
05:00 PM Bug #45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_ra...
/a/yuriw-2022-02-21_15:40:41-rados-wip-yuri4-testing-2022-02-18-0800-distro-default-smithi/6698327 Laura Flores
03:15 PM Backport #54386 (Resolved): octopus: [RFE] Limit slow request details to mgr log
Backport Bot