Activity
From 08/02/2021 to 08/31/2021
08/31/2021
- 09:57 PM Bug #50587 (Resolved): mon election storm following osd recreation: huge tcmalloc and ceph::msgr:...
- 09:41 PM Bug #52421 (Resolved): test tracker
- 09:41 PM Backport #52475 (Resolved): octopus: test tracker
- 09:20 PM Backport #52475 (Resolved): octopus: test tracker
- 09:40 PM Backport #52474 (Resolved): nautilus: test tracker
- 08:52 PM Backport #52474 (Resolved): nautilus: test tracker
- 09:40 PM Backport #52466 (Resolved): pacific: test tracker
- 03:44 PM Backport #52466 (Resolved): pacific: test tracker
- 08:59 AM Bug #49697: prime pg temp: unexpected optimization
- Recently, I found that patch "https://github.com/ceph/ceph/commit/023524a26d7e12e7ddfc3537582b1a1cb03af69e" can solve my is...
- 03:44 AM Bug #52255: The pgs state are degraded, but all the osds is up and there is no recovering and bac...
- Neha Ojha wrote:
> can you share your osdmap? are all your osds up and in? the crushmap looks fine.
wish to get y...
08/30/2021
- 04:59 PM Bug #52408: osds not peering correctly after startup
- Other requested info from this rebuild of the cluster:...
- 04:57 PM Bug #52408: osds not peering correctly after startup
- Ok. I wasn't clear on whether I needed to run "ceph config set debug_osd 20" on all the hosts or just 1. I ran it on ...
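A minimal sketch of how the same debug level can be set centrally for every osd at once instead of per host (standard ceph config commands; the daemon id is an example):
    # apply to every osd daemon in the cluster
    ceph config set osd debug_osd 20
    # or to a single daemon only
    ceph config set osd.0 debug_osd 20
    # remove the override once the logs have been collected
    ceph config rm osd debug_osd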
- 02:28 PM Bug #50657: smart query on monitors
- Yaarit Hatuka wrote:
> Thanks. Are there mons on dedicated nodes or devices in your cluster configuration?
We hav...
- 08:56 AM Bug #50657 (Pending Backport): smart query on monitors
- 01:44 PM Backport #51605 (In Progress): pacific: bufferlist::splice() may cause stack corruption in buffer...
- 01:44 PM Backport #51604 (In Progress): octopus: bufferlist::splice() may cause stack corruption in buffer...
- 09:00 AM Backport #52451 (Resolved): octopus: smart query on monitors
- https://github.com/ceph/ceph/pull/44177
- 09:00 AM Backport #52450 (Resolved): pacific: smart query on monitors
- https://github.com/ceph/ceph/pull/44164
- 07:01 AM Bug #52448 (Fix Under Review): osd: pg may get stuck in backfill_toofull after backfill is interr...
- 06:51 AM Bug #52448 (Resolved): osd: pg may get stuck in backfill_toofull after backfill is interrupted du...
- Consider a scenario:
- Data is written to a pool so one osd X is close to full but still lower than nearfull/toofu...
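A hedged sketch of commands for watching the relevant state while reproducing this (the ratio value is only an example):
    # show the configured nearfull/backfillfull/full ratios
    ceph osd dump | grep ratio
    # list pgs currently stuck in backfill_toofull
    ceph pg dump pgs | grep backfill_toofull
    # optionally raise the backfillfull threshold slightly so backfill can resume
    ceph osd set-backfillfull-ratio 0.91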
08/28/2021
- 02:59 PM Bug #52445 (New): OSD asserts on starting too many pushes
- I am running a ceph version 15.2.5 cluster. In recent days, scrub reported errors and a few pgs failed due to OSD's rando...
08/27/2021
- 04:28 PM Bug #52124: Invalid read of size 8 in handle_recovery_delete()
- /a/yuriw-2021-08-26_18:40:53-rados-wip-yuri7-testing-2021-08-26-0841-distro-basic-smithi/6360450/remote/smithi052/log...
- 01:21 PM Bug #52421 (Pending Backport): test tracker
08/26/2021
- 09:55 PM Bug #52172 (Triaged): crash: ceph::buffer::v15_2_0::create_aligned_in_mempool(unsigned int, unsig...
- 09:51 PM Bug #52174 (Triaged): crash: ceph::buffer::v15_2_0::create_aligned_in_mempool(unsigned int, unsig...
- 09:46 PM Bug #52176 (Duplicate): crash: std::_Rb_tree<boost::intrusive_ptr<AsyncConnection>, boost::intrus...
- 09:41 PM Bug #52178 (Duplicate): crash: virtual void AuthMonitor::update_from_paxos(bool*): assert(ret == 0)
- 09:37 PM Bug #52180 (Duplicate): crash: void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_...
- 09:37 PM Bug #47299 (New): Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
- 09:33 PM Bug #52183 (Duplicate): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: ass...
- 09:31 PM Bug #52186 (Duplicate): crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
- 09:29 PM Bug #52195 (Duplicate): crash: /lib64/libpthread.so.0(
- 09:26 PM Bug #52190 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
- Not a ceph bug, most likely failed to write to rocksdb.
- 09:26 PM Bug #52191 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
- Not a ceph bug, most likely failed to write to rocksdb.
- 09:25 PM Bug #52192 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
- Not a ceph bug, most likely failed to write to rocksdb.
- 09:25 PM Bug #52193 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
- Not a ceph bug, most likely failed to write to rocksdb.
- 09:25 PM Bug #52197 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
- Not a ceph bug, most likely failed to write to rocksdb.
- 09:23 PM Bug #52198 (Duplicate): crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty())
- 09:22 PM Bug #52199 (Duplicate): crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty())
- 09:21 PM Bug #52200 (Duplicate): crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
- 09:18 PM Bug #52207 (Duplicate): crash: std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<ch...
- 09:17 PM Bug #52210 (Closed): crash: CrushWrapper::decode(ceph::buffer::v15_2_0::list::iterator_impl<true>&)
- One cluster reporting all the crashes, likely failing to decode due to a corrupted on disk state.
- 09:15 PM Bug #52211 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
- Not a ceph bug, most likely failed to write to rocksdb.
- 09:13 PM Bug #52212 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
- 09:11 PM Bug #52213 (Duplicate): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->o...
- 09:10 PM Bug #52214 (Duplicate): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->o...
- 09:10 PM Bug #52217 (Duplicate): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->o...
- 09:10 PM Bug #52218 (Duplicate): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->o...
- 09:09 PM Bug #44715 (New): common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->o...
- 09:07 PM Bug #52220: crash: void ECUtil::HashInfo::append(uint64_t, std::map<int, ceph::buffer::v15_2_0::l...
- One cluster reporting all the crashes.
- 09:06 PM Bug #52221 (Triaged): crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
- 09:04 PM Bug #52143 (Duplicate): crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
- 09:00 PM Bug #52225: crash: void Thread::create(const char*, size_t): assert(ret == 0)
- One cluster is reporting all the crashes.
- 08:59 PM Bug #52226: crash: PosixNetworkStack::spawn_worker(unsigned int, std::function<void ()>&&)
- One cluster reporting all the crashes.
- 08:58 PM Bug #52231: crash: std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::al...
- One cluster is reporting all the crashes.
- 08:56 PM Bug #52233: crash: void Infiniband::init(): assert(device)
- One cluster is reporting all the crashes.
- 08:19 PM Feature #52424 (Resolved): [RFE] Limit slow request details to mgr log
- Slow requests can overwhelm a cluster log with too many details, filling up the monitor DB.
There's no need to log...
- 08:04 PM Feature #51984: [RFE] Provide warning when the 'require-osd-release' flag does not match current ...
- Please check - https://tracker.ceph.com/issues/52423
- 08:02 PM Feature #52423 (New): Do not allow running enable-msgr2 if cluster don't have osd release set to ...
- Do not allow running enable-msgr2 if the cluster doesn't have the osd release set to nautilus.
See also - https://tracker.ceph....
- 07:53 PM Bug #50657: smart query on monitors
- Thanks. Are there mons on dedicated nodes or devices in your cluster configuration?
> Do you have a bug number for...
- 07:30 PM Bug #50657: smart query on monitors
- > > Jan-Philipp, Hannes, is this a bare metal deployment (what OS?), or did you use cephadm?
>
> Yes, bare metal d...
- 11:00 AM Bug #50657: smart query on monitors
- Yaarit Hatuka wrote:
> This fixes the missing sudoers file in mon nodes:
> https://github.com/ceph/ceph/pull/42913
...
- 07:49 PM Bug #52408: osds not peering correctly after startup
- Thanks for providing these logs, but they don't have debug_osd=20 (we need it on all the osds). The pg query for 1.7c...
- 10:46 AM Bug #52408: osds not peering correctly after startup
- Tore down and rebuilt the cluster again using my quincy-based image. This time, I didn't create any filesystems. ceph...
- 06:14 PM Bug #52421 (Resolved): test tracker
- please ignore
- 05:58 PM Bug #52418 (New): workloads/dedup-io-snaps: ceph_assert(!context->check_oldest_snap_flushed(oid, ...
- /a/yuriw-2021-08-24_19:42:41-rados-wip-yuri8-testing-2021-08-24-0913-distro-basic-smithi/6356797...
- 05:12 PM Bug #52416 (Resolved): devices: mon devices appear empty when scraping SMART metrics
- When invoking smartctl on mon devices, the device name is empty:...
- 03:56 PM Bug #52415 (Closed): rocksdb: build error with rocksdb-6.22.x
- https://github.com/ceph/ceph/pull/42815
- 03:11 PM Bug #52415: rocksdb: build error with rocksdb-6.22.x
- possibly fixed by https://github.com/ceph/ceph/pull/42815?
- 01:58 PM Bug #52415 (Resolved): rocksdb: build error with rocksdb-6.22.x
- Fedora rawhide (f35, f36) have recently upgraded to rocksdb-6.22.1
Now ceph's rocksdb integration fails to compile...
- 04:10 AM Bug #39150: mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
- ...
08/25/2021
- 08:39 PM Bug #52408: osds not peering correctly after startup
- Tore down the old cluster and built a Pacific one (v16.2.5). That one doesn't have the same issue. I'll do a clean te...
- 07:44 PM Bug #52408: osds not peering correctly after startup
- peering info:...
- 06:53 PM Bug #52408: osds not peering correctly after startup
- Nothing in the logs for crashed osd.0. I think the last thing in the logs was a rocksdb dump. coredumpctl also didn't...
- 06:38 PM Bug #52408: osds not peering correctly after startup
- Jeff Layton wrote:
> This time when I brought it up, one osd didn't go "up". First two bits of info you asked for:
...
- 06:17 PM Bug #52408: osds not peering correctly after startup
- This time when I brought it up, one osd didn't go "up". First two bits of info you asked for:...
- 05:33 PM Bug #52408: osds not peering correctly after startup
- 1. Can you try to reproduce this with 1 pool containing few pgs?
2. Turn the autoscaler off (ceph osd pool set foo p...
- 01:46 PM Bug #52408: osds not peering correctly after startup
- My current build is based on upstream commit a49f10e760b4. It has some MDS patches on top, but nothing that should af...
- 01:45 PM Bug #52408 (Can't reproduce): osds not peering correctly after startup
- I might not have the right terminology here. I have a host that I run 3 VMs on that act as cephadm cluster nodes (mos...
- 04:00 AM Bug #50657 (Fix Under Review): smart query on monitors
- This fixes the missing sudoers file in mon nodes:
https://github.com/ceph/ceph/pull/42913
We'll address the fix f...
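As a rough sanity check on an affected mon host, the scrape can be approximated manually; the exact flags ceph uses here are an assumption, and --json=o needs smartmontools 7+ (device name is an example):
    sudo smartctl -x --json=o /dev/sda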
08/24/2021
- 09:54 PM Backport #52336: pacific: ceph df detail reports dirty objects without a cache tier
- Deepika Upadhyay wrote:
> https://github.com/ceph/ceph/pull/42860
merged
- 09:53 PM Backport #51830: pacific: set a non-zero default value for osd_client_message_cap
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42615
merged
- 08:12 PM Backport #51290 (Resolved): pacific: mon: stretch mode clusters do not sanely set default crush r...
- 05:54 PM Backport #51290 (In Progress): pacific: mon: stretch mode clusters do not sanely set default crus...
- 06:31 PM Bug #45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_ra...
- /a/yuriw-2021-08-23_19:24:05-rados-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basic-smithi/6353883
- 06:16 PM Backport #51952: pacific: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing()....
- Causing failures in pacific: /a/yuriw-2021-08-23_19:24:05-rados-wip-yuri4-testing-2021-08-23-0812-pacific-distro-basi...
- 10:45 AM Bug #50441 (Resolved): cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- Dan Mick wrote:
> Deepika, was that the reason why?
yep Dan, Neha marked needs info because of MB's comment, mark...
- 12:40 AM Bug #52385 (Closed): a possible data loss due to recovery_unfound PG after restarting all nodes
- Related to the discussion in ceph-users ML.
https://marc.info/?l=ceph-users&m=162947327817532&w=2
I encountered a...
08/23/2021
- 09:53 PM Bug #50441: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- Deepika, was that the reason why?
- 08:08 PM Backport #51549 (Resolved): pacific: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42211
m...
- 08:03 PM Backport #51568 (Resolved): pacific: pool last_epoch_clean floor is stuck after pg merging
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42224
m...
- 05:54 PM Fix #52329: src/vstart: The command "set config key osd_mclock_max_capacity_iops_ssd" fails with ...
- Mon logs showing that the command succeeds after the fix is applied:...
- 04:51 PM Bug #52026: osd: pgs went back into snaptrim state after osd restart
- Just wanted to note that we recently encountered what appears to be the same issue on some Luminous (12.2.12) cluster...
08/20/2021
- 04:14 PM Bug #52255: The pgs state are degraded, but all the osds is up and there is no recovering and bac...
- Neha Ojha wrote:
> can you share your osdmap? are all your osds up and in? the crushmap looks fine.
all the osds ...
- 05:30 AM Backport #52337 (In Progress): octopus: ceph df detail reports dirty objects without a cache tier
- 02:36 AM Backport #52337 (Resolved): octopus: ceph df detail reports dirty objects without a cache tier
- https://github.com/ceph/ceph/pull/42862
- 03:02 AM Backport #52336: pacific: ceph df detail reports dirty objects without a cache tier
- https://github.com/ceph/ceph/pull/42860
- 02:36 AM Backport #52336 (Resolved): pacific: ceph df detail reports dirty objects without a cache tier
- https://github.com/ceph/ceph/pull/42860
- 02:36 AM Bug #52335 (Pending Backport): ceph df detail reports dirty objects without a cache tier
- 02:32 AM Bug #52335 (Resolved): ceph df detail reports dirty objects without a cache tier
- Description of problem:
'ceph df detail' reports a column for DIRTY objects under POOLS even though cache tiers are ...
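A quick way to see the column in question and to confirm that no cache tier is actually configured (these commands only read state):
    # DIRTY appears under POOLS even with no cache tiering in use
    ceph df detail
    # check whether any pool actually has a tier attached
    ceph osd pool ls detail | grep -i tier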
08/19/2021
- 10:48 PM Bug #52026: osd: pgs went back into snaptrim state after osd restart
- I don't have the logs right now but it prints the state of the PG, so if you search for `snaptrim` in `f0208568-fbf4-48...
- 08:48 PM Bug #52026 (New): osd: pgs went back into snaptrim state after osd restart
- Thanks for providing the logs; is there a particular PG we should look at in the logs?
- 09:19 PM Bug #50441: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- I assume because of MB's comment, but that seems now to be historical
- 09:17 PM Bug #50441: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- Deepika: why is this issue in need-more-info? Looks like the original fix and pacific backport https://github.com/cep...
- 09:12 PM Bug #48844 (Duplicate): api_watch_notify: LibRadosWatchNotify.AioWatchDelete failed
- 09:08 PM Bug #52261 (Need More Info): OSD takes all memory and crashes, after pg_num increase
- 09:08 PM Bug #52255 (Need More Info): The pgs state are degraded, but all the osds is up and there is no r...
- 09:08 PM Bug #52255: The pgs state are degraded, but all the osds is up and there is no recovering and bac...
- can you share your osdmap? are all your osds up and in? the crushmap looks fine.
- 08:54 PM Bug #52319: LibRadosWatchNotify.WatchNotify2 fails
- Brad, are you aware of this one?
- 03:54 AM Bug #52319 (New): LibRadosWatchNotify.WatchNotify2 fails
- 2021-08-17T01:34:43.023 INFO:tasks.workunit.client.0.smithi111.stdout: api_watch_notify: [ RUN ] LibRado...
- 08:51 PM Bug #52136: Valgrind reports memory "Leak_DefinitelyLost" errors.
- Adam Kupczyk wrote:
> This leak is from internals of RocksDB.
> We have no access to FileMetaData objects, we canno...
- 07:34 AM Bug #52136: Valgrind reports memory "Leak_DefinitelyLost" errors.
- This leak is from internals of RocksDB.
We have no access to FileMetaData objects, we cannot be responsible for this...
- 08:48 PM Backport #51549: pacific: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- Deepika Upadhyay wrote:
> https://github.com/ceph/ceph/pull/42211
merged
- 08:45 PM Bug #50659: Segmentation fault under Pacific 16.2.1 when using a custom crush location hook
- Adam, can you start taking a look at this?
- 03:24 PM Fix #52329 (Fix Under Review): src/vstart: The command "set config key osd_mclock_max_capacity_io...
- 02:28 PM Fix #52329 (Resolved): src/vstart: The command "set config key osd_mclock_max_capacity_iops_ssd" ...
- The following was observed when bringing up a vstart cluster:...
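Presumably the failing command is of this general form (daemon id and value are illustrative; vstart may use a slightly different invocation):
    # override the measured iops capacity for one osd
    ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 21500
    # verify what the daemon actually picked up
    ceph config get osd.0 osd_mclock_max_capacity_iops_ssd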
- 07:45 AM Backport #52322 (Resolved): pacific: LibRadosTwoPoolsPP.ManifestSnapRefcount failure
- https://github.com/ceph/ceph/pull/43306
- 07:42 AM Bug #51000 (Pending Backport): LibRadosTwoPoolsPP.ManifestSnapRefcount failure
- 04:47 AM Bug #51076: "wait_for_recovery: failed before timeout expired" during thrashosd test with EC back...
- I see the same assertion error in this dead job - https://pulpito.ceph.com/yuriw-2021-08-16_21:15:00-rados-wip-yuri-t...
08/18/2021
- 11:19 PM Backport #51569 (In Progress): octopus: pool last_epoch_clean floor is stuck after pg merging
- 09:03 PM Backport #51569: octopus: pool last_epoch_clean floor is stuck after pg merging
- https://github.com/ceph/ceph/pull/42837
- 09:53 PM Bug #52316: qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(mons)
- ...
- 07:18 PM Bug #52316 (Resolved): qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(...
- 2021-08-17T03:12:45.055 INFO:tasks.workunit.client.0.smithi135.stderr:2021-08-17T03:12:45.052+0000 7f27d941a700 1 --...
- 03:50 AM Backport #52307 (Resolved): pacific: doc: clarify use of `rados rm` command
- https://github.com/ceph/ceph/pull/51260
- 03:50 AM Backport #52306 (Rejected): octopus: doc: clarify use of `rados rm` command
- 03:47 AM Bug #52288 (Pending Backport): doc: clarify use of `rados rm` command
08/17/2021
- 04:40 PM Bug #52012 (Fix Under Review): osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_...
- 01:35 PM Bug #52026: osd: pgs went back into snaptrim state after osd restart
- I searched a bit through the log I sent and I don't see any traces of a pg in the snaptrim state, probably because ...
- 07:12 AM Fix #51116: osd: Run osd bench test to override default max osd capacity for mclock.
- Removed the classification of the tracker as a "Feature". This is better classified as a "Fix" with the aim of improv...
- 04:09 AM Bug #52255: The pgs state are degraded, but all the osds is up and there is no recovering and bac...
- This is my crushmap
08/16/2021
- 11:29 PM Backport #51568: pacific: pool last_epoch_clean floor is stuck after pg merging
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/42224
merged
- 10:33 PM Bug #52288 (Resolved): doc: clarify use of `rados rm` command
- The man page and the "--help" info for `rados rm ...` could be clearer.
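For reference, the basic invocation whose documentation is being clarified (pool and object names are examples):
    # remove a single object from a pool
    rados -p mypool rm myobject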
- 08:18 PM Bug #52261: OSD takes all memory and crashes, after pg_num increase
- Can you attach a 'ceph osd dump' and 'ceph pg dump', plus a log of one of the osds starting leading up to the crash w...
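A sketch of how the requested information could be gathered (file names are arbitrary; the log unit name depends on how the cluster was deployed):
    ceph osd dump > osd-dump.txt
    ceph pg dump > pg-dump.txt
    # plus the log of one crashing osd from startup up to the crash, e.g.
    journalctl -u ceph-osd@12 > osd.12.log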
- 03:22 PM Bug #52026: osd: pgs went back into snaptrim state after osd restart
- I reproduced the issue by doing a `ceph pg repeer` on a pg with a non-zero snaptrimq_len. After the pg has been repee...
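A rough sketch of that reproduction (the pg id is an example; column names may differ slightly by release):
    # find a pg whose SNAPTRIMQ_LEN is non-zero
    ceph pg dump pgs | less
    # force that pg through peering again
    ceph pg repeer 2.1a
    # watch it re-enter the snaptrim state
    ceph pg dump pgs | grep snaptrim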
08/15/2021
- 08:16 AM Bug #52261 (Need More Info): OSD takes all memory and crashes, after pg_num increase
- After increasing a pool pg_num from 256 to 512, all osds are down.
On startup, they take all of the memory. After ...
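For context, the change that preceded the crashes, plus one knob sometimes used to bound osd memory while investigating (pool name and value are examples):
    # the pg_num increase described above
    ceph osd pool set mypool pg_num 512
    # cap the osd memory autotuning target, e.g. at 4 GiB
    ceph config set osd osd_memory_target 4294967296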
08/13/2021
- 03:12 AM Bug #52255 (Need More Info): The pgs state are degraded, but all the osds is up and there is no r...
- I removed a server yesterday, but there are 6 pgs in state degraded that no longer change.
The copy size of pool...
08/11/2021
- 06:47 PM Bug #52233 (New): crash: void Infiniband::init(): assert(device)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=184ea175092db1eb5f584b66...
- 06:47 PM Bug #52231 (New): crash: std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, s...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1aea506f1109fd768e765158...
- 06:47 PM Bug #52226 (New): crash: PosixNetworkStack::spawn_worker(unsigned int, std::function<void ()>&&)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4bf2d022677b1bd10586cef6...
- 06:47 PM Bug #52225 (New): crash: void Thread::create(const char*, size_t): assert(ret == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=77077c11d9fa7cd7f8d4ccaa...
- 06:47 PM Bug #52221 (Triaged): crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9ee7dc6ce5b80b3a4a423d80...
- 06:47 PM Bug #52220 (New): crash: void ECUtil::HashInfo::append(uint64_t, std::map<int, ceph::buffer::v15_...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9541d850892a4b0e1e7d3cce...
- 06:47 PM Bug #52218 (Duplicate): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->o...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=886c2848ae642fafcf59efce...
- 06:47 PM Bug #52217 (Duplicate): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->o...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=78325a1cfa85add67a004464...
- 06:47 PM Bug #52214 (Duplicate): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->o...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=637262c9313d56f56724c439...
- 06:47 PM Bug #52213 (Duplicate): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->o...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7838cb70174ac6ee701615d8...
- 06:47 PM Bug #52212 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cb6a35bf8176df5e9719943c...
- 06:46 PM Bug #52211 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=529537d03be27e8fd7c33eb3...
- 06:46 PM Bug #52210 (Closed): crash: CrushWrapper::decode(ceph::buffer::v15_2_0::list::iterator_impl<true>&)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=16b0cae292a2e5aa1f4a59ae...
- 06:46 PM Bug #52207 (Duplicate): crash: std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<ch...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e931cd074f4d4c57eafcfbec...
- 06:46 PM Bug #52200 (Duplicate): crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8c8aeec7c24f8af53043cb86...
- 06:46 PM Bug #52199 (Duplicate): crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9a7de784614490d603daf107...
- 06:46 PM Bug #52198 (Duplicate): crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=45aa2b2ae51cb0358e27161c...
- 06:46 PM Bug #52197 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f5b9d7371888d1a4fef1a569...
- 06:46 PM Bug #52195 (Duplicate): crash: /lib64/libpthread.so.0(
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cb65b63c9b9a79458dcd7c3a...
- 06:46 PM Bug #52194 (New): mon crash in rocksdb::Cleanable::~Cleanable()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e2287f3b36d2a97af38026b8...
- 06:46 PM Bug #52193 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9f138c39ff09273c4f297dd4...
- 06:46 PM Bug #52192 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3c44fd4fbb924dbf0de4d271...
- 06:46 PM Bug #52191 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ef2841a5c5ebbac2def49fa6...
- 06:46 PM Bug #52190 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=315779c6f6febaf097208f14...
- 06:45 PM Bug #52189 (Need More Info): crash in AsyncConnection::maybe_start_delay_thread()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f416301151f8db40b0181db8...
- 06:45 PM Bug #52186 (Duplicate): crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fc7f92f74bc7bb40c5e03a81...
- 06:45 PM Bug #52183 (Duplicate): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: ass...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1bfa48148eee52e245e1d06f...
- 06:45 PM Bug #52180 (Duplicate): crash: void pg_missing_set<TrackChanges>::got(const hobject_t&, eversion_...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f582692869a94580abf07e66...
- 06:45 PM Bug #52178 (Duplicate): crash: virtual void AuthMonitor::update_from_paxos(bool*): assert(ret == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7edafdde10f3891aee038aa8...
- 06:45 PM Bug #52176 (Duplicate): crash: std::_Rb_tree<boost::intrusive_ptr<AsyncConnection>, boost::intrus...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c653670067a09d0d1578ea33...
- 06:45 PM Bug #52174 (Triaged): crash: ceph::buffer::v15_2_0::create_aligned_in_mempool(unsigned int, unsig...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2f0088a1a0259603ad88df29...
- 06:45 PM Bug #52173 (Need More Info): crash in ProtocolV2::send_message()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=275bfebdff86cb8d90c56459...
- 06:44 PM Bug #52172 (Triaged): crash: ceph::buffer::v15_2_0::create_aligned_in_mempool(unsigned int, unsig...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0a14d6ccc26b531fb346e8c3...
- 06:44 PM Bug #52171 (Triaged): crash: virtual int RocksDBStore::get(const string&, const string&, ceph::bu...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b8a9cbd0444778ca4112e187...
- 06:44 PM Bug #52170 (Duplicate): crash: const entity_addrvec_t& MonMap::get_addrs(unsigned int) const: ass...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=96b49c839d59492286f04a76...
- 06:44 PM Bug #52169 (New): crash: void SignalHandler::queue_signal_info(int, siginfo_t*, void*): assert(r ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=88dad7f26832c8036351625c...
- 06:44 PM Bug #52168 (Duplicate): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionR...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=63ba607fc458e378045e8666...
- 06:44 PM Bug #52167 (Won't Fix): crash: RDMAConnectedSocketImpl::RDMAConnectedSocketImpl(ceph::common::Cep...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=07ab19a6cb27368fa09313cf...
- 06:44 PM Bug #52166 (Won't Fix): crash: void Device::binding_port(ceph::common::CephContext*, int): assert...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=8c306905582a9d790b752447...
- 06:44 PM Bug #52165 (Rejected): crash: void MonitorDBStore::clear(std::set<std::__cxx11::basic_string<char...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9d5db03ed7874482b2960b7d...
- 06:44 PM Bug #52164 (Duplicate): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionR...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4d2c4ced5cb129282e81fdf9...
- 06:44 PM Bug #52163 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=eb081141be75e0baa7c9fe0a...
- 06:44 PM Bug #52162 (Duplicate): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionR...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cef671ee83553d16f2680c10...
- 06:44 PM Bug #52161 (Rejected): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3ef9be60ac33158aff0fa884...
- 06:44 PM Bug #52160 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=95d3f1ffec846b1fe432b371...
- 06:44 PM Bug #52159 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3a50bb9444331ec2b94a68f8...
- 06:44 PM Bug #52158 (Need More Info): crash: ceph::common::PerfCounters::set(int, unsigned long)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2bce05236e68a5895b61a2b3...
- 06:44 PM Bug #52156 (Duplicate): crash: virtual void OSDMonitor::update_from_paxos(bool*): assert(err == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1a7da12035f04d1f6b48fbb8...
- 06:44 PM Bug #52155 (Need More Info): crash: pthread_rwlock_rdlock() in queue_want_up_thru
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ca3a3acf9c89597282538439...
- 06:44 PM Bug #52154 (Won't Fix): crash: Infiniband::MemoryManager::Chunk::write(char*, unsigned int)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=93435f7389a14a7b2cf7302a...
- 06:44 PM Bug #52153 (Won't Fix): crash: int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionR...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=606f52662a089d81dd216674...
- 06:44 PM Bug #52152 (Duplicate): crash: pthread_getname_np()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1f0f8d147aa33e69a44fe0bb...
- 06:43 PM Bug #52151 (New): crash: rocksdb::Cleanable::~Cleanable()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3c6179c451759d26d2272a88...
- 06:43 PM Bug #52150 (Won't Fix): crash: bool HealthMonitor::check_member_health(): assert(store_size > 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=01246abff7cfc6a0f6751690...
- 06:43 PM Bug #52149 (Duplicate): crash: void OSDShard::register_and_wake_split_child(PG*): assert(p != pg_...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=86176dad44ae51d3e7de7eac...
- 06:43 PM Bug #52148 (Duplicate): crash: pthread_getname_np()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ad849d7b1a9aff5bb2b92f6f...
- 06:43 PM Bug #52147 (Duplicate): crash: rocksdb::InstrumentedMutex::Lock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=934e8b8d53204a2de4929567...
- 06:43 PM Bug #52145 (Duplicate): crash: OSDMapRef OSDService::get_map(epoch_t): assert(ret)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3b9698aa938dbbb1fbbcfcd9...
- 06:43 PM Bug #52143 (Duplicate): crash: void OSD::handle_osd_map(MOSDMap*): assert(p != added_maps_bl.end())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a75157c145de8e54f78f1da6...
- 06:43 PM Bug #52142 (Duplicate): crash: virtual Monitor::~Monitor(): assert(session_map.sessions.empty())
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d2e21097b6d6975cd5ebe9ff...
- 06:43 PM Bug #52141 (Need More Info): crash: void OSD::load_pgs(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2784c173c945de7507d54666...
- 06:43 PM Bug #52140 (Duplicate): crash: OpTracker::~OpTracker(): assert((sharded_in_flight_list.back())->o...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=85dedc566bac7a7f47a8de6e...
- 05:34 PM Bug #52136 (Resolved): Valgrind reports memory "Leak_DefinitelyLost" errors.
- Valgrind reported the memory leak error in the following jobs:
/a/yuriw-2021-08-05_21:11:40-rados-wip-yuri-testing...
- 03:51 PM Bug #39150: mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4d653e9c3ee37041dd2a1cf55...
- 03:51 PM Bug #46266: Monitor crashed in creating pool in CrushTester::test_with_fork()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d6b6f43e0c31315c6493798ed...
- 03:51 PM Bug #44715: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=58697a1c8d484e18346c670af...
- 09:33 AM Bug #52129: LibRadosWatchNotify.AioWatchDelete failed
- This is probably a duplicate of https://tracker.ceph.com/issues/48844.
08/10/2021
- 09:26 PM Bug #52129 (Fix Under Review): LibRadosWatchNotify.AioWatchDelete failed
- ...
- 06:52 PM Bug #52127 (New): stretch mode: disallow users from removing the tiebreaker monitor
- Right now, there are no guards which prevent the user from removing the tiebreaker monitor from the monmap.
This i...
- 06:51 PM Bug #52126 (Resolved): stretch mode: allow users to change the tiebreaker monitor
- Right now, it's impossible to change the tiebreaker monitor in stretch mode. That's an issue if the monitor needs to ...
- 06:49 PM Bug #52125 (New): stretch mode: disallow users from changing election strategy
- Right now, users can change the election strategy when in stretch mode. Uh, whoops?
- 06:45 PM Bug #52124 (Resolved): Invalid read of size 8 in handle_recovery_delete()
- ...
- 01:42 PM Bug #50659: Segmentation fault under Pacific 16.2.1 when using a custom crush location hook
- Based on the progress here it seems like I'm probably the only person to have reported this. I still can't figure out...
- 12:56 PM Bug #50441: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- @Deepika finally I think this issue I mentioned last week regarding the prometheus deployment after a new cluster ins...
- 05:04 AM Bug #50441 (Need More Info): cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- M B wrote:
> Unfortunately this issue does not seem to be resolved, or at least not with Pacific 16.2.5. I installed...
- 05:02 AM Bug #50441: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- @Loic, sure, the PR addressing this issue was backported to pacific, spoke to Dan that octopus backport is not necess...
08/09/2021
- 09:01 PM Bug #50441: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- Deepika, you marked this issue resolved but I can't figure out why. Would you be so kind as to explain? Thanks in ad...
- 09:00 PM Bug #50441 (Pending Backport): cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- 08:59 PM Backport #51549 (New): pacific: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- 08:09 PM Backport #51840 (Resolved): pacific: osd: snaptrim logs to derr at every tick
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42482
m...
- 08:05 PM Backport #51841 (Resolved): octopus: osd: snaptrim logs to derr at every tick
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/42484
m...
- 06:23 PM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
- rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-...
- 06:07 PM Bug #45702: PGLog::read_log_and_missing: ceph_assert(miter == missing.get_items().end() || (miter...
- /a/yuriw-2021-08-06_16:31:19-rados-wip-yuri-master-8.6.21-distro-basic-smithi/6324561 - no logs
- 06:01 PM Bug #39150: mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
- /a/yuriw-2021-08-06_16:31:19-rados-wip-yuri-master-8.6.21-distro-basic-smithi/6324701
- 05:50 PM Bug #36304: FAILED ceph_assert(p != pg_slots.end()) in OSDShard::register_and_wake_split_child(PG*)
- /a/yuriw-2021-08-06_16:31:19-rados-wip-yuri-master-8.6.21-distro-basic-smithi/6324576
- 05:49 PM Bug #45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_ra...
- /a/yuriw-2021-08-06_16:31:19-rados-wip-yuri-master-8.6.21-distro-basic-smithi/6324539
- 09:15 AM Bug #52026: osd: pgs went back into snaptrim state after osd restart
- Here is the log of an osd that restarted and made a few pgs into the snaptrim state.
ceph-post-file: 88808267-4ec6...
08/06/2021
- 10:35 PM Bug #51998: PG autoscaler is wrong when pool is EC with technique=reed_sol_r6_op
- I think we should improve the code, and it seems like you have already figured out the problem. The reason you cannot dis...
- 10:12 PM Bug #52026 (Need More Info): osd: pgs went back into snaptrim state after osd restart
- Is it possible for you share some OSD logs with debug_osd=20 from when this issue happens?
- 10:09 PM Bug #38357: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- showing up more often recently
- 10:05 PM Bug #49393 (Can't reproduce): Segmentation fault in ceph::logging::Log::entry()
- 10:04 PM Bug #46318: mon_recovery: quorum_status times out
- Haven't seen this in recent rados runs.
- 10:02 PM Bug #49727: lazy_omap_stats_test: "ceph osd deep-scrub all" hangs
- Haven't seen this recently.
- 10:00 PM Bug #48468: ceph-osd crash before being up again
- Reducing priority for now.
- 04:31 AM Backport #52078 (Resolved): pacific: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
- https://github.com/ceph/ceph/pull/45319
- 04:31 AM Backport #52077 (Resolved): octopus: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
- https://github.com/ceph/ceph/pull/45320
- 04:26 AM Bug #45423 (Pending Backport): api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
- 01:32 AM Bug #50441: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- Can't reproduce the failure; I just started a mon-and-mgr bootstrapped cluster with no incident:...
08/05/2021
- 10:08 PM Bug #6297 (In Progress): ceph osd tell * will break when FD limit reached, messenger should close...
- This has come up again so I am going to reopen this tracker so I can follow up on the resolution.
08/04/2021
- 03:54 PM Bug #52058: osd/scrub performance issue: multiple redundant "updates-applied" scrub events
- The refactored scrub code (in Pacific and forward) changed the handling
of applied updates notifications in PrimaryL...
- 03:47 PM Bug #52058 (New): osd/scrub performance issue: multiple redundant "updates-applied" scrub events
- OSD logs show far too many unneeded UpdatesApplied events ("updates were applied to the chunk selected for scrubbing").
... - 11:13 AM Backport #51988: pacific: osd: Add mechanism to avoid running osd benchmark on osd init when usin...
- Note that only a subset of the commits from the associated parent tracker PR can be backported to pacific. More speci...
- 11:01 AM Backport #51859 (Rejected): pacific: standalone/osd-rep-recov-eio.sh: TEST_rep_read_unfound faile...
- A backport of the changes associated with the parent tracker was deemed not necessary.
- 10:59 AM Bug #51074 (Resolved): standalone/osd-rep-recov-eio.sh: TEST_rep_read_unfound failed with "Bad da...
- This doesn't need to be backported to pacific. The reason is that the mclock_scheduler will not be made default for p...
- 08:27 AM Bug #48750: ceph config set using osd/host mask not working
- I have this exact problem in 16.2.4 as well. My workaround is to set it in ceph.conf
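A sketch of the mask form that is reported not to take effect (option and host name are illustrative):
    # per-host mask via the central config database
    ceph config set osd/host:node1 osd_max_backfills 2
The workaround is then to put the same option in the [osd] section of ceph.conf on the affected host.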
- 12:54 AM Bug #38357: ClsLock.TestExclusiveEphemeralStealEphemeral failed
- /a/kchai-2021-08-03_15:40:41-rados-wip-kefu-testing-2021-08-03-2117-distro-basic-smithi/6309411
- 12:48 AM Bug #45423: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
- /a//kchai-2021-08-03_15:40:41-rados-wip-kefu-testing-2021-08-03-2117-distro-basic-smithi/6309402
08/03/2021
- 09:26 PM Bug #51942: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<const NotActive*>())
- https://pulpito.ceph.com/nojha-2021-08-03_18:59:59-rados-wip-yuri-testing-2021-07-27-0830-pacific-distro-basic-smithi...
- 07:28 PM Backport #51966 (In Progress): nautilus: set a non-zero default value for osd_client_message_cap
- 07:25 PM Backport #51967 (In Progress): octopus: set a non-zero default value for osd_client_message_cap
- 07:19 PM Backport #51830 (In Progress): pacific: set a non-zero default value for osd_client_message_cap
- 01:20 PM Bug #52026 (Resolved): osd: pgs went back into snaptrim state after osd restart
- We are testing snapshots in CephFS. This is a 4-node cluster with only replicated pools. During our tests we did a ...
- 12:39 PM Bug #44286: Cache tiering shows unfound objects after OSD reboots
- Jan-Philipp Litza wrote:
> We even hit that bug twice today by rebooting two of our cache servers.
>
> What's int...
- 12:09 PM Fix #52025 (Resolved): osd: Add config option to skip running the OSD benchmark on init.
- Update documentation on the steps to manually set the max osd iops capacity.
- 10:54 AM Bug #50441: cephadm bootstrap on arm64 fails to start ceph/ceph-grafana service
- Unfortunately this issue does not seem to be resolved, or at least not with Pacific 16.2.5. I installed a fresh new c...
- 05:46 AM Bug #52012: osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_cast<const NotActiv...
- The fix is to use (A) & (B) above as a hint to the Replica, to discard all stale scrub processes.
In the suggested f... - 05:40 AM Bug #52012 (In Progress): osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_cast<...
- Scenario:
- Primary reserves the replica
- Primary requests a scrub
- Replica in the process of creating the sc...
- 05:34 AM Bug #52012 (Resolved): osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_cast<con...
- A new scrub request arriving to the replica after manual 'set noscrub' then 'unset' asserts as the replica is
still ...
- 02:06 AM Bug #47025: rados/test.sh: api_watch_notify_pp LibRadosWatchNotifyECPP.WatchNotify failed
- Sridhar Seshasayee wrote:
> Observed on master:
> /a/sseshasa-2021-07-14_10:37:09-rados-wip-sseshasa-testing-2021-0...
- 12:23 AM Bug #45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_ra...
- I've modified this test to only run the TestWatchNotify subtests (2) and to generate debug logging. I'll report back ...
08/02/2021
- 06:15 PM Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
- rados/singleton/{all/thrash-eio mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitma...
- 02:29 PM Feature #51984: [RFE] Provide warning when the 'require-osd-release' flag does not match current ...
- Sebastian Wagner wrote:
> Thinking. cephadm sets this automatically after the upgrade finishes in https://github.com...
- 12:58 PM Feature #51984: [RFE] Provide warning when the 'require-osd-release' flag does not match current ...
- Thinking. cephadm sets this automatically after the upgrade finishes in https://github.com/ceph/ceph/blob/c50d8ebdefc...
- 12:55 AM Feature #51984 (Resolved): [RFE] Provide warning when the 'require-osd-release' flag does not mat...
- For more details please check:
https://bugzilla.redhat.com/show_bug.cgi?id=1988773
- 12:54 PM Bug #50657: smart query on monitors
- I also see this on mon/mgr hosts of a ceph octopus cluster:...
- 12:18 PM Bug #51998 (New): PG autoscaler is wrong when pool is EC with technique=reed_sol_r6_op
- Dear maintainer,
The PG autoscaler is wrong when trying to calculate the RATE for a pool in Erasure Coding using t...
- 08:45 AM Fix #50574: qa/standalone: Modify/re-write failing standalone tests with mclock scheduler
- Associating parent tracker https://tracker.ceph.com/issues/51464 to this.
- 08:40 AM Fix #50574 (In Progress): qa/standalone: Modify/re-write failing standalone tests with mclock sch...
- The PR https://github.com/ceph/ceph/pull/42133 fixes a majority of the standalone tests to work with mclock. However,...
- 08:15 AM Backport #51988 (Resolved): pacific: osd: Add mechanism to avoid running osd benchmark on osd ini...
- https://github.com/ceph/ceph/pull/41731
- 08:12 AM Fix #51464 (Pending Backport): osd: Add mechanism to avoid running osd benchmark on osd init when...