Activity
From 06/22/2022 to 07/21/2022
07/21/2022
- 10:55 PM Backport #56669 (Resolved): pacific: Deferred writes might cause "rocksdb: Corruption: Bad table ...
- https://github.com/ceph/ceph/pull/47296
- 10:55 PM Backport #56668 (Resolved): quincy: Deferred writes might cause "rocksdb: Corruption: Bad table m...
- https://github.com/ceph/ceph/pull/47297
- 10:54 PM Bug #54547 (Pending Backport): Deferred writes might cause "rocksdb: Corruption: Bad table magic ...
- 10:48 PM Bug #54547: Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
- https://github.com/ceph/ceph/pull/46890 merged
07/20/2022
- 06:31 PM Bug #56503: Deleting large (~850gb) objects causes OSD to crash
- Do you have logs with @debug_bluestore=20@ maybe?
Mark, can this be related to the RocksDB tombstone issues?
- 04:58 PM Bug #56640: RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1
- There are two test cases that would be executed to find more details on what is going on with this feature and how it...
07/19/2022
- 09:07 PM Bug #56640: RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1
- Vikhyat Umrao wrote:
> This has been already verified that by default COSbench uses random not zero.
>
> COSBench...
- 09:06 PM Bug #56640: RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1
- The `bluestore_zero_block_detection` was set to false in 17.2.1. For more details please check:
https://tracker.ce...
- 08:55 PM Bug #56640: RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1
- This has been already verified that by default COSbench uses random not zero.
COSBench document - page 50 - https:...
- 08:53 PM Bug #56640 (New): RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17...
- RGW S3 *small object* workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1 due to bluesto...
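As background for the option referenced above, a minimal sketch of how the setting could be checked on a quincy cluster (osd.0 is a placeholder daemon):
    # default recorded in the monitors' configuration database
    ceph config get osd bluestore_zero_block_detection
    # value actually in effect on a running OSD (admin socket)
    ceph daemon osd.0 config get bluestore_zero_block_detection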
- 04:33 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Well, this doesn't look like a bluestore issue anymore. Nor can I see any clues it's the same as the original o...
- 12:53 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Igor Fedotov wrote:
> Hi!
>
> May I have some clarifications then.
> I can see multiple backtraces for .107 like...
- 10:33 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Hi!
May I have some clarifications then.
I can see multiple backtraces for .107 like the last one. All are happen...
- 08:29 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Aurélien Le Clainche wrote:
> Igor Fedotov wrote:
> > Aurélien Le Clainche wrote:
> > > Igor Fedotov wrote:
> > >...
07/18/2022
- 02:24 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Igor Fedotov wrote:
> Aurélien Le Clainche wrote:
> > Igor Fedotov wrote:
> > > Aurélien Le Clainche wrote:
> > >...
- 01:58 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Aurélien Le Clainche wrote:
> Igor Fedotov wrote:
> > Aurélien Le Clainche wrote:
> > > do you have a procedure or...
- 01:01 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Igor Fedotov wrote:
> Aurélien Le Clainche wrote:
> > do you have a procedure or example to do the fsck?
>
> Not...
- 09:56 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Aurélien Le Clainche wrote:
> do you have a procedure or example to do the fsck?
Not sure how to do that with Roo...
- 09:17 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Sébastien Bernard wrote:
> Regarding this problem, /var filesystem is an xfs filesystem.
> osd are setup by rook.
...
- 09:10 AM Backport #56600 (Rejected): octopus: octopus : bluestore_cache_other pool memory leak ?
- 09:10 AM Backport #56599 (Resolved): pacific: octopus : bluestore_cache_other pool memory leak ?
- 09:10 AM Backport #56598 (Resolved): quincy: octopus : bluestore_cache_other pool memory leak ?
- 09:07 AM Bug #56424 (Pending Backport): bluestore_cache_other mempool entry leak
- 09:05 AM Bug #56424: bluestore_cache_other mempool entry leak
- alexandre derumier wrote:
> Hi,
>
> I think it's fixing the problem.
>
> I'll keep it running, and I'll send n...
07/13/2022
- 03:25 PM Bug #56424 (Fix Under Review): bluestore_cache_other mempool entry leak
- 03:24 PM Bug #54288 (Rejected): rocksdb: Corruption: missing start of fragmented record
- Reverted by https://github.com/ceph/ceph/pull/47053
07/12/2022
- 08:22 PM Bug #55636 (Resolved): octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph...
- Efforts to solve the BlueFS bug are ongoing in bug #56533.
- 04:59 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- I think I have a fix, but unfortunately the bug (and hence the fix) is in BlueFS.
BlueFS improperly handles the followi...
- 12:49 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- For the sake of completeness resending the log snippet relevant to the issue with the final false positive check:
20...
- 12:44 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- So tuning that RocksDB setting results in a bit different WAL file use pattern which in turn apparently reveals a bug...
- 12:42 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- The issue is caused by a false positive allocations check. It detects a reference to an "already allocated" chunk. An...
- 12:35 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- I deeply believe that this is an internal BlueFS problem. Insufficient locking? Data collision?
- 05:37 PM Bug #56533 (Fix Under Review): BlueFS might put an orphan op_update record in the log
- 05:26 PM Bug #56533 (In Progress): BlueFS might put an orphan op_update record in the log
- 05:25 PM Bug #56533 (Resolved): BlueFS might put an orphan op_update record in the log
- This was originally revealed in Octopus when recycle_log_file_num is set to 0. See https://tracker.ceph.com/issu...
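For context, recycle_log_file_num is a RocksDB option that BlueStore passes through its bluestore_rocksdb_options string; a hedged sketch of how to inspect it on a running OSD (osd.0 is a placeholder):
    ceph daemon osd.0 config get bluestore_rocksdb_options
    # the reproduction above sets recycle_log_file_num=0 inside that comma-separated
    # string; a change there typically only takes effect after an OSD restart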
- 12:21 PM Bug #56488: BlueStore doesn't defer small writes for pre-pacific hdd osds
- There are two configurables to consider for deferred writes logic:
- bluestore_prefer_deferred_size "deferred_size"
...
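A minimal sketch for inspecting the deferred-write thresholds mentioned above on a live OSD (osd.0 is a placeholder; the _hdd/_ssd variants carry the per-media defaults):
    ceph daemon osd.0 config show | grep prefer_deferred
    # or query one option explicitly
    ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd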
07/11/2022
- 09:35 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- This one passed again on ubuntu_latest. Strange that it seems to correlate:
/a/lflores-2022-07-11_16:31:11-rados:s...
- 07:23 PM Bug #55636 (Fix Under Review): octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAI...
- I've opened a revert PR since we need to finalize the Octopus point release. @Igor let me know if you have a fix you ...
- 04:11 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- Caught another instance here:
/a/yuriw-2022-07-08_20:17:39-rados-wip-yuri7-testing-2022-07-08-1007-octopus-distro-de...
- 03:05 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- Hey Igor, any updates?
07/08/2022
- 09:43 AM Bug #56503: Deleting large (~850gb) objects causes OSD to crash
- And here is dump_ops_in_flight from one of the OSDs. This OSD has block.db on SSD by the way. As you can see this sin...
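For anyone reproducing this, a hedged sketch of the diagnostics referenced in this thread, run against an affected OSD (osd.N is a placeholder id):
    # raise BlueStore logging as requested above, then revert - the log grows quickly
    ceph tell osd.N config set debug_bluestore 20/20
    ceph tell osd.N config set debug_bluestore 1/5
    # snapshot of the operations currently stuck in the OSD
    ceph daemon osd.N dump_ops_in_flight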
- 09:22 AM Bug #56503 (New): Deleting large (~850gb) objects causes OSD to crash
- After deleting a large S3 object - around 850GB in size, OSDs in our cluster started becoming laggy, unresponsive and e...
07/07/2022
- 08:31 AM Bug #56488 (Resolved): BlueStore doesn't defer small writes for pre-pacific hdd osds
- We're upgrading clusters to v16.2.9 from v15.2.16, and our simple "rados bench -p test 10 write -b 4096 -t 1" latency...
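A sketch of that latency check, assuming a throwaway pool named test may be created and removed (osd.0 is a placeholder):
    ceph osd pool create test
    rados bench -p test 10 write -b 4096 -t 1
    # see whether the 4K writes were actually deferred on an OSD
    ceph daemon osd.0 perf dump | grep -i deferred
    ceph osd pool delete test test --yes-i-really-really-mean-it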
- 02:24 AM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
- Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > So this looks like the client is not correctly u...
07/06/2022
- 11:12 PM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
- Xiubo Li wrote:
> Greg Farnum wrote:
> > So this looks like the client is not correctly updating its own df reporti...
- 04:55 AM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
- Tried:
1, to umount the kclients or fuse clients
2, restart all the MDS daemons
3, restart all the OSD daemons
...
- 02:56 AM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
- Greg Farnum wrote:
> So this looks like the client is not correctly updating its own df reporting, but that the data...
07/05/2022
- 03:16 PM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
- So this looks like the client is not correctly updating its own df reporting, but that the data is actually getting c...
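A hedged sketch for comparing the two views of space usage (assuming a CephFS mount at /mnt/cephfs, which is a placeholder path):
    ceph df detail        # cluster-side raw and per-pool usage
    ceph fs status        # per-filesystem data/metadata pool usage
    df -h /mnt/cephfs     # what the client itself reports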
- 10:02 AM Bug #56467: nautilus: osd crashes with _do_alloc_write failed with (28) No space left on device
- relates to https://tracker.ceph.com/issues/42913
- 09:05 AM Bug #56467 (New): nautilus: osd crashes with _do_alloc_write failed with (28) No space left on device
- The OSD crashes and can't be brought back up when BlueStore runs out of space in the Nautilus (N) release. Here's the stack trace in the log:
...
07/04/2022
- 08:08 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Regarding this problem, /var filesystem is an xfs filesystem.
osd are setup by rook.
Rebooting the machine is enoug...
- 03:44 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- do you have a procedure or example to do the fsck?
- 11:03 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
- Could you please run fsck against this OSD and share the results
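A minimal sketch of an offline fsck, assuming a bare-metal style OSD data path (osd id 3 and the path are placeholders); with Rook the OSD deployment would have to be scaled down and the command run from a pod that can reach the OSD's data:
    systemctl stop ceph-osd@3
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-3
    # the tool also accepts a --deep option for a slower check that reads object data
    systemctl start ceph-osd@3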
- 09:50 AM Bug #56456 (New): rook-ceph-v1.9.5: ceph-osd crash randomly
- Hi,
after a migration to rook-ceph v1.9.5, ceph osd crash : ...
- 12:36 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- Laura Flores wrote:
> Hey @Igor, mind taking a look?
looking
- 06:35 AM Bug #56424: bluestore_cache_other mempool entry leak
- Hi,
I think it's fixing the problem.
Looking at stats, I still see small increase of cache other over time,
bu...
- 06:28 AM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
- There is only one active MDS:...
- 03:18 AM Bug #56450 (New): Rados doesn't release the disk spaces after cephfs releases it
- Before running the Filesystem benchmark test, from OS we can see that the *_/_* directory had *_81GB_*:...
07/01/2022
- 07:34 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- Hey @Igor, mind taking a look?
- 07:27 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- Some recent observations:
On https://trello.com/c/w6qCkODQ/1567-wip-yuri-testing-2022-06-24-0817-octopus, I notice...
- 04:24 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
- 5 occurrences on http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2022-06-24-0817-octopus.
- 07:24 PM Bug #56424 (In Progress): bluestore_cache_other mempool entry leak
- 03:31 PM Bug #56424: bluestore_cache_other mempool entry leak
- Igor Fedotov wrote:
> This one should work properly. Please try again
ok, no more crazy values, thanks !
So I'...
- 01:44 PM Bug #56424: bluestore_cache_other mempool entry leak
- This one should work properly. Please try again
- 01:19 PM Bug #56424: bluestore_cache_other mempool entry leak
- my bad... fixing...
- 11:59 AM Bug #56424: bluestore_cache_other mempool entry leak
- ouch, seem buggy.
I have crazy values
"bluestore_cache_other": {
"items": 154949... - 11:47 AM Bug #56424: bluestore_cache_other mempool entry leak
- Igor Fedotov wrote:
> alexandre derumier wrote:
> > Igor Fedotov wrote:
> > > @Alexandre, I'd recommend to try the...
- 11:01 AM Bug #56424: bluestore_cache_other mempool entry leak
- alexandre derumier wrote:
> Igor Fedotov wrote:
> > @Alexandre, I'd recommend to try the patch using a single OSD o...
- 08:00 AM Bug #56424: bluestore_cache_other mempool entry leak
- Igor Fedotov wrote:
> @Alexandre, I'd recommend to try the patch using a single OSD only. Just to avoid any unexpect...
- 07:58 AM Bug #56424: bluestore_cache_other mempool entry leak
- >I'm still not 100% sure the issue I found is the only bug though.
thanks for the explain. (so, if I understand, m...
06/30/2022
- 10:46 PM Bug #56424: bluestore_cache_other mempool entry leak
- @Alexandre, I'd recommend to try the patch using a single OSD only. Just to avoid any unexpected OSD misbehavior - th...
- 07:40 PM Bug #56424: bluestore_cache_other mempool entry leak
- alexandre derumier wrote:
> BTW, could you give me a small explain of what is the current problem ?
Well the prob...
- 02:48 PM Bug #56424: bluestore_cache_other mempool entry leak
- BTW, could you give me a small explain of what is the current problem ?
I have 4 others cluster with same config, ...
- 02:38 PM Bug #56424: bluestore_cache_other mempool entry leak
- Igor Fedotov wrote:
> Highly likely this fix https://github.com/ceph/ceph/pull/46911 is relevant. Not sure it fixes ...
- 01:51 PM Bug #56424: bluestore_cache_other mempool entry leak
- Highly likely this fix https://github.com/ceph/ceph/pull/46911 is relevant. Not sure it fixes everything though..
I...
- 12:06 PM Bug #56424: bluestore_cache_other mempool entry leak
- full log (10min) is available here:
https://mutulin1.odiso.net/ceph-osd.5.bluestore20debug.log.gz
- 11:57 AM Bug #56424: bluestore_cache_other mempool entry leak
- here the grep on "pruned tailed" with debug_bluestore 20/20
- 11:33 AM Bug #56424: bluestore_cache_other mempool entry leak
- Alexandre,
could you please set debug_bluestore to 20 for 5-10 mins (be careful as the log will grow drastically) a...
- 06:58 AM Bug #56424: bluestore_cache_other mempool entry leak
- here a 1 minute log with debug 20/20 of osd.5
(no scrub, no snap trim during this time)
https://mutulin1.odiso....
- 06:31 AM Bug #56424: bluestore_cache_other mempool entry leak
- sorry,
wrong screenshot of last 24h is last post.
here the correct graphs
- 06:22 AM Bug #56424: bluestore_cache_other mempool entry leak
- some detailled of other_cache stats for osd.5 over last 24h
items:
80017467 -> 80716079
size:
2781599188 by...
- 05:43 AM Bug #56424 (Resolved): bluestore_cache_other mempool entry leak
- Hi,
I have an octopus cluster (15.2.16),
(it was first installed as octopus, no upgrade from previous ceph versi...
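The per-pool mempool counters quoted throughout this thread come from the OSD admin socket; a minimal sketch (osd.5 follows the reporter's example):
    ceph daemon osd.5 dump_mempools | grep -A 2 bluestore_cache_other
    # prints the items/bytes counters for that pool; sampling them over time
    # shows the steady growth described above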
06/28/2022
- 06:57 AM Bug #54547: Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
- https://github.com/ceph/ceph/pull/46856 is a reliable reproducer for deferred writes corrupting RocksDB.
- 12:22 AM Bug #55328: OSD crashed due to checksum error
- Hi Igor
I am continuously struggling with this issue, but unfortunately, I still cannot provide you with logs.
Afte...
06/24/2022
- 09:24 PM Backport #55360 (Resolved): octopus: os/bluestore: Always update the cursor position in AVL near-...
- 09:20 PM Bug #54288 (Resolved): rocksdb: Corruption: missing start of fragmented record
- 03:14 AM Bug #56383 (New): crash: ceph::buffer::ptr::iterator_impl<true>::operator
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9cc1fe3e027f3cf98c0c3316...
- 03:14 AM Bug #56382 (Resolved): ONode ref counting is broken
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6bb2da74940c132cf3884cb9...
- 03:14 AM Bug #56379 (New): crash: rocksdb::UncompressBlockContentsForCompressionType(rocksdb::Uncompressio...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b1f8d003169b7ab28ccfa0d9...
- 03:14 AM Bug #56378 (New): crash: LZ4_decompress_safe()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fa6e5a4dbeb6ec5834f6028e...
- 03:14 AM Bug #56376 (New): crash: rocksdb::Block::NewDataIterator(rocksdb::Comparator const*, unsigned lon...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fad2b9ddbbee699a1a975660...
- 03:14 AM Bug #56375 (New): crash: rocksdb::DataBlockIter::NextImpl()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9dcd6fd26edca2c46bca7c64...
- 03:13 AM Bug #56372 (New): crash: pthread_cond_wait()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=99f1ebfcce5a35ce782856e7...
- 03:13 AM Bug #56370 (New): crash: int64_t BlueFS::_read(BlueFS::FileReader*, uint64_t, size_t, ceph::buffe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7275a73d55a4b1a27239138d...
- 03:13 AM Bug #56369 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=114851fd57b3f97cbeead539...
- 03:13 AM Bug #56368 (New): crash: BlueStore::ExtentMap::fault_range(KeyValueDB*, uint32_t, uint32_t)::<lam...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=688cba224e4db38476402be3...
- 03:13 AM Bug #56367 (New): crash: BlueStore::Onode::put()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=978f0095a5fd046ea12aa38a...
- 03:13 AM Bug #56366 (New): crash: ceph::buffer::ptr::release()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=85100d5ac144a3d242c0cae3...
- 03:13 AM Bug #56365 (New): crash: ceph::buffer::ptr::release()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=076e4ef53dfeb8b3d1ba4adb...
- 03:13 AM Bug #56364 (New): crash: ceph::buffer::ptr::release()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=55b4cd081b104e2bf1d3b1a6...
- 03:13 AM Bug #56363 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3a15b13e448484b313ad69ca...
- 03:13 AM Bug #56362 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=feeb409db59ea0952734fd06...
- 03:13 AM Bug #56361 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3d4731acdf48659d882151cf...
- 03:13 AM Bug #56360 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=625b18fef7fb68f95b516951...
- 03:13 AM Bug #56359 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=87fefa3c056149d446126981...
- 03:13 AM Bug #56356 (New): crash: BlueFS::get_free(unsigned int)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9b6874a251f5c6a6e647d44...
- 03:13 AM Bug #56354 (New): crash: virtual int BlueFS::SocketHook::call(std::string_view, const cmdmap_t&, ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d1ef289ed7197a0d72c0d196...
- 03:13 AM Bug #56353 (New): crash: void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectSto...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5996d0bc93a2a4b3266ea728...
- 03:12 AM Bug #56346 (New): crash: BlueStore::_txc_create(BlueStore::Collection*, BlueStore::OpSequencer*, ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7911a4e6e2f7947d0b5be910...
- 03:12 AM Bug #56335 (New): crash: tcmalloc::DLL_Remove(tcmalloc::Span*)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ade6baee319f62a1642e761c...
- 03:12 AM Bug #56334 (New): crash: boost::dynamic_bitset<unsigned long, std::allocator<unsigned long> >::re...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c4563aa6978e87f351c2f3a9...
- 03:12 AM Bug #56328 (New): crash: pthread_cond_wait()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5dc541840158b19c1fff06f5...
- 03:12 AM Bug #56327 (New): crash: void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectSto...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=87922c33f416e53fbe796b63...
- 03:11 AM Bug #56315 (New): crash: bool rocksdb::InlineSkipList<rocksdb::MemTableRep::KeyComparator const&>...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5ff1348d1381602539968af8...
- 03:11 AM Bug #56314 (New): crash: void BlueStore::_kv_sync_thread(): assert(r == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fbe5c6c2066ff394130b0641...
- 03:11 AM Bug #56311 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=95184972789a6e8f95d5278c...
- 03:11 AM Bug #56310 (New): crash: pread64()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1cf418cda3fbe778c47ea7c3...
- 03:11 AM Bug #56309 (New): crash: pread64()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4800be7a105a866561da5352...
- 03:11 AM Bug #56308 (New): crash: int64_t BlueFS::_read(BlueFS::FileReader*, uint64_t, size_t, ceph::buffe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0d8c41707fbb4c5b28af86c0...
- 03:11 AM Bug #56302 (New): crash: HybridAllocator::init_rm_free(uint64_t, uint64_t)::<lambda(uint64_t, uin...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=242c28b58de91f4762515497...
- 03:10 AM Bug #56295 (New): crash: void BlueStore::_close_db_leave_bluefs(): assert(db)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=271e6ade36c8afbde557589a...
- 03:10 AM Bug #56294 (New): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStore::Col...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a17572825091f2b2501eac5f...
- 03:10 AM Bug #56293 (New): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStore::Col...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f85a0e7e6dc05a28e33ea281...
- 03:10 AM Bug #56286 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cdd746bf43d1e623188415a3...
- 03:10 AM Bug #56284 (New): crash: virtual int RocksDBStore::get(const string&, const string&, ceph::buffer...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a9ff64251eae2feb56901bc4...
- 03:10 AM Bug #56283 (New): crash: pread64()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7dc594425c2f10175c627120...
- 03:10 AM Bug #56280 (New): crash: int BlueStore::expand_devices(std::ostream&): assert(r == 0)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cd09dfcce146a570391f318b...
- 03:09 AM Bug #56273 (New): crash: int BlueFS::_replay(bool, bool): assert(next_seq > log_seq)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=99c400380101b911a28dc045...
- 03:09 AM Bug #56272 (New): crash: int BlueFS::_replay(bool, bool): assert(delta.offset == fnode.allocated)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=618731d06fb205a9b28a444c...
- 03:09 AM Bug #55529: ceph-17.2.0/src/os/bluestore/BlueStore.cc: 14136: FAILED ceph_assert(!c)
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2add8faed9d1c1a490c4dc7b5...
- 03:09 AM Bug #56264 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4cb29038f41093acb974a51e...
- 03:09 AM Bug #56262 (New): crash: BlueStore::_txc_create(BlueStore::Collection*, BlueStore::OpSequencer*, ...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=97298aa00eec3260da644360...
- 03:09 AM Bug #56260 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=281102d7406ba46c2571bc72...
- 03:08 AM Bug #56237 (New): crash: int64_t BlueFS::_read(BlueFS::FileReader*, uint64_t, size_t, ceph::buffe...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b9caf54d86ce91d07f3a706d...
- 03:07 AM Bug #56235 (New): crash: bool SimpleBitmap::set(uint64_t, uint64_t): assert(offset + length < m_n...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=402b805f7d0461c2c6445d4d...
- 03:07 AM Bug #56229 (New): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ceph::...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1f15cfefd8f91b31dca036b0...
- 03:07 AM Bug #56226 (New): crash: bool SimpleBitmap::set(uint64_t, uint64_t): assert(offset + length < m_n...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b7aea0e8840d038322c58e0f...
- 03:06 AM Bug #56212 (New): crash: HybridAllocator::init_rm_free(uint64_t, uint64_t)::<lambda(uint64_t, uin...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=10d855f4294a21b093b6c5b1...
- 03:06 AM Bug #56211 (New): crash: HybridAllocator::init_rm_free(uint64_t, uint64_t)::<lambda(uint64_t, uin...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9256e017dda06c76516ac0d...
- 03:06 AM Bug #56210 (Resolved): crash: int BlueFS::_replay(bool, bool): assert(r == q->second->file_map.en...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=57467b994d37e485714a73e4...
- 03:06 AM Bug #56208 (New): crash: HybridAllocator::init_rm_free(uint64_t, uint64_t)::<lambda(uint64_t, uin...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3977875de1af0d0b727ff0aa...
- 03:06 AM Bug #56202 (New): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ceph::...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e638c7594631a27a2396170c...
- 03:05 AM Bug #56200 (Duplicate): crash: ceph::buffer::ptr::release()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=19d587585ff240221f4672fc...
- 03:05 AM Bug #56199 (New): crash: void KernelDevice::_aio_thread(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=dd460b41fa5ae02f079ac8f3...
- 03:05 AM Bug #56197 (New): crash: pthread_rwlock_rdlock()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=69c6cc8ff03f338329d5bcfc...
- 03:05 AM Bug #56193 (New): crash: virtual int KernelDevice::flush(): abort
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0aa4df9f7d59f81569fdb5d7...
- 03:04 AM Bug #56190 (New): crash: BlueStore::collection_list(boost::intrusive_ptr<ObjectStore::CollectionI...
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=87cffa8d06c20f4084c4ae90...
- 03:04 AM Bug #56189 (New): crash: pthread_cond_wait()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a1b56d10ffb10d94d3cc6f59...
- 03:04 AM Bug #56187 (New): crash: BlueFS::_open_super()
http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0fa477f7ac90ad6273391f21...
06/23/2022
- 09:33 PM Bug #54288: rocksdb: Corruption: missing start of fragmented record
- https://github.com/ceph/ceph/pull/45040 merged
- 09:27 PM Backport #55360: octopus: os/bluestore: Always update the cursor position in AVL near-fit search
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/46687
merged
06/22/2022
- 04:39 PM Bug #56174: rook-ceph-osd crash randomly
- Pacific backport:
https://tracker.ceph.com/issues/53608
- 04:38 PM Bug #56174 (Duplicate): rook-ceph-osd crash randomly
- This has been fixed in 16.2.8
- 04:05 PM Bug #56174 (Duplicate): rook-ceph-osd crash randomly
backtrace: ...