Activity

From 06/22/2022 to 07/21/2022

07/21/2022

10:55 PM Backport #56669 (Resolved): pacific: Deferred writes might cause "rocksdb: Corruption: Bad table ...
https://github.com/ceph/ceph/pull/47296 Backport Bot
10:55 PM Backport #56668 (Resolved): quincy: Deferred writes might cause "rocksdb: Corruption: Bad table m...
https://github.com/ceph/ceph/pull/47297 Backport Bot
10:54 PM Bug #54547 (Pending Backport): Deferred writes might cause "rocksdb: Corruption: Bad table magic ...
Neha Ojha
10:48 PM Bug #54547: Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
https://github.com/ceph/ceph/pull/46890 merged Yuri Weinstein

07/20/2022

06:31 PM Bug #56503: Deleting large (~850gb) objects causes OSD to crash
Do you have logs with @debug_bluestore=20@ maybe?
Mark, could this be related to RocksDB's tombstone issues?
Radoslaw Zarzynski
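For reference, a minimal sketch of how such BlueStore logging can be raised on a running OSD (the osd id and log levels here are placeholders; assuming the standard ceph CLI):

```shell
# Raise BlueStore debug logging to 20/20 on osd.0.
# Warning: the log grows very quickly at this level.
ceph tell osd.0 config set debug_bluestore 20/20

# ...reproduce the issue, then restore the default level:
ceph tell osd.0 config set debug_bluestore 1/5
```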
04:58 PM Bug #56640: RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1
There are two test cases that would be executed to find more details on what is going on with this feature and how it... Vikhyat Umrao

07/19/2022

09:07 PM Bug #56640: RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1
Vikhyat Umrao wrote:
> This has been already verified that by default COSbench uses random not zero.
>
> COSBench...
Vikhyat Umrao
09:06 PM Bug #56640: RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1
The `bluestore_zero_block_detection` was set to false in 17.2.1. For more details please check:
https://tracker.ce...
Vikhyat Umrao
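A possible way to inspect this setting on a cluster (the osd target is a placeholder; re-enabling is shown only as an illustration, e.g. to compare against the 17.2.0 behavior):

```shell
# Check the effective value (the default was changed to false in 17.2.1):
ceph config get osd bluestore_zero_block_detection

# Hypothetical: re-enable it cluster-wide for comparison testing.
ceph config set osd bluestore_zero_block_detection true
```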
08:55 PM Bug #56640: RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1
It has already been verified that, by default, COSBench uses random data, not zeros.
COSBench document - page 50 - https:...
Vikhyat Umrao
08:53 PM Bug #56640 (New): RGW S3 workload has a huge performance boost in quincy 17.2.0 as compared to 17...
RGW S3 *small object* workload has a huge performance boost in quincy 17.2.0 as compared to 17.2.1 due to bluesto... Vikhyat Umrao
04:33 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Well, this doesn't look like a BlueStore issue anymore. Nor can I see any clues that it's the same as the original o... Igor Fedotov
12:53 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Igor Fedotov wrote:
> Hi!
>
> May I have some clarifications then.
> I can see multiple backtraces for .107 like...
Aurélien Le Clainche
10:33 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Hi!
May I have some clarifications, then?
I can see multiple backtraces for .107 like the last one. All are happen...
Igor Fedotov
08:29 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Aurélien Le Clainche wrote:
> Igor Fedotov wrote:
> > Aurélien Le Clainche wrote:
> > > Igor Fedotov wrote:
> > >...
Aurélien Le Clainche

07/18/2022

02:24 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Igor Fedotov wrote:
> Aurélien Le Clainche wrote:
> > Igor Fedotov wrote:
> > > Aurélien Le Clainche wrote:
> > >...
Aurélien Le Clainche
01:58 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Aurélien Le Clainche wrote:
> Igor Fedotov wrote:
> > Aurélien Le Clainche wrote:
> > > do you have a procedure or...
Igor Fedotov
01:01 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Igor Fedotov wrote:
> Aurélien Le Clainche wrote:
> > do you have a procedure or example to do the fsck?
>
> Not...
Aurélien Le Clainche
09:56 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Aurélien Le Clainche wrote:
> do you have a procedure or example to do the fsck?
Not sure how to do that with Roo...
Igor Fedotov
09:17 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Sébastien Bernard wrote:
> Regarding this problem, /var filesystem is an xfs filesystem.
> osd are setup by rook.
...
Aurélien Le Clainche
09:10 AM Backport #56600 (Rejected): octopus: octopus : bluestore_cache_other pool memory leak ?
Backport Bot
09:10 AM Backport #56599 (Resolved): pacific: octopus : bluestore_cache_other pool memory leak ?
Backport Bot
09:10 AM Backport #56598 (Resolved): quincy: octopus : bluestore_cache_other pool memory leak ?
Backport Bot
09:07 AM Bug #56424 (Pending Backport): bluestore_cache_other mempool entry leak
Igor Fedotov
09:05 AM Bug #56424: bluestore_cache_other mempool entry leak
alexandre derumier wrote:
> Hi,
>
> I think it's fixing the problem.
>
> I'll keep it running, and I'll send n...
Igor Fedotov

07/13/2022

03:25 PM Bug #56424 (Fix Under Review): bluestore_cache_other mempool entry leak
Igor Fedotov
03:24 PM Bug #54288 (Rejected): rocksdb: Corruption: missing start of fragmented record
Reverted by https://github.com/ceph/ceph/pull/47053 Igor Fedotov

07/12/2022

08:22 PM Bug #55636 (Resolved): octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph...
Efforts to solve the BlueFS bug are ongoing in bug #56533. Laura Flores
04:59 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
I think I have a fix but unfortunately it's a bug (and hence a fix) in BlueFS.
BlueFS improperly handles the followi...
Igor Fedotov
12:49 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
For the sake of completeness resending the log snippet relevant to the issue with the final false positive check:
20...
Igor Fedotov
12:44 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
So tuning that RocksDB setting results in a bit different WAL file use pattern which in turn apparently reveals a bug... Igor Fedotov
12:42 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
The issue is caused by a false positive allocations check. It detects a reference to an "already allocated" chunk. An... Igor Fedotov
12:35 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
I deeply believe that this is an internal BlueFS problem. Insufficient locking? Data collision? Adam Kupczyk
05:37 PM Bug #56533 (Fix Under Review): Bluefs might put an orphan op_update record in the log
Igor Fedotov
05:26 PM Bug #56533 (In Progress): Bluefs might put an orphan op_update record in the log
Igor Fedotov
05:25 PM Bug #56533 (Resolved): Bluefs might put an orphan op_update record in the log
This has been originally revealed in Octopus when recycle_log_file_num is set to 0. See https://tracker.ceph.com/issu... Igor Fedotov
12:21 PM Bug #56488: BlueStore doesn't defer small writes for pre-pacific hdd osds
There are two configurables to consider for deferred writes logic:
- bluestore_prefer_deferred_size "deferred_size"
...
Adam Kupczyk
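One hedged way to check what deferred-write threshold a given OSD is actually running with (osd.0 is a placeholder; only the configurable named in the comment above is shown):

```shell
# Show the effective bluestore_prefer_deferred_size values (including
# the per-media _hdd/_ssd variants) for one OSD:
ceph config show-with-defaults osd.0 | grep bluestore_prefer_deferred_size
```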

07/11/2022

09:35 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
This one passed again on ubuntu_latest. Strange that it seems to correlate:
/a/lflores-2022-07-11_16:31:11-rados:s...
Laura Flores
07:23 PM Bug #55636 (Fix Under Review): octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAI...
I've opened a revert PR since we need to finalize the Octopus point release. @Igor let me know if you have a fix you ... Laura Flores
04:11 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
Caught another instance here:
/a/yuriw-2022-07-08_20:17:39-rados-wip-yuri7-testing-2022-07-08-1007-octopus-distro-de...
Laura Flores
03:05 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
Hey Igor, any updates? Laura Flores

07/08/2022

09:43 AM Bug #56503: Deleting large (~850gb) objects causes OSD to crash
And here is dump_ops_in_flight from one of the OSDs. This OSD has block.db on SSD by the way. As you can see this sin... Marcin Gibula
09:22 AM Bug #56503 (New): Deleting large (~850gb) objects causes OSD to crash
After deleting a large S3 object - around 850GB in size - OSDs in our cluster started becoming laggy, unresponsive, and e... Marcin Gibula

07/07/2022

08:31 AM Bug #56488 (Resolved): BlueStore doesn't defer small writes for pre-pacific hdd osds
We're upgrading clusters to v16.2.9 from v15.2.16, and our simple "rados bench -p test 10 write -b 4096 -t 1" latency... Dan van der Ster
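The latency check quoted in the report above can be reproduced with something like the following (pool name "test" as in the comment; pool creation is an assumption about the setup):

```shell
# Create a throwaway pool, then run a single-threaded 4 KiB write
# benchmark for 10 seconds and watch the reported average latency:
ceph osd pool create test
rados bench -p test 10 write -b 4096 -t 1
```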
02:24 AM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
Greg Farnum wrote:
> Xiubo Li wrote:
> > Greg Farnum wrote:
> > > So this looks like the client is not correctly u...
Xiubo Li

07/06/2022

11:12 PM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
Xiubo Li wrote:
> Greg Farnum wrote:
> > So this looks like the client is not correctly updating its own df reporti...
Greg Farnum
04:55 AM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
Tried:
1. Unmounting the kclients or fuse clients
2. Restarting all the MDS daemons
3. Restarting all the OSD daemons
...
Xiubo Li
02:56 AM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
Greg Farnum wrote:
> So this looks like the client is not correctly updating its own df reporting, but that the data...
Xiubo Li

07/05/2022

03:16 PM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
So this looks like the client is not correctly updating its own df reporting, but that the data is actually getting c... Greg Farnum
10:02 AM Bug #56467: nautilus: osd crashs with _do_alloc_write failed with (28) No space left on device
relates to https://tracker.ceph.com/issues/42913 xu wang
09:05 AM Bug #56467 (New): nautilus: osd crashes with _do_alloc_write failed with (28) No space left on device
The OSD crashes and can't be brought back up when BlueStore runs out of space in the Nautilus release. Here's the stack trace in the log:
...
xu wang

07/04/2022

08:08 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Regarding this problem, the /var filesystem is XFS.
The OSDs are set up by Rook.
Rebooting the machine is enoug...
Sébastien Bernard
03:44 PM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
do you have a procedure or example to do the fsck?
Aurélien Le Clainche
11:03 AM Bug #56456: rook-ceph-v1.9.5: ceph-osd crash randomly
Could you please run fsck against this OSD and share the results? Igor Fedotov
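On a plain (non-Rook) deployment, the fsck requested above is typically run with ceph-bluestore-tool; the osd id and data path below are placeholders, and the exact steps under Rook will differ:

```shell
# The OSD must be stopped before running fsck against its store:
systemctl stop ceph-osd@0

# Regular fsck; add --deep 1 for a deep (data-reading) check:
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0
```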
09:50 AM Bug #56456 (New): rook-ceph-v1.9.5: ceph-osd crash randomly
Hi,
after migrating to rook-ceph v1.9.5, ceph-osd crashes: ...
Aurélien Le Clainche
12:36 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
Laura Flores wrote:
> Hey @Igor, mind taking a look?
looking
Igor Fedotov
06:35 AM Bug #56424: bluestore_cache_other mempool entry leak
Hi,
I think it fixes the problem.
Looking at the stats, I still see a small increase of cache other over time,
bu...
alexandre derumier
06:28 AM Bug #56450: Rados doesn't release the disk spaces after cephfs releases it
There is only one active MDS:... Xiubo Li
03:18 AM Bug #56450 (New): Rados doesn't release the disk spaces after cephfs releases it
Before running the filesystem benchmark test, from the OS we can see that the *_/_* directory had *_81GB_*:... Xiubo Li

07/01/2022

07:34 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
Hey @Igor, mind taking a look? Laura Flores
07:27 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
Some recent observations:
On https://trello.com/c/w6qCkODQ/1567-wip-yuri-testing-2022-06-24-0817-octopus, I notice...
Laura Flores
04:24 PM Bug #55636: octopus: osd-bluefs-volume-ops.sh: TEST_bluestore2 fails with "FAILED ceph_assert(r =...
5 occurrences on http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2022-06-24-0817-octopus. Laura Flores
07:24 PM Bug #56424 (In Progress): bluestore_cache_other mempool entry leak
Vikhyat Umrao
03:31 PM Bug #56424: bluestore_cache_other mempool entry leak
Igor Fedotov wrote:
> This one should work properly. Please try again
OK, no more crazy values, thanks!
So I'...
alexandre derumier
01:44 PM Bug #56424: bluestore_cache_other mempool entry leak
This one should work properly. Please try again Igor Fedotov
01:19 PM Bug #56424: bluestore_cache_other mempool entry leak
my bad... fixing... Igor Fedotov
11:59 AM Bug #56424: bluestore_cache_other mempool entry leak
Ouch, it seems buggy.
I have crazy values:
"bluestore_cache_other": {
"items": 154949...
alexandre derumier
11:47 AM Bug #56424: bluestore_cache_other mempool entry leak
Igor Fedotov wrote:
> alexandre derumier wrote:
> > Igor Fedotov wrote:
> > > @Alexandre, I'd recommend to try the...
alexandre derumier
11:01 AM Bug #56424: bluestore_cache_other mempool entry leak
alexandre derumier wrote:
> Igor Fedotov wrote:
> > @Alexandre, I'd recommend to try the patch using a single OSD o...
Igor Fedotov
08:00 AM Bug #56424: bluestore_cache_other mempool entry leak
Igor Fedotov wrote:
> @Alexandre, I'd recommend to try the patch using a single OSD only. Just to avoid any unexpect...
alexandre derumier
07:58 AM Bug #56424: bluestore_cache_other mempool entry leak
>I'm still not 100% sure the issue I found is the only bug though.
Thanks for the explanation. (So, if I understand, m...
alexandre derumier

06/30/2022

10:46 PM Bug #56424: bluestore_cache_other mempool entry leak
@Alexandre, I'd recommend trying the patch on a single OSD only. Just to avoid any unexpected OSD misbehavior - th... Igor Fedotov
07:40 PM Bug #56424: bluestore_cache_other mempool entry leak
alexandre derumier wrote:
> BTW, could you give me a small explain of what is the current problem ?
Well the prob...
Igor Fedotov
02:48 PM Bug #56424: bluestore_cache_other mempool entry leak
BTW, could you briefly explain what the current problem is?
I have 4 other clusters with the same config, 
alexandre derumier
02:38 PM Bug #56424: bluestore_cache_other mempool entry leak
Igor Fedotov wrote:
> Highly likely this fix https://github.com/ceph/ceph/pull/46911 is relevant. Not sure it fixes ...
alexandre derumier
01:51 PM Bug #56424: bluestore_cache_other mempool entry leak
Highly likely this fix https://github.com/ceph/ceph/pull/46911 is relevant. Not sure it fixes everything though..
I...
Igor Fedotov
12:06 PM Bug #56424: bluestore_cache_other mempool entry leak
full log (10min) is available here:
https://mutulin1.odiso.net/ceph-osd.5.bluestore20debug.log.gz
alexandre derumier
11:57 AM Bug #56424: bluestore_cache_other mempool entry leak
Here is the grep on "pruned tailed" with debug_bluestore 20/20. alexandre derumier
11:33 AM Bug #56424: bluestore_cache_other mempool entry leak
Alexandre,
could you please set debug_bluestore to 20 for 5-10 mins (be careful as the log will grow drastically) a...
Igor Fedotov
06:58 AM Bug #56424: bluestore_cache_other mempool entry leak
Here is a 1-minute log with debug 20/20 of osd.5
(no scrub, no snap trim during this time)
https://mutulin1.odiso....
alexandre derumier
06:31 AM Bug #56424: bluestore_cache_other mempool entry leak
Sorry,
the wrong screenshot of the last 24h is in the last post.
Here are the correct graphs.
alexandre derumier
06:22 AM Bug #56424: bluestore_cache_other mempool entry leak
Some detailed other_cache stats for osd.5 over the last 24h:
items:
80017467 -> 80716079
size:
2781599188 by...
alexandre derumier
05:43 AM Bug #56424 (Resolved): bluestore_cache_other mempool entry leak
Hi,
I have an octopus cluster (15.2.16),
(it was first installed on octopus, no upgrade from a previous ceph versi...
alexandre derumier

06/28/2022

06:57 AM Bug #54547: Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
https://github.com/ceph/ceph/pull/46856 is a consistent replicator for deferred writes corrupting RocksDB. Adam Kupczyk
12:22 AM Bug #55328: OSD crashed due to checksum error
Hi Igor
I am continuously struggling with this issue, but unfortunately, I still cannot provide you with logs.
Afte...
Shinya Hayashi

06/24/2022

09:24 PM Backport #55360 (Resolved): octopus: os/bluestore: Always update the cursor position in AVL near-...
Igor Fedotov
09:20 PM Bug #54288 (Resolved): rocksdb: Corruption: missing start of fragmented record
Igor Fedotov
03:14 AM Bug #56383 (New): crash: ceph::buffer::ptr::iterator_impl<true>::operator

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9cc1fe3e027f3cf98c0c3316...
Telemetry Bot
03:14 AM Bug #56382 (Resolved): ONode ref counting is broken

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=6bb2da74940c132cf3884cb9...
Telemetry Bot
03:14 AM Bug #56379 (New): crash: rocksdb::UncompressBlockContentsForCompressionType(rocksdb::Uncompressio...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b1f8d003169b7ab28ccfa0d9...
Telemetry Bot
03:14 AM Bug #56378 (New): crash: LZ4_decompress_safe()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fa6e5a4dbeb6ec5834f6028e...
Telemetry Bot
03:14 AM Bug #56376 (New): crash: rocksdb::Block::NewDataIterator(rocksdb::Comparator const*, unsigned lon...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fad2b9ddbbee699a1a975660...
Telemetry Bot
03:14 AM Bug #56375 (New): crash: rocksdb::DataBlockIter::NextImpl()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=9dcd6fd26edca2c46bca7c64...
Telemetry Bot
03:13 AM Bug #56372 (New): crash: pthread_cond_wait()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=99f1ebfcce5a35ce782856e7...
Telemetry Bot
03:13 AM Bug #56370 (New): crash: int64_t BlueFS::_read(BlueFS::FileReader*, uint64_t, size_t, ceph::buffe...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7275a73d55a4b1a27239138d...
Telemetry Bot
03:13 AM Bug #56369 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=114851fd57b3f97cbeead539...
Telemetry Bot
03:13 AM Bug #56368 (New): crash: BlueStore::ExtentMap::fault_range(KeyValueDB*, uint32_t, uint32_t)::<lam...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=688cba224e4db38476402be3...
Telemetry Bot
03:13 AM Bug #56367 (New): crash: BlueStore::Onode::put()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=978f0095a5fd046ea12aa38a...
Telemetry Bot
03:13 AM Bug #56366 (New): crash: ceph::buffer::ptr::release()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=85100d5ac144a3d242c0cae3...
Telemetry Bot
03:13 AM Bug #56365 (New): crash: ceph::buffer::ptr::release()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=076e4ef53dfeb8b3d1ba4adb...
Telemetry Bot
03:13 AM Bug #56364 (New): crash: ceph::buffer::ptr::release()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=55b4cd081b104e2bf1d3b1a6...
Telemetry Bot
03:13 AM Bug #56363 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3a15b13e448484b313ad69ca...
Telemetry Bot
03:13 AM Bug #56362 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=feeb409db59ea0952734fd06...
Telemetry Bot
03:13 AM Bug #56361 (New): crash: virtual int KernelDevice::flush(): abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3d4731acdf48659d882151cf...
Telemetry Bot
03:13 AM Bug #56360 (New): crash: virtual int KernelDevice::flush(): abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=625b18fef7fb68f95b516951...
Telemetry Bot
03:13 AM Bug #56359 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=87fefa3c056149d446126981...
Telemetry Bot
03:13 AM Bug #56356 (New): crash: BlueFS::get_free(unsigned int)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9b6874a251f5c6a6e647d44...
Telemetry Bot
03:13 AM Bug #56354 (New): crash: virtual int BlueFS::SocketHook::call(std::string_view, const cmdmap_t&, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=d1ef289ed7197a0d72c0d196...
Telemetry Bot
03:13 AM Bug #56353 (New): crash: void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectSto...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5996d0bc93a2a4b3266ea728...
Telemetry Bot
03:12 AM Bug #56346 (New): crash: BlueStore::_txc_create(BlueStore::Collection*, BlueStore::OpSequencer*, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7911a4e6e2f7947d0b5be910...
Telemetry Bot
03:12 AM Bug #56335 (New): crash: tcmalloc::DLL_Remove(tcmalloc::Span*)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=ade6baee319f62a1642e761c...
Telemetry Bot
03:12 AM Bug #56334 (New): crash: boost::dynamic_bitset<unsigned long, std::allocator<unsigned long> >::re...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=c4563aa6978e87f351c2f3a9...
Telemetry Bot
03:12 AM Bug #56328 (New): crash: pthread_cond_wait()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5dc541840158b19c1fff06f5...
Telemetry Bot
03:12 AM Bug #56327 (New): crash: void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectSto...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=87922c33f416e53fbe796b63...
Telemetry Bot
03:11 AM Bug #56315 (New): crash: bool rocksdb::InlineSkipList<rocksdb::MemTableRep::KeyComparator const&>...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=5ff1348d1381602539968af8...
Telemetry Bot
03:11 AM Bug #56314 (New): crash: void BlueStore::_kv_sync_thread(): assert(r == 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=fbe5c6c2066ff394130b0641...
Telemetry Bot
03:11 AM Bug #56311 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=95184972789a6e8f95d5278c...
Telemetry Bot
03:11 AM Bug #56310 (New): crash: pread64()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1cf418cda3fbe778c47ea7c3...
Telemetry Bot
03:11 AM Bug #56309 (New): crash: pread64()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4800be7a105a866561da5352...
Telemetry Bot
03:11 AM Bug #56308 (New): crash: int64_t BlueFS::_read(BlueFS::FileReader*, uint64_t, size_t, ceph::buffe...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0d8c41707fbb4c5b28af86c0...
Telemetry Bot
03:11 AM Bug #56302 (New): crash: HybridAllocator::init_rm_free(uint64_t, uint64_t)::<lambda(uint64_t, uin...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=242c28b58de91f4762515497...
Telemetry Bot
03:10 AM Bug #56295 (New): crash: void BlueStore::_close_db_leave_bluefs(): assert(db)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=271e6ade36c8afbde557589a...
Telemetry Bot
03:10 AM Bug #56294 (New): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStore::Col...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a17572825091f2b2501eac5f...
Telemetry Bot
03:10 AM Bug #56293 (New): crash: void BlueStore::_do_write_small(BlueStore::TransContext*, BlueStore::Col...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f85a0e7e6dc05a28e33ea281...
Telemetry Bot
03:10 AM Bug #56286 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cdd746bf43d1e623188415a3...
Telemetry Bot
03:10 AM Bug #56284 (New): crash: virtual int RocksDBStore::get(const string&, const string&, ceph::buffer...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a9ff64251eae2feb56901bc4...
Telemetry Bot
03:10 AM Bug #56283 (New): crash: pread64()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=7dc594425c2f10175c627120...
Telemetry Bot
03:10 AM Bug #56280 (New): crash: int BlueStore::expand_devices(std::ostream&): assert(r == 0)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=cd09dfcce146a570391f318b...
Telemetry Bot
03:09 AM Bug #56273 (New): crash: int BlueFS::_replay(bool, bool): assert(next_seq > log_seq)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=99c400380101b911a28dc045...
Telemetry Bot
03:09 AM Bug #56272 (New): crash: int BlueFS::_replay(bool, bool): assert(delta.offset == fnode.allocated)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=618731d06fb205a9b28a444c...
Telemetry Bot
03:09 AM Bug #55529: ceph-17.2.0/src/os/bluestore/BlueStore.cc: 14136: FAILED ceph_assert(!c)

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=2add8faed9d1c1a490c4dc7b5...
Telemetry Bot
03:09 AM Bug #56264 (New): crash: virtual int KernelDevice::read(uint64_t, uint64_t, ceph::bufferlist*, IO...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=4cb29038f41093acb974a51e...
Telemetry Bot
03:09 AM Bug #56262 (New): crash: BlueStore::_txc_create(BlueStore::Collection*, BlueStore::OpSequencer*, ...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=97298aa00eec3260da644360...
Telemetry Bot
03:09 AM Bug #56260 (New): crash: int64_t BlueFS::_read_random(BlueFS::FileReader*, uint64_t, uint64_t, ch...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=281102d7406ba46c2571bc72...
Telemetry Bot
03:08 AM Bug #56237 (New): crash: int64_t BlueFS::_read(BlueFS::FileReader*, uint64_t, size_t, ceph::buffe...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b9caf54d86ce91d07f3a706d...
Telemetry Bot
03:07 AM Bug #56235 (New): crash: bool SimpleBitmap::set(uint64_t, uint64_t): assert(offset + length < m_n...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=402b805f7d0461c2c6445d4d...
Telemetry Bot
03:07 AM Bug #56229 (New): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ceph::...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=1f15cfefd8f91b31dca036b0...
Telemetry Bot
03:07 AM Bug #56226 (New): crash: bool SimpleBitmap::set(uint64_t, uint64_t): assert(offset + length < m_n...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=b7aea0e8840d038322c58e0f...
Telemetry Bot
03:06 AM Bug #56212 (New): crash: HybridAllocator::init_rm_free(uint64_t, uint64_t)::<lambda(uint64_t, uin...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=10d855f4294a21b093b6c5b1...
Telemetry Bot
03:06 AM Bug #56211 (New): crash: HybridAllocator::init_rm_free(uint64_t, uint64_t)::<lambda(uint64_t, uin...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=f9256e017dda06c76516ac0d...
Telemetry Bot
03:06 AM Bug #56210 (Resolved): crash: int BlueFS::_replay(bool, bool): assert(r == q->second->file_map.en...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=57467b994d37e485714a73e4...
Telemetry Bot
03:06 AM Bug #56208 (New): crash: HybridAllocator::init_rm_free(uint64_t, uint64_t)::<lambda(uint64_t, uin...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=3977875de1af0d0b727ff0aa...
Telemetry Bot
03:06 AM Bug #56202 (New): crash: virtual int RocksDBStore::get(const string&, const char*, size_t, ceph::...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=e638c7594631a27a2396170c...
Telemetry Bot
03:05 AM Bug #56200 (Duplicate): crash: ceph::buffer::ptr::release()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=19d587585ff240221f4672fc...
Telemetry Bot
03:05 AM Bug #56199 (New): crash: void KernelDevice::_aio_thread(): abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=dd460b41fa5ae02f079ac8f3...
Telemetry Bot
03:05 AM Bug #56197 (New): crash: pthread_rwlock_rdlock()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=69c6cc8ff03f338329d5bcfc...
Telemetry Bot
03:05 AM Bug #56193 (New): crash: virtual int KernelDevice::flush(): abort

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0aa4df9f7d59f81569fdb5d7...
Telemetry Bot
03:04 AM Bug #56190 (New): crash: BlueStore::collection_list(boost::intrusive_ptr<ObjectStore::CollectionI...

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=87cffa8d06c20f4084c4ae90...
Telemetry Bot
03:04 AM Bug #56189 (New): crash: pthread_cond_wait()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=a1b56d10ffb10d94d3cc6f59...
Telemetry Bot
03:04 AM Bug #56187 (New): crash: BlueFS::_open_super()

http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0fa477f7ac90ad6273391f21...
Telemetry Bot

06/23/2022

09:33 PM Bug #54288: rocksdb: Corruption: missing start of fragmented record
https://github.com/ceph/ceph/pull/45040 merged Yuri Weinstein
09:27 PM Backport #55360: octopus: os/bluestore: Always update the cursor position in AVL near-fit search
Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/46687
merged
Yuri Weinstein

06/22/2022

04:39 PM Bug #56174: rook-ceph-osd crash randomly
Pacific backport:
https://tracker.ceph.com/issues/53608
Igor Fedotov
04:38 PM Bug #56174 (Duplicate): rook-ceph-osd crash randomly
This has been fixed in 16.2.8
Igor Fedotov
04:05 PM Bug #56174 (Duplicate): rook-ceph-osd crash randomly

backtrace: ...
Aurélien Le Clainche
 
