Tomasz Torcz
- Email: tomek@pipebreaker.pl
- Registered on: 07/27/2016
- Last connection: 05/16/2023
Activity
05/16/2023
- 12:50 PM Ceph Bug #59660: Corruption: unknown checksum type 4 (ceph-osd fails to start)
- Hi Adam, I'm the original reporter of this issue (https://bugzilla.redhat.com/show_bug.cgi?id=2193399).
To summarize...
03/30/2022
- 06:18 PM RADOS Bug #55140 (Duplicate): quincy OSD won't start: what(): void pg_stat_t::decode(ceph::buffer::v...
- My cluster has 3 control nodes running Rawhide (mons, mgrs, mds),
and 1 physical server with 6 HDDs running 6 OSDs (fedo...
03/16/2022
- 01:23 PM bluestore Bug #54561: 5 out of 6 OSD crashing after update to 17.1.0-0.2.rc1.fc37.x86_64
- Thanks, that also explains why one OSD survived the upgrade. It's on an LVM volume (the rest are on raw disks), so it is...
- 11:05 AM bluestore Bug #54561: 5 out of 6 OSD crashing after update to 17.1.0-0.2.rc1.fc37.x86_64
- Here it is: https://pipebreaker.pl/ceph-osd.1.bitmap.log.zstd
- 10:23 AM bluestore Bug #54561: 5 out of 6 OSD crashing after update to 17.1.0-0.2.rc1.fc37.x86_64
- Log with debug bluestore = 20 posted at https://pipebreaker.pl/ceph-osd.1.log.zstd (26 MiB).
Compressed, as it grew to ...
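(A minimal sketch of how such a log can be captured; only the debug bluestore = 20 setting comes from the comment above, while the systemd unit name, log path, and compression step are assumptions:)

    # In ceph.conf, under [osd] (as quoted in the comment):
    #   debug bluestore = 20
    systemctl restart ceph-osd@1       # assumed unit name; the OSD logs at level 20 while failing to start
    zstd /var/log/ceph/ceph-osd.1.log  # compress the large log before uploading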
04/29/2020
- 09:35 AM mgr Bug #45147: Module 'diskprediction_local' takes forever to load
- FYI, the same happens on Fedora 33.
11/07/2019
- 12:33 PM mgr Bug #42680 (New): crash in thread 7f6a445ee700 thread_name:devicehealth
- Hi,
It seems ceph-mgr crashes in relation to device health. I think it may be related to scraping....
02/21/2019
- 03:49 PM bluestore Bug #38329: OSD crashes in get_str_map while creating with ceph-volume
- (original reporter here)
I have the following customisation in ceph.conf:...
07/26/2017
- 07:12 PM Ceph Bug #20765: bluestore: mismatched uuid in bdev_label after unclean shutdown
- Ukhm. Something was completely messed up with my installation. The FS UUID shown by 'ceph -s' was different from th...
- 02:07 PM Ceph Bug #20765: bluestore: mismatched uuid in bdev_label after unclean shutdown
- Provisioning was basic: wipefs -a on all the partitions, and on the whole disk at the end. Then
ceph-disk prepare /dev/sd...
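(A minimal sketch of the provisioning described above; /dev/sdX stands in for the truncated device name, and the partition layout is assumed:)

    wipefs -a /dev/sdX1 /dev/sdX2   # wipe signatures from each partition (assumed layout)
    wipefs -a /dev/sdX              # then the whole disk at the end
    ceph-disk prepare /dev/sdX      # ceph-disk was current in 2017; since superseded by ceph-volume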