Activity
From 02/13/2022 to 03/14/2022
03/14/2022
- 06:16 PM Bug #54555 (Fix Under Review): ceph-bluestore-tool doesn't handle 'free-fragmentation' command
- 05:56 PM Bug #54554 (Resolved): Bluestore volume selector improperly tracks bluefs log size
- Here is the output of "bluefs stats" admin command:
>ceph tell osd.1 bluefs stats
1 : device size 0x3fffe000 : usin...
- 04:56 PM Bug #54547: Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
- a couple addendums to the previous comment:
- vstart cluster above should use spinning drives or benefit from settin...
- 04:18 PM Bug #54547: Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
- I managed to reproduce (to a major degree) the bug with vstart-ed cluster:
- osd_fast_shutdown = true
- rbd above e...
- 12:58 PM Bug #54547 (Triaged): Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
- 12:58 PM Bug #54547 (New): Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
- 12:57 PM Bug #54547 (Duplicate): Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
- 12:24 PM Bug #54547 (Resolved): Deferred writes might cause "rocksdb: Corruption: Bad table magic number"
- Looks like under some circumstances a deferred write op might persist in the DB longer than the allocated extents it's targeted...
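(Background note for readers, not part of the report above: whether a small write takes the deferred path at all is governed by the bluestore_prefer_deferred_size_* thresholds; the command below is only an illustration of where to look.)
# show the size threshold below which HDD writes go through the deferred path
ceph config get osd bluestore_prefer_deferred_size_hdd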
- 01:00 PM Bug #54409 (Duplicate): OSD fails to start up with "rocksdb: Corruption: Bad table magic number" ...
- 12:58 PM Bug #54409 (Triaged): OSD fails to start up with "rocksdb: Corruption: Bad table magic number" error
- 12:58 PM Bug #54409 (New): OSD fails to start up with "rocksdb: Corruption: Bad table magic number" error
- 12:57 PM Bug #54409 (Duplicate): OSD fails to start up with "rocksdb: Corruption: Bad table magic number" ...
- 01:23 AM Bug #53184: failed to start new osd due to SIGSEGV in BlueStore::read()
- The contents of osd.6.log are the log of a different Pod than the prepare pod.
> Yeah, ceph-osd's mkfs and ceph-vo...
03/11/2022
- 01:52 PM Bug #53184: failed to start new osd due to SIGSEGV in BlueStore::read()
- Yuma Ogami wrote:
>
> > 4) Am I correct that this ceph-volume's prepare stuff (which includes mkfs call) is run fr...
03/10/2022
- 06:59 PM Backport #54523 (In Progress): quincy: default osd_fast_shutdown=true would cause NCB to recover ...
- 06:54 PM Backport #54523 (Resolved): quincy: default osd_fast_shutdown=true would cause NCB to recover all...
- https://github.com/ceph/ceph/pull/45342
- 06:54 PM Bug #53266 (Pending Backport): default osd_fast_shutdown=true would cause NCB to recover allocati...
- 01:55 AM Bug #53184: failed to start new osd due to SIGSEGV in BlueStore::read()
- Thank you for your reply.
> 1) How easily can you reproduce this issue?
We could reproduce this issue by about 10...
03/09/2022
- 11:31 PM Bug #54465 (Fix Under Review): BlueFS broken sync compaction mode
- 05:19 PM Bug #53184: failed to start new osd due to SIGSEGV in BlueStore::read()
- Hi Yuma,
thanks a lot for the information. It's really helpful.
May I ask some questions, please?
1) How easily ...
- 07:17 AM Bug #53184: failed to start new osd due to SIGSEGV in BlueStore::read()
- I'm sat's colleague.
We experimented in an updated environment and confirmed the same result. Could you please chec...
- 02:16 PM Bug #38637 (Won't Fix): BlueStore::ExtentMap::fault_range() assert
03/03/2022
- 04:12 PM Bug #54465 (Resolved): BlueFS broken sync compaction mode
- The BlueFS fine grain locking refactor broke sync compaction mode.
The problem is off-by-1 in seq which leads to drop ...
- 03:31 PM Bug #53483 (Resolved): Bluestore: Function not available for other platforms
- 03:29 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- https://pad.ceph.com/p/bluefs_enospc
- 03:18 PM Bug #54409 (Need More Info): OSD fails to start up with "rocksdb: Corruption: Bad table magic num...
- 03:04 PM Bug #52502 (Can't reproduce): src/os/bluestore/BlueStore.cc: FAILED ceph_assert(collection_ref)
03/02/2022
- 02:07 PM Bug #54415 (Fix Under Review): Quincy 32bit compilation failure with deduction/substitution faile...
02/27/2022
- 09:49 PM Bug #44924 (Resolved): High memory usage in fsck/repair
- 09:48 PM Backport #53890 (Resolved): pacific: High memory usage in fsck/repair
- 07:37 AM Bug #54415 (Fix Under Review): Quincy 32bit compilation failure with deduction/substitution faile...
- Platform is Alpine Linux with the Quincy rc tarball; the compiler is gcc 11.2.1. All 32-bit platforms have the same failure...
02/25/2022
- 10:44 PM Backport #53890: pacific: High memory usage in fsck/repair
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/44613
merged
- 06:33 PM Backport #53391: pacific: BlueFS truncate() and poweroff can create corrupted files
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/45171
ceph-backport.sh versi...
- 01:09 PM Bug #54409: OSD fails to start up with "rocksdb: Corruption: Bad table magic number" error
- From the following log snippet it looks like SST file content (LBA: 0x630000) got overwritten by presumably deferred ...
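(Illustration only; the OSD id, paths and SST file name below are placeholders, not values from this issue. One hedged way to confirm that an SST file's footer was clobbered is to export BlueFS from the stopped OSD and check the file with RocksDB's sst_dump; a damaged footer shows up as "Bad table magic number".)
# export the BlueFS files of the stopped OSD into a scratch directory
ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-1 --out-dir /tmp/osd.1-bluefs
# verify the suspect table file
sst_dump --file=/tmp/osd.1-bluefs/db/NNNNNN.sst --command=check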
- 12:53 PM Bug #54409 (Duplicate): OSD fails to start up with "rocksdb: Corruption: Bad table magic number" ...
2022-02-21T13:53:43.367+0100 7fc9582f4700 10 bluefs open_for_read h 0x55bd4301f000 on file(ino 1419 size 0x73c mtim...
02/24/2022
- 01:12 AM Bug #53184: failed to start new osd due to SIGSEGV in BlueStore::read()
- Reproduction test in progress.
02/22/2022
- 10:14 PM Backport #54318: quincy: BlueFS improperly tracks vselector sizes in _flush_special()
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45079
merged
- 02:08 PM Support #54365: Setup 64kB block size of Bluestore RAW HDD
- Please do not alter the bdev_block_size parameter. You should use the bluestore_min_alloc_size_hdd parameter instead.
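(Hedged illustration, not part of the original reply: min_alloc_size is applied at OSD creation time, so the value only affects OSDs deployed after the change; 65536 below is just an example value.)
# set the allocation unit that newly created HDD OSDs will use
ceph config set osd bluestore_min_alloc_size_hdd 65536
# confirm the value new OSDs will pick up at mkfs time
ceph config get osd bluestore_min_alloc_size_hdd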
- 01:02 PM Support #54365 (New): Setup 64kB block size of Bluestore RAW HDD
- Hello,
we are using Ceph as object storage with our proprietary client using librados. We store directly to Ceph p...
02/18/2022
- 03:18 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- Just an addendum to my previous comment.
From free-dump analysis the amount of free space available for 64K allocati...
- 03:13 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- Hey Jan,
as for high utilization - both of your OSDs I'm aware of are at more than 80% space utilization:
- from ...
- 12:09 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- Hi Igor
thank you for your input.
You are correct, the logs and the dump are not from the same OSD, they only sha...
- 11:00 AM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Seena, thanks a lot for the logs.
So everything is fine (I mean there is no bug) with the allocator, your volume jus...
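(For readers hitting the same abort: a hedged way to check how full and how fragmented an OSD's main device is; the osd id is a placeholder.)
# fragmentation score of the block allocator (0 = none, 1 = heavily fragmented)
ceph daemon osd.1 bluestore allocator score block
# BlueFS space usage per device
ceph tell osd.1 bluefs stats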
02/17/2022
- 06:42 PM Backport #54318 (In Progress): quincy: BlueFS improperly tracks vselector sizes in _flush_special()
- 06:30 PM Backport #54318 (Resolved): quincy: BlueFS improperly tracks vselector sizes in _flush_special()
- https://github.com/ceph/ceph/pull/45079
- 06:26 PM Bug #54248 (Pending Backport): BlueFS improperly tracks vselector sizes in _flush_special()
- 06:07 PM Support #54315: 1 fsck error per osd during nautilus -> octopus upgrade (S3 cluster)
- The only place in the fsck code we found that increments `errors` but doesn't `derr` was this:...
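(A hedged way to re-surface the counted-but-unlogged error offline; the OSD path and log destination are placeholders, and the OSD must be stopped first.)
# re-run fsck against the offline OSD and keep a verbose log for inspection
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 -l /tmp/osd.0-fsck.log --log-level 30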
- 05:23 PM Support #54315 (New): 1 fsck error per osd during nautilus -> octopus upgrade (S3 cluster)
- At the end of the conversion to per-pool omap, around half of our OSDs had 1 error, but the log didn't show the error...
- 10:23 AM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Igor Fedotov wrote:
> Seena Fallah wrote:
> > Igor Fedotov wrote:
> > > Hi Seena,
> > > no. that's right. The dum...
02/16/2022
- 11:20 PM Fix #54299: osd error restart
- Igor Fedotov wrote:
> Well, unfortunately I'm not an expert in using rook deployments hence unable to provide any he...
- 03:46 PM Fix #54299: osd error restart
- Well, unfortunately I'm not an expert in using rook deployments hence unable to provide any help on how to run the to...
- 03:16 PM Fix #54299: osd error restart
- Igor Fedotov wrote:
> duans song wrote:
> > Igor Fedotov wrote:
> > > Could you please share the output for ceph-b...
- 02:11 PM Fix #54299: osd error restart
- duans song wrote:
> Igor Fedotov wrote:
> > Could you please share the output for ceph-bluestore-tool's free-dump c...
- 01:54 PM Fix #54299: osd error restart
- Igor Fedotov wrote:
> Could you please share the output for ceph-bluestore-tool's free-dump command?
What command...
- 01:12 PM Fix #54299: osd error restart
- Could you please share the output for ceph-bluestore-tool's free-dump command?
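(For reference only; the osd id and output path below are placeholders, not taken from this issue.)
# dump the free extents of the stopped OSD's block allocator as JSON
ceph-bluestore-tool free-dump --path /var/lib/ceph/osd/ceph-3 > /tmp/osd.3-free-dump.json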
- 01:06 PM Fix #54299 (Need More Info): osd error restart
- debug -83> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.bottommost_compression_opts...
- 10:16 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Seena Fallah wrote:
> Igor Fedotov wrote:
> > Hi Seena,
> > no. that's right. The dump might be pretty large if yo...
- 10:14 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Seena Fallah wrote:
> Igor Fedotov wrote:
> > Hi Seena,
> > no. that's right. The dump might be pretty large if yo...
- 05:08 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Igor Fedotov wrote:
> Hi Seena,
> no. that's right. The dump might be pretty large if your disk is very big. But un...
- 05:03 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Igor Fedotov wrote:
> Hi Seena,
> no. that's right. The dump might be pretty large if your disk is very big. But un...
- 03:42 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Hi Seena,
no. that's right. The dump might be pretty large if your disk is very big. But unfortunately I need the fu...
- 03:20 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Igor Fedotov wrote:
> Hi Seena,
> I presume you're sharing stderr output only, right?
> Could you please share mor...
- 12:13 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Hi Seena,
I presume you're sharing stderr output only, right?
Could you please share more verbose OSD startup log.
...
- 11:38 AM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Seena Fallah wrote:
> Hi,
>
> I got the same issue with some of my OSDs (pacific v16.2.7 & SSD)
> [...]
I tri...
- 11:35 AM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Hi,
I got the same issue with some of my OSDs (pacific v16.2.7 & SSD)...
- 09:56 PM Bug #53002 (Resolved): crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
- 09:55 PM Backport #53608 (Resolved): pacific: crash BlueStore::Onode::put from BlueStore::TransContext::~T...
- 07:07 PM Backport #53608: pacific: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44723
merged
- 08:21 PM Backport #54209: quincy: BlueStore.h: 4148: FAILED ceph_assert(cur >= p.length)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44952
merged
- 02:55 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- Hi Alexander,
thanks for the dump, finally I've got a chance to take a look. Sorry for the delay.
I was hoping th...
02/15/2022
- 11:11 PM Bug #54288 (Fix Under Review): rocksdb: Corruption: missing start of fragmented record
- 02:29 PM Bug #54288: rocksdb: Corruption: missing start of fragmented record
- Interestingly RocksDB from Pacific onward deals with wal_recovery_mode and recycle_log_file_num inconsistency a bit d...
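(Aside, not from the tracker comments: the RocksDB tuning string BlueStore hands to its embedded RocksDB, which is where a recycle_log_file_num or wal_recovery_mode override would live, can be inspected as shown below; osd.0 is a placeholder.)
# option string passed to the embedded RocksDB
ceph config get osd bluestore_rocksdb_options
# value currently applied on a particular OSD
ceph config show osd.0 bluestore_rocksdb_options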
- 02:21 PM Bug #54288 (Triaged): rocksdb: Corruption: missing start of fragmented record
- Investigation revealed that RocksDB embedded to Ceph Octopus release implicitly substitutes default wal_recovery_mode...
- 02:10 PM Bug #54288 (Rejected): rocksdb: Corruption: missing start of fragmented record
- After a kernel panic one of our OSD is unable to start and is showing:
rocksdb: [db/db_impl_open.cc:518] db/099134.l...
- 07:13 PM Bug #53266 (Fix Under Review): default osd_fast_shutdown=true would cause NCB to recover allocati...
- 06:16 PM Bug #54248 (Fix Under Review): BlueFS improperly tracks vselector sizes in _flush_special()
02/14/2022
- 10:44 PM Bug #53678 (Resolved): NCB's reconstruct allocations improperly handles shared blobs
- 10:43 PM Backport #54175 (Resolved): quincy: NCB's reconstruct allocations improperly handles shared blobs
- 03:51 PM Bug #53266 (Pending Backport): default osd_fast_shutdown=true would cause NCB to recover allocati...