Activity
From 02/02/2022 to 03/03/2022
03/03/2022
- 04:12 PM Bug #54465 (Resolved): BlueFS broken sync compaction mode
- The BlueFS fine-grained locking refactor broke sync compaction mode.
The problem is off-by-1 in seq which leads to drop ...
- 03:31 PM Bug #53483 (Resolved): Bluestore: Function not available for other platforms
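The off-by-1 in seq mentioned above is the classic strict-vs-inclusive boundary mistake in log compaction/replay. A minimal sketch (illustrative Python with hypothetical names, not the actual BlueFS code):

```python
# A checkpoint folds all entries up to checkpoint_seq into a compacted log;
# replay must then apply every entry with seq strictly greater than that.

def surviving_entries_buggy(seqs, checkpoint_seq):
    # BUG: the extra "+ 1" silently drops the first entry after the
    # checkpoint, so one committed update is lost on replay.
    return [s for s in seqs if s > checkpoint_seq + 1]

def surviving_entries_fixed(seqs, checkpoint_seq):
    return [s for s in seqs if s > checkpoint_seq]

seqs = [4, 5, 6, 7]
# Checkpoint covered everything up to seq 4; entry 5 must survive.
assert surviving_entries_fixed(seqs, 4) == [5, 6, 7]
assert surviving_entries_buggy(seqs, 4) == [6, 7]  # seq 5 silently lost
```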
- 03:29 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- https://pad.ceph.com/p/bluefs_enospc
- 03:18 PM Bug #54409 (Need More Info): OSD fails to start up with "rocksdb: Corruption: Bad table magic num...
- 03:04 PM Bug #52502 (Can't reproduce): src/os/bluestore/BlueStore.cc: FAILED ceph_assert(collection_ref)
03/02/2022
- 02:07 PM Bug #54415 (Fix Under Review): Quincy 32bit compilation failure with deduction/substitution faile...
02/27/2022
- 09:49 PM Bug #44924 (Resolved): High memory usage in fsck/repair
- 09:48 PM Backport #53890 (Resolved): pacific: High memory usage in fsck/repair
- 07:37 AM Bug #54415 (Fix Under Review): Quincy 32bit compilation failure with deduction/substitution faile...
- Platform is Alpine Linux with the Quincy rc tarball; the compiler is gcc 11.2.1. All 32-bit platforms have the same failure...
02/25/2022
- 10:44 PM Backport #53890: pacific: High memory usage in fsck/repair
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/44613
merged
- 06:33 PM Backport #53391: pacific: BlueFS truncate() and poweroff can create corrupted files
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/45171
ceph-backport.sh versi...
- 01:09 PM Bug #54409: OSD fails to start up with "rocksdb: Corruption: Bad table magic number" error
- From the following log snippet it looks like SST file content (LBA: 0x630000) got overwritten by presumably deferred ...
- 12:53 PM Bug #54409 (Duplicate): OSD fails to start up with "rocksdb: Corruption: Bad table magic number" ...
2022-02-21T13:53:43.367+0100 7fc9582f4700 10 bluefs open_for_read h 0x55bd4301f000 on file(ino 1419 size 0x73c mtim...
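The "Bad table magic number" failure above comes from RocksDB's SST footer check: the last 8 bytes of an SST file must hold the table magic, and an overwritten file fails that comparison. A sketch of the check (assuming the block-based table format magic 0x88e241b785f4cff7; `has_valid_magic` is an illustrative helper, not RocksDB code):

```python
import struct

# Magic number ending the footer of a RocksDB block-based table SST file,
# stored little-endian in the file's final 8 bytes.
BLOCK_BASED_TABLE_MAGIC = 0x88E241B785F4CFF7

def has_valid_magic(path):
    """Return True if the file ends with the expected SST table magic."""
    with open(path, "rb") as f:
        f.seek(-8, 2)                      # last 8 bytes of the file
        (magic,) = struct.unpack("<Q", f.read(8))
    return magic == BLOCK_BASED_TABLE_MAGIC
```

If, as suspected in the ticket, another write lands on the SST's LBA range, these trailing bytes no longer match and RocksDB reports "Corruption: Bad table magic number" at open time.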
02/24/2022
- 01:12 AM Bug #53184: failed to start new osd due to SIGSEGV in BlueStore::read()
- Reproduction test in progress.
02/22/2022
- 10:14 PM Backport #54318: quincy: BlueFS improperly tracks vselector sizes in _flush_special()
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/45079
merged
- 02:08 PM Support #54365: Setup 64kB block size of Bluestore RAW HDD
- Please do not alter the bdev_block_size parameter. Use the bluestore_min_alloc_size_hdd parameter instead.
- 01:02 PM Support #54365 (New): Setup 64kB block size of Bluestore RAW HDD
- Hello,
we are using Ceph as object storage with our proprietary client using librados. We store directly to Ceph p...
02/18/2022
- 03:18 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- Just an addendum to my previous comment.
From free-dump analysis the amount of free space available for 64K allocati...
- 03:13 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- Hey Jan,
as for high utilization - both of your OSDs I'm aware of are at more than 80% space utilization:
- from ...
- 12:09 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- Hi Igor
thank you for your input.
You are correct, the logs and the dump are not from the same OSD, they only sha...
- 11:00 AM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Seena, thanks a lot for the logs.
So everything is fine (I mean there is no bug) with the allocator, your volume jus...
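The free-dump analysis referred to in the comments above boils down to rounding each free extent to the 64K allocation unit BlueFS requires on the shared device: a volume can report plenty of free bytes while almost none of them are claimable in aligned 64K chunks. A rough sketch of that arithmetic (simplified; the function name and extent list are illustrative, not the actual tool output):

```python
ALLOC_UNIT = 0x10000  # 64 KiB allocation unit assumed for BlueFS

def free_for_bluefs(extents, unit=ALLOC_UNIT):
    """Given (offset, length) free extents, return how many bytes are
    claimable in aligned, unit-sized chunks."""
    total = 0
    for off, length in extents:
        start = (off + unit - 1) // unit * unit  # round start up to unit
        end = (off + length) // unit * unit      # round end down to unit
        if end > start:
            total += end - start
    return total

# Fragmented free space: ~200 KiB free, but only one extent yields 64K chunks.
extents = [(0x12345, 0x8000), (0x100000, 0x30000), (0x200800, 0xFFFF)]
assert free_for_bluefs(extents) == 0x30000
```

This is why a >80% utilized OSD can still hit "bluefs enospc": the remaining free space exists, but not at the alignment and granularity BlueFS can use.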
02/17/2022
- 06:42 PM Backport #54318 (In Progress): quincy: BlueFS improperly tracks vselector sizes in _flush_special()
- 06:30 PM Backport #54318 (Resolved): quincy: BlueFS improperly tracks vselector sizes in _flush_special()
- https://github.com/ceph/ceph/pull/45079
- 06:26 PM Bug #54248 (Pending Backport): BlueFS improperly tracks vselector sizes in _flush_special()
- 06:07 PM Support #54315: 1 fsck error per osd during nautilus -> octopus upgrade (S3 cluster)
- The only place in the fsck code we found that increments `errors` but doesn't `derr` was this:...
- 05:23 PM Support #54315 (New): 1 fsck error per osd during nautilus -> octopus upgrade (S3 cluster)
- At the end of the conversion to per-pool omap, around half of our OSDs had 1 error, but the log didn't show the error...
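The suspected pattern in the ticket above, a check that increments `errors` without a corresponding `derr` line, can be sketched as follows (toy Python with hypothetical names, not the actual BlueStore fsck code):

```python
import logging

log = logging.getLogger("fsck")

def check_omap(onodes, log_each_error=True):
    """Toy fsck pass over (name, is_per_pool) pairs.

    Every branch that bumps the error counter should also emit a visible
    log line; the silent branch mimics the suspected code path where the
    operator only ever sees the final "N errors" summary."""
    errors = 0
    for name, per_pool in onodes:
        if not per_pool:
            errors += 1
            if log_each_error:
                log.error("omap of %s is not per-pool", name)
            # else: counted but never reported, as described in #54315
    return errors
```

With the silent branch taken, fsck finishes with `1 error` in the summary while the log contains nothing to act on, matching the symptom reported during the nautilus to octopus conversion.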
- 10:23 AM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Igor Fedotov wrote:
> Seena Fallah wrote:
> > Igor Fedotov wrote:
> > > Hi Seena,
> > > no. that's right. The dum...
02/16/2022
- 11:20 PM Fix #54299: osd error restart
- Igor Fedotov wrote:
> Well, unfortunately I'm not an expert in using rook deployments hence unable to provide any he...
- 03:46 PM Fix #54299: osd error restart
- Well, unfortunately I'm not an expert in using rook deployments hence unable to provide any help on how to run the to...
- 03:16 PM Fix #54299: osd error restart
- Igor Fedotov wrote:
> duans song wrote:
> > Igor Fedotov wrote:
> > > Could you please share the output for ceph-b...
- 02:11 PM Fix #54299: osd error restart
- duans song wrote:
> Igor Fedotov wrote:
> > Could you please share the output for ceph-bluestore-tool's free-dump c...
- 01:54 PM Fix #54299: osd error restart
- Igor Fedotov wrote:
> Could you please share the output for ceph-bluestore-tool's free-dump command?
What command...
- 01:12 PM Fix #54299: osd error restart
- Could you please share the output for ceph-bluestore-tool's free-dump command?
- 01:06 PM Fix #54299 (Need More Info): osd error restart
- debug -83> 2022-02-16T13:04:03.711+0000 7f35fee9d080 4 rocksdb: Options.bottommost_compression_opts...
- 10:16 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Seena Fallah wrote:
> Igor Fedotov wrote:
> > Hi Seena,
> > no. that's right. The dump might be pretty large if yo...
- 10:14 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Seena Fallah wrote:
> Igor Fedotov wrote:
> > Hi Seena,
> > no. that's right. The dump might be pretty large if yo...
- 05:08 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Igor Fedotov wrote:
> Hi Seena,
> no. that's right. The dump might be pretty large if your disk is very big. But un...
- 05:03 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Igor Fedotov wrote:
> Hi Seena,
> no. that's right. The dump might be pretty large if your disk is very big. But un...
- 03:42 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Hi Seena,
no. that's right. The dump might be pretty large if your disk is very big. But unfortunately I need the fu...
- 03:20 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Igor Fedotov wrote:
> Hi Seena,
> I presume you're sharing stderr output only, right?
> Could you please share mor...
- 12:13 PM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Hi Seena,
I presume you're sharing stderr output only, right?
Could you please share more verbose OSD startup log.
...
- 11:38 AM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Seena Fallah wrote:
> Hi,
>
> I got the same issue with some of my OSDs (pacific v16.2.7 & SSD)
> [...]
I tri...
- 11:35 AM Bug #53899: bluefs _allocate allocation failed - BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
- Hi,
I got the same issue with some of my OSDs (pacific v16.2.7 & SSD)...
- 09:56 PM Bug #53002 (Resolved): crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
- 09:55 PM Backport #53608 (Resolved): pacific: crash BlueStore::Onode::put from BlueStore::TransContext::~T...
- 07:07 PM Backport #53608: pacific: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44723
merged
- 08:21 PM Backport #54209: quincy: BlueStore.h: 4148: FAILED ceph_assert(cur >= p.length)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44952
merged
- 02:55 PM Bug #53466: OSD is unable to allocate free space for BlueFS
- Hi Alexander,
thanks for the dump, finally I've got a chance to take a look. Sorry for the delay.
I was hoping th...
02/15/2022
- 11:11 PM Bug #54288 (Fix Under Review): rocksdb: Corruption: missing start of fragmented record
- 02:29 PM Bug #54288: rocksdb: Corruption: missing start of fragmented record
- Interestingly RocksDB from Pacific onward deals with wal_recovery_mode and recycle_log_file_num inconsistency a bit d...
- 02:21 PM Bug #54288 (Triaged): rocksdb: Corruption: missing start of fragmented record
- Investigation revealed that RocksDB embedded to Ceph Octopus release implicitly substitutes default wal_recovery_mode...
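The inconsistency described above can be expressed as a validation rule: recycled WAL files contain stale tail data from a previous life, so a recovery mode that merely tolerates a corrupted tail cannot tell "end of log" from "old garbage". A sketch of that check (hypothetical names mirroring RocksDB's option names; not the actual RocksDB source):

```python
# WAL recovery mode names, mirroring RocksDB's enum (illustrative values).
KTOLERATE_CORRUPTED_TAIL_RECORDS = "kTolerateCorruptedTailRecords"
KPOINT_IN_TIME_RECOVERY = "kPointInTimeRecovery"

def validate_wal_options(wal_recovery_mode, recycle_log_file_num):
    """Reject the combination of recycled WAL files with a mode that
    tolerates corrupted tail records; with recycled logs, the stale tail
    of the previous file is indistinguishable from real corruption."""
    if recycle_log_file_num > 0 and \
            wal_recovery_mode == KTOLERATE_CORRUPTED_TAIL_RECORDS:
        raise ValueError(
            "recycle_log_file_num is incompatible with "
            "kTolerateCorruptedTailRecords")
```

Per the investigation above, the Octopus-era RocksDB silently substituted the mode in this situation, while Pacific-era RocksDB handles the inconsistency differently, which is why the same on-disk state behaves differently across releases.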
- 02:10 PM Bug #54288 (Rejected): rocksdb: Corruption: missing start of fragmented record
- After a kernel panic one of our OSD is unable to start and is showing:
rocksdb: [db/db_impl_open.cc:518] db/099134.l...
- 07:13 PM Bug #53266 (Fix Under Review): default osd_fast_shutdown=true would cause NCB to recover allocati...
- 06:16 PM Bug #54248 (Fix Under Review): BlueFS improperly tracks vselector sizes in _flush_special()
02/14/2022
- 10:44 PM Bug #53678 (Resolved): NCB's reconstruct allocations improperly handles shared blobb
- 10:43 PM Backport #54175 (Resolved): quincy: NCB's reconstruct allocations improperly handles shared blobb
- 03:51 PM Bug #53266 (Pending Backport): default osd_fast_shutdown=true would cause NCB to recover allocati...
02/11/2022
- 07:46 PM Backport #54175: quincy: NCB's reconstruct allocations improperly handles shared blobb
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44918
merged
02/10/2022
- 03:41 PM Bug #54226 (Duplicate): bluestore crash and not repairable scrub errors
- 03:27 PM Bug #54248 (Resolved): BlueFS improperly tracks vselector sizes in _flush_special()
- This problem was introduced by the fine-grained locking refactor.
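The kind of accounting bug tracked in #54248/#54318, a size tracker that must retire a file's old size before recording the new one, can be sketched as follows (toy Python with hypothetical names, not the actual BlueFS vselector code):

```python
class VolumeSelector:
    """Toy size tracker: a running total of tracked file sizes.

    Correct updates apply exactly one delta per flush (retire old size,
    record new size); recording the new size without retiring the old one
    inflates the total on every flush."""
    def __init__(self):
        self.total = 0

    def add_usage(self, nbytes):
        self.total += nbytes

    def sub_usage(self, nbytes):
        self.total -= nbytes

def flush_file(selector, file_sizes, fname, new_size):
    old = file_sizes.get(fname, 0)
    selector.sub_usage(old)       # retire the previously tracked size
    selector.add_usage(new_size)  # record the size after this flush
    file_sizes[fname] = new_size
```

With the `sub_usage` call omitted, two flushes of the same growing file would leave the tracker reporting the sum of both sizes rather than the final one, the sort of drift the vselector fix addresses.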
- 02:40 PM Backport #53891 (Resolved): octopus: High memory usage in fsck/repair
- 02:37 PM Backport #53891: octopus: High memory usage in fsck/repair
- Igor Fedotov wrote:
> https://github.com/ceph/ceph/pull/44614
merged
- 01:11 AM Backport #53392: octopus: BlueFS truncate() and poweroff can create corrupted files
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44860
merged
02/09/2022
- 12:21 PM Bug #54226 (Duplicate): bluestore crash and not repairable scrub errors
- An object with the name "c76c7ac2014adb9f0f0837ac1e85fd1e241af225908b6a0c3d3a44d6b866e732_00400000" causes trouble if ...
- 10:37 AM Backport #54209 (In Progress): quincy: BlueStore.h: 4148: FAILED ceph_assert(cur >= p.length)
02/08/2022
- 07:05 PM Backport #54209 (Resolved): quincy: BlueStore.h: 4148: FAILED ceph_assert(cur >= p.length)
- https://github.com/ceph/ceph/pull/44952
- 07:02 PM Bug #53907 (Pending Backport): BlueStore.h: 4148: FAILED ceph_assert(cur >= p.length)
02/07/2022
- 09:51 AM Backport #54175 (In Progress): quincy: NCB's reconstruct allocations improperly handles shared blobb
- 09:30 AM Backport #54175 (Resolved): quincy: NCB's reconstruct allocations improperly handles shared blobb
- https://github.com/ceph/ceph/pull/44918
- 09:29 AM Bug #53678 (Pending Backport): NCB's reconstruct allocations improperly handles shared blobb
02/03/2022
- 03:36 PM Bug #53185 (Resolved): FSCK removes allocation file when called in DEEP mode causing next mount t...
- 03:35 PM Bug #53266 (In Progress): default osd_fast_shutdown=true would cause NCB to recover allocation ma...
- 03:27 PM Bug #53678 (Fix Under Review): NCB's reconstruct allocations improperly handles shared blobb
- 11:21 AM Bug #53907 (Fix Under Review): BlueStore.h: 4148: FAILED ceph_assert(cur >= p.length)
02/02/2022
- 10:31 AM Bug #53011 (Resolved): fsck/repair uses invalid prefix when removing undecodable Shared Blob
- 10:31 AM Backport #53392 (In Progress): octopus: BlueFS truncate() and poweroff can create corrupted files
- 10:31 AM Backport #53195 (Resolved): pacific: fsck/repair uses invalid prefix when removing undecodable Sh...
- 09:58 AM Bug #52185: crash: void BlueStore::_kv_sync_thread(): assert(r == 0)
- Telemetry Bot wrote:
> http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?orgId=1&var-sig_v2=0e...