Activity
From 11/14/2017 to 12/13/2017
12/13/2017
- 05:08 PM Backport #22356 (Duplicate): luminous: OSD crash on boot with assert caused by Bluefs on flush write
- Duplicate of http://tracker.ceph.com/issues/22193
- 02:54 PM Bug #22427 (Resolved): osd_fsid does not exist, fsid is generated instead
- Not sure if this is a bug or incorrect documentation. Documentation (http://docs.ceph.com/docs/master/ceph-volume/lvm...
12/10/2017
- 06:35 AM Bug #22044: rocksdb log replay - corruption: missing start of fragmented record
- With 12.2.1, I can't reproduce.
Can you do *ceph-bluestore-tool fsck --path <path> --debug-bluestore=20 --log-file=c...
- 04:57 AM Bug #22290 (Closed): Memory leak in OSDs with bluestore running 12.2.1 beyond the buffer_anon mem...
12/09/2017
- 05:49 AM Backport #22356 (In Progress): luminous: OSD crash on boot with assert caused by Bluefs on flush ...
- https://github.com/ceph/ceph/pull/19410
- 05:46 AM Backport #22356 (Duplicate): luminous: OSD crash on boot with assert caused by Bluefs on flush write
12/06/2017
- 09:24 PM Bug #22290: Memory leak in OSDs with bluestore running 12.2.1 beyond the buffer_anon mempool leak
- Upgraded to 12.2.2 and don't see the fast memory leak anymore. This bug can be closed. Will reopen or open a new bug ...
- 08:12 PM Bug #22115: OSD SIGABRT on bluestore_prefer_deferred_size = 104857600: assert(_buffers.size() <= ...
- Could you elaborate on the condition that caused the assertion? Node a, or b/c/d, or all of them?
- 10:54 AM Backport #22264 (Fix Under Review): luminous: bluestore: db.slow used when db is not full
12/03/2017
- 12:36 PM Bug #20997: bluestore_types.h: 739: FAILED assert(p != extents.end())
- Hi Sage
Thank you very much for the reply.
I did fsck deep by ceph-bluestore-tool like this:...
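For reference, the deep fsck invocation would look roughly like this (a sketch: the OSD path is a placeholder, and the exact form of the --deep argument may vary between releases):
./bin/ceph-bluestore-tool fsck --deep 1 --path /var/lib/ceph/osd/ceph-0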
11/30/2017
- 10:09 PM Bug #22245 (New): [segfault] ceph-bluestore-tool bluefs-log-dump
- @Sage Please have a look at uploaded file.
- 08:24 PM Bug #22245: [segfault] ceph-bluestore-tool bluefs-log-dump
- @octopus:build [master]$ ./bin/ceph-post-file c
args: -- c
./bin/ceph-post-file: upload tag e12e6b30-953d-43d5-80a4...
- 06:26 PM Bug #22290 (Closed): Memory leak in OSDs with bluestore running 12.2.1 beyond the buffer_anon mem...
- We are trying out Ceph on a small cluster and are observing memory leakage in the OSD processes. The leak seems to be...
- 03:28 PM Backport #22264 (In Progress): luminous: bluestore: db.slow used when db is not full
- 01:14 PM Backport #22264: luminous: bluestore: db.slow used when db is not full
- https://github.com/ceph/ceph/pull/19257
- 03:28 PM Bug #20847: low performance for bluestore rbd block creation vs filestore
- 03:13 PM Bug #22285: _read_bdev_label unable to decode label at offset
- This is expected behavior coming from bluestore when it is first created, because (iirc) it hasn't been initialized y...
- 03:02 PM Bug #22285 (Resolved): _read_bdev_label unable to decode label at offset
- While maybe not directly triggered by ceph-volume, the following error is displayed on osd creation. OSD is still cre...
11/29/2017
- 11:30 PM Bug #21736 (Need More Info): Cannot create bluestore OSD
- Please put 'debug bluestore = 20' and 'debug osd = 20' in the [osd.4] section of the ceph.conf and retry this command...
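E.g., in ceph.conf (a sketch of the settings requested above; the [osd.4] section name comes from this ticket):
[osd.4]
    debug bluestore = 20
    debug osd = 20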
- 11:29 PM Bug #21556 (Need More Info): luminous bluestore OSDs do not recover after out of memory
- Please check your kernel log for disk errors... it looks like we're getting EIO back on a read?
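E.g. (a sketch; the exact messages vary by kernel and driver):
dmesg -T | grep -iE 'i/o error|blk_update_request|medium error'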
- 11:26 PM Bug #21531 (Can't reproduce): BlueStore::TwoQCache::_trim:BlueStore cache can not be trim to caus...
- 11:25 PM Bug #21332 (Duplicate): OSD Caught signal with StupidAllocator on 12.2.0
- jemalloc + rocksdb
- 11:24 PM Bug #21257 (Duplicate): bluestore: BlueFS.cc: 1255: FAILED assert(!log_file->fnode.extents.empty())
- I think this was a variation of 4324c8bc7e66633035c15995e3f82ef91d3a5e8c
- 11:20 PM Bug #21255 (Closed): stop bluestore nvme osd, sgdisk it hang, sync operation hang
- This looks like a kernel bug. bluestore shouldn't have any ability to induce a kernel crash.
- 11:19 PM Bug #21087 (Can't reproduce): BlueFS Becomes Totally Inaccessible when Failed to Allocate
- Hmm, yeah I misunderstood the original problem--I thought bluefs was on a dedicated device.
There have been severa...
- 11:16 PM Bug #21062 (Can't reproduce): ceph-osd crashes in bluestore
- this looks like an early luminous rc. we haven't seen these errors since 12.2.z
- 11:15 PM Bug #20997: bluestore_types.h: 739: FAILED assert(p != extents.end())
- Hi Marek,
Sorry for the slow response--just catching up around here. Is the OSD still in this state? Can you cap...
- 11:13 PM Bug #20917 (Won't Fix): kraken "bluestore/BitMapAllocator.cc: 82: FAILED assert(!(need % m_block_...
- 11:12 PM Bug #20842 (Closed): Ceph 12.1 BlueStore low performance
- full vs empty performance disparity is known. the random small write workload leads to worst-case metadata overhead ...
- 11:11 PM Bug #20506 (Can't reproduce): OSD memory leak (possibly Bluestore-related?) in rgw suite in krake...
- 11:10 PM Bug #19984 (Can't reproduce): /build/ceph-12.0.2/src/os/bluestore/KernelDevice.cc: 364: FAILED as...
- 11:09 PM Bug #19511 (Resolved): bluestore overwhelms aio queue
- this is fixed in 12.2.1 or 12.2.2 (varying degrees of fixes)
- 11:09 PM Bug #19303 (Can't reproduce): "Segmentation fault..in thread 7f92b87b5700 thread_name:bstore_kv_s...
- 11:09 PM Bug #18389 (Can't reproduce): crash when opening bluefs superblock
- 05:12 PM Bug #21550: PG errors reappearing after OSD node rebooted on Luminous
- Not sure if this is actually bluestore, but Sage or somebody should look at it...
- 04:54 PM Bug #21480: bluestore: flush_commit is racy
- I think the best path is to remove flush_commit and adjust the callers to do something else.
- 02:58 PM Bug #22245 (Need More Info): [segfault] ceph-bluestore-tool bluefs-log-dump
- I can't reproduce this locally. Can you rerun the command with --log-file c --debug-bluestore 20 --no-log-to-stderr a...
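Putting that together with the command from the original report, the rerun would be roughly (the dev/osd0 path comes from the reporter's build-tree environment):
./bin/ceph-bluestore-tool bluefs-log-dump --path dev/osd0 --log-file c --debug-bluestore 20 --no-log-to-stderr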
- 10:08 AM Backport #22264: luminous: bluestore: db.slow used when db is not full
- Here is the UT that simulates reported issue and verifies rocksdb::GetPathId implementation from both Ceph's master a...
- 09:57 AM Backport #22264: luminous: bluestore: db.slow used when db is not full
- tangwenjun tang wrote:
> max_bytes_for_level_base and max_bytes_for_level_multiplier
> can control the level file l...
- 07:19 AM Backport #22264: luminous: bluestore: db.slow used when db is not full
- max_bytes_for_level_base and max_bytes_for_level_multiplier
can control where the level files are located
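For BlueStore these would be passed through the RocksDB options string, e.g. in ceph.conf (a sketch with illustrative values, not tuning advice):
[osd]
    bluestore rocksdb options = max_bytes_for_level_base=536870912,max_bytes_for_level_multiplier=10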
11/28/2017
- 08:09 PM Backport #22264: luminous: bluestore: db.slow used when db is not full
- Looks like a RocksDB bug fixed by
https://github.com/facebook/rocksdb/commit/65a9cd616876c7a1204e1a50990400e4e1f61d7...
- 01:16 PM Backport #22264 (Resolved): luminous: bluestore: db.slow used when db is not full
- ...
11/27/2017
- 01:05 AM Bug #22245 (Can't reproduce): [segfault] ceph-bluestore-tool bluefs-log-dump
- After running the command several times, it segfaulted.
./bin/ceph-bluestore-tool bluefs-log-dump --path dev/osd0
...
11/23/2017
- 04:28 AM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- Guoqin,
Note: Your issue appears to be different in that you don't seem to have any pgs where all replicas are sho...
11/22/2017
- 07:36 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- And some other results,...
- 07:04 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- Brad Hubbard wrote:
> Guoqin,
>
> Could you run the command Sage posted in comment #14 as well as the following?
...
11/21/2017
- 09:17 PM Bug #22061: Bluestore: OSD killed due to high RAM usage
- I can confirm this.
Problem information:
- write-only workload
- erasure coding
Really looks like: http:/...
11/20/2017
- 11:57 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- Guoqin,
- Could you run the command Sage posted in comment #14 as well as the following?...
- 03:31 PM Bug #22161 (Fix Under Review): bluestore: do not crash on over-large objects
- https://github.com/ceph/ceph/pull/19043
11/19/2017
- 03:38 PM Bug #21087: BlueFS Becomes Totally Inaccessible when Failed to Allocate
- Eric Nelson wrote:
> We're seeing this bug with our cachetier SSDs that have DBs located on 14G nvme partitions. Let...
- 03:28 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- Got a lot of inconsistent pgs spread across all 19 of my OSDs every time I do a deep scrub. Without deep scrub, ...
- 10:21 AM Bug #22161 (Resolved): bluestore: do not crash on over-large objects
- See "[ceph-users] Getting errors on erasure pool writes k=2, m=1" for one example. We've had a number of reports from...
11/14/2017
- 01:17 AM Bug #21087: BlueFS Becomes Totally Inaccessible when Failed to Allocate
- We're seeing this bug with our cachetier SSDs that have DBs located on 14G nvme partitions. Let me know if you'd like...