Activity
From 11/01/2017 to 11/30/2017
11/30/2017
- 10:09 PM Bug #22245 (New): [segfault] ceph-bluestore-tool bluefs-log-dump
- @Sage Please have a look at the uploaded file.
- 08:24 PM Bug #22245: [segfault] ceph-bluestore-tool bluefs-log-dump
- @octopus:build [master]$ ./bin/ceph-post-file c
args: -- c
./bin/ceph-post-file: upload tag e12e6b30-953d-43d5-80a4...
- 06:26 PM Bug #22290 (Closed): Memory leak in OSDs with bluestore running 12.2.1 beyond the buffer_anon mem...
- We are trying out Ceph on a small cluster and are observing memory leakage in the OSD processes. The leak seems to be...
- 03:28 PM Backport #22264 (In Progress): luminous: bluestore: db.slow used when db is not full
- 01:14 PM Backport #22264: luminous: bluestore: db.slow used when db is not full
- https://github.com/ceph/ceph/pull/19257
- 03:28 PM Bug #20847: low performance for bluestore rbd block creation vs filestore
- 03:13 PM Bug #22285: _read_bdev_label unable to decode label at offset
- This is expected behavior coming from bluestore when it is first created, because (iirc) it hasn't been initialized y...
- 03:02 PM Bug #22285 (Resolved): _read_bdev_label unable to decode label at offset
- While maybe not directly triggered by ceph-volume, the following error is displayed on osd creation. OSD is still cre...
11/29/2017
- 11:30 PM Bug #21736 (Need More Info): Cannot create bluestore OSD
- Please put 'debug bluestore = 20' and 'debug osd = 20' in the [osd.4] section of the ceph.conf and retry this command...
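The requested ceph.conf change might look like the following sketch; the [osd.4] section name and debug levels come from the comment above, while the surrounding layout is illustrative:

```ini
# ceph.conf -- raise debug logging for a single OSD, as requested above
[osd.4]
    debug bluestore = 20
    debug osd = 20
```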
- 11:29 PM Bug #21556 (Need More Info): luminous bluestore OSDs do not recover after out of memory
- Please check your kernel log for disk errors. It looks like we're getting EIO back on a read?
- 11:26 PM Bug #21531 (Can't reproduce): BlueStore::TwoQCache::_trim:BlueStore cache can not be trim to caus...
- 11:25 PM Bug #21332 (Duplicate): OSD Caught signal with StupidAllocator on 12.2.0
- jemalloc + rocksdb
- 11:24 PM Bug #21257 (Duplicate): bluestore: BlueFS.cc: 1255: FAILED assert(!log_file->fnode.extents.empty())
- I think this was a variation of 4324c8bc7e66633035c15995e3f82ef91d3a5e8c
- 11:20 PM Bug #21255 (Closed): stop bluestore nvme osd, sgdisk it hang, sync operation hang
- This looks like a kernel bug. bluestore shouldn't have any ability to induce a kernel crash.
- 11:19 PM Bug #21087 (Can't reproduce): BlueFS Becomes Totally Inaccessible when Failed to Allocate
- Hmm, yeah I misunderstood the original problem--I thought bluefs was on a dedicated device.
There have been severa...
- 11:16 PM Bug #21062 (Can't reproduce): ceph-osd crashes in bluestore
- This looks like an early luminous RC. We haven't seen these errors since 12.2.z.
- 11:15 PM Bug #20997: bluestore_types.h: 739: FAILED assert(p != extents.end())
- Hi Marek,
Sorry for the slow response--just catching up around here. Is the OSD still in this state? Can you cap...
- 11:13 PM Bug #20917 (Won't Fix): kraken "bluestore/BitMapAllocator.cc: 82: FAILED assert(!(need % m_block_...
- 11:12 PM Bug #20842 (Closed): Ceph 12.1 BlueStore low performance
- The full vs. empty performance disparity is known. The random small-write workload leads to worst-case metadata overhead ...
- 11:11 PM Bug #20506 (Can't reproduce): OSD memory leak (possibly Bluestore-related?) in rgw suite in krake...
- 11:10 PM Bug #19984 (Can't reproduce): /build/ceph-12.0.2/src/os/bluestore/KernelDevice.cc: 364: FAILED as...
- 11:09 PM Bug #19511 (Resolved): bluestore overwhelms aio queue
- This is fixed in 12.2.1 or 12.2.2 (varying degrees of fixes).
- 11:09 PM Bug #19303 (Can't reproduce): "Segmentation fault..in thread 7f92b87b5700 thread_name:bstore_kv_s...
- 11:09 PM Bug #18389 (Can't reproduce): crash when opening bluefs superblock
- 05:12 PM Bug #21550: PG errors reappearing after OSD node rebooted on Luminous
- Not sure if this is actually bluestore, but Sage or somebody should look at it...
- 04:54 PM Bug #21480: bluestore: flush_commit is racy
- The best path, I think, is to remove flush_commit and adjust callers to do something else.
- 02:58 PM Bug #22245 (Need More Info): [segfault] ceph-bluestore-tool bluefs-log-dump
- I can't reproduce this locally. Can you rerun the command with --log-file c --debug-bluestore 20 --no-log-to-stderr a...
- 10:08 AM Backport #22264: luminous: bluestore: db.slow used when db is not full
- Here is the UT that simulates reported issue and verifies rocksdb::GetPathId implementation from both Ceph's master a...
- 09:57 AM Backport #22264: luminous: bluestore: db.slow used when db is not full
- tangwenjun tang wrote:
> max_bytes_for_level_base and max_bytes_for_level_multiplier
> can control the level file l...
- 07:19 AM Backport #22264: luminous: bluestore: db.slow used when db is not full
- max_bytes_for_level_base and max_bytes_for_level_multiplier
can control where the level files are located
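As a sketch, these RocksDB options can be passed to BlueStore's embedded RocksDB via the bluestore_rocksdb_options setting in ceph.conf; the option names are from the comment above, but the values and section layout here are purely illustrative:

```ini
# ceph.conf -- illustrative values only, not a recommendation
[osd]
    bluestore rocksdb options = max_bytes_for_level_base=268435456,max_bytes_for_level_multiplier=10
```

Levels whose target size (derived from these two options) exceeds the fast db partition are what RocksDB may place on db.slow.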
11/28/2017
- 08:09 PM Backport #22264: luminous: bluestore: db.slow used when db is not full
- Looks like a RocksDB bug fixed by
https://github.com/facebook/rocksdb/commit/65a9cd616876c7a1204e1a50990400e4e1f61d7...
- 01:16 PM Backport #22264 (Resolved): luminous: bluestore: db.slow used when db is not full
- ...
11/27/2017
- 01:05 AM Bug #22245 (Can't reproduce): [segfault] ceph-bluestore-tool bluefs-log-dump
- After running the command several times, it segfaulted.
./bin/ceph-bluestore-tool bluefs-log-dump --path dev/osd0
...
11/23/2017
- 04:28 AM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- Guoqin,
Note: Your issue appears to be different in that you don't seem to have any pgs where all replicas are sho...
11/22/2017
- 07:36 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- And some other results,...
- 07:04 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- Brad Hubbard wrote:
> Guoqin,
>
> Could you run the command Sage posted in comment #14 as well as the following?
...
11/21/2017
- 09:17 PM Bug #22061: Bluestore: OSD killed due to high RAM usage
- I can confirm this.
Problem information:
- write-only workload
- erasure coding
Really looks like: http:/...
11/20/2017
- 11:57 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- Guoqin,
Could you run the command Sage posted in comment #14 as well as the following?...
- 03:31 PM Bug #22161 (Fix Under Review): bluestore: do not crash on over-large objects
- https://github.com/ceph/ceph/pull/19043
11/19/2017
- 03:38 PM Bug #21087: BlueFS Becomes Totally Inaccessible when Failed to Allocate
- Eric Nelson wrote:
> We're seeing this bug with our cachetier SSDs that have DBs located on 14G nvme partitions. Let...
- 03:28 PM Bug #21040: bluestore: multiple objects (clones?) referencing same blocks (on all replicas)
- Got a lot of inconsistent pgs spread out in all 19 OSDs of mine every time when I do deep scrub. Without deep scrub, ...
- 10:21 AM Bug #22161 (Resolved): bluestore: do not crash on over-large objects
- See "[ceph-users] Getting errors on erasure pool writes k=2, m=1" for one example. We've had a number of reports from...
11/18/2017
11/14/2017
- 01:17 AM Bug #21087: BlueFS Becomes Totally Inaccessible when Failed to Allocate
- We're seeing this bug with our cachetier SSDs that have DBs located on 14G nvme partitions. Let me know if you'd like...
11/13/2017
- 09:38 AM Bug #22115: OSD SIGABRT on bluestore_prefer_deferred_size = 104857600: assert(_buffers.size() <= ...
- Also this :...
- 09:05 AM Bug #22115 (Duplicate): OSD SIGABRT on bluestore_prefer_deferred_size = 104857600: assert(_buffer...
- 1. How to reproduce:
ceph.conf on host A:...
11/10/2017
- 04:36 PM Bug #21259 (Duplicate): bluestore: segv in BlueStore::TwoQCache::_trim
- 08:54 AM Bug #22102 (Won't Fix): BlueStore crashed on rocksdb checksum mismatch
- BlueStore crashed on a checksum mismatch:
Nov 10 09:53:59 dpr-2a1713-063-crd rcs-custom-daemon[16684]: 2017-11-10 09...
- 07:04 AM Feature #21741 (Fix Under Review): os/bluestore: multi-tier support in BlueStore
11/09/2017
- 03:01 AM Bug #22066: bluestore osd asserts repeatedly with ceph-12.2.1/src/include/buffer.h: 882: FAILED a...
- No problem, for the time being some of these have been converted to bluestore osds with wal/db on them since it's hap...
- 02:45 AM Bug #22066: bluestore osd asserts repeatedly with ceph-12.2.1/src/include/buffer.h: 882: FAILED a...
- Eric Nelson, I think I set the wrong debug options. It should be "debug bdev = 20" and "debug bluefs = 20" rather than "debug b...
11/08/2017
- 04:44 PM Bug #22066: bluestore osd asserts repeatedly with ceph-12.2.1/src/include/buffer.h: 882: FAILED a...
- Here's the stacktrace with debug bluestore = 20.
-183> 2017-11-08 16:41:10.602756 7fbdcb679e00 20 stupidalloc in...
- 03:40 AM Bug #22066: bluestore osd asserts repeatedly with ceph-12.2.1/src/include/buffer.h: 882: FAILED a...
- Could you add more debug messages by adding "debug bluestore = 20" to /etc/ceph/ceph.conf? Thanks!
- 09:58 AM Bug #22061: Bluestore: OSD killed due to high RAM usage
- There are fixes planned for 12.2.2 which address memory issues with BlueStore. I don't know the exact links to them r...
11/07/2017
- 09:36 PM Bug #22066 (Duplicate): bluestore osd asserts repeatedly with ceph-12.2.1/src/include/buffer.h: 8...
- This may be a dup of http://tracker.ceph.com/issues/21932 - I'm experiencing this fairly regularly on our cachetier b...
- 12:49 PM Bug #22061 (Resolved): Bluestore: OSD killed due to high RAM usage
- We recently updated our cluster to 12.2.1. After that we moved OSDs on one of the nodes to Bluestore and since that w...
11/05/2017
- 02:58 PM Bug #22044 (Can't reproduce): rocksdb log replay - corruption: missing start of fragmented record
- I've conducted some crash tests (unplugging drives, the machine, terminating and restarting ceph systemd services) wi...