Chris Holcombe
- Login: xfactor973
- Email: xfactor973@gmail.com
- Registered on: 01/22/2013
- Last sign in: 04/17/2018
Issues
| | open | closed | Total |
|---|---|---|---|
| Assigned issues | 0 | 1 | 1 |
| Reported issues | 2 | 7 | 9 |
Activity
10/15/2018
- 07:19 PM Ceph Bug #15386: bluestore+spdk results in osd abort with _read_fsid unparsable uuid
- Tracing through the source code it looks like I'm barking up the wrong tree. I believe this is saying the /var/lib/c...
- 07:04 PM Ceph Bug #15386: bluestore+spdk results in osd abort with _read_fsid unparsable uuid
- Note this also occurs with ceph 12.2.7 on bionic. It was installed via cloud archive:...
- 06:00 PM Ceph Bug #15386: bluestore+spdk results in osd abort with _read_fsid unparsable uuid
- I was able to reproduce this locally on a virtual machine with lvm on ubuntu 18.04. It seems that LVM produces UUID'...
04/17/2018
- 02:09 AM RADOS Documentation #23765 (New): librbd hangs if permissions are incorrect
- I've been building rust bindings for librbd against ceph jewel and luminous. I found out by accident that if a cephx...
10/02/2017
- 05:52 PM Ceph Bug #18478: "FAILED assert(crypto_context != __null)" in rados-kraken-distro-basic-smithi
- I'm also seeing this on jewel radosgw's:...
11/08/2016
- 09:35 PM Ceph Bug #17829 (Closed): CephFS does not support setting extended attributes
- I deployed a jewel ceph cluster with CephFS and mounted it with ceph-fuse. Using setfattr I was unable to set any ex...
- 09:25 PM CephFS Bug #17828 (Need More Info): libceph setxattr returns 0 without setting the attr
- Using the jewel libcephfs python bindings I ran the following code snippet:...
07/21/2016
- 03:27 PM Ceph Bug #16755: ceph-disk: encryption assumes admin key is present
- Put up a PR: https://github.com/ceph/ceph/pull/10382
07/20/2016
- 03:33 PM Ceph Bug #16755 (Resolved): ceph-disk: encryption assumes admin key is present
- When testing ceph-disk in jewel, new behavior was noticed where it assumes an admin key is present on all the osd ...
12/03/2015
- 06:34 PM Ceph Bug #13972: osd/ECUtil.h: 117: FAILED assert(old_size == total_chunk_size) in 0.80.10
- After rolling the rgw nodes the cluster went back to active+clean and the osds aren't crashing anymore.