- Email: firstname.lastname@example.org
- Registered on: 02/06/2011
- Last connection: 10/25/2019
- 02:31 AM Linux kernel client Bug #21420: ceph_osdc_writepages(): pre-allocated osdc->msgpool_op messages vs large number of sn...
- Is the 400 snapshot limit per file system, or per directory, or per something else?
- 11:09 PM RADOS Support #22531: OSD flapping under repair/scrub after receive inconsistent PG LFNIndex.cc: 439: F...
- For the record...
I was also suffering from this problem on a pg repair, because I was following the procedure...
- 03:37 PM rbd Bug #7790 (Resolved): Kernel panic when creating ZFS pools on CEPH RBD devices
- Creating a ZFS pool on top of krbd causes a kernel panic.
From the ZFSonLinux bug tracker (https://github.com/zfso...
- 10:42 PM Ceph Bug #6233: OSD crash during repair
- Was missing xattrs:
2013-09-06 09:30:19.813811 7f0ae8cbc700 0 log [INF] : applying configuration change: internal...
- 07:46 PM Ceph Bug #6233: OSD crash during repair
- The pg being repaired at the time is 2.12, which 'ceph pg dump' tells me lives on [6,7]. Attached log is the output a...
- 05:11 PM Ceph Bug #6233 (Closed): OSD crash during repair
- On 0.56.7-1~bpo70+1, whilst trying to repair an OSD:
2013-09-05 09:19:33.020619 7f540a12d700 0 log [ERR] : 2.12 r...
- 11:20 PM rbd Bug #3737: Higher ping-latency observed in qemu with rbd_cache=true during disk-write
- Sigh. The attachment might help...
- 11:18 PM rbd Bug #3737: Higher ping-latency observed in qemu with rbd_cache=true during disk-write
- Confirmed here, with ceph-0.56.3 and qemu-1.3.1.
See attached test output.
In summary, the average ping time,...
- 10:27 PM Ceph Bug #3286: librbd, kvm, async io hang
- OK, with ceph next @ bc32fc42 (and rbd_cache_size=33554433) it completed the full fio test several times, in contrast...
- 03:27 PM Ceph Bug #3286: librbd, kvm, async io hang
- Josh Durgin wrote:
> Unfortunately I haven't been able to reproduce this myself or track down the cause yet. Would i...