Siegfried Hoellrigl
- Login: Siegfried.Hoellrigl
- Registered on: 05/22/2018
- Last sign in: 09/26/2018
Issues
| | open | closed | Total |
|---|---|---|---|
| Assigned issues | 0 | 0 | 0 |
| Reported issues | 0 | 2 | 2 |
Activity
09/26/2018
- 04:12 PM Ceph Bug #24227: jewel->luminous: osd/PrimaryLogPG.cc: 358: FAILED assert(p != recovery_info.ss.clone_snaps.end())
- Hi!
In the meantime we have upgraded to Ceph 12.2.8.
In ceph.conf "osd distrust data digest = true" is still s...
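The entry above references a ceph.conf option. A minimal sketch of how it might appear; the `[osd]` section placement is an assumption, so verify against your own cluster's configuration:

```ini
# Sketch only: the option mentioned in the comment above.
# Placement in [osd] is assumed; it applies to OSD daemons.
[osd]
osd distrust data digest = true
```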
09/05/2018
- 12:34 PM CephFS Support #35694 (Rejected): CephFS stops working after upgrade from 12.2.7 to 12.2.8
- Hi!
We have done an upgrade from 12.2.7 to 12.2.8 on Ubuntu 14.04.5 LTS (amd64),
in the order MON-OSD-MDS-MGR-RA...
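The restart order described above can be sketched as a dry-run loop over the standard Ceph systemd targets. This is a hypothetical illustration, not the reporter's actual procedure; the final (truncated) daemon type is assumed to be the RADOS Gateway, and the commands are only printed, not executed:

```shell
#!/bin/sh
# Hypothetical sketch of a per-daemon-type rolling restart in the
# order mentioned in the report: MON, OSD, MDS, MGR, then (assumed) RGW.
order="ceph-mon.target ceph-osd.target ceph-mds.target ceph-mgr.target ceph-radosgw.target"

for target in $order; do
  # Dry run: print the restart command instead of running it.
  echo "systemctl restart $target"
done
```

In a real upgrade you would wait for the cluster to return to HEALTH_OK between daemon types rather than looping straight through.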
06/11/2018
- 10:48 AM Ceph Bug #24227: jewel->luminous: osd/PrimaryLogPG.cc: 358: FAILED assert(p != recovery_info.ss.clone_snaps.end())
- Hi!
After a while (the cluster was HEALTH_OK), scrubbing (or deep scrubbing?) started.
Now we have a s...
06/06/2018
- 08:28 AM Ceph Bug #24227: jewel->luminous: osd/PrimaryLogPG.cc: 358: FAILED assert(p != recovery_info.ss.clone_snaps.end())
- Hi!
We have now recompiled ceph-12.2.5 with the patch from https://tracker.ceph.com/issues/24396
and started thi...
05/28/2018
- 06:28 AM Ceph Bug #24227: jewel->luminous: osd/PrimaryLogPG.cc: 358: FAILED assert(p != recovery_info.ss.clone_snaps.end())
- Hi!
Is there any workaround to fix the problem?
05/23/2018
- 02:37 PM Ceph Bug #24227: jewel->luminous: osd/PrimaryLogPG.cc: 358: FAILED assert(p != recovery_info.ss.clone_snaps.end())
- Yes, 12.2.5. But most of them are still FileStore. (OSD.130 is already a BlueStore OSD.)
root@ceph-m-03:~# ceph versions
{...
05/22/2018
- 01:25 PM Ceph Bug #24227 (Won't Fix): jewel->luminous: osd/PrimaryLogPG.cc: 358: FAILED assert(p != recovery_info.ss.clone_snaps.end())
- Hi!
We have done an (almost) successful upgrade to Ceph Luminous 12.2.5.
The cluster becomes almost healthy. B...