Martin Millnert
- Login: millnert
- Registered on: 08/06/2017
- Last sign in: 05/18/2020
Issues
| | open | closed | Total |
|---|---|---|---|
| Assigned issues | 0 | 0 | 0 |
| Reported issues | 1 | 4 | 5 |
Activity
05/18/2020
- 12:53 AM mgr Bug #45574: subinterpreters: ceph/mgr/rook RuntimeError on import of RookOrchestrator - ceph cluster does not start
- Update: it was a red herring that this bug prevented the cluster from starting.
I was doing an upgrade from Luminous through Nauti...
05/17/2020
- 07:34 PM mgr Bug #45574 (New): subinterpreters: ceph/mgr/rook RuntimeError on import of RookOrchestrator - ceph cluster does not start
- The 'devicehealth' plugin's dependency on rook (package ceph-mgr-rook) code causes a cluster to not boot, after upgra...
03/18/2019
- 06:55 PM RADOS Bug #21174: OSD crash: 903: FAILED assert(objiter->second->version > last_divergent_update)
- Err. I believe I mixed up two different bugs, please disregard my previous comment. I don't currently recall what I ...
- 06:52 PM RADOS Bug #21174: OSD crash: 903: FAILED assert(objiter->second->version > last_divergent_update)
- For completeness: The root cause of the crashes I experienced was that I had oversized RADOS objects (2-10GB, max ...
09/19/2017
- 09:01 AM Ceph Bug #21430 (Closed): ceph-fuse blocked OSD op threads => OSD restart loop
- I can now seemingly easily reproduce a trigger of OSD meltdowns from what seems to be blocked OSD op threads, using a...
08/30/2017
- 02:31 PM Ceph Bug #21023: BlueStore-OSDs marked as destroyed in OSD-map after v12.1.1 to v12.1.4 upgrade
- Right, it was never a problem that the OSD remained in the OSD-map. The issue is that 12.1.x (x < 4) didn't care abou...
- 02:12 PM RADOS Bug #21174: OSD crash: 903: FAILED assert(objiter->second->version > last_divergent_update)
- To clarify then: I have not tested this with a replicated cephfs data pool. Only tested with ec data pool as per my 4...
- 06:12 AM RADOS Bug #21174 (Rejected): OSD crash: 903: FAILED assert(objiter->second->version > last_divergent_update)
- I've set up a cephfs erasure coded pool on a small cluster consisting of 5 bluestore OSDs.
The pools were created as ...
08/28/2017
- 08:51 AM Ceph Bug #21023: BlueStore-OSDs marked as destroyed in OSD-map after v12.1.1 to v12.1.4 upgrade
- Here's an update from the bug submitter.
1. The osdmaptool approach failed since there is no longer any command to...
08/17/2017
- 05:20 PM Ceph Bug #21023: BlueStore-OSDs marked as destroyed in OSD-map after v12.1.1 to v12.1.4 upgrade
- Update:
'ceph osd new' does return 0 if I simply skip giving it an auth JSON file. But it doesn't clear the 'destr...