ceph ceph

  • Registered on: 10/19/2018
  • Last connection: 06/09/2022

11:04 AM Ceph Support #55393 (New): Ceph space is not getting reclaimed after snaptrimming
We are running Luminous 12.2.13 in a production environment with 3 nodes. We were trying to restore a vo...


02:18 PM RADOS Bug #52618 (Won't Fix - EOL): Ceph Luminous 12.2.13 OSD assert message
2021-09-02 14:25:37.173453 7f2235baf700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/...


09:23 AM RADOS Support #49489 (New): Getting Long heartbeat and slow requests on ceph luminous 12.2.13
1. The current environment is integrated with Ceph and OpenStack
2. It has NVMe and SSD disks only
3. We have created fo...


01:21 PM RADOS Bug #47204: ceph osd getting shut down after joining the cluster
Uploaded a screenshot of the ceph osd logs for reference
01:20 PM RADOS Bug #47204 (New): ceph osd getting shut down after joining the cluster
After adding new disks to an existing cluster running luminous 12.2.12 (old servers), with one more node addition wi...


09:45 AM Ceph Bug #41331 (Rejected): ceph monitor not going back to quorum
After being disconnected from the network for 2 days, one ceph monitor node went into the probing state. I have tried everything to ...


10:16 AM mgr Bug #36531 (Closed): 'MAX AVAIL' in 'ceph df' showing wrong information
I have a ceph cluster running with 18 OSDs, and I created 3 pools with a replicated profile. But the MAX AVAIL is showing...
