Tobias Fischer

Activity

11/19/2020

03:51 PM Ceph Bug #48297 (New): OSD process using up complete available memory after pg_num change / autoscaler on
We made the following change on our cluster (cephadm, Octopus 15.2.5):
ceph osd pool set one pg_num 512
After some ti...
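A minimal sketch of how to reproduce and inspect this, assuming the pool is still named "one"; the autoscaler and memory-target checks are additions for context, not part of the original report:
ceph osd pool set one pg_num 512        # the change that preceded the memory growth
ceph osd pool autoscale-status          # check whether the autoscaler wants to move PG counts further
ceph config get osd osd_memory_target   # the per-OSD memory budget the daemons should stay near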

10/06/2020

03:13 PM Orchestrator Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
running "ceph config-key rm mgr/cephadm/osd_remove_queue" and restarting active mgr fixed the issue - "ceph orch" wor...
07:50 AM Orchestrator Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
Running the latest version:
"overall": "ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)"...
07:45 AM Orchestrator Bug #47580: cephadm: "Error ENOENT: Module not found": TypeError: type object argument after ** m...
Having the same problem here:
Added a new host and OSDs yesterday evening. While the cluster was still rebalancing, removed a...
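The comment above is cut off, so the exact removal step is not visible; as a hedged illustration only, a generic cephadm sequence for the operations mentioned looks like this (host name, device and OSD id are placeholders):
ceph orch host add ceph-new1                  # add the new host to the orchestrator
ceph orch daemon add osd ceph-new1:/dev/sdb   # create an OSD on one of its devices
ceph orch osd rm 12                           # queue an OSD for removal while data rebalances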

07/29/2020

03:50 PM Orchestrator Support #46758: ERROR: hostname "rgw1" does not match expected hostname "rgw1.clyso.cloud"
My error. I should have added the host like this: ceph orch host add rgw1 rgw1.clyso.cloud
Please delete.
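For reference, the general form is hostname first and an optional address second; a hedged example with placeholder names:
ceph orch host add <short-hostname> <fqdn-or-ip>   # the short name should match the host's own hostname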
02:56 PM Orchestrator Support #46758 (Resolved): ERROR: hostname "rgw1" does not match expected hostname "rgw1.clyso.cl...
Looks like cephadm checks /etc/hostname instead of hostname -f:...
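A quick way to see the mismatch on the affected host; this is my reading of the behaviour from the error message, not taken from the cephadm code:
cat /etc/hostname   # what cephadm appears to compare against
hostname -f         # the fully qualified name
hostname -s         # the short name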
02:44 PM Orchestrator Bug #46098: Exception adding host using cephadm
Same here. Trying to add a fresh Debian Buster VM with all updates installed (no additional packages like Docker pres...
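Before retrying the add, a hedged pre-flight check from the admin node; "buster-vm" is a placeholder for the new host:
ceph cephadm check-host buster-vm   # verifies SSH reachability and the container/LVM/time-sync prerequisites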

05/30/2020

03:29 PM bluestore Bug #44359: Raw usage reported by 'ceph osd df' incorrect when using WAL/DB on another drive
Same here. Fresh cluster, completely empty. "Raw Use" corresponds to the size of the DB+WAL/DB partition located on a separate...
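For comparison, the figures in question come from the per-OSD usage report; on an empty cluster RAW USE would be expected to show only a small BlueStore overhead rather than the full DB/WAL partition size:
ceph osd df      # per-OSD SIZE / RAW USE / DATA / OMAP / META breakdown
ceph df detail   # pool-level view for cross-checking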

05/20/2020

01:59 PM Orchestrator Bug #45032: cephadm: Not recovering from `OSError: cannot send (already closed?)`
Fixed after a reboot of the active mgr:
root@one1-ceph4.storage.:~# ceph cephadm check-host one1-ceph5
one1-ceph5 (None)...
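Once the active mgr is back, the remaining hosts can be re-checked the same way without another reboot:
ceph orch host ls                    # hosts known to the orchestrator
ceph cephadm check-host one1-ceph4   # repeat the per-host check for each node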
01:51 PM Orchestrator Bug #45032: cephadm: Not recovering from `OSError: cannot send (already closed?)`
Same here after a reboot of the hosts:
root@one1-ceph4.storage.:~# ceph cephadm check-host one1-ceph4
one1-ceph4 ...
