Petr Malkov
- Login: petr108m
- Registered on: 01/31/2017
- Last sign in: 01/15/2019
Issues
| | Open | Closed | Total |
|---|---|---|---|
| Assigned issues | 0 | 0 | 0 |
| Reported issues | 3 | 5 | 8 |
Activity
01/15/2019
- 12:37 PM Ceph Support #37918 (Closed): apt-get upgrade 12.2.8 to 12.2.10 failed
- Debian 10 (buster)
# ceph -v
ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)
# ...
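(Not part of the original report, but as a rough sketch of the usual single-node luminous point-release upgrade; package and unit names are assumed, restart order follows the standard mon, then mgr, then osd convention.)
apt-get update
apt-get upgrade                       # pulls in the 12.2.10 packages
systemctl restart ceph-mon.target     # restart monitors first
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target
ceph versions                         # every daemon should now report 12.2.10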
12/20/2018
- 09:24 AM Ceph Bug #37712: Failed to execute command: systemctl enable ceph-mgr@
- I solved it by
cd /lib/systemd/system
wget https://sources.debian.org/data/main/c/ceph/12.2.8+dfsg1-5/systemd/cep...
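(A hedged sketch of the rest of that workaround, not quoted from the comment: once the unit files are in /lib/systemd/system, reload systemd and enable/start the mgr; the id d10-mpv1 is the mgr name that appears later in this thread.)
systemctl daemon-reload                 # pick up the newly downloaded unit files
systemctl enable ceph-mgr@d10-mpv1
systemctl start ceph-mgr@d10-mpv1
systemctl status ceph-mgr@d10-mpv1      # confirm the daemon is running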
12/19/2018
- 03:25 PM Ceph Bug #37712: Failed to execute command: systemctl enable ceph-mgr@
- maybe it is connected:
later, following
http://docs.ceph.com/docs/luminous/install/manual-deployment/
I created a filestore OSD
...
- 02:34 PM Ceph Bug #37712: Failed to execute command: systemctl enable ceph-mgr@
- apt list --installed | grep ceph
ceph-base/testing,now 12.2.8+dfsg1-5 amd64 [installed,automatic]
ceph-common/tes...
- 02:21 PM Ceph Bug #37712: Failed to execute command: systemctl enable ceph-mgr@
- however I can do
ceph-mgr -i d10-mpv1
ceph -s
cluster:
id: 8174d073-d7f9-4dc1-a238-d31e5af6beff
...
- 02:16 PM Ceph Bug #37712 (New): Failed to execute command: systemctl enable ceph-mgr@
- cat /etc/*release
PRETTY_NAME="Debian GNU/Linux buster/sid"
ceph manually installed
apt-get install ceph
from de...
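(Not from the report: a sketch of how one could confirm which suite those manually installed packages came from; apt-cache policy and dpkg are standard Debian tools.)
apt-cache policy ceph ceph-base ceph-common   # shows the repository/suite each package was pulled from
dpkg -l | grep '^ii.*ceph'                    # exact installed versions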
10/09/2017
- 05:50 PM CephFS Bug #21734 (Duplicate): mount client shows total capacity of cluster but not of a pool
- SERVER:
ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
44637G...
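(Not from the report: with a sufficiently recent kernel or FUSE client, a CephFS directory quota makes df on that subtree report the quota instead of the whole cluster; the mount path and size below are illustrative assumptions.)
setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/mydir   # 100 GiB quota on the directory
getfattr -n ceph.quota.max_bytes /mnt/cephfs/mydir                   # verify the quota was set
df -h /mnt/cephfs/mydir                                              # now reports the quota, not the cluster total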
09/05/2017
- 11:48 AM Ceph Bug #21245 (New): target_max_bytes doesn't limit a tiering pool
- the tiered pool is limited to 2.5 GB
ceph osd pool set default.rgw.buckets.data target_max_bytes 2511627776
after uplo...
- 11:09 AM RADOS Bug #21243 (Resolved): incorrect erasure-code space in command ceph df
- ceph osd erasure-code-profile set ISA plugin=isa k=2 m=2 crush-failure-domain=host crush-device-c...
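(Not from the report: with k=2, m=2 each object is stored as 2 data plus 2 coding chunks, so usable space is roughly k/(k+m) = 50% of raw. A sketch of creating a pool from such a profile; the profile name, pool name, and PG count are illustrative.)
ceph osd erasure-code-profile set ec22 plugin=isa k=2 m=2 crush-failure-domain=host
ceph osd pool create ecpool 64 64 erasure ec22
ceph df                               # MAX AVAIL for ecpool should reflect the 2+2 overhead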
07/10/2017
- 01:53 PM Ceph Bug #20559: crush_ruleset is invalid command in luminous
- actually it was changed to
crush_rule <name>
ceph osd pool set data crush_rule ssd
so you may close this issue
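(Not from the comment: a sketch of the luminous-era workflow around that command; the rule name ssd and the device class are assumptions.)
ceph osd crush rule create-replicated ssd default host ssd   # replicated rule restricted to ssd devices
ceph osd pool set data crush_rule ssd
ceph osd pool get data crush_rule                            # confirm the pool uses the new rule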