Activity
From 12/14/2020 to 01/12/2021
01/12/2021
- 01:20 PM Bug #47758: fail to create OSDs because the requested extent is too large
- Hello @Jan,
I wanted to report that we hit this bug while instantiating a new cluster with physical SSDs.
I have appli...
- 08:37 AM Bug #48797: lvm batch calculates wrong extents
- Seems like a duplicate of https://tracker.ceph.com/issues/47758
Btw. I tried this in a rook-ceph cluster with Helm...
01/08/2021
- 10:56 AM Bug #48797 (Duplicate): lvm batch calculates wrong extents
- With version 15.2.8 rook-ceph-osd-prepare cannot create the configured OSD disks on the specific node because of lvm ...
01/07/2021
- 01:21 PM Bug #48783: raw OSDs are not started on boot after upgrade from 14.2.11 to 14.2.16 ; ceph-volume...
- Hello
if one starts the OSDs manually with the method above...
- 10:16 AM Bug #48783 (New): raw OSDs are not started on boot after upgrade from 14.2.11 to 14.2.16 ; ceph-...
- Hello
After upgrading ceph-osd from 14.2.11 to 14.2.16, the raw OSDs on the node do not autostart on boot any lo...
12/22/2020
- 02:02 PM Bug #47758 (Fix Under Review): fail to create OSDs because the requested extent is too large
- OK, I proposed a PR that I think fixes this bug. I would appreciate it if you could run the previous test to confirm.
- 08:55 AM Bug #48697 (Resolved): Ceph-volume reports a device as available and the device cannot be used to...
- Two possibilities: report the device as not available for this cause (GPT headers), or fix the lvm batch command in o...
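The GPT-header case mentioned above can be sketched as follows. This is a hypothetical illustration, not ceph-volume's actual code: `has_gpt_header` is an invented helper, and it assumes 512-byte logical sectors (4K-native drives place the header at offset 4096).

```python
# Minimal sketch (assumption: not ceph-volume's real check): detect leftover
# GPT metadata by looking for the "EFI PART" signature at the start of LBA 1.
# A device still carrying this signature should not be reported as available.
GPT_SIGNATURE = b"EFI PART"
SECTOR_SIZE = 512  # assumed logical sector size

def has_gpt_header(dev_path: str) -> bool:
    """Return True if the block device (or image file) carries a GPT header."""
    with open(dev_path, "rb") as dev:
        dev.seek(SECTOR_SIZE)  # the GPT header lives in LBA 1
        return dev.read(len(GPT_SIGNATURE)) == GPT_SIGNATURE
```

In practice a tool would combine such a probe with blkid/LVM metadata checks before deciding whether a device is safe to consume.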
12/17/2020
- 03:35 PM Backport #48650 (Resolved): nautilus: blkid holds old entries in cache
- https://github.com/ceph/ceph/pull/41114
- 03:35 PM Backport #48649 (Resolved): octopus: blkid holds old entries in cache
- https://github.com/ceph/ceph/pull/41115
- 03:32 PM Bug #48464 (Pending Backport): blkid holds old entries in cache
- 03:04 PM Bug #48648 (Resolved): fix typo in batch log message
- @s/to small/too small/@ in devices/lvm/batch.py:105
12/16/2020
- 04:02 PM Bug #48631 (Fix Under Review): drive-group subcommand potentially passes root disk to batch
- 02:55 PM Bug #48631 (Resolved): drive-group subcommand potentially passes root disk to batch
- # ceph-volume drive-group --spec '{"data_devices":{"all":true},"placement":{"host_pattern":"*"},"service_id":"default...
12/14/2020
- 10:21 PM Bug #44494 (Resolved): prepare: the *-slots arguments have no effect
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Bug #47966 (Resolved): Fails to deploy osd in rook, throws index error
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Bug #48018 (Resolved): ceph-volume simple activate ignores osd_mount_options_xfs for Filestore OSD
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Fix #48039 (Resolved): remove mention of dmcache from docs and help text
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Bug #48045 (Resolved): the --log-level flag is not respected
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:18 PM Bug #48271 (Resolved): ceph-volume lvm batch fails activating filestore dmcrypt osds
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...