Bug #45980
cephadm: implement missing "FileStore not supported" error message and update DriveGroup docs
Status: Closed
Description
This one is easy to reproduce.
Ask cephadm to create FileStore OSDs:
master:~ # cat service_spec_osd.yml
service_type: osd
placement:
  hosts:
    - 'master'
service_id: generic_osd_deployment
data_devices:
  all: true
objectstore: filestore
master:~ # ceph orch device ls --refresh
master:~ # ceph orch apply osd -i service_spec_osd.yml
Wait awhile. Note that OSDs do not come up. Investigate. The ceph-volume.log contains a Python Traceback:
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 150, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 341, in main
    self.activate(args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 265, in activate
    activate_bluestore(lvs, no_systemd=args.no_systemd)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 128, in activate_bluestore
    raise RuntimeError('could not find a bluestore OSD to activate')
RuntimeError: could not find a bluestore OSD to activate
Hmm, I can't think of any good reason why ceph-volume should be saying that in this scenario.
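Reading the traceback, the likely mechanism is that the lvm activate path only knows how to activate BlueStore: the FileStore OSDs were never created, so there are no BlueStore LVs either, and activation raises. A minimal sketch of that failure mode (hypothetical illustration only, not the actual ceph-volume source; the function signature and LV shape are made up):

```python
# Hypothetical sketch of the failure mode seen in the traceback above;
# this is NOT the real ceph-volume code, just an illustration.

def activate(osd_lvs, requested_objectstore="filestore"):
    # The requested objectstore is effectively ignored: the activate
    # path assumes BlueStore and looks only for BlueStore block LVs.
    bluestore_lvs = [lv for lv in osd_lvs if lv.get("type") == "block"]
    if not bluestore_lvs:
        # This is the error surfaced in ceph-volume.log, even though
        # the real problem is that FileStore was requested.
        raise RuntimeError("could not find a bluestore OSD to activate")
    return bluestore_lvs
```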
The underlying physical disks I am using are 32 GB in size, yet "ceph orch device ls" complains they are too small (?):
master:~ # ceph orch device ls
HOST    PATH      TYPE  SIZE   DEVICE  AVAIL  REJECT REASONS
master  /dev/vda  hdd   42.0G          False  locked
master  /dev/vdb  hdd   32.0G  615389  False  LVM detected, locked, Insufficient space (<5GB) on vgs
master  /dev/vdc  hdd   32.0G  916054  False  LVM detected, locked, Insufficient space (<5GB) on vgs
master  /dev/vdd  hdd   32.0G  389182  False  LVM detected, locked, Insufficient space (<5GB) on vgs
master  /dev/vde  hdd   32.0G  389751  False  LVM detected, locked, Insufficient space (<5GB) on vgs
Updated by Tim Serong almost 4 years ago
https://docs.ceph.com/docs/master/cephadm/adoption/#limitations says "Cephadm only works with BlueStore OSDs. If there are FileStore OSDs in your cluster you cannot manage them", which made me think that FileStore is flat out unsupported. Then I looked at https://docs.ceph.com/docs/master/cephadm/drivegroups/, which says you can specify objectstore = filestore or bluestore in a DriveGroup. So, are the docs out of date, or is cephadm broken?
Updated by Jan Fajerski almost 4 years ago
DriveGroups allow specifying filestore; they are not that tightly coupled to cephadm.
I'd argue cephadm should detect when someone is trying to deploy FileStore and fail with a clear error message. It certainly should not just go ahead and try to activate BlueStore.
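One way to implement that guard, sketched as hypothetical Python (the function name and the dict-shaped spec are illustrative assumptions, not the actual cephadm API):

```python
def validate_objectstore(drive_group_spec):
    """Reject DriveGroup specs that request FileStore up front,
    instead of failing later with a misleading BlueStore error.

    Hypothetical validation sketch; not actual cephadm code.
    """
    objectstore = drive_group_spec.get("objectstore", "bluestore")
    if objectstore == "filestore":
        raise ValueError(
            "cephadm only supports BlueStore OSDs; "
            "'objectstore: filestore' is not supported"
        )
    return objectstore
```

With a check like this at spec-application time, `ceph orch apply osd -i service_spec_osd.yml` could fail immediately with a clear message instead of silently leaving a BlueStore traceback in ceph-volume.log.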
Updated by Sebastian Wagner almost 4 years ago
- Related to Feature #44874: cephadm: add Filestore support added
Updated by Sebastian Wagner almost 4 years ago
- Subject changed from cephadm fails to deploy FileStore OSDs ("RuntimeError: could not find a bluestore OSD to activate" in ceph-volume.log) to cephadm: Filestore: improve error message ("RuntimeError: could not find a bluestore OSD to activate" in ceph-volume.log)
- Tags set to ux low-hanging-fruit
Updated by Nathan Cutler almost 4 years ago
- Subject changed from cephadm: Filestore: improve error message ("RuntimeError: could not find a bluestore OSD to activate" in ceph-volume.log) to cephadm: implement missing "FileStore not supported" error message
Updated by Nathan Cutler almost 4 years ago
- Subject changed from cephadm: implement missing "FileStore not supported" error message to cephadm: implement missing "FileStore not supported" error message and update DriveGroup docs
Updated by Joshua Schmid almost 4 years ago
- Status changed from New to In Progress
- Assignee set to Joshua Schmid
Updated by Joshua Schmid almost 4 years ago
- Status changed from In Progress to Resolved
Updated by Sebastian Wagner almost 4 years ago
- Status changed from Resolved to Pending Backport
Updated by Sebastian Wagner over 3 years ago
- Status changed from Pending Backport to Resolved
- Target version set to v15.2.5