Feature #2574
crowbar: use data disks automatically, journal inside data directory
Status: Closed
0% done
Description
Crowbar sets node['crowbar']['disks'] to an array of disks. The first one is used for the OS, and its disk['usage'] is set to 'OS'.
Use all disks in node['crowbar']['disks'] that don't have 'usage' set, and set their usage to 'ceph-osd'. Use the $osd_data_dir/journal file as the journal, for now.
Warning: if Swift is installed on the same node, it'll try to use the data disks too! Make barclamp-ceph refuse to use those nodes?
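The selection rule above can be sketched in Ruby. This is a minimal illustration, not the actual barclamp code: the node hash, disk names, and data-dir path below are stand-ins, and only the attribute keys named in the ticket (node['crowbar']['disks'], 'usage') come from the description.

```ruby
# Stand-in for the Chef node object; layout follows the ticket's description.
node = {
  'crowbar' => {
    'disks' => [
      { 'name' => 'sda', 'usage' => 'OS' },     # first disk, claimed by the OS
      { 'name' => 'sdb' },                      # unclaimed -> becomes an OSD
      { 'name' => 'sdc', 'usage' => 'swift' },  # claimed elsewhere, skipped
      { 'name' => 'sdd' },                      # unclaimed -> becomes an OSD
    ]
  }
}

# Take only disks that nobody has claimed yet (no 'usage' key).
osd_disks = node['crowbar']['disks'].reject { |disk| disk.key?('usage') }

osd_disks.each do |disk|
  disk['usage'] = 'ceph-osd'                       # mark the disk as taken
  osd_data_dir = "/var/lib/ceph/osd/#{disk['name']}" # illustrative path
  journal = File.join(osd_data_dir, 'journal')     # journal file inside the data dir
  puts "#{disk['name']}: data=#{osd_data_dir} journal=#{journal}"
end
```

Note that a disk already claimed by Swift (or anything else) carries a 'usage' value and is filtered out, which is exactly why the Swift warning above matters: Swift only cooperates if it marks its disks the same way.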
Updated by Sage Weil almost 12 years ago
- Position set to 3
Updated by Sage Weil almost 12 years ago
- Position deleted (3)
- Position set to 1
Updated by Sage Weil almost 12 years ago
- Story points set to 3
- Position deleted (2)
- Position set to 2
Updated by Sage Weil almost 12 years ago
- Position deleted (14)
- Position set to 1
Updated by Anonymous almost 12 years ago
- Position deleted (6)
- Position set to 2
Updated by Sage Weil almost 12 years ago
- Project changed from Ceph to devops
- Category deleted (chef)
Updated by Anonymous almost 12 years ago
- Target version set to v0.50
- Position deleted (2)
- Position set to 84
Updated by Anonymous almost 12 years ago
- Status changed from New to In Progress
- Assignee set to JuanJose Galvez
Updated by Anonymous almost 12 years ago
- Target version changed from v0.50 to v0.51
- Position deleted (91)
- Position set to 2
Updated by Anonymous almost 12 years ago
- Position deleted (3)
- Position set to 1
Updated by JuanJose Galvez almost 12 years ago
The most recent pull request for the cookbook has been tested by Tyler and me. I've set up the following situations during my testing:
3 mons with 1 OSD.
3 mons with 3 OSDs on the mon nodes.
3 mons with 6 OSDs, none on the mon nodes.
3 mons with 9 OSDs, three of those on the mon nodes.
I also set up nodes using Swift and verified that their disks are skipped.
All of these end with a working Ceph cluster.
Updated by Anonymous almost 12 years ago
- Status changed from In Progress to Resolved
There were bugs, and the history was wrecked by GitHub pull requests again, so I redid some commits. This functionality is now in, except for the unrelated bugfix that did weird things with df, which I want to understand better before merging.