Feature #11547
Add ceph-deploy special workunit
0%
Description
We currently use the rados, rbd, etc. workunits as part of the ceph-deploy qa suite.
It would be more effective and time-efficient to design and add a workunit specifically for the ceph-deploy suite.
History
#1 Updated by Yuri Weinstein almost 9 years ago
- Description updated (diff)
#2 Updated by Travis Rhoden almost 9 years ago
+1
The ceph-deploy suite is exercising ceph more than it is exercising ceph-deploy. In fact, ceph-deploy is not even being used to install packages because the packages are pre-installed by the time ceph-deploy runs.
Rather than running workloads against the ceph cluster, the ceph-deploy suite should check that packages can be installed by ceph-deploy, that monitors and OSDs can be created (this it does already), and that the cluster reaches HEALTH_OK. Once HEALTH_OK is reached, that should be the end of the test.
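The minimal test proposed above (install, create mons/OSDs, wait for HEALTH_OK, stop) could end with a small polling helper along these lines. This is only a sketch: `wait_health_ok` is a hypothetical function name, and `CEPH_HEALTH_CMD` is a hypothetical override added so the loop can be exercised without a live cluster (by default it runs `ceph health`):

```shell
#!/bin/sh
# wait_health_ok [TIMEOUT_SECONDS]
# Poll `ceph health` until it reports HEALTH_OK, or fail after the timeout.
# CEPH_HEALTH_CMD is a hypothetical override so the loop can be tested
# without a live cluster; by default the real `ceph health` is run.
wait_health_ok() {
  timeout=${1:-300}
  interval=5
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    # Unquoted on purpose so an override like "echo HEALTH_OK" word-splits.
    if ${CEPH_HEALTH_CMD:-ceph health} | grep -q HEALTH_OK; then
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "cluster did not reach HEALTH_OK within ${timeout}s" >&2
  return 1
}
```

A workunit built this way would exit nonzero on timeout, which teuthology already treats as a test failure.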
#3 Updated by Sage Weil almost 9 years ago
I think we should gather proper requirements for what ceph_deploy.py should test, and fix/rewrite it accordingly.
As for the workload we run after the install, I'm not sure it matters too much.
#4 Updated by Yuri Weinstein almost 9 years ago
As a starting point added ceph-deploy/ceph-deploy_hello_world workunit, we can enhance it https://github.com/ceph/ceph/commit/13abae186357f4e9bb40990a7a212f93ec2e1e79
#5 Updated by Tamilarasi muthamizhan over 8 years ago
well, the workloads [rados/rbd/rgw] were originally added to the ceph-deploy task as a way to ensure the test case is complete:
1. ceph-deploy installs the ceph branch mentioned
2. creates a cluster
3. runs workunits [1 or 2 for each module, e.g. rados, rbd and rgw]
in addition to this, the ceph-deploy task does some additional testing of enabling/disabling dmcrypt, and of using either a separate disk for journal and data or the same disk for both.
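The journal-placement and dmcrypt variants mentioned above map onto ceph-deploy's HOST:DATA[:JOURNAL] disk spec. A sketch, where the hostnames and devices are placeholders and `osd_disk_spec` is a hypothetical helper used only to illustrate the spec format:

```shell
#!/bin/sh
# osd_disk_spec HOST DATA [JOURNAL]
# Build the HOST:DATA[:JOURNAL] disk spec that ceph-deploy's osd
# subcommands accept. Hypothetical helper, for illustration only.
osd_disk_spec() {
  host=$1; data=$2; journal=$3
  if [ -n "$journal" ]; then
    echo "$host:$data:$journal"   # separate journal device
  else
    echo "$host:$data"            # data and journal share the device
  fi
}

# Assumed invocations (node1, /dev/sdb, /dev/sdc are placeholders):
#   separate journal:  ceph-deploy osd create "$(osd_disk_spec node1 /dev/sdb /dev/sdc)"
#   shared device:     ceph-deploy osd create "$(osd_disk_spec node1 /dev/sdb)"
#   with encryption:   ceph-deploy osd create --dmcrypt "$(osd_disk_spec node1 /dev/sdb /dev/sdc)"
```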
the only thing that needs to be added for testing ceph-deploy is the ceph-deploy rgw command:
1. use the ceph-deploy rgw command
2. verify that rgw is installed and configured right
3. run rgw workunits to confirm
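One cheap way to do the "configured right" check above: after deploying the gateway, an anonymous HTTP request against it typically comes back as S3-style XML. A sketch, where `looks_like_rgw` is a hypothetical helper and the node name and port in the usage comment are assumptions (7480 being civetweb's usual default):

```shell
#!/bin/sh
# looks_like_rgw BODY
# Heuristic check that an HTTP response body came from radosgw: anonymous
# requests are normally answered with S3-style XML, either a
# ListAllMyBucketsResult listing or an <Error> document.
# Hypothetical helper, for illustration only.
looks_like_rgw() {
  printf '%s' "$1" | grep -qE 'ListAllMyBucketsResult|<Error>'
}

# Assumed usage against a freshly deployed gateway:
#   looks_like_rgw "$(curl -s http://node1:7480/)" && echo "rgw is up"
```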
but since we would soon be moving to a new installer for ceph, adding anything to ceph-deploy now may not make much sense.
however, documenting these in here for future reference :)