Bug #12294: teuthology-kill reveals an extra job
Status: Closed
Description
When teuthology-kill is run against a run, it removes all jobs that are queued but not yet running, and sends a kill signal to all running jobs. Then an extra job appears out of nowhere and is scheduled (ubuntu-2015-07-11_10:03:56-upgrade:hammer-hammer---basic-openstack/56 in the example below). Note that all jobs except 56 were reported as scheduled for the run hours before the kill; job 56 only showed up right after the kill was issued.
/usr/share/nginx/html/ubuntu-2015-07-11_10:03:56-upgrade:hammer-hammer---basic-openstack/48/orig.config.yaml
    upgrade:hammer/basic/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
/usr/share/nginx/html/ubuntu-2015-07-11_10:03:56-upgrade:hammer-hammer---basic-openstack/49/orig.config.yaml
    upgrade:hammer/basic/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
/usr/share/nginx/html/ubuntu-2015-07-11_10:03:56-upgrade:hammer-hammer---basic-openstack/50/orig.config.yaml
    upgrade:hammer/basic/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
/usr/share/nginx/html/ubuntu-2015-07-11_10:03:56-upgrade:hammer-hammer---basic-openstack/51/orig.config.yaml
    upgrade:hammer/basic/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
/usr/share/nginx/html/ubuntu-2015-07-11_10:03:56-upgrade:hammer-hammer---basic-openstack/52/orig.config.yaml
    upgrade:hammer/basic/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
/usr/share/nginx/html/ubuntu-2015-07-11_10:03:56-upgrade:hammer-hammer---basic-openstack/53/orig.config.yaml
    upgrade:hammer/basic/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
/usr/share/nginx/html/ubuntu-2015-07-11_10:03:56-upgrade:hammer-hammer---basic-openstack/54/orig.config.yaml
    upgrade:hammer/point-to-point/{point-to-point.yaml distros/ubuntu_14.04.yaml}
/usr/share/nginx/html/ubuntu-2015-07-11_10:03:56-upgrade:hammer-hammer---basic-openstack/56/orig.config.yaml
    upgrade:hammer/basic/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
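The kill flow described above (drop queued jobs, signal running ones) can be sketched roughly as follows. This is a minimal illustration, not teuthology's actual implementation; the Job class and kill_run function are hypothetical names invented for this sketch.

```python
import signal

class Job:
    """Hypothetical stand-in for a teuthology job record (not the real class)."""
    def __init__(self, job_id, status):
        self.job_id = job_id
        self.status = status      # 'queued' or 'running'
        self.signalled = None     # signal delivered to the worker, if any

def kill_run(jobs):
    """Sketch of the teuthology-kill flow: queued jobs are removed from
    the queue outright; running jobs get a kill signal and keep running
    until their worker handles it."""
    still_running = []
    for job in jobs:
        if job.status == 'queued':
            continue              # dequeued: this job never runs
        if job.status == 'running':
            job.signalled = signal.SIGTERM
            still_running.append(job)
    return still_running

jobs = [Job(48, 'queued'), Job(49, 'running'), Job(54, 'queued')]
print([j.job_id for j in kill_run(jobs)])  # [49]
```

Nothing in this sketch schedules a new job, which is what makes the appearance of job 56 after the kill surprising.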
Updated by Loïc Dachary almost 9 years ago
I'm marking this issue as Urgent because it may inspire someone who has been chasing something similar, not because it's blocking anything. I think it can be reproduced with the OpenStack backend.
Updated by Zack Cerza almost 9 years ago
- Status changed from New to Need More Info
- Assignee set to Loïc Dachary
- Priority changed from Urgent to Normal
I think I need more information.
What generated the output you originally supplied?
Ideally I'd be able to see the output from scheduling a run affected by this, and the output from killing the same run.
Alternatively, if you can determine whether the job is in fact one of the jobs scheduled at the line below, then we don't have a problem.
https://github.com/ceph/teuthology/blob/master/teuthology/suite.py#L301
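The linked line is in the part of suite.py that schedules a run's final job. If I recall correctly that job is marked with a last_in_suite flag in its config, but treat that as an assumption. Under that assumption, a crude way to check whether the mystery job is that final job is to scan its orig.config.yaml for the flag (a plain-text scan, to avoid a YAML parser dependency):

```python
def looks_like_last_in_suite(config_text):
    # Assumption: the run's final job carries 'last_in_suite: true' in its
    # YAML config. Scan line by line rather than parsing the YAML.
    return any(line.strip() == 'last_in_suite: true'
               for line in config_text.splitlines())

sample = "name: ubuntu-2015-07-11_10:03:56\nlast_in_suite: true\n"
print(looks_like_last_in_suite(sample))  # True
```

If job 56's config carries the flag, the "extra" job is just the run's own final job appearing in the archive, not something spawned by the kill.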
Updated by Loïc Dachary almost 9 years ago
- Status changed from Need More Info to Rejected