Bug #14235
closed
smithi partition issue for ceph deploy tests - needs block device
Status:
Won't Fix
Priority:
High
Assignee:
-
Category:
-
% Done:
0%
Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
ceph-deploy
Crash signature (v1):
Crash signature (v2):
Description
ceph-deploy tests can't run on smithi machines that already have 4 partitions; I believe some commands need a whole block device rather than a partition. It would probably be better to create journal partitions alongside the data partitions, which could
then be used during the test, though this would still require special handling for smithi systems.
2016-01-05T01:08:06.002 INFO:teuthology.orchestra.run.smithi023:Running: 'cd /home/ubuntu/cephtest/ceph-deploy && ./ceph-deploy disk zap smithi025:nvme0n1p1'
2016-01-05T01:08:06.180 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
2016-01-05T01:08:06.182 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] Invoked (1.5.31): ./ceph-deploy disk zap smithi025:nvme0n1p1
2016-01-05T01:08:06.183 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] ceph-deploy options:
2016-01-05T01:08:06.183 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] username : None
2016-01-05T01:08:06.183 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] verbose : False
2016-01-05T01:08:06.183 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] overwrite_conf : False
2016-01-05T01:08:06.184 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] subcommand : zap
2016-01-05T01:08:06.184 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] quiet : False
2016-01-05T01:08:06.184 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x13be170>
2016-01-05T01:08:06.184 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] cluster : ceph
2016-01-05T01:08:06.184 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] func : <function disk at 0x13b1140>
2016-01-05T01:08:06.185 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] ceph_conf : None
2016-01-05T01:08:06.185 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] default_release : False
2016-01-05T01:08:06.185 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.cli][INFO ] disk : [('smithi025', '/dev/nvme0n1p1', None)]
2016-01-05T01:08:06.185 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.osd][DEBUG ] zapping /dev/nvme0n1p1 on smithi025
2016-01-05T01:08:06.249 INFO:teuthology.orchestra.run.smithi023.stderr:Warning: Permanently added 'smithi025,172.21.15.25' (ECDSA) to the list of known hosts.
2016-01-05T01:08:06.576 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][DEBUG ] connection detected need for sudo
2016-01-05T01:08:06.617 INFO:teuthology.orchestra.run.smithi023.stderr:Warning: Permanently added 'smithi025,172.21.15.25' (ECDSA) to the list of known hosts.
2016-01-05T01:08:06.935 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][DEBUG ] connected to host: smithi025
2016-01-05T01:08:06.936 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][DEBUG ] detect platform information from remote host
2016-01-05T01:08:06.962 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][DEBUG ] detect machine type
2016-01-05T01:08:06.969 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][DEBUG ] find the location of an executable
2016-01-05T01:08:06.970 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.1.1503 Core
2016-01-05T01:08:06.971 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][DEBUG ] zeroing last few blocks of device
2016-01-05T01:08:06.972 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][DEBUG ] find the location of an executable
2016-01-05T01:08:06.976 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][INFO ] Running command: sudo /usr/sbin/ceph-disk zap /dev/nvme0n1p1
2016-01-05T01:08:07.100 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][WARNING] ceph-disk: Error: not full block device; cannot zap: /dev/nvme0n1p1
2016-01-05T01:08:07.108 INFO:teuthology.orchestra.run.smithi023.stderr:[smithi025][ERROR ] RuntimeError: command returned non-zero exit status: 1
2016-01-05T01:08:07.109 INFO:teuthology.orchestra.run.smithi023.stderr:[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk zap /dev/nvme0n1p1
2016-01-05T01:08:07.109 INFO:teuthology.orchestra.run.smithi023.stderr:
2016-01-05T01:08:07.142 INFO:tasks.ceph_deploy:Error encountered, logging exception before tearing down ceph-deploy
2016-01-05T01:08:07.143 INFO:tasks.ceph_deploy:Traceback (most recent call last):
  File "/var/lib/teuthworker/src/ceph-qa-suite_jewel/tasks/ceph_deploy.py", line 280, in build_ceph_cluster
    raise RuntimeError("ceph-deploy: Failed to zap osds")
RuntimeError: ceph-deploy: Failed to zap osds
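The failure above comes from ceph-disk refusing to zap a partition (/dev/nvme0n1p1) because it expects a whole block device (/dev/nvme0n1). A minimal sketch of a name-based pre-check the test harness could run before invoking zap is below; the function name is hypothetical, and this heuristic only covers common Linux naming schemes (a real check would consult /sys/class/block/<name>/partition on the target host):

```python
import re

def looks_like_partition(dev: str) -> bool:
    """Heuristic: does this /dev path name a partition rather than a whole disk?

    Covers nvme/mmc style (nvme0n1p1, mmcblk0p2) and sd/vd/hd style (sda1, vdb2).
    Not authoritative: the reliable check is /sys/class/block/<name>/partition.
    """
    name = dev.rsplit("/", 1)[-1]
    # nvme/mmc style: whole disk ends in a digit (nvme0n1); a partition
    # appends 'p' plus digits (nvme0n1p1).
    if re.search(r"\d+p\d+$", name):
        return True
    # sd/vd/hd style: whole disk is letters only (sda); a partition
    # appends digits (sda1).
    if re.fullmatch(r"(sd|vd|hd)[a-z]+\d+", name):
        return True
    return False
```

With such a check the task could skip (or re-target) the `disk zap` call on smithi instead of failing mid-run.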
Updated by Zack Cerza about 8 years ago
- Status changed from New to Need More Info
- Assignee set to Dan Mick
Dan, you worked on something closely related to this. Was that resolved?
Updated by Dan Mick almost 8 years ago
- Status changed from Need More Info to Won't Fix
- Assignee deleted (Dan Mick)
No. I didn't find any magic way to make a block device. Right now it's just a limitation in the test. If someone wants to change that, they're welcome to submit a test.