Bug #5259
Bug #4984 (Closed): ceph_deploy: osd create succeeds with an error message (partprobe returns error)
osd create command fails inconsistently on ubuntu
Status:
Duplicate
Priority:
Urgent
Assignee:
-
Category:
ceph-deploy
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
ubuntu@teuthology:/a/teuthology-2013-06-05_01:01:15-ceph-deploy-master-testing-basic/31847
2013-06-05T07:43:15.150 DEBUG:teuthology.orchestra.run:Running [10.214.132.2]: 'cd /home/ubuntu/cephtest/31847/ceph-deploy && ./ceph-deploy osd create --zap-disk plana76:sdb'
2013-06-05T07:43:15.320 INFO:teuthology.orchestra.run.err:DEBUG:ceph_deploy.osd:Preparing cluster ceph disks plana76:/dev/sdb:
2013-06-05T07:43:15.545 INFO:teuthology.orchestra.run.err:DEBUG:ceph_deploy.osd:Deploying osd to plana76
2013-06-05T07:43:15.613 INFO:teuthology.orchestra.run.err:DEBUG:ceph_deploy.osd:Host plana76 is now ready for osd use.
2013-06-05T07:43:15.613 INFO:teuthology.orchestra.run.err:DEBUG:ceph_deploy.osd:Preparing host plana76 disk /dev/sdb journal None activate True
2013-06-05T07:43:20.357 INFO:teuthology.orchestra.run.err:INFO:ceph-disk:Will colocate journal with data on /dev/sdb
2013-06-05T07:43:20.359 INFO:teuthology.orchestra.run.err:Error: Partition(s) 1 on /dev/sdb have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
2013-06-05T07:43:20.359 INFO:teuthology.orchestra.run.err:ceph-disk: Error: Command '['partprobe', '/dev/sdb']' returned non-zero exit status 1
2013-06-05T07:43:20.377 INFO:teuthology.orchestra.run.out:Creating new GPT entries.
2013-06-05T07:43:20.377 INFO:teuthology.orchestra.run.out:Warning: The kernel is still using the old partition table.
2013-06-05T07:43:20.378 INFO:teuthology.orchestra.run.out:The new table will be used at the next reboot.
2013-06-05T07:43:20.378 INFO:teuthology.orchestra.run.out:GPT data structures destroyed! You may now partition the disk using fdisk or
2013-06-05T07:43:20.378 INFO:teuthology.orchestra.run.out:other utilities.
2013-06-05T07:43:20.378 INFO:teuthology.orchestra.run.out:Warning: The kernel is still using the old partition table.
2013-06-05T07:43:20.378 INFO:teuthology.orchestra.run.out:The new table will be used at the next reboot.
2013-06-05T07:43:20.378 INFO:teuthology.orchestra.run.out:The operation has completed successfully.
2013-06-05T07:43:20.379 INFO:teuthology.orchestra.run.out:Information: Moved requested sector from 34 to 2048 in
2013-06-05T07:43:20.379 INFO:teuthology.orchestra.run.out:order to align on 2048-sector boundaries.
2013-06-05T07:43:20.379 INFO:teuthology.orchestra.run.out:Warning: The kernel is still using the old partition table.
2013-06-05T07:43:20.379 INFO:teuthology.orchestra.run.out:The new table will be used at the next reboot.
2013-06-05T07:43:20.379 INFO:teuthology.orchestra.run.out:The operation has completed successfully.
2013-06-05T07:43:20.379 INFO:teuthology.orchestra.run.err:Traceback (most recent call last):
2013-06-05T07:43:20.380 INFO:teuthology.orchestra.run.err:  File "./ceph-deploy", line 9, in <module>
2013-06-05T07:43:20.380 INFO:teuthology.orchestra.run.err:    load_entry_point('ceph-deploy==0.0.1', 'console_scripts', 'ceph-deploy')()
2013-06-05T07:43:20.380 INFO:teuthology.orchestra.run.err:  File "/home/ubuntu/cephtest/31847/ceph-deploy/ceph_deploy/cli.py", line 95, in main
2013-06-05T07:43:20.380 INFO:teuthology.orchestra.run.err:    return args.func(args)
2013-06-05T07:43:20.380 INFO:teuthology.orchestra.run.err:  File "/home/ubuntu/cephtest/31847/ceph-deploy/ceph_deploy/osd.py", line 224, in osd
2013-06-05T07:43:20.380 INFO:teuthology.orchestra.run.err:    prepare(args, cfg, activate=True)
2013-06-05T07:43:20.381 INFO:teuthology.orchestra.run.err:  File "/home/ubuntu/cephtest/31847/ceph-deploy/ceph_deploy/osd.py", line 178, in prepare
2013-06-05T07:43:20.381 INFO:teuthology.orchestra.run.err:    dmcrypt_dir=args.dmcrypt_key_dir,
2013-06-05T07:43:20.381 INFO:teuthology.orchestra.run.err:  File "/home/ubuntu/cephtest/31847/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/proxy.py", line 255, in <lambda>
2013-06-05T07:43:20.381 INFO:teuthology.orchestra.run.err:    (conn.operator(type_, self, args, kwargs))
2013-06-05T07:43:20.381 INFO:teuthology.orchestra.run.err:  File "/home/ubuntu/cephtest/31847/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/connection.py", line 66, in operator
2013-06-05T07:43:20.381 INFO:teuthology.orchestra.run.err:    return self.send_request(type_, (object, args, kwargs))
2013-06-05T07:43:20.382 INFO:teuthology.orchestra.run.err:  File "/home/ubuntu/cephtest/31847/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 323, in send_request
2013-06-05T07:43:20.382 INFO:teuthology.orchestra.run.err:    return self.__handle(m)
2013-06-05T07:43:20.382 INFO:teuthology.orchestra.run.err:  File "/home/ubuntu/cephtest/31847/ceph-deploy/virtualenv/local/lib/python2.7/site-packages/pushy-0.5.1-py2.7.egg/pushy/protocol/baseconnection.py", line 639, in __handle
2013-06-05T07:43:20.382 INFO:teuthology.orchestra.run.err:    raise e
2013-06-05T07:43:20.383 INFO:teuthology.orchestra.run.err:pushy.protocol.proxy.ExceptionProxy: Command '['ceph-disk-prepare', '--zap-disk', '--', '/dev/sdb']' returned non-zero exit status 1
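The failure mode in the log is a race: sgdisk rewrites the partition table, but a udev worker still holds /dev/sdb open when partprobe asks the kernel to re-read it, so partprobe exits non-zero. A minimal sketch of the usual mitigation (not the actual ceph-disk fix) is to drain the udev event queue with `udevadm settle` and retry partprobe; the function name, retry counts, and injectable `run` parameter below are illustrative assumptions:

```python
import subprocess
import time

def partprobe_with_retry(dev, attempts=5, delay=1.0, run=subprocess.call):
    """Hypothetical sketch: retry partprobe after waiting for udev.

    A udev worker that still holds the device open is what makes the
    kernel refuse to re-read the partition table.  `run` defaults to
    shelling out, and is injectable so the loop can be tested without
    root or a real disk.
    """
    for attempt in range(1, attempts + 1):
        # Block until udev has processed all pending events for the device.
        run(['udevadm', 'settle', '--timeout=10'])
        # Ask the kernel to re-read the (already rewritten) partition table.
        if run(['partprobe', dev]) == 0:
            return True
        if attempt < attempts:
            time.sleep(delay)
    return False
```

With this shape, a transient "device busy" window turns into one or two retries instead of a hard failure of the whole `osd create` run.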
Updated by Ian Colle almost 11 years ago
- Assignee set to Anonymous
- Priority changed from Normal to Urgent
Updated by Sage Weil almost 11 years ago
- Status changed from New to Duplicate
I think we should call this a dup of the other bug; this is all about udev vs. partprobe vs. udevadm settle races. See #4984.