Bug #18694
ceph-disk activate for partition failing
Description
Hi,
I have been trying to test out ceph and at the same time trying to get it installed using ceph-ansible.
I originally submitted this bug against ceph-ansible on GitHub, but it was requested that I also open a bug here, because using partitions for OSDs should apparently be supported.
I won't repeat the ceph-ansible-related information here; if relevant, it can be found at: https://github.com/ceph/ceph-ansible/issues/1242
But basically it gets to a stage where Ansible tries to run this on my hosts:
ceph-disk activate /dev/sda3
But it fails with the following:
    Traceback (most recent call last):
      File "/usr/sbin/ceph-disk", line 9, in <module>
        load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
      File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5009, in run
        main(sys.argv[1:])
      File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4962, in main
        main_catch(args.func, args)
      File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4993, in main_catch
        msg=e,
      File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 270, in __str__
        return ': '.join([doc] + [_bytes2str(a) for a in self.args])
    TypeError: sequence item 2: expected string or Unicode, CalledProcessError found
Nothing is overly special about my setup as far as I can tell.
My hosts have 1TB disks, a GUID partition table (GPT) and three partitions. The parted output is:
    # parted /dev/sda
    GNU Parted 3.2
    Using /dev/sda
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) p
    Model: DELL PERC H330 Adp (scsi)
    Disk /dev/sda: 1000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:

    Number  Start   End     Size   File system  Name  Flags
     1      1049kB  512MB   511MB  fat32              boot, esp
     2      512MB   32.5GB  32.0GB ext4
     3      35.2GB  1000GB  964GB               ceph
Partition 3 was created with:
(parted) mkpart ceph 35.2GB 100%
I also tried creating the third partition by specifying the partition type:
    (parted) mkpart
    Partition name?  []? ceph
    File system type?  [ext2]? xfs
    Start? 35.2GB
    End? 100%
     3      35.2GB  1000GB  964GB  xfs  ceph
But the results are the same regardless.
Related issues
History
#1 Updated by Leonid Prytuliak about 7 years ago
Pay attention to this part of the error:
    File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 270, in __str__
        return ': '.join([doc] + [_bytes2str(a) for a in self.args])
See here: http://tracker.ceph.com/issues/18371
To fix the error, simply update the ceph-disk utility's source code from git: https://github.com/ceph/ceph/blob/master/src/ceph-disk/ceph_disk/main.py
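The failure mode is reproducible outside of ceph-disk: str.join() requires every element to be a string, but one of self.args is a CalledProcessError object that the conversion helper passes through unchanged, so the join itself raises TypeError and masks the real error message. A minimal sketch (the helper names below are illustrative, not ceph-disk's actual code):

```python
from subprocess import CalledProcessError

def bytes2str(s):
    # Mirrors the buggy behaviour: only bytes are converted, any other
    # object (e.g. a CalledProcessError) passes through unchanged and
    # later breaks str.join().
    return s.decode('utf-8') if isinstance(s, bytes) else s

def bytes2str_fixed(s):
    # One possible fix: coerce anything that is not bytes with str(),
    # so every element handed to join() really is a string.
    return s.decode('utf-8') if isinstance(s, bytes) else str(s)

doc = 'Error'
args = [b'activate failed', CalledProcessError(1, 'ceph-disk')]

try:
    ': '.join([doc] + [bytes2str(a) for a in args])
except TypeError as e:
    print('buggy join raised:', e)

print('fixed join gives:', ': '.join([doc] + [bytes2str_fixed(a) for a in args]))
```

With the fixed helper, the exception object is rendered via str(), so the original CalledProcessError message survives into the joined string instead of being swallowed by a TypeError.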
#2 Updated by Nathan Cutler about 7 years ago
- Duplicates Bug #18371: ceph-disk: error on _bytes2str added
#3 Updated by Gary Richards about 7 years ago
Grabbing the raw file directly from git and putting it in place of the original definitely stops me getting the error I was seeing.
Presumably this will make it into the various packages at some point?
#4 Updated by Gary Richards about 7 years ago
Hrm, it would seem I was a bit hasty.
After purging everything I had and starting again, using ceph-ansible to configure everything, I still get a very similar error at exactly the same stage.
If I replace main.py with the one mentioned above using this command:
wget https://github.com/ceph/ceph/raw/master/src/ceph-disk/ceph_disk/main.py -O /usr/lib/python2.7/dist-packages/ceph_disk/main.py2
I get a different error:
    root@vm1:~# ceph-disk activate /dev/sda3
    Traceback (most recent call last):
      File "/usr/sbin/ceph-disk", line 9, in <module>
        load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
      File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5298, in run
        main(sys.argv[1:])
      File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5251, in main
        main_catch(args.func, args)
      File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5276, in main_catch
        func(args)
      File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3594, in main_activate
        cluster=args.cluster,
    AttributeError: 'Namespace' object has no attribute 'cluster'
I'm still using the Jewel Debian packages as my starting point.
#5 Updated by Gary Richards about 7 years ago
OK, I see that the most recent file from master introduces changes that are incompatible with the rest of the ceph code I'm running. Using an older version from around the time I reported the bug, the command doesn't produce a stack trace.
I think my issue now lies within ceph-ansible somewhere or at least my configuration of it.
#6 Updated by Nathan Cutler about 7 years ago
- Status changed from New to Resolved
#7 Updated by Nathan Cutler about 7 years ago
I looked at the error in Comment 4 and I believe it was caused by this line:
https://github.com/ceph/ceph/blob/776bfa8/src/ceph-disk/ceph_disk/main.py#L3594
which has since been dropped by https://github.com/ceph/ceph/commit/bf6fca7591de9993c4dacee0fbd8218f0bbea89c
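This AttributeError is the generic argparse failure when a handler reads an option its (sub)parser never registered: mixing the master main.py with an older install means main_activate reads args.cluster, but no --cluster option was defined, so the parsed Namespace simply lacks the attribute. A minimal illustration (the parser below is hypothetical, not ceph-disk's real one):

```python
import argparse

# Build a tiny ceph-disk-like CLI whose 'activate' subcommand does NOT
# register a --cluster option, mimicking the version mismatch.
parser = argparse.ArgumentParser(prog='ceph-disk')
sub = parser.add_subparsers(dest='command')
activate = sub.add_parser('activate')
activate.add_argument('path')
# Missing: activate.add_argument('--cluster', default='ceph')

args = parser.parse_args(['activate', '/dev/sda3'])
try:
    # The handler expects this attribute, but the parser never set it.
    print(args.cluster)
except AttributeError as e:
    print(e)  # 'Namespace' object has no attribute 'cluster'
```

Adding the missing add_argument('--cluster', default='ceph') to the subparser (as matching parser and handler versions would) makes the attribute always present.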