Bug #14094

Updated by Loïc Dachary over 8 years ago

After a failed run of the ceph-disk suite on CentOS 7.1, ceph-disk list tries to stat /dev/vdb1 although it does not exist. If run again immediately afterwards, the problem goes away. It is not a timing problem: nothing had been going on for over 15 minutes after the test stopped.

 It showed up twice, but I'm not sure how to reproduce it reliably. 

 <pre> 
 [root@target167114225077 ceph-disk]# ceph-disk list 
 Traceback (most recent call last): 
   File "/sbin/ceph-disk", line 4054, in <module> 
     main(sys.argv[1:]) 
   File "/sbin/ceph-disk", line 4010, in main 
     main_catch(args.func, args) 
   File "/sbin/ceph-disk", line 4032, in main_catch 
     func(args) 
   File "/sbin/ceph-disk", line 3324, in main_list 
     devices = list_devices(args.path) 
   File "/sbin/ceph-disk", line 3274, in list_devices 
     ptype = get_partition_type(dev) 
   File "/sbin/ceph-disk", line 3077, in get_partition_type 
     return get_sgdisk_partition_info(part, 'Partition GUID code: (\S+)') 
   File "/sbin/ceph-disk", line 3083, in get_sgdisk_partition_info 
     (base, partnum) = split_dev_base_partnum(dev) 
   File "/sbin/ceph-disk", line 3067, in split_dev_base_partnum 
     if is_mpath(dev): 
   File "/sbin/ceph-disk", line 460, in is_mpath 
     uuid = get_dm_uuid(dev) 
   File "/sbin/ceph-disk", line 448, in get_dm_uuid 
     uuid_path = os.path.join(block_path(dev), 'dm', 'uuid') 
   File "/sbin/ceph-disk", line 443, in block_path 
     rdev = os.stat(path).st_rdev 
 OSError: [Errno 2] No such file or directory: '/dev/vdb1' 
 [root@target167114225077 ceph-disk]# ceph-disk list 
 /dev/vda : 
  /dev/vda1 other, xfs, mounted on / 
 /dev/vdb other, unknown 
 /dev/vdc : 
  /dev/vdc1 ceph journal 
 /dev/vdd other, unknown 
 </pre> 
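
For reference, the crash comes from the os.stat() call in block_path(): once /dev/vdb1 has vanished, os.stat() raises OSError with errno ENOENT and ceph-disk list aborts instead of skipping the partition. The sketch below only illustrates that failure mode and one possible defensive variant; the function bodies are assumptions loosely based on the traceback above, not the shipped ceph-disk code.

<pre>
import errno
import os


def block_path(dev):
    # os.stat() raises OSError(ENOENT) when the node is missing from /dev,
    # which is exactly the failure shown in the traceback above.
    rdev = os.stat(dev).st_rdev
    return '/sys/dev/block/%d:%d' % (os.major(rdev), os.minor(rdev))


def get_dm_uuid(dev):
    # Hypothetical defensive variant: treat a vanished device node as
    # "not a device-mapper device" instead of letting ceph-disk list crash.
    try:
        uuid_path = os.path.join(block_path(dev), 'dm', 'uuid')
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False
        raise
    if not os.path.exists(uuid_path):
        return False
    with open(uuid_path, 'r') as f:
        return f.read()
</pre>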
