Bug #8814 (closed): ceph-disk list fails in lxc container

Added by Ricardo Rocha almost 10 years ago. Updated about 9 years ago.

Status: Won't Fix
Priority: Low
Assignee: -
Category: ceph cli
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

This is certainly not an urgent or big issue, but we use lxc containers for our continuous integration system and are having issues with ceph-disk.

Some of the containers run Ceph, and we rely on puppet-ceph to deploy them. OSD partitions are set up as loopback mounts, as we don't want to expose the host's block devices.

One of the commands issued during the puppet OSD setup is ceph-disk list, but in the container I get:

# ceph-disk list
Problem opening /dev/sda for reading! Error is 2.
The specified file does not exist!
Problem opening /dev/sda for reading! Error is 2.
The specified file does not exist!
Problem opening /dev/sda for reading! Error is 2.
The specified file does not exist!
Problem opening /dev/sda for reading! Error is 2.
The specified file does not exist!
Problem opening /dev/sda for reading! Error is 2.
The specified file does not exist!
Problem opening /dev/sda for reading! Error is 2.
The specified file does not exist!
/dev/sda :
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 2579, in <module>
    main()
  File "/usr/sbin/ceph-disk", line 2557, in main
    args.func(args)
  File "/usr/sbin/ceph-disk", line 2211, in main_list
    list_dev(get_dev_path(p), uuid_map, journal_map)
  File "/usr/sbin/ceph-disk", line 2133, in list_dev
    if is_partition(dev):
  File "/usr/sbin/ceph-disk", line 421, in is_partition
    if not stat.S_ISBLK(os.lstat(dev).st_mode):
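The failing call is the os.lstat() at ceph-disk line 421. A minimal standalone sketch (Python, not ceph-disk code; the helper name is made up) of the same check against a device node that is absent, as it is in the container, shows the error it trips over:

import os
import stat

def looks_like_partition(dev):
    # Same test ceph-disk's is_partition() performs at line 421:
    # os.lstat() raises OSError with errno 2 (ENOENT) when the /dev
    # node is absent, before S_ISBLK() is ever evaluated.
    return stat.S_ISBLK(os.lstat(dev).st_mode)

try:
    looks_like_partition('/dev/sda1')  # node missing inside the container
except OSError as e:
    print(e.errno, e.strerror)         # 2 No such file or directory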

This is with:

dpkg-query -W ceph
ceph    0.80.1-1precise

To understand how the container is set up, here's some more info:

ls /sys/block/
loop0  loop2  loop4  loop6  ram0  ram10  ram12  ram14  ram2  ram4  ram6  ram8  sda
loop1  loop3  loop5  loop7  ram1  ram11  ram13  ram15  ram3  ram5  ram7  ram9  sr0

lsblk -l
NAME  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda     8:0    0 465.8G  0 disk 
sda1    8:1    0 449.8G  0 part 
sda2    8:2    0     1K  0 part 
sda5    8:5    0    16G  0 part [SWAP]
sr0    11:0    1  1024M  0 rom  

There's arguably an issue with the container setup, but this is the default config. We could add the missing sdaX device nodes, but the actual device names depend on the host setup, so that wouldn't help.

One possibility would be for ceph-disk to ignore partitions whose device nodes cannot be found; I could submit a patch for that.


Files

ceph-disk.patch (427 Bytes) - Ricardo Rocha, 07/10/2014 08:06 PM
#1 - Updated by Ricardo Rocha almost 10 years ago

Some more details on the container setup:

ls -l /sys/block/sda
lrwxrwxrwx 1 root root 0 Jul 11 01:10 /sys/block/sda -> ../devices/pci0000:00/0000:00:1f.2/ata3/host2/target2:0:0/2:0:0:0/block/sda

ls -l /sys/block/sda/ | grep sda
drwxr-xr-x 5 root root    0 Jul 11 01:04 sda1
drwxr-xr-x 5 root root    0 Jul 11 01:04 sda2
drwxr-xr-x 5 root root    0 Jul 11 01:04 sda5

ls -l /dev/sda*
ls: cannot access /dev/sda*: No such file or directory

which should make the issue clear.
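To make the same mismatch visible in one go, a small standalone sketch (not part of ceph-disk) can compare what sysfs advertises with what actually exists under /dev:

import os

# Inside the container, /sys/block still lists the host's disks, but the
# matching /dev nodes are not there.
for name in sorted(os.listdir('/sys/block')):
    dev = '/dev/' + name
    state = 'present' if os.path.exists(dev) else 'MISSING'
    print('%-8s /dev node: %s' % (name, state))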

#2 - Updated by Ricardo Rocha almost 10 years ago

Attached a patch which fixes it.

It assumes that if the device node is not present, then it's a whole disk (not quite the case in the container, but it works).
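The attached patch is not inlined here, but a guard along the lines described above might look roughly like this (a sketch only; the /sys/block test at the end is an illustrative stand-in, not ceph-disk's actual partition check):

import errno
import os
import stat

def is_partition(dev):
    # Sketch of the behaviour described above: if the /dev node is
    # absent (as in the lxc container), assume a whole disk rather
    # than letting os.lstat() raise.
    try:
        mode = os.lstat(dev).st_mode
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False  # missing node: treat it as a whole disk
        raise
    if not stat.S_ISBLK(mode):
        raise RuntimeError('%s is not a block device' % dev)
    # Illustrative stand-in: whole disks have a /sys/block/<name> entry,
    # partitions do not.
    return not os.path.isdir(os.path.join('/sys/block', os.path.basename(dev)))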

#3 - Updated by Loïc Dachary about 9 years ago

  • Status changed from New to Won't Fix

It can be resolved by exposing /dev to the container.
