Bug #19489

ceph-disk: failing to activate osd with multipath

Added by Matt Stroud about 7 years ago. Updated over 6 years ago.

Status: Resolved
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: jewel
Regression: No
Severity: 2 - major
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

We are trying to set up a new cluster using multipath disks. I'm able to prepare the OSD just fine, but it fails to activate. Here is the output:

[root@mon01 ceph-config]# ceph-deploy osd activate osd01:mapper/mpatha
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy osd activate osd01:mapper/mpatha
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x170e710>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x1701b90>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('osd01', '/dev/mapper/mpatha', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks osd01:/dev/mapper/mpatha:
[osd01][DEBUG ] connected to host: osd01
[osd01][DEBUG ] detect platform information from remote host
[osd01][DEBUG ] detect machine type
[osd01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] activating host osd01 disk /dev/mapper/mpatha
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[osd01][DEBUG ] find the location of an executable
[osd01][INFO  ] Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/mapper/mpatha
[osd01][WARNIN] main_activate: path = /dev/mapper/mpatha
[osd01][WARNIN] get_dm_uuid: get_dm_uuid /dev/mapper/mpatha uuid path is /sys/dev/block/253:3/dm/uuid
[osd01][WARNIN] get_dm_uuid: get_dm_uuid /dev/mapper/mpatha uuid is mpath-360060e80074e840000304e8400004000
[osd01][WARNIN]
[osd01][WARNIN] get_dm_uuid: get_dm_uuid /dev/mapper/mpatha uuid path is /sys/dev/block/253:3/dm/uuid
[osd01][WARNIN] get_dm_uuid: get_dm_uuid /dev/mapper/mpatha uuid is mpath-360060e80074e840000304e8400004000
[osd01][WARNIN]
[osd01][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/mapper/mpatha
[osd01][WARNIN] Traceback (most recent call last):
[osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[osd01][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[osd01][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5047, in run
[osd01][WARNIN]     main(sys.argv[1:])
[osd01][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4998, in main
[osd01][WARNIN]     args.func(args)
[osd01][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3357, in main_activate
[osd01][WARNIN]     reactivate=args.reactivate,
[osd01][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3067, in mount_activate
[osd01][WARNIN]     e,
[osd01][WARNIN] ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/mapper/mpatha: Line is truncated:
[osd01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/mapper/mpatha
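
The traceback points at the filesystem-type probe: ceph-disk runs /sbin/blkid -p -s TYPE -o value -- /dev/mapper/mpatha (visible in the log above) and requires exactly one line of output. A minimal sketch of that check, with hypothetical helper names standing in for the real code in ceph_disk/main.py, shows how empty blkid output for the multipath device can turn into the "Line is truncated" error seen here:

import subprocess

class FilesystemTypeError(Exception):
    """Stands in for ceph_disk.main.FilesystemTypeError in this sketch."""

def must_be_one_line(out):
    # ceph-disk expects the probe output to be exactly one
    # newline-terminated line; anything else (including empty
    # output) is reported as a truncated line.
    lines = out.split('\n')
    if len(lines) != 2 or lines[1] != '':
        raise ValueError('Line is truncated: ' + out)
    return lines[0]

def probe_fstype(dev):
    # Same blkid invocation as in the log above; the exit status is
    # ignored here for brevity, which the real helper may not do.
    out = subprocess.Popen(
        ['/sbin/blkid', '-p', '-s', 'TYPE', '-o', 'value', '--', dev],
        stdout=subprocess.PIPE).communicate()[0]
    try:
        return must_be_one_line(out.decode())
    except ValueError as e:
        raise FilesystemTypeError(
            'Cannot discover filesystem type: device %s: %s' % (dev, e))

print(probe_fstype('/dev/mapper/mpatha'))  # reproduces the failure above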

I have tried switching between using root and the ceph user; with the ceph user I get permission denied while trying to activate. I'm trying to use Jewel because of its long-term support, but I'm going to give Kraken a try after reporting this.
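
On the permission-denied side: when ceph-disk runs as the ceph user, it needs read/write access to the device node. A quick hypothetical check of the node's ownership and mode (the ceph:ceph ownership expectation is an assumption based on standard Jewel packaging, not something shown in this log):

import grp
import os
import pwd

dev = '/dev/mapper/mpatha'
st = os.stat(dev)
# Print owner, group, and permission bits of the device node; if it
# is not writable by the ceph user, activation as ceph would fail
# with permission denied, as described above.
print('%s %s:%s %o' % (
    dev,
    pwd.getpwuid(st.st_uid).pw_name,
    grp.getgrgid(st.st_gid).gr_name,
    st.st_mode & 0o777,
))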


Related issues 1 (0 open, 1 closed)

Copied to Ceph - Backport #20837: jewel: ceph-disk: failing to activate osd with multipath (Resolved, David Disseldorp)