Bug #20740
Closed: starting an OSD with an mpath device dumps core
Status:
Rejected
Priority:
Normal
Assignee:
-
Category:
OSD
Target version:
-
% Done:
0%
Source:
Community (user)
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Description
Hi, here's the preliminary info:
root@bigdisk2:/var/lib/ceph/osd# ceph --version
ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous (rc)
root@bigdisk2:/var/lib/ceph/osd# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.2 LTS
Release:        16.04
Codename:       xenial
root@bigdisk2:/var/lib/ceph/osd# uname -a
Linux bigdisk2 4.4.0-81-generic #104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
root@bigdisk2:/var/lib/ceph/osd#
root@bigdisk2:/var/lib/ceph/osd# multipath -ll
360060e801333410050203341000000c2 dm-3 HITACHI,OPEN-V
size=898G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:0:1 sdf 8:80 active ready running
  `- 2:0:0:1 sdd 8:48 active ready running
360060e801333410050203341000000c1 dm-2 HITACHI,OPEN-V
size=898G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:0:0 sde 8:64 active ready running
  `- 2:0:0:0 sdc 8:32 active ready running
root@bigdisk2:~# ceph-disk prepare --bluestore /dev/mapper/360060e801333410050203341000000c1
Creating new GPT entries.
Setting name!
partNum is 0
REALLY setting name!
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
meta-data=/dev/dm-5              isize=2048   agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=864, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
root@bigdisk2:~# ceph-disk activate-all
Removed symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@10.service.
Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@10.service to /lib/systemd/system/ceph-osd@.service.
Removed symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@9.service.
Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@9.service to /lib/systemd/system/ceph-osd@.service.
Removed symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service.
Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /lib/systemd/system/ceph-osd@.service.
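Note the repeated kernel warnings above: the new GPT was written but the kernel kept the old partition table. On a device-mapper multipath device the partition maps usually need an explicit refresh before activation, as the warning itself suggests. A hedged sketch (device name taken from the multipath -ll output above; requires root on the affected host, shown for illustration only):

```shell
# Ask the kernel to re-read the new partition table on the multipath map
# (device name from this report; adjust to your own WWID).
partprobe /dev/mapper/360060e801333410050203341000000c1

# Alternatively, have kpartx (re)create the device-mapper partition maps;
# -a adds mappings, -v prints what it does.
kpartx -av /dev/mapper/360060e801333410050203341000000c1
```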
Please see attached log from osd.10.
Updated by Vasu Kulkarni almost 7 years ago
- Status changed from New to Rejected
This was a manual setup; in a later discussion on IRC, using ceph-deploy on a new node with multipath worked, hence closing.
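For reference, the working path mentioned here was ceph-deploy rather than running ceph-disk by hand. A hedged sketch of such an invocation (hostname and device taken from this report; the exact flags are an assumption and vary between ceph-deploy releases, so check your version's help output):

```shell
# Create a BlueStore OSD on the multipath device via ceph-deploy,
# run from the admin node's deployment directory.
# Syntax shown is the ceph-deploy 2.x form; 1.5.x used HOST:DEVICE instead.
ceph-deploy osd create --data /dev/mapper/360060e801333410050203341000000c1 bigdisk2
```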