Bug #9187 (closed): osds down after fresh deploy in master branch of ceph

Added by Tamilarasi muthamizhan over 9 years ago. Updated over 9 years ago.

Status: Resolved
Priority: Urgent
Assignee: -
Category: ceph-disk
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

ceph version 0.84-367-gf71c889

test setup: mira023

ceph-deploy version: 1.5.11

Created 4 OSDs, some with the dmcrypt option and some without it. The OSD status check at the end of the "osd create" command never reported any of the 4 OSDs as up.
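 
For context, the four invocations were presumably along the following lines; only the --dmcrypt run on sdb is captured in full below, the other disk names are placeholders, and the df output further down suggests three of the four used --dmcrypt:

./ceph-deploy osd create mira023:sdb:sdb --dmcrypt --zap-disk   # run captured below
./ceph-deploy osd create mira023:sdX:sdX --dmcrypt --zap-disk   # disk name is a placeholder
./ceph-deploy osd create mira023:sdY:sdY --dmcrypt --zap-disk   # disk name is a placeholder
./ceph-deploy osd create mira023:sdg:sdg --zap-disk             # no dmcrypt; /dev/sdg1 shows up plain in df below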

Attaching the command output below:

ubuntu@mira023:~/ceph-deploy$ ./ceph-deploy osd create mira023:sdb:sdb --dmcrypt --zap-disk
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.11): ./ceph-deploy osd create mira023:sdb:sdb --dmcrypt --zap-disk
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks mira023:/dev/sdb:/dev/sdb
[mira023][DEBUG ] connected to host: mira023 
[mira023][DEBUG ] detect platform information from remote host
[mira023][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to mira023
[mira023][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[mira023][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host mira023 disk /dev/sdb journal /dev/sdb activate True
[mira023][INFO  ] Running command: sudo ceph-disk -v prepare --zap-disk --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb /dev/sdb
[mira023][DEBUG ] ****************************************************************************
[mira023][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[mira023][DEBUG ] verification and recovery are STRONGLY recommended.
[mira023][DEBUG ] ****************************************************************************
[mira023][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[mira023][DEBUG ] other utilities.
[mira023][DEBUG ] The operation has completed successfully.
[mira023][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[mira023][DEBUG ] order to align on 2048-sector boundaries.
[mira023][DEBUG ] The operation has completed successfully.
[mira023][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
[mira023][DEBUG ] order to align on 2048-sector boundaries.
[mira023][DEBUG ] Warning: The kernel is still using the old partition table.
[mira023][DEBUG ] The new table will be used at the next reboot.
[mira023][DEBUG ] The operation has completed successfully.
[mira023][DEBUG ] meta-data=/dev/mapper/e57c09c4-1e2b-4268-a4aa-030e81c00b75 isize=2048   agcount=4, agsize=60719917 blks
[mira023][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[mira023][DEBUG ] data     =                       bsize=4096   blocks=242879665, imaxpct=25
[mira023][DEBUG ]          =                       sunit=0      swidth=0 blks
[mira023][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[mira023][DEBUG ] log      =internal log           bsize=4096   blocks=118593, version=2
[mira023][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[mira023][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[mira023][DEBUG ] Warning: The kernel is still using the old partition table.
[mira023][DEBUG ] The new table will be used at the next reboot.
[mira023][DEBUG ] The operation has completed successfully.
[mira023][WARNIN] DEBUG:ceph-disk:Zapping partition table on /dev/sdb
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --zap-all --clear --mbrtogpt -- /dev/sdb
[mira023][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[mira023][WARNIN] backup header from main header.
[mira023][WARNIN] 
[mira023][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[mira023][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[mira023][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[mira023][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[mira023][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[mira023][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[mira023][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdb
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:05212d22-bdc6-471a-ab7f-1b9af72814a7 --typecode=2:45b0969e-9b03-4f30-b4c6-5ec00ceff106 --mbrtogpt -- /dev/sdb
[mira023][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdb
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[mira023][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/mapper/05212d22-bdc6-471a-ab7f-1b9af72814a7
[mira023][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:e57c09c4-1e2b-4268-a4aa-030e81c00b75 --typecode=1:89c57f98-2fe5-4dc0-89c1-5ec00ceff2be -- /dev/sdb
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/cryptsetup --key-file /etc/ceph/dmcrypt-keys/e57c09c4-1e2b-4268-a4aa-030e81c00b75 --key-size 256 create e57c09c4-1e2b-4268-a4aa-030e81c00b75 /dev/sdb1
[mira023][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/mapper/e57c09c4-1e2b-4268-a4aa-030e81c00b75
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/mapper/e57c09c4-1e2b-4268-a4aa-030e81c00b75
[mira023][WARNIN] DEBUG:ceph-disk:Mounting /dev/mapper/e57c09c4-1e2b-4268-a4aa-030e81c00b75 on /var/lib/ceph/tmp/mnt.CF2n3f with options noatime
[mira023][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime -- /dev/mapper/e57c09c4-1e2b-4268-a4aa-030e81c00b75 /var/lib/ceph/tmp/mnt.CF2n3f
[mira023][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.CF2n3f
[mira023][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.CF2n3f/journal -> /dev/mapper/05212d22-bdc6-471a-ab7f-1b9af72814a7
[mira023][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.CF2n3f/journal_dmcrypt -> /dev/disk/by-partuuid/05212d22-bdc6-471a-ab7f-1b9af72814a7
[mira023][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.CF2n3f
[mira023][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.CF2n3f
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/cryptsetup remove e57c09c4-1e2b-4268-a4aa-030e81c00b75
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-5ec00ceff05d -- /dev/sdb
[mira023][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdb
[mira023][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdb
[mira023][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[mira023][INFO  ] checking OSD status...
[mira023][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[mira023][WARNIN] there are 4 OSDs down
[mira023][WARNIN] there are 4 OSDs out

While the disks appear mounted, the OSDs themselves are down (some suggested follow-up checks are sketched after the output below).

ubuntu@mira023:~/ceph-deploy$ df -h
Filesystem                                        Size  Used Avail Use% Mounted on
/dev/sda1                                         902G  2.2G  854G   1% /
udev                                              7.9G   12K  7.9G   1% /dev
tmpfs                                             1.6G  360K  1.6G   1% /run
none                                              5.0M     0  5.0M   0% /run/lock
none                                              7.9G     0  7.9G   0% /run/shm
/dev/mapper/bb078e9a-a7fd-48bf-bc4a-293cac676bb2  927G   33M  927G   1% /var/lib/ceph/osd/ceph-0
/dev/mapper/92e257c6-20be-49cb-9c7b-91fee509dedf  927G   33M  927G   1% /var/lib/ceph/osd/ceph-1
/dev/sdg1                                         932G   33M  932G   1% /var/lib/ceph/osd/ceph-2
/dev/mapper/e57c09c4-1e2b-4268-a4aa-030e81c00b75  927G   33M  927G   1% /var/lib/ceph/osd/ceph-3
ubuntu@mira023:~/ceph-deploy$ sudo ceph osd tree
# id    weight    type name    up/down    reweight
-1    3.61    root default
-2    3.61        host mira023
0    0.9            osd.0    down    0    
1    0.9            osd.1    down    0    
2    0.91            osd.2    down    0    
3    0.9            osd.3    down    0    
ubuntu@mira023:~/ceph-deploy$ sudo ceph health
HEALTH_WARN 64 pgs stuck inactive; 64 pgs stuck unclean
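 
One way to confirm that the daemons themselves failed to start (rather than a mount problem) is to look for running ceph-osd processes and check the per-OSD logs; the commands below are suggested follow-up checks, not part of the original report:

ps aux | grep ceph-osd                         # are any ceph-osd daemons actually running?
sudo tail -n 50 /var/log/ceph/ceph-osd.0.log   # startup errors for osd.0; repeat for osd.1-3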

Actions #1

Updated by Tamilarasi muthamizhan over 9 years ago

  • Subject changed from osds are not activated to osds down after fresh deploy in master branch of ceph
Actions #2

Updated by Sage Weil over 9 years ago

This will be fixed later today. It was the ISA preload issue:

2014-08-20 21:04:58.845739 7f7369af2780 -1 load: jerasure load dlopen(/usr/lib/ceph/erasure-code/libec_isa.so): /usr/lib/ceph/erasure-code/libec_isa.so: cannot open shared object file: No such file or directory

(the ISA plugin is not available on precise or el6, but the OSD was still trying to preload it)
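 
A quick way to confirm this on an affected node (suggested checks, not part of the original comment):

ls -l /usr/lib/ceph/erasure-code/libec_isa.so    # missing on precise/el6
grep libec_isa /var/log/ceph/ceph-osd.*.log      # OSDs that hit the dlopen failure at startup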

Actions #3

Updated by Sage Weil over 9 years ago

  • Status changed from New to Resolved