Bug #10922 (closed)

ceph-deploy prepare activates the OSD automatically

Added by Tyler Bishop about 9 years ago. Updated about 9 years ago.

Status: Resolved
Priority: Normal
Assignee: Loïc Dachary
Category: -
Target version: -
% Done: 0%
Source: other
Severity: 3 - minor

Description

As per the documentation here, ceph-deploy osd prepare should NOT activate the OSD:

http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/

On EL7, however, the OSD is both provisioned and activated.
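
For reference, the linked page describes a two-step workflow in which activation is a separate, explicit command (placeholders mine; exact argument forms follow the docs of the time):

ceph-deploy osd prepare {node}:{disk}     # partition the disk, make the fs, write the OSD data dir
ceph-deploy osd activate {node}:{disk}    # only now should the OSD start and join the cluster

The prepare run below never invokes activate, yet the OSD ends up active anyway: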

ceph-deploy --overwrite-conf  osd prepare ceph0-node3:/dev/sdh
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy --overwrite-conf osd prepare ceph0-node3:/dev/sdh
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph0-node3:/dev/sdh:
[ceph0-node3][DEBUG ] connection detected need for sudo
[ceph0-node3][DEBUG ] connected to host: ceph0-node3 
[ceph0-node3][DEBUG ] detect platform information from remote host
[ceph0-node3][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.0.1406 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph0-node3
[ceph0-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph0-node3][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph0-node3 disk /dev/sdh journal None activate False
[ceph0-node3][INFO  ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdh
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph0-node3][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdh
[ceph0-node3][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 12000 on /dev/sdh
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:12000M --change-name=2:ceph journal --partition-guid=2:705da1b1-726d-43f8-8301-7d8db0869c99 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdh
[ceph0-node3][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph0-node3][DEBUG ] order to align on 2048-sector boundaries.
[ceph0-node3][DEBUG ] The operation has completed successfully.
[ceph0-node3][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdh
[ceph0-node3][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdh
[ceph0-node3][WARNIN] partx: /dev/sdh: error adding partition 2
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph0-node3][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/705da1b1-726d-43f8-8301-7d8db0869c99
[ceph0-node3][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdh
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:9cfedbec-803f-4bec-91d3-6b73d0633d5f --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdh
[ceph0-node3][DEBUG ] Information: Moved requested sector from 24576001 to 24578048 in
[ceph0-node3][DEBUG ] order to align on 2048-sector boundaries.
[ceph0-node3][DEBUG ] The operation has completed successfully.
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdh
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle
[ceph0-node3][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdh1
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdh1
[ceph0-node3][DEBUG ] meta-data=/dev/sdh1              isize=2048   agcount=4, agsize=182373597 blks
[ceph0-node3][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
[ceph0-node3][DEBUG ]          =                       crc=0
[ceph0-node3][DEBUG ] data     =                       bsize=4096   blocks=729494385, imaxpct=5
[ceph0-node3][DEBUG ]          =                       sunit=0      swidth=0 blks
[ceph0-node3][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
[ceph0-node3][DEBUG ] log      =internal log           bsize=4096   blocks=356198, version=2
[ceph0-node3][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[ceph0-node3][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[ceph0-node3][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdh1 on /var/lib/ceph/tmp/mnt.amXYlN with options rw,noatime,noquota,logbsize=256k,logbufs=8,inode64,allocsize=4M
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o rw,noatime,noquota,logbsize=256k,logbufs=8,inode64,allocsize=4M -- /dev/sdh1 /var/lib/ceph/tmp/mnt.amXYlN
[ceph0-node3][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.amXYlN
[ceph0-node3][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.amXYlN/journal -> /dev/disk/by-partuuid/705da1b1-726d-43f8-8301-7d8db0869c99
[ceph0-node3][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.amXYlN
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.amXYlN
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdh
[ceph0-node3][DEBUG ] The operation has completed successfully.
[ceph0-node3][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdh
[ceph0-node3][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
[ceph0-node3][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdh
[ceph0-node3][WARNIN] partx: /dev/sdh: error adding partitions 1-2
[ceph0-node3][INFO  ] checking OSD status...
[ceph0-node3][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph0-node3 is now ready for osd use.

[ceph@ceph0-mon0 ~]$ ceph osd tree | grep osd.42
42    2.72            osd.42    up    1

5 minutes after prepare:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdh1       2.8T   39G  2.7T   2% /var/lib/ceph/osd/ceph-42

The disk is activated and added to the pool even though only prepare was run.
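
The likely mechanism is visible in the log itself: ceph-deploy first fires udevadm trigger --subsystem-match=block --action=add, and at the end of the run ceph-disk retags the data partition with the Ceph OSD type GUID (the sgdisk --typecode=1:4fbd7e29-... call). On EL7, ceph ships udev rules (95-ceph-osd.rules) that react to any partition carrying that GUID by activating it. A sketch of the rule's shape, not the verbatim file:

# hypothetical reconstruction; compare /lib/udev/rules.d/95-ceph-osd.rules on the node
ACTION=="add", SUBSYSTEM=="block", ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", RUN+="/usr/sbin/ceph-disk activate /dev/$name"

On any system where such a rule is installed, prepare is effectively prepare-plus-activate.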

#1 - Updated by Alfredo Deza about 9 years ago

  • Description updated (diff)
#2 - Updated by Loïc Dachary about 9 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Loïc Dachary

The documentation should be fixed: https://github.com/ceph/ceph/pull/3901
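
To confirm that udev, not ceph-deploy itself, performs the activation, one could check which installed rules reference the OSD data GUID and dry-run the event for the partition. A rough sketch, with the device name taken from this report and paths assumed:

grep -l 4fbd7e29-9d25-41b8-afd0-062c0ceff05d /lib/udev/rules.d/*.rules   # rule files matching the OSD GUID
udevadm test /sys/block/sdh/sdh1 2>&1 | grep -i ceph                     # show which ceph RUN commands would fire

udevadm test only simulates the event; it prints the matching RUN entries without executing them.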

#3 - Updated by Loïc Dachary about 9 years ago

  • Status changed from Fix Under Review to Resolved
