Bug #7391


ceph-deploy should pass the verbose flag to ceph-disk

Added by Alfredo Deza about 10 years ago. Updated almost 10 years ago.

Status: Resolved
Priority: Normal
Category: -
Target version: -
% Done: 0%
Source: other
Severity: 3 - minor

Description

Any and all output from ceph-disk is useful; there is no need to be quiet about it, because suppressing that output makes it extremely hard to debug when something is not quite right.

Older versions of Ceph shipped separate executables, e.g. ceph-disk-prepare instead of `ceph-disk prepare`. The current approach passes the arguments from ceph-disk-prepare through to ceph-disk, but verbosity control comes before the subcommand, so adding the flag there would effectively be a backwards-incompatible change (see the sketch below).
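
A minimal sketch (illustrative only, not the actual ceph-deploy code) of the ordering constraint described above; the helper name ceph_disk_command is hypothetical. The point is that -v is a global ceph-disk flag, so it must be spliced in ahead of the subcommand rather than appended to the subcommand's arguments.

def ceph_disk_command(subcommand, args, verbose=True):
    # Hypothetical helper: build a ceph-disk invocation with global
    # flags placed *before* the subcommand. Appending -v after
    # `prepare` would not work, since ceph-disk parses global flags
    # ahead of the subcommand.
    cmd = ['ceph-disk']
    if verbose:
        cmd.append('-v')
    cmd.append(subcommand)
    cmd.extend(args)
    return cmd

# Yields the invocation seen in the example output below:
#   ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb /dev/sdb2
print(' '.join(ceph_disk_command(
    'prepare',
    ['--fs-type', 'xfs', '--cluster', 'ceph', '--', '/dev/sdb', '/dev/sdb2'])))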

Actions #1

Updated by Alfredo Deza almost 10 years ago

  • Description updated (diff)
Actions #2

Updated by Alfredo Deza almost 10 years ago

Example output with the changeset

(ceph-deploy)papaya ~/tmp/foo ❯ ceph-deploy osd create node1:sdb:sdb2
[ceph_deploy.conf][DEBUG ] found configuration file at: /Users/alfredo/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.9): /Users/alfredo/.virtualenvs/ceph-deploy/bin/ceph-deploy osd create node1:sdb:sdb2
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb:/dev/sdb2
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb journal /dev/sdb2 activate True
[node1][INFO  ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb /dev/sdb2
[node1][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[node1][DEBUG ] order to align on 2048-sector boundaries.
[node1][DEBUG ] The operation has completed successfully.
[node1][DEBUG ] meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=524223 blks
[node1][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[node1][DEBUG ] data     =                       bsize=4096   blocks=2096891, imaxpct=25
[node1][DEBUG ]          =                       sunit=0      swidth=0 blks
[node1][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[node1][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2
[node1][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[node1][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] [INFO  ] Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] [INFO  ] Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node1][WARNIN] [INFO  ] Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node1][WARNIN] [INFO  ] Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node1][WARNIN] [INFO  ] Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node1][WARNIN] [INFO  ] Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node1][WARNIN] [DEBUG ] Journal is file /dev/sdb2
[node1][WARNIN] [WARNING] OSD will not be hot-swappable if journal is not the same device as the osd data
[node1][WARNIN] [DEBUG ] Creating osd partition on /dev/sdb
[node1][WARNIN] [INFO  ] Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:97badffc-8d1a-4e5f-8a77-f9bd8de736fc --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb
[node1][WARNIN] [INFO  ] Running command: /sbin/partprobe /dev/sdb
[node1][WARNIN] [INFO  ] Running command: /sbin/udevadm settle
[node1][WARNIN] [DEBUG ] Creating xfs fs on /dev/sdb1
[node1][WARNIN] [INFO  ] Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[node1][WARNIN] [DEBUG ] Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.9dzJEL with options noatime
[node1][WARNIN] [INFO  ] Running command: /bin/mount -t xfs -o noatime -- /dev/sdb1 /var/lib/ceph/tmp/mnt.9dzJEL
[node1][WARNIN] [DEBUG ] Preparing osd data dir /var/lib/ceph/tmp/mnt.9dzJEL
[node1][WARNIN] [DEBUG ] Creating symlink /var/lib/ceph/tmp/mnt.9dzJEL/journal -> /dev/sdb2
[node1][WARNIN] [DEBUG ] Unmounting /var/lib/ceph/tmp/mnt.9dzJEL
[node1][WARNIN] [INFO  ] Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.9dzJEL
[node1][WARNIN] [INFO  ] Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[node1][WARNIN] [DEBUG ] Calling partprobe on prepared device /dev/sdb
[node1][WARNIN] [INFO  ] Running command: /sbin/partprobe /dev/sdb
[node1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[node1][INFO  ] checking OSD status...
[node1][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[node1][WARNIN] there are 2 OSDs down
[node1][WARNIN] there are 2 OSDs out
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.
Actions #3

Updated by Sage Weil almost 10 years ago

  • Priority changed from Normal to High
Actions #4

Updated by Alfredo Deza almost 10 years ago

  • Status changed from In Progress to Fix Under Review
  • Priority changed from High to Normal
Actions #5

Updated by Alfredo Deza almost 10 years ago

  • Status changed from Fix Under Review to Resolved

merged commit 7b0056b into ceph:master
