Bug #9327


Usability Issue: Ceph-deploy does not print all the commands which it is executing

Added by Hirak Mazumder over 9 years ago. Updated over 9 years ago.

Status:
Rejected
Priority:
Low
Assignee:
Category:
ceph cli
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Issue description: Noticed that during the OSD prepare command, the ceph-deploy script does not print all the commands it is executing, yet it still returns a success message.

Steps:

1. Create a single-node cluster
2. Create 1 monitor
3. Create 8 OSDs using the ceph-deploy script (a sketch of the commands follows)
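
A minimal sketch of the reproduction, assuming a host named stormeap-0 (as in the output below) and eight data disks /dev/sdb through /dev/sdi; the hostname and device names are illustrative, not taken from the report:

# illustrative reproduction; adjust host and disk names to your setup
ceph-deploy new stormeap-0
ceph-deploy install stormeap-0
ceph-deploy mon create-initial
# prepare 8 OSDs, one per data disk (journal colocated on the same disk)
for disk in sdb sdc sdd sde sdf sdg sdh sdi; do
    ceph-deploy osd prepare stormeap-0:/dev/$disk
done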

Observations: Noticed that while creating the OSDs, ceph-deploy does not print all the commands it is executing, yet it returns a success message for the same step.

Here is the output I am referring to:

[ceph_deploy.osd][DEBUG ] Preparing host stormeap-0 disk /dev/sdb journal None activate False
[stormeap-0][INFO ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb
[stormeap-0][DEBUG ] The operation has completed successfully.
[stormeap-0][DEBUG ] The operation has completed successfully. >>> Not sure where this success message is coming from

Expected Results: The command logs are posted to the ceph.log file in any case, and without them on the console it will be hard for the user to debug a failure at this step. Hence it is advisable to print the commands that the script executes in the background. This convention is already followed for the other commands, so the expectation is that all executed commands should be printed.

Actions #1

Updated by Alfredo Deza over 9 years ago

  • Status changed from New to Rejected

What you are seeing is actually output from the remote host (stormeap-0 in your case), produced by ceph-disk in your example.

It would be up to ceph-disk to label its output correctly; ceph-deploy is merely reporting what it ran.

If you are using the latest Ceph version (0.80.5 at the moment), ceph-disk and ceph-deploy have been updated to show this as well.
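
For illustration, a minimal sketch (not ceph-deploy's actual implementation) of the relaying behavior described above, assuming SSH access to the remote host: every line the remote command prints is forwarded with a host prefix, which is why messages emitted by the remote tooling appear in ceph-deploy's output.

# run the command remotely and prefix each output line with the hostname,
# mimicking ceph-deploy's log format
ssh stormeap-0 'ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb' 2>&1 \
  | sed 's/^/[stormeap-0][DEBUG ] /'
# "The operation has completed successfully." is printed by the remote
# tooling itself (likely sgdisk, which ceph-disk invokes once per
# partition it creates, hence the repeated line), not by ceph-deploy.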

Actions #2

Updated by Ramakrishnan P over 9 years ago

This issue is seen in the 0.84 build; can we cross-check once?
