Ceph: Issues (https://tracker.ceph.com/, 2019-12-12)
ceph-volume - Bug #43283 (New): unit test failure: UnicodeDecodeError: 'utf-8' codec can't decode...
https://tracker.ceph.com/issues/43283 (2019-12-12, Jan Fajerski, lists@fajerski.name)
<pre>
___________________ TestFunctionalCall.test_unicode_encoding ___________________
self = <ceph_volume.tests.test_process.TestFunctionalCall object at 0x7f14a38912b0>
def test_unicode_encoding(self):
> process.call(['echo', u'\xd0'])
ceph_volume/tests/test_process.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
command = ['/bin/echo', '�'], kw = {}, executable = '/bin/echo'
terminal_verbose = False, logfile_verbose = True, verbose_on_failure = True
show_command = False, command_msg = 'Running command: /bin/echo �', stdin = None
def call(command, **kw):
"""
Similar to ``subprocess.Popen`` with the following changes:
* returns stdout, stderr, and exit code (vs. just the exit code)
* logs the full contents of stderr and stdout (separately) to the file log
By default, no terminal output is given, not even the command that is going
to run.
Useful when system calls are needed to act on output, and that same output
shouldn't get displayed on the terminal.
Optionally, the command can be displayed on the terminal and the log file,
and log file output can be turned off. This is useful to prevent sensitive
output going to stderr/stdout and being captured on a log file.
:param terminal_verbose: Log command output to terminal, defaults to False, and
it is forcefully set to True if a return code is non-zero
:param logfile_verbose: Log stderr/stdout output to log file. Defaults to True
:param verbose_on_failure: On a non-zero exit status, it will forcefully set logging ON for
the terminal. Defaults to True
"""
executable = which(command.pop(0))
command.insert(0, executable)
terminal_verbose = kw.pop('terminal_verbose', False)
logfile_verbose = kw.pop('logfile_verbose', True)
verbose_on_failure = kw.pop('verbose_on_failure', True)
show_command = kw.pop('show_command', False)
command_msg = "Running command: %s" % ' '.join(command)
stdin = kw.pop('stdin', None)
logger.info(command_msg)
if show_command:
terminal.write(command_msg)
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE,
close_fds=True,
**kw
)
if stdin:
stdout_stream, stderr_stream = process.communicate(as_bytes(stdin))
else:
stdout_stream = process.stdout.read()
stderr_stream = process.stderr.read()
returncode = process.wait()
if not isinstance(stdout_stream, str):
> stdout_stream = stdout_stream.decode('utf-8')
E UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd0 in position 0: invalid continuation byte
ceph_volume/process.py:211: UnicodeDecodeError
------------------------------ Captured log call -------------------------------
INFO ceph_volume.process:process.py:191 Running command: /bin/echo �
</pre>
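<p>The failure is that ceph-volume assumes subprocess output is valid UTF-8, but <code>echo</code> hands back the raw byte <code>0xd0</code>. A defensive decode along these lines would avoid the crash (a sketch, not the actual ceph-volume patch; <code>safe_decode</code> is a hypothetical helper name):</p>
<pre>
# Sketch: decode subprocess output without raising on non-UTF-8 bytes.
def safe_decode(data):
    if isinstance(data, str):
        return data
    try:
        return data.decode('utf-8')
    except UnicodeDecodeError:
        # Replace undecodable bytes (like 0xd0 above) with U+FFFD
        # instead of aborting the whole call.
        return data.decode('utf-8', errors='replace')
</pre>
<p>With this, <code>safe_decode(b'\xd0')</code> returns a replacement character rather than raising, so logging the command output can never blow up the call itself.</p>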
<p>Seen here <a class="external" href="https://jenkins.ceph.com/job/ceph-volume-pr/606/console">https://jenkins.ceph.com/job/ceph-volume-pr/606/console</a></p>
<p>@Alfredo, mind having a look at this?</p>

Ceph-deploy - Bug #38823 (New): release for nautilus
https://tracker.ceph.com/issues/38823 (2019-03-20, Alfredo Deza, adeza@redhat.com)
<p>Nautilus is out; change the default --release value to it and get a new version out.</p>

Ceph - Bug #37878 (New): cannot build rpm, ceph-volume fails the build
https://tracker.ceph.com/issues/37878 (2019-01-11, Alfredo Deza, adeza@redhat.com)
<p>When building with:</p>
<pre>
CEPH_EXTRA_CMAKE_ARGS=-DALLOCATOR=tcmalloc -DWITH_PYTHON2=OFF -DWITH_PYTHON3=ON -DMGR_PYTHON_VERSION=3
</pre>
<pre>
RPM build errors:
Directory not found: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.1-2423-g78f4707/rpm/el7/BUILDROOT/ceph-14.0.1-2423.g78f4707.el7.x86_64/usr/lib/python2.7/site-packages/ceph_volume
File not found by glob: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.1-2423-g78f4707/rpm/el7/BUILDROOT/ceph-14.0.1-2423.g78f4707.el7.x86_64/usr/lib/python2.7/site-packages/ceph_volume/*
File not found by glob: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.0.1-2423-g78f4707/rpm/el7/BUILDROOT/ceph-14.0.1-2423.g78f4707.el7.x86_64/usr/lib/python2.7/site-packages/ceph_volume-*
+ rm -fr /tmp/install-deps.7636
Build step 'Execute shell' marked build as failure
[PostBuildScript] - Executing post build scripts.
</pre>

ceph-volume - Bug #37589 (New): argument validators can eat up exceptions
https://tracker.ceph.com/issues/37589 (2018-12-10, Alfredo Deza, adeza@redhat.com)
<p>If an error happens inside an argument validator, the exception is eaten up, resulting in a cryptic error message:</p>
<pre>
[root@osd0 vagrant]# CEPH_VOLUME_DEBUG=1 ceph-volume lvm batch --bluestore --report /dev/sdb
usage: ceph-volume lvm batch [-h] [--bluestore] [--filestore] [--report]
[--yes] [--format {json,pretty}] [--dmcrypt]
[--crush-device-class CRUSH_DEVICE_CLASS]
[--no-systemd]
[--osds-per-device OSDS_PER_DEVICE]
[--block-db-size BLOCK_DB_SIZE]
[--journal-size JOURNAL_SIZE] [--prepare]
ceph-volume lvm batch: error: argument DEVICES: invalid <ceph_volume.util.arg_validators.ValidDevice object at 0x7f68072497d0> value: '/dev/sdb'
</pre>
<p>The error is eaten up by argparse:</p>
<pre>
>>> from ceph_volume.util import device
>>> device.Device('/dev/sdb')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/ceph_volume/util/device.py", line 92, in __init__
self.device_id = self._get_device_id()
File "/usr/lib/python2.7/site-packages/ceph_volume/util/device.py", line 198, in _get_device_id
dev_id = '_'.join(p['ID_MODEL'], p['ID_SERIAL_SHORT'])
TypeError: join() takes exactly one argument (2 given)
</pre>
<p>A try/except that deals with this is needed, so that breakage like this doesn't occur.</p>
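<p>One way to keep argparse from swallowing the real traceback is to wrap validators so unexpected exceptions get logged while the value is let through (a sketch, assuming the validator is used as an argparse <code>type</code> callable; <code>SafeValidator</code> is a hypothetical name, not existing ceph-volume code):</p>
<pre>
import argparse
import logging

logger = logging.getLogger(__name__)

class SafeValidator(object):
    """Wrap a validator; log unexpected errors instead of hiding them."""

    def __init__(self, validator):
        self.validator = validator

    def __call__(self, value):
        try:
            return self.validator(value)
        except argparse.ArgumentTypeError:
            raise  # genuine validation failures still abort parsing
        except Exception as error:
            # Surface the real problem, then let the process continue.
            logger.warning('validator failed on %s: %s', value, error)
            return value
</pre>
<p>With <code>type=SafeValidator(ValidDevice())</code>, a bug like the <code>join()</code> TypeError above would show up in the log instead of as "invalid &lt;...ValidDevice object...&gt; value".</p>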
<p>In the event of breakage, I think the validator should allow the process to continue (a warning is fine).</p>

Ceph - Documentation #24147 (Fix Under Review): --osd-objectstore is undocumented
https://tracker.ceph.com/issues/24147 (2018-05-16, Alfredo Deza, adeza@redhat.com)
<p>It doesn't appear in any help menus, and it is set to an empty string in config.cc.</p>

Ceph-deploy - Bug #13629 (New): using --gpg-url alone doesn't do anything
https://tracker.ceph.com/issues/13629 (2015-10-28, Alfredo Deza, adeza@redhat.com)
<p>It gets overwritten by the default gitweb location:</p>
<pre>
[ceph-adm@xxx surveyit-cluster]$ ceph-deploy install ceph1 --release hammer --gpg-url http://download.ceph.com/keys/release.asc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-adm/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy install ceph1 --release hammer --gpg-url http://download.ceph.com/keys/release.asc
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd3504ed680>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7fd3513d9938>
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['ceph1']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : hammer
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : http://download.ceph.com/keys/release.asc
[ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts ceph1
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph1 ...
[ceph1][DEBUG ] connection detected need for sudo
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.1 Maipo
[ceph1][INFO ] installing Ceph on ceph1
[ceph1][INFO ] Running command: sudo yum clean all
[ceph1][DEBUG ] Loaded plugins: priorities, product-id, subscription-manager
[ceph1][DEBUG ] Cleaning repos: epel rhel-7-server-extras-rpms rhel-7-server-optional-rpms
[ceph1][DEBUG ] : rhel-7-server-rpms
[ceph1][DEBUG ] Cleaning up everything
[ceph1][INFO ] Running command: sudo yum -y install epel-release
[ceph1][DEBUG ] Loaded plugins: priorities, product-id, subscription-manager
[ceph1][DEBUG ] Package epel-release-7-5.noarch already installed and latest version
[ceph1][DEBUG ] Nothing to do
[ceph1][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[ceph1][DEBUG ] Loaded plugins: priorities, product-id, subscription-manager
[ceph1][DEBUG ] Package yum-plugin-priorities-1.1.31-29.el7.noarch already installed and latest version
[ceph1][DEBUG ] Nothing to do
[ceph1][DEBUG ] Configure Yum priorities to include obsoletes
[ceph1][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[ceph1][INFO ] Running command: sudo rpm --import https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc
^C[ceph_deploy][ERROR ] KeyboardInterrupt
Killed by signal 2.
</pre>
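<p>The log shows <code>rpm --import</code> still using the gitweb URL even though <code>--gpg-url</code> was passed on the command line. Conceptually the fix is a precedence rule like this (a sketch of the behavior this ticket asks for, not existing ceph-deploy code; <code>GITWEB_DEFAULT</code> here stands in for the built-in default):</p>
<pre>
# An explicitly passed --gpg-url must win over the built-in gitweb default.
GITWEB_DEFAULT = 'https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc'

def resolve_gpg_url(cli_gpg_url=None):
    return cli_gpg_url if cli_gpg_url else GITWEB_DEFAULT
</pre>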
<p>On the other hand, we should also default to using download.ceph.com because gitweb is super slow.</p>

Ceph-deploy - Bug #9248 (New): ceph-deploy: mon create command outputs extra line
https://tracker.ceph.com/issues/9248 (2014-08-26, Tamilarasi Muthamizhan, tamil.muthamizhan@inktank.com)
<p>ceph_deploy version: 1.5.12</p>
<p>ceph version: master</p>
<p>The mon create command outputs an extra line: "Unhandled exception in thread started by".</p>
<p>It looks like this line somehow escaped when the Python exceptions were suppressed.</p>
<pre>
ubuntu@vpm117:~/ceph-deploy$ ./ceph-deploy mon create vpm117
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.12): ./ceph-deploy mon create vpm117
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts vpm117
[ceph_deploy.mon][DEBUG ] detecting platform for host vpm117 ...
[vpm117][DEBUG ] connected to host: vpm117
[vpm117][DEBUG ] detect platform information from remote host
[vpm117][DEBUG ] detect machine type
[ceph_deploy.mon][INFO ] distro info: Ubuntu 12.04 precise
[vpm117][DEBUG ] determining if provided host has same hostname in remote
[vpm117][DEBUG ] get remote short hostname
[vpm117][DEBUG ] deploying mon to vpm117
[vpm117][DEBUG ] get remote short hostname
[vpm117][DEBUG ] remote hostname: vpm117
[vpm117][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[vpm117][DEBUG ] create the mon path if it does not exist
[vpm117][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-vpm117/done
[vpm117][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-vpm117/done
[vpm117][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-vpm117.mon.keyring
[vpm117][DEBUG ] create the monitor keyring file
[vpm117][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i vpm117 --keyring /var/lib/ceph/tmp/ceph-vpm117.mon.keyring
[vpm117][DEBUG ] ceph-mon: mon.noname-a 10.214.138.184:6789/0 is local, renaming to mon.vpm117
[vpm117][DEBUG ] ceph-mon: set fsid to 5dd645af-b295-4c27-b3ff-f299cb4afa9f
[vpm117][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-vpm117 for mon.vpm117
[vpm117][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-vpm117.mon.keyring
[vpm117][DEBUG ] create a done file to avoid re-doing the mon deployment
[vpm117][DEBUG ] create the init path if it does not exist
[vpm117][DEBUG ] locating the `service` executable...
[vpm117][INFO ] Running command: sudo initctl emit ceph-mon cluster=ceph id=vpm117
[vpm117][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm117.asok mon_status
[vpm117][DEBUG ] ********************************************************************************
[vpm117][DEBUG ] status for monitor: mon.vpm117
[vpm117][DEBUG ] {
[vpm117][DEBUG ] "election_epoch": 2,
[vpm117][DEBUG ] "extra_probe_peers": [],
[vpm117][DEBUG ] "monmap": {
[vpm117][DEBUG ] "created": "0.000000",
[vpm117][DEBUG ] "epoch": 1,
[vpm117][DEBUG ] "fsid": "5dd645af-b295-4c27-b3ff-f299cb4afa9f",
[vpm117][DEBUG ] "modified": "0.000000",
[vpm117][DEBUG ] "mons": [
[vpm117][DEBUG ] {
[vpm117][DEBUG ] "addr": "10.214.138.184:6789/0",
[vpm117][DEBUG ] "name": "vpm117",
[vpm117][DEBUG ] "rank": 0
[vpm117][DEBUG ] }
[vpm117][DEBUG ] ]
[vpm117][DEBUG ] },
[vpm117][DEBUG ] "name": "vpm117",
[vpm117][DEBUG ] "outside_quorum": [],
[vpm117][DEBUG ] "quorum": [
[vpm117][DEBUG ] 0
[vpm117][DEBUG ] ],
[vpm117][DEBUG ] "rank": 0,
[vpm117][DEBUG ] "state": "leader",
[vpm117][DEBUG ] "sync_provider": []
[vpm117][DEBUG ] }
[vpm117][DEBUG ] ********************************************************************************
[vpm117][INFO ] monitor: mon.vpm117 is running
[vpm117][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.vpm117.asok mon_status
Unhandled exception in thread started by
</pre>

teuthology - Feature #8317 (New): ceph-deploy: Add a list of "bucket-type=bucket-name" option whe...
https://tracker.ceph.com/issues/8317 (2014-05-08, Geoffrey Hartz, hartz.geoffrey@gmail.com)
<p>Using "ceph-deploy osd <command> ...", at the moment we can't choose which bucket the OSD will reside in.</p>
<p>When adding an OSD the manual way, we can choose where to put the OSD with a list of bucket-type=bucket-name pairs (<a class="external" href="http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual">http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual</a>)</p>
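<p>Parsing such a list into a CRUSH location is straightforward; a minimal sketch of what a <code>ceph-deploy osd</code> option could do with it (<code>parse_crush_location</code> is a hypothetical helper, not existing ceph-deploy code):</p>
<pre>
# Turn ["host=node1", "rack=rack1", ...] into a bucket-type -> bucket-name map.
def parse_crush_location(pairs):
    location = {}
    for pair in pairs:
        bucket_type, _, bucket_name = pair.partition('=')
        if not bucket_name:
            raise ValueError('expected bucket-type=bucket-name, got %r' % pair)
        location[bucket_type] = bucket_name
    return location
</pre>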
<p>With the new features (erasure coding and cache tiering), the default pools will not be suitable.</p>

Ceph-deploy - Feature #6956 (New): ceph-deploy: Prepare node function
https://tracker.ceph.com/issues/6956 (2013-12-09, John Wilkins, jowilkin@redhat.com)
<p>With ceph-deploy new {node} [{node}] ... we can create a new cluster. If the nodes aren't set up for SSH, ceph-deploy will handle it. That's lovely. However, we have a procedure in "Getting Started" that calls for a user with passwordless sudo, typically named "ceph", and SSH access using that user name. That's sort of a pain to set up manually for each node, and is a common problem for people getting started with Ceph. It would be nice to have a ceph-deploy function that could:</p>
<p>1. Create the user with passwordless sudo<br />2. Add the SSH keys</p>
<p>It would be nice if we had something like this:</p>
<p>ceph-deploy node prepare user {node1 [node2 ...]}<br />ceph-deploy node prepare ssh {node1 [node2 ...]}</p>
<p>or simply</p>
<p>ceph-deploy node prepare {node1 [node2 ...]} #implies user and ssh.</p>
<p>We already have <a class="issue tracker-1 status-5 priority-4 priority-default closed" title="Bug: BUG at fs/ceph/caps.c:2178 (Closed)" href="https://tracker.ceph.com/issues/2">#2</a> incorporated into ceph-deploy new, which we could repurpose for:</p>
<p>ceph-deploy node prepare ssh {node1 [node2 ...]}<br />ceph-deploy node prepare ssh --uname username {node1 [node2 ...]}</p>
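<p>The user-creation half amounts to running a few commands on each node. A rough sketch of what <code>node prepare user</code> would need to execute (nothing here is existing ceph-deploy code; the "ceph" username comes from the Getting Started procedure):</p>
<pre>
# Build the command list `ceph-deploy node prepare user` would run remotely:
# create the user, grant passwordless sudo, lock down the sudoers fragment.
def prepare_user_commands(username='ceph'):
    sudoers_line = '%s ALL=(ALL) NOPASSWD: ALL' % username
    sudoers_path = '/etc/sudoers.d/%s' % username
    return [
        ['useradd', '-m', username],
        ['bash', '-c', "echo '%s' > %s" % (sudoers_line, sudoers_path)],
        ['chmod', '0440', sudoers_path],
    ]
</pre>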
<p>The user name version may be a bit more tricky, since you'd need to authenticate as root or with sudo to create the "ceph" user with passwordless sudo.</p>

Ceph-deploy - Bug #6398 (New): default working directory for ceph-deploy
https://tracker.ceph.com/issues/6398 (2013-09-25, Alfredo Deza, adeza@redhat.com)
<p>ceph-deploy creates configurations (and a couple of other things like a keyring) in the current working directory.</p>
<p>This is not entirely bad, but for new users it can be difficult to adhere to this "execute from anywhere but continue to call ceph-deploy from that same directory" mentality.</p>
<p>And ceph-deploy does this because it is easier (lazier!) to assume that wherever it is being executed is where the configs should be.</p>
<p>If we change this to something like `$HOME/ceph_deploy/` or `$HOME/.ceph_deploy` <strong>by default</strong> and allow a user to specify a different <em>main</em> working directory via an environment variable (e.g. "$CEPH_DEPLOY_HOME") or a flag ("--home"), other users could tweak it as needed.</p>
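<p>The proposed lookup order is small enough to sketch (both <code>CEPH_DEPLOY_HOME</code> and <code>--home</code> are proposals from this ticket, not existing ceph-deploy features):</p>
<pre>
import os

def resolve_working_dir(cli_home=None):
    if cli_home:                            # --home flag wins
        return cli_home
    env_home = os.environ.get('CEPH_DEPLOY_HOME')
    if env_home:                            # then the environment variable
        return env_home
    # finally the proposed per-user default
    return os.path.join(os.path.expanduser('~'), '.ceph_deploy')
</pre>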
<p>Some thoughts from Ian:</p>
<ul>
<li>most users will not have multiple configs</li>
<li>you should have to do something special if you are a non standard-user (e.g. teuthology)</li>
</ul>

Ceph-deploy - Feature #6282 (New): ceph-deploy mon create should update mon_host and/or mon_initi...
https://tracker.ceph.com/issues/6282 (2013-09-11, Anonymous)
<p>if mon_host != ip[...ip] (as passed on the CLI), then mon_host should be appended to in cwd/{cluster}.conf and /etc/ceph/{cluster}.conf</p>
<p>mon_initial_members should also be updated if the clients reference this value.</p>
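<p>The update itself is a small config edit; a sketch of what mon create could do (a hypothetical helper, not existing ceph-deploy code, using the stdlib configparser):</p>
<pre>
import configparser

def append_mon_host(conf_path, new_ip):
    """Add new_ip to mon_host in {cluster}.conf if it is not already listed."""
    parser = configparser.ConfigParser()
    parser.read(conf_path)
    ips = [ip.strip() for ip in parser.get('global', 'mon_host').split(',')]
    if new_ip not in ips:
        ips.append(new_ip)
        parser.set('global', 'mon_host', ', '.join(ips))
        with open(conf_path, 'w') as conf_file:
            parser.write(conf_file)
</pre>
<p>The same call would be made against both the cwd config and /etc/ceph/{cluster}.conf; re-running it is a no-op, so repeated mon create invocations stay safe.</p>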
<p>This is necessary so that clients can find any available monitor in the event that not all monitors are created at the same time.</p>

Ceph-deploy - Feature #6021 (New): ceph-deploy should have a quickstart
https://tracker.ceph.com/issues/6021 (2013-08-16, Alfredo Deza, adeza@redhat.com)
<p>If you are trying out ceph-deploy, you probably do not want to run 10 different commands with ceph-deploy and just want to play with ceph.</p>
<p>ceph-deploy should accept a `quickstart` command that can take a number of nodes and set those nodes up with a base config.</p>
<p>It should add other features like the radosgw when that is available too.</p>