Ceph : Issues
https://tracker.ceph.com/ (2023-06-06)
Ceph - Bug #61598 (New): gcc-14: FTBFS "error: call to non-'constexpr' function 'virtual unsigned...
https://tracker.ceph.com/issues/61598 (2023-06-06, Tim Serong, tserong@suse.com)
<p>gcc 14 has introduced a change which results in ceph build failures:</p>
<pre>
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/osd/osd_types.h: In lambda function:
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/common/dout.h:184:73: error: call to non-'constexpr' function 'virtual unsigned int DoutPrefixProvider::get_subsys() const'
[ 270s] 184 | dout_impl(pdpp->get_cct(), ceph::dout::need_dynamic(pdpp->get_subsys()), v) \
[ 270s] | ~~~~~~~~~~~~~~~~^~
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/common/dout.h:155:58: note: in definition of macro 'dout_impl'
[ 270s] 155 | return (cctX->_conf->subsys.template should_gather<sub, v>()); \
[ 270s] | ^~~
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/osd/osd_types.h:3618:3: note: in expansion of macro 'ldpp_dout'
[ 270s] 3618 | ldpp_dout(dpp, 10) << "build_prior all_probe " << all_probe << dendl;
[ 270s] | ^~~~~~~~~
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/common/dout.h:51:20: note: 'virtual unsigned int DoutPrefixProvider::get_subsys() const' declared here
[ 270s] 51 | virtual unsigned get_subsys() const = 0;
[ 270s] | ^~~~~~~~~~
</pre>
<p>The gcc change is described at <a class="external" href="https://gcc.gnu.org/pipermail/gcc-patches/2023-May/617196.html">https://gcc.gnu.org/pipermail/gcc-patches/2023-May/617196.html</a>.</p>
<p>The ceph FTBFS was mentioned in a follow-up post at <a class="external" href="https://gcc.gnu.org/pipermail/gcc-patches/2023-May/618384.html">https://gcc.gnu.org/pipermail/gcc-patches/2023-May/618384.html</a>, and apparently this failure is now expected, as <code>DoutPrefixProvider::get_subsys()</code> isn't declared <code>constexpr</code> but really should be.</p>
<p>I tried to fix this experimentally by simply declaring <code>get_subsys()</code> as <code>constexpr</code>, e.g.:</p>
<pre>
diff --git a/src/common/dout.h b/src/common/dout.h
index a1375fbb910..6e91750708a 100644
--- a/src/common/dout.h
+++ b/src/common/dout.h
@@ -61,7 +61,7 @@ class NoDoutPrefix : public DoutPrefixProvider {
std::ostream& gen_prefix(std::ostream& out) const override { return out; }
CephContext *get_cct() const override { return cct; }
- unsigned get_subsys() const override { return subsys; }
+ constexpr unsigned get_subsys() const override { return subsys; }
};
// a prefix provider with static (const char*) prefix
@@ -88,7 +88,7 @@ class DoutPrefixPipe : public DoutPrefixProvider {
return out;
}
CephContext *get_cct() const override { return dpp.get_cct(); }
- unsigned get_subsys() const override { return dpp.get_subsys(); }
+ constexpr unsigned get_subsys() const override { return dpp.get_subsys(); }
virtual void add_prefix(std::ostream& out) const = 0;
};
</pre>
<p>...but that has some problems:</p>
<p>1) Instead of an outright build failure, I get <code>warning: virtual functions cannot be 'constexpr' before C++20 [-Winvalid-constexpr]</code>. I imagine this is undesirable.<br />2) Even if 1 <em>is</em> acceptable, there are plenty of other subclasses of <code>DoutPrefixProvider</code>, all of which would <em>also</em> need their <code>get_subsys()</code> methods declared <code>constexpr</code> for the build to complete.</p>
<p>TBH the whole <code>dout</code> thing is black magic to me, so I could really use some assistance with how best to fix this.</p>

Ceph - Bug #56466 (Resolved): pacific: boost 1.73.0 is incompatible with python 3.10
https://tracker.ceph.com/issues/56466 (2022-07-05, Tim Serong, tserong@suse.com)
<p>Ceph pacific includes boost 1.73.0, which uses the <code>_Py_fopen()</code> function, which is no longer available in python 3.10. This means it's not possible to build ceph pacific RPMs against python 3.10. Builds will fail with:</p>
<pre>[ 182s] libs/python/src/exec.cpp: In function 'boost::python::api::object boost::python::exec_file(const char*, api::object, api::object)':
[ 182s] libs/python/src/exec.cpp:109:14: error: '_Py_fopen' was not declared in this scope; did you mean '_Py_wfopen'?
[ 182s] 109 | FILE *fs = _Py_fopen(f, "r");
[ 182s] | ^~~~~~~~~
[ 182s] | _Py_wfopen
</pre>
<p>This is not a problem with quincy or newer, as those use boost 1.75.0, which includes a patch that switches to using fopen() for python versions >= 3.1.</p>

Ceph - Bug #55087 (Resolved): rpm: openSUSE needs libthrift-devel, not thrift-devel
https://tracker.ceph.com/issues/55087 (2022-03-28, Tim Serong, tserong@suse.com)
<p>In <a class="external" href="https://github.com/ceph/ceph/pull/38783">https://github.com/ceph/ceph/pull/38783</a>, commit <a class="external" href="https://github.com/ceph/ceph/pull/38783/commits/80e82686eba">80e82686eba</a> added "thrift-devel >= 0.13.0" as a BuildRequires. On SUSE distros, this package is named libthrift-devel, so we need an <code>%if 0%{?suse_version}</code> block around that one.</p>

Dashboard - Bug #54215 (Resolved): mgr/dashboard: "Please expand your cluster first" shouldn't be...
https://tracker.ceph.com/issues/54215 (2022-02-09, Tim Serong, tserong@suse.com)
<h3>Description of problem</h3>
<p><a class="external" href="https://tracker.ceph.com/issues/50336">https://tracker.ceph.com/issues/50336</a> introduced a neat "Create Cluster" workflow to help set up new clusters: when you first log in to the dashboard, you're prompted to expand the cluster (or skip that step). IMO this screen should not be present at all for clusters that have already been "expanded". For example, if I've already used <code>ceph orch</code> to create OSDs, add hosts, etc., this step is redundant; I shouldn't have to click "skip", it should just not be there in the first place. Likewise if I'm upgrading from an earlier (pre-Pacific) release.</p>
<h3>Environment</h3>
<ul>
<li><code>ceph version</code> string: 16.2.7-37-gb3be69440db</li>
<li>Platform (OS/distro/release): SUSE Linux Enterprise Server 15 SP3</li>
<li>Cluster details (nodes, monitors, OSDs): 4 nodes, 3 mons, 8 OSDs</li>
<li>Did it happen on a stable environment or after a migration/upgrade?: seen after an upgrade from Octopus</li>
<li>Browser used (e.g.: <code>Version 86.0.4240.198 (Official Build) (64-bit)</code>): Firefox 96.0.1 64 bit</li>
</ul>
<h3>How reproducible</h3>
<p>Steps:</p>
<ol>
<li>Deploy Octopus</li>
<li>Configure the cluster (add some hosts, OSDs etc.)</li>
<li>Upgrade to Pacific</li>
<li>Log in to the dashboard</li>
</ol>
<p>or:</p>
<ol>
<li>Deploy Pacific</li>
<li>Use <code>ceph orch</code> to add hosts, deploy OSDs, ...</li>
<li>Log in to the dashboard</li>
</ol>
<h3>Actual results</h3>
<p>I see a screen that says "Welcome to Ceph Dashboard - Please expand your cluster first"</p>
<h3>Expected results</h3>
<p>I see the regular status screen</p>
<h3>Additional info</h3>
<p>Would it be enough to add a check to see if there's > 1 node and/or > 0 OSDs, and in this case assume we don't need to show this screen?</p>
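<p>Sketching that check in Python (hypothetical names only; the dashboard's actual data sources would differ):</p>

<pre>
# Hypothetical sketch: suppress the "expand cluster" screen whenever the
# cluster already looks configured (more than one host, or any OSDs).
def should_show_expand_cluster(num_hosts, num_osds):
    """Show the welcome/expand screen only for a fresh, unconfigured cluster."""
    return num_hosts <= 1 and num_osds == 0

assert should_show_expand_cluster(1, 0) is True    # fresh deployment
assert should_show_expand_cluster(4, 8) is False   # e.g. the cluster above
</pre>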
ceph-volume - Bug #53846 (Resolved): ceph-volume should ignore /dev/rbd* devices
https://tracker.ceph.com/issues/53846 (2022-01-12, Tim Serong, tserong@suse.com)

<p>If rbd devices are mapped on ceph cluster nodes (as they may be if you're running an iSCSI gateway for example), then <code>ceph-volume inventory</code> will list those RBD devices, and quite possibly list them as being "available". This causes a couple of problems:</p>
<p>1) Because /dev/rbd0 appears in the list of available devices, the orchestrator will actually try to deploy OSDs on top of those RBD devices. Luckily, this will fail, because the various LVM invocations will die with "Device /dev/rbd0 excluded by a filter", but really we shouldn't even be trying to do this in the first place. Let's not rely on luck ;-)<br />2) It's possible for /dev/rbd* devices to be locked/stuck in such a way that when ceph-volume invokes <code>blkid</code>, it hangs indefinitely (the process ends up in D-state). This can actually block the entire orchestrator, because the orchestrator calls out to cephadm periodically to inventory devices, and the latter tries to acquire a lock, which it can't get because a prior invocation is stuck running <code>ceph-volume inventory</code>.</p>
<p>I suggest we make ceph-volume completely ignore /dev/rbd* when doing a device inventory. I know we had a similar discussion on dev@ceph.io regarding ceph-volume listing, or not listing, GPT devices (see <a class="external" href="https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/N3TK4IO2QYHXIZMQTZ4AMPU5BE56J5MP/#T7UM53WCW2MDD62DDH6KLI4EZXKBXZBY">https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/N3TK4IO2QYHXIZMQTZ4AMPU5BE56J5MP/#T7UM53WCW2MDD62DDH6KLI4EZXKBXZBY</a>), but the difference here is that mapped RBD volumes really <em>aren't</em> part of the host inventory, so IMO should be excluded.</p>
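<p>A minimal sketch of the suggested filter in Python (illustrative only, not ceph-volume's actual code):</p>

<pre>
import os

def filter_inventory_devices(devices):
    """Drop mapped RBD devices from the inventory candidate list.

    /dev/rbd* devices are Ceph's own mapped images, not host disks, so
    probing them (blkid etc.) is pointless and can hang indefinitely.
    """
    return [d for d in devices if not os.path.basename(d).startswith("rbd")]

print(filter_inventory_devices(["/dev/sda", "/dev/rbd0", "/dev/vdb"]))
# ['/dev/sda', '/dev/vdb']
</pre>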
Ceph - Bug #53060 (Closed): Unable to load libceph_snappy.so due to undefined symbol _ZTIN6snappy...
https://tracker.ceph.com/issues/53060 (2021-10-27, Tim Serong, tserong@suse.com)

<p>If you try to run Ceph with snappy 1.1.9 installed, <code>ceph status</code> will show HEALTH_WARN, and tell you that your OSDs "have broken BlueStore compression". <code>ceph health detail</code> will tell you that each of your OSDs is "unable to load:snappy". The OSD logs will show something like this:</p>
<pre>
Oct 27 08:55:33 node1 ceph-osd[561817]: load failed dlopen(): "/usr/lib64/ceph/compressor/libceph_snappy.so: undefined symbol: _ZTIN6snappy6SourceE" or "/usr/lib64/ceph/libceph_snappy.so: cannot open shared object file: No such file or directory"
Oct 27 08:55:33 node1 ceph-osd[561817]: create cannot load compressor of type snappy
</pre>
<p>This is because RTTI was disabled in snappy 1.1.9, so the typeinfo for the <code>snappy::Source</code> class - which Ceph's SnappyCompressor creates a subclass of - isn't included in libsnappy.so. Ceph still <em>builds</em> just fine, because the compressors are built as shared libraries. The problem only manifests when our snappy plugin is dlopen()ed at runtime, and then the linker kicks in and can't find that missing symbol.</p>
<p>This would ideally be fixed by getting RTTI re-enabled in snappy, so I've gone ahead and opened <a class="external" href="https://github.com/google/snappy/pull/144">https://github.com/google/snappy/pull/144</a>.</p>

mgr - Bug #42958 (Resolved): mgr/PyModule.cc: set_config with an empty value doesn't remove the c...
https://tracker.ceph.com/issues/42958 (2019-11-22, Tim Serong, tserong@suse.com)
<p>PyModuleConfig::set_config() is meant to set a configuration option if a value is provided, and remove the option if no value is passed. The setting part works fine; the removing part, not so much. For example, if you run <code>ceph orchestrator set backend ''</code> to unset the orchestrator backend, the mgr/orchestrator_cli/orchestrator config option remains present, and the mgr log says:</p>
<pre>
2019-11-22T04:18:55.840+0100 7f2807c67700 0 mgr[py] `config set mgrmgr/orchestrator_cli/orchestrator --` failed: (22) Invalid argument
2019-11-22T04:18:55.840+0100 7f2807c67700 0 mgr[py] mon returned -22: unrecognized config target ''
</pre>
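<p>A sketch of the intended semantics in Python pseudocode (the real code is C++ in src/mgr/PyModule.cc; names here are illustrative only):</p>

<pre>
def set_config(mon_command, module_name, key, value):
    """Set a module config option, or remove it if no value is given."""
    name = "mgr/%s/%s" % (module_name, key)
    if value is None or value == "":
        # An empty value should translate into "config rm", rather than
        # a "config set" with a missing value, which the mon rejects
        # with EINVAL as shown in the log above.
        mon_command({"prefix": "config rm", "who": "mgr", "name": name})
    else:
        mon_command({"prefix": "config set", "who": "mgr",
                     "name": name, "value": value})
</pre>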
mgr - Bug #39662 (Resolved): ceph-mgr should log an error if it can't find any modules to load
https://tracker.ceph.com/issues/39662 (2019-05-10, Tim Serong, tserong@suse.com)

<p>I had a downstream SUSE build of ceph-14.1.0, which was showing "7 mgr modules have failed" in <code>ceph status</code>, and <code>ceph mgr dump</code> showed "available_modules": [] (i.e. an empty list). The 7 failed modules were of course the always-on modules. I eventually figured out that mgr_module_path was somehow set to /usr/local/share/ceph/mgr in that build, but the modules were actually correctly installed in /usr/share/ceph/mgr, so mgr couldn't find them. We should log the path we're loading modules from, and log an error if none are found, so that if mgr_module_path is ever set incorrectly in future, the problem will be obvious.</p>
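<p>An illustrative sketch of the suggested behaviour, in Python (the real loader lives in ceph-mgr's C++ code; the function name is hypothetical):</p>

<pre>
import logging
import os

def scan_module_path(mgr_module_path):
    """Log where we look for mgr modules, and complain if none are found."""
    logging.info("loading mgr modules from %s", mgr_module_path)
    found = [d for d in os.listdir(mgr_module_path)
             if os.path.isdir(os.path.join(mgr_module_path, d))]
    if not found:
        logging.error("no mgr modules found in %s; is mgr_module_path "
                      "set correctly?", mgr_module_path)
    return found
</pre>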
mgr - Bug #38469 (Resolved): If installed python packages share names with mgr modules, ceph-mgr ...
https://tracker.ceph.com/issues/38469 (2019-02-25, Tim Serong, tserong@suse.com)

<p>Before loading each mgr module, ceph-mgr sets python's sys.path to the site-packages directories, followed by the mgr module path. For example:</p>
<pre>
ceph-mgr[25988]: 2019-02-22 11:06:05.334 7f57f8872b40 10 mgr[py] Computed sys.path '/usr/lib/python36.zip:/usr/lib64/python3.6:/usr/lib64/python3.6:/usr/lib64/python3.6/lib-dynload:/usr/lib64/python3.6/site-packages:/usr/lib/python3.6/site-packages:/usr/local/lib64/python3.6/site-packages:/usr/local/lib/python3.6/site-packages:/usr/lib64/ceph/mgr'
</pre>
<p>This means that if there's some random python package installed on the system that has the same name as a ceph-mgr module, ceph-mgr will try to load that thing, instead of the actual ceph-mgr module. This will of course fail, and the expected mgr module will not be available (see for example <a class="external" href="https://github.com/SUSE/DeepSea/pull/1543#issuecomment-466369489">https://github.com/SUSE/DeepSea/pull/1543#issuecomment-466369489</a>).</p>
<p>We can fix this by changing sys.path so the mgr module path comes <strong>before</strong> the various site-packages directories.</p>
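<p>A minimal sketch of the proposed ordering in Python (illustrative; ceph-mgr actually computes this path list in its embedded-interpreter setup code):</p>

<pre>
import sys

def compute_sys_path(mgr_module_path, site_paths):
    """Put the mgr module path first, so a same-named package in
    site-packages can never shadow a real mgr module."""
    return [mgr_module_path] + [p for p in site_paths if p != mgr_module_path]

sys.path = compute_sys_path("/usr/lib64/ceph/mgr", sys.path)
</pre>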
Ceph - Bug #37503 (Resolved): Audit log: mgr module passwords set on CLI written as plaintext in ...
https://tracker.ceph.com/issues/37503 (2018-12-03, Tim Serong, tserong@suse.com)

<p>A number of mgr modules need passwords set for one reason or another, either to authenticate with external systems (deepsea, influx, diskprediction), or to define credentials for users of those modules (dashboard, restful).</p>
<p>In all cases, these passwords are set from the command line, either via module-specific commands (<code>ceph dashboard ac-user-create</code>, <code>deepsea config-set salt_api_password</code>, etc.) or via <code>ceph config set</code> with some particular key (e.g. mgr/influx/password).</p>
<p>All module-specific commands go through <code>DaemonServer::_handle_command()</code>, which then logs the command via <code>audit_clog->debug()</code> (or <code>audit_clog->info()</code> in case of access denied). This all ends up written to <code>/var/log/ceph/ceph-mgr.$ID.log</code>, which is world-readable, e.g.:</p>
<pre>
2018-12-03 10:45:28.864 7f67e7f8f700 0 log_channel(audit) log [DBG] : from='client.343880 172.16.1.254:39896/3560370796' entity='client.admin' cmd=[{"prefix": "deepsea config-set", "key": "salt_api_password", "value": "foo", "target": ["mgr", ""]}]: dispatch
</pre>
<p>Additionally, anything that results in a "config set" lands in the mon log, e.g.:</p>
<pre>
2018-12-03 10:45:28.881552 [INF] from='mgr.295252 172.16.1.21:56636/175641' entity='mgr.data1' cmd='[{"prefix":"config set","who":"mgr","name":"mgr/deepsea/salt_api_password","value":"foo"}]': finished
</pre>
<p>This also appears in the Audit log in the Dashboard.</p>
<p>Some things that land in the mon log probably don't matter; for any module that hashes passwords before saving them, only the hashed password should land in the mon log. But there's still the problem of the CLI commands in the mgr log, and in any case, modules that need to authenticate with external services will need to store plaintext passwords.</p>
<p>ISTM we need to either never log these things, or somehow keep the command logging, but filter the passwords out, so it renders the value as "*****" instead of the actual password.</p>
<p>I'm not sure how best to approach this, given the way command logging is structured. At the point commands are logged, the commands themselves are just strings. Admittedly, they're strings of JSON, but they're effectively opaque at that point - we'd have to parse the JSON, then look for things that might be passwords, blank them out, and turn the whole lot back into a string. Yuck.</p>
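<p>For what it's worth, a rough sketch of that parse-blank-reserialize approach in Python (heuristic and illustrative only; the real logging code is C++):</p>

<pre>
import json

def scrub_command(cmd_json):
    """Blank out likely passwords before a command string is audit-logged."""
    cmds = json.loads(cmd_json)
    for cmd in cmds:
        # If the command prefix, key or option name mentions "password",
        # assume the accompanying "value" field is the secret.
        sensitive = any("password" in str(cmd.get(f, "")).lower()
                        for f in ("prefix", "key", "name"))
        if sensitive and "value" in cmd:
            cmd["value"] = "*****"
    return json.dumps(cmds)

# e.g. the deepsea command above would have its "value" logged as "*****".
</pre>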
mgr - Bug #37377 (New): ceph-mgr/influx: verify "no metadata" fix is complete
https://tracker.ceph.com/issues/37377 (2018-11-23, Tim Serong, tserong@suse.com)

<p>Seen while reviewing <a class="external" href="https://github.com/ceph/ceph/pull/25184">https://github.com/ceph/ceph/pull/25184</a>. The fix for <a class="external" href="http://tracker.ceph.com/issues/25191">http://tracker.ceph.com/issues/25191</a> in <a class="external" href="https://github.com/ceph/ceph/pull/22794">https://github.com/ceph/ceph/pull/22794</a> is applied to the get_pg_summary() function, but not to the get_daemon_stats() function. We need to verify whether this fix should also be applied to the latter function (my guess is "yes", but I don't know for certain).</p>
Ceph - Bug #35906 (Resolved): ceph-disk: is_mounted() returns None for mounted OSDs with Python 3
https://tracker.ceph.com/issues/35906 (2018-09-10, Tim Serong, tserong@suse.com)

<p><code>ceph-disk list --format=json</code> on python 3 gives null for the mount member, even for mounted OSDs, e.g.:</p>
<pre>
# ceph-disk list --format=json|json_pp
...
{
"path" : "/dev/vdg",
"partitions" : [
{
"whoami" : "23",
"is_partition" : true,
"path" : "/dev/vdg1",
"ceph_fsid" : "00296336-7bf2-43f1-a48c-24c7212bf478",
"dmcrypt" : {},
"uuid" : "b447f027-f116-47d0-9cd1-ca2348e8e3db",
"block_uuid" : "dfaf6613-f958-497a-9dfb-ad343e897639",
"block_dev" : "/dev/vdg2",
"type" : "data",
*"mount" : null,*
"ptype" : "4fbd7e29-9d25-41b8-afd0-062c0ceff05d",
"magic" : "ceph osd volume v026",
"cluster" : "ceph",
"state" : "prepared",
"fs_type" : "xfs"
},
{
"type" : "block",
"is_partition" : true,
"path" : "/dev/vdg2",
"ptype" : "cafecafe-9b03-4f30-b4c6-b4b80ceff106",
"block_for" : "/dev/vdg1",
"dmcrypt" : {},
"uuid" : "dfaf6613-f958-497a-9dfb-ad343e897639"
}
]
}
...
</pre>
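<p>One classic way this happens in a 2-to-3 port (an assumption on my part, not a confirmed diagnosis): /proc/mounts gets read as bytes under Python 3, so comparisons against a str device path never match and the function falls through to None. An illustrative sketch:</p>

<pre>
def is_mounted(dev):
    """Return the mount point of dev, or None if it isn't mounted."""
    with open("/proc/mounts", "rb") as f:
        for line in f:
            fields = line.split()
            # fields are bytes on Python 3; decode before comparing,
            # otherwise b'/dev/vdg1' == '/dev/vdg1' is always False.
            if fields and fields[0].decode() == dev:
                return fields[1].decode()
    return None
</pre>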
Ceph-deploy - Bug #18164 (Resolved): platform.linux_distribution() fails on distros with /etc/os-...
https://tracker.ceph.com/issues/18164 (2016-12-07, Tim Serong, tserong@suse.com)

<p>As with bugs <a href="https://tracker.ceph.com/issues/18141">#18141</a> and <a href="https://tracker.ceph.com/issues/18163">#18163</a>, on the latest SUSE systems, platform.linux_distribution() returns ('','',''), causing ceph-deploy to fail with "UnsupportedPlatform: Platform is not supported:"</p>
<p>Given platform.linux_distribution() is deprecated and doesn't understand /etc/os-release, we should stop using it.</p>

Ceph - Bug #18163 (Resolved): platform.linux_distribution() is deprecated; stop using it
https://tracker.ceph.com/issues/18163 (2016-12-07, Tim Serong, tserong@suse.com)
<p>platform.linux_distribution() is deprecated, so we should stop using it. Notably it uses /etc/SuSE-release on SUSE systems, and the latest SUSE versions don't ship this file; instead they ship /etc/os-release, which platform.linux_distribution() doesn't know about, so it returns ('','','').</p>
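<p>A minimal sketch of reading /etc/os-release directly instead (illustrative only):</p>

<pre>
def parse_os_release(path="/etc/os-release"):
    """Parse /etc/os-release into a dict, e.g. {'ID': 'opensuse', ...}."""
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    return info

release = parse_os_release()
print(release.get("ID"), release.get("VERSION_ID"))
</pre>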
<p>AFAICT, platform.linux_distribution() is currently used by ceph-detect-init, which in turn is used by ceph-disk. If ceph-detect-init can't determine the distro because it sees ('','',''), this results in ceph-disk always tagging the init system as sysvinit.</p>
<p>There are also platform.linux_distribution() invocations in qa/workunits/ceph-disk/ceph-disk-no-lockbox and src/ceph-disk/ceph_disk/main.py, but they look like dead code to me.</p>
<p>See also bug <a href="https://tracker.ceph.com/issues/18141">#18141</a>.</p>

Ceph - Bug #14864 (Resolved): ceph-detect-init requires python-setuptools at runtime
https://tracker.ceph.com/issues/14864 (2016-02-25, Tim Serong, tserong@suse.com)
<p>Testing a reasonably recent ceph-10.0.2 on openSUSE Leap 42.1, my OSDs weren't mounting. I tracked this back to /usr/lib/systemd/system/ceph-disk@.service, which invokes <code>flock /var/lock/ceph-disk /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync %f</code>. This in turn results in:</p>
<pre>
ceph-disk: main_trigger: Namespace(dev='/dev/sdb1', func=<function main_trigger at 0x7fa6ebf6b050>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
ceph-disk: Running command: /sbin/init --version
/sbin/init: unrecognized option '--version'
ceph-disk: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
ceph-disk: Running command: /usr/sbin/sgdisk -i 1 /dev/sdb
ceph-disk: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
ceph-disk: Running command: /usr/sbin/sgdisk -i 1 /dev/sdb
ceph-disk: trigger /dev/sdb1 parttype 4fbd7e29-9d25-41b8-afd0-062c0ceff05d uuid 93b72ed5-7d84-4b0b-a227-330fcd22513e
ceph-disk: Running command: /usr/sbin/ceph-disk activate /dev/sdb1
Traceback (most recent call last):
File "/usr/bin/ceph-detect-init", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
ERROR:ceph-disk:Failed to activate
Traceback (most recent call last):
File "/usr/sbin/ceph-disk", line 4036, in <module>
main(sys.argv[1:])
File "/usr/sbin/ceph-disk", line 3992, in main
main_catch(args.func, args)
File "/usr/sbin/ceph-disk", line 4014, in main_catch
func(args)
File "/usr/sbin/ceph-disk", line 2530, in main_activate
reactivate=args.reactivate,
File "/usr/sbin/ceph-disk", line 2296, in mount_activate
(osd_id, cluster) = activate(path, activate_key_template, init)
File "/usr/sbin/ceph-disk", line 2477, in activate
init = init_get()
File "/usr/sbin/ceph-disk", line 799, in init_get
'--default', 'sysvinit',
File "/usr/sbin/ceph-disk", line 902, in _check_output
raise error
subprocess.CalledProcessError: Command '/usr/bin/ceph-detect-init' returned non-zero exit status 1
</pre>
<p>The important part is:</p>
<pre>
Traceback (most recent call last):
File "/usr/bin/ceph-detect-init", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
</pre>
<p>This is fixable by installing python-setuptools, suggesting that package needs to be added to the RPM Requires and, I assume, the Debian Depends.</p>
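<p>A quick way to check for the missing runtime dependency on a given host (illustrative only):</p>

<pre>
try:
    # ceph-detect-init's entry-point script needs this at startup;
    # pkg_resources is shipped by the python-setuptools package.
    from pkg_resources import load_entry_point  # noqa: F401
    print("python-setuptools is installed")
except ImportError:
    raise SystemExit("python-setuptools is NOT installed")
</pre>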