Ceph: Issues (Redmine) | https://tracker.ceph.com/ | 2023-06-06T10:51:16Z
Ceph - Bug #61598 (New): gcc-14: FTBFS "error: call to non-'constexpr' function 'virtual unsigned...
https://tracker.ceph.com/issues/61598 | 2023-06-06T10:51:16Z | Tim Serong <tserong@suse.com>
<p>gcc 14 has introduced a change which results in ceph build failures:</p>
<pre>
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/osd/osd_types.h: In lambda function:
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/common/dout.h:184:73: error: call to non-'constexpr' function 'virtual unsigned int DoutPrefixProvider::get_subsys() const'
[ 270s] 184 | dout_impl(pdpp->get_cct(), ceph::dout::need_dynamic(pdpp->get_subsys()), v) \
[ 270s] | ~~~~~~~~~~~~~~~~^~
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/common/dout.h:155:58: note: in definition of macro 'dout_impl'
[ 270s] 155 | return (cctX->_conf->subsys.template should_gather<sub, v>()); \
[ 270s] | ^~~
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/osd/osd_types.h:3618:3: note: in expansion of macro 'ldpp_dout'
[ 270s] 3618 | ldpp_dout(dpp, 10) << "build_prior all_probe " << all_probe << dendl;
[ 270s] | ^~~~~~~~~
[ 270s] /home/abuild/rpmbuild/BUILD/ceph-18.0.0-4135-g87cd54281c8/src/common/dout.h:51:20: note: 'virtual unsigned int DoutPrefixProvider::get_subsys() const' declared here
[ 270s] 51 | virtual unsigned get_subsys() const = 0;
[ 270s] | ^~~~~~~~~~
</pre>
<p>The gcc change is described at <a class="external" href="https://gcc.gnu.org/pipermail/gcc-patches/2023-May/617196.html">https://gcc.gnu.org/pipermail/gcc-patches/2023-May/617196.html</a>.</p>
<p>The ceph FTBFS was mentioned in a follow-up post at <a class="external" href="https://gcc.gnu.org/pipermail/gcc-patches/2023-May/618384.html">https://gcc.gnu.org/pipermail/gcc-patches/2023-May/618384.html</a>, and apparently this failure is now expected, as <code>DoutPrefixProvider::get_subsys()</code> isn't declared <code>constexpr</code> but really should be.</p>
<p>I tried to fix this experimentally by simply declaring <code>constexpr get_subsys()</code>, e.g.:</p>
<pre>
diff --git a/src/common/dout.h b/src/common/dout.h
index a1375fbb910..6e91750708a 100644
--- a/src/common/dout.h
+++ b/src/common/dout.h
@@ -61,7 +61,7 @@ class NoDoutPrefix : public DoutPrefixProvider {
std::ostream& gen_prefix(std::ostream& out) const override { return out; }
CephContext *get_cct() const override { return cct; }
- unsigned get_subsys() const override { return subsys; }
+ constexpr unsigned get_subsys() const override { return subsys; }
};
// a prefix provider with static (const char*) prefix
@@ -88,7 +88,7 @@ class DoutPrefixPipe : public DoutPrefixProvider {
return out;
}
CephContext *get_cct() const override { return dpp.get_cct(); }
- unsigned get_subsys() const override { return dpp.get_subsys(); }
+ constexpr unsigned get_subsys() const override { return dpp.get_subsys(); }
virtual void add_prefix(std::ostream& out) const = 0;
};
</pre>
<p>...but that has some problems:</p>
<p>1) Instead of an outright build failure, I get <code>warning: virtual functions cannot be 'constexpr' before C++20 [-Winvalid-constexpr]</code>. I imagine this is undesirable.<br />2) Even if 1 <em>is</em> desirable, there are plenty of other subclasses of <code>DoutPrefixProvider</code> which would all <em>also</em> need to have their <code>get_subsys()</code> methods declared <code>constexpr</code> for the build to complete.</p>
<p>TBH the whole <code>dout</code> thing is black magic to me, so I could really use some assistance with how best to fix this.</p>

sepia - Support #55535 (Resolved): Sepia Lab Access Request
https://tracker.ceph.com/issues/55535 | 2022-05-04T04:39:55Z | Tim Serong <tserong@suse.com>
<p>1) Do you just need VPN access or will you also be running teuthology jobs?</p>
<p>Both</p>
<p>2) Desired Username:</p>
<p>tserong</p>
<p>3) Alternate e-mail address(es) we can reach you at:</p>
<p><a class="email" href="mailto:tserong@suse.com">tserong@suse.com</a></p>
<p>4) If you don't already have an established history of code contributions to Ceph, is there an existing community or core developer you've worked with who has reviewed your work and can vouch for your access request?</p>
<p style="padding-left:2em;">If you answered "No" to # 4, please answer the following (paste directly below the question to keep indentation):</p>
<p style="padding-left:2em;">4a) Paste a link to a Blueprint or planning doc of yours that was reviewed at a Ceph Developer Monthly.</p>
<p style="padding-left:2em;">4b) Paste a link to an accepted pull request for a major patch or feature.</p>
<p style="padding-left:2em;">4c) If applicable, include a link to the current project (planning doc, dev branch, or pull request) that you are looking to test.</p>
<p>5) Paste your SSH public key(s) between the <code>pre</code> tags<br /><pre>ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAs6Qm1chlR0DVvDfkjqm7t+mjmXS/AkMBblcruV1qFTOYi4zkryJiqIUe/41gmDlMi4Nqa7dzgAoS/OPMCIXFE+XxyuhggvAa87YxDZCkE5XjZ54jbbSdRYn/xa/S3yLLAXhd2J8hfcBj2qmmosV/iT0pWQm5lXD7fQBqLiSt/5DyLzOFVL45oDYy2mN2u5VWBWrKXMPzCF2HQQWv4gISLBY1gF6WozNDChOrmujK6buMjdA29IbrjTZ1YykngeK8fCAxPTaXWdDxbY5fxPF1dw6xAWz3dvtre2OIcm1Z5Au9yuiMCecA5nb+OxFho8zOhwv1+bQmqpcM4kfjWYb33Q== tserong@suse.com</pre></p>
<p>6) Paste your hashed VPN credentials between the <code>pre</code> tags (Format: <code>user@hostname 22CharacterSalt 65CharacterHashedPassword</code>)<br /><pre>tserong@thor VhPmkUKJdgNE/RJBN0nZvA 11671c34eb25c5fdff7f0002de197c1e72922a0e2122901d062a6d3ed758a9cf</pre></p>

ceph-volume - Bug #53846 (Resolved): ceph-volume should ignore /dev/rbd* devices
https://tracker.ceph.com/issues/53846 | 2022-01-12T07:10:56Z | Tim Serong <tserong@suse.com>
<p>If rbd devices are mapped on ceph cluster nodes (as they may be if you're running an iSCSI gateway for example), then <code>ceph-volume inventory</code> will list those RBD devices, and quite possibly list them as being "available". This causes a couple of problems:</p>
<p>1) Because /dev/rbd0 appears in the list of available devices, the orchestrator will actually try to deploy OSDs on top of those RBD devices. Luckily, this will fail, because the various LVM invocations will die with "Device /dev/rbd0 excluded by a filter", but really we shouldn't even be trying to do this in the first place. Let's not rely on luck ;-)<br />2) It's possible for /dev/rbd* devices to be locked/stuck in such a way that when ceph-volume invokes <code>blkid</code>, it hangs indefinitely (the process ends up in D-state). This can actually block the entire orchestrator, because the orchestrator calls out to cephadm periodically to inventory devices, and the latter tries to acquire a lock, which it can't get because a prior invocation is stuck running <code>ceph-volume inventory</code>.</p>
<p>I suggest we make ceph-volume completely ignore /dev/rbd* when doing a device inventory. I know we had a similar discussion on dev@ceph.io regarding ceph-volume listing, or not listing, GPT devices (see <a class="external" href="https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/N3TK4IO2QYHXIZMQTZ4AMPU5BE56J5MP/#T7UM53WCW2MDD62DDH6KLI4EZXKBXZBY">https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/N3TK4IO2QYHXIZMQTZ4AMPU5BE56J5MP/#T7UM53WCW2MDD62DDH6KLI4EZXKBXZBY</a>) but the difference here is that mapped RBD volumes really <em>aren't</em> part of the host inventory, so IMO should be excluded.</p>
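<p>By way of illustration, a minimal sketch of the proposed filtering (hypothetical; not ceph-volume's actual device discovery code) could look like this:</p>
<pre>
# Hypothetical sketch: drop mapped RBD devices (/dev/rbdN and their partitions)
# from a list of inventory candidates. Not ceph-volume's real discovery logic.
import re

RBD_RE = re.compile(r'^/dev/rbd\d+(p\d+)?$')

def filter_rbd(devices):
    """Return devices with any /dev/rbd* entries removed."""
    return [dev for dev in devices if not RBD_RE.match(dev)]

print(filter_rbd(['/dev/sda', '/dev/rbd0', '/dev/rbd0p1', '/dev/vdb']))
# ['/dev/sda', '/dev/vdb']
</pre>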
Ceph - Bug #53060 (Closed): Unable to load libceph_snappy.so due to undefined symbol _ZTIN6snappy...
https://tracker.ceph.com/issues/53060 | 2021-10-27T09:21:47Z | Tim Serong <tserong@suse.com>

<p>If you try to run Ceph with snappy 1.1.9 installed, <code>ceph status</code> will show HEALTH_WARN, and tell you that your OSDs "have broken BlueStore compression". <code>ceph health detail</code> will tell you that each of your OSDs is "unable to load:snappy". The OSD logs will show something like this:</p>
<pre>
Oct 27 08:55:33 node1 ceph-osd[561817]: load failed dlopen(): "/usr/lib64/ceph/compressor/libceph_snappy.so: undefined symbol: _ZTIN6snappy6SourceE" or "/usr/lib64/ceph/libceph_snappy.so: cannot open shared object file: No such file or directory"
Oct 27 08:55:33 node1 ceph-osd[561817]: create cannot load compressor of type snappy
</pre>
<p>This is because RTTI was disabled in snappy 1.1.9, so the typeinfo for the <code>snappy::Source</code> class - which Ceph's SnappyCompressor creates a subclass of - isn't included in libsnappy.so. Ceph still <em>builds</em> just fine, because the compressors are built as shared libraries. The problem only manifests when our snappy plugin is dlopen()ed at runtime, and then the linker kicks in and can't find that missing symbol.</p>
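<p>To make the failure mode concrete, the dlopen() failure can be reproduced outside of ceph-osd. This is a hypothetical sketch using <code>ctypes</code> (Python's dlopen wrapper), assuming the plugin's other library dependencies resolve from the default loader path:</p>
<pre>
# Hypothetical sketch: try to dlopen() the snappy compressor plugin directly.
# With snappy 1.1.9 (RTTI disabled) this should fail with an OSError whose
# message mentions the undefined _ZTIN6snappy6SourceE symbol.
import ctypes

try:
    ctypes.CDLL('/usr/lib64/ceph/compressor/libceph_snappy.so')
    print('plugin loaded fine')
except OSError as err:
    print('plugin failed to load:', err)
</pre>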
<p>This would ideally be fixed by getting RTTI re-enabled in snappy, so I've gone ahead and opened <a class="external" href="https://github.com/google/snappy/pull/144">https://github.com/google/snappy/pull/144</a></p>

Orchestrator - Documentation #49488 (Resolved): Document service specs for iSCSI deployment
https://tracker.ceph.com/issues/49488 | 2021-02-25T08:58:17Z | Tim Serong <tserong@suse.com>
<p>There's currently no documentation on how to deploy iSCSI gateways (or, if there is, I can't find it beyond what's listed in the API documentation at <a class="external" href="https://docs.ceph.com/en/latest/api/mon_command_api/?highlight=iscsi#orch-apply-iscsi">https://docs.ceph.com/en/latest/api/mon_command_api/?highlight=iscsi#orch-apply-iscsi</a>). We should at least add an example service spec somewhere which includes all possible parameters, e.g.:</p>
<pre>service_type: iscsi
service_id: SERVICE_ID
placement:
hosts:
- [...]
spec:
pool: ISCSI_POOL
trusted_ip_list: "IP_ADDRESS_1,IP_ADDRESS_2,IP_ADDRESS_3,..."
api_user: API_USERNAME
api_password: API_PASSWORD
api_secure: true
ssl_cert: |
-----BEGIN CERTIFICATE-----
MIIDtTCCAp2gAwIBAgIYMC4xNzc1NDQxNjEzMzc2MjMyXzxvQ7EcMA0GCSqGSIb3
DQEBCwUAMG0xCzAJBgNVBAYTAlVTMQ0wCwYDVQQIDARVdGFoMRcwFQYDVQQHDA5T
[...]
-----END CERTIFICATE-----
ssl_key: |
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5jdYbjtNTAKW4
/CwQr/7wOiLGzVxChn3mmCIF3DwbL/qvTFTX2d8bDf6LjGwLYloXHscRfxszX/4h
[...]
-----END PRIVATE KEY-----
</pre>

Orchestrator - Feature #45996 (New): adopted prometheus instance uses port 9095, regardless of or...
https://tracker.ceph.com/issues/45996 | 2020-06-15T11:13:22Z | Tim Serong <tserong@suse.com>
<p>When adopting prometheus (<code>cephadm adopt --style legacy --name prometheus.HOSTNAME</code>), the new prometheus daemon starts listening on port 9095, regardless of what port the original daemon was running on. This is a problem for upgrades: if you have an existing grafana instance, it will still be looking at the old prometheus port number.</p>

Orchestrator - Bug #45973 (Rejected): Adopted MDS daemons are removed by the orchestrator because...
https://tracker.ceph.com/issues/45973 | 2020-06-11T10:13:13Z | Tim Serong <tserong@suse.com>
<p>The <a href="https://docs.ceph.com/docs/master/cephadm/adoption/" class="external">docs</a> say that when converting to cephadm, one needs to redeploy MDS daemons. However, it <strong>is</strong> possible to adopt them (<code>cephadm adopt [...] --name mds.myhost</code> seems to work just fine). The problem is that shortly after being adopted, the cephadm orchestrator decides that the MDS is an orphan (there's no service spec), and goes and removes the daemon.</p>
<p>If the correct procedure is always to redeploy, and never to adopt an MDS, then <code>cephadm adopt</code> should presumably be changed to refuse to adopt MDSes (the same is possibly true for RGW, but I haven't verified this).</p>
<p>If, on the other hand, it's permitted to adopt an MDS, then I guess a service spec needs to be created for it automatically?</p>
<p>What's the right thing to do here?</p>

Orchestrator - Bug #45572 (Rejected): cephadm: ceph-crash isn't deployed anywhere
https://tracker.ceph.com/issues/45572 | 2020-05-16T06:50:24Z | Tim Serong <tserong@suse.com>
<p>AFAICT when deploying a containerized cluster with cephadm, ceph-crash is never deployed anywhere. This means that if a daemon crashes and dumps info to /var/lib/ceph/crash, the crash data is saved to disk, but <code>ceph crash stat</code> always prints "0 crashes recorded", because the crashes never actually get posted.</p>
<p>Previously, installing ceph-base would enable the ceph-crash service, but if you're running a cephadm-deployed cluster, you never install that package.</p>
<p>Do we perhaps need to run an additional ceph-crash container on each host?</p>

Orchestrator - Bug #40801 (Duplicate): ceph orchestrator cli alleges support for json-pretty, xml...
https://tracker.ceph.com/issues/40801 | 2019-07-17T11:27:42Z | Tim Serong <tserong@suse.com>
<p>The <code>_list_devices()</code> and <code>_list_services()</code> methods of the orchestrator CLI both specify <code>"name=format,type=CephChoices,strings=json|plain,req=false"</code> for the format parameter, so it should only accept json or plain as formats. However, if I run <code>ceph orchestrator service ls --format=some-unsupported-thing</code>, I get:</p>
<p><code>ceph: error: argument -f/--format: invalid choice: 'json_pretty' (choose from 'json', 'json-pretty', 'xml', 'xml-pretty', 'plain')</code></p>
<p>Given those methods only support json and plain, I'd not expect to see json-pretty, xml or xml-pretty as available options.</p>

RADOS - Backport #38850 (Resolved): upgrade: 1 nautilus mon + 1 luminous mon can't automatically ...
https://tracker.ceph.com/issues/38850 | 2019-03-22T10:19:17Z | Tim Serong <tserong@suse.com>
<p>Seen while upgrading Luminous (12.2.10) to Nautilus (14.2.0). Three mon hosts, four osd hosts. The process was:</p>
<p>- Shutdown mon1 (quorum is now mon2+mon3, both Luminous)<br />- Upgrade mon1 to Nautilus<br />- Start mon1 again. mon1 joins cluster, <code>ceph health</code> reports all three mons OK<br />- Shutdown mon2 (leaving mon1 = Nautilus and mon3 = Luminous)<br />- <code>ceph health</code> is now broken (eventually times out)<br />- mon1 logs repeat:</p>
<pre>2019-03-22 09:50:21.634 7f31906d1700 1 mon.mon1@0(electing) e1 peer v1:172.16.1.13:6789/0 release < min_mon_release, or missing features 0</pre>
<p>- mon3 logs repeat:</p>
<pre>2019-03-22 09:51:59.644672 7fa9f589a700 -1 mon.mon3@2(probing) e1 handle_probe missing features, have 4611087853745930235, required 0, missing 0</pre>
<p>This means that the cluster is effectively down until you're able to complete the upgrade of mon2.</p>
<p>Curiously, on mon1 (Nautilus):</p>
<pre># ceph daemon mon.$(hostname) mon_status|grep min_mon_release
"min_mon_release": 12,
"min_mon_release_name": "luminous",
</pre>
<p>So why is it complaining about release < min_mon_release?</p>
<p>Even more interesting, I can run this on the Luminous mon:</p>
<pre># ceph daemon mon.$(hostname) quorum enter
started responding to quorum, initiated new election</pre>
<p>...and <strong>bam</strong> a few seconds later, we're in business again:</p>
<pre># ceph status
cluster:
id: 44e4a575-5c31-3c61-88c5-001ea49e8aaa
health: HEALTH_WARN
1/3 mons down, quorum mon1,mon3
services:
mon: 3 daemons, quorum mon1,mon3, out of quorum: mon2
mgr: mon3(active), standbys: mon1
osd: 30 osds: 30 up, 30 in
data:
pools: 1 pools, 512 pgs
objects: 0 objects, 0B
usage: 30.3GiB used, 567GiB / 597GiB avail
pgs: 512 active+clean
</pre>
<p>Not that "quorum enter" doesn't help if run from the Nautilus mon, it only works when run from the Luminous mon.</p> Orchestrator - Bug #37514 (Can't reproduce): mgr CLI commands block one another (indefinitely if ...https://tracker.ceph.com/issues/375142018-12-04T06:46:49ZTim Serongtserong@suse.com
<p>There seem to be two problems here:</p>
<p>1) Only one mgr CLI command runs at a time. This isn't obvious unless you find an mgr command that takes a few seconds to run. For example, when testing the deepsea orchestrator, <code>ceph orchestrator device ls</code> takes about five seconds to complete. If I invoke that command in one terminal window, then invoke <code>ceph osd status</code> in another terminal window, the latter will block until the former completes. That's irritating, but probably not fatal.</p>
<p>2) Orchestrator CLI commands spin waiting for command completions from whatever orchestrator module is active. If you manage to break an orchestrator in such a way that commands never complete (try e.g. stopping the salt-api while using DeepSea), <code>ceph orchestrator device ls</code> will never complete. Even if you hit CTRL-C, mgr is (presumably) still stuck in that loop waiting for completions that are never going to happen, which means it's unable to service any subsequent CLI command. Trying to run, say, <code>ceph osd status</code> at this point will also simply hang. You can't even quickly restart mgr at this point: <code>systemctl restart ceph-mgr@$(hostname)</code> hangs until it hits "State 'stop-sigterm' timed out." (after maybe a minute and a half), then it sends a SIGKILL and mgr is finally restarted.</p>
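<p>For illustration only, one way to avoid the indefinite spin would be to bound the wait with a deadline. This is a hypothetical sketch, not the actual orchestrator CLI code; the names <code>completions</code> and <code>is_complete</code> are made up:</p>
<pre>
# Hypothetical sketch: poll orchestrator completions with a deadline instead of
# spinning forever, so a broken backend can't wedge every subsequent mgr command.
import time

def wait_for_completions(completions, is_complete, timeout=300, interval=1.0):
    deadline = time.monotonic() + timeout
    while not all(is_complete(c) for c in completions):
        if time.monotonic() > deadline:
            raise TimeoutError("orchestrator backend did not complete in time")
        time.sleep(interval)
</pre>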
Ceph - Bug #35906 (Resolved): ceph-disk: is_mounted() returns None for mounted OSDs with Python 3
https://tracker.ceph.com/issues/35906 | 2018-09-10T11:10:49Z | Tim Serong <tserong@suse.com>

<p><code>ceph-disk list --format=json</code> on Python 3 gives null for the mount member, even for mounted OSDs, e.g.:</p>
<pre>
# ceph-disk list --format=json|json_pp
...
{
"path" : "/dev/vdg",
"partitions" : [
{
"whoami" : "23",
"is_partition" : true,
"path" : "/dev/vdg1",
"ceph_fsid" : "00296336-7bf2-43f1-a48c-24c7212bf478",
"dmcrypt" : {},
"uuid" : "b447f027-f116-47d0-9cd1-ca2348e8e3db",
"block_uuid" : "dfaf6613-f958-497a-9dfb-ad343e897639",
"block_dev" : "/dev/vdg2",
"type" : "data",
*"mount" : null,*
"ptype" : "4fbd7e29-9d25-41b8-afd0-062c0ceff05d",
"magic" : "ceph osd volume v026",
"cluster" : "ceph",
"state" : "prepared",
"fs_type" : "xfs"
},
{
"type" : "block",
"is_partition" : true,
"path" : "/dev/vdg2",
"ptype" : "cafecafe-9b03-4f30-b4c6-b4b80ceff106",
"block_for" : "/dev/vdg1",
"dmcrypt" : {},
"uuid" : "dfaf6613-f958-497a-9dfb-ad343e897639"
}
]
}
...
</pre>
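<p>For comparison, here is a minimal sketch of a mount lookup that behaves identically on Python 2 and 3 (hypothetical; not the actual ceph-disk code, and it assumes the usual Python 3 pitfall of comparing bytes read from /proc/mounts against str device paths):</p>
<pre>
# Hypothetical sketch: look up a device's mount point by reading /proc/mounts
# in text mode, so comparisons are str vs str on both Python 2 and Python 3.
def get_mount_point(dev_path):
    """Return the mount point of dev_path, or None if it isn't mounted."""
    with open('/proc/mounts') as mounts:  # text mode on both Python versions
        for line in mounts:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == dev_path:
                return fields[1]
    return None

print(get_mount_point('/dev/vdg1'))  # e.g. '/var/lib/ceph/osd/ceph-23' or None
</pre>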
Ceph-deploy - Bug #18164 (Resolved): platform.linux_distribution() fails on distros with /etc/os-...
https://tracker.ceph.com/issues/18164 | 2016-12-07T04:14:41Z | Tim Serong <tserong@suse.com>

<p>As with bugs <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: platform.linux_distribution() is deprecated; stop using it (Resolved)" href="https://tracker.ceph.com/issues/18141">#18141</a> and <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: platform.linux_distribution() is deprecated; stop using it (Resolved)" href="https://tracker.ceph.com/issues/18163">#18163</a>, on the latest SUSE systems, platform.linux_distribution() returns ('','',''), causing ceph-deploy to fail with "UnsupportedPlatform: Platform is not supported:"</p>
<p>Given platform.linux_distribution() is deprecated and doesn't understand /etc/os-release, we should stop using it.</p> Ceph - Bug #18163 (Resolved): platform.linux_distribution() is deprecated; stop using ithttps://tracker.ceph.com/issues/181632016-12-07T04:10:56ZTim Serongtserong@suse.com
<p>platform.linux_distribution() is deprecated, so we should stop using it. Notably it uses /etc/SuSE-release on SUSE systems, and the latest SUSE versions don't ship this file; instead they ship /etc/os-release, which platform.linux_distribution() doesn't know about, so it returns ('','','').</p>
<p>AFAICT, platform.linux_distribution() is currently used by ceph-detect-init, which in turn is used by ceph-disk. If ceph-detect-init can't determine the distro because it sees ('','',''), this results in ceph-disk always tagging the init system as sysvinit.</p>
<p>There are also platform.linux_distribution() invocations in qa/workunits/ceph-disk/ceph-disk-no-lockbox and src/ceph-disk/ceph_disk/main.py, but they look like dead code to me.</p>
<p>See also bug <a class="issue tracker-1 status-3 priority-4 priority-default closed" title="Bug: platform.linux_distribution() is deprecated; stop using it (Resolved)" href="https://tracker.ceph.com/issues/18141">#18141</a></p>
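<p>For reference, a minimal sketch of distro detection that reads /etc/os-release directly (hypothetical; not the ceph-detect-init implementation) could look like this:</p>
<pre>
# Hypothetical sketch: parse /etc/os-release instead of relying on the
# deprecated platform.linux_distribution().
def os_release(path='/etc/os-release'):
    """Return /etc/os-release as a dict, e.g. {'ID': 'opensuse', 'VERSION_ID': '42.1'}."""
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            info[key] = value.strip('"')
    return info

info = os_release()
print(info.get('ID'), info.get('VERSION_ID'))  # e.g. opensuse 42.1
</pre>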
Ceph - Bug #14864 (Resolved): ceph-detect-init requires python-setuptools at runtime
https://tracker.ceph.com/issues/14864 | 2016-02-25T14:58:49Z | Tim Serong <tserong@suse.com>

<p>Testing a reasonably recent ceph-10.0.2 on openSUSE Leap 42.1, my OSDs weren't mounting. I tracked this back to /usr/lib/systemd/system/ceph-disk@.service, which invokes <code>flock /var/lock/ceph-disk /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync %f</code>. This in turn results in:</p>
<pre>
ceph-disk: main_trigger: Namespace(dev='/dev/sdb1', func=<function main_trigger at 0x7fa6ebf6b050>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
ceph-disk: Running command: /sbin/init --version
/sbin/init: unrecognized option '--version'
ceph-disk: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
ceph-disk: Running command: /usr/sbin/sgdisk -i 1 /dev/sdb
ceph-disk: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
ceph-disk: Running command: /usr/sbin/sgdisk -i 1 /dev/sdb
ceph-disk: trigger /dev/sdb1 parttype 4fbd7e29-9d25-41b8-afd0-062c0ceff05d uuid 93b72ed5-7d84-4b0b-a227-330fcd22513e
ceph-disk: Running command: /usr/sbin/ceph-disk activate /dev/sdb1
Traceback (most recent call last):
File "/usr/bin/ceph-detect-init", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
ERROR:ceph-disk:Failed to activate
Traceback (most recent call last):
File "/usr/sbin/ceph-disk", line 4036, in <module>
main(sys.argv[1:])
File "/usr/sbin/ceph-disk", line 3992, in main
main_catch(args.func, args)
File "/usr/sbin/ceph-disk", line 4014, in main_catch
func(args)
File "/usr/sbin/ceph-disk", line 2530, in main_activate
reactivate=args.reactivate,
File "/usr/sbin/ceph-disk", line 2296, in mount_activate
(osd_id, cluster) = activate(path, activate_key_template, init)
File "/usr/sbin/ceph-disk", line 2477, in activate
init = init_get()
File "/usr/sbin/ceph-disk", line 799, in init_get
'--default', 'sysvinit',
File "/usr/sbin/ceph-disk", line 902, in _check_output
raise error
subprocess.CalledProcessError: Command '/usr/bin/ceph-detect-init' returned non-zero exit status 1
</pre>
<p>The important part is:</p>
<pre>
Traceback (most recent call last):
File "/usr/bin/ceph-detect-init", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
</pre>
<p>This is fixable by installing python-setuptools, suggesting that package needs to be added to the RPM Requires and, I assume, the Debian Depends.</p>
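<p>For context, /usr/bin/ceph-detect-init is a setuptools-generated console_scripts wrapper (hence the <code>from pkg_resources import load_entry_point</code> at line 5 in the traceback), which is why pkg_resources must be importable at runtime even though ceph-detect-init itself doesn't use it. A typical wrapper looks roughly like the sketch below; the exact layout and any version pin vary with the setuptools version used at build time:</p>
<pre>
#!/usr/bin/python
# Sketch of a setuptools console_scripts wrapper; the real file is generated
# at package build time, so details differ between setuptools versions.
import sys
from pkg_resources import load_entry_point  # fails without python-setuptools

if __name__ == '__main__':
    sys.exit(
        load_entry_point('ceph-detect-init', 'console_scripts', 'ceph-detect-init')()
    )
</pre>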