Ceph : Issues
https://tracker.ceph.com/
2017-06-27T11:13:38Z
Ceph
Redmine
Ceph - Backport #20428 (Resolved): jewel: osd: client IOPS drops to zero frequently
https://tracker.ceph.com/issues/20428
2017-06-27T11:13:38Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/15947">https://github.com/ceph/ceph/pull/15947</a></p>
Ceph - Bug #20427 (Resolved): osd: client IOPS drops to zero frequently
https://tracker.ceph.com/issues/20427
2017-06-27T10:17:03Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p>[From <a class="external" href="http://www.spinics.net/lists/ceph-devel/msg37163.html">http://www.spinics.net/lists/ceph-devel/msg37163.html</a>]</p>
At Alibaba we experienced unstable performance with Jewel on one production cluster, and we can now easily reproduce it on several small test clusters. One test cluster has 30 SSDs and another has 120 SSDs; we use filestore + async messenger on the backend and fio + librbd to test them. When this issue happens, client fio IOPS drops to zero (or close to zero) frequently during fio runs, and each drop is very short, about one second or so.

For the 30-SSD test cluster we run 135 fio clients, each writing to its own rbd image, with a single job and a 3 MB/s rate limit per client. On a freshly created cluster, client IOPS and each OSD server's throughput are very stable for the first 15 minutes or so. After about 15 minutes and 360 GB of data written, the cluster enters an unstable state: client fio IOPS drops to zero (or close to it) frequently, and each OSD server's throughput becomes very spiky (from 500 MB/s down to less than 1 MB/s). We let all fio clients keep writing for about 16 hours and the cluster stayed in this oscillating state.

This is very easy to reproduce. I don't think it is caused by filestore folder splitting, since the splits all finished during the first 15 minutes, and OSD server memory/CPU/disk were far from saturated. One thing we noticed from the perf counters is that op_latency increased from 0.7 ms to more than 20 ms after entering this unstable state. Is this normal Jewel/filestore behavior? Does anyone know what causes it?
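For reference, a minimal sketch of what one of those 135 rate-limited fio clients could look like with the librbd ioengine; the image/pool names, block size, and queue depth here are illustrative assumptions, not taken from the report:

<pre>
# Sketch of one of the 135 rate-limited writers (image/pool/bs/iodepth are illustrative)
fio --name=rbd-writer-001 \
    --ioengine=rbd --clientname=admin --pool=rbd --rbdname=test-img-001 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --rate=3m --time_based --runtime=57600    # ~16 hours, as in the long run above
</pre>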
rbd - Backport #19957 (Resolved): jewel: rbd: Lock release requests not honored after watch is re...
https://tracker.ceph.com/issues/19957
2017-05-17T09:14:43Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/17385">https://github.com/ceph/ceph/pull/17385</a></p>
Ceph - Backport #19926 (Resolved): jewel: mon crash on shutdown, lease_ack_timeout event
https://tracker.ceph.com/issues/19926
2017-05-15T08:16:42Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/15083">https://github.com/ceph/ceph/pull/15083</a></p>
Ceph - Backport #19915 (Resolved): jewel: osd/OSD.h: 706: FAILED assert(removed) in PG::unreg_nex...
https://tracker.ceph.com/issues/19915
2017-05-12T12:42:17Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/15065">https://github.com/ceph/ceph/pull/15065</a></p>
Ceph - Backport #19910 (Resolved): jewel: random OSDs fail to start after reboot with systemd
https://tracker.ceph.com/issues/19910
2017-05-11T13:49:37Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/15051">https://github.com/ceph/ceph/pull/15051</a></p>
Ceph - Backport #19323 (Rejected): hammer: segfault in FileStore::fiemap()
https://tracker.ceph.com/issues/19323
2017-03-21T12:18:47Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/14069">https://github.com/ceph/ceph/pull/14069</a></p>
Ceph - Backport #19265 (Resolved): jewel: An OSD was seen getting ENOSPC even with osd_failsafe_f...
https://tracker.ceph.com/issues/19265
2017-03-13T09:07:53Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/15050">https://github.com/ceph/ceph/pull/15050</a></p>
Ceph - Bug #19116 (Duplicate): Sporadic segfaults in lockdep_locked on startup
https://tracker.ceph.com/issues/19116
2017-03-01T10:16:04Z
Alexey Sheplyakov
asheplyakov@mirantis.com
/home/jenkins-build/build/workspace/ceph-pull-requests/src/ceph_objectstore_tool_dir/out/client.admin.18512.asok is too long! The maximum length on this system is 107
<pre>
*** Caught signal (Segmentation fault) ***
 in thread 2aecb7e21700 thread_name:service

 ceph version 10.2.5-6118-g18ba7c9 (18ba7c9ef282ce0d71a34d287fb0654f103c3fd5)
 1: (()+0x10e6c2) [0x2aecab4676c2]
 2: (()+0x10330) [0x2aecb4eb2330]
 3: (std::__detail::_Map_base<unsigned long, std::pair<unsigned long const, std::map<int, ceph::BackTrace*, std::less<int>, std::allocator<std::pair<int const, ceph::BackTrace*> > > >, std::allocator<std::pair<unsigned long const, std::map<int, ceph::BackTrace*, std::less<int>, std::allocator<std::pair<int const, ceph::BackTrace*> > > > >, std::__detail::_Select1st, std::equal_to<unsigned long>, std::hash<unsigned long>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<false, false, true>, true>::operator[](unsigned long const&)+0x2c) [0x2aecab528a2c]
 4: (lockdep_locked(char const*, int, bool)+0xa8) [0x2aecab5273d8]
 5: (Mutex::Lock(bool)+0x9c) [0x2aecab48d6bc]
 6: (CephContextServiceThread::entry()+0x169) [0x2aecab4c92b9]
 7: (()+0x8184) [0x2aecb4eaa184]
 8: (clone()+0x6d) [0x2aecb657937d]
2017-03-01 07:39:57.382206 2aecb7e21700 -1 *** Caught signal (Segmentation fault) ***
 in thread 2aecb7e21700 thread_name:service

 ceph version 10.2.5-6118-g18ba7c9 (18ba7c9ef282ce0d71a34d287fb0654f103c3fd5)
 1: (()+0x10e6c2) [0x2aecab4676c2]
 2: (()+0x10330) [0x2aecb4eb2330]
 3: (std::__detail::_Map_base<unsigned long, std::pair<unsigned long const, std::map<int, ceph::BackTrace*, std::less<int>, std::allocator<std::pair<int const, ceph::BackTrace*> > > >, std::allocator<std::pair<unsigned long const, std::map<int, ceph::BackTrace*, std::less<int>, std::allocator<std::pair<int const, ceph::BackTrace*> > > > >, std::__detail::_Select1st, std::equal_to<unsigned long>, std::hash<unsigned long>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<false, false, true>, true>::operator[](unsigned long const&)+0x2c) [0x2aecab528a2c]
 4: (lockdep_locked(char const*, int, bool)+0xa8) [0x2aecab5273d8]
 5: (Mutex::Lock(bool)+0x9c) [0x2aecab48d6bc]
 6: (CephContextServiceThread::entry()+0x169) [0x2aecab4c92b9]
 7: (()+0x8184) [0x2aecb4eaa184]
 8: (clone()+0x6d) [0x2aecb657937d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
   -21> 2017-03-01 07:39:57.332130 2aecab663fc0  5 asok(0x2aecab668580) register_command perfcounters_dump hook 0x2aecab66b6d0
   -20> 2017-03-01 07:39:57.332148 2aecab663fc0  5 asok(0x2aecab668580) register_command 1 hook 0x2aecab66b6d0
   -19> 2017-03-01 07:39:57.332160 2aecab663fc0  5 asok(0x2aecab668580) register_command perf dump hook 0x2aecab66b6d0
   -18> 2017-03-01 07:39:57.332162 2aecab663fc0  5 asok(0x2aecab668580) register_command perfcounters_schema hook 0x2aecab66b6d0
   -17> 2017-03-01 07:39:57.332164 2aecab663fc0  5 asok(0x2aecab668580) register_command 2 hook 0x2aecab66b6d0
   -16> 2017-03-01 07:39:57.332166 2aecab663fc0  5 asok(0x2aecab668580) register_command perf schema hook 0x2aecab66b6d0
   -15> 2017-03-01 07:39:57.332170 2aecab663fc0  5 asok(0x2aecab668580) register_command perf reset hook 0x2aecab66b6d0
   -14> 2017-03-01 07:39:57.332172 2aecab663fc0  5 asok(0x2aecab668580) register_command config show hook 0x2aecab66b6d0
   -13> 2017-03-01 07:39:57.332174 2aecab663fc0  5 asok(0x2aecab668580) register_command config set hook 0x2aecab66b6d0
   -12> 2017-03-01 07:39:57.332175 2aecab663fc0  5 asok(0x2aecab668580) register_command config get hook 0x2aecab66b6d0
   -11> 2017-03-01 07:39:57.332177 2aecab663fc0  5 asok(0x2aecab668580) register_command config diff hook 0x2aecab66b6d0
   -10> 2017-03-01 07:39:57.332178 2aecab663fc0  5 asok(0x2aecab668580) register_command log flush hook 0x2aecab66b6d0
    -9> 2017-03-01 07:39:57.332182 2aecab663fc0  5 asok(0x2aecab668580) register_command log dump hook 0x2aecab66b6d0
    -8> 2017-03-01 07:39:57.332184 2aecab663fc0  5 asok(0x2aecab668580) register_command log reopen hook 0x2aecab66b6d0
    -7> 2017-03-01 07:39:57.336645 2aecab663fc0  0 lockdep start
    -6> 2017-03-01 07:39:57.336847 2aecab663fc0 -1 WARNING: the following dangerous and experimental features are enabled: *
    -5> 2017-03-01 07:39:57.336890 2aecab663fc0 -1 WARNING: the following dangerous and experimental features are enabled: *
    -4> 2017-03-01 07:39:57.338387 2aecab663fc0 -1 WARNING: the following dangerous and experimental features are enabled: *
    -3> 2017-03-01 07:39:57.338392 2aecab663fc0  5 asok(0x2aecab668580) init /home/jenkins-build/build/workspace/ceph-pull-requests/src/ceph_objectstore_tool_dir/out/client.admin.18512.asok
    -2> 2017-03-01 07:39:57.338403 2aecab663fc0  5 asok(0x2aecab668580) bind_and_listen /home/jenkins-build/build/workspace/ceph-pull-requests/src/ceph_objectstore_tool_dir/out/client.admin.18512.asok
    -1> 2017-03-01 07:39:57.338406 2aecab663fc0 -1 asok(0x2aecab668580) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: The UNIX domain socket path /home/jenkins-build/build/workspace/ceph-pull-requests/src/ceph_objectstore_tool_dir/out/client.admin.18512.asok is too long! The maximum length on this system is 107
     0> 2017-03-01 07:39:57.382206 2aecb7e21700 -1 *** Caught signal (Segmentation fault) ***
 in thread 2aecb7e21700 thread_name:service

 ceph version 10.2.5-6118-g18ba7c9 (18ba7c9ef282ce0d71a34d287fb0654f103c3fd5)
 1: (()+0x10e6c2) [0x2aecab4676c2]
 2: (()+0x10330) [0x2aecb4eb2330]
 3: (std::__detail::_Map_base<unsigned long, std::pair<unsigned long const, std::map<int, ceph::BackTrace*, std::less<int>, std::allocator<std::pair<int const, ceph::BackTrace*> > > >, std::allocator<std::pair<unsigned long const, std::map<int, ceph::BackTrace*, std::less<int>, std::allocator<std::pair<int const, ceph::BackTrace*> > > > >, std::__detail::_Select1st, std::equal_to<unsigned long>, std::hash<unsigned long>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<false, false, true>, true>::operator[](unsigned long const&)+0x2c) [0x2aecab528a2c]
 4: (lockdep_locked(char const*, int, bool)+0xa8) [0x2aecab5273d8]
 5: (Mutex::Lock(bool)+0x9c) [0x2aecab48d6bc]
 6: (CephContextServiceThread::entry()+0x169) [0x2aecab4c92b9]
 7: (()+0x8184) [0x2aecb4eaa184]
 8: (clone()+0x6d) [0x2aecb657937d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   0/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 newstore
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   1/ 5 kinetic
   1/ 5 fuse
   2/-2 (syslog threshold)
  99/99 (stderr threshold)
  max_recent       500
  max_new         1000
  log_file /home/jenkins-build/build/workspace/ceph-pull-requests/src/ceph_objectstore_tool_dir/out/client.admin.18512.log
--- end dump of recent events ---
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/19344/consoleText">https://jenkins.ceph.com/job/ceph-pull-requests/19344/consoleText</a></p>
Ceph - Bug #18740 (Resolved): random OSDs fail to start after reboot with systemd
https://tracker.ceph.com/issues/18740
2017-01-31T08:02:04Z
Alexey Sheplyakov
asheplyakov@mirantis.com
After a reboot, random OSDs (2 to 4 out of 18) fail to start.
The problematic OSDs can be started manually (with ceph-disk activate-lockbox /dev/sdX3) just fine.

Environment: Ubuntu 16.04
Hardware: HP ProLiant SL4540 Gen8, 18 HDDs, 4 SSDs

Note: applying https://github.com/ceph/ceph/pull/12210/commits/0ab5b7a711ad7037ff0eb7e8281b293ddfc28a2a does NOT help.
<pre>
sudo journalctl | grep sdm3
Jan 30 21:18:15 ceph-001 systemd[1]: Starting Ceph disk activation: /dev/sdm3...
Jan 30 21:18:16 ceph-001 sh[4071]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdm3', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7f6b776dd668>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True, sysconfdir='/etc/ceph', verbose=True)
Jan 30 21:18:16 ceph-001 sh[4071]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdm3
Jan 30 21:18:16 ceph-001 sh[4071]: command: Running command: /sbin/blkid -o udev -p /dev/sdm3
Jan 30 21:18:16 ceph-001 sh[4071]: main_trigger: trigger /dev/sdm3 parttype fb3aabf9-d25f-47cc-bf5e-721d1816496b uuid 00000000-0000-0000-0000-000000000000
Jan 30 21:18:16 ceph-001 sh[4071]: command: Running command: /usr/sbin/ceph-disk --verbose activate-lockbox /dev/sdm3
Jan 30 21:20:15 ceph-001 systemd[1]: ceph-disk@dev-sdm3.service: Main process exited, code=exited, status=124/n/a
Jan 30 21:20:15 ceph-001 systemd[1]: Failed to start Ceph disk activation: /dev/sdm3.
Jan 30 21:20:15 ceph-001 systemd[1]: ceph-disk@dev-sdm3.service: Unit entered failed state.
Jan 30 21:20:15 ceph-001 systemd[1]: ceph-disk@dev-sdm3.service: Failed with result 'exit-code'.
</pre>
Increasing the timeout in ceph-disk@.service to 900 seconds fixes the problem.
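A sketch of how that override could be applied as a systemd drop-in, assuming the stock unit wraps ceph-disk trigger in timeout(1) (which the status=124 exit above suggests); the ExecStart line here is illustrative and should be copied from "systemctl cat ceph-disk@.service", changing only the timeout value:

<pre>
# Drop-in raising the activation timeout to 900 s (values and ExecStart illustrative)
sudo mkdir -p /etc/systemd/system/ceph-disk@.service.d
sudo tee /etc/systemd/system/ceph-disk@.service.d/timeout.conf > /dev/null <<'EOF'
[Service]
ExecStart=
ExecStart=/bin/sh -c 'timeout 900 flock /var/lock/ceph-disk /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync %f'
EOF
sudo systemctl daemon-reload
</pre>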
Ceph - Backport #18581 (Rejected): jewel: osd: ENOENT on clone
https://tracker.ceph.com/issues/18581
2017-01-18T11:10:46Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/12978">https://github.com/ceph/ceph/pull/12978</a></p>
Ceph - Backport #18132 (Resolved): hammer: ReplicatedBackend::build_push_op: add a second config ...
https://tracker.ceph.com/issues/18132
2016-12-03T07:51:56Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/12417">https://github.com/ceph/ceph/pull/12417</a></p>
Ceph - Bug #17753 (Resolved): ceph-create-keys loops forever
https://tracker.ceph.com/issues/17753
2016-10-31T16:17:47Z
Alexey Sheplyakov
asheplyakov@mirantis.com
ceph-create-keys got stuck while deploying a recent 10.2.4 build (5efb6b1c2c9eb68f479446e7b42cd8945a18dd53).
The syslog is full of the following messages:
<pre>
Oct 31 16:15:28 saceph-mon1 ceph-create-keys[4781]: INFO:ceph-create-keys:Cannot get or create admin key
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: INFO:ceph-create-keys:Talking to monitor...
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: no valid command found; 10 closest matches:
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth rm <entity>
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth del <entity>
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth export {<entity>}
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth get-or-create <entity> {<caps> [<caps>...]}
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth caps <entity> <caps> [<caps>...]
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth get <entity>
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth get-key <entity>
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth print-key <entity>
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth print_key <entity>
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: auth list
Oct 31 16:15:29 saceph-mon1 ceph-create-keys[4781]: Error EINVAL: invalid command
</pre>
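For context, the loop above keeps retrying roughly the following monitor call; running something like it by hand (a sketch; the keyring path assumes the default monitor data directory) makes it easy to see whether the monitor itself rejects the command:

<pre>
# Roughly the call ceph-create-keys keeps retrying (sketch; mon keyring path
# assumes the default /var/lib/ceph/mon/ceph-$(hostname -s) layout).
ceph --cluster=ceph --name=mon. \
     --keyring=/var/lib/ceph/mon/ceph-$(hostname -s)/keyring \
     auth get-or-create client.admin \
     mon 'allow *' osd 'allow *' mds 'allow *'
</pre>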
sepia - Bug #17274 (Resolved): Access to sepia lab: Alexey Sheplyakov
https://tracker.ceph.com/issues/17274
2016-09-14T07:52:26Z
Alexey Sheplyakov
asheplyakov@mirantis.com
1. I don't intend to run jobs manually
2. Username: asheplyakov
3. ssh key is attached (id_rsa.pub)
4. Hashed VPN password:
asheplyakov@asheplyakov.srt.mirantis.net wFW0ZgT4cNhKRAGXiUtevQ 1b11f0702b2db42a42aae6579737ece2caad3b80a8186b971686575cb76b3051
Ceph - Backport #14231 (Rejected): hammer: ceph-disk fails to work with udev generated symlinks
https://tracker.ceph.com/issues/14231
2016-01-05T09:21:33Z
Alexey Sheplyakov
asheplyakov@mirantis.com
<a name="description"></a>
<h3 >description<a href="#description" class="wiki-anchor">¶</a></h3>
<p>~# ceph-deploy osd prepare node-9:/dev/sdc3:/dev/disk/by-id/ata-INTEL_SSDSC2BW240A4_PHDA410301812403GN-part3</p>
<p>fails here:</p>
<p>[node-9][WARNIN] DEBUG:ceph-disk:Journal /dev/disk/by-id/ata-INTEL_SSDSC2BW240A4_PHDA410301812403GN-part3 was previously prepared with ceph-disk. Reusing it.<br />[node-9][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk -i 2 /dev/disk/by-id/ata-INTEL_SSDSC<br />[node-9][WARNIN] Problem opening /dev/disk/by-id/ata-INTEL_SSDSC for reading! Error is 2.</p>
<a name="workaround"></a>
<h3 >workaround<a href="#workaround" class="wiki-anchor">¶</a></h3>
<p>~# ceph-deploy osd prepare node-9:/dev/sdc3:$(ssh node-9 realpath /dev/disk/by-id/ata-INTEL_SSDSC2BW240A4_PHDA410301812403GN-part3)</p>