Ceph : Issues - https://tracker.ceph.com/
rgw - Bug #63006 (Closed): Unwatch crash on error at RGW startup
https://tracker.ceph.com/issues/63006 (2023-09-27, Adam Emerson <aemerson@redhat.com>)

During radosgw initialization, if an exception in init_watch causes the watcher registration to fail, a crash occurs when finalize_watch later runs, because it tries to unregister a watch that was never registered.
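A minimal sketch of the kind of guard that avoids this class of crash; the names here (WatcherGuard, init_watch, finalize_watch) are placeholders for illustration, not the actual RGW watcher code:

<pre>
// Hedged sketch with hypothetical names -- not the actual RGW watcher code.
// The cleanup path only unregisters the watch if registration succeeded.
#include <cstdint>
#include <exception>

class WatcherGuard {
  uint64_t handle = 0;
  bool registered = false;

public:
  void init_watch() {
    // ... perform the registration; assume it may throw partway through ...
    handle = 42;        // placeholder for the handle returned on success
    registered = true;  // mark as registered only after success
  }

  void finalize_watch() {
    if (!registered) {
      return;           // the missing guard: nothing to unwatch if init failed
    }
    // ... unwatch(handle) ...
    registered = false;
  }
};

int main() {
  WatcherGuard w;
  try {
    w.init_watch();
  } catch (const std::exception &) {
    // registration failed before completing
  }
  w.finalize_watch();   // safe in both the success and failure cases
}
</pre>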
Ceph - Bug #62804 (New): Jaegertracing compile failure
https://tracker.ceph.com/issues/62804 (2023-09-11, Adam Emerson <aemerson@redhat.com>)

Compilation fails with:
<pre>
In file included from /home/aemerson/work/ceph/main/src/jaegertracing/opentelemetry-cpp/exporters/jaeger/src/TUDPTransport.h:16,
                 from /home/aemerson/work/ceph/main/src/jaegertracing/opentelemetry-cpp/exporters/jaeger/src/TUDPTransport.cc:6:
/home/aemerson/work/ceph/main/src/jaegertracing/opentelemetry-cpp/exporters/jaeger/src/TUDPTransport.cc: In member function ‘virtual void opentelemetry::v1::exporter::jaeger::TUDPTransport::close()’:
/home/aemerson/work/ceph/main/src/jaegertracing/opentelemetry-cpp/exporters/jaeger/src/TUDPTransport.cc:71:7: error: ‘::close’ has not been declared; did you mean ‘pclose’?
   71 |     ::THRIFT_CLOSESOCKET(socket_);
      |       ^~~~~~~~~~~~~~~~~~
</pre>
It can be fixed with:
<pre>
diff --git a/exporters/jaeger/src/TUDPTransport.cc b/exporters/jaeger/src/TUDPTransport.cc
index e4111273..0ea86288 100644
--- a/exporters/jaeger/src/TUDPTransport.cc
+++ b/exporters/jaeger/src/TUDPTransport.cc
@@ -3,6 +3,8 @@
 #include <sstream>  // std::stringstream
+#include <unistd.h>
+
 #include "TUDPTransport.h"
 #include "opentelemetry/sdk_config.h"
</pre>
The diff is applied to jaegertracing/opentelemetry-cpp.

Ceph - Backport #62103 (Resolved): pacific: Quincy and pacific fail to compile after Boost upgrade of main
https://tracker.ceph.com/issues/62103 (2023-07-20, Adam Emerson <aemerson@redhat.com>)
<p><a class="external" href="https://github.com/ceph/ceph/pull/52790">https://github.com/ceph/ceph/pull/52790</a></p> Ceph - Backport #62102 (Resolved): quincy: Quincy and pacific fail to compile after Boost upgrade...https://tracker.ceph.com/issues/621022023-07-20T16:24:59ZAdam Emersonaemerson@redhat.com
<p><a class="external" href="https://github.com/ceph/ceph/pull/52564">https://github.com/ceph/ceph/pull/52564</a></p> Ceph - Backport #62101 (Resolved): reef: Quincy and pacific fail to compile after Boost upgrade o...https://tracker.ceph.com/issues/621012023-07-20T16:23:29ZAdam Emersonaemerson@redhat.com
After upgrading Boost on main, the Jenkins `make check` job fails on both quincy and pacific. This is caused by two issues:
1. install-deps on Quincy and Pacific was never updated after the switch to Jammy, and it wouldn't surprise me if it had been building against the wrong packages for a while now.
2. Boost.Phoenix 1.81 introduces an ODR violation. A workaround is included in main, but not in any earlier release. It is probably worth backporting anyway, in case someone tries to compile against a newer Boost.

Ceph - Bug #62097 (Resolved): Quincy and pacific fail to compile after Boost upgrade of main
https://tracker.ceph.com/issues/62097 (2023-07-20, Adam Emerson <aemerson@redhat.com>)
After upgrading Boost on main, the Jenkins `make check` job fails on both quincy and pacific. This is caused by two issues:
1. install-deps on Quincy and Pacific was never updated after the switch to Jammy, and it wouldn't surprise me if it had been building against the wrong packages for a while now.
2. Boost.Phoenix 1.81 introduces an ODR violation. A workaround is included in main, but not in any earlier release. It is probably worth backporting anyway, in case someone tries to compile against a newer Boost.
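For context, here is a generic two-file illustration of what an ODR (One Definition Rule) violation looks like; this is hypothetical code, not the actual Boost.Phoenix headers or the Ceph workaround:

<pre>
// Generic ODR-violation illustration (hypothetical code, not Boost.Phoenix).
// Compile each part below as its own translation unit, e.g.:
//   g++ -c a.cpp b.cpp && g++ a.o b.o -o odr && ./odr
// Both files define the same inline function with different bodies; the
// program links cleanly, but the behaviour is undefined: typically one
// definition silently wins for both callers.

// ---------------- a.cpp ----------------
#include <cstdio>
inline const char *flavour() { return "headers from Boost A"; }
void report_a() { std::printf("a.cpp sees: %s\n", flavour()); }

// ---------------- b.cpp ----------------
#include <cstdio>
inline const char *flavour() { return "headers from Boost B"; }  // same name, different body
void report_b() { std::printf("b.cpp sees: %s\n", flavour()); }
void report_a();
int main() {
  report_a();
  report_b();
}
</pre>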
CI - Support #58455 (Closed): Please compile Boost 1.81 and make available on Shaman
https://tracker.ceph.com/issues/58455 (2023-01-13, Adam Emerson <aemerson@redhat.com>)

Hello,
Boost 1.79 has a bug where it improperly detects C++20 coroutine support under Clang, which breaks testing. This is fixed in 1.81. Please compile Boost 1.81 (https://boostorg.jfrog.io/artifactory/main/release/1.81/source/) into an Ubuntu package and make it available at https://shaman.ceph.com/api/repos/${project}/master/${sha1}/ubuntu/${codename}/repo for use by install-deps.sh.
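As a rough illustration of the kind of check involved (this is not Boost's actual detection logic), C++20 coroutine support is normally announced through the __cpp_impl_coroutine feature-test macro together with the <coroutine> header:

<pre>
// Hedged sketch: a feature-test-macro check for C++20 coroutine support.
// This only illustrates the general mechanism, not the logic Boost 1.79
// uses (or mis-uses) under Clang.
#include <version>

#if defined(__cpp_impl_coroutine) && __cpp_impl_coroutine >= 201902L
#include <coroutine>
constexpr bool have_coroutines = true;   // compiler advertises C++20 coroutines
#else
constexpr bool have_coroutines = false;  // no coroutine support advertised
#endif

int main() { return have_coroutines ? 0 : 1; }
</pre>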
I don't believe I have access to do this, but if I do, just tell me how (push something to something?) and I'll be happy to do it myself.
Thank you very much.

rgw - Backport #58403 (Resolved): multisite replication issue on Quincy
https://tracker.ceph.com/issues/58403 (2023-01-09, Adam Emerson <aemerson@redhat.com>)
We have encountered replication issues in our multisite setup with Quincy v17.2.3.
Our Ceph clusters are brand new; we tore down our clusters and redeployed fresh Quincy ones before running our test. In our environment we have 3 RGW nodes per site; each node runs 2 instances for client traffic and 1 instance dedicated to replication.
Our test was done using cosbench with the following settings:
- 10 rgw users
- 3000 buckets per user
- write only
- 6 different object sizes with the following distribution:
  1k: 17%
  2k: 48%
  3k: 14%
  4k: 5%
  1M: 13%
  8M: 3%
- trying to write 10 million objects per object-size bucket per user, to avoid writing to the same objects
- no multipart uploads involved

The test ran for about 2 hours, roughly from 22:50 on 9/14 to 01:00 on 9/15. After that, the replication tail continued for roughly another 4 hours, until 04:50 on 9/15, with gradually decreasing replication traffic. Then replication stopped, and nothing has been going on in the clusters since.
While verifying the replication status, we found many issues.

1. The sync status shows the clusters are not fully synced, yet all replication traffic has stopped and nothing is going on in the clusters.
Secondary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
metadata sync syncing
full sync: 0/64 shards
incremental sync: 64/64 shards
metadata is caught up with master
data sync source: b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
data is behind on 2 shards
behind shards: [40,74]
</pre>
Why did replication stop even though the clusters are still not in sync?
2. We can see that some buckets are not fully synced, and we were able to identify some missing objects in our secondary zone.
Here is an example bucket; this is its sync status in the secondary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
bucket :mixed-5wrks-dev-4k-thisisbcstestload004178[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89152.78])
source zone b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
source bucket :mixed-5wrks-dev-4k-thisisbcstestload004178[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89152.78])
full sync: 0/101 shards
incremental sync: 100/101 shards
bucket is behind on 1 shards
behind shards: [78]
</pre>
3. As seen from the sync status above, the behind shard for the example bucket is not in the list of behind shards in the overall zone sync status. Why is that?
4. The data sync status for these behind shards doesn't list any "pending_buckets" or "recovering_buckets".
An example:
<pre>
{
"shard_id": 74,
"marker": {
"status": "incremental-sync",
"marker": "00000000000000000003:00000000000003381964",
"next_step_marker": "",
"total_entries": 0,
"pos": 0,
"timestamp": "2022-09-15T00:00:08.718840Z"
},
"pending_buckets": [],
"recovering_buckets": []
}
</pre>
Shouldn't the not-yet-in-sync buckets be listed here?
5. The sync status of the primary zone differs from that of the secondary zone, with different sets of behind shards. The same is true for the sync status of the same bucket. Is that legitimate? See item 1 for the secondary zone's sync status and item 6 for the primary zone's.
6. Why does the primary zone have behind shards at all, given that replication runs from the primary to the secondary?
Primary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
metadata sync no sync (zone is master)
data sync source: 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
data is behind on 30 shards
behind shards: [6,7,26,28,29,37,47,52,55,56,61,67,68,69,74,79,82,91,95,99,101,104,106,111,112,121,122,123,126,127]
</pre>
7. We have in-sync buckets that show the correct sync status in the secondary zone but still show behind shards in the primary. Why is that?
Secondary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
bucket :mixed-5wrks-dev-4k-thisisbcstestload008167[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89754.279])
source zone b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
source bucket :mixed-5wrks-dev-4k-thisisbcstestload008167[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89754.279])
full sync: 0/101 shards
incremental sync: 99/101 shards
bucket is caught up with source
</pre>
Primary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
bucket :mixed-5wrks-dev-4k-thisisbcstestload008167[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89754.279])
source zone 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
source bucket :mixed-5wrks-dev-4k-thisisbcstestload008167[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89754.279])
full sync: 0/101 shards
incremental sync: 97/101 shards
bucket is behind on 11 shards
behind shards: [9,11,14,16,22,31,44,45,67,85,90]
</pre>
Our primary goals here are:
1. to find out why replication stopped while the clusters are not in sync;
2. to understand what we need to do to resume replication, and to make sure it runs to completion without too much lag;
3. to understand whether all the sync status info is correct. It seems to us there are many conflicts, and some of it doesn't reflect the real status of the clusters at all.
We attached the following info about our system:
- ceph.conf of the RGWs
- ceph config dump
- ceph versions output
- sync status of the cluster, an in-sync bucket, a not-in-sync bucket, and some behind shards
- bucket list and bucket stats of a not-in-sync bucket, and a stat of a not-in-sync object

rgw - Backport #58402 (Resolved): multisite replication issue on Quincy
https://tracker.ceph.com/issues/58402 (2023-01-09, Adam Emerson <aemerson@redhat.com>)
We have encountered replication issues in our multisite setup with Quincy v17.2.3.
Our Ceph clusters are brand new; we tore down our clusters and redeployed fresh Quincy ones before running our test. In our environment we have 3 RGW nodes per site; each node runs 2 instances for client traffic and 1 instance dedicated to replication.
Our test was done using cosbench with the following settings:
- 10 rgw users
- 3000 buckets per user
- write only
- 6 different object sizes with the following distribution:
  1k: 17%
  2k: 48%
  3k: 14%
  4k: 5%
  1M: 13%
  8M: 3%
- trying to write 10 million objects per object-size bucket per user, to avoid writing to the same objects
- no multipart uploads involved

The test ran for about 2 hours, roughly from 22:50 on 9/14 to 01:00 on 9/15. After that, the replication tail continued for roughly another 4 hours, until 04:50 on 9/15, with gradually decreasing replication traffic. Then replication stopped, and nothing has been going on in the clusters since.
While verifying the replication status, we found many issues.

1. The sync status shows the clusters are not fully synced, yet all replication traffic has stopped and nothing is going on in the clusters.
Secondary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
metadata sync syncing
full sync: 0/64 shards
incremental sync: 64/64 shards
metadata is caught up with master
data sync source: b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
data is behind on 2 shards
behind shards: [40,74]
</pre>
Why did replication stop even though the clusters are still not in sync?
2. We can see that some buckets are not fully synced, and we were able to identify some missing objects in our secondary zone.
Here is an example bucket; this is its sync status in the secondary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
bucket :mixed-5wrks-dev-4k-thisisbcstestload004178[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89152.78])
source zone b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
source bucket :mixed-5wrks-dev-4k-thisisbcstestload004178[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89152.78])
full sync: 0/101 shards
incremental sync: 100/101 shards
bucket is behind on 1 shards
behind shards: [78]
</pre>
3. As seen from the sync status above, the behind shard for the example bucket is not in the list of behind shards in the overall zone sync status. Why is that?
4. The data sync status for these behind shards doesn't list any "pending_buckets" or "recovering_buckets".
An example:
<pre>
{
"shard_id": 74,
"marker": {
"status": "incremental-sync",
"marker": "00000000000000000003:00000000000003381964",
"next_step_marker": "",
"total_entries": 0,
"pos": 0,
"timestamp": "2022-09-15T00:00:08.718840Z"
},
"pending_buckets": [],
"recovering_buckets": []
}
</pre>
Shouldn't the not-yet-in-sync buckets be listed here?
5. The sync status of the primary zone differs from that of the secondary zone, with different sets of behind shards. The same is true for the sync status of the same bucket. Is that legitimate? See item 1 for the secondary zone's sync status and item 6 for the primary zone's.
6. Why does the primary zone have behind shards at all, given that replication runs from the primary to the secondary?
Primary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
metadata sync no sync (zone is master)
data sync source: 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
data is behind on 30 shards
behind shards: [6,7,26,28,29,37,47,52,55,56,61,67,68,69,74,79,82,91,95,99,101,104,106,111,112,121,122,123,126,127]
</pre>
7. We have in-sync buckets that show the correct sync status in the secondary zone but still show behind shards in the primary. Why is that?
Secondary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
bucket :mixed-5wrks-dev-4k-thisisbcstestload008167[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89754.279])
source zone b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
source bucket :mixed-5wrks-dev-4k-thisisbcstestload008167[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89754.279])
full sync: 0/101 shards
incremental sync: 99/101 shards
bucket is caught up with source
</pre>
Primary zone:
<pre>
realm 8a98f19f-db58-4c09-bde6-ac89560d79b0 (prod-realm)
zonegroup e041ea69-1e0b-4ad7-92f2-74b20aa3edf3 (prod-zonegroup)
zone b68a526a-ffaa-4058-9903-6e7c6eac35bb (prod-zone-pw)
bucket :mixed-5wrks-dev-4k-thisisbcstestload008167[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89754.279])
source zone 1dadcf12-f44c-4940-8acc-9623a48b829e (prod-zone-tt)
source bucket :mixed-5wrks-dev-4k-thisisbcstestload008167[b68a526a-ffaa-4058-9903-6e7c6eac35bb.89754.279])
full sync: 0/101 shards
incremental sync: 97/101 shards
bucket is behind on 11 shards
behind shards: [9,11,14,16,22,31,44,45,67,85,90]
</pre>
Our primary goals here are:
1. to find out why replication stopped while the clusters are not in sync;
2. to understand what we need to do to resume replication, and to make sure it runs to completion without too much lag;
3. to understand whether all the sync status info is correct. It seems to us there are many conflicts, and some of it doesn't reflect the real status of the clusters at all.
We attached the following info about our system:
- ceph.conf of the RGWs
- ceph config dump
- ceph versions output
- sync status of the cluster, an in-sync bucket, a not-in-sync bucket, and some behind shards
- bucket list and bucket stats of a not-in-sync bucket, and a stat of a not-in-sync object

sepia - Bug #57186 (Closed): Please fix Folio11, which won't boot
https://tracker.ceph.com/issues/57186 (2022-08-18, Adam Emerson <aemerson@redhat.com>)
I seem to have broken something while trying to upgrade to CentOS Stream 9 to get a newer build toolchain, and I don't think I have the IPMI access needed to fix a machine that won't boot.
Could someone fix this? If it's easiest, just reimaging with a current Fedora, CentOS 9 Stream, or Jammy would be ideal.
(But if that's not an option, I'll take whatever's doable.)
Thank you.

rgw - Bug #57063 (Resolved): Incorrect resumption from Full to Incremental Sync in RGWDataSyncSha...
https://tracker.ceph.com/issues/57063 (2022-08-08, Adam Emerson <aemerson@redhat.com>)
Because the `switch` statement sits outside a resume block, if full sync yields after the state has already switched to incremental, the remaining code in `full_sync()` is never executed: the next resume dispatches through the `switch` straight into incremental sync.
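A minimal hand-rolled sketch of the pitfall, using hypothetical names rather than the real RGW data-sync coroutine code: because the dispatch switch runs again on every resume, flipping the phase before yielding means the statements after the yield point in full_sync() are skipped.

<pre>
// Hedged sketch with hypothetical names -- not the real RGW data-sync
// coroutine code. The dispatch switch runs on every resume, so once `phase`
// flips to Incremental inside full_sync(), the code after the yield point
// in full_sync() is never reached.
#include <cstdio>

enum class Phase { Full, Incremental };

struct SyncShard {
  Phase phase = Phase::Full;
  int full_step = 0;

  // Returns true while more work remains (i.e. the "coroutine" yielded).
  bool resume() {
    switch (phase) {             // dispatch happens outside any resume point
    case Phase::Full:
      return full_sync();
    case Phase::Incremental:
      return incremental_sync();
    }
    return false;
  }

  bool full_sync() {
    if (full_step == 0) {
      std::puts("full sync: copying objects");
      phase = Phase::Incremental;  // switch target changes *before* yielding
      ++full_step;
      return true;                 // yield back to the caller
    }
    // BUG: never reached -- the next resume() re-enters through the switch
    // and dispatches to incremental_sync() instead of returning here.
    std::puts("full sync: writing completion marker");
    return false;
  }

  bool incremental_sync() {
    std::puts("incremental sync: tailing the log");
    return false;
  }
};

int main() {
  SyncShard shard;
  while (shard.resume()) {
  }
}
</pre>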
sepia - Bug #56665 (Resolved): Cobbler access request
https://tracker.ceph.com/issues/56665 (2022-07-21, Adam Emerson <aemerson@redhat.com>)

Please create an account on the Cobbler web UI for me.
Alternatively, please reimage Folio11; the system is old and unmaintained, and is hooked into dead CentOS repositories.

sepia - Support #46026 (Closed): Lost access credentials due to a machine going down
https://tracker.ceph.com/issues/46026 (2020-06-16, Adam Emerson <aemerson@redhat.com>)
The machine holding my Sepia lab credentials went down, and I can't go into the office to fix it because of COVID-19.
My username is aemerson.
Could you please add the SSH key:
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAv8/FJn7hz2WieoQFUGXS0bhwSwL1LCI9kkjWYNkWb8IJPrBetlkr5TsvHEKDPZGF5Sv8LsYA6njPnbOGyAHpYlnv/Gz4Hwq3hLgmXCs6JGWBQv8NS+pjwnG6Nh6JfFI0ANeg652Rk9Qt9uLYeKVKmHXXrVZAD4NMvJGfEvneQzuwwpylUmV80WAdaQw/H0EP4NScZhNhknwAs/+KPFZH+RFoUj/Cb3poIFjwyviIgyal5hjAEsoczCW3hAla/9IyC2VDS027WPiPFmBFaYK5lYYLStBQdRx+6amH76XScLFy0FEPLL5qiN0ClAptvh1oIURgipdXZwaxpf06AXG8qw==
to the authorized_keys file in my account, and send me VPN access credentials at aemerson@redhat.com?
Is there some way I can authenticate myself to you? I can send and receive email from my redhat.com address, or get on Bluejeans with someone who would recognize my face and voice.

Ceph - Bug #42596 (Resolved): Linker segfaults in Jenkins build on Intel and ARM
https://tracker.ceph.com/issues/42596 (2019-11-01, Adam Emerson <aemerson@redhat.com>)
<pre>
[ 60%] Building CXX object src/librbd/CMakeFiles/rbd_internal.dir/journal/Replay.cc.o
[ 60%] Building CXX object src/rgw/CMakeFiles/rgw_common.dir/rgw_rest_pubsub_common.cc.o
Scanning dependencies of target erasure_code_plugins
[ 60%] Built target erasure_code_plugins
[ 60%] Building CXX object src/rgw/CMakeFiles/rgw_common.dir/rgw_rest_realm.cc.o
[ 62%] Building CXX object src/rgw/CMakeFiles/rgw_common.dir/rgw_rest_role.cc.o
[ 62%] Building CXX object src/rgw/CMakeFiles/rgw_common.dir/rgw_rest_s3.cc.o
[ 62%] Building CXX object src/librbd/CMakeFiles/rbd_internal.dir/journal/ResetRequest.cc.o
[ 62%] Building CXX object src/crimson/CMakeFiles/crimson-common.dir/__/osd/osd_types.cc.o
collect2: fatal error: ld terminated with signal 11 [Segmentation fault], core dumped
compilation terminated.
src/librados/CMakeFiles/librados.dir/build.make:123: recipe for target 'lib/librados.so.2.0.0' failed
make[3]: *** [lib/librados.so.2.0.0] Error 1
make[3]: *** Deleting file 'lib/librados.so.2.0.0'
CMakeFiles/Makefile2:4262: recipe for target 'src/librados/CMakeFiles/librados.dir/all' failed
make[2]: *** [src/librados/CMakeFiles/librados.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
</pre>
From https://jenkins.ceph.com/job/ceph-pull-requests/37673/consoleFull#-853622487be8fa57d-c354-45fc-9e5c-e33905c76575

rgw - Bug #21962 (Resolved): Policy parser may or may not dereference uninitialized boost::option...
https://tracker.ceph.com/issues/21962 (2017-10-27, Adam Emerson <aemerson@redhat.com>)