Ceph : Issues
https://tracker.ceph.com/
2022-01-28T15:55:57Z
Ceph
Redmine
Orchestrator - Documentation #54056 (New): https://docs.ceph.com/en/latest/cephadm/install/#deplo...
https://tracker.ceph.com/issues/54056
2022-01-28T15:55:57Z
Sebastian Wagner
<p><a class="external" href="https://docs.ceph.com/en/latest/cephadm/install/#deployment-in-an-isolated-environment">https://docs.ceph.com/en/latest/cephadm/install/#deployment-in-an-isolated-environment</a></p>
<ul>
<li>e.g. the monitoring stack and ingress is completely missing</li>
<li>link to the registry-login functionality</li>
</ul>
Orchestrator - Cleanup #54042 (New): move pybind/ceph_argparse.py and pybind/ceph_daemon.py into ...
https://tracker.ceph.com/issues/54042
2022-01-27T15:28:18Z
Sebastian Wagner
<p>Why? Because there is no reason we need those as extra RPM packages. We can simply remove those RPM packages and make use of python-common instead.</p>
<p>This is basically just a <strong>git mv src/pybind/ceph_argparse.py src/python_common/ceph/argparse.py</strong> plus some import fixes plus some ceph.spec cleanup.</p>
<p>The benefit is clear: as those modules are huge, this would finally give us the ability to properly refactor them.</p>
Orchestrator - Cleanup #54001 (New): type safe Python mon_command API clients
https://tracker.ceph.com/issues/54001
2022-01-24T16:13:41Z
Sebastian Wagner
<p>Like for the last 6 years, I always wondered why we don't have a type-safe<br />way to use the mon command API. Turns out I never had the time to<br />actually work on it. But my idea looks more or less like this:</p>
<p><a class="external" href="https://pad.ceph.com/p/mon_command_python_api">https://pad.ceph.com/p/mon_command_python_api</a></p>
<p>what do you think? Is there any use for something like this?</p>
<pre><code class="python syntaxhl"># mon_command_api.py
from typing import NamedTuple

class CommandResult(NamedTuple):
    ret: int
    stdout: str
    stderr: str

class Call(NamedTuple):
    api: 'MonCommandApi'
    prefix: str

    def __call__(self, **kwargs):
        kwargs['prefix'] = self.prefix
        return self.api._mon_command(kwargs)

class MonCommandApi:
    def __init__(self, rados):
        self.rados = rados

    def _mon_command(self, cmd):
        return CommandResult(*self.rados.mon_command(cmd))

    def __getattr__(self, name):
        # "orch_rm_daemon" -> prefix "orch rm daemon"
        return Call(self, name.replace('_', ' '))

# mon_command_api.pyi. Autogenerated by a script, exactly like https://docs.ceph.com/en/latest/api/mon_command_api/
class MonCommandApi:
    def __init__(self, rados) -> None: ...
    def _mon_command(self, cmd) -> CommandResult: ...
    def orch_rm_daemon(self, name: str) -> CommandResult: ...
    def osd_create_pool(self, pool: str, pg_num: int, size: int) -> CommandResult: ...

# example use:
def orch_rm_daemon(daemon_name):
    api = MonCommandApi(...)
    return api.orch_rm_daemon(name=daemon_name)

def other_function(daemon_name):
    api = MonCommandApi(...)
    return api.osd_create_pool(pool='my_pool', pg_num=42, size=3)
</code></pre>
Orchestrator - Cleanup #54000 (New): cephadm: upgrade commands should return yaml
https://tracker.ceph.com/issues/54000
2022-01-24T15:39:53Z
Sebastian Wagner
<p>Right now, commands like</p>
<ul>
<li>ceph orch upgrade ls</li>
<li>ceph orch upgrade status</li>
</ul>
<p>are returning JSON. YAML is much more readable. Let's return YAML instead!</p>
<p><strong>Note</strong>: this then needs a <strong>--format=...</strong> argument.</p>
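<p>A minimal sketch of what honoring such a <strong>--format=...</strong> argument could look like, with YAML as the default. <strong>render()</strong> and its argument names are illustrative, not the actual cephadm code, and the flat-dict renderer only stands in for <strong>yaml.safe_dump</strong> to keep the example dependency-free.</p>

```python
# Hypothetical sketch: render a command result as YAML by default
# while still honoring --format=json. A real implementation would
# use yaml.safe_dump from PyYAML instead of the flat-dict renderer.
import json

def render(result: dict, fmt: str = 'yaml') -> str:
    if fmt == 'json':
        return json.dumps(result, indent=2)
    # minimal YAML-style output for a flat dict
    return '\n'.join('%s: %s' % (k, v) for k, v in result.items())

out = render({'target_image': 'quay.io/ceph/ceph:v17.2.0', 'in_progress': False})
```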
Orchestrator - Cleanup #53999 (New): orch interface: cephadm contains a lot of special apply methods
https://tracker.ceph.com/issues/53999
2022-01-24T15:28:49Z
Sebastian Wagner
<pre><code class="python syntaxhl"><span class="CodeRay"> <span class="decorator">@handle_orch_error</span>
<span class="keyword">def</span> <span class="function">apply_rgw</span>(<span class="predefined-constant">self</span>, spec: ServiceSpec) -> <span class="predefined">str</span>:
<span class="keyword">return</span> <span class="predefined-constant">self</span>._apply(spec)
</span></code></pre>
<p>Let's remove them! This is getting out of hand. Just use <strong>apply</strong> for everything.</p>
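<p>A sketch of the proposed shape: one generic <strong>apply()</strong> keyed on <strong>service_type</strong> instead of N per-service wrappers. The <strong>ServiceSpec</strong> and <strong>Orchestrator</strong> classes below are simplified stand-ins, not the real ceph classes.</p>

```python
# Hypothetical sketch: a single apply() replacing the
# apply_rgw/apply_mds/... wrapper methods.
from dataclasses import dataclass

@dataclass
class ServiceSpec:
    service_type: str
    service_id: str = ''

class Orchestrator:
    KNOWN_TYPES = {'mon', 'mgr', 'osd', 'mds', 'rgw', 'nfs', 'ingress'}

    def apply(self, spec: ServiceSpec) -> str:
        # one code path validates and schedules every service type
        if spec.service_type not in self.KNOWN_TYPES:
            raise ValueError('unknown service type: %s' % spec.service_type)
        return 'Scheduled %s update...' % spec.service_type

orch = Orchestrator()
```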
Orchestrator - Feature #53967 (New): support minor downgrades
https://tracker.ceph.com/issues/53967
2022-01-21T16:47:54Z
Sebastian Wagner
<p>There is a general demand to support minor downgrades (this needs to be properly supported by all components!). To get there, we have to solve issues like:</p>
<pre>
Jan 21 15:09:08 standalone.localdomain conmon[286544]: debug 2022-01-21T15:09:08.359+0000 7f1657d84700 0 log_channel(cephadm) log [DBG] : mgr option log_to_file = False
Jan 21 15:09:08 standalone.localdomain conmon[286544]: debug 2022-01-21T15:09:08.359+0000 7f1657d84700 0 log_channel(cephadm) log [DBG] : mgr option log_to_cluster_level = debug
Jan 21 15:09:08 standalone.localdomain conmon[286544]: debug 2022-01-21T15:09:08.374+0000 7f1657d84700 -1 mgr load Failed to construct class in 'cephadm'
Jan 21 15:09:08 standalone.localdomain conmon[286544]: debug 2022-01-21T15:09:08.374+0000 7f1657d84700 -1 mgr load Traceback (most recent call last):
Jan 21 15:09:08 standalone.localdomain conmon[286544]: File "/usr/share/ceph/mgr/cephadm/module.py", line 405, in __init__
Jan 21 15:09:08 standalone.localdomain conmon[286544]: self.upgrade = CephadmUpgrade(self)
Jan 21 15:09:08 standalone.localdomain conmon[286544]: File "/usr/share/ceph/mgr/cephadm/upgrade.py", line 76, in __init__
Jan 21 15:09:08 standalone.localdomain conmon[286544]: self.upgrade_state: Optional[UpgradeState] = UpgradeState.from_json(json.loads(t))
Jan 21 15:09:08 standalone.localdomain conmon[286544]: File "/usr/share/ceph/mgr/cephadm/upgrade.py", line 58, in from_json
Jan 21 15:09:08 standalone.localdomain conmon[286544]: return cls(**c)
Jan 21 15:09:08 standalone.localdomain conmon[286544]: TypeError: __init__() got an unexpected keyword argument 'fs_original_allow_standby_replay'
Jan 21 15:09:08 standalone.localdomain conmon[286544]:
Jan 21 15:09:08 standalone.localdomain conmon[286544]: debug 2022-01-21T15:09:08.379+0000 7f1657d84700 -1 mgr operator() Failed to run module in active mode ('cephadm')
Jan 21 15:09:08 standalone.localdomain conmon[286544]: debug 2022-01-21T15:09:08.380+0000 7f1657d84700 0 [crash DEBUG root] setting log level based on debug_mgr: WARNING (1/5)
Jan 21 15:09:08 standalone.localdomain conmon[286544]: debug 2022-01-21T15:09:08.380+0000 7f1657d84700 1 mgr load Constructed class from module: crash
</pre>
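<p>The traceback above fails because <strong>UpgradeState.from_json</strong> receives a key written by a newer release. One prerequisite for downgrades is tolerating unknown fields; a sketch of that, with a deliberately simplified <strong>UpgradeState</strong> (the real class has more fields):</p>

```python
# Hypothetical sketch: a from_json that silently drops keys this
# (older) version doesn't know about, e.g. the
# 'fs_original_allow_standby_replay' key from the traceback.
import inspect

class UpgradeState:
    def __init__(self, target_name: str, progress_id: str = ''):
        self.target_name = target_name
        self.progress_id = progress_id

    @classmethod
    def from_json(cls, data: dict) -> 'UpgradeState':
        known = set(inspect.signature(cls.__init__).parameters) - {'self'}
        # keep only constructor arguments this version understands
        return cls(**{k: v for k, v in data.items() if k in known})

st = UpgradeState.from_json({'target_name': 'quay.io/ceph/ceph:v16.2.7',
                             'fs_original_allow_standby_replay': True})
```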
Orchestrator - Bug #53965 (New): cephadm: RGW container is crashing at 'rados_nobjects_list_next2...
https://tracker.ceph.com/issues/53965
2022-01-21T15:53:15Z
Sebastian Wagner
<p>When upgrading from ceph-ansible, we're getting:</p>
<pre>
Jan 12 18:30:14 host conmon[12897]: debug 2022-01-12T17:30:14.112+0000 0 starting handler: beast
Jan 12 18:30:14 host conmon[12897]: ceph version 16.2.0-146.x (sha) pacific (stable)
Jan 12 18:30:14 host conmon[12897]: 1: /lib64/libpthread.so.0(+0x12c20) [0x7fb4b1e86c20]
Jan 12 18:30:14 host conmon[12897]: 2: gsignal()
Jan 12 18:30:14 host conmon[12897]: 3: abort()
Jan 12 18:30:14 host conmon[12897]: 4: /lib64/libstdc++.so.6(+0x9009b) [0x7fb4b0e7a09b]
Jan 12 18:30:14 host conmon[12897]: 5: /lib64/libstdc++.so.6(+0x9653c) [0x7fb4b0e8053c]
Jan 12 18:30:14 host conmon[12897]: 6: /lib64/libstdc++.so.6(+0x96597) [0x7fb4b0e80597]
Jan 12 18:30:14 host conmon[12897]: 7: /lib64/libstdc++.so.6(+0x967f8) [0x7fb4b0e807f8]
Jan 12 18:30:14 host conmon[12897]: 8: /lib64/librados.so.2(+0x3a4c0) [0x7fb4bc3f74c0]
Jan 12 18:30:14 host conmon[12897]: 9: /lib64/librados.so.2(+0x809d2) [0x7fb4bc43d9d2]
Jan 12 18:30:14 host conmon[12897]: 10: (librados::v14_2_0::IoCtx::nobjects_begin(librados::v14_2_0::ObjectCursor const&, ceph::buffer::v15_2_0::list const&)+0x5c) [0x7fb4bc43da5c]
Jan 12 18:30:14 host conmon[12897]: 11: (RGWSI_RADOS::Pool::List::init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, RGWAccessListFilter*)+0x115) [0x7fb4bd26d8e5]
Jan 12 18:30:14 host conmon[12897]: 12: (RGWSI_SysObj_Core::pool_list_objects_init(rgw_pool const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, RGWSI_SysObj::Pool::ListCtx*)+0x24b) [0x7fb4bcd9b0bb]
Jan 12 18:30:14 host conmon[12897]: 13: (RGWSI_MetaBackend_SObj::list_init(RGWSI_MetaBackend::Context*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1e0) [0x7fb4bd2609d0]
Jan 12 18:30:14 host conmon[12897]: 14: (RGWMetadataHandler_GenericMetaBE::list_keys_init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void**)+0x40) [0x7fb4bcead940]
Jan 12 18:30:14 host conmon[12897]: 15: (RGWMetadataManager::list_keys_init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void**)+0x73) [0x7fb4bceaff13]
Jan 12 18:30:14 host conmon[12897]: 16: (RGWMetadataManager::list_keys_init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void**)+0x3e) [0x7fb4bceaffce]
Jan 12 18:30:14 host conmon[12897]: 17: (RGWUserStatsCache::sync_all_users(optional_yield)+0x71) [0x7fb4bd052361]
Jan 12 18:30:14 host conmon[12897]: 18: (RGWUserStatsCache::UserSyncThread::entry()+0x10f) [0x7fb4bd05984f]
Jan 12 18:30:14 host conmon[12897]: 19: /lib64/libpthread.so.0(+0x817a) [0x7fb4b1e7c17a]
Jan 12 18:30:14 host conmon[12897]: 20: clone()
</pre>
<p>One possibility is that the pools might not have the rgw tag enabled.</p>
<p><strong>possible workaround</strong></p>
<p>add the rgw tag to all the rgw pools.</p>
<p><strong>known workaround</strong></p>
<p>change the cephx caps of all affected daemons from</p>
<pre>
mon allow *
mgr allow rw
osd allow rwx tag rgw *=*
</pre>
<p>to</p>
<pre>
mon allow *
mgr allow rw
osd allow rwx
</pre>
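<p>The caps change above can be applied with <strong>ceph auth caps</strong>; the entity name below is an example, substitute the key of each affected daemon:</p>

```shell
# Relax the osd caps for an affected RGW daemon
# (client.rgw.myhost is a placeholder entity name)
ceph auth caps client.rgw.myhost \
    mon 'allow *' \
    mgr 'allow rw' \
    osd 'allow rwx'
```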
Orchestrator - Documentation #53532 (New): cephadm document Prometheus Sizing observations
https://tracker.ceph.com/issues/53532
2021-12-08T11:50:52Z
Sebastian Wagner
Prometheus Sizing observations
<ul>
<li>22/11 48 osds growing to 600+ - 124GB Disk space, peak of 3000 IOPS, 150MB/s, 2 cores (peak 6), 43G RAM</li>
<li>with 4147 OSDs, storage requirement is ~230GB, 2 cores (peak 8)</li>
</ul>
<p>Should be added to <a class="external" href="https://docs.ceph.com/en/latest/cephadm/services/monitoring/">https://docs.ceph.com/en/latest/cephadm/services/monitoring/</a></p>
<p>Paul:</p>
<ul>
<li>What does <strong>22/11</strong> mean?</li>
</ul>
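<p>For the docs, the observations could be complemented by the rough sizing formula from the Prometheus operational docs: disk &asymp; retention seconds &times; ingested samples per second &times; bytes per sample (roughly 1&ndash;2 bytes compressed). A sketch, where the sample rate is an illustrative assumption, not a measured ceph-mgr figure:</p>

```python
# Rough Prometheus disk sizing per the formula in the Prometheus
# operational docs. The 50,000 samples/s example rate is an
# illustrative assumption, not a measured Ceph cluster figure.
def prometheus_disk_bytes(retention_days: float,
                          samples_per_second: float,
                          bytes_per_sample: float = 2.0) -> float:
    return retention_days * 86400 * samples_per_second * bytes_per_sample

# e.g. 15 days retention at 50,000 samples/s:
gib = prometheus_disk_bytes(15, 50_000) / 2**30
```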
Orchestrator - Bug #53529 (New): ceph orch apply ... --dry-run: Table not properly formatted
https://tracker.ceph.com/issues/53529
2021-12-08T11:28:20Z
Sebastian Wagner
<pre>
root@service-01-08020:~# ceph orch apply -i cadvisor.yaml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+-----------+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+
|SERVICE |NAME |ADD_TO |REMOVE_FROM |
+-----------+--------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +-------------+
|container |container.cadvisor |hosta08016 hosta08014 hosta08006 hosta08005 hosta08004 hosta08007 hosta08009 hosta08010 hosta08008 hosta08015 hosta08003 hosta08013 hosta08012 hosta08011 hosta08002 hostb08035 hostb08033 hostb08030 hostb08026 hostb08024 hostb08025 hostb08023 hostb08032 hostb08036 hostb08029 hostb08031 hostb08027 hostb08028 hostb08022 hostb08056 hostb08055 hostb08053 hostb08051 hostb08050 hostb08048 hostb08047 hostb08045 hostb08043 hostb08044 hostb08042 hostb08052 hostb08049 hostd08076 hostd08075 hostd08074 hostd08073 hostd08072 hostd08071 hostd08070 hostd08069 hostd08068 hostd08066 hostd08067 hostd08065 hostd08064 hostd08063 hostd08062 hoste08096 hoste08092 hoste08091 hoste08090 hoste08095 hoste08094 hoste08093 hoste08087 hoste08085 hoste08084 hoste08089 hoste08088 hoste08086 hoste08082 hoste08083 hostf08116 hostf08115 hostf08112 hostf08114 hostf08113 hostf08111 hostf08110 hostf08109 hostf08108 hostf08106 hostf08107 hostf08104 hostf08105 hostf08103 hostf08102 hostg08135 hostg08136 hostg08124 hostg08123 hostg08134 hostg08133 hostg08132 hostg08131 hostg08130 hostg08129 hostg08128 hostg08122 hostg08126 hostg08125 hostg08127 hosth08153 hosth08155 hosth08154 hosth08151 hosth08149 hosth08146 hosth08148 hosth08147 hosth08145 hosth08143 hosth08156 hosth08142 hosth08150 hosth08144 hosth08152 hosti08173 hosti08170 hosti08169 hosti08168 hosti08166 hosti08164 hosti08163 hosti08165 hosti08175 hosti08171 hosti08167 hosti08172 hosti08162 hostk08192 hostk08184 hostk08191 hostk08196 hostk08193 hostk08194 hostk08195 hostk08188 hostk08186 hostk08189 hostk08190 hostk08183 hostk08187 hostk08182 hostk08185 hostm08216 hostm08214 hostm08215 hostm08213 hostm08206 hostm08209 hostm08211 hostm08212 hostm08210 hostm08208 hostm08207 hostm08205 hostm08204 hostm08203 hostm08202 hostn08224 hostn08236 hostn08233 hostn08232 hostn08234 hostn08230 hostn08231 hostn08235 hostn08229 hostn08227 hostn08226 hostn08228 hostn08225 hostn08222 hostn08223 | |
+-----------+--------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +-------------+
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE |NAME |HOST |DATA |DB |WAL |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+
</pre>
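<p>One way to fix this would be wrapping over-long cell values to a fixed width before the table is rendered, so a 170-host placement list doesn't blow up the column. A sketch using stdlib <strong>textwrap</strong>; <strong>wrap_cell</strong> is illustrative, not actual cephadm code:</p>

```python
# Hypothetical sketch: wrap wide cell contents before handing the
# row to the table formatter.
import textwrap

def wrap_cell(value: str, width: int = 40) -> str:
    # '\n' inside a cell makes most table formatters render
    # multi-line rows instead of one enormous column
    return '\n'.join(textwrap.wrap(value, width=width)) or value

hosts = ' '.join('host%03d' % i for i in range(20))
wrapped = wrap_cell(hosts, width=40)
```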
Orchestrator - Bug #53422 (New): tasks.cephfs.test_nfs.TestNFS.test_export_create_with_non_existi...
https://tracker.ceph.com/issues/53422
2021-11-29T08:47:58Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2021-11-26_13:52:15-orch:cephadm-wip-swagner2-testing-2021-11-26-1129-distro-default-smithi/6528237">https://pulpito.ceph.com/swagner-2021-11-26_13:52:15-orch:cephadm-wip-swagner2-testing-2021-11-26-1129-distro-default-smithi/6528237</a></p>
<pre>
teuthology.orchestra.run.smithi145:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.test
teuthology.orchestra.run.smithi145.stdout:No daemons reported
test_export_create_with_non_existing_fsname (tasks.cephfs.test_nfs.TestNFS) ... FAIL
======================================================================
FAIL: test_export_create_with_non_existing_fsname (tasks.cephfs.test_nfs.TestNFS)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_ceph-c_c9fd0972675c568f5d517be49820a12e77fab497/qa/tasks/cephfs/test_nfs.py", line 412, in test_export_create_with_non_existing_fsname
self._test_create_cluster()
File "/home/teuthworker/src/git.ceph.com_ceph-c_c9fd0972675c568f5d517be49820a12e77fab497/qa/tasks/cephfs/test_nfs.py", line 125, in _test_create_cluster
self._check_nfs_cluster_status('running', 'NFS Ganesha cluster deployment failed')
File "/home/teuthworker/src/git.ceph.com_ceph-c_c9fd0972675c568f5d517be49820a12e77fab497/qa/tasks/cephfs/test_nfs.py", line 89, in _check_nfs_cluster_status
self.fail(fail_msg)
AssertionError: NFS Ganesha cluster deployment failed
</pre>
<p>Looking at the log, we see the <strong>nfs.test</strong> cluster being created multiple times successfully, but it was never created right before the last test case:</p>
<pre>
2021-11-26T16:46:09: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:46:09: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.phdkqk
2021-11-26T16:46:09: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:46:09: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:46:09: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.phdkqk-rgw
2021-11-26T16:46:09: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.phdkqk on smithi145
2021-11-26T16:46:41: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:46:41: cephadm [INF] Remove service nfs.test
2021-11-26T16:46:41: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.phdkqk...
2021-11-26T16:46:41: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.phdkqk from smithi145
2021-11-26T16:46:45: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.phdkqk
2021-11-26T16:46:45: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.phdkqk-rgw
2021-11-26T16:46:45: cephadm [INF] Purge service nfs.test
2021-11-26T16:46:45: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:46:57: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:46:57: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.vgdnuw
2021-11-26T16:46:57: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:46:57: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:46:57: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.vgdnuw-rgw
2021-11-26T16:46:57: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.vgdnuw on smithi145
2021-11-26T16:47:51: cephadm [INF] Restart service nfs.test
2021-11-26T16:48:22: cephadm [INF] Restart service nfs.test
2021-11-26T16:49:02: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:49:02: cephadm [INF] Remove service nfs.test
2021-11-26T16:49:15: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.vgdnuw...
2021-11-26T16:49:15: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.vgdnuw from smithi145
2021-11-26T16:49:19: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.vgdnuw
2021-11-26T16:49:19: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.vgdnuw-rgw
2021-11-26T16:49:19: cephadm [INF] Purge service nfs.test
2021-11-26T16:49:19: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:49:47: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:49:47: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.heyhit
2021-11-26T16:49:47: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:49:47: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:49:47: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.heyhit-rgw
2021-11-26T16:49:47: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.heyhit on smithi145
2021-11-26T16:50:18: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:50:18: cephadm [INF] Remove service nfs.test
2021-11-26T16:50:18: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.heyhit...
2021-11-26T16:50:18: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.heyhit from smithi145
2021-11-26T16:50:22: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.heyhit
2021-11-26T16:50:22: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.heyhit-rgw
2021-11-26T16:50:22: cephadm [INF] Purge service nfs.test
2021-11-26T16:50:22: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:50:32: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:50:32: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.woegpi
2021-11-26T16:50:32: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:50:32: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:50:32: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.woegpi-rgw
2021-11-26T16:50:32: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.woegpi on smithi145
2021-11-26T16:51:19: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:51:19: cephadm [INF] Remove service nfs.test
2021-11-26T16:51:19: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.woegpi...
2021-11-26T16:51:19: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.woegpi from smithi145
2021-11-26T16:51:34: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.woegpi
2021-11-26T16:51:34: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.woegpi-rgw
2021-11-26T16:51:34: cephadm [INF] Purge service nfs.test
2021-11-26T16:51:34: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:51:56: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:51:56: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.uwhrnz
2021-11-26T16:51:56: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:51:56: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:51:56: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.uwhrnz-rgw
2021-11-26T16:51:56: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.uwhrnz on smithi145
2021-11-26T16:52:27: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:52:27: cephadm [INF] Remove service nfs.test
2021-11-26T16:52:27: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.uwhrnz...
2021-11-26T16:52:27: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.uwhrnz from smithi145
2021-11-26T16:52:32: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.uwhrnz
2021-11-26T16:52:32: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.uwhrnz-rgw
2021-11-26T16:52:32: cephadm [INF] Purge service nfs.test
2021-11-26T16:52:32: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:52:41: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:52:41: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.ccuxek
2021-11-26T16:52:41: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:52:41: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:52:41: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.ccuxek-rgw
2021-11-26T16:52:41: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.ccuxek on smithi145
2021-11-26T16:53:15: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:53:15: cephadm [INF] Remove service nfs.test
2021-11-26T16:53:15: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.ccuxek...
2021-11-26T16:53:15: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.ccuxek from smithi145
2021-11-26T16:53:19: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.ccuxek
2021-11-26T16:53:19: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.ccuxek-rgw
2021-11-26T16:53:19: cephadm [INF] Purge service nfs.test
2021-11-26T16:53:19: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:53:28: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:53:28: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.dduovn
2021-11-26T16:53:28: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:53:28: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:53:28: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.dduovn-rgw
2021-11-26T16:53:28: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.dduovn on smithi145
2021-11-26T16:54:10: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:54:10: cephadm [INF] Remove service nfs.test
2021-11-26T16:54:10: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.dduovn...
2021-11-26T16:54:10: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.dduovn from smithi145
2021-11-26T16:54:14: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.dduovn
2021-11-26T16:54:14: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.dduovn-rgw
2021-11-26T16:54:14: cephadm [INF] Purge service nfs.test
2021-11-26T16:54:14: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:54:23: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.dduovn...
2021-11-26T16:54:23: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.dduovn from smithi145
2021-11-26T16:54:24: cephadm [INF] Saving service nfs.test spec with placement count:1
2021-11-26T16:54:25: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.mcpayf
2021-11-26T16:54:25: cephadm [INF] Ensuring nfs.test.0 is in the ganesha grace table
2021-11-26T16:54:25: cephadm [INF] Rados config object exists: conf-nfs.test
2021-11-26T16:54:25: cephadm [INF] Creating key for client.nfs.test.0.0.smithi145.mcpayf-rgw
2021-11-26T16:54:25: cephadm [INF] Deploying daemon nfs.test.0.0.smithi145.mcpayf on smithi145
2021-11-26T16:55:00: cephadm [INF] Remove service ingress.nfs.test
2021-11-26T16:55:00: cephadm [INF] Remove service nfs.test
2021-11-26T16:55:00: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.mcpayf...
2021-11-26T16:55:00: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.mcpayf from smithi145
2021-11-26T16:55:04: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.mcpayf
2021-11-26T16:55:04: cephadm [INF] Removing key for client.nfs.test.0.0.smithi145.mcpayf-rgw
2021-11-26T16:55:04: cephadm [INF] Purge service nfs.test
2021-11-26T16:55:04: cephadm [INF] Removing grace file for nfs.test
2021-11-26T16:55:17: cephadm [INF] Removing orphan daemon nfs.test.0.0.smithi145.mcpayf...
2021-11-26T16:55:17: cephadm [INF] Removing daemon nfs.test.0.0.smithi145.mcpayf from smithi145
</pre>
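The log above shows cephadm removing and redeploying the nfs.test daemon roughly once a minute, i.e. a remove/redeploy loop. A hedged sketch (not cephadm code; the log format is assumed from the excerpt above) of detecting such flapping from a cephadm log excerpt:

```python
import re
from collections import Counter

# Hypothetical helper: flag daemon slots that cephadm deploys more than
# `threshold` times in a log excerpt -- a sign of a remove/redeploy loop
# like the one above. The line format is taken from the excerpt.
DEPLOY_RE = re.compile(r"cephadm \[INF\] Deploying daemon (\S+) on (\S+)")

def flapping_daemons(log_lines, threshold=2):
    counts = Counter()
    for line in log_lines:
        m = DEPLOY_RE.search(line)
        if m:
            # Strip the per-attempt random suffix and host part of the daemon
            # name (e.g. nfs.test.0.0.smithi145.ccuxek -> nfs.test.0.0), so
            # repeated attempts for the same slot are counted together.
            slot = m.group(1).rsplit(".", 2)[0]
            counts[(slot, m.group(2))] += 1
    return [key for key, n in counts.items() if n >= threshold]
```

Running this over the excerpt above would flag `("nfs.test.0.0", "smithi145")`, since the same slot is deployed three times in two minutes.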
Orchestrator - Bug #53321 (Duplicate): cephadm tries to use the system disk for osd specs
https://tracker.ceph.com/issues/53321
2021-11-18T15:31:20Z
Sebastian Wagner
<p>Having this spec:</p>
<pre><code class="yaml syntaxhl"><span class="CodeRay"><span class="key">service_type</span>: <span class="string"><span class="content">osd</span></span>
<span class="key">service_id</span>: <span class="string"><span class="content">hybrid</span></span>
<span class="key">service_name</span>: <span class="string"><span class="content">osd.hybrid</span></span>
<span class="key">placement</span>:
<span class="key">host_pattern</span>: <span class="string"><span class="content">host1</span></span>
<span class="key">spec</span>:
<span class="key">data_devices</span>:
<span class="key">rotational</span>: <span class="string"><span class="content">1</span></span>
<span class="key">db_devices</span>:
<span class="key">rotational</span>: <span class="string"><span class="content">0</span></span>
<span class="key">filter_logic</span>: <span class="string"><span class="content">AND</span></span>
<span class="key">objectstore</span>: <span class="string"><span class="content">bluestore</span></span>
</span></code></pre>
<p>And with the system disk (/dev/sda) being locked:</p>
<pre>
# ceph orch device ls host1
HOST PATH TYPE DEVICE ID SIZE AVAILABLE REJECT REASONS
host1 /dev/nvme0n1 ssd NVMENVMENVMENVMENVMENVMENVMENVME1 1600G Yes
host1 /dev/nvme1n1 ssd NVMENVMENVMENVMENVMENVMENVMENVME4 1600G Yes
host1 /dev/sda hdd IDSDM_012345678901 64.2G Has GPT headers, locked
host1 /dev/sdb hdd AAAAAAAAAAA_0000000000000b0f1 16.0T Yes
host1 /dev/sdc hdd AAAAAAAAAAA_000000000000089cd 16.0T Yes
host1 /dev/sdd hdd AAAAAAAAAAA_0000000000000af6d 16.0T Yes
host1 /dev/sde hdd AAAAAAAAAAA_00000000000008a4d 16.0T Yes
host1 /dev/sdf hdd AAAAAAAAAAA_0000000000000af9d 16.0T Yes
host1 /dev/sdg hdd BBBBBBBBBBBb_00000000000044a5 16.0T Yes
host1 /dev/sdh hdd BBBBBBBBBBBb_000000000000f7f9 16.0T Yes
host1 /dev/sdi hdd AAAAAAAAAAA_000000000000089a1 16.0T Yes
host1 /dev/sdj hdd AAAAAAAAAAA_00000000000008601 16.0T Yes
host1 /dev/sdk hdd AAAAAAAAAAA_00000000000008a71 16.0T Yes
host1 /dev/sdl hdd CCCCCCCCCCCC_000000000000ebdd 16.0T Yes
host1 /dev/sdm hdd AAAAAAAAAAA_000000000000089bd 16.0T Yes
host1 /dev/sdn hdd AAAAAAAAAAA_0000000000000fd31 16.0T Yes
host1 /dev/sdo hdd AAAAAAAAAAA_0000000000000f9a9 16.0T Yes
host1 /dev/sdp hdd AAAAAAAAAAA_00000000000008565 16.0T Yes
host1 /dev/sdq hdd BBBBBBBBBBBb_000000000000f3e5 16.0T Yes
host1 /dev/sdr hdd MG08SCA16TEY_5000039aa858002d 16.0T Yes
host1 /dev/sds hdd AAAAAAAAAAA_0000000000000fa61 16.0T Yes
host1 /dev/sdt hdd BBBBBBBBBBBb_00000000000046a5 16.0T Yes
host1 /dev/sdu hdd BBBBBBBBBBBb_00000000000041a1 16.0T Yes
host1 /dev/sdv hdd BBBBBBBBBBBb_000000000000f46d 16.0T Yes
host1 /dev/sdw hdd BBBBBBBBBBBb_00000000000046a9 16.0T Yes
host1 /dev/sdx hdd CCCCCCCCCCCC_000000000000eec1 16.0T Yes
host1 /dev/sdy hdd BBBBBBBBBBBb_0000000000004509 16.0T Yes
# mount -l | grep sda
/dev/sda2 on / type ext4 (rw,relatime) [root]
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro) [boot-efi]
</pre>
<p>cephadm fails to apply the spec:</p>
<pre>
# ceph orch ls --format yaml --service-type osd | python3 -c 'import sys, yaml, json; y=yaml.safe_load_all(sys.stdin.read()); print(json.dumps(list(y)))' | jq -r .[0].events[0]
2021-11-18T14:09:26.296374Z service:osd.hybrid [ERROR] "Failed to apply: cephadm exited with an error code: 1, stderr:Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:sha1 -e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp1gohyav7:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpblgd2lz7:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:sha1 lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr [--auto] [--no-auto] [--bluestore] [--filestore]
/usr/bin/docker: stderr [--report] [--yes]
/usr/bin/docker: stderr [--format {json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr [--crush-device-class CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr [--no-systemd]
/usr/bin/docker: stderr [--osds-per-device OSDS_PER_DEVICE]
/usr/bin/docker: stderr [--data-slots DATA_SLOTS]
/usr/bin/docker: stderr [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
/usr/bin/docker: stderr [--block-db-size BLOCK_DB_SIZE]
/usr/bin/docker: stderr [--block-db-slots BLOCK_DB_SLOTS]
/usr/bin/docker: stderr [--block-wal-size BLOCK_WAL_SIZE]
/usr/bin/docker: stderr [--block-wal-slots BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr [--journal-size JOURNAL_SIZE]
/usr/bin/docker: stderr [--journal-slots JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr [--osd-ids [OSD_IDS [OSD_IDS ...]]]
/usr/bin/docker: stderr [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
Traceback (most recent call last):
File "/var/lib/ceph/fsid/cephadm.hash", line 8331, in <module>
main()
File "/var/lib/ceph/fsid/cephadm.hash", line 8319, in main
r = ctx.func(ctx)
File "/var/lib/ceph/fsid/cephadm.hash", line 1735, in _infer_config
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.hash", line 1676, in _infer_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.hash", line 1763, in _infer_image
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.hash", line 1663, in _validate_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.hash", line 5285, in command_ceph_volume
out, err, code = call_throws(ctx, c.run_cmd())
File "/var/lib/ceph/fsid/cephadm.hash", line 1465, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:sha1 -e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp1gohyav7:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpblgd2lz7:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:sha1 lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd"
</pre>
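The `rotational: 1` filter matched /dev/sda even though `ceph orch device ls` rejects it ("Has GPT headers, locked"), so ceph-volume was handed the system disk and bailed out. A hedged sketch of device selection that honors availability before applying the spec's filters (field names mirror the `device ls` columns above; this is not cephadm's actual drive-selection code):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Device:
    # Illustrative model of one `ceph orch device ls` row.
    path: str
    rotational: bool
    available: bool
    reject_reasons: List[str] = field(default_factory=list)

def select_data_devices(devices, want_rotational=True):
    """Apply the spec's `rotational` filter, but only to available devices.

    Devices with any reject reason (e.g. "Has GPT headers, locked") are
    excluded up front, so the system disk never reaches ceph-volume.
    """
    return [
        d.path
        for d in devices
        if d.available and not d.reject_reasons
        and d.rotational == want_rotational
    ]
```

With the inventory above, /dev/sda would be filtered out before `lvm batch` is invoked, avoiding the "GPT headers found" failure.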
Orchestrator - Cleanup #53276 (New): cephadm: don't download any images in `cephadm deploy`
https://tracker.ceph.com/issues/53276
2021-11-16T13:34:06Z
Sebastian Wagner
<p>I think it might make sense to have `deploy` avoid downloading any images and instead let systemd pull them, so that cephadm itself doesn't become a bottleneck. Deploying a daemon should be a fast operation.</p>
<p>In particular, the gid/uid detection should be streamlined.</p>
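A sketch of what "let systemd pull" could look like: a unit that pulls the image in `ExecStartPre` so `cephadm deploy` only writes the unit file. The unit name, image, and run flags here are illustrative, not cephadm's actual generated unit.

```ini
# Hypothetical ceph-<fsid>@<daemon>.service fragment (illustrative only).
[Service]
# Pull at start instead of during `cephadm deploy`; podman skips the
# download if the image is already present locally.
ExecStartPre=/usr/bin/podman pull quay.io/ceph/ceph:v16
ExecStart=/usr/bin/podman run --rm --name ceph-%i quay.io/ceph/ceph:v16
Restart=on-failure
```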
Orchestrator - Bug #53174 (Resolved): `ceph orch daemon rm mgr......` should warn if a user wants...
https://tracker.ceph.com/issues/53174
2021-11-05T16:24:19Z
Sebastian Wagner
<p>I just removed my last mgr....</p>
Orchestrator - Bug #53034 (Can't reproduce): podman-3.0.1-6 crashed
https://tracker.ceph.com/issues/53034
2021-10-25T15:21:05Z
Sebastian Wagner
<pre>
$ file remote/smithi081/coredump/1634847631.144910.core
remote/smithi081/coredump/1634847631.144910.core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/bin/podman stop ceph-a756fc90-32a9-11ec-8c28-001a4aab830c-node-exporter-smithi', real uid: 0, effective uid: 0, real gid: 0, effective gid: 0, execfn: '/bin/podman', platform: 'x86_64'
</pre>
<pre>
2021-10-21T20:22:17.589 INFO:teuthology.task.internal:Transferring binaries for coredumps...
2021-10-21T20:22:17.615 ERROR:teuthology.run_tasks:Manager failed: internal.archive
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/teuthology/run_tasks.py", line 176, in run_tasks
suppress = manager.__exit__(*exc_info)
File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
next(self.gen)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/teuthology/task/internal/__init__.py", line 398, in archive
fetch_binaries_for_coredumps(path, rem)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_c56135d151713269e811ede3163c9743c2e269de/teuthology/task/internal/__init__.py", line 320, in fetch_binaries_for_coredumps
dump_program = dump_out.split("from '")[1].split(' ')[0]
IndexError: list index out of range
2021-10-21T20:22:17.616 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
</pre>
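The teuthology traceback above is a secondary bug: `dump_out.split("from '")[1]` raises IndexError whenever the `file` output has no `from '...'` segment. A hedged, defensive rewrite of that parsing step (function name is illustrative, not teuthology's actual helper):

```python
import re

def core_dump_program(file_output):
    """Extract the crashing program's argv[0] from `file` output on a core.

    Returns None instead of raising IndexError when the "from '...'"
    segment is missing (truncated or unusual core files), which is what
    tripped fetch_binaries_for_coredumps above.
    """
    # Equivalent to split("from '")[1].split(' ')[0], but safe when the
    # pattern is absent: capture up to the first space or closing quote.
    m = re.search(r"from '([^' ]+)", file_output)
    return m.group(1) if m else None
```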
<pre>
2021-10-21T20:01:17.705 INFO:teuthology.orchestra.run.smithi081.stdout:Upgraded:
2021-10-21T20:01:17.706 INFO:teuthology.orchestra.run.smithi081.stdout: containernetworking-plugins-0.9.1-1.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.706 INFO:teuthology.orchestra.run.smithi081.stdout: criu-3.15-1.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.706 INFO:teuthology.orchestra.run.smithi081.stdout: libslirp-4.3.1-1.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.706 INFO:teuthology.orchestra.run.smithi081.stdout: slirp4netns-1.1.8-1.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.707 INFO:teuthology.orchestra.run.smithi081.stdout:Downgraded:
2021-10-21T20:01:17.707 INFO:teuthology.orchestra.run.smithi081.stdout: conmon-2:2.0.26-1.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.707 INFO:teuthology.orchestra.run.smithi081.stdout: container-selinux-2:2.158.0-1.module_el8.4.0+786+4668b267.noarch
2021-10-21T20:01:17.707 INFO:teuthology.orchestra.run.smithi081.stdout: containers-common-1:1.2.2-7.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.708 INFO:teuthology.orchestra.run.smithi081.stdout: crun-0.18-2.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.708 INFO:teuthology.orchestra.run.smithi081.stdout: fuse-overlayfs-1.4.0-2.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.708 INFO:teuthology.orchestra.run.smithi081.stdout: podman-3.0.1-6.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.708 INFO:teuthology.orchestra.run.smithi081.stdout: podman-catatonit-3.0.1-6.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.708 INFO:teuthology.orchestra.run.smithi081.stdout: podman-docker-3.0.1-6.module_el8.4.0+786+4668b267.noarch
2021-10-21T20:01:17.709 INFO:teuthology.orchestra.run.smithi081.stdout: runc-1.0.0-71.rc92.module_el8.4.0+833+9763146c.x86_64
2021-10-21T20:01:17.709 INFO:teuthology.orchestra.run.smithi081.stdout:Installed:
2021-10-21T20:01:17.709 INFO:teuthology.orchestra.run.smithi081.stdout: buildah-1.19.7-1.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.709 INFO:teuthology.orchestra.run.smithi081.stdout: cockpit-podman-29-2.module_el8.4.0+786+4668b267.noarch
2021-10-21T20:01:17.709 INFO:teuthology.orchestra.run.smithi081.stdout: skopeo-1:1.2.2-7.module_el8.4.0+786+4668b267.x86_64
2021-10-21T20:01:17.710 INFO:teuthology.orchestra.run.smithi081.stdout: toolbox-0.0.8-1.module_el8.4.0+786+4668b267.noarch
2021-10-21T20:01:17.710 INFO:teuthology.orchestra.run.smithi081.stdout: udica-0.2.4-1.module_el8.4.0+786+4668b267.noarch
</pre>
<p><a class="external" href="http://pulpito.front.sepia.ceph.com/adking-2021-10-21_19:20:35-rados:cephadm-wip-adk-testing-2021-10-21-1228-distro-basic-smithi/6456288">http://pulpito.front.sepia.ceph.com/adking-2021-10-21_19:20:35-rados:cephadm-wip-adk-testing-2021-10-21-1228-distro-basic-smithi/6456288</a></p>
Orchestrator - Cleanup #52918 (New): two competing MTU checks
https://tracker.ceph.com/issues/52918
2021-10-13T16:07:31Z
Sebastian Wagner
<p><a class="external" href="https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/configchecks.py#L482">https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/configchecks.py#L482</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/blob/master/monitoring/prometheus/alerts/ceph_default_alerts.yml#L235">https://github.com/ceph/ceph/blob/master/monitoring/prometheus/alerts/ceph_default_alerts.yml#L235</a></p>
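Both linked checks boil down to the same idea: flag interfaces whose MTU deviates from the cluster-wide majority. A hedged sketch of that shared logic (the `{host: {iface: mtu}}` input shape is illustrative, not cephadm's actual data model):

```python
from collections import Counter

def mtu_anomalies(host_ifaces):
    """Return (host, iface, mtu) tuples whose MTU differs from the
    most common MTU across all hosts -- the heuristic both the cephadm
    config check and the Prometheus alert implement independently."""
    mtus = Counter(
        mtu for ifaces in host_ifaces.values() for mtu in ifaces.values()
    )
    if not mtus:
        return []
    majority, _ = mtus.most_common(1)[0]
    return [
        (host, iface, mtu)
        for host, ifaces in host_ifaces.items()
        for iface, mtu in ifaces.items()
        if mtu != majority
    ]
```

Having this in one place (and alerting from one source) would resolve the duplication between configchecks.py and ceph_default_alerts.yml.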