Ceph : Issues
https://tracker.ceph.com/ (2021-12-08T11:28:20Z)
Orchestrator - Bug #53529 (New): ceph orch apply ... --dry-run: Table not properly formatted
https://tracker.ceph.com/issues/53529 (2021-12-08T11:28:20Z, Sebastian Wagner)
<pre>
root@service-01-08020:~# ceph orch apply -i cadvisor.yaml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+-----------+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+
|SERVICE |NAME |ADD_TO |REMOVE_FROM |
+-----------+--------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +-------------+
|container |container.cadvisor |hosta08016 hosta08014 hosta08006 hosta08005 hosta08004 hosta08007 hosta08009 hosta08010 hosta08008 hosta08015 hosta08003 hosta08013 hosta08012 hosta08011 hosta08002 hostb08035 hostb08033 hostb08030 hostb08026 hostb08024 hostb08025 hostb08023 hostb08032 hostb08036 hostb08029 hostb08031 hostb08027 hostb08028 hostb08022 hostb08056 hostb08055 hostb08053 hostb08051 hostb08050 hostb08048 hostb08047 hostb08045 hostb08043 hostb08044 hostb08042 hostb08052 hostb08049 hostd08076 hostd08075 hostd08074 hostd08073 hostd08072 hostd08071 hostd08070 hostd08069 hostd08068 hostd08066 hostd08067 hostd08065 hostd08064 hostd08063 hostd08062 hoste08096 hoste08092 hoste08091 hoste08090 hoste08095 hoste08094 hoste08093 hoste08087 hoste08085 hoste08084 hoste08089 hoste08088 hoste08086 hoste08082 hoste08083 hostf08116 hostf08115 hostf08112 hostf08114 hostf08113 hostf08111 hostf08110 hostf08109 hostf08108 hostf08106 hostf08107 hostf08104 hostf08105 hostf08103 hostf08102 hostg08135 hostg08136 hostg08124 hostg08123 hostg08134 hostg08133 hostg08132 hostg08131 hostg08130 hostg08129 hostg08128 hostg08122 hostg08126 hostg08125 hostg08127 hosth08153 hosth08155 hosth08154 hosth08151 hosth08149 hosth08146 hosth08148 hosth08147 hosth08145 hosth08143 hosth08156 hosth08142 hosth08150 hosth08144 hosth08152 hosti08173 hosti08170 hosti08169 hosti08168 hosti08166 hosti08164 hosti08163 hosti08165 hosti08175 hosti08171 hosti08167 hosti08172 hosti08162 hostk08192 hostk08184 hostk08191 hostk08196 hostk08193 hostk08194 hostk08195 hostk08188 hostk08186 hostk08189 hostk08190 hostk08183 hostk08187 hostk08182 hostk08185 hostm08216 hostm08214 hostm08215 hostm08213 hostm08206 hostm08209 hostm08211 hostm08212 hostm08210 hostm08208 hostm08207 hostm08205 hostm08204 hostm08203 hostm08202 hostn08224 hostn08236 hostn08233 hostn08232 hostn08234 hostn08230 hostn08231 hostn08235 hostn08229 hostn08227 hostn08226 hostn08228 hostn08225 hostn08222 hostn08223 | |
+-----------+--------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +-------------+
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE |NAME |HOST |DATA |DB |WAL |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+
</pre>

Orchestrator - Documentation #52704 (Won't Fix): network 'flow' diagram for cephadm
https://tracker.ceph.com/issues/52704 (2021-09-22T14:50:37Z, Sebastian Wagner)
<p>It sounds like you're looking for a network 'flow' diagram covering cephadm plus the standard Ceph architecture.</p>

Orchestrator - Feature #51010 (Resolved): add Lamport clock
https://tracker.ceph.com/issues/51010 (2021-05-27T15:50:42Z, Sebastian Wagner)

Orchestrator - Feature #51007 (Resolved): add cephadm command that pushes results every so often
https://tracker.ceph.com/issues/51007 (2021-05-27T15:50:17Z, Sebastian Wagner)

Orchestrator - Feature #51006 (Resolved): add cephadm command to push results to endpoint
https://tracker.ceph.com/issues/51006 (2021-05-27T15:50:08Z, Sebastian Wagner)

Orchestrator - Feature #51005 (Resolved): add mgr/cephadm endpoint
https://tracker.ceph.com/issues/51005 (2021-05-27T15:49:58Z, Sebastian Wagner)

Orchestrator - Bug #50535 (Resolved): add local cephadm bootstrap dev env
https://tracker.ceph.com/issues/50535 (2021-04-27T09:05:17Z, Sebastian Wagner)
<pre>
sudo ./cephadm bootstrap \
--mon-ip 127.0.0.1 \
--ssh-private-key /home/<user>/.ssh/id_rsa \
--skip-mon-network \
--skip-monitoring-stack \
--single-host-defaults \
--shared_ceph_folder /path/to/ceph/repo
</pre>
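For scripted dev setups, the same invocation can be assembled by a small helper that fails fast if the SSH private key is missing (a hypothetical convenience wrapper, not part of cephadm; paths are placeholders):

```python
import os

def build_bootstrap_cmd(mon_ip, ssh_key, ceph_repo):
    """Assemble the single-host dev bootstrap command from #50535.

    Raises early if the private key is missing, since cephadm would
    otherwise fail partway through bootstrap.
    """
    if not os.path.exists(ssh_key):
        raise FileNotFoundError(f"SSH private key not found: {ssh_key}")
    return [
        "sudo", "./cephadm", "bootstrap",
        "--mon-ip", mon_ip,
        "--ssh-private-key", ssh_key,
        "--skip-mon-network",
        "--skip-monitoring-stack",
        "--single-host-defaults",
        "--shared_ceph_folder", ceph_repo,
    ]
```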
<p>That should be all you need to set up a cluster.</p>

Orchestrator - Bug #45462 (Resolved): 'https://download.ceph.com/debian-octopus focal Release' do...
https://tracker.ceph.com/issues/45462 (2020-05-11T09:52:40Z, Sebastian Wagner)
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/">http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/</a></p>
<pre>
2020-05-08T15:24:54.694 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/cephadm -v add-repo --release octopus
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:DEBUG:cephadm:Could not locate podman: podman not found
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo GPG key from https://download.ceph.com/keys/release.asc...
2020-05-08T15:24:54.964 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo file at /etc/apt/sources.list.d/ceph.list...
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ test_install_uninstall
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo apt update
2020-05-08T15:24:55.009 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.139 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
2020-05-08T15:24:55.270 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
2020-05-08T15:24:55.284 INFO:tasks.workunit.client.0.smithi204.stdout:Ign:3 https://download.ceph.com/debian-octopus focal InRelease
2020-05-08T15:24:55.320 INFO:tasks.workunit.client.0.smithi204.stdout:Err:4 https://download.ceph.com/debian-octopus focal Release
2020-05-08T15:24:55.321 INFO:tasks.workunit.client.0.smithi204.stdout: 404 Not Found [IP: 158.69.68.124 443]
2020-05-08T15:24:55.351 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
2020-05-08T15:24:55.434 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
2020-05-08T15:24:56.423 INFO:tasks.workunit.client.0.smithi204.stdout:Reading package lists...
2020-05-08T15:24:56.442 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:E: The repository 'https://download.ceph.com/debian-octopus focal Release' does not have a Release file.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.444 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo yum -y install cephadm
2020-05-08T15:24:56.452 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: yum: command not found
2020-05-08T15:24:56.453 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo dnf -y install cephadm
2020-05-08T15:24:56.459 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: dnf: command not found
2020-05-08T15:24:56.460 DEBUG:teuthology.orchestra.run:got remote process result: 1
</pre>
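The failure is easy to spot mechanically: apt prints an `Err:` line for the broken repo, followed by the HTTP status on the next indented line. A small parser for that pattern (illustrative only, based on the log format shown above):

```python
def find_failed_repos(apt_output):
    """Return (repo, reason) pairs for repositories apt flagged with Err:.

    apt prints 'Err:<n> <url> <suite> <target>' with the reason on the
    following indented line, as in the teuthology log above.
    """
    failures = []
    lines = apt_output.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("Err:"):
            # Everything after 'Err:<n> ' describes the repository.
            repo = line.split(None, 1)[1]
            reason = lines[i + 1].strip() if i + 1 < len(lines) else ""
            failures.append((repo, reason))
    return failures
```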
<p>Are we going to get an Ubuntu repo for focal, or should I disable this test for it?</p>

Infrastructure - Bug #45453 (New): 'https://download.ceph.com/debian-octopus focal Release' does ...
https://tracker.ceph.com/issues/45453 (2020-05-08T20:28:48Z, Sebastian Wagner)
Orchestrator - Bug #44302 (Resolved): cephadm: apply_mon: NotImplementedError
https://tracker.ceph.com/issues/44302 (2020-02-26T09:08:37Z, Sebastian Wagner)
<pre>
2020-02-25T17:09:39.919 INFO:teuthology.orchestra.run.smithi202.stderr:2020-02-25T17:09:39.916+0000 7f48cec61700 1 -- 172.21.15.202:0/1095025698 --> v2:172.21.15.202:6800/1 -- mgr_command(tid 0: {"prefix": "orch apply mon", "num": 2, "hosts": ["smithi202:[v2:172.21.15.202:3301,v1:172.21.15.202:6790]=c"], "target": ["mon-mgr", ""]}) v1 -- 0x7f48c8072980 con 0x7f48ac020d20
2020-02-25T17:09:39.921 INFO:teuthology.orchestra.run.smithi202.stderr:2020-02-25T17:09:39.919+0000 7f48be7fc700 1 -- 172.21.15.202:0/1095025698 <== mgr.14130 v2:172.21.15.202:6800/1 1 ==== mgr_command_reply(tid 0: -22 Traceback (most recent call last):
2020-02-25T17:09:39.921 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 1070, in _handle_command
2020-02-25T17:09:39.921 INFO:teuthology.orchestra.run.smithi202.stderr: return self.handle_command(inbuf, cmd)
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 191, in handle_command
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: return dispatch[cmd['prefix']].call(self, cmd, inbuf)
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 309, in call
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: return self.func(mgr, **kwargs)
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 153, in <lambda>
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 144, in wrapper
2020-02-25T17:09:39.922 INFO:teuthology.orchestra.run.smithi202.stderr: return func(*args, **kwargs)
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/module.py", line 688, in _apply_mon
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: completion = self.apply_mon(spec)
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1718, in inner
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: completion = self._oremote(method_name, args, kwargs)
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1788, in _oremote
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: return mgr.remote(o, meth, *args, **kwargs)
2020-02-25T17:09:39.923 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 1432, in remote
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr: args, kwargs)
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr:RuntimeError: Remote method threw exception: Traceback (most recent call last):
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1002, in apply_mon
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr: raise NotImplementedError()
2020-02-25T17:09:39.924 INFO:teuthology.orchestra.run.smithi202.stderr:NotImplementedError
</pre>
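What the traceback boils down to: the orchestrator interface declares apply_mon, and the base-class default raises NotImplementedError when the active backend never overrides it. Schematically (a simplified sketch, not the actual mgr code):

```python
class Orchestrator:
    """Simplified stand-in for the mgr orchestrator interface."""

    def apply_mon(self, spec):
        # Base-class default: concrete backends must override this.
        raise NotImplementedError()

class IncompleteBackend(Orchestrator):
    # apply_mon not implemented, so calls fall through to the base class,
    # which is what the remote method in the traceback ran into.
    pass

class CompleteBackend(Orchestrator):
    def apply_mon(self, spec):
        return f"scheduled {spec['num']} mons"
```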
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-02-25_16:47:03-rados-wip-swagner-testing-2020-02-25-1426-distro-basic-smithi/4801966/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-02-25_16:47:03-rados-wip-swagner-testing-2020-02-25-1426-distro-basic-smithi/4801966/teuthology.log</a></p>
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-02-25_16:47:03-rados-wip-swagner-testing-2020-02-25-1426-distro-basic-smithi/">http://pulpito.ceph.com/swagner-2020-02-25_16:47:03-rados-wip-swagner-testing-2020-02-25-1426-distro-basic-smithi/</a></p>
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-02-25_16:51:40-rados-wip-swagner2-testing-2020-02-25-1434-distro-basic-smithi/">http://pulpito.ceph.com/swagner-2020-02-25_16:51:40-rados-wip-swagner2-testing-2020-02-25-1434-distro-basic-smithi/</a></p>

Orchestrator - Bug #44121 (Resolved): calling cephadm shell again loses bash history
https://tracker.ceph.com/issues/44121 (2020-02-13T14:23:04Z, Sebastian Wagner)
<p>As we always create a new container, we also lose the bash history, which is unfortunate,</p>
<p>especially since one sometimes has to carefully craft ceph orch calls.</p>
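One common workaround for throwaway containers (an assumption about a possible fix, not necessarily what cephadm ended up doing) is to bind-mount a per-cluster history file from the host into every new shell container:

```python
def shell_args_with_history(image, history_file="/var/lib/ceph/shell.bash_history"):
    """Build podman/docker run arguments that persist bash history across
    throwaway shell containers by bind-mounting a host file.

    history_file is a hypothetical per-cluster path on the host; the
    bind mount maps it onto root's ~/.bash_history inside the container.
    """
    return [
        "run", "--rm", "-it",
        "-v", f"{history_file}:/root/.bash_history",
        image, "bash",
    ]
```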
<p>Is there a way we can easily fix this without having a persistent "toolbox" container?</p>

Orchestrator - Bug #44039 (Rejected): bin/cephadm: Remove --allow-fqdn-hostname
https://tracker.ceph.com/issues/44039 (2020-02-07T14:55:10Z, Sebastian Wagner)
<p>IMO `hostname` should never return an FQDN. What about removing this flag until someone demands it?</p>
<p>If we remove this ability, then we should also adjust the 'orchestrator host add' check, which currently allows an FQDN (as long as it matches the configured hostname).</p>
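The check under discussion can be sketched as: accept a bare short name unconditionally, and accept a dotted name only if it exactly matches the host's configured FQDN (a simplified sketch, not cephadm's actual validation code):

```python
def validate_hostname(name, configured_fqdn=None):
    """Model of the 'orch host add' name check: reject FQDNs unless
    they match the host's own configured FQDN (the behaviour that
    --allow-fqdn-hostname relaxes).
    """
    if "." not in name:
        return True  # bare short hostname: always accepted
    # A dotted name is only accepted on an exact FQDN match.
    return name == configured_fqdn
```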
<p><a class="external" href="https://github.com/ceph/ceph/pull/33042">https://github.com/ceph/ceph/pull/33042</a></p>

ceph-volume - Bug #37390 (Resolved): c-v inventory returns invalid JSON
https://tracker.ceph.com/issues/37390 (2018-11-26T13:16:39Z, Sebastian Wagner)
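The quoting problem in two lines: printing a Python dict emits its repr, with single quotes and Python literals, which json.loads rejects; serializing with json.dumps produces valid JSON:

```python
import json

inventory = {"path": "/dev/sda", "available": True}

# str()/print() yields the Python repr: single quotes, True not true.
bad = str(inventory)

# json.dumps emits double-quoted, standards-compliant JSON.
good = json.dumps(inventory)
```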
<p>Python's print() renders a dict with single quotes by default, which is invalid JSON.</p>

RADOS - Bug #23360 (Duplicate): call to 'ceph osd erasure-code-profile set' asserts the monitors
https://tracker.ceph.com/issues/23360 (2018-03-14T14:37:14Z, Sebastian Wagner)
<p>I've attached `thread apply all bt` mixed with `thread apply all py-bt`</p>
<p>Threads 38 35 34 32 and 31 are waiting for futex 0x55a285204640</p>
<p>Thread 37 waits in<br />File "/src/pybind/mgr/mgr_module.py", line 71, in wait<br /> self.ev.wait()</p>
<p>AFAICT, all other threads are not part of this deadlock.</p>

rbd - Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
https://tracker.ceph.com/issues/22253 (2017-11-27T14:37:58Z, Sebastian Wagner)
<p>Environment: quite small vstart cluster.</p>
<p>This is the stack trace:<br /><pre>
#3 0x00007fffed44711c in __GI___fortify_fail (msg=<optimized out>, msg@entry=0x7fffed4bd441 "stack smashing detected") at fortify_fail.c:37
#4 0x00007fffed4470c0 in __stack_chk_fail () at stack_chk_fail.c:28
#5 0x00007ffff78f0beb in librbd::ImageCtx::perf_start (this=this@entry=0x555555b7bf70, name="librbd-8c39e2ae8944a-rbd-huge2") at /home/sebastian/Repos/ceph/src/librbd/ImageCtx.cc:397
#6 0x00007ffff78f3cb4 in librbd::ImageCtx::init (this=0x555555b7bf70) at /home/sebastian/Repos/ceph/src/librbd/ImageCtx.cc:275
#7 0x00007ffff799dacd in librbd::image::OpenRequest<librbd::ImageCtx>::send_register_watch (this=this@entry=0x555555b7fe60) at /home/sebastian/Repos/ceph/src/librbd/image/OpenRequest.cc:477
#8 0x00007ffff79a3102 in librbd::image::OpenRequest<librbd::ImageCtx>::handle_v2_apply_metadata (this=this@entry=0x555555b7fe60, result=result@entry=0x7fffb77fa374) at /home/sebastian/Repos/ceph/src/librbd/image/OpenRequest.cc:471
#9 0x00007ffff79a351f in librbd::util::detail::rados_state_callback<librbd::image::OpenRequest<librbd::ImageCtx>, &librbd::image::OpenRequest<librbd::ImageCtx>::handle_v2_apply_metadata, true> (c=<optimized out>, arg=0x555555b7fe60) at /home/sebastian/Repos/ceph/src/librbd/Utils.h:39
#10 0x00007ffff75d678d in librados::C_AioComplete::finish (this=0x7fffd0001b60, r=<optimized out>) at /home/sebastian/Repos/ceph/src/librados/AioCompletionImpl.h:169
#11 0x0000555555613949 in Context::complete (this=0x7fffd0001b60, r=<optimized out>) at /home/sebastian/Repos/ceph/src/include/Context.h:70
#12 0x00007fffeeab6010 in Finisher::finisher_thread_entry (this=0x555555acb3e8) at /home/sebastian/Repos/ceph/src/common/Finisher.cc:72
#13 0x00007fffee3a86ba in start_thread (arg=0x7fffb77fe700) at pthread_create.c:333
#14 0x00007fffed4353dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
</pre></p>