Ceph : Issues
https://tracker.ceph.com/
2021-11-05T16:24:19Z
Ceph
Redmine
Orchestrator - Bug #53174 (Resolved): `ceph orch daemon rm mgr......` should warn if a user wants...
https://tracker.ceph.com/issues/53174
2021-11-05T16:24:19Z
Sebastian Wagner
<p>I just removed my last mgr....</p>
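<p>The description is terse, but the request is clear: removing a daemon should fail (or at least warn) when it is the last running mgr. A minimal sketch of such a guard, with hypothetical names rather than the actual mgr/cephadm code:</p>
<pre>
# Minimal sketch (hypothetical names, not the actual mgr/cephadm code) of the
# guard this ticket asks for: refuse to remove the last running mgr daemon
# unless the user explicitly forces it.
from typing import List

class OrchestratorError(Exception):
    pass

def check_mgr_removal(daemon_names: List[str], running_mgrs: List[str],
                      force: bool = False) -> None:
    """Raise if the request would remove every known mgr daemon."""
    mgrs_to_remove = {d for d in daemon_names if d.startswith('mgr.')}
    if not force and mgrs_to_remove and mgrs_to_remove >= set(running_mgrs):
        raise OrchestratorError(
            'Refusing to remove the last mgr daemon(s) %s; '
            'pass --force to proceed anyway' % sorted(mgrs_to_remove))

# Removing mgr.a while it is the only running mgr is rejected:
try:
    check_mgr_removal(['mgr.a'], running_mgrs=['mgr.a'])
except OrchestratorError as e:
    print(e)
</pre>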
Orchestrator - Bug #51634 (Closed): Validate allowed characters for rgw realms
https://tracker.ceph.com/issues/51634
2021-07-12T12:53:18Z
Sebastian Wagner
<pre>
# ceph orch ls --service-type rgw --format yaml
</pre>
<pre><code class="yaml syntaxhl"><span class="CodeRay"><span class="key">service_type</span>: <span class="string"><span class="content">rgw</span></span>
<span class="key">service_id</span>: <span class="string"><span class="content">rgw.all</span></span>
<span class="key">service_name</span>: <span class="string"><span class="content">rgw.rgw.all</span></span>
<span class="key">placement</span>:
<span class="key">label</span>: <span class="string"><span class="content">rgw</span></span>
<span class="key">spec</span>:
<span class="key">rgw_frontend_port</span>: <span class="string"><span class="content">443</span></span>
<span class="key">rgw_realm</span>: <span class="string"><span class="content">/etc/ssl/certs/server.pem</span></span>
<span class="key">ssl</span>: <span class="string"><span class="content">true</span></span>
<span class="key">status</span>:
<span class="key">created</span>: <span class="string"><span class="content">'2021-07-12T12:45:04.261390Z'</span></span>
<span class="key">running</span>: <span class="string"><span class="content">0</span></span>
<span class="key">size</span>: <span class="string"><span class="content">1</span></span>
<span class="key">events</span>:
- <span class="string"><span class="content">2021-07-12T06:26:04.982069Z service:rgw.rgw.all [INFO] "service was created" </span></span>
</span></code></pre>
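<p>The spec above shows the problem: <code>rgw_realm</code> was set to a certificate path and the spec was accepted anyway. A minimal sketch of the validation the title asks for; the allowed character set here is an assumption, not the final rule:</p>
<pre>
# Sketch of a realm-name check for RGW spec validation. The allowed character
# set below is an assumption; the point is to reject values such as file paths.
import re

_REALM_NAME_RE = re.compile(r'^[a-zA-Z0-9._-]+$')

def validate_rgw_realm(realm: str) -> None:
    if not _REALM_NAME_RE.match(realm):
        raise ValueError(
            "Invalid rgw realm name %r: only letters, digits, '.', '_' and '-' are allowed"
            % realm)

for name in ('myrealm', '/etc/ssl/certs/server.pem'):
    try:
        validate_rgw_realm(name)
        print(name, '-> ok')
    except ValueError as e:
        print(e)
</pre>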
Orchestrator - Bug #51272 (Resolved): upgrade job: mgr.x getting removed by cephadm task: UPGRADE...
https://tracker.ceph.com/issues/51272
2021-06-18T08:47:37Z
Sebastian Wagner
<p>I think the fix for this bug has not been merged yet:</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41478/">https://github.com/ceph/ceph/pull/41478/</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/41568">https://github.com/ceph/ceph/pull/41568</a></li>
</ul>
<pre>
rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-start-upgrade 4-wait fixed-2}
</pre>
<pre>
roles:
- - mon.a
- mon.c
- mgr.y
- osd.0
- osd.1
- osd.2
- osd.3
- client.0
- node-exporter.a
- alertmanager.a
- - mon.b
- mgr.x
- osd.4
- osd.5
- osd.6
- osd.7
- client.1
- prometheus.a
- grafana.a
- node-exporter.b
</pre>
<p><strong>then</strong></p>
<pre>
: audit 2021-06-15T20:14:24.260141+0000 mgr.y (mgr.14138) 64 : audit [DBG] from='client.34106 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;smithi143=x", "target">
</pre>
<p>Notice that the placement only contains <strong>2;smithi143=x</strong>.</p>
<pre>
2021-06-15T20:14:29.203 INFO:journalctl@ceph.mgr.y.smithi135.stdout:Jun 15 20:14:29 smithi135 systemd[1]: Stopping Ceph mgr.y for e2a4517e-ce15-11eb-8c13-001a4aab830c...
</pre>
<p><strong>resulting in</strong></p>
<pre>
cluster 2021-06-15T20:21:09.388112+0000 mgr.x (mgr.34112) 238 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: debug 2021-06-15T20:21:11.241+0000 7ffa34117700 -1 log_channel(cephadm) log [ERR] : Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.239485+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: audit 2021-06-15T20:21:11.241293+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.34112 172.21.15.143:0/2430240313' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
: cephadm 2021-06-15T20:21:11.241839+0000 mgr.x (mgr.34112) 239 : cephadm [INF] Upgrade: Target is quay.ceph.io/ceph-ci/ceph:da5e8184007182fa3cd5c8385fee4e08c5620fe2 with id 219a75e51380d5cdf3af7b1fa194d1bedd11>
: cephadm 2021-06-15T20:21:11.244338+0000 mgr.x (mgr.34112) 240 : cephadm [INF] Upgrade: Checking mgr daemons...
: cephadm 2021-06-15T20:21:11.244711+0000 mgr.x (mgr.34112) 241 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x)
: cephadm 2021-06-15T20:21:11.247775+0000 mgr.x (mgr.34112) 242 : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
: audit 2021-06-15T20:21:11.253146+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.34112 ' entity='mgr.x'
: cluster 2021-06-15T20:21:11.255641+0000 mgr.x (mgr.34112) 243 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 0 B data, 3.7 MiB used, 707 GiB / 715 GiB avail
: audit 2021-06-15T20:21:11.259712+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.34112 ' entity='mgr.x'
</pre>
<pre>
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:alertmanager.a smithi135 running (117s) 107s ago 2m 0.20.0 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f d7ab1fc469b4
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:grafana.a smithi143 running (2m) 107s ago 2m 6.6.2 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a bdf08596362b
2021-06-15T20:21:16.892 INFO:teuthology.orchestra.run.smithi135.stdout:mgr.x smithi143 running (6m) 107s ago 6m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 bf659290d1ab
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.a smithi135 running (8m) 107s ago 9m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 a0083afbce6f
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.b smithi143 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 177430b8b423
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:mon.c smithi135 running (7m) 107s ago 7m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 881e672542be
2021-06-15T20:21:16.893 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.a smithi135 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf acd96e0cc12e
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:node-exporter.b smithi143 running (2m) 107s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf a3c897228c6d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.0 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 9805ecc9628d
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.1 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 29d8fc3fbb7f
2021-06-15T20:21:16.894 INFO:teuthology.orchestra.run.smithi135.stdout:osd.2 smithi135 running (5m) 107s ago 5m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 193e0a2a0487
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.3 smithi135 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e2dea4bf5490
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.4 smithi143 running (4m) 107s ago 4m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 e0e19361a64a
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.5 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 71c57f8c0e3d
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.6 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 4da5baa064d1
2021-06-15T20:21:16.895 INFO:teuthology.orchestra.run.smithi135.stdout:osd.7 smithi143 running (3m) 107s ago 3m 15.2.9 docker.io/ceph/ceph:v15.2.9 dfc483079636 098193d20e10
2021-06-15T20:21:16.896 INFO:teuthology.orchestra.run.smithi135.stdout:prometheus.a smithi143 running (110s) 107s ago 2m 2.18.1 docker.io/prom/prometheus:v2.18.1 de242295e225 fb7dd6cd2280
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log">http://qa-proxy.ceph.com/teuthology/yuriw-2021-06-15_18:44:29-rados-wip-yuri8-testing-2021-06-15-0839-octopus-distro-basic-smithi/6174184/teuthology.log</a></p>
Orchestrator - Bug #50685 (Resolved): wrong exception type: Exception("No filters applied")
https://tracker.ceph.com/issues/50685
2021-05-07T10:09:01Z
Sebastian Wagner
<pre>
Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/serve.py", line 466, in _apply_all_services
if self._apply_service(spec):
File "/usr/share/ceph/mgr/cephadm/serve.py", line 523, in _apply_service
self.mgr.osd_service.create_from_spec(cast(DriveGroupSpec, spec))
File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 68, in create_from_spec
ret = create_from_spec_one(self.prepare_drivegroup(drive_group))
File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 171, in prepare_drivegroup
existing_daemons=len(dd_for_spec_and_host))
File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/selector.py", line 26, in __init__
self._data = self.assign_devices(self.spec.data_devices)
File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/selector.py", line 130, in assign_devices
if not all(m.compare(disk) for m in FilterGenerator(device_filter)):
File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/selector.py", line 130, in <genexpr>
if not all(m.compare(disk) for m in FilterGenerator(device_filter)):
File "/lib/python3.6/site-packages/ceph/deployment/drive_selection/matchers.py", line 407, in compare
raise Exception("No filters applied")
May 06 07:20:31 host conmon[2216]: debug 2021-05-06T07:20:31.114+0000 7f69546c4700 -1 log_channel(cephadm) log [ERR] : Failed to apply osd.dashboard-spec DriveGroupSpec(name=dashboard-1620208717516->placement=PlacementSpec(host_pattern='host'), service_id='dashboard-1620208717516', service_type='osd', data_devices=DeviceSelection(size='931.5GB', all=False), osd_id_claims={}, unmanaged=False, filter_logic='AND', preview_only=False): No filters applied
</pre>
<p>This should not end up in the logs as a raw traceback like this.</p>
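<p>A minimal sketch of the direction this suggests: raise a dedicated, spec-related exception from the matcher instead of a bare <code>Exception</code>, so the serve loop can report it as a spec validation problem (the class name below is illustrative, e.g. something along the lines of DriveGroupValidationError):</p>
<pre>
# Illustrative sketch only: a dedicated exception type for "no filters applied",
# so callers can distinguish a bad/empty device selection in the OSD spec from
# an unexpected internal error.
class FilterNotApplicableError(ValueError):
    """Raised when a DriveGroup device filter cannot be evaluated."""

def compare(disk: dict, filters: list) -> bool:
    if not filters:
        raise FilterNotApplicableError(
            'No filters applied for disk %r; check the data_devices section '
            'of the OSD spec' % disk.get('path'))
    return all(f(disk) for f in filters)

try:
    compare({'path': '/dev/sdb'}, [])
except FilterNotApplicableError as e:
    print('spec error:', e)
</pre>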
Orchestrator - Bug #49739 (Can't reproduce): `ceph` not found in $PATH: No such file or directory...
https://tracker.ceph.com/issues/49739
2021-03-11T14:29:13Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/teuthology-2021-03-10_03:31:01-rados-pacific-distro-basic-smithi/5952020/">https://pulpito.ceph.com/teuthology-2021-03-10_03:31:01-rados-pacific-distro-basic-smithi/5952020/</a></p>
<pre>
2021-03-10T20:33:29.912 INFO:teuthology.orchestra.run.smithi199.stdout:Added host 'smithi199'
2021-03-10T20:33:30.800 DEBUG:teuthology.orchestra.run.smithi199:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:87cd59fd3ebb82c3ebadd5335e99940fbb5394cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 88dd262a-81df-11eb-9080-001a4aab830c -- ceph orch host ls --format=json
2021-03-10T20:33:43.508 INFO:teuthology.orchestra.run.smithi199.stderr:Error: executable file `ceph` not found in $PATH: No such file or directory: OCI not found
2021-03-10T20:33:43.540 DEBUG:teuthology.orchestra.run:got remote process result: 127
</pre>
Orchestrator - Bug #49223 (Resolved): unrecognized arguments: --container-init
https://tracker.ceph.com/issues/49223
2021-02-09T09:51:10Z
Sebastian Wagner
<pre>
Feb 09 10:29:57 ubuntu conmon[22007]: debug 2021-02-09T09:29:57.127+0000 7f130910d700 0 [cephadm DEBUG cephadm.serve] _run_cephadm : command = gather-facts
Feb 09 10:29:57 ubuntu conmon[22007]: debug 2021-02-09T09:29:57.127+0000 7f130910d700 0 [cephadm DEBUG cephadm.serve] _run_cephadm : args = []
Feb 09 10:29:57 ubuntu conmon[22007]: debug 2021-02-09T09:29:57.127+0000 7f130910d700 0 [cephadm DEBUG root] Have connection to ubuntu
Feb 09 10:29:57 ubuntu conmon[22007]: debug 2021-02-09T09:29:57.127+0000 7f130910d700 0 [cephadm DEBUG cephadm.serve] args: gather-facts --container-init
Feb 09 10:29:57 ubuntu conmon[22007]: debug 2021-02-09T09:29:57.371+0000 7f131165d700 0 [restful DEBUG root] Unhandled notification type 'service_map'
Feb 09 10:29:57 ubuntu conmon[22007]: debug 2021-02-09T09:29:57.375+0000 7f12fe8f8700 0 [rbd_support DEBUG root] PerfHandler: tick
Feb 09 10:29:57 ubuntu conmon[22007]: debug 2021-02-09T09:29:57.375+0000 7f12fe0f7700 0 [rbd_support DEBUG root] TaskHandler: tick
Feb 09 10:29:57 ubuntu conmon[22007]: debug 2021-02-09T09:29:57.507+0000 7f130910d700 0 [cephadm DEBUG cephadm.serve] code: 2
Feb 09 10:29:57 ubuntu conmon[22007]: debug 2021-02-09T09:29:57.507+0000 7f130910d700 0 [cephadm DEBUG cephadm.serve] err: usage: [-h] [--image IMAGE] [--docker] [--data-dir DATA_DIR]
Feb 09 10:29:57 ubuntu conmon[22007]: [--log-dir LOG_DIR] [--logrotate-dir LOGROTATE_DIR]
Feb 09 10:29:57 ubuntu conmon[22007]: [--unit-dir UNIT_DIR] [--verbose] [--timeout TIMEOUT] [--retry RETRY]
Feb 09 10:29:57 ubuntu conmon[22007]: [--env ENV]
Feb 09 10:29:57 ubuntu conmon[22007]: {version,pull,inspect-image,ls,list-networks,adopt,rm-daemon,rm-cluster,run,shell,enter,ceph-volume,unit,logs,bootstrap,deploy,check-host,prepare-host,add-repo,rm-repo,install,registry-login,gather-facts,exporter,host-maintenance,verify-prereqs}
Feb 09 10:29:57 ubuntu conmon[22007]: ...
Feb 09 10:29:57 ubuntu conmon[22007]: : error: unrecognized arguments: --container-init
</pre>
<p>using</p>
<pre>
{
"style": "cephadm:v1",
"name": "mgr.ubuntu.micfpd",
"fsid": "943f28ea-6ab7-11eb-923e-0242b47faa5c",
"systemd_unit": "ceph-943f28ea-6ab7-11eb-923e-0242b47faa5c@mgr.ubuntu.micfpd",
"enabled": true,
"state": "running",
"container_id": "38547b1f4168a81bfa2dc8fd831286045e1acd12197e7b2638b61c27d96a8ba9",
"container_image_name": "docker.io/ceph/daemon-base:latest-master-devel",
"container_image_id": "7146f2bd66bd219e642f5ac73b1371f3c169477afcfa92fe097a7e923fd397cc",
"container_image_digests": [
"docker.io/ceph/daemon-base@sha256:2f08b03807623cf4702f489659ddfef224fd3bb6aeb83b317f69128b0b782749"
],
"version": "17.0.0-389-gcced65aa",
"started": "2021-02-09T09:17:16.708559Z",
"created": "2021-02-09T09:17:17.013180Z",
"deployed": "2021-02-09T09:17:16.045155Z",
"configured": "2021-02-09T09:17:17.013180Z"
},
</pre>
<p>Looks like this is the old cephadm binary and the old mgr/cephadm module, but with --container-init enabled for all commands even though they don't support it:<br /><pre>
$ sudo ./cephadm enter --name mgr.ubuntu.micfpd
[sudo] Passwort für sebastian:
Inferring fsid 943f28ea-6ab7-11eb-923e-0242b47faa5c
[ceph: root@ubuntu /]# cephadm gather-facts --container-init
usage: cephadm [-h] [--image IMAGE] [--docker] [--data-dir DATA_DIR]
[--log-dir LOG_DIR] [--logrotate-dir LOGROTATE_DIR]
[--unit-dir UNIT_DIR] [--verbose] [--timeout TIMEOUT]
[--retry RETRY] [--env ENV]
{version,pull,inspect-image,ls,list-networks,adopt,rm-daemon,rm-cluster,run,shell,enter,ceph-volume,unit,logs,bootstrap,deploy,check-host,prepare-host,add-repo,rm-repo,install,registry-login,gather-facts,exporter,host-maintenance,verify-prereqs}
...
cephadm: error: unrecognized arguments: --container-init
[ceph: root@ubuntu /]# cd /usr/share/ceph/mgr/cephadm
[ceph: root@ubuntu cephadm]# grep -C 3 container-init serve.py
final_args += ['--fsid', self.mgr._cluster_fsid]
if self.mgr.container_init:
final_args += ['--container-init']
final_args += args
</pre></p>
<p>Note that deploy does support --container-init:</p>
<pre>
usage: cephadm deploy [-h] --name NAME --fsid FSID [--config CONFIG]
[--config-json CONFIG_JSON] [--keyring KEYRING]
[--key KEY] [--osd-fsid OSD_FSID] [--skip-firewalld]
[--tcp-ports TCP_PORTS] [--reconfig] [--allow-ptrace]
[--container-init]
cephadm deploy: error: the following arguments are required: --name, --fsid
</pre>
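<p>A minimal sketch (hypothetical and heavily simplified from mgr/cephadm/serve.py) of forwarding --container-init only to the cephadm subcommands that actually accept it, instead of appending it unconditionally:</p>
<pre>
# Simplified sketch: forward --container-init only to subcommands known to
# accept it. The set below is an assumption for illustration; the real fix may
# instead track the remote cephadm binary's capabilities.
COMMANDS_SUPPORTING_CONTAINER_INIT = {'deploy'}

def build_cephadm_args(command: str, args: list, fsid: str,
                       container_init: bool) -> list:
    final_args = ['--fsid', fsid, command]
    if container_init and command in COMMANDS_SUPPORTING_CONTAINER_INIT:
        final_args.append('--container-init')
    final_args.extend(args)
    return final_args

# gather-facts no longer receives the unsupported flag:
print(build_cephadm_args('gather-facts', [], '943f28ea-6ab7-11eb-923e-0242b47faa5c', True))
# deploy still does:
print(build_cephadm_args('deploy', ['--name', 'mgr.ubuntu.micfpd'],
                         '943f28ea-6ab7-11eb-923e-0242b47faa5c', True))
</pre>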
<p><strong>workaround</strong>:</p>
<pre>
sudo ./cephadm shell --fsid 943f28ea-6ab7-11eb-923e-0242b47faa5c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring ceph config set mgr mgr/cephadm/container_init false
</pre>
Orchestrator - Bug #47340 (Duplicate): _list_devices: 'NoneType' object has no attribute 'get'
https://tracker.ceph.com/issues/47340
2020-09-07T16:01:49Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-09-07_12:17:11-rados:cephadm-wip-swagner2-testing-2020-09-07-1101-distro-basic-smithi/5415754/">https://pulpito.ceph.com/swagner-2020-09-07_12:17:11-rados:cephadm-wip-swagner2-testing-2020-09-07-1101-distro-basic-smithi/5415754/</a></p>
<pre>
2020-09-07T12:59:11.859 INFO:teuthology.orchestra.run.smithi150.stderr:Error EINVAL: Traceback (most recent call last):
2020-09-07T12:59:11.860 INFO:teuthology.orchestra.run.smithi150.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 1177, in _handle_command
2020-09-07T12:59:11.860 INFO:teuthology.orchestra.run.smithi150.stderr: return self.handle_command(inbuf, cmd)
2020-09-07T12:59:11.860 INFO:teuthology.orchestra.run.smithi150.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 141, in handle_command
2020-09-07T12:59:11.861 INFO:teuthology.orchestra.run.smithi150.stderr: return dispatch[cmd['prefix']].call(self, cmd, inbuf)
2020-09-07T12:59:11.861 INFO:teuthology.orchestra.run.smithi150.stderr: File "/usr/share/ceph/mgr/mgr_module.py", line 318, in call
2020-09-07T12:59:11.861 INFO:teuthology.orchestra.run.smithi150.stderr: return self.func(mgr, **kwargs)
2020-09-07T12:59:11.861 INFO:teuthology.orchestra.run.smithi150.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 103, in <lambda>
2020-09-07T12:59:11.861 INFO:teuthology.orchestra.run.smithi150.stderr: wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
2020-09-07T12:59:11.862 INFO:teuthology.orchestra.run.smithi150.stderr: File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 92, in wrapper
2020-09-07T12:59:11.862 INFO:teuthology.orchestra.run.smithi150.stderr: return func(*args, **kwargs)
2020-09-07T12:59:11.862 INFO:teuthology.orchestra.run.smithi150.stderr: File "/usr/share/ceph/mgr/orchestrator/module.py", line 421, in _list_devices
2020-09-07T12:59:11.862 INFO:teuthology.orchestra.run.smithi150.stderr: if d.lsm_data.get('ledSupport', None):
2020-09-07T12:59:11.862 INFO:teuthology.orchestra.run.smithi150.stderr:AttributeError: 'NoneType' object has no attribute 'get'
</pre>
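<p>The failing line is <code>d.lsm_data.get('ledSupport', None)</code>; <code>lsm_data</code> can legitimately be <code>None</code> when no libstoragemgmt data was gathered for the device. A minimal sketch of the guard (the <code>Device</code> class below is a stand-in for the real inventory type):</p>
<pre>
# Stand-in Device type for illustration; the real one lives in
# ceph.deployment.inventory. The point is to check lsm_data before .get().
class Device:
    def __init__(self, lsm_data=None):
        self.lsm_data = lsm_data

def led_supported(d: Device) -> bool:
    return bool(d.lsm_data and d.lsm_data.get('ledSupport'))

print(led_supported(Device()))                                      # False, no AttributeError
print(led_supported(Device({'ledSupport': 'LED status support'})))  # True
</pre>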
Orchestrator - Bug #47336 (Can't reproduce): `orch device ls`: Unexpected argument '--wide'
https://tracker.ceph.com/issues/47336
2020-09-07T11:04:34Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/teuthology-2020-09-03_07:01:02-rados-master-distro-basic-smithi/">https://pulpito.ceph.com/teuthology-2020-09-03_07:01:02-rados-master-distro-basic-smithi/</a></p>
<pre>
2020-09-06T13:05:01.941 INFO:teuthology.orchestra.run.smithi187:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early orch device ls --wide
2020-09-06T13:05:02.197 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 podman[55616]: 2020-09-06 13:05:02.001339685 +0000 UTC m=+0.747795535 container died 10e3f81f605e50e0f4a16983e07a3902e4e81ecd873f8fe9498ad5c65102be68 (image=quay.ceph.i
o/ceph-ci/ceph:4e4c926faf627bcfcf316bfbde0da6544658fad4, name=ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-deactivate)
2020-09-06T13:05:02.262 INFO:teuthology.orchestra.run.smithi187.stderr:Invalid command: Unexpected argument '--wide'
2020-09-06T13:05:02.263 INFO:teuthology.orchestra.run.smithi187.stderr:orch device ls [<hostname>...] [plain|json|json-pretty|yaml] [--refresh] : List devices on a host
2020-09-06T13:05:02.263 INFO:teuthology.orchestra.run.smithi187.stderr:Error EINVAL: invalid command
2020-09-06T13:05:02.265 DEBUG:teuthology.orchestra.run:got remote process result: 22
2020-09-06T13:05:02.266 INFO:teuthology.orchestra.run.smithi187:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early log 'Ended test tasks.cephadm_cases.test_cli.TestCephadmCLI.test_device_ls_wide
'
2020-09-06T13:05:02.932 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 podman[55616]: 2020-09-06 13:05:02.657678294 +0000 UTC m=+1.404134128 container remove 10e3f81f605e50e0f4a16983e07a3902e4e81ecd873f8fe9498ad5c65102be68 (image=quay.ceph
.io/ceph-ci/ceph:4e4c926faf627bcfcf316bfbde0da6544658fad4, name=ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-deactivate)
2020-09-06T13:05:02.932 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 systemd[1]: Stopped Ceph osd.0 for 1e94ddde-f040-11ea-a080-001a4aab830c.
2020-09-06T13:05:02.933 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 systemd[1]: Starting Ceph osd.0 for 1e94ddde-f040-11ea-a080-001a4aab830c...
2020-09-06T13:05:02.933 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 podman[55771]: Error: no container with name or ID ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0 found: no such container
2020-09-06T13:05:02.933 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 bash[55786]: Error: Failed to evict container: "": Failed to find container "ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-activate" in state: no container with name
or ID ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-activate found: no such container
2020-09-06T13:05:02.933 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 bash[55786]: ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-activate
2020-09-06T13:05:02.934 INFO:journalctl@ceph.osd.0.smithi187.stdout:Sep 06 13:05:02 smithi187 bash[55786]: Error: no container with ID or name "ceph-1e94ddde-f040-11ea-a080-001a4aab830c-osd.0-activate" found: no such container
2020-09-06T13:05:02.934 INFO:tasks.cephfs_test_runner:test_device_ls_wide (tasks.cephadm_cases.test_cli.TestCephadmCLI) ... ERROR
2020-09-06T13:05:02.935 INFO:tasks.cephfs_test_runner:
2020-09-06T13:05:02.936 INFO:tasks.cephfs_test_runner:======================================================================
2020-09-06T13:05:02.936 INFO:tasks.cephfs_test_runner:ERROR: test_device_ls_wide (tasks.cephadm_cases.test_cli.TestCephadmCLI)
2020-09-06T13:05:02.936 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-09-06T13:05:02.937 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-09-06T13:05:02.937 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/cephadm_cases/test_cli.py", line 55, in test_device_ls_wide
2020-09-06T13:05:02.937 INFO:tasks.cephfs_test_runner: self._orch_cmd('device', 'ls', '--wide')
2020-09-06T13:05:02.938 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/cephadm_cases/test_cli.py", line 13, in _orch_cmd
2020-09-06T13:05:02.938 INFO:tasks.cephfs_test_runner: return self._cmd("orch", *args)
2020-09-06T13:05:02.938 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/cephadm_cases/test_cli.py", line 10, in _cmd
2020-09-06T13:05:02.939 INFO:tasks.cephfs_test_runner: return self.mgr_cluster.mon_manager.raw_cluster_cmd(*args)
2020-09-06T13:05:02.939 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/ceph_manager.py", line 1357, in raw_cluster_cmd
2020-09-06T13:05:02.939 INFO:tasks.cephfs_test_runner: 'stdout': StringIO()}).stdout.getvalue()
2020-09-06T13:05:02.940 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/ceph_manager.py", line 1350, in run_cluster_cmd
2020-09-06T13:05:02.940 INFO:tasks.cephfs_test_runner: return self.controller.run(**kwargs)
2020-09-06T13:05:02.940 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 213, in run
2020-09-06T13:05:02.940 INFO:tasks.cephfs_test_runner: r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-09-06T13:05:02.941 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
2020-09-06T13:05:02.941 INFO:tasks.cephfs_test_runner: r.wait()
2020-09-06T13:05:02.941 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
2020-09-06T13:05:02.942 INFO:tasks.cephfs_test_runner: self._raise_for_status()
2020-09-06T13:05:02.942 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
2020-09-06T13:05:02.942 INFO:tasks.cephfs_test_runner: node=self.hostname, label=self.label
2020-09-06T13:05:02.943 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi187 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early orch device ls --wide'
</pre>
Orchestrator - Bug #47109 (Resolved): while doing an upgrade: Module 'osd_support' has failed: No...
https://tracker.ceph.com/issues/47109
2020-08-24T12:09:04Z
Sebastian Wagner
<pre>
root@smithi117:~# /home/ubuntu/cephtest/cephadm shell -- ceph health detail
INFO:cephadm:Inferring fsid 4b46e0ee-e5f9-11ea-a073-001a4aab830c
INFO:cephadm:Using recent ceph image quay.ceph.io/ceph-ci/ceph:c50517bdb9ff569dff7acb095450622c2f0ebc1f
HEALTH_ERR Module 'osd_support' has failed: Not found or unloadable
[ERR] MGR_MODULE_ERROR: Module 'osd_support' has failed: Not found or unloadable
Module 'osd_support' has failed: Not found or unloadable
root@smithi117:~# /home/ubuntu/cephtest/cephadm shell -- ceph mgr module disable osd_support
INFO:cephadm:Inferring fsid 4b46e0ee-e5f9-11ea-a073-001a4aab830c
INFO:cephadm:Using recent ceph image quay.ceph.io/ceph-ci/ceph:c50517bdb9ff569dff7acb095450622c2f0ebc1f
Error EINVAL: module 'osd_support' cannot be disabled (always-on)
</pre>
<pre>
root@smithi117:~# /home/ubuntu/cephtest/cephadm shell -- ceph orch ps
INFO:cephadm:Inferring fsid 4b46e0ee-e5f9-11ea-a073-001a4aab830c
INFO:cephadm:Using recent ceph image quay.ceph.io/ceph-ci/ceph:c50517bdb9ff569dff7acb095450622c2f0ebc1f
NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
alertmanager.a smithi117 running (52m) 52m ago 53m 0.21.0 prom/alertmanager c876f5897d7b 9338ca7aea88
grafana.a smithi098 running (52m) 50m ago 52m 6.6.2 ceph/ceph-grafana:latest 87a51ecf0b1c 07770c120c4d
mgr.x smithi098 running 50m ago 56m <unknown> quay.ceph.io/ceph-ci/ceph:c50517bdb9ff569dff7acb095450622c2f0ebc1f <unknown> <unknown>
mgr.y smithi117 running (58m) 52m ago 58m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 70068fb83065
mon.a smithi117 running (58m) 52m ago 59m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 55dd64c1cff6
mon.b smithi098 running (57m) 50m ago 57m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 2ba303e3a1f6
mon.c smithi117 running (57m) 52m ago 57m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 b4e826b6da4d
node-exporter.a smithi117 running (53m) 52m ago 53m 1.0.1 prom/node-exporter 0e0218889c33 19a9aae8e7ae
node-exporter.b smithi098 running (53m) 50m ago 53m 1.0.1 prom/node-exporter 0e0218889c33 5ddc55635992
osd.0 smithi117 running (56m) 52m ago 56m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 a1391411566b
osd.1 smithi117 running (56m) 52m ago 56m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 4389efd6a55d
osd.2 smithi117 running (55m) 52m ago 55m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 f38d6c25d803
osd.3 smithi117 running (55m) 52m ago 55m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 43f2fe4cb9d9
osd.4 smithi098 running (54m) 50m ago 55m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 50244faca23f
osd.5 smithi098 running (54m) 50m ago 54m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 958701aeace1
osd.6 smithi098 running (54m) 50m ago 54m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 cc407493948e
osd.7 smithi098 running (53m) 50m ago 53m 15.2.0 docker.io/ceph/ceph:v15.2.0 204a01f9b0b6 d4b650a76224
prometheus.a smithi098 running (52m) 50m ago 53m 2.20.1 prom/prometheus:latest b205ccdd28d3 b5f818abe327
</pre>
Orchestrator - Bug #46813 (Resolved): `ceph orch * --refresh` is broken
https://tracker.ceph.com/issues/46813
2020-08-03T08:09:19Z
Sebastian Wagner
<p>Those calls violate <a class="external" href="https://docs.ceph.com/docs/master/dev/cephadm/#note-regarding-network-calls-from-cli-handlers">https://docs.ceph.com/docs/master/dev/cephadm/#note-regarding-network-calls-from-cli-handlers</a></p>
<p>and are thus dangerous, as they might render cephadm unresponsive.</p>
<p>We simply cannot provide that information synchronously. Note that we then have to provide a way for users to trigger a refresh.</p>
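<p>A minimal sketch (hypothetical, not the actual cephadm code) of the pattern this implies: the CLI handler only marks the cached host data as stale and returns immediately, and the background serve loop performs the real, slow refresh later:</p>
<pre>
# Hypothetical sketch: --refresh only invalidates the cache (cheap, no network
# I/O in the CLI handler); the serve loop notices stale hosts and refreshes them.
import time

class HostCache:
    def __init__(self):
        self.last_refresh = {}              # hostname -> timestamp

    def invalidate(self, host: str) -> None:
        self.last_refresh.pop(host, None)

    def needs_refresh(self, host: str) -> bool:
        return host not in self.last_refresh

def handle_device_ls(cache: HostCache, hosts: list, refresh: bool = False) -> dict:
    if refresh:
        for h in hosts:
            cache.invalidate(h)             # non-blocking; no SSH calls here
    return {h: ('stale' if cache.needs_refresh(h) else 'cached') for h in hosts}

def serve_loop_iteration(cache: HostCache, hosts: list) -> None:
    for h in hosts:
        if cache.needs_refresh(h):
            # ... run the slow inventory gathering over SSH here ...
            cache.last_refresh[h] = time.time()
</pre>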
rgw-testing - Bug #46734 (Resolved): unittest_rgw_dmclock_scheduler: Queue.SyncRequest: ***Timeou...
https://tracker.ceph.com/issues/46734
2020-07-28T10:46:32Z
Sebastian Wagner
<pre>
204/204 Test #183: unittest_rgw_dmclock_scheduler ............***Timeout 3600.01 sec
did not load config file, using default settings.
[==========] Running 8 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 8 tests from Queue
[ RUN ] Queue.SyncRequest
2020-07-27T20:34:39.555+0000 7fec06b58c80 -1 Errors while parsing config file!
2020-07-27T20:34:39.555+0000 7fec06b58c80 -1 parse_file: filesystem error: cannot get file size: No such file or directory [ceph.conf]
2020-07-27T20:34:39.555+0000 7fec06b58c80 -1 Errors while parsing config file!
2020-07-27T20:34:39.555+0000 7fec06b58c80 -1 parse_file: filesystem error: cannot get file size: No such file or directory [ceph.conf]
99% tests passed, 1 tests failed out of 204
Total Test time (real) = 3620.01 sec
The following tests FAILED:
183 - unittest_rgw_dmclock_scheduler (Timeout)
Errors while running CTest
Build step 'Execute shell' marked build as failure
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/56416/consoleFull#1569702623e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/56416/consoleFull#1569702623e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
Infrastructure - Bug #46333 (Resolved): unittest_rgw_dmclock_scheduler: error while loading share...
https://tracker.ceph.com/issues/46333
2020-07-02T15:35:09Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/54797/consoleFull#984723906e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/54797/consoleFull#984723906e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
<pre>
Start 183: unittest_rgw_dmclock_scheduler
/home/jenkins-build/build/workspace/ceph-pull-requests/build/bin/unittest_rgw_dmclock_scheduler: error while loading shared libraries: libboost_thread.so.1.73.0: cannot open shared object file: No such file or directory
...
99% tests passed, 1 tests failed out of 204
Total Test time (real) = 785.78 sec
The following tests FAILED:
183 - unittest_rgw_dmclock_scheduler (Failed)
Errors while running CTest
Build step 'Execute shell' marked build as failure
</pre>
Ceph - Feature #44745 (New): YAMLFormatter for common/Formatter.h
https://tracker.ceph.com/issues/44745
2020-03-25T11:36:11Z
Sebastian Wagner
<p><a class="external" href="https://github.com/ceph/ceph/pull/34061">https://github.com/ceph/ceph/pull/34061</a> add a new value <code>yaml</code> for <code>--format</code> in order to support yaml in <code>mgr/cephadm</code>.</p>
<p>Having a YAMLFormatter for common/Formatter.h would be great, too!</p>
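<p>For the mgr side, a minimal sketch of what <code>--format yaml</code> handling could look like, assuming the data is already available as plain Python dicts (this is an illustration, not the code from the linked PR):</p>
<pre>
# Illustration only: emit the same data as the json formatter, but as YAML.
# A native YAMLFormatter in common/Formatter.h would provide this on the C++ side.
import json
import yaml

def format_output(data, fmt: str = 'plain') -> str:
    if fmt == 'json':
        return json.dumps(data, indent=4)
    if fmt == 'yaml':
        return yaml.safe_dump(data, default_flow_style=False)
    return str(data)

print(format_output({'service_type': 'rgw', 'ssl': True}, fmt='yaml'))
</pre>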
Orchestrator - Feature #43675 (Resolved): workflow for using a signed dashboard cert
https://tracker.ceph.com/issues/43675
2020-01-20T13:45:16Z
Sebastian Wagner
<p>How should users deploy their own signed dashboard cert during bootstrap?</p>
rbd - Bug #43274 (Need More Info): unittest_rbd_mirror: Exception: SegFault
https://tracker.ceph.com/issues/43274
2019-12-12T09:30:06Z
Sebastian Wagner
<p>Unfortunately, I don't know what exactly went wrong:</p>
<pre>
185/191 Test #184: unittest_rbd_mirror .....................***Exception: SegFault 11.74 sec
[==========] Running 279 tests from 34 test suites.
[----------] Global test environment set-up.
[----------] 13 tests from TestMockImageMap
[ RUN ] TestMockImageMap.SetLocalImages
seed 1526
[ OK ] TestMockImageMap.SetLocalImages (8 ms)
[ RUN ] TestMockImageMap.AddRemoveLocalImage
[ OK ] TestMockImageMap.AddRemoveLocalImage (25 ms)
[ RUN ] TestMockImageMap.AddRemoveRemoteImage
[ OK ] TestMockImageMap.AddRemoveRemoteImage (15 ms)
[ RUN ] TestMockImageMap.AddRemoveRemoteImageDuplicateNotification
[ OK ] TestMockImageMap.AddRemoveRemoteImageDuplicateNotification (5 ms)
[ RUN ] TestMockImageMap.AcquireImageErrorRetry
[ OK ] TestMockImageMap.AcquireImageErrorRetry (2 ms)
[ RUN ] TestMockImageMap.RemoveRemoteAndLocalImage
[ OK ] TestMockImageMap.RemoveRemoteAndLocalImage (2 ms)
[ RUN ] TestMockImageMap.AddInstance
[ OK ] TestMockImageMap.AddInstance (4 ms)
[ RUN ] TestMockImageMap.RemoveInstance
[ OK ] TestMockImageMap.RemoveInstance (7 ms)
[ RUN ] TestMockImageMap.AddInstancePingPongImageTest
[ OK ] TestMockImageMap.AddInstancePingPongImageTest (34 ms)
[ RUN ] TestMockImageMap.RemoveInstanceWithRemoveImage
[ OK ] TestMockImageMap.RemoveInstanceWithRemoveImage (23 ms)
[ RUN ] TestMockImageMap.AddErrorAndRemoveImage
[ OK ] TestMockImageMap.AddErrorAndRemoveImage (35 ms)
[ RUN ] TestMockImageMap.MirrorUUIDUpdated
[ OK ] TestMockImageMap.MirrorUUIDUpdated (44 ms)
[ RUN ] TestMockImageMap.RebalanceImageMap
[ OK ] TestMockImageMap.RebalanceImageMap (40 ms)
[----------] 13 tests from TestMockImageMap (244 ms total)
[----------] 14 tests from TestMockImageReplayer
[ RUN ] TestMockImageReplayer.StartStop
Failed to load class: cas (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so: undefined symbol: _Z13cls_has_chunkPvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
Failed to load class: log (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
Failed to load class: rgw (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so: undefined symbol: _Z19cls_current_versionPv
Failed to load class: user (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
[ OK ] TestMockImageReplayer.StartStop (317 ms)
[ RUN ] TestMockImageReplayer.LocalImagePrimary
[ OK ] TestMockImageReplayer.LocalImagePrimary (146 ms)
[ RUN ] TestMockImageReplayer.LocalImageDNE
[ OK ] TestMockImageReplayer.LocalImageDNE (196 ms)
[ RUN ] TestMockImageReplayer.PrepareLocalImageError
[ OK ] TestMockImageReplayer.PrepareLocalImageError (194 ms)
[ RUN ] TestMockImageReplayer.GetRemoteImageIdDNE
[ OK ] TestMockImageReplayer.GetRemoteImageIdDNE (174 ms)
[ RUN ] TestMockImageReplayer.GetRemoteImageIdNonLinkedDNE
[ OK ] TestMockImageReplayer.GetRemoteImageIdNonLinkedDNE (224 ms)
[ RUN ] TestMockImageReplayer.GetRemoteImageIdError
[ OK ] TestMockImageReplayer.GetRemoteImageIdError (228 ms)
[ RUN ] TestMockImageReplayer.BootstrapError
[ OK ] TestMockImageReplayer.BootstrapError (154 ms)
[ RUN ] TestMockImageReplayer.StopBeforeBootstrap
[ OK ] TestMockImageReplayer.StopBeforeBootstrap (215 ms)
[ RUN ] TestMockImageReplayer.StartExternalReplayError
[ OK ] TestMockImageReplayer.StartExternalReplayError (152 ms)
[ RUN ] TestMockImageReplayer.StopError
[ OK ] TestMockImageReplayer.StopError (169 ms)
[ RUN ] TestMockImageReplayer.Replay
[ OK ] TestMockImageReplayer.Replay (177 ms)
[ RUN ] TestMockImageReplayer.DecodeError
[ OK ] TestMockImageReplayer.DecodeError (157 ms)
[ RUN ] TestMockImageReplayer.DelayedReplay
[ OK ] TestMockImageReplayer.DelayedReplay (2153 ms)
[----------] 14 tests from TestMockImageReplayer (4663 ms total)
[----------] 5 tests from TestMockImageSync
[ RUN ] TestMockImageSync.SimpleSync
[ OK ] TestMockImageSync.SimpleSync (198 ms)
[ RUN ] TestMockImageSync.RestartSync
[ OK ] TestMockImageSync.RestartSync (173 ms)
[ RUN ] TestMockImageSync.CancelNotifySyncRequest
[ OK ] TestMockImageSync.CancelNotifySyncRequest (159 ms)
[ RUN ] TestMockImageSync.CancelImageCopy
[ OK ] TestMockImageSync.CancelImageCopy (195 ms)
[ RUN ] TestMockImageSync.CancelAfterCopyImage
[ OK ] TestMockImageSync.CancelAfterCopyImage (166 ms)
[----------] 5 tests from TestMockImageSync (898 ms total)
[----------] 3 tests from TestMockInstanceReplayer
[ RUN ] TestMockInstanceReplayer.AcquireReleaseImage
[ OK ] TestMockInstanceReplayer.AcquireReleaseImage (16 ms)
[ RUN ] TestMockInstanceReplayer.RemoveFinishedImage
[ OK ] TestMockInstanceReplayer.RemoveFinishedImage (24 ms)
[ RUN ] TestMockInstanceReplayer.Reacquire
[ OK ] TestMockInstanceReplayer.Reacquire (2 ms)
[----------] 3 tests from TestMockInstanceReplayer (42 ms total)
[----------] 11 tests from TestMockInstanceWatcher
[ RUN ] TestMockInstanceWatcher.InitShutdown
[ OK ] TestMockInstanceWatcher.InitShutdown (23 ms)
[ RUN ] TestMockInstanceWatcher.InitError
[ OK ] TestMockInstanceWatcher.InitError (18 ms)
[ RUN ] TestMockInstanceWatcher.ShutdownError
[ OK ] TestMockInstanceWatcher.ShutdownError (15 ms)
[ RUN ] TestMockInstanceWatcher.Remove
[ OK ] TestMockInstanceWatcher.Remove (16 ms)
[ RUN ] TestMockInstanceWatcher.RemoveNoent
[ OK ] TestMockInstanceWatcher.RemoveNoent (12 ms)
[ RUN ] TestMockInstanceWatcher.ImageAcquireRelease
[ OK ] TestMockInstanceWatcher.ImageAcquireRelease (36 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageRemoved
[ OK ] TestMockInstanceWatcher.PeerImageRemoved (36 ms)
[ RUN ] TestMockInstanceWatcher.ImageAcquireReleaseCancel
[ OK ] TestMockInstanceWatcher.ImageAcquireReleaseCancel (31 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageAcquireWatchDNE
[ OK ] TestMockInstanceWatcher.PeerImageAcquireWatchDNE (17 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageReleaseWatchDNE
[ OK ] TestMockInstanceWatcher.PeerImageReleaseWatchDNE (32 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageRemovedCancel
[ OK ] TestMockInstanceWatcher.PeerImageRemovedCancel (12 ms)
[----------] 11 tests from TestMockInstanceWatcher (250 ms total)
[----------] 11 tests from TestMockInstanceWatcher_NotifySync
[ RUN ] TestMockInstanceWatcher_NotifySync.StartStopOnLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartStopOnLeader (48 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.CancelStartedOnLeader
[ OK ] TestMockInstanceWatcher_NotifySync.CancelStartedOnLeader (49 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.StartStopOnNonLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartStopOnNonLeader (36 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.CancelStartedOnNonLeader
[ OK ] TestMockInstanceWatcher_NotifySync.CancelStartedOnNonLeader (41 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.CancelWaitingOnNonLeader
[ OK ] TestMockInstanceWatcher_NotifySync.CancelWaitingOnNonLeader (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.InFlightPrevNotification
[ OK ] TestMockInstanceWatcher_NotifySync.InFlightPrevNotification (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.NoInFlightReleaseAcquireLeader
[ OK ] TestMockInstanceWatcher_NotifySync.NoInFlightReleaseAcquireLeader (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.StartedOnLeaderReleaseLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartedOnLeaderReleaseLeader (34 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.WaitingOnLeaderReleaseLeader
[ OK ] TestMockInstanceWatcher_NotifySync.WaitingOnLeaderReleaseLeader (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.StartedOnNonLeaderAcquireLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartedOnNonLeaderAcquireLeader (29 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.WaitingOnNonLeaderAcquireLeader
[ OK ] TestMockInstanceWatcher_NotifySync.WaitingOnNonLeaderAcquireLeader (34 ms)
[----------] 11 tests from TestMockInstanceWatcher_NotifySync (456 ms total)
[----------] 4 tests from TestMockLeaderWatcher
[ RUN ] TestMockLeaderWatcher.InitShutdown
[ OK ] TestMockLeaderWatcher.InitShutdown (33 ms)
[ RUN ] TestMockLeaderWatcher.InitReleaseShutdown
[ OK ] TestMockLeaderWatcher.InitReleaseShutdown (19 ms)
[ RUN ] TestMockLeaderWatcher.AcquireError
[ OK ] TestMockLeaderWatcher.AcquireError (12 ms)
[ RUN ] TestMockLeaderWatcher.Break
[ OK ] TestMockLeaderWatcher.Break (2012 ms)
[----------] 4 tests from TestMockLeaderWatcher (2076 ms total)
[----------] 12 tests from TestMockMirrorStatusUpdater
[ RUN ] TestMockMirrorStatusUpdater.InitShutDown
[ OK ] TestMockMirrorStatusUpdater.InitShutDown (13 ms)
[ RUN ] TestMockMirrorStatusUpdater.InitStatusWatcherError
[ OK ] TestMockMirrorStatusUpdater.InitStatusWatcherError (26 ms)
[ RUN ] TestMockMirrorStatusUpdater.ShutDownStatusWatcherError
[ OK ] TestMockMirrorStatusUpdater.ShutDownStatusWatcherError (14 ms)
[ RUN ] TestMockMirrorStatusUpdater.SmallBatch
[ OK ] TestMockMirrorStatusUpdater.SmallBatch (24 ms)
[ RUN ] TestMockMirrorStatusUpdater.LargeBatch
[ OK ] TestMockMirrorStatusUpdater.LargeBatch (30 ms)
[ RUN ] TestMockMirrorStatusUpdater.OverwriteStatus
[ OK ] TestMockMirrorStatusUpdater.OverwriteStatus (11 ms)
[ RUN ] TestMockMirrorStatusUpdater.OverwriteStatusInFlight
[ OK ] TestMockMirrorStatusUpdater.OverwriteStatusInFlight (7 ms)
[ RUN ] TestMockMirrorStatusUpdater.ImmediateUpdate
[ OK ] TestMockMirrorStatusUpdater.ImmediateUpdate (9 ms)
[ RUN ] TestMockMirrorStatusUpdater.RemoveIdleStatus
[ OK ] TestMockMirrorStatusUpdater.RemoveIdleStatus (20 ms)
[ RUN ] TestMockMirrorStatusUpdater.RemoveInFlightStatus
[ OK ] TestMockMirrorStatusUpdater.RemoveInFlightStatus (9 ms)
[ RUN ] TestMockMirrorStatusUpdater.ShutDownWhileUpdating
[ OK ] TestMockMirrorStatusUpdater.ShutDownWhileUpdating (14 ms)
[ RUN ] TestMockMirrorStatusUpdater.MirrorPeerSitePing
[ OK ] TestMockMirrorStatusUpdater.MirrorPeerSitePing (24 ms)
[----------] 12 tests from TestMockMirrorStatusUpdater (201 ms total)
[----------] 6 tests from TestMockNamespaceReplayer
[ RUN ] TestMockNamespaceReplayer.Init_LocalMirrorStatusUpdaterError
[ OK ] TestMockNamespaceReplayer.Init_LocalMirrorStatusUpdaterError (55 ms)
[ RUN ] TestMockNamespaceReplayer.Init_RemoteMirrorStatusUpdaterError
[ OK ] TestMockNamespaceReplayer.Init_RemoteMirrorStatusUpdaterError (32 ms)
[ RUN ] TestMockNamespaceReplayer.Init_InstanceReplayerError
[ OK ] TestMockNamespaceReplayer.Init_InstanceReplayerError (12 ms)
[ RUN ] TestMockNamespaceReplayer.Init_InstanceWatcherError
[ OK ] TestMockNamespaceReplayer.Init_InstanceWatcherError (20 ms)
[ RUN ] TestMockNamespaceReplayer.Init
[ OK ] TestMockNamespaceReplayer.Init (16 ms)
[ RUN ] TestMockNamespaceReplayer.AcuqireLeader
[ OK ] TestMockNamespaceReplayer.AcuqireLeader (9 ms)
[----------] 6 tests from TestMockNamespaceReplayer (144 ms total)
[----------] 4 tests from TestMockPoolReplayer
[ RUN ] TestMockPoolReplayer.ConfigKeyOverride
[ OK ] TestMockPoolReplayer.ConfigKeyOverride (47 ms)
[ RUN ] TestMockPoolReplayer.AcquireReleaseLeader
[ OK ] TestMockPoolReplayer.AcquireReleaseLeader (55 ms)
[ RUN ] TestMockPoolReplayer.Namespaces
[ OK ] TestMockPoolReplayer.Namespaces (2075 ms)
[ RUN ] TestMockPoolReplayer.NamespacesError
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/40443/console">https://jenkins.ceph.com/job/ceph-pull-requests/40443/console</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/pull/32182">https://github.com/ceph/ceph/pull/32182</a></p>