Ceph: Issues
https://tracker.ceph.com/
2021-12-08T11:02:11Z Ceph
ceph-volume - Bug #53524 (New): CEPHADM_APPLY_SPEC_FAIL is very verbose
https://tracker.ceph.com/issues/53524
2021-12-08T11:02:11Z Sebastian Wagner
<pre>
root@service-01-08020:~# ceph health detail
HEALTH_WARN Failed to apply 1 service(s): osd.hybrid; OSD count 0 < osd_pool_default_size 3
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.hybrid
osd.hybrid: cephadm exited with an error code: 1, stderr:Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr [--auto] [--no-auto] [--bluestore] [--filestore]
/usr/bin/docker: stderr [--report] [--yes]
/usr/bin/docker: stderr [--format {json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr [--crush-device-class CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr [--no-systemd]
/usr/bin/docker: stderr [--osds-per-device OSDS_PER_DEVICE]
/usr/bin/docker: stderr [--data-slots DATA_SLOTS]
/usr/bin/docker: stderr [--data-allocate-fraction DATA_ALLOCATE_FRACTION]
/usr/bin/docker: stderr [--block-db-size BLOCK_DB_SIZE]
/usr/bin/docker: stderr [--block-db-slots BLOCK_DB_SLOTS]
/usr/bin/docker: stderr [--block-wal-size BLOCK_WAL_SIZE]
/usr/bin/docker: stderr [--block-wal-slots BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr [--journal-size JOURNAL_SIZE]
/usr/bin/docker: stderr [--journal-slots JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr [--osd-ids [OSD_IDS [OSD_IDS ...]]]
/usr/bin/docker: stderr [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
Traceback (most recent call last):
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8331, in <module>
main()
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 8319, in main
r = ctx.func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1735, in _infer_config
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1676, in _infer_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1763, in _infer_image
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1663, in _validate_fsid
return func(ctx)
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 5285, in command_ceph_volume
out, err, code = call_throws(ctx, c.run_cmd())
File "/var/lib/ceph/fsid/cephadm.f28d191a6bebefc928858ad3a798620c4b2dcb13f2c9f454ba758f88d7664da6", line 1465, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d -e NODE_NAME=storage-01-08002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=hybrid -v /var/run/ceph/fsid:/var/run/ceph:z -v /var/log/ceph/fsid:/var/log/ceph:z -v /var/lib/ceph/fsid/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpfteczv3s:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp6_nx8uhw:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph@sha256:94aec5086f8d9581e861925e04d6ca74dd1397e6c721e0576a3defcf0a25377d lvm batch --no-auto /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy --db-devices /dev/nvme0n1 /dev/nvme1n1 --yes --no-systemd
[WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3
</pre>

Orchestrator - Bug #51361 (New): KillMode=none is deprecated
https://tracker.ceph.com/issues/51361
2021-06-25T09:05:39Z Sebastian Wagner
<p>We changed the systemd unit file KillMode to none in <a class="external" href="https://github.com/ceph/ceph/pull/33162#issuecomment-584183316">https://github.com/ceph/ceph/pull/33162#issuecomment-584183316</a>.</p>
<p>Now we're getting a new warning:</p>
<pre>
Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
</pre>
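<p>Until the generated unit template itself is changed, the deprecated setting could presumably be overridden per unit with a drop-in along these lines (a sketch only; the unit name is a placeholder for the cephadm-generated unit, and whether "mixed" is actually safe for the containerized daemons is exactly what this ticket needs to decide):</p>
<pre>
mkdir -p /etc/systemd/system/ceph-<fsid>@.service.d
cat > /etc/systemd/system/ceph-<fsid>@.service.d/killmode.conf <<EOF
[Service]
KillMode=mixed
EOF
systemctl daemon-reload
</pre>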

Orchestrator - Documentation #51032 (New): Document port requirements
https://tracker.ceph.com/issues/51032
2021-06-01T08:15:07Z Sebastian Wagner
<p>(In reply to Ernesto Puerta from comment #3)</p>
<blockquote>
<p>- Port 9095 for Prometheus<br />- Port 2049 for nfs-ganesha <br />- ISCSI ports. Should be open in the firewall anyway.</p>
</blockquote>
<blockquote>
<p>- I also guess that port 22 should be open between ceph-mgr node to all<br />other nodes (cephadm/ssh connection)</p>
</blockquote>
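<p>A sketch of what the corresponding firewalld rules could look like, limited to the ports listed above (3260 is an assumption about the usual iSCSI target port):</p>
<pre>
firewall-cmd --permanent --add-port=9095/tcp   # Prometheus
firewall-cmd --permanent --add-port=2049/tcp   # nfs-ganesha
firewall-cmd --permanent --add-port=3260/tcp   # iSCSI gateway
firewall-cmd --permanent --add-service=ssh     # cephadm/ssh from the mgr host(s)
firewall-cmd --reload
</pre>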
<p>I'd be more liberal here. In order to avoid deadlock situations, I'd propose allowing port 22 from all cluster hosts, e.g. in case the MGR ends up on an OSD host due to a wrong placement spec.</p>

Orchestrator - Bug #50776 (New): cephadm: CRUSH uses bare host names
https://tracker.ceph.com/issues/50776
2021-05-12T15:01:00Z Sebastian Wagner
<p><a class="external" href="https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/module.py#L1411">https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/module.py#L1411</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/blob/04d9194fd36551b0b45abc06816449c1c0ff248b/src/pybind/mgr/cephadm/services/osd.py#L325">https://github.com/ceph/ceph/blob/04d9194fd36551b0b45abc06816449c1c0ff248b/src/pybind/mgr/cephadm/services/osd.py#L325</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/blob/04d9194fd36551b0b45abc06816449c1c0ff248b/src/pybind/mgr/cephadm/services/osd.py#L50">https://github.com/ceph/ceph/blob/04d9194fd36551b0b45abc06816449c1c0ff248b/src/pybind/mgr/cephadm/services/osd.py#L50</a></p>
<pre>
def bare_name_to_hostname(bare: str) -> str:
    # map the bare host name used by CRUSH to the host name known to cephadm
    ...
</pre>

Orchestrator - Documentation #48806 (New): documentation troubleshooting: cephadm; how do I start ...
https://tracker.ceph.com/issues/48806
2021-01-08T19:38:43Z Sebastian Wagner
<p>cephadm: how do I start all containers?</p>

Orchestrator - Documentation #46691 (New): Document manual deployment of OSDs
https://tracker.ceph.com/issues/46691
2020-07-23T12:53:40Z Sebastian Wagner
<p>Sometimes, users want to deploy OSDs completely manually. Maybe drive groups are not expressive enough, or some bug prevents the automation from succeeding.</p>
<p>Therefore we should document a manual way, like so:</p>
<a name="prepare"></a>
<h3 >prepare<a href="#prepare" class="wiki-anchor">¶</a></h3>
<pre>
ceph-volume lvm prepare
</pre>
as you wish (note that you should add `--no-systemd`)
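<p>A minimal sketch of such an invocation (device paths are placeholders; the separate DB device is optional):</p>
<pre>
ceph-volume lvm prepare --data /dev/sdX --block.db /dev/nvme0n1 --no-systemd
</pre>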
<a name="List"></a>
<h3 >List<a href="#List" class="wiki-anchor">¶</a></h3>
<p>run `ceph-volume lvm list` and note the created OSDs</p>
<a name="Manual-container-deployment"></a>
<h3 >Manual container deployment<a href="#Manual-container-deployment" class="wiki-anchor">¶</a></h3>
<p>for each listed OSD, run:</p>
<pre>
cephadm deploy --name osd.x
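# (sketch) a fuller invocation presumably also needs the cluster fsid and the
# OSD fsid reported by `ceph-volume lvm list`, e.g.:
#   cephadm deploy --fsid <cluster-fsid> --name osd.x --osd-fsid <osd-fsid>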
</pre>

Orchestrator - Documentation #45937 (New): cephadm: setting the various certificates
https://tracker.ceph.com/issues/45937
2020-06-08T14:34:51Z Sebastian Wagner
<p><strong>Grafana</strong></p>
<p>how to set the grafana certificate and key:</p>
<pre>
ceph config-key set mgr/cephadm/grafana_crt -i <cert-file>
ceph config-key set mgr/cephadm/grafana_key -i <key-file>
</pre>
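<p>For example, with a self-signed certificate (a sketch; file names and the CN are placeholders, and the Grafana daemon typically has to be redeployed afterwards to pick up the new key pair):</p>
<pre>
openssl req -new -x509 -days 365 -nodes \
    -subj "/CN=grafana.example.com" \
    -keyout grafana.key -out grafana.crt
ceph config-key set mgr/cephadm/grafana_crt -i grafana.crt
ceph config-key set mgr/cephadm/grafana_key -i grafana.key
</pre>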
<p><b>RGW</b></p>
<pre>
ceph config-key set rgw/cert/{rgw_realm}/{rgw_zone}.crt -i <cert-file>
ceph config-key set rgw/cert/{rgw_realm}/{rgw_zone}.key -i <key-file>
</pre>
<p><b>IGW</b></p>
<pre>
ceph config-key set iscsi/client.iscsi.{igw_id}/iscsi-gateway.crt -i <cert-file>
ceph config-key set iscsi/client.iscsi.{igw_id}/iscsi-gateway.key -i <key-file>
</pre>
<p><b>mgr/dashboard</b></p>
<ul>
<li><a class="external" href="https://docs.ceph.com/docs/master/man/8/cephadm/#bootstrap">https://docs.ceph.com/docs/master/man/8/cephadm/#bootstrap</a> -> dashboard certificate </li>
<li><a class="external" href="https://docs.ceph.com/docs/master/mgr/dashboard/#dashboard-ssl-tls-support">https://docs.ceph.com/docs/master/mgr/dashboard/#dashboard-ssl-tls-support</a></li>
</ul>

Orchestrator - Documentation #45936 (New): cephadm: document restarting the whole cluster
https://tracker.ceph.com/issues/45936
2020-06-08T13:46:40Z Sebastian Wagner
<pre>
[15:23:28] <dcapone2004> I have a ceph dev cluster of 3 nodes deployed using cephadm with octopus on centos 8
[15:23:42] <-- beigestair (~beigestai@0BGAAAFRJ.tor-irc.dnsbl.oftc.net) hat das Netzwerk verlassen (Remote host closed the connection)
[15:24:26] <dcapone2004> this is going to be a hyperconverged openstack cluster, that i am essentially testing....a key element is that the location that we deploy this cluster at will change in about 18 months, so I have been trying to write up procedures to safely shut down the cluster and power it back up
[15:24:35] <dcapone2004> this is the latest place where i have run into an issue
[15:24:52] --> ragedragon (~ragedrago@i16-lef02-th2-89-83-167-245.ft.lns.abo.bbox.fr) hat #ceph betreten
[15:25:39] <dcapone2004> I stopped all disk activity on the cluster, set osd noout, then sht down the nodes of the cluster 1 by 1, with the active Manager being LAST
[15:25:55] --> beigestair (~beigestai@9J5AADHU6.tor-irc.dnsbl.oftc.net) hat #ceph betreten
[15:26:24] <dcapone2004> a few hours later, I tried to power the cluster back up starting with the last active manager and going in the reverse order that i shut the down
[15:26:44] <dcapone2004> and no I lost 2 OSD containers and all my manager containers
[15:26:48] <dcapone2004> now*
[15:27:10] <dcapone2004> ceph orch daemon redploy does nothing, nor does restart
[15:27:43] <dcapone2004> and when simply trying podman start, podman claims to not know about those containers, but the ceph dashboard shows the OSDs are in but down
[15:28:38] <SebastianW> dcapone2004: what does "I lost my manager containers" mean?
[15:28:57] <dcapone2004> meaning ceph -s shows no active manager containers
[15:29:31] <dcapone2004> podman start mgr.dev-lx-ceph11 (my hostname) says the container doesnt exist
[15:31:22] <dcapone2004> originally when i first started it up I only lost 1 of 2 containers, then after trying to use redeploy, the second disappeared....I am unsure if this is/was related to my attempt to upgrade to 15.2.3 which failed (and I filed a bug report for) and major the inconsistant version numbers between containers caused this issue
</pre>

Orchestrator - Documentation #45896 (New): cephadm: Need a manual howto: "upgrade the cluster man...
https://tracker.ceph.com/issues/45896
2020-06-04T13:18:43Z Sebastian Wagner
<p>symptom:</p>
<pre>
$ ceph mon ok-to-stop host2
Error EBUSY: not enough monitors would be available (host1) after stopping mons [host2]
</pre>
<p>Maybe something like this could work:</p>
<pre>
ceph orch upgrade start --force-daemon mon.host1
</pre>
<p>A possible workaround would be to manually perform the upgrade step:</p>
<pre>
ceph config set mon.host1 container_image <container-image>
ceph orch redeploy mon.host1
# wait until cephadm refreshes `ceph orch ps`
</pre>

Infrastructure - Bug #45453 (New): 'https://download.ceph.com/debian-octopus focal Release' does ...
https://tracker.ceph.com/issues/45453
2020-05-08T20:28:48Z Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/">http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/</a></p>
<pre>
2020-05-08T15:24:54.694 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/cephadm -v add-repo --release octopus
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:DEBUG:cephadm:Could not locate podman: podman not found
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo GPG key from https://download.ceph.com/keys/release.asc...
2020-05-08T15:24:54.964 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo file at /etc/apt/sources.list.d/ceph.list...
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ test_install_uninstall
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo apt update
2020-05-08T15:24:55.009 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.139 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
2020-05-08T15:24:55.270 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
2020-05-08T15:24:55.284 INFO:tasks.workunit.client.0.smithi204.stdout:Ign:3 https://download.ceph.com/debian-octopus focal InRelease
2020-05-08T15:24:55.320 INFO:tasks.workunit.client.0.smithi204.stdout:Err:4 https://download.ceph.com/debian-octopus focal Release
2020-05-08T15:24:55.321 INFO:tasks.workunit.client.0.smithi204.stdout: 404 Not Found [IP: 158.69.68.124 443]
2020-05-08T15:24:55.351 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
2020-05-08T15:24:55.434 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
2020-05-08T15:24:56.423 INFO:tasks.workunit.client.0.smithi204.stdout:Reading package lists...
2020-05-08T15:24:56.442 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:E: The repository 'https://download.ceph.com/debian-octopus focal Release' does not have a Release file.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.444 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo yum -y install cephadm
2020-05-08T15:24:56.452 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: yum: command not found
2020-05-08T15:24:56.453 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo dnf -y install cephadm
2020-05-08T15:24:56.459 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: dnf: command not found
2020-05-08T15:24:56.460 DEBUG:teuthology.orchestra.run:got remote process result: 1
</pre>
<p>Are we going to get an Ubuntu repo for focal or should I disable this test for it?</p>

ceph-volume - Feature #44630 (New): cephadm: improve behaviour with virtual disks
https://tracker.ceph.com/issues/44630
2020-03-16T17:44:09Z Sebastian Wagner
<p>When the disks are virtual, the "identify OSDs" action should just print a message; it's a no-op anyway. This could be extended to KVM virtual disks and VMware to cover the bases.</p>
<p>The vendor ID 0x1af4 is the "known" descriptor for a VirtIO disk, so we should use that in the inventory display.</p>
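<p>A rough sketch of such a check (the sysfs attribute below is an assumption and may differ per transport; 0x1af4 is the Red Hat / VirtIO PCI vendor id):</p>
<pre>
# a disk whose virtio device reports vendor 0x1af4 is a VirtIO disk:
cat /sys/block/vda/device/vendor
# expected output: 0x1af4  -> show the device as "VirtIO Disk" in the inventory
</pre>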
<p><a class="external" href="https://shaman.ceph.com/builds/ceph/wip-swagner-testing/3f9622d20ae8c91019b8fb97e46196113164006a/crimson/188435/">https://shaman.ceph.com/builds/ceph/wip-swagner-testing/3f9622d20ae8c91019b8fb97e46196113164006a/crimson/188435/</a><br /><a class="external" href="https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/36282//consoleFull">https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=bionic,DIST=bionic,MACHINE_SIZE=huge/36282//consoleFull</a></p>
<pre>
Run Build Command:"/usr/bin/make" "cmTC_1399f/fast"
make[2]: Entering directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
/usr/bin/make -f CMakeFiles/cmTC_1399f.dir/build.make CMakeFiles/cmTC_1399f.dir/build
make[3]: Entering directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_1399f.dir/src.cxx.o
/usr/bin/c++ -g -O2 -fdebug-prefix-map=/build/ceph-15.1.0-264-g3f9622d=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -DHAVE_IBV_EXP -fPIE -o CMakeFiles/cmTC_1399f.dir/src.cxx.o -c /build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx
/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx: In function 'int main()':
/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx:5:31: error: aggregate 'main()::ibv_exp_gid_attr gid_attr' has incomplete type and cannot be defined
5 | struct ibv_exp_gid_attr gid_attr;
| ^~~~~~~~
/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp/src.cxx:6:7: error: 'ibv_exp_query_gid_attr' was not declared in this scope; did you mean 'ibv_exp_gid_attr'?
6 | ibv_exp_query_gid_attr(ctxt, 1, 0, &gid_attr);
| ^~~~~~~~~~~~~~~~~~~~~~
| ibv_exp_gid_attr
CMakeFiles/cmTC_1399f.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_1399f.dir/src.cxx.o' failed
make[3]: *** [CMakeFiles/cmTC_1399f.dir/src.cxx.o] Error 1
make[3]: Leaving directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_1399f/fast' failed
make[2]: *** [cmTC_1399f/fast] Error 2
make[2]: Leaving directory '/build/ceph-15.1.0-264-g3f9622d/obj-x86_64-linux-gnu/CMakeFiles/CMakeTmp'
Source file was:
#include <infiniband/verbs.h>
int main() {
struct ibv_context* ctxt;
struct ibv_exp_gid_attr gid_attr;
ibv_exp_query_gid_attr(ctxt, 1, 0, &gid_attr);
return 0;
}
dh_auto_configure: cd obj-x86_64-linux-gnu && cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_EXPORT_NO_PACKAGE_REGISTRY=ON -DCMAKE_FIND_PACKAGE_NO_PACKAGE_REGISTRY=ON -DWITH_OCF=ON -DWITH_LTTNG=ON -DWITH_MGR_DASHBOARD_FRONTEND=OFF -DWITH_PYTHON3=3 -DWITH_CEPHFS_JAVA=ON -DWITH_CEPHFS_SHELL=ON -DWITH_SYSTEMD=ON -DCEPH_SYSTEMD_ENV_DIR=/etc/default -DCMAKE_INSTALL_LIBDIR=/usr/lib -DCMAKE_INSTALL_LIBEXECDIR=/usr/lib -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_SYSTEMD_SERVICEDIR=/lib/systemd/system -DBOOST_J=8 -DWITH_BOOST_CONTEXT=ON -DWITH_SEASTAR=ON -DWITH_STATIC_LIBSTDCXX=OFF returned exit code 1
debian/rules:47: recipe for target 'override_dh_auto_configure' failed
make[1]: *** [override_dh_auto_configure] Error 2
make[1]: Leaving directory '/build/ceph-15.1.0-264-g3f9622d'
debian/rules:44: recipe for target 'build' failed
make: *** [build] Error 2
</pre>

ceph-volume - Bug #43858 (New): ceph-volume: lvm zap requires `/dev/` prefix
https://tracker.ceph.com/issues/43858
2020-01-28T12:38:22Z Sebastian Wagner
<p>This is different to the other lvm commands:</p>
<p>preparing:</p>
<pre>
root@ubuntu:~# losetup -f
/dev/loop12
root@ubuntu:~# LANG=C losetup /dev/loop12 disk-image
root@ubuntu:~# pvcreate /dev/loop12
Physical volume "/dev/loop12" successfully created.
root@ubuntu:~# vgcreate MyVg /dev/loop12
Volume group "MyVg" successfully created
root@ubuntu:~# lvcreate --size 5500M --name MyLV MyVg
Logical volume "MyLV" created.
root@ubuntu:~# ll /dev/MyVg/MyLV
lrwxrwxrwx 1 root root 7 Jan 28 11:38 /dev/MyVg/MyLV -> ../dm-0
root@ubuntu:~# vgs -o vg_tags MyVg
VG Tags
root@ubuntu:~# vgs -o lv_tags MyVg
LV Tags
</pre>
<pre>
# ceph-volume lvm zap MyVg/MyLV
stderr: lsblk: MyVg/MyLV: kein blockorientiertes Gerät
stderr: blkid: Fehler: MyVg/MyLV: Datei oder Verzeichnis nicht gefunden
stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
[--osd-fsid OSD_FSID]
[DEVICES [DEVICES ...]]
ceph-volume lvm zap: error: Unable to proceed with non-existing device: MyVg/MyLV
</pre>
<pre>
# ceph-volume lvm zap /dev/MyVg/MyLV
--> Zapping: /dev/MyVg/MyLV
Running command: /bin/dd if=/dev/zero of=/dev/MyVg/MyLV bs=1M count=10
stderr: 10+0 Datensätze ein
10+0 Datensätze aus
10485760 Bytes (10 MB, 10 MiB) kopiert, 0,00413076 s, 2,5 GB/s
--> Zapping successful for: <LV: /dev/MyVg/MyLV>
</pre>

ceph-cm-ansible - Bug #43738 (New): cephadm: conflicts between attempted installs of libstoragemg...
https://tracker.ceph.com/issues/43738
2020-01-21T10:27:32Z Sebastian Wagner
<pre>
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 420, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 263, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 290, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 314, in _handle_failure
raise AnsibleFailedError(failures)
failure_reason: '86 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz
conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
Error Summary
-------------
''}}Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u''tag:yaml.org,2002:map'', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)RepresenterError: (''cannot represent an object'', u''/'')Failure
object was: {''smithi161.front.sepia.ceph.com'': {''_ansible_no_log'': False, ''changed'':
False, u''results'': [], u''rc'': 1, u''invocation'': {u''module_args'': {u''install_weak_deps'':
True, u''autoremove'': False, u''lock_timeout'': 0, u''download_dir'': None, u''install_repoquery'':
True, u''enable_plugin'': [], u''update_cache'': False, u''disable_excludes'': None,
u''exclude'': [], u''installroot'': u''/'', u''allow_downgrade'': False, u''name'':
[u''@core'', u''@base'', u''dnf-utils'', u''git-all'', u''sysstat'', u''libedit'',
u''boost-thread'', u''xfsprogs'', u''gdisk'', u''parted'', u''libgcrypt'', u''fuse-libs'',
u''openssl'', u''libuuid'', u''attr'', u''ant'', u''lsof'', u''gettext'', u''bc'',
u''xfsdump'', u''blktrace'', u''usbredir'', u''podman'', u''podman-docker'', u''libev-devel'',
u''valgrind'', u''nfs-utils'', u''ncurses-devel'', u''gcc'', u''git'', u''python3-nose'',
u''python3-virtualenv'', u''genisoimage'', u''qemu-img'', u''qemu-kvm-core'', u''qemu-kvm-block-rbd'',
u''libacl-devel'', u''dbench'', u''autoconf''], u''download_only'': False, u''bugfix'':
False, u''list'': None, u''disable_gpg_check'': False, u''conf_file'': None, u''update_only'':
False, u''state'': u''present'', u''disablerepo'': [], u''releasever'': None, u''disable_plugin'':
[], u''enablerepo'': [], u''skip_broken'': False, u''security'': False, u''validate_certs'':
True}}, u''failures'': [], u''msg'': u''Unknown Error occured: Transaction check
error:
file /usr/lib/tmpfiles.d/libstoragemgmt.conf conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/doc/libstoragemgmt/NEWS conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmcli.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/lsmd.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man1/simc_lsmplugin.1.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
file /usr/share/man/man5/lsmd.conf.5.gz conflicts between attempted installs of libstoragemgmt-1.6.2-9.el8.i686 and libstoragemgmt-1.8.1-2.el8.x86_64
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2020-01-20_14:55:57-rados-wip-swagner-testing-distro-basic-smithi/4688381/teuthology.log</a></p>
<ul>
<li>os_type: rhel</li>
<li>os_version: '8.0'</li>
<li>description: rados/cephadm/{fixed-2.yaml mode/packaged.yaml msgr/async-v1only.yaml start.yaml supported-random-distro$/{rhel_8.yaml} tasks/rados_api_tests.yaml}</li>
</ul>
<p>As this is an ansible error, I'm not sure whether this is a cephadm issue. Any clues?</p>

mgr - Bug #40016 (New): mgr/selftest: test_selftest_config_update fails in a vstart cluster
https://tracker.ceph.com/issues/40016
2019-05-23T12:43:50Z Sebastian Wagner
<pre>
2019-05-23 14:37:59,692.692 INFO:__main__:Running ['./bin/ceph', 'log', 'Ended test tasks.mgr.test_module_selftest.TestModuleSelftest.test_selftest_config_update']
2019-05-23 14:38:02,798.798 INFO:__main__:Stopped test: test_selftest_config_update (tasks.mgr.test_module_selftest.TestModuleSelftest) in 39.056024s
2019-05-23 14:38:02,799.799 INFO:__main__:
2019-05-23 14:38:02,800.800 INFO:__main__:======================================================================
2019-05-23 14:38:02,800.800 INFO:__main__:FAIL: test_selftest_config_update (tasks.mgr.test_module_selftest.TestModuleSelftest)
2019-05-23 14:38:02,800.800 INFO:__main__:----------------------------------------------------------------------
2019-05-23 14:38:02,800.800 INFO:__main__:Traceback (most recent call last):
2019-05-23 14:38:02,801.801 INFO:__main__: File "/home/sebastian/Repos/ceph/qa/tasks/mgr/test_module_selftest.py", line 94, in test_selftest_config_update
2019-05-23 14:38:02,801.801 INFO:__main__: self.assertEqual(get_value(), "None")
2019-05-23 14:38:02,801.801 INFO:__main__:AssertionError: 'testvalue' != 'None'
2019-05-23 14:38:02,801.801 INFO:__main__:
2019-05-23 14:38:02,802.802 INFO:__main__:----------------------------------------------------------------------
2019-05-23 14:38:02,802.802 INFO:__main__:Ran 14 tests in 951.565s
2019-05-23 14:38:02,802.802 INFO:__main__:
2019-05-23 14:38:02,802.802 INFO:__main__:FAILED (failures=1)
2019-05-23 14:38:02,803.803 INFO:__main__:
2019-05-23 14:38:02,803.803 INFO:__main__:======================================================================
2019-05-23 14:38:02,803.803 INFO:__main__:FAIL: test_selftest_config_update (tasks.mgr.test_module_selftest.TestModuleSelftest)
2019-05-23 14:38:02,803.803 INFO:__main__:----------------------------------------------------------------------
2019-05-23 14:38:02,803.803 INFO:__main__:Traceback (most recent call last):
2019-05-23 14:38:02,804.804 INFO:__main__: File "/home/sebastian/Repos/ceph/qa/tasks/mgr/test_module_selftest.py", line 94, in test_selftest_config_update
2019-05-23 14:38:02,804.804 INFO:__main__: self.assertEqual(get_value(), "None")
2019-05-23 14:38:02,804.804 INFO:__main__:AssertionError: 'testvalue' != 'None'
</pre>
<p>running with vstart_runner</p>