Ceph : Issueshttps://tracker.ceph.com/https://tracker.ceph.com/favicon.ico2020-09-14T14:25:51ZCeph
Redmine teuthology - Bug #47441 (Closed): teuthology/task/install: verify_package_version: RuntimeError: ...https://tracker.ceph.com/issues/474412020-09-14T14:25:51ZSebastian Wagner
<pre>
2020-09-14T13:32:56.135 INFO:teuthology.packaging:The installed version of ceph is 16.0.0-5509.g7f41e68.el8
2020-09-14T13:32:56.136 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/contextutil.py", line 31, in nested
vars.append(enter())
File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/install/__init__.py", line 218, in install
install_packages(ctx, package_list, config)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/install/__init__.py", line 87, in install_packages
verify_package_version(ctx, config, remote)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/install/__init__.py", line 61, in verify_package_version
pkg=pkg_to_check
RuntimeError: ceph version 16.0.0-5509.g7f41e68c8af was not installed, found 16.0.0-5509.g7f41e68.el8.
</pre>
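<p>The mismatch is purely textual: teuthology expects a version carrying the full git SHA (<code>g7f41e68c8af</code>) while the installed RPM reports an abbreviated SHA plus a dist tag (<code>g7f41e68.el8</code>). A comparison that normalizes both sides would tolerate this; below is a hedged Python sketch (a hypothetical helper, not teuthology's actual <code>verify_package_version</code>):</p>

```python
import re

def same_build(expected, installed):
    """Compare two ceph package versions, tolerating an abbreviated
    git SHA and a trailing distro tag such as '.el8'."""
    def parse(version):
        version = re.sub(r'\.(el|fc)\d+$', '', version)  # drop dist tag
        base, _, sha = version.partition('.g')
        return base, sha
    exp_base, exp_sha = parse(expected)
    ins_base, ins_sha = parse(installed)
    if exp_base != ins_base or not exp_sha or not ins_sha:
        return False
    # Accept the pair when one SHA is a prefix of the other.
    return exp_sha.startswith(ins_sha) or ins_sha.startswith(exp_sha)
```

<p>With this check, <code>same_build('16.0.0-5509.g7f41e68c8af', '16.0.0-5509.g7f41e68.el8')</code> is accepted while a genuinely different SHA or base version is still rejected.</p>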
<p>It looks like the builds were duplicated; see <a class="external" href="https://shaman.ceph.com/repos/ceph/wip-swagner-testing-2020-09-14-1230/7f41e68c8afa3f6a917ca548770374067fdb433f/">https://shaman.ceph.com/repos/ceph/wip-swagner-testing-2020-09-14-1230/7f41e68c8afa3f6a917ca548770374067fdb433f/</a></p> teuthology - Feature #46834 (New): Presets for teuthology-suitehttps://tracker.ceph.com/issues/468342020-08-05T08:53:16ZSebastian Wagner
<p>When scheduling cephadm runs, I typically schedule them like so:</p>
<pre>
--suite rados/cephadm --subset 0/3
</pre>
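<p>A preset could be as simple as a named table of arguments that <code>teuthology-suite</code> expands before parsing. A minimal sketch, assuming a hypothetical <code>--preset</code> flag and illustrative preset names (none of this exists in teuthology today):</p>

```python
# Hypothetical preset table; component leads could maintain their own entries.
PRESETS = {
    'cephadm':   ['--suite', 'rados/cephadm', '--subset', '0/3'],
    'dashboard': ['--suite', 'rados/mgr/dashboard'],
}

def expand_preset(argv):
    """Replace a leading '--preset NAME' with its stored arguments."""
    if argv[:1] == ['--preset']:
        return PRESETS[argv[1]] + argv[2:]
    return argv
```

<p><code>expand_preset(['--preset', 'cephadm', '--priority', '75'])</code> would then yield the familiar <code>--suite rados/cephadm --subset 0/3 --priority 75</code>.</p>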
<p>But that invocation isn't obvious to anyone else. I'd like an easy way to schedule runs for specific components in Ceph, like the dashboard or CephFS, without first having to ask the component lead for clues on how to schedule them.</p> rgw-testing - Bug #46734 (Resolved): unittest_rgw_dmclock_scheduler: Queue.SyncRequest: ***Timeou...https://tracker.ceph.com/issues/467342020-07-28T10:46:32ZSebastian Wagner
<pre>
204/204 Test #183: unittest_rgw_dmclock_scheduler ............***Timeout 3600.01 sec
did not load config file, using default settings.
[==========] Running 8 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 8 tests from Queue
[ RUN ] Queue.SyncRequest
2020-07-27T20:34:39.555+0000 7fec06b58c80 -1 Errors while parsing config file!
2020-07-27T20:34:39.555+0000 7fec06b58c80 -1 parse_file: filesystem error: cannot get file size: No such file or directory [ceph.conf]
2020-07-27T20:34:39.555+0000 7fec06b58c80 -1 Errors while parsing config file!
2020-07-27T20:34:39.555+0000 7fec06b58c80 -1 parse_file: filesystem error: cannot get file size: No such file or directory [ceph.conf]
99% tests passed, 1 tests failed out of 204
Total Test time (real) = 3620.01 sec
The following tests FAILED:
183 - unittest_rgw_dmclock_scheduler (Timeout)
Errors while running CTest
Build step 'Execute shell' marked build as failure
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/56416/consoleFull#1569702623e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/56416/consoleFull#1569702623e840cee4-f4a4-4183-81dd-42855615f2c1</a></p> sepia - Bug #46336 (New): https://download-cc-rdu01.fedoraproject.org is unreliablehttps://tracker.ceph.com/issues/463362020-07-03T09:58:16ZSebastian Wagner
<pre>
2020-07-03T09:05:49.488 INFO:teuthology.orchestra.run.smithi058:> sudo yum -y install ceph-test
2020-07-03T09:05:49.626 INFO:teuthology.orchestra.run.smithi195.stdout:Transaction test succeeded.
2020-07-03T09:05:49.627 INFO:teuthology.orchestra.run.smithi195.stdout:Running transaction
2020-07-03T09:05:49.924 INFO:teuthology.orchestra.run.smithi058.stdout:Last metadata expiration check: 0:00:36 ago on Fri 03 Jul 2020 09:05:13 AM UTC.
2020-07-03T09:05:50.065 INFO:teuthology.orchestra.run.smithi195.stdout: Preparing : 1/1
2020-07-03T09:05:50.238 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : libxslt-1.1.32-3.el8.x86_64 1/6
2020-07-03T09:05:50.310 INFO:teuthology.orchestra.run.smithi058.stdout:Dependencies resolved.
2020-07-03T09:05:50.311 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.311 INFO:teuthology.orchestra.run.smithi058.stdout: Package Arch Version Repository Size
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout:Installing:
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout: ceph-test x86_64 2:16.0.0-3122.ge1d6abcdc6f.el8 ceph 45 M
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout:Installing dependencies:
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout: jq x86_64 1.5-12.el8 CentOS-AppStream 161 k
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout: oniguruma x86_64 6.8.2-1.el8 CentOS-AppStream 188 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: socat x86_64 1.7.3.2-6.el8 CentOS-AppStream 298 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: libxslt x86_64 1.1.32-3.el8 CentOS-Base 249 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: xmlstarlet x86_64 1.6.1-11.el8 epel 69 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:Transaction Summary
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:Install 6 Packages
2020-07-03T09:05:50.316 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Total download size: 46 M
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Installed size: 194 M
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Downloading Packages:
2020-07-03T09:05:50.348 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] jq-1.5-12.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/jq-1.5-12.el8.x86_64.rpm
2020-07-03T09:05:50.348 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] oniguruma-6.8.2-1.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/oniguruma-6.8.2-1.el8.x86_64.rpm
2020-07-03T09:05:50.394 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : xmlstarlet-1.6.1-11.el8.x86_64 2/6
2020-07-03T09:05:50.454 INFO:teuthology.orchestra.run.smithi058.stdout:(1/6): jq-1.5-12.el8.x86_64.rpm 1.1 MB/s | 161 kB 00:00
2020-07-03T09:05:50.463 INFO:teuthology.orchestra.run.smithi058.stdout:(2/6): oniguruma-6.8.2-1.el8.x86_64.rpm 1.2 MB/s | 188 kB 00:00
2020-07-03T09:05:50.487 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.491 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.492 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://mirror.linux.duke.edu/pub/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.502 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://packages.oit.ncsu.edu/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.508 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 404 for http://mirror.linux.duke.edu/pub/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.508 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 404 for http://packages.oit.ncsu.edu/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.539 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://distro.ibiblio.org/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.539 INFO:teuthology.orchestra.run.smithi058.stdout:[FAILED] socat-1.7.3.2-6.el8.x86_64.rpm: No more mirrors to try - All mirrors were already tried without success
2020-07-03T09:05:50.541 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.542 INFO:teuthology.orchestra.run.smithi058.stdout:The downloaded packages were saved in cache until the next successful transaction.
2020-07-03T09:05:50.542 INFO:teuthology.orchestra.run.smithi058.stdout:You can remove cached packages by executing 'dnf clean packages'.
2020-07-03T09:05:50.555 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : socat-1.7.3.2-6.el8.x86_64 3/6
2020-07-03T09:05:50.615 INFO:teuthology.orchestra.run.smithi058.stderr:Error: Error downloading packages:
2020-07-03T09:05:50.615 INFO:teuthology.orchestra.run.smithi058.stderr: Cannot download Packages/socat-1.7.3.2-6.el8.x86_64.rpm: All mirrors were tried
2
</pre>
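<p>Until the mirror is fixed or dropped from the mirrorlist, a cheap pre-flight probe can tell whether a given mirror is currently serving 5xx responses. A minimal sketch (hypothetical helper, not part of teuthology):</p>

```python
import urllib.error
import urllib.request

def mirror_ok(url, timeout=10):
    """HEAD-probe a mirror URL; False on 5xx or any connection failure."""
    req = urllib.request.Request(url, method='HEAD')
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        return exc.code < 500
    except OSError:
        return False
```

<p>Probing the base repo URL of each configured mirror before installing packages would turn the 503s above into an early, explicit skip instead of a mid-install failure.</p>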
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-07-03_08:12:34-rados:cephadm-wip-swagner-testing-2020-07-02-1034-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2020-07-03_08:12:34-rados:cephadm-wip-swagner-testing-2020-07-02-1034-distro-basic-smithi/</a></p> teuthology - Bug #46300 (Resolved): SELinux: denied { module_request } for comm="ksmtuned" kmod=...https://tracker.ceph.com/issues/463002020-07-01T13:11:21ZSebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-07-01_10:16:24-rados:cephadm-wip-swagner3-testing-2020-07-01-1013-distro-basic-smithi/5194327/">https://pulpito.ceph.com/swagner-2020-07-01_10:16:24-rados:cephadm-wip-swagner3-testing-2020-07-01-1013-distro-basic-smithi/5194327/</a></p>
<p>Saw this today in a PR run:<br /><pre>
2020-07-01T11:23:01.692 INFO:teuthology.orchestra.run.smithi071:> sudo grep -a 'avc: .*denied' /var/log/audit/audit.log | grep -av '\(comm="dmidecode"\|chronyd.service\|name="cephtest"\|scontext=system_u:system_r:nrpe_t:s0\|scontext=system_u:system_r:pcp_pmlogger_t\|scontext=system_u:system_r:pcp_pmcd_t:s0\|comm="rhsmd"\|scontext=system_u:system_r:syslogd_t:s0\|tcontext=system_u:system_r:nrpe_t:s0\|comm="updatedb"\|comm="smartd"\|comm="rhsmcertd-worke"\|comm="setroubleshootd"\|comm="rpm"\|tcontext=system_u:object_r:container_runtime_exec_t:s0\|scontext=system_u:system_r:logrotate_t:s0\)'
2020-07-01T11:23:01.722 DEBUG:teuthology.orchestra.run:got remote process result: 1
2020-07-01T11:23:01.723 ERROR:teuthology.run_tasks:Manager failed: selinux
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 171, in run_tasks
suppress = manager.__exit__(*exc_info)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 136, in __exit__
self.teardown()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/selinux.py", line 158, in teardown
self.get_new_denials()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/selinux.py", line 208, in get_new_denials
denials=new_denials[remote.name])
teuthology.exceptions.SELinuxError: SELinux denials found on ubuntu@smithi174.front.sepia.ceph.com: ['type=AVC msg=audit(1593601294.109:4683): avc: denied { module_request } for pid=18957 comm="ksmtuned" kmod="binfmt-464c" scontext=system_u:system_r:ksmtuned_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=system permissive=1']
</pre></p>
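<p>The selinux task compares denials against a hard-coded ignore list (visible in the long <code>grep -av</code> above); if this <code>ksmtuned</code> denial is environmental noise, the likely fix is one more ignore pattern. A sketch of the filtering logic in Python (illustrative, not teuthology's actual <code>selinux.py</code>):</p>

```python
import re

# Illustrative subset of the ignore patterns; adding comm="ksmtuned"
# would suppress the denial seen in this run.
IGNORE = [r'comm="dmidecode"', r'comm="ksmtuned"']

def new_denials(audit_lines):
    """Return AVC denial lines not matched by any ignore pattern."""
    denials = [line for line in audit_lines
               if 'avc:' in line and 'denied' in line]
    return [d for d in denials
            if not any(re.search(p, d) for p in IGNORE)]
```
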
<p>No clue where this comes from. It might have been introduced by one of the PRs I tested, but they all look unrelated:</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/35850">https://github.com/ceph/ceph/pull/35850</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/35846">https://github.com/ceph/ceph/pull/35846</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/35816">https://github.com/ceph/ceph/pull/35816</a></li>
<li><a class="external" href="https://github.com/ceph/ceph/pull/35747">https://github.com/ceph/ceph/pull/35747</a></li>
</ul> sepia - Bug #46299 (Closed): Trying to pull docker.io/prom/prometheus:v2.18.1: too many request t...https://tracker.ceph.com/issues/462992020-07-01T10:35:53ZSebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-07-01_09:26:21-rados:cephadm-wip-swagner-testing-2020-07-01-0956-distro-basic-smithi/5194228/">https://pulpito.ceph.com/swagner-2020-07-01_09:26:21-rados:cephadm-wip-swagner-testing-2020-07-01-0956-distro-basic-smithi/5194228/</a></p>
<pre>
2020-07-01T10:04:28.253 INFO:tasks.cephadm:Adding local image mirror vossi04.front.sepia.ceph.com:5000
2020-07-01T10:04:28.301 DEBUG:teuthology.orchestra.remote:smithi189:/etc/containers/registries.conf is 4KB
2020-07-01T10:04:28.340 INFO:teuthology.orchestra.run.smithi189:> sudo sh -c 'cat > /etc/containers/registries.conf'
2020-07-01T10:04:28.400 DEBUG:teuthology.orchestra.remote:smithi205:/etc/containers/registries.conf is 4KB
2020-07-01T10:04:28.447 INFO:teuthology.orchestra.run.smithi205:> sudo sh -c 'cat > /etc/containers/registries.conf'
...
2020-07-01T10:14:26.759 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: debug 2020-07-01T10:14:26.700+0000 7f9405ffb700 -1 log_channel(cephadm) log [ERR] : cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon prometheus.a ...
2020-07-01T10:14:26.759 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:Verifying port 9095 ...
2020-07-01T10:14:26.760 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:Non-zero exit code 125 from /bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=prom/prometheus:v2.18.1 -e NODE_NAME=smithi205 --entrypoint stat prom/prometheus:v2.18.1 -c %u %g /etc/prometheus
2020-07-01T10:14:26.760 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr Trying to pull registry.access.redhat.com/prom/prometheus:v2.18.1...
2020-07-01T10:14:26.760 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr name unknown: Repo not found
2020-07-01T10:14:26.760 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr Trying to pull registry.fedoraproject.org/prom/prometheus:v2.18.1...
2020-07-01T10:14:26.760 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr manifest unknown: manifest unknown
2020-07-01T10:14:26.761 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr Trying to pull registry.centos.org/prom/prometheus:v2.18.1...
2020-07-01T10:14:26.761 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr manifest unknown: manifest unknown
2020-07-01T10:14:26.761 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr Trying to pull docker.io/prom/prometheus:v2.18.1...
2020-07-01T10:14:26.761 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr time="2020-07-01T10:10:18Z" level=error msg="HEADER map[Cache-Control:[no-cache] Content-Type:[application/json] Retry-After:[60]]"
2020-07-01T10:14:26.761 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr time="2020-07-01T10:11:20Z" level=error msg="HEADER map[Cache-Control:[no-cache] Content-Type:[application/json] Retry-After:[60]]"
2020-07-01T10:14:26.762 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr time="2020-07-01T10:12:22Z" level=error msg="HEADER map[Cache-Control:[no-cache] Content-Type:[application/json] Retry-After:[60]]"
2020-07-01T10:14:26.763 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr time="2020-07-01T10:13:24Z" level=error msg="HEADER map[Cache-Control:[no-cache] Content-Type:[application/json] Retry-After:[60]]"
2020-07-01T10:14:26.763 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr too many request to registry
2020-07-01T10:14:26.763 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr Error: unable to pull prom/prometheus:v2.18.1: 4 errors occurred:
2020-07-01T10:14:26.763 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr * Error initializing source docker://registry.access.redhat.com/prom/prometheus:v2.18.1: Error reading manifest v2.18.1 in registry.access.redhat.com/prom/prometheus: name unknown: Repo not found
2020-07-01T10:14:26.763 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr * Error initializing source docker://registry.fedoraproject.org/prom/prometheus:v2.18.1: Error reading manifest v2.18.1 in registry.fedoraproject.org/prom/prometheus: manifest unknown: manifest unknown
2020-07-01T10:14:26.764 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr * Error initializing source docker://registry.centos.org/prom/prometheus:v2.18.1: Error reading manifest v2.18.1 in registry.centos.org/prom/prometheus: manifest unknown: manifest unknown
2020-07-01T10:14:26.764 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr * Error parsing image configuration: too many request to registry
2020-07-01T10:14:26.764 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: INFO:cephadm:stat:stderr
2020-07-01T10:14:26.764 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: Traceback (most recent call last):
2020-07-01T10:14:26.764 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: File "<stdin>", line 4847, in <module>
2020-07-01T10:14:26.765 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: File "<stdin>", line 1187, in _default_image
2020-07-01T10:14:26.765 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: File "<stdin>", line 2886, in command_deploy
2020-07-01T10:14:26.765 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: File "<stdin>", line 2818, in extract_uid_gid_monitoring
2020-07-01T10:14:26.765 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: File "<stdin>", line 1803, in extract_uid_gid
2020-07-01T10:14:26.766 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: File "<stdin>", line 2280, in run
2020-07-01T10:14:26.766 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: File "<stdin>", line 866, in call_throws
2020-07-01T10:14:26.766 INFO:journalctl@ceph.mgr.x.smithi205.stdout:Jul 01 10:14:26 smithi205 bash[31753]: RuntimeError: Failed command: /bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=prom/prometheus:v2.18.1 -e NODE_NAME=smithi205 --entrypoint stat prom/prometheus:v2.18.1 -c %u %g /etc/prometheus
</pre>
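<p>Pointing podman at the local mirror means editing the search list in <code>/etc/containers/registries.conf</code> on the test nodes. A hedged sketch of the stanza (v1 TOML format; whether the sepia mirror also needs to be listed as insecure is an assumption):</p>

```toml
[registries.search]
registries = ['docker-mirror.front.sepia.ceph.com:5000', 'docker.io']

[registries.insecure]
registries = ['docker-mirror.front.sepia.ceph.com:5000']
```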
<p>Changing the registry mirror to <code>docker-mirror.front.sepia.ceph.com:5000</code> should work. I just can't do that myself.</p> sepia - Bug #46154 (New): unable to pull ceph/ceph-grafana: connection reset by peerhttps://tracker.ceph.com/issues/461542020-06-23T13:20:34ZSebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-smithi/5172323/">http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-smithi/5172323/</a></p>
<pre>
2020-06-23T12:20:41.349 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Deploy daemon grafana.a ...
2020-06-23T12:20:41.350 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Verifying port 3000 ...
2020-06-23T12:20:46.563 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Non-zero exit code 125 from /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi198 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Trying to pull docker.io/ceph/ceph-grafana:latest...
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Getting image source signatures
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Copying blob sha256:003efafe5a84678b585af8a06810c47079aa4705e60d07f1c31a52f0e35ce0b5
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr read tcp 172.21.15.198:55100->104.18.125.25:443: read: connection reset by peer
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Error: unable to pull ceph/ceph-grafana:latest: 1 error occurred:
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr * Error writing blob: error storing blob to file "/var/tmp/storage459839576/1": read tcp 172.21.15.198:55100->104.18.125.25:443: read: connection reset by peer
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr
2020-06-23T12:20:46.571 INFO:tasks.workunit.client.0.smithi198.stderr:Traceback (most recent call last):
2020-06-23T12:20:46.571 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 4825, in <module>
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: r = args.func()
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 1182, in _default_image
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: return func()
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2863, in command_deploy
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: uid, gid = extract_uid_gid_monitoring(daemon_type)
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2799, in extract_uid_gid_monitoring
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: uid, gid = extract_uid_gid(file_path='/var/lib/grafana')
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 1798, in extract_uid_gid
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: args=['-c', '%u %g', file_path]
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2275, in run
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr: self.run_cmd(), desc=self.entrypoint, timeout=timeout)
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 861, in call_throws
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr: raise RuntimeError('Failed command: %s' % ' '.join(command))
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr:RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi198 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
</pre>
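<p>The TCP reset looks transient, so a bounded retry with backoff around the pull would likely paper over it; the traceback above shows <code>call_throws</code> giving up after the first attempt. A generic sketch (hypothetical helper, not cephadm's actual code):</p>

```python
import time

def retry(operation, attempts=3, base_delay=1.0):
    """Call operation() until it returns True; back off exponentially
    between attempts. Returns False once attempts are exhausted."""
    for i in range(attempts):
        if operation():
            return True
        if i < attempts - 1:
            time.sleep(base_delay * (2 ** i))
    return False
```

<p>Here the <code>podman pull</code> (or the <code>stat</code> probe that triggers it) would be the <code>operation</code>.</p>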
<p>Does this mean we have to retry fetching containers?</p> teuthology - Feature #45722 (New): split requirements.txt into teuthology/requirements.txt and c...https://tracker.ceph.com/issues/457222020-05-27T08:26:28ZSebastian Wagner
<p>It's super irritating to have the dependencies of /ceph/qa specified in teuthology/requirements.txt; this makes it awkward and hard to justify adding any dependencies to ceph tasks, workunits, or tests.</p>
<p>It would be great to have the dependencies of /ceph/qa specified in /ceph/qa itself (either in a standalone /ceph/qa/requirements.txt or via a proper Python package, e.g. /ceph/qa/setup.py)</p>
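<p>As a sketch, the qa dependencies could live in a minimal <code>/ceph/qa/setup.py</code>; the package name and the example dependency list below are illustrative assumptions, not the actual qa requirements:</p>

```python
# Hypothetical /ceph/qa/setup.py metadata, keeping qa dependencies next
# to the qa code. Name and dependencies are examples only.
from setuptools import find_packages

QA_METADATA = dict(
    name='ceph-qa',
    packages=find_packages(where='.'),
    install_requires=['PyYAML', 'six'],  # examples only
)
```

<p>A real setup.py would end with <code>setup(**QA_METADATA)</code>; teuthology could then <code>pip install</code> the qa tree instead of carrying its dependencies itself.</p>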
<p>See <a class="external" href="https://github.com/ceph/teuthology/pull/1493">https://github.com/ceph/teuthology/pull/1493</a> for an example</p> teuthology - Bug #45583 (New): teuthology-suite: "--subset" combined with "--filter" generates du...https://tracker.ceph.com/issues/455832020-05-18T11:03:34ZSebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/</a></p>
<p>scheduled via</p>
<pre>
teuthology-suite -k distro --priority 75 --suite rados --filter cephadm --subset 1135/9999 --email swagner@suse.com --ceph wip-swagner-testing-2020-05-15-2348 --machine-type smithi
</pre>
<p>scheduled</p>
<ul>
<li><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066708">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066708</a> </li>
<li><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066741">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066741</a></li>
</ul>
<p>both having the description:</p>
<pre>
rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{rhel_8.0.yaml} fixed-2.yaml}
</pre> teuthology - Bug #45442 (New): ubuntu 20.02: Hang on: "The following packages will be REMOVED:"https://tracker.ceph.com/issues/454422020-05-08T07:43:16ZSebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/5030975/">http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/5030975/</a></p>
<pre>
2020-05-07T17:31:41.061 INFO:teuthology.orchestra.run.smithi086:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2020-05-07T17:31:41.179 INFO:teuthology.orchestra.run.smithi086.stdout:Reading package lists...
2020-05-07T17:31:41.315 INFO:teuthology.orchestra.run.smithi086.stdout:Building dependency tree...
2020-05-07T17:31:41.315 INFO:teuthology.orchestra.run.smithi086.stdout:Reading state information...
2020-05-07T17:31:41.442 INFO:teuthology.orchestra.run.smithi086.stdout:The following packages were automatically installed and are no longer required:
2020-05-07T17:31:41.443 INFO:teuthology.orchestra.run.smithi086.stdout: ceph-mon ceph-osd libboost-iostreams1.71.0
2020-05-07T17:31:41.443 INFO:teuthology.orchestra.run.smithi086.stdout:Use 'sudo apt autoremove' to remove them.
2020-05-07T17:31:41.455 INFO:teuthology.orchestra.run.smithi086.stdout:The following packages will be REMOVED:
2020-05-07T17:31:41.456 INFO:teuthology.orchestra.run.smithi086.stdout: ceph*
2020-05-08T05:03:17.376 DEBUG:teuthology.exit:Got signal 15; running 2 handlers...
2020-05-08T05:03:17.396 DEBUG:teuthology.task.console_log:Killing console logger for smithi086
</pre>
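<p>The purge sat for almost twelve hours before the run got SIGTERMed, so a per-command timeout would at least turn the hang into a visible failure. A generic sketch (hypothetical; not how teuthology's orchestra currently wraps apt-get):</p>

```python
import subprocess

def run_with_timeout(cmd, timeout):
    """Run a command, but give up after `timeout` seconds instead of
    blocking the whole run; returns True only on a clean exit."""
    try:
        return subprocess.run(cmd, timeout=timeout).returncode == 0
    except subprocess.TimeoutExpired:
        return False
```

<p>Each <code>apt-get ... purge $d</code> in the loop above could be wrapped this way.</p>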
<p>It looks as if <code>-y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"</code> is still not enough.</p> sepia - Bug #45009 (Closed): https://download.ceph.com/keys/release.asc: ignored as the file has ...https://tracker.ceph.com/issues/450092020-04-09T07:47:51ZSebastian Wagner
<p><a class="external" href="https://download.ceph.com/keys/release.asc">https://download.ceph.com/keys/release.asc</a> is in a file format that apt does not understand:</p>
<pre>
root@buster:~# wget https://download.ceph.com/keys/release.asc
root@buster:~# file release.asc
release.asc: PGP public key block Public-Key (old)
root@buster:~# cp release.asc /etc/apt/trusted.gpg
root@buster:~# apt update
Hit:1 http://httpredir.debian.org/debian buster InRelease
Hit:2 https://download.ceph.com/debian-octopus buster InRelease
Err:2 https://download.ceph.com/debian-octopus buster InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E84AC2C0460F3994
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
W: http://httpredir.debian.org/debian/dists/buster/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg are ignored as the file has an unsupported filetype.
W: https://download.ceph.com/debian-octopus/dists/buster/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg are ignored as the file has an unsupported filetype.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://download.ceph.com/debian-octopus buster InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E84AC2C0460F3994
W: Failed to fetch https://download.ceph.com/debian-octopus/dists/buster/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E84AC2C0460F3994
W: Some index files failed to download. They have been ignored, or old ones used instead.
</pre>
<p>However, after converting the key to a GPG v4 keyring with <code>apt-key add</code>, it works:</p>
<pre>
root@buster:~# apt-key add release.asc
root@buster:~# file /etc/apt/trusted.gpg
/etc/apt/trusted.gpg: PGP/GPG key public ring (v4) created Tue Sep 15 20:56:41 2015 RSA (Encrypt or Sign) 4096 bits MPI=0xcbaa7e8ef94169f9...
root@buster:~# apt update
Hit:1 http://httpredir.debian.org/debian buster InRelease
Get:2 https://download.ceph.com/debian-octopus buster InRelease [8557 B]
Get:3 https://download.ceph.com/debian-octopus buster/main amd64 Packages [15.7 kB]
Fetched 24.2 kB in 4s (6765 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
root@buster:~# apt-key list
/etc/apt/trusted.gpg
--------------------
pub rsa4096 2015-09-15 [SC]
08B7 3419 AC32 B4E9 66C1 A330 E84A C2C0 460F 3994
uid [ unknown] Ceph.com (release key) <security@ceph.com>
</pre>
<p>This has an impact on cephadm, which needs to install gnupg on <strong>all</strong> cluster machines in order to convert the key to GPG v4.</p>
<p>Can we provide a key in the correct format?</p> teuthology - Bug #44181 (New): Error in syslog: task.internal.syslog: random "*BUG*" in log messagehttps://tracker.ceph.com/issues/441812020-02-18T10:33:27ZSebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502">http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502</a></p>
<p>This job failure was caused by</p>
<p><a class="external" href="https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144">https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144</a></p>
<pre>
2020-02-18T06:48:18.388 ERROR:teuthology.task.internal.syslog:Error in syslog on ubuntu@smithi060.front.sepia.ceph.com: /home/ubuntu/cephtest/archive/syslog/misc.log:2020-02-18T06:42:25.361371+00:00 smithi060 bash[10468]: audit 2020-02-18T06:42:24.442267+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14124 172.21.15.60:0/1' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/dashboard/key","val":"-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCnskmhDB10Jk6M\nXNpzP+7hOWVV7TYIeAGSapYoNcgQcPrQU0STPGuyUORmnKO4taTVz8EBvPL4p6Mv\niZFEIhL2OL07UexgDqKaAD4lne/KIhYQJVtkqPu/TYYemxa6xyl/V4LGPrSGYx0C\n8huP7RsqEMNLNMr/wG34hG7LCdGtcWk8aylma8XrXgukEHMsJJIeb+ZZKw3AnZCT\nbO++2B+V5DPtE4LId4x3G1PrVumH/whd6ciTtcImFspCRQgwlaPnDLf68bFXF67G\nBWJZZRFoTCc73fu/rUW1vGYk7WFiVi52WVbpgYrPc/AhWNOaH6d0xooBoohRjAYh\n7GDdVa+LAgMBAAECggEAM1HEZpymht0SPLJNx+dQ22wNLvahCoZvNLeZrESJLT7m\nAsr4uXZMHw3SV/SnxecQwr4JetawJJhowCuBYTBsTR2gC39OrzbLXAWm/ywOLfWw\netBz36I3KJw45zTfB9nbQTUuuCyIYngCcNxWwvz0yzLGEUXeudXR0bP1k/01RbZo\nhGe8oQSJzSN4zmfQtx/rSGCXJr23HUjPs0mVHqml2bZL9UZcuKu5RuN8PWSo0aOM\nZwGYa/1pcoo1OsN3XtujY9tU0Ykrd3rteAARMIBFzrktaWWhSdaiOQS0fYAnyGrX\njI7cjlsbtJfTt741wF0hmCZIGS40+HWTmwCTkapWwQKBgQDQkX4ZHyvFD96W81rz\nXLIdSEfgv0+andTC2v5kvlk4cxIYgic2g1R59gekZioOpVIQG/eCwiSFW5ndqjzI\nSGMj2bflL8vXv5q4EX+TL3W5LWOnR8k7FVxJpPsJbbyX93qbpcU/oKNSt5VDUPaL\neooha6lDP+HEAdfWHqe1PxP+9wKBgQDN1UzVWU8ur2tdlql2BVrwfi/J32/ZUFQG\nCPvC9RMdavZjKITu2Rg5LA6kYOnJ9MvVTU59Mf2c+6kKWQTaRhqTqhfAJYtjvmoC\naTGm7HGPywEOeMphF+LAb23DNcCzQFhBVduOfL8MSkTjJOjmaxyYc2qs+ts87NMt\nqCENAaPrDQKBgHvuV/1ZdkqsOVl81QhShku8DWnQg96d9jSqqAr4yE8woQoLHH3Z\n37JwrO3U/xygw3hrBdGextCvM2hxpZhk2vQMhKcclYVnhunlC+dLhio4fESD9WC0\nOphP/hMGL9Ak76fZArHiI+ocyAat7zHF6JofPP6G0QIFDlle8cxS5PDVAoGBAMRB\nByQ5JkV2HqG6YFNWYdICDuClOQj0DVk/wYSulY4sCUacQLtXpUAF4OQcP20/CgaT\n0i2Ot6ixTwi9veG8i+SVflXHtnLhAETSNfRZZyHaRmSdCSGwW5Rt6jMBkn2W8U9C\nZLgj+yjlu270J1hjcn1tNp4+BUG+8M+Mig7TrI4VAoGAYYltCD4bc2bBAPWnF6nk\nqrx16kKg0kjNdhATkBWt76jpsJYRmyo5NALLaB5/k0dS7ftu
TmGEZLSnNyl44O2B\n7QH6PaRoP0hX5LtLwSZhiJxd6tDrfwMFzpVGiJHeUNKGS/GKQzlvlxUJb2aOhNWu\nMgFlLWfPOMgxiRpwUhtg0Is=\n-----END PRIVATE KEY-----\n"}]: dispatch
</pre>
<p>Unfortunately, the base64-encoded private key in this log message happens to contain the substring "<code>BUG</code>", which the syslog check treats as a kernel bug marker.</p> teuthology - Feature #40972 (New): Make priority field more descriptivehttps://tracker.ceph.com/issues/409722019-07-26T10:26:43ZSebastian Wagner
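<p>One possible mitigation (purely a sketch; the pattern below is hypothetical, not teuthology's actual match list) would be to anchor the kernel-bug patterns so that the letters "BUG" inside a base64 blob cannot match:</p>

```python
import re

# Hypothetical: only treat "BUG" as a kernel bug marker when it appears
# as a standalone token such as "BUG:" or "*BUG*", never as a substring
# of a longer run of base64 characters.
KERNEL_BUG = re.compile(r'\*BUG\*|\bBUG:')

# Genuine kernel bug lines still match:
assert KERNEL_BUG.search('kernel: BUG: unable to handle kernel NULL pointer dereference')
assert KERNEL_BUG.search('random *BUG* in log message')
# The letters B-U-G buried inside a base64-encoded key do not:
assert not KERNEL_BUG.search('cmd=[{"val":"...QIBUGkqhkiG9w0BAQEFAASC..."}]')
```

Whether word-boundary anchoring is sufficient for every marker in the real syslog check would need to be verified against the full list linked above.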
<p>The --priority flag accepts a number, but it's not obvious which number to use.</p>
<p>My proposal would be to accept names as well, for example:<br /><pre>
--priority baseline
--priority pr-run
--priority pr-run-high
--priority pr-run-urgent
--priority pr-run-low
</pre></p>
<p>Each name would correspond to a specific value:</p>
<pre>
{
    'baseline': 1000,
    'pr-run': 100,
    'pr-run-high': 90,
    'pr-run-urgent': 50,
    'pr-run-low': 110,
}[priority]
</pre> teuthology - Bug #40749 (New): /task/ansible.py: AnsibleFailedError: RepresenterError: ('cannot r...https://tracker.ceph.com/issues/407492019-07-12T09:41:13ZSebastian Wagner
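<p>Sticking with the name-to-number mapping proposed above, a minimal sketch of how teuthology-suite could resolve such names (hypothetical; today the flag only accepts integers, and in teuthology a lower number means higher priority):</p>

```python
# Hypothetical resolver for a name-aware --priority flag; the name/value
# pairs mirror the proposal above (lower number = scheduled sooner).
PRIORITY_NAMES = {
    'baseline': 1000,
    'pr-run': 100,
    'pr-run-high': 90,
    'pr-run-urgent': 50,
    'pr-run-low': 110,
}

def parse_priority(value):
    """Map a symbolic name to its numeric priority, or pass raw integers through."""
    if value in PRIORITY_NAMES:
        return PRIORITY_NAMES[value]
    try:
        return int(value)
    except ValueError:
        raise ValueError('unknown priority %r; expected an integer or one of %s'
                         % (value, sorted(PRIORITY_NAMES)))
```

Accepting both forms keeps existing invocations like <code>--priority 101</code> working unchanged.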
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/teuthology.log</a></p>
<pre>
Thursday 11 July 2019 14:06:56 +0000 (0:00:00.272) 0:03:00.166 *********
===============================================================================
Check for /usr/bin/python ---------------------------------------------- 27.06s
2019-07-11T14:06:56.061 INFO:teuthology.task.ansible.out:users : Create all admin users with sudo access. ----------------------- 19.15s
users : Update authorized_keys using the keys repo --------------------- 18.43s
testnode : Zap all non-root disks --------------------------------------- 9.59s
testnode : Ensure packages are not present. ----------------------------- 9.53s
testnode : Install packages --------------------------------------------- 6.20s
testnode : ifdown and ifup ---------------------------------------------- 5.15s
users : Remove revoked users -------------------------------------------- 4.99s
common : Update apt cache ----------------------------------------------- 4.01s
testnode : Update apt cache. -------------------------------------------- 3.65s
testnode : Install python-apt ------------------------------------------- 3.11s
testnode : Blow away lingering OSD data and FSIDs ----------------------- 2.94s
testnode : Install apt keys --------------------------------------------- 2.09s
common : Install nrpe package and dependencies (Ubuntu) ----------------- 1.99s
testnode : Install packages via pip ------------------------------------- 1.72s
users : Update authorized_keys for each user with literal keys ---------- 1.72s
ansible-managed : Add authorized keys for the ansible user. ------------- 1.59s
Gathering Facts --------------------------------------------------------- 1.59s
testnode : Stop apache2 ------------------------------------------------- 1.45s
common : Upload megacli and cli64 for raid monitoring and smart.pl to /usr/sbin/. --- 1.18s
2019-07-11T14:06:56.319 ERROR:teuthology.task.ansible:Failed to parse ansible failure log: /tmp/teuth_ansible_failures_mF91TY (while parsing a flow mapping
in "/tmp/teuth_ansible_failures_mF91TY", line 1, column 54
expected ',' or '}', but got ':'
in "/tmp/teuth_ansible_failures_mF91TY", line 1, column 274)
2019-07-11T14:06:56.320 INFO:teuthology.task.ansible:Archiving ansible failure log at: /home/teuthworker/archive/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml
2019-07-11T14:06:56.323 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 426, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 268, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 295, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 319, in _handle_failure
raise AnsibleFailedError(failures)
AnsibleFailedError: 7
/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'mira062')
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml">http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml</a></p>
<pre>
Failure object was: {'mira062.front.sepia.ceph.com': {'_ansible_no_log': False, u'invocation': {u'module_args': {u'name': u'mira062'}}, 'changed': False, u'msg': u"Command failed rc=1, out=, err=Could not get property: Failed to activate service 'org.freedesktop.hostname1': timed out\n"}}
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'mira062')
</pre>
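<p>The trace is consistent with <code>yaml.safe_dump</code> refusing to serialize a <code>str</code> subclass: Ansible wraps inventory strings in wrapper types (e.g. <code>AnsibleUnsafeText</code>), and SafeRepresenter only knows exact builtin types, so such values fall through to <code>represent_undefined()</code>. A minimal reproduction and a possible coercion fix; the class name here is a stand-in, not Ansible's actual import path:</p>

```python
import yaml

class UnsafeText(str):
    """Stand-in for Ansible's str-subclass wrapper types."""

failure = {'mira062.front.sepia.ceph.com': {'msg': UnsafeText('mira062')}}

# safe_dump has no representer registered for the subclass, so it raises
# RepresenterError('cannot represent an object', ...) as seen in the log.
try:
    yaml.safe_dump(failure)
except yaml.representer.RepresenterError:
    pass

def to_plain(obj):
    """Recursively coerce str subclasses back to plain str before dumping."""
    if isinstance(obj, dict):
        return {to_plain(k): to_plain(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_plain(x) for x in obj]
    if isinstance(obj, str):
        return str(obj)
    return obj

dumped = yaml.safe_dump(to_plain(failure))
```

If the failure-logging callback coerced values this way, the hostname timeout would be reported as a plain Ansible failure instead of being masked by the YAML error.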
<p>Is this a Teuthology issue, a ceph-cm-ansible issue, or just a consequence of mira062 timing out?</p> sepia - Support #37709 (Resolved): Sepia Lab Access Requesthttps://tracker.ceph.com/issues/377092018-12-19T11:48:25ZSebastian Wagner
<p>1) Do you just need VPN access or will you also be running teuthology jobs?</p>
<p>I'm going to need access to Teuthology and to the lab's k8s cluster.</p>
<p>2) Desired Username:</p>
<p>swagner</p>
<p>3) Alternate e-mail address(es) we can reach you at:</p>
<p><a class="email" href="mailto:cephth@spawnhost.de">cephth@spawnhost.de</a></p>
<p>4) If you don't already have an established history of code contributions to Ceph, is there an existing community or core developer you've worked with who has reviewed your work and can vouch for your access request?</p>
<p><a class="external" href="https://github.com/ceph/ceph/commit/e0eb2dbd98d930eb0bd5b29b051f3639fc805c40#diff-5e08bfe65cc656745656d8042a5fd8b8">https://github.com/ceph/ceph/commit/e0eb2dbd98d930eb0bd5b29b051f3639fc805c40#diff-5e08bfe65cc656745656d8042a5fd8b8</a></p>
<p style="padding-left:2em;">If you answered "No" to # 4, please answer the following (paste directly below the question to keep indentation):</p>
<p style="padding-left:2em;">4a) Paste a link to a Blueprint or planning doc of yours that was reviewed at a Ceph Developer Monthly.</p>
<p style="padding-left:2em;">4b) Paste a link to an accepted pull request for a major patch or feature.</p>
<p style="padding-left:2em;">4c) If applicable, include a link to the current project (planning doc, dev branch, or pull request) that you are looking to test.</p>
<p>5) Paste your SSH public key(s) between the <code>pre</code> tags<br /><pre>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDh9bzZJulXGES+l9Xh6Aq15RZ8uQCGuDNhlNQtDblE1ISKJ/DqGYXp6wUW54+oVNA7eZiXz+fi3mq5pPEtZZOfd3ixEDzDJ4E3cVXoDZCqeWEmea6KvybeY10YxvB56TEI8U2KKAd56PRl20klXpLCjzXNqG7n0aXFcpCMbXDu369VX6lOk24K/7++7Wc5SttcvVu19sT2kqzsB/S1Y5cxiE6RM6wVtqBoksp/kRCIA16ruNwx3GUabDfbEoUGXlkkUP7+TZgAbHtBsYy6mCQhwi0S2+WG+HhHDUPjHhV+MdN9ffibCtOEGo52itVLVky09VeBocuA6H22JDGXPBgjcgf2NsZIqcKqGHhUkXmH92fhRSFOBLKHstrBq8jWRP/mNrgj8cQksDsakQYQbDg5dyabp+M0/iL2Q3YVq7erZI8aZMA7ZF3WgoQNYZg5E7oejM8URIlFP3x1ne2ClRC9a74phSCxeU/NVamGN3G3dImzEXGOSNyRggHJ4jGrIGc7tLPCzmI5OkomcB5OxReqf0r1TNXuUAqw8M4EoWt+0xoAmH5zVlUHf+psUCJIEV/4pbgtoJiSNq+LVY4jyDEFbvTAL7MqyXorMV7Tqlj+/3d7RXfhW9lR/SHb1Z3jcHvz4ZzYMzKWeJEjz+Y0NwIdFDhcPmUOYtEDmuRRrgYiSQ== sebstian.wagner@it-novum.com</pre></p>
<p>6) Paste your hashed VPN credentials between the <code>pre</code> tags (Format: <code>user@hostname 22CharacterSalt 65CharacterHashedPassword</code>)<br /><pre>swagner@ubuntu HKUxZQFMdbrCq3VhYt+jDQ d0ad7e9f21a90d2c51fee2ef5e87ce9b13b13c2fa81dbaf3361c827ebb9b045a</pre></p>