Ceph : Issues
https://tracker.ceph.com/
rbd - Bug #46875 (New): TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
https://tracker.ceph.com/issues/46875
2020-08-10T01:03:45Z
Sebastian Wagner
<pre>
[ RUN ] TestLibRBD.TestPendingAio
using new format!
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/librbd/test_librbd.cc:4539: Failure
Expected equality of these values:
1
rbd_aio_is_complete(comps[i])
Which is: 0
[ FAILED ] TestLibRBD.TestPendingAio (68 ms)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
sepia - Bug #46336 (New): https://download-cc-rdu01.fedoraproject.org is unreliable
https://tracker.ceph.com/issues/46336
2020-07-03T09:58:16Z
Sebastian Wagner
<pre>
2020-07-03T09:05:49.488 INFO:teuthology.orchestra.run.smithi058:> sudo yum -y install ceph-test
2020-07-03T09:05:49.626 INFO:teuthology.orchestra.run.smithi195.stdout:Transaction test succeeded.
2020-07-03T09:05:49.627 INFO:teuthology.orchestra.run.smithi195.stdout:Running transaction
2020-07-03T09:05:49.924 INFO:teuthology.orchestra.run.smithi058.stdout:Last metadata expiration check: 0:00:36 ago on Fri 03 Jul 2020 09:05:13 AM UTC.
2020-07-03T09:05:50.065 INFO:teuthology.orchestra.run.smithi195.stdout: Preparing : 1/1
2020-07-03T09:05:50.238 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : libxslt-1.1.32-3.el8.x86_64 1/6
2020-07-03T09:05:50.310 INFO:teuthology.orchestra.run.smithi058.stdout:Dependencies resolved.
2020-07-03T09:05:50.311 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.311 INFO:teuthology.orchestra.run.smithi058.stdout: Package Arch Version Repository Size
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout:Installing:
2020-07-03T09:05:50.312 INFO:teuthology.orchestra.run.smithi058.stdout: ceph-test x86_64 2:16.0.0-3122.ge1d6abcdc6f.el8 ceph 45 M
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout:Installing dependencies:
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout: jq x86_64 1.5-12.el8 CentOS-AppStream 161 k
2020-07-03T09:05:50.313 INFO:teuthology.orchestra.run.smithi058.stdout: oniguruma x86_64 6.8.2-1.el8 CentOS-AppStream 188 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: socat x86_64 1.7.3.2-6.el8 CentOS-AppStream 298 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: libxslt x86_64 1.1.32-3.el8 CentOS-Base 249 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout: xmlstarlet x86_64 1.6.1-11.el8 epel 69 k
2020-07-03T09:05:50.314 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:Transaction Summary
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:================================================================================
2020-07-03T09:05:50.315 INFO:teuthology.orchestra.run.smithi058.stdout:Install 6 Packages
2020-07-03T09:05:50.316 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Total download size: 46 M
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Installed size: 194 M
2020-07-03T09:05:50.317 INFO:teuthology.orchestra.run.smithi058.stdout:Downloading Packages:
2020-07-03T09:05:50.348 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] jq-1.5-12.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/jq-1.5-12.el8.x86_64.rpm
2020-07-03T09:05:50.348 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] oniguruma-6.8.2-1.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/oniguruma-6.8.2-1.el8.x86_64.rpm
2020-07-03T09:05:50.394 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : xmlstarlet-1.6.1-11.el8.x86_64 2/6
2020-07-03T09:05:50.454 INFO:teuthology.orchestra.run.smithi058.stdout:(1/6): jq-1.5-12.el8.x86_64.rpm 1.1 MB/s | 161 kB 00:00
2020-07-03T09:05:50.463 INFO:teuthology.orchestra.run.smithi058.stdout:(2/6): oniguruma-6.8.2-1.el8.x86_64.rpm 1.2 MB/s | 188 kB 00:00
2020-07-03T09:05:50.487 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.491 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 503 for https://download-cc-rdu01.fedoraproject.org/pub/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.492 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://mirror.linux.duke.edu/pub/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.502 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://packages.oit.ncsu.edu/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.508 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 404 for http://mirror.linux.duke.edu/pub/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.508 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] libxslt-1.1.32-3.el8.x86_64.rpm: Status code: 404 for http://packages.oit.ncsu.edu/centos/8/BaseOS/x86_64/os/Packages/libxslt-1.1.32-3.el8.x86_64.rpm
2020-07-03T09:05:50.539 INFO:teuthology.orchestra.run.smithi058.stdout:[MIRROR] socat-1.7.3.2-6.el8.x86_64.rpm: Status code: 404 for http://distro.ibiblio.org/centos/8/AppStream/x86_64/os/Packages/socat-1.7.3.2-6.el8.x86_64.rpm
2020-07-03T09:05:50.539 INFO:teuthology.orchestra.run.smithi058.stdout:[FAILED] socat-1.7.3.2-6.el8.x86_64.rpm: No more mirrors to try - All mirrors were already tried without success
2020-07-03T09:05:50.541 INFO:teuthology.orchestra.run.smithi058.stdout:
2020-07-03T09:05:50.542 INFO:teuthology.orchestra.run.smithi058.stdout:The downloaded packages were saved in cache until the next successful transaction.
2020-07-03T09:05:50.542 INFO:teuthology.orchestra.run.smithi058.stdout:You can remove cached packages by executing 'dnf clean packages'.
2020-07-03T09:05:50.555 INFO:teuthology.orchestra.run.smithi195.stdout: Installing : socat-1.7.3.2-6.el8.x86_64 3/6
2020-07-03T09:05:50.615 INFO:teuthology.orchestra.run.smithi058.stderr:Error: Error downloading packages:
2020-07-03T09:05:50.615 INFO:teuthology.orchestra.run.smithi058.stderr: Cannot download Packages/socat-1.7.3.2-6.el8.x86_64.rpm: All mirrors were tried
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-07-03_08:12:34-rados:cephadm-wip-swagner-testing-2020-07-02-1034-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2020-07-03_08:12:34-rados:cephadm-wip-swagner-testing-2020-07-02-1034-distro-basic-smithi/</a></p>
RADOS - Bug #46178 (Duplicate): slow request osd_op(... (undecoded) ondisk+retry+read+ignore_over...
https://tracker.ceph.com/issues/46178
2020-06-24T12:57:47Z
Sebastian Wagner
<p>Saw this error yesterday for the first time:</p>
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-06-23_13:15:09-rados:cephadm-wip-swagner3-testing-2020-06-23-1058-distro-basic-smithi/5172444">http://pulpito.ceph.com/swagner-2020-06-23_13:15:09-rados:cephadm-wip-swagner3-testing-2020-06-23-1058-distro-basic-smithi/5172444</a></p>
<pre>
2020-06-23T14:14:24.479 INFO:tasks.cephadm:Deploying osd.1 on smithi140 with /dev/vg_nvme/lv_3...
...
2020-06-24T01:44:38.508 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532712+0000 osd.1 (osd.1) 951804 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T17:02:28.014118+0000 currently delayed
2020-06-24T01:44:38.508 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532721+0000 osd.1 (osd.1) 951805 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T01:02:28.112645+0000 currently delayed
2020-06-24T01:44:38.508 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532732+0000 osd.1 (osd.1) 951806 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T17:17:28.017258+0000 currently delayed
2020-06-24T01:44:38.508 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532741+0000 osd.1 (osd.1) 951807 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T01:17:28.116826+0000 currently delayed
2020-06-24T01:44:38.509 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532749+0000 osd.1 (osd.1) 951808 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T17:32:28.021231+0000 currently delayed
2020-06-24T01:44:38.509 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532758+0000 osd.1 (osd.1) 951809 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T01:32:28.117176+0000 currently delayed
2020-06-24T01:44:38.509 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532770+0000 osd.1 (osd.1) 951810 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T17:47:28.021867+0000 currently delayed
2020-06-24T01:44:38.509 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:38 smithi180 bash[11465]: cluster 2020-06-24T01:44:37.532795+0000 osd.1 (osd.1) 951811 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T18:02:28.024273+0000 currently delayed
2020-06-24T01:44:38.779 INFO:ceph.osd.1.smithi140.stdout:Jun 24 01:44:38 smithi140 bash[20025]: debug 2020-06-24T01:44:38.512+0000 7f660a6f2700 -1 osd.1 49 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+read+ignore_overlay+known_if_redirected e49)
2020-06-24T01:44:39.499 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.476728+0000 mgr.x (mgr.34109) 20737 : cluster [DBG] pgmap v20741: 33 pgs: 3 creating+peering, 30 active+clean; 780 B data, 3.4 MiB used, 707 GiB / 715 GiB avail
2020-06-24T01:44:39.500 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515272+0000 osd.1 (osd.1) 951812 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T18:17:28.028627+0000 currently delayed
2020-06-24T01:44:39.500 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515294+0000 osd.1 (osd.1) 951813 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T18:32:28.033173+0000 currently delayed
2020-06-24T01:44:39.500 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515312+0000 osd.1 (osd.1) 951814 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T18:47:28.037863+0000 currently delayed
2020-06-24T01:44:39.501 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515329+0000 osd.1 (osd.1) 951815 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T19:02:28.037117+0000 currently delayed
2020-06-24T01:44:39.501 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515344+0000 osd.1 (osd.1) 951816 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T19:17:28.041383+0000 currently delayed
2020-06-24T01:44:39.501 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515363+0000 osd.1 (osd.1) 951817 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-23T19:32:28.045582+0000 currently delayed
2020-06-24T01:44:39.501 INFO:ceph.mon.b.smithi180.stdout:Jun 24 01:44:39 smithi180 bash[11465]: cluster 2020-06-24T01:44:38.515379+0000 osd.1 (osd.1) 951818 : cluster [WRN] slow request osd_op(client.34367.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overla
</pre>
<p>Now it happened again:</p>
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-06-24_11:29:20-rados:cephadm-wip-swagner-testing-2020-06-24-1032-distro-basic-smithi/5175427/">http://pulpito.ceph.com/swagner-2020-06-24_11:29:20-rados:cephadm-wip-swagner-testing-2020-06-24-1032-distro-basic-smithi/5175427/</a></p>
<pre>
2020-06-24T11:56:54.575 INFO:tasks.cephadm:Deploying osd.1 on smithi118 with /dev/vg_nvme/lv_3...
...
7f44d2a96700 -1 osd.1 49 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+read+ignore_overlay+known_if_redirected e49)
0000 mgr.x (mgr.34103) 1527 : cluster [DBG] pgmap v1531: 33 pgs: 3 creating+peering, 30 active+clean; 780 B data, 3.9 MiB used, 707 GiB / 715 GiB avail
0000 osd.1 (osd.1) 5934 : cluster [WRN] slow request osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T11:58:48.006893+0000 currently delayed
0000 osd.1 (osd.1) 5935 : cluster [WRN] slow request osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T12:13:48.005126+0000 currently delayed
0000 osd.1 (osd.1) 5936 : cluster [WRN] slow request osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T12:28:48.005918+0000 currently delayed
0000 osd.1 (osd.1) 5937 : cluster [WRN] slow request osd_op(client.34343.0:13 2.a 2.a (undecoded) ondisk+retry+read+ignore_overlay+known_if_redirected e49) initiated 2020-06-24T12:43:48.009047+0000 currently delayed
</pre>
<p>Unfortunately, I don't know where this comes from.</p>
teuthology - Bug #45583 (New): teuthology-suite: "--subset" combined with "--filter" generates du...
https://tracker.ceph.com/issues/45583
2020-05-18T11:03:34Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/</a></p>
<p>scheduled via</p>
<pre>
teuthology-suite -k distro --priority 75 --suite rados --filter cephadm --subset 1135/9999 --email swagner@suse.com --ceph wip-swagner-testing-2020-05-15-2348 --machine-type smithi
</pre>
<p>scheduled</p>
<ul>
<li><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066708">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066708</a> </li>
<li><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066741">http://pulpito.ceph.com/swagner-2020-05-18_08:24:15-rados-wip-swagner-testing-2020-05-15-2348-distro-basic-smithi/5066741</a></li>
</ul>
<p>both having the description:</p>
<pre>
rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml 3-wait.yaml distro$/{rhel_8.0.yaml} fixed-2.yaml}
</pre>
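<p>A minimal sketch of the behavior one would expect instead (illustrative only, not the actual teuthology-suite code): after the <code>--subset</code> and <code>--filter</code> passes, the generated job list could be deduplicated by description before anything is scheduled.</p>
<pre>
# Illustrative sketch -- the job dicts and the "description" key are
# assumptions, not teuthology's real internal structures.
def dedup_jobs(jobs):
    """Drop jobs whose description was already seen, keeping order."""
    seen = set()
    unique = []
    for job in jobs:
        if job["description"] not in seen:
            seen.add(job["description"])
            unique.append(job)
    return unique

desc = ("rados/cephadm/upgrade/{1-start.yaml 2-start-upgrade.yaml "
        "3-wait.yaml distro$/{rhel_8.0.yaml} fixed-2.yaml}")
assert len(dedup_jobs([{"description": desc}, {"description": desc}])) == 1
</pre>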
teuthology - Bug #45442 (New): ubuntu 20.04: Hang on: "The following packages will be REMOVED:"
https://tracker.ceph.com/issues/45442
2020-05-08T07:43:16Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/5030975/">http://pulpito.ceph.com/swagner-2020-05-07_16:10:50-rados-wip-swagner2-testing-2020-05-07-1308-distro-basic-smithi/5030975/</a></p>
<pre>
2020-05-07T17:31:41.061 INFO:teuthology.orchestra.run.smithi086:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2020-05-07T17:31:41.179 INFO:teuthology.orchestra.run.smithi086.stdout:Reading package lists...
2020-05-07T17:31:41.315 INFO:teuthology.orchestra.run.smithi086.stdout:Building dependency tree...
2020-05-07T17:31:41.315 INFO:teuthology.orchestra.run.smithi086.stdout:Reading state information...
2020-05-07T17:31:41.442 INFO:teuthology.orchestra.run.smithi086.stdout:The following packages were automatically installed and are no longer required:
2020-05-07T17:31:41.443 INFO:teuthology.orchestra.run.smithi086.stdout: ceph-mon ceph-osd libboost-iostreams1.71.0
2020-05-07T17:31:41.443 INFO:teuthology.orchestra.run.smithi086.stdout:Use 'sudo apt autoremove' to remove them.
2020-05-07T17:31:41.455 INFO:teuthology.orchestra.run.smithi086.stdout:The following packages will be REMOVED:
2020-05-07T17:31:41.456 INFO:teuthology.orchestra.run.smithi086.stdout: ceph*
2020-05-08T05:03:17.376 DEBUG:teuthology.exit:Got signal 15; running 2 handlers...
2020-05-08T05:03:17.396 DEBUG:teuthology.task.console_log:Killing console logger for smithi086
</pre>
<p>It looks as though <code>-y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"</code> is still not enough.</p>
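<p>Since teuthology drives these commands from Python, one defensive option (a sketch under assumptions, not the actual teuthology code) would be to bound each purge with a hard timeout, so a hung apt prompt fails fast instead of stalling the job for almost twelve hours until the SIGTERM above:</p>
<pre>
# Hypothetical sketch: bound each "apt-get purge" with a timeout so a
# hung prompt surfaces as an error instead of a silent hang.
import subprocess

PACKAGES = ["ceph", "cephadm", "ceph-mds"]  # list shortened for brevity

def purge(package, timeout=600):
    cmd = [
        "sudo", "DEBIAN_FRONTEND=noninteractive", "apt-get",
        "-y", "-o", "Dpkg::Options::=--force-confdef",
        "-o", "Dpkg::Options::=--force-confold", "purge", package,
    ]
    try:
        subprocess.run(cmd, check=False, timeout=timeout)
    except subprocess.TimeoutExpired:
        raise RuntimeError(f"purge of {package} hung for {timeout}s")

for pkg in PACKAGES:
    purge(pkg)
</pre>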
RADOS - Feature #45079 (New): HEALTH_WARN, if require-osd-release is < mimic and OSD wants to joi...
https://tracker.ceph.com/issues/45079
2020-04-14T10:25:33Z
Sebastian Wagner
<p>When upgrading a cluster to Octopus, users should get a warning if require-osd-release is < mimic, as this prevents OSDs from joining the cluster.</p>
<p>Right now, we only get an INF entry in the logs:</p>
<pre>
cluster [INF] disallowing boot of octopus+ OSD osd.1 v1:172.16.1.25:6800/3051821808 because require_osd
</pre>
<p>This should be a HEALTH_WARN instead.</p>
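<p>Until that exists, something like the following can surface the condition from the operator side (a rough sketch; <code>require_osd_release</code> does appear in <code>ceph osd dump --format json</code>, but treat the rest as assumptions):</p>
<pre>
# Sketch: warn if require-osd-release predates mimic, which makes the
# mons refuse booting octopus+ OSDs as in the INF message above.
import json
import subprocess

RELEASES = ["jewel", "kraken", "luminous", "mimic", "nautilus", "octopus"]

def check_require_osd_release():
    out = subprocess.check_output(
        ["ceph", "osd", "dump", "--format", "json"])
    release = json.loads(out).get("require_osd_release", "")
    if release in RELEASES and RELEASES.index(release) < RELEASES.index("mimic"):
        print(f"WARNING: require-osd-release is '{release}' (< mimic); "
              "octopus+ OSDs will not be allowed to boot")

check_require_osd_release()
</pre>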
teuthology - Bug #44181 (New): Error in syslog: task.internal.syslog: random "*BUG*" in log message
https://tracker.ceph.com/issues/44181
2020-02-18T10:33:27Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502">http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502</a></p>
<p>This job failure was caused by</p>
<p><a class="external" href="https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144">https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144</a></p>
<pre>
2020-02-18T06:48:18.388 ERROR:teuthology.task.internal.syslog:Error in syslog on ubuntu@smithi060.front.sepia.ceph.com: /home/ubuntu/cephtest/archive/syslog/misc.log:2020-02-18T06:42:25.361371+00:00 smithi060 bash[10468]: audit 2020-02-18T06:42:24.442267+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14124 172.21.15.60:0/1' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/dashboard/key","val":"-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCnskmhDB10Jk6M\nXNpzP+7hOWVV7TYIeAGSapYoNcgQcPrQU0STPGuyUORmnKO4taTVz8EBvPL4p6Mv\niZFEIhL2OL07UexgDqKaAD4lne/KIhYQJVtkqPu/TYYemxa6xyl/V4LGPrSGYx0C\n8huP7RsqEMNLNMr/wG34hG7LCdGtcWk8aylma8XrXgukEHMsJJIeb+ZZKw3AnZCT\nbO++2B+V5DPtE4LId4x3G1PrVumH/whd6ciTtcImFspCRQgwlaPnDLf68bFXF67G\nBWJZZRFoTCc73fu/rUW1vGYk7WFiVi52WVbpgYrPc/AhWNOaH6d0xooBoohRjAYh\n7GDdVa+LAgMBAAECggEAM1HEZpymht0SPLJNx+dQ22wNLvahCoZvNLeZrESJLT7m\nAsr4uXZMHw3SV/SnxecQwr4JetawJJhowCuBYTBsTR2gC39OrzbLXAWm/ywOLfWw\netBz36I3KJw45zTfB9nbQTUuuCyIYngCcNxWwvz0yzLGEUXeudXR0bP1k/01RbZo\nhGe8oQSJzSN4zmfQtx/rSGCXJr23HUjPs0mVHqml2bZL9UZcuKu5RuN8PWSo0aOM\nZwGYa/1pcoo1OsN3XtujY9tU0Ykrd3rteAARMIBFzrktaWWhSdaiOQS0fYAnyGrX\njI7cjlsbtJfTt741wF0hmCZIGS40+HWTmwCTkapWwQKBgQDQkX4ZHyvFD96W81rz\nXLIdSEfgv0+andTC2v5kvlk4cxIYgic2g1R59gekZioOpVIQG/eCwiSFW5ndqjzI\nSGMj2bflL8vXv5q4EX+TL3W5LWOnR8k7FVxJpPsJbbyX93qbpcU/oKNSt5VDUPaL\neooha6lDP+HEAdfWHqe1PxP+9wKBgQDN1UzVWU8ur2tdlql2BVrwfi/J32/ZUFQG\nCPvC9RMdavZjKITu2Rg5LA6kYOnJ9MvVTU59Mf2c+6kKWQTaRhqTqhfAJYtjvmoC\naTGm7HGPywEOeMphF+LAb23DNcCzQFhBVduOfL8MSkTjJOjmaxyYc2qs+ts87NMt\nqCENAaPrDQKBgHvuV/1ZdkqsOVl81QhShku8DWnQg96d9jSqqAr4yE8woQoLHH3Z\n37JwrO3U/xygw3hrBdGextCvM2hxpZhk2vQMhKcclYVnhunlC+dLhio4fESD9WC0\nOphP/hMGL9Ak76fZArHiI+ocyAat7zHF6JofPP6G0QIFDlle8cxS5PDVAoGBAMRB\nByQ5JkV2HqG6YFNWYdICDuClOQj0DVk/wYSulY4sCUacQLtXpUAF4OQcP20/CgaT\n0i2Ot6ixTwi9veG8i+SVflXHtnLhAETSNfRZZyHaRmSdCSGwW5Rt6jMBkn2W8U9C\nZLgj+yjlu270J1hjcn1tNp4+BUG+8M+Mig7TrI4VAoGAYYltCD4bc2bBAPWnF6nk\nqrx16kKg0kjNdhATkBWt76jpsJYRmyo5NALLaB5/k0dS7ftuTmGEZLSnNyl44O2B\n7QH6PaRoP0hX5LtLwSZhiJxd6tDrfwMFzpVGiJHeUNKGS/GKQzlvlxUJb2aOhNWu\nMgFlLWfPOMgxiRpwUhtg0Is=\n-----END PRIVATE KEY-----\n"}]: dispatch
</pre>
<p>Unfortunately, this log message contains "<code>BUG</code>" somewhere in the base64-encoded key, which trips the syslog scan.</p>
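<p>A sketch of a stricter match (the patterns below are assumptions for illustration, not teuthology's actual list): anchoring on kernel-style messages avoids false positives from base64 blobs like the key above.</p>
<pre>
# Substring matching trips on base64; matching real kernel oops
# formats ("kernel BUG at ...", "BUG: ...") does not.
import re

naive = re.compile(r"BUG")
kernel_style = re.compile(r"kernel BUG at|\bBUG:\s")

blob = "audit [INF] ... MIIEvQIBUGkqhkiG ..."   # hypothetical base64 containing 'BUG'
oops = "kernel BUG at fs/ceph/inode.c:123!"

assert naive.search(blob) and naive.search(oops)
assert kernel_style.search(oops) and not kernel_style.search(blob)
</pre>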
rbd - Bug #43274 (Need More Info): unittest_rbd_mirror: Exception: SegFault
https://tracker.ceph.com/issues/43274
2019-12-12T09:30:06Z
Sebastian Wagner
<p>Unfortunately, I don't know what exactly went wrong:</p>
<pre>
185/191 Test #184: unittest_rbd_mirror .....................***Exception: SegFault 11.74 sec
[==========] Running 279 tests from 34 test suites.
[----------] Global test environment set-up.
[----------] 13 tests from TestMockImageMap
[ RUN ] TestMockImageMap.SetLocalImages
seed 1526
[ OK ] TestMockImageMap.SetLocalImages (8 ms)
[ RUN ] TestMockImageMap.AddRemoveLocalImage
[ OK ] TestMockImageMap.AddRemoveLocalImage (25 ms)
[ RUN ] TestMockImageMap.AddRemoveRemoteImage
[ OK ] TestMockImageMap.AddRemoveRemoteImage (15 ms)
[ RUN ] TestMockImageMap.AddRemoveRemoteImageDuplicateNotification
[ OK ] TestMockImageMap.AddRemoveRemoteImageDuplicateNotification (5 ms)
[ RUN ] TestMockImageMap.AcquireImageErrorRetry
[ OK ] TestMockImageMap.AcquireImageErrorRetry (2 ms)
[ RUN ] TestMockImageMap.RemoveRemoteAndLocalImage
[ OK ] TestMockImageMap.RemoveRemoteAndLocalImage (2 ms)
[ RUN ] TestMockImageMap.AddInstance
[ OK ] TestMockImageMap.AddInstance (4 ms)
[ RUN ] TestMockImageMap.RemoveInstance
[ OK ] TestMockImageMap.RemoveInstance (7 ms)
[ RUN ] TestMockImageMap.AddInstancePingPongImageTest
[ OK ] TestMockImageMap.AddInstancePingPongImageTest (34 ms)
[ RUN ] TestMockImageMap.RemoveInstanceWithRemoveImage
[ OK ] TestMockImageMap.RemoveInstanceWithRemoveImage (23 ms)
[ RUN ] TestMockImageMap.AddErrorAndRemoveImage
[ OK ] TestMockImageMap.AddErrorAndRemoveImage (35 ms)
[ RUN ] TestMockImageMap.MirrorUUIDUpdated
[ OK ] TestMockImageMap.MirrorUUIDUpdated (44 ms)
[ RUN ] TestMockImageMap.RebalanceImageMap
[ OK ] TestMockImageMap.RebalanceImageMap (40 ms)
[----------] 13 tests from TestMockImageMap (244 ms total)
[----------] 14 tests from TestMockImageReplayer
[ RUN ] TestMockImageReplayer.StartStop
Failed to load class: cas (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so: undefined symbol: _Z13cls_has_chunkPvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
Failed to load class: log (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
Failed to load class: rgw (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so: undefined symbol: _Z19cls_current_versionPv
Failed to load class: user (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
[ OK ] TestMockImageReplayer.StartStop (317 ms)
[ RUN ] TestMockImageReplayer.LocalImagePrimary
[ OK ] TestMockImageReplayer.LocalImagePrimary (146 ms)
[ RUN ] TestMockImageReplayer.LocalImageDNE
[ OK ] TestMockImageReplayer.LocalImageDNE (196 ms)
[ RUN ] TestMockImageReplayer.PrepareLocalImageError
[ OK ] TestMockImageReplayer.PrepareLocalImageError (194 ms)
[ RUN ] TestMockImageReplayer.GetRemoteImageIdDNE
[ OK ] TestMockImageReplayer.GetRemoteImageIdDNE (174 ms)
[ RUN ] TestMockImageReplayer.GetRemoteImageIdNonLinkedDNE
[ OK ] TestMockImageReplayer.GetRemoteImageIdNonLinkedDNE (224 ms)
[ RUN ] TestMockImageReplayer.GetRemoteImageIdError
[ OK ] TestMockImageReplayer.GetRemoteImageIdError (228 ms)
[ RUN ] TestMockImageReplayer.BootstrapError
[ OK ] TestMockImageReplayer.BootstrapError (154 ms)
[ RUN ] TestMockImageReplayer.StopBeforeBootstrap
[ OK ] TestMockImageReplayer.StopBeforeBootstrap (215 ms)
[ RUN ] TestMockImageReplayer.StartExternalReplayError
[ OK ] TestMockImageReplayer.StartExternalReplayError (152 ms)
[ RUN ] TestMockImageReplayer.StopError
[ OK ] TestMockImageReplayer.StopError (169 ms)
[ RUN ] TestMockImageReplayer.Replay
[ OK ] TestMockImageReplayer.Replay (177 ms)
[ RUN ] TestMockImageReplayer.DecodeError
[ OK ] TestMockImageReplayer.DecodeError (157 ms)
[ RUN ] TestMockImageReplayer.DelayedReplay
[ OK ] TestMockImageReplayer.DelayedReplay (2153 ms)
[----------] 14 tests from TestMockImageReplayer (4663 ms total)
[----------] 5 tests from TestMockImageSync
[ RUN ] TestMockImageSync.SimpleSync
[ OK ] TestMockImageSync.SimpleSync (198 ms)
[ RUN ] TestMockImageSync.RestartSync
[ OK ] TestMockImageSync.RestartSync (173 ms)
[ RUN ] TestMockImageSync.CancelNotifySyncRequest
[ OK ] TestMockImageSync.CancelNotifySyncRequest (159 ms)
[ RUN ] TestMockImageSync.CancelImageCopy
[ OK ] TestMockImageSync.CancelImageCopy (195 ms)
[ RUN ] TestMockImageSync.CancelAfterCopyImage
[ OK ] TestMockImageSync.CancelAfterCopyImage (166 ms)
[----------] 5 tests from TestMockImageSync (898 ms total)
[----------] 3 tests from TestMockInstanceReplayer
[ RUN ] TestMockInstanceReplayer.AcquireReleaseImage
[ OK ] TestMockInstanceReplayer.AcquireReleaseImage (16 ms)
[ RUN ] TestMockInstanceReplayer.RemoveFinishedImage
[ OK ] TestMockInstanceReplayer.RemoveFinishedImage (24 ms)
[ RUN ] TestMockInstanceReplayer.Reacquire
[ OK ] TestMockInstanceReplayer.Reacquire (2 ms)
[----------] 3 tests from TestMockInstanceReplayer (42 ms total)
[----------] 11 tests from TestMockInstanceWatcher
[ RUN ] TestMockInstanceWatcher.InitShutdown
[ OK ] TestMockInstanceWatcher.InitShutdown (23 ms)
[ RUN ] TestMockInstanceWatcher.InitError
[ OK ] TestMockInstanceWatcher.InitError (18 ms)
[ RUN ] TestMockInstanceWatcher.ShutdownError
[ OK ] TestMockInstanceWatcher.ShutdownError (15 ms)
[ RUN ] TestMockInstanceWatcher.Remove
[ OK ] TestMockInstanceWatcher.Remove (16 ms)
[ RUN ] TestMockInstanceWatcher.RemoveNoent
[ OK ] TestMockInstanceWatcher.RemoveNoent (12 ms)
[ RUN ] TestMockInstanceWatcher.ImageAcquireRelease
[ OK ] TestMockInstanceWatcher.ImageAcquireRelease (36 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageRemoved
[ OK ] TestMockInstanceWatcher.PeerImageRemoved (36 ms)
[ RUN ] TestMockInstanceWatcher.ImageAcquireReleaseCancel
[ OK ] TestMockInstanceWatcher.ImageAcquireReleaseCancel (31 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageAcquireWatchDNE
[ OK ] TestMockInstanceWatcher.PeerImageAcquireWatchDNE (17 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageReleaseWatchDNE
[ OK ] TestMockInstanceWatcher.PeerImageReleaseWatchDNE (32 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageRemovedCancel
[ OK ] TestMockInstanceWatcher.PeerImageRemovedCancel (12 ms)
[----------] 11 tests from TestMockInstanceWatcher (250 ms total)
[----------] 11 tests from TestMockInstanceWatcher_NotifySync
[ RUN ] TestMockInstanceWatcher_NotifySync.StartStopOnLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartStopOnLeader (48 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.CancelStartedOnLeader
[ OK ] TestMockInstanceWatcher_NotifySync.CancelStartedOnLeader (49 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.StartStopOnNonLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartStopOnNonLeader (36 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.CancelStartedOnNonLeader
[ OK ] TestMockInstanceWatcher_NotifySync.CancelStartedOnNonLeader (41 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.CancelWaitingOnNonLeader
[ OK ] TestMockInstanceWatcher_NotifySync.CancelWaitingOnNonLeader (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.InFlightPrevNotification
[ OK ] TestMockInstanceWatcher_NotifySync.InFlightPrevNotification (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.NoInFlightReleaseAcquireLeader
[ OK ] TestMockInstanceWatcher_NotifySync.NoInFlightReleaseAcquireLeader (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.StartedOnLeaderReleaseLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartedOnLeaderReleaseLeader (34 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.WaitingOnLeaderReleaseLeader
[ OK ] TestMockInstanceWatcher_NotifySync.WaitingOnLeaderReleaseLeader (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.StartedOnNonLeaderAcquireLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartedOnNonLeaderAcquireLeader (29 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.WaitingOnNonLeaderAcquireLeader
[ OK ] TestMockInstanceWatcher_NotifySync.WaitingOnNonLeaderAcquireLeader (34 ms)
[----------] 11 tests from TestMockInstanceWatcher_NotifySync (456 ms total)
[----------] 4 tests from TestMockLeaderWatcher
[ RUN ] TestMockLeaderWatcher.InitShutdown
[ OK ] TestMockLeaderWatcher.InitShutdown (33 ms)
[ RUN ] TestMockLeaderWatcher.InitReleaseShutdown
[ OK ] TestMockLeaderWatcher.InitReleaseShutdown (19 ms)
[ RUN ] TestMockLeaderWatcher.AcquireError
[ OK ] TestMockLeaderWatcher.AcquireError (12 ms)
[ RUN ] TestMockLeaderWatcher.Break
[ OK ] TestMockLeaderWatcher.Break (2012 ms)
[----------] 4 tests from TestMockLeaderWatcher (2076 ms total)
[----------] 12 tests from TestMockMirrorStatusUpdater
[ RUN ] TestMockMirrorStatusUpdater.InitShutDown
[ OK ] TestMockMirrorStatusUpdater.InitShutDown (13 ms)
[ RUN ] TestMockMirrorStatusUpdater.InitStatusWatcherError
[ OK ] TestMockMirrorStatusUpdater.InitStatusWatcherError (26 ms)
[ RUN ] TestMockMirrorStatusUpdater.ShutDownStatusWatcherError
[ OK ] TestMockMirrorStatusUpdater.ShutDownStatusWatcherError (14 ms)
[ RUN ] TestMockMirrorStatusUpdater.SmallBatch
[ OK ] TestMockMirrorStatusUpdater.SmallBatch (24 ms)
[ RUN ] TestMockMirrorStatusUpdater.LargeBatch
[ OK ] TestMockMirrorStatusUpdater.LargeBatch (30 ms)
[ RUN ] TestMockMirrorStatusUpdater.OverwriteStatus
[ OK ] TestMockMirrorStatusUpdater.OverwriteStatus (11 ms)
[ RUN ] TestMockMirrorStatusUpdater.OverwriteStatusInFlight
[ OK ] TestMockMirrorStatusUpdater.OverwriteStatusInFlight (7 ms)
[ RUN ] TestMockMirrorStatusUpdater.ImmediateUpdate
[ OK ] TestMockMirrorStatusUpdater.ImmediateUpdate (9 ms)
[ RUN ] TestMockMirrorStatusUpdater.RemoveIdleStatus
[ OK ] TestMockMirrorStatusUpdater.RemoveIdleStatus (20 ms)
[ RUN ] TestMockMirrorStatusUpdater.RemoveInFlightStatus
[ OK ] TestMockMirrorStatusUpdater.RemoveInFlightStatus (9 ms)
[ RUN ] TestMockMirrorStatusUpdater.ShutDownWhileUpdating
[ OK ] TestMockMirrorStatusUpdater.ShutDownWhileUpdating (14 ms)
[ RUN ] TestMockMirrorStatusUpdater.MirrorPeerSitePing
[ OK ] TestMockMirrorStatusUpdater.MirrorPeerSitePing (24 ms)
[----------] 12 tests from TestMockMirrorStatusUpdater (201 ms total)
[----------] 6 tests from TestMockNamespaceReplayer
[ RUN ] TestMockNamespaceReplayer.Init_LocalMirrorStatusUpdaterError
[ OK ] TestMockNamespaceReplayer.Init_LocalMirrorStatusUpdaterError (55 ms)
[ RUN ] TestMockNamespaceReplayer.Init_RemoteMirrorStatusUpdaterError
[ OK ] TestMockNamespaceReplayer.Init_RemoteMirrorStatusUpdaterError (32 ms)
[ RUN ] TestMockNamespaceReplayer.Init_InstanceReplayerError
[ OK ] TestMockNamespaceReplayer.Init_InstanceReplayerError (12 ms)
[ RUN ] TestMockNamespaceReplayer.Init_InstanceWatcherError
[ OK ] TestMockNamespaceReplayer.Init_InstanceWatcherError (20 ms)
[ RUN ] TestMockNamespaceReplayer.Init
[ OK ] TestMockNamespaceReplayer.Init (16 ms)
[ RUN ] TestMockNamespaceReplayer.AcuqireLeader
[ OK ] TestMockNamespaceReplayer.AcuqireLeader (9 ms)
[----------] 6 tests from TestMockNamespaceReplayer (144 ms total)
[----------] 4 tests from TestMockPoolReplayer
[ RUN ] TestMockPoolReplayer.ConfigKeyOverride
[ OK ] TestMockPoolReplayer.ConfigKeyOverride (47 ms)
[ RUN ] TestMockPoolReplayer.AcquireReleaseLeader
[ OK ] TestMockPoolReplayer.AcquireReleaseLeader (55 ms)
[ RUN ] TestMockPoolReplayer.Namespaces
[ OK ] TestMockPoolReplayer.Namespaces (2075 ms)
[ RUN ] TestMockPoolReplayer.NamespacesError
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/40443/console">https://jenkins.ceph.com/job/ceph-pull-requests/40443/console</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/pull/32182">https://github.com/ceph/ceph/pull/32182</a></p>
rbd - Bug #42768 (Duplicate): unittest_journal: TestFutureImpl.Getters failed: Timeout
https://tracker.ceph.com/issues/42768
2019-11-12T11:43:33Z
Sebastian Wagner
<p>This might be a rare deadlock?</p>
<pre>
189/189 Test #129: unittest_journal ........................***Timeout 3600.11 sec
did not load config file, using default settings.
[==========] Running 117 tests from 11 test suites.
[----------] Global test environment set-up.
[----------] 14 tests from TestFutureImpl
2019-11-11T18:46:38.616+0000 7f6bcf064e80 -1 Errors while parsing config file!
2019-11-11T18:46:38.616+0000 7f6bcf064e80 -1 parse_file: filesystem error: cannot get file size: No such file or directory [ceph.conf]
2019-11-11T18:46:38.616+0000 7f6bcf064e80 -1 Errors while parsing config file!
2019-11-11T18:46:38.616+0000 7f6bcf064e80 -1 parse_file: filesystem error: cannot get file size: No such file or directory [ceph.conf]
[ RUN ] TestFutureImpl.Getters
Failed to load class: cas (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so: undefined symbol: _Z13cls_has_chunkPvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
2019-11-11T18:46:38.672+0000 7f6bcf064e80 0 <cls> /home/jenkins-build/build/workspace/ceph-pull-requests/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2019-11-11T18:46:38.676+0000 7f6bcf064e80 0 <cls> /home/jenkins-build/build/workspace/ceph-pull-requests/src/cls/hello/cls_hello.cc:313: loading cls_hello
Failed to load class: log (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
Failed to load class: rgw (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so: undefined symbol: _Z19cls_current_versionPv
Failed to load class: user (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
[ OK ] TestFutureImpl.Getters (57 ms)
[ RUN ] TestFutureImpl.Attach
[ OK ] TestFutureImpl.Attach (8 ms)
[ RUN ] TestFutureImpl.AttachWithPendingFlush
[ OK ] TestFutureImpl.AttachWithPendingFlush (28 ms)
... snip successful tests ...
[ RUN ] TestObjectRecorder.AppendFlushByCount
[ OK ] TestObjectRecorder.AppendFlushByCount (12 ms)
[ RUN ] TestObjectRecorder.AppendFlushByBytes
[ OK ] TestObjectRecorder.AppendFlushByBytes (9 ms)
[ RUN ] TestObjectRecorder.AppendFlushByAge
[ OK ] TestObjectRecorder.AppendFlushByAge (11 ms)
[ RUN ] TestObjectRecorder.AppendFilledObject
99% tests passed, 1 tests failed out of 189
Total Test time (real) = 3830.24 sec
The following tests FAILED:
129 - unittest_journal (Timeout)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/38349/console">https://jenkins.ceph.com/job/ceph-pull-requests/38349/console</a></p>
rbd - Bug #41931 (Closed): mgr/rbd_support: TypeError: '>' not supported between instances of 'st...
https://tracker.ceph.com/issues/41931
2019-09-19T12:20:08Z
Sebastian Wagner
<p>Hi, I've been getting this exception recently:</p>
<pre>
2019-09-19T14:15:44.614+0200 7f5e73f84700 0 mgr[rbd_support] Fatal runtime error: '>' not supported between instances of 'str' and 'int'
Traceback (most recent call last):
File "/home/sebastian/Repos/ceph/src/pybind/mgr/rbd_support/module.py", line 165, in run
self.query_condition.wait(stats_period)
File "/usr/lib/python3.6/threading.py", line 298, in wait
if timeout > 0:
TypeError: '>' not supported between instances of 'str' and 'int'
</pre>
<p>This is likely a Python 3 issue: Python 2 silently allowed ordering comparisons between <code>str</code> and <code>int</code>, so an unconverted config value only blows up now.</p>
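<p>A minimal reproduction and fix sketch (the variable name follows the traceback; everything else is illustrative):</p>
<pre>
# Config option values arrive as strings unless coerced; on Python 3,
# threading.Condition.wait() then evaluates `timeout > 0` and raises
# the TypeError above. Coercing before waiting fixes it.
import threading

cond = threading.Condition()
stats_period = "1"                  # a string, as read from config

try:
    with cond:
        cond.wait(stats_period)     # TypeError: '>' not supported ...
except TypeError as e:
    print(e)

with cond:
    cond.wait(float(stats_period))  # returns after ~1 second
</pre>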
rgw - Bug #40902 (Duplicate): make check: unittest_rgw_reshard_wait failed (ReshardWait.wait_yield)
https://tracker.ceph.com/issues/40902
2019-07-23T09:13:40Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/29742">https://jenkins.ceph.com/job/ceph-pull-requests/29742</a></p>
<pre>
155/178 Test #162: unittest_rgw_reshard_wait ...............***Failed 1.06 sec
Running main() from gmock_main.cc
[==========] Running 5 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 5 tests from ReshardWait
[ RUN ] ReshardWait.wait_block
[ OK ] ReshardWait.wait_block (10 ms)
[ RUN ] ReshardWait.stop_block
[ OK ] ReshardWait.stop_block (13 ms)
[ RUN ] ReshardWait.wait_yield
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/rgw/test_rgw_reshard_wait.cc:72: Failure
Expected equality of these values:
1u
Which is: 1
context.poll()
Which is: 2
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/rgw/test_rgw_reshard_wait.cc:73: Failure
Value of: context.stopped()
Actual: true
Expected: false
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/rgw/test_rgw_reshard_wait.cc:75: Failure
Expected equality of these values:
1u
Which is: 1
context.run_one()
Which is: 0
[ FAILED ] ReshardWait.wait_yield (15 ms)
[ RUN ] ReshardWait.stop_yield
[ OK ] ReshardWait.stop_yield (10 ms)
[ RUN ] ReshardWait.stop_multiple
[ OK ] ReshardWait.stop_multiple (20 ms)
[----------] 5 tests from ReshardWait (68 ms total)
[----------] Global test environment tear-down
[==========] 5 tests from 1 test suite ran. (68 ms total)
[ PASSED ] 4 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] ReshardWait.wait_yield
1 FAILED TEST
</pre>
<p>Sorry, but I cannot provide any details, as the PR was not related to this failure.</p>
rgw - Documentation #38721 (Resolved): Remove OpenStack Kilo reference in the keystone documentation
https://tracker.ceph.com/issues/38721
2019-03-13T12:14:49Z
Sebastian Wagner
<p>Kilo has been EOL since 2016: <a class="external" href="https://releases.openstack.org/">https://releases.openstack.org/</a></p>
<p>Ocata is the oldest (non-EOLed) version of OpenStack.</p>
<p>Relates to <a class="external" href="http://tracker.ceph.com/issues/18197">http://tracker.ceph.com/issues/18197</a></p>
ceph-volume - Bug #37390 (Resolved): c-v inventory returns invalid JSON
https://tracker.ceph.com/issues/37390
2018-11-26T13:16:39Z
Sebastian Wagner
<p><code>print()</code> emits the Python <code>repr()</code> of the data, which uses single quotes and is therefore invalid JSON.</p>
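<p>What the bug boils down to, in a few lines:</p>
<pre>
# Printing a dict emits its repr(), which uses single quotes and
# Python literals; json.dumps() produces actual JSON.
import json

inventory = {"path": "/dev/sda", "available": True}

print(inventory)              # {'path': '/dev/sda', 'available': True}  <- not JSON
print(json.dumps(inventory))  # {"path": "/dev/sda", "available": true}  <- valid JSON

json.loads(json.dumps(inventory))  # parses fine
# json.loads(str(inventory))       # would raise json.JSONDecodeError
</pre>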
RADOS - Bug #23360 (Duplicate): call to 'ceph osd erasure-code-profile set' asserts the monitors
https://tracker.ceph.com/issues/23360
2018-03-14T14:37:14Z
Sebastian Wagner
<p>I've attached <code>thread apply all bt</code> mixed with <code>thread apply all py-bt</code>.</p>
<p>Threads 38, 35, 34, 32, and 31 are waiting for futex <code>0x55a285204640</code>.</p>
<p>Thread 37 waits in<br />File "/src/pybind/mgr/mgr_module.py", line 71, in wait<br /> self.ev.wait()</p>
<p>AFAICT, all other threads are not part of this deadlock.</p>
rbd - Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
https://tracker.ceph.com/issues/22253
2017-11-27T14:37:58Z
Sebastian Wagner
<p>Environment: quite small vstart cluster.</p>
<p>This is the stack trace:</p>
<pre>
#3 0x00007fffed44711c in __GI___fortify_fail (msg=<optimized out>, msg@entry=0x7fffed4bd441 "stack smashing detected") at fortify_fail.c:37
#4 0x00007fffed4470c0 in __stack_chk_fail () at stack_chk_fail.c:28
#5 0x00007ffff78f0beb in librbd::ImageCtx::perf_start (this=this@entry=0x555555b7bf70, name="librbd-8c39e2ae8944a-rbd-huge2") at /home/sebastian/Repos/ceph/src/librbd/ImageCtx.cc:397
#6 0x00007ffff78f3cb4 in librbd::ImageCtx::init (this=0x555555b7bf70) at /home/sebastian/Repos/ceph/src/librbd/ImageCtx.cc:275
#7 0x00007ffff799dacd in librbd::image::OpenRequest<librbd::ImageCtx>::send_register_watch (this=this@entry=0x555555b7fe60) at /home/sebastian/Repos/ceph/src/librbd/image/OpenRequest.cc:477
#8 0x00007ffff79a3102 in librbd::image::OpenRequest<librbd::ImageCtx>::handle_v2_apply_metadata (this=this@entry=0x555555b7fe60, result=result@entry=0x7fffb77fa374) at /home/sebastian/Repos/ceph/src/librbd/image/OpenRequest.cc:471
#9 0x00007ffff79a351f in librbd::util::detail::rados_state_callback<librbd::image::OpenRequest<librbd::ImageCtx>, &librbd::image::OpenRequest<librbd::ImageCtx>::handle_v2_apply_metadata, true> (c=<optimized out>, arg=0x555555b7fe60) at /home/sebastian/Repos/ceph/src/librbd/Utils.h:39
#10 0x00007ffff75d678d in librados::C_AioComplete::finish (this=0x7fffd0001b60, r=<optimized out>) at /home/sebastian/Repos/ceph/src/librados/AioCompletionImpl.h:169
#11 0x0000555555613949 in Context::complete (this=0x7fffd0001b60, r=<optimized out>) at /home/sebastian/Repos/ceph/src/include/Context.h:70
#12 0x00007fffeeab6010 in Finisher::finisher_thread_entry (this=0x555555acb3e8) at /home/sebastian/Repos/ceph/src/common/Finisher.cc:72
#13 0x00007fffee3a86ba in start_thread (arg=0x7fffb77fe700) at pthread_create.c:333
#14 0x00007fffed4353dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
</pre>