Tasks #20613

jewel v10.2.10

Added by Nathan Cutler 4 months ago. Updated about 2 months ago.

Status: Resolved
Priority: Urgent
Start date: 07/13/2017
% Done: 0%
Needs Doc: No

Description

Workflow

  • Preparing the release
    • Nathan patches upgrade/jewel-x/point-to-point-x to do 10.2.0 -> current Jewel point release -> x - SKIPPED (this test is currently broken because of a PR prematurely merged to the s3-tests repo; the fix will be in 10.2.10, after which we can update the test and it should pass)
  • Cutting the release
    • Loic asks Abhishek L. if a point release should be published - YES 20170907
    • Loic gets approval from all leads
      • Yehuda, rgw - YES 20170912
      • Patrick, fs - YES 20170907
      • Jason, rbd - YES 20170907
      • Josh, rados - YES 20170907
    • Abhishek L. writes and commits the release notes
    • Nathan informs Yuri that the branch is ready for testing - DONE 20170919
    • Yuri runs additional integration tests
      • If Yuri discovers new bugs that need to be backported urgently (i.e. their priority is set to Urgent or Immediate), the release goes back to being prepared; it was not ready after all
    • Yuri informs Alfredo that the branch is ready for release - DONE
    • Alfredo creates the packages and sets the release tag - DONE
    • Abhishek L. posts release announcement on https://ceph.com/community/blog - DONE https://ceph.com/releases/v10-2-10-jewel-released/

Release information

History

#1 Updated by Nathan Cutler 4 months ago

  • Status changed from New to In Progress
  • Priority changed from Normal to Urgent

#2 Updated by Nathan Cutler 4 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/4a8e1d8d2302849964e39405fa78ce4c6f553378/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#3 Updated by Nathan Cutler 3 months ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
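The --subset $(expr $RANDOM % 50)/50 argument above schedules only one randomly chosen fiftieth of the suite; the slice index is just an integer in 0..49 ($RANDOM is a bash/ksh feature, not plain POSIX sh). A sketch of the idiom with a fixed stand-in value instead of $RANDOM:

```shell
# choose one of 50 equal slices of the rados suite; 137 is a fixed
# stand-in for $RANDOM so the result is deterministic
slice=$(expr 137 % 50)
echo "--subset $slice/50"
# -> --subset 37/50
```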

5 fail, 2 dead, 220 pass (227 total) http://pulpito.ceph.com:80/smithfarm-2017-08-21_19:38:42-rados-wip-jewel-backports-distro-basic-smithi/

  • SELinux denials
  • infrastructure noise? dpkg: error: dpkg status database is locked by another process
  • "EAGAIN: pg 1.0 primary osd.1 not up" on ceph pg scrub
  • 1547951 -> test_mon_ping: ceph ping 'mon.*' causes /usr/bin/python to dump core (!) in conjunction with injected socket failure
  • 1547956 -> failed test ObjectStore/StoreTest.FiemapHoles/3, where GetParam() = "kstore" (test/objectstore/store_test.cc:310: Failure, Expected: (m[SKIP_STEP]) >= (3u), actual: 0 vs 3)

Rerun:

1 fail, 2 dead, 4 pass (7 total) http://pulpito.ceph.com:80/smithfarm-2017-08-22_13:00:32-rados-wip-jewel-backports-distro-basic-smithi/

#4 Updated by Nathan Cutler 3 months ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 999 -l 2 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-21_19:59:29-powercycle-wip-jewel-backports-distro-basic-smithi/

#5 Updated by Nathan Cutler 3 months ago

Upgrade jewel point-to-point-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x/point-to-point-x --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-21_20:00:27-upgrade:jewel-x:point-to-point-x-wip-jewel-backports-distro-basic-vps/

#6 Updated by Nathan Cutler 3 months ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

2 fail, 1 dead, 15 pass (18 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-21_20:01:15-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

Rerun on smithi:

2 fail, 1 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-22_12:32:46-upgrade:hammer-x-wip-jewel-backports-distro-basic-smithi/

  • SELinux denials
  • failed to complete snaptrimming before timeout

#7 Updated by Nathan Cutler 3 months ago

fs

teuthology-suite -k distro --priority 999 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

2 fail, 86 pass (88 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-21_20:02:22-fs-wip-jewel-backports-distro-basic-smithi/

Re-run:

1 fail, 1 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-22_08:31:06-fs-wip-jewel-backports-distro-basic-smithi/

One of the failures appears to be reproducible; rerunning it three times:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-22_11:30:11-fs-wip-jewel-backports-distro-basic-smithi/

Since http://tracker.ceph.com/issues/16881 is a known issue caused by a racy test, ruled a pass.

#8 Updated by Nathan Cutler 3 months ago

rgw

teuthology-suite -k distro --priority 999 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-21_20:04:44-rgw-wip-jewel-backports-distro-basic-smithi/

#9 Updated by Nathan Cutler 3 months ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-21_20:06:49-ceph-disk-wip-jewel-backports-distro-basic-vps/

#10 Updated by Nathan Cutler 3 months ago

rbd

teuthology-suite -k distro --priority 999 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 4)/4

27 fail, 82 pass (109 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-21_20:07:45-rbd-wip-jewel-backports-distro-basic-smithi/

#12 Updated by Nathan Cutler 3 months ago

rbd for PR#14663

teuthology-suite -k distro --priority 999 --suite rbd --email ncutler@suse.com --ceph wip-19228-jewel --machine-type smithi --subset $(expr $RANDOM % 4)/4

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-22_09:54:05-rbd-wip-19228-jewel-distro-basic-smithi/

#13 Updated by Nathan Cutler 3 months ago

ceph-disk for pr#17133

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-21035-jewel --machine-type vps --priority 999 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-22_10:17:52-ceph-disk-wip-21035-jewel-distro-basic-vps/

#14 Updated by Nathan Cutler 3 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/bdfc04f416a87e5c1c0f6010b28ab7be0e3ded2e/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#15 Updated by Nathan Cutler 3 months ago

rados

Rerunning 2 dead and 1 failed job from the earlier run:

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --rerun smithfarm-2017-08-22_13:00:32-rados-wip-jewel-backports-distro-basic-smithi

1 fail, 2 pass (3 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-23_08:55:27-rados-wip-jewel-backports-distro-basic-smithi/

Since the segfault is in KStore, which is not supported, ruled a pass.

#16 Updated by Nathan Cutler 3 months ago

Upgrade hammer-x

Re-running 2 failed jobs from the previous run:

teuthology-suite -k distro --verbose --ceph wip-jewel-backports --machine-type smithi --priority 101 --email ncutler@suse.com --rerun smithfarm-2017-08-22_12:32:46-upgrade:hammer-x-wip-jewel-backports-distro-basic-smithi

1 fail, 1 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-23_08:57:27-upgrade:hammer-x-wip-jewel-backports-distro-basic-smithi/

Failure is SELinux related; ruled a pass

#17 Updated by Nathan Cutler 3 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/8594575b28187a778fcacc2b4313e3506502bbee/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#18 Updated by Nathan Cutler 3 months ago

rgw

teuthology-suite -k distro --priority 999 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-24_15:19:32-rgw-wip-jewel-backports-distro-basic-smithi/

#19 Updated by Nathan Cutler 3 months ago

rbd

teuthology-suite -k distro --priority 999 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 4)/4

regression http://pulpito.ceph.com:80/smithfarm-2017-08-24_15:58:59-rbd-wip-jewel-backports-distro-basic-smithi/

#20 Updated by Nathan Cutler 3 months ago

rados

teuthology-suite -k distro --priority 999 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

2 fail, 1 dead, 224 pass (227 total) http://pulpito.ceph.com:80/smithfarm-2017-08-24_16:00:40-rados-wip-jewel-backports-distro-basic-smithi/

Re-running 2 failed and 1 dead job:

1 fail, 2 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-25_17:29:04-rados-wip-jewel-backports-distro-basic-smithi/

  • Failure is SELinux related, ignoring.

Ruled a pass

#21 Updated by Nathan Cutler 3 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/daf32194cadd7fbe1c96ebee10069e4e4a25738d/ (build failure: env noise)
https://shaman.ceph.com/builds/ceph/wip-jewel-backports/883ecf1b30a9fe876efba26fec4ccbec24bc8b09/ (rebased to trigger a new build)

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#22 Updated by Nathan Cutler 3 months ago

rbd

teuthology-suite -k distro --priority 999 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 4)/4

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-25_17:26:46-rbd-wip-jewel-backports-distro-basic-smithi/

#23 Updated by Nathan Cutler 3 months ago

cephfs note

https://github.com/ceph/ceph/pull/16248 was merged by oversight, without any testing, but if this test run passes it would probably cover it (the SHA1 needs double-checking, though):

running http://pulpito.ceph.com/teuthology-2017-08-27_04:10:02-fs-jewel-distro-basic-smithi/

#24 Updated by Nathan Cutler 3 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/fe8b99d8aec719b5dd9aacaf126db1854f34cdc8/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#25 Updated by Nathan Cutler 3 months ago

rados

teuthology-suite -k distro --priority 999 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

4 fail, 2 dead, 221 pass (227 total) http://pulpito.ceph.com:80/smithfarm-2017-08-27_18:29:25-rados-wip-jewel-backports-distro-basic-smithi/

  • rados/thrash/...thrashers/pggrow.yaml workloads/rgw_snaps.yaml failed because something in s3-tests (read-write tests) apparently sent a negative size to os.urandom(size)
  • rados/thrash/...thrashers/morepggrow.yaml workloads/rgw_snaps.yaml failed because something in s3-tests (read-write tests) apparently sent a negative size to os.urandom(size)

Rerunning 4 failed and 2 dead jobs:

3 fail, 2 dead, 1 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-28_13:25:20-rados-wip-jewel-backports-distro-basic-smithi/

  • rados/thrash/...thrashers/pggrow.yaml workloads/rgw_snaps.yaml failed because it's possibly racing with rgw to create .rgw.control
  • rados/thrash/...thrashers/morepggrow.yaml workloads/rgw_snaps.yaml failed because it's possibly racing with rgw to create .rgw.control

Rerunning 3 failed jobs individually:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-28_14:40:33-rados-wip-jewel-backports-distro-basic-smithi/

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-28_16:11:10-rados-wip-jewel-backports-distro-basic-smithi/

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-28_17:18:59-rados-wip-jewel-backports-distro-basic-smithi/

The 2 dead jobs are objectstore idempotent tests that have been problematic from the beginning and do not pass in master, either. Opened https://github.com/ceph/ceph/pull/17317 to drop them, but Josh determined that this is the known bug http://tracker.ceph.com/issues/20981

Ruled a pass

#26 Updated by Nathan Cutler 3 months ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 999 -l 2 --email ncutler@suse.com

1 fail, 1 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-27_18:55:53-powercycle-wip-jewel-backports-distro-basic-smithi/

Rerunning the failed job:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-28_02:57:08-powercycle-wip-jewel-backports-distro-basic-smithi/

#27 Updated by Nathan Cutler 3 months ago

Upgrade jewel point-to-point-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x/point-to-point-x --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

fail http://pulpito.ceph.com:80/smithfarm-2017-08-27_18:56:14-upgrade:jewel-x:point-to-point-x-wip-jewel-backports-distro-basic-vps/

  • FAIL: s3tests.functional.test_s3.test_versioned_object_acl_no_version_specified

Rerunning:

fail http://pulpito.ceph.com:80/smithfarm-2017-08-28_02:58:24-upgrade:jewel-x:point-to-point-x-wip-jewel-backports-distro-basic-vps/

  • FAIL: s3tests.functional.test_s3.test_versioned_object_acl_no_version_specified

The failure is due to a new test that was merged into the s3-tests repo: https://github.com/ceph/s3-tests/pull/160

Ignoring for now.

#28 Updated by Nathan Cutler 3 months ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

2 fail, 16 pass (18 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-27_18:56:34-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

  • ("bad handshake: SysCallError(-1, 'Unexpected EOF')",)
  • timed out waiting for admin_socket to appear after osd.4 restart

Re-running 2 failed jobs on smithi:

1 fail, 1 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-28_13:27:46-upgrade:hammer-x-wip-jewel-backports-distro-basic-smithi/

  • failure is SELinux-related

Re-running 1 failed job on vps:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-28_16:15:24-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

Ruled a pass

#29 Updated by Nathan Cutler 3 months ago

rgw

teuthology-suite -k distro --priority 999 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-27_18:57:04-rgw-wip-jewel-backports-distro-basic-smithi/

#30 Updated by Nathan Cutler 3 months ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-27_18:58:39-ceph-disk-wip-jewel-backports-distro-basic-vps/

#31 Updated by Nathan Cutler 3 months ago

rbd

teuthology-suite -k distro --priority 999 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 4)/4

1 dead, 106 pass (107 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-27_18:58:54-rbd-wip-jewel-backports-distro-basic-smithi/

Rerunning 1 dead job:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-28_13:28:39-rbd-wip-jewel-backports-distro-basic-smithi/

#32 Updated by Nathan Cutler 3 months ago

Upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-27_19:00:19-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

#33 Updated by Nathan Cutler 3 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/0856b44fc586f2d8620f4841e3e792c34b4affad/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

Bisect:

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/3b2d20d4819d8e43915cce0d95850a7f0971e818/

Bisect:

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/50d2a25f7249e4ea746b6479e2ff90881bc09641/

#34 Updated by Nathan Cutler 3 months ago

  • Description updated (diff)

#35 Updated by Nathan Cutler 3 months ago

rados

teuthology-suite -k distro --priority 999 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

massive failure http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-31_18:01:53-rados-wip-jewel-backports-distro-basic-smithi/

Bisect 3b2d20d4819d8e43915cce0d95850a7f0971e818

teuthology-suite -k distro --priority 999 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --limit 20 --rerun smithfarm-2017-08-31_18:01:53-rados-wip-jewel-backports-distro-basic-smithi

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_06:01:51-rados-wip-jewel-backports-distro-basic-smithi/

Looks like I got lucky.

Bisect 50d2a25f7249e4ea746b6479e2ff90881bc09641

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_12:14:13-rados-wip-jewel-backports-distro-basic-smithi/

Bisect continues:

Now I know that either https://github.com/ceph/ceph/pull/16169 or https://github.com/ceph/ceph/pull/16293 is the culprit.

16169: https://shaman.ceph.com/builds/ceph/wip-jewel-backports/947b78cc61c3750bebe036440a6bf444fd864213/

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_16:06:40-rados-wip-jewel-backports-distro-basic-smithi/

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_17:10:53-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

16293: https://shaman.ceph.com/builds/ceph/wip-20460-jewel/d2eea3f7d59507714b04563c6811a29c8d7120b7/

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_16:07:40-rados-wip-20460-jewel-distro-basic-smithi/

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_17:10:21-upgrade:client-upgrade-wip-20460-jewel-distro-basic-vps/
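The bisection in this comment (rebuilding the integration branch with half of the candidate backport PRs at a time, keeping whichever half still fails, until one culprit remains) can be sketched as a plain shell loop. The PR list and the fails_with predicate below are hypothetical stand-ins for "build a branch with these PRs and run the suite":

```shell
#!/bin/sh
# Sketch of the bisection: repeatedly keep whichever half of the PR list
# still makes the suite fail. fails_with is a hypothetical stand-in for
# building wip-jewel-backports with the given PRs and running the suite;
# here it just pretends PR 16293 is the bad one.
fails_with() {
    for pr in $1; do
        [ "$pr" = 16293 ] && return 0   # suite fails when the bad PR is included
    done
    return 1
}

prs="16169 16293 16473 15189"           # hypothetical candidate backports
n=$(($(echo $prs | wc -w)))
while [ $n -gt 1 ]; do
    half=$((n / 2))
    first=$(echo $prs | cut -d' ' -f1-$half)
    rest=$(echo $prs | cut -d' ' -f$((half + 1))-)
    if fails_with "$first"; then prs=$first; else prs=$rest; fi
    n=$(($(echo $prs | wc -w)))
done
echo "culprit: $prs"
# -> culprit: 16293
```

Each round halves the candidate set, so a batch of N PRs needs only about log2(N) suite runs to isolate one bad backport.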

#36 Updated by Nathan Cutler 3 months ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

4 fail, 1 dead, 13 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-31_18:06:37-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

#37 Updated by Nathan Cutler 3 months ago

Upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

all fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-31_19:54:21-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

Bisect 3b2d20d4819d8e43915cce0d95850a7f0971e818

all fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_05:40:53-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

#38 Updated by Nathan Cutler 3 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/10bf477a3226b55773dc9863cf8e489354007519/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#39 Updated by Nathan Cutler 3 months ago

Upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_19:32:42-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

#40 Updated by Nathan Cutler 3 months ago

rados

teuthology-suite -k distro --priority 999 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

4 fail, 4 dead, 219 pass (227 total) http://pulpito.ceph.com:80/smithfarm-2017-09-01_19:59:45-rados-wip-jewel-backports-distro-basic-smithi/

  • 1587155 SELinux denial in ceph-mds
  • 1587225, 1587250 known bug http://tracker.ceph.com/issues/20981
  • 1587244 failed to recover before timeout expired
  • 1587271, 1587323 known bug, won't fix http://tracker.ceph.com/issues/18739
  • 1587349 Command failed on smithi200 with status 1: '/home/ubuntu/cephtest/s3-tests/virtualenv/bin/s3tests-test-readwrite'
  • 1587357 Socket is closed -> actually osd crash with FAILED assert(0 "unexpected error") in cls_rgw.test_implicit

Rerunning 4 failed and 4 dead jobs

1 fail, 2 dead, 5 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-02_10:38:36-rados-wip-jewel-backports-distro-basic-smithi/

  • 1590956 Socket is closed -> actually osd crash with FAILED assert(0 "unexpected error") in cls_rgw.test_implicit
  • the 2 dead are known bug http://tracker.ceph.com/issues/20981 -> ignoring

Reproducer for the failed job:

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --suite rados --filter="rados/verify/{1thrash/default.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml}"

Rerunning just the reproducer:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-02_15:41:33-rados-wip-jewel-backports-distro-basic-smithi/ (Ubuntu)

Checking whether it fails only on CentOS:

pass --num 1 http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-03_07:24:07-rados-wip-jewel-backports-distro-basic-smithi/

pass --num 5 http://pulpito.front.sepia.ceph.com/smithfarm-2017-09-03_07:48:08-rados-wip-jewel-backports-distro-basic-smithi/

pass --num 20 http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-03_08:29:02-rados-wip-jewel-backports-distro-basic-smithi/

#41 Updated by Nathan Cutler 3 months ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 999 -l 2 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_20:13:08-powercycle-wip-jewel-backports-distro-basic-smithi/

#42 Updated by Nathan Cutler 3 months ago

fs

teuthology-suite -k distro --priority 999 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

4 fail, 84 pass (88 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_20:13:34-fs-wip-jewel-backports-distro-basic-smithi/

  • java (btrfs)
  • bad handshake (btrfs)
  • coredumps (btrfs)
  • Test failure: test_files_throttle (tasks.cephfs.test_strays.TestStrays) <- this was on XFS

Re-running 4 failed jobs

1 fail, 3 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-02_08:23:50-fs-wip-jewel-backports-distro-basic-smithi/

  • java (btrfs) -> ignoring

Ruled a pass

#43 Updated by Nathan Cutler 3 months ago

rgw

teuthology-suite -k distro --priority 999 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_20:14:41-rgw-wip-jewel-backports-distro-basic-smithi/

#44 Updated by Nathan Cutler 3 months ago

rbd

teuthology-suite -k distro --priority 999 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 4)/4

2 fail, 107 pass (109 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_20:16:01-rbd-wip-jewel-backports-distro-basic-smithi/

  • both failures are SELinux denials in logrotate

Rerunning 2 failed jobs

pass http://pulpito.ceph.com/smithfarm-2017-09-02_08:19:06-rbd-wip-jewel-backports-distro-basic-smithi/

#45 Updated by Nathan Cutler 3 months ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

2 fail, 16 pass (18 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-01_19:39:12-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

  • failed to recover before timeout expired
  • Command failed on vpm003 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 4'

Re-running 2 failed jobs

1 fail, 1 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-02_08:21:46-upgrade:hammer-x-wip-jewel-backports-distro-basic-smithi/

  • failure due to SELinux denials in ceph-mon, ignoring

Ruled a pass

#46 Updated by Josh Durgin 3 months ago

"actually osd crash with FAILED assert(0 "unexpected error")" -> this usually means an fs bug (we saw this with btrfs giving ENOSPC very early in recent months) or a failing disk giving EIO

#47 Updated by Nathan Cutler 3 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/4540388d4b7a960e74c1c2b59f220a603b0333c4/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#48 Updated by Nathan Cutler 3 months ago

rados

teuthology-suite -k distro --priority 999 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Initial run with --limit 10 to rule out a major regression:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-06_15:12:51-rados-wip-jewel-backports-distro-basic-smithi/

Full run:

3 fail, 3 dead, 222 pass (228 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-06_16:01:34-rados-wip-jewel-backports-distro-basic-smithi/

  • SELinux denials
  • 1601811 rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml} "WARNING: max attr value size (1024) is smaller than osd_max_object_name_len (2048). Your backend filesystem appears to not support attrs large enough to handle the configured max rados name size. You may get unexpected ENAMETOOLONG errors on rados operations or buggy behavior" followed by "AttributeError: managers" (caused by including "mgr.x" role in backport https://github.com/ceph/ceph/pull/16473 - pretty sure it does not affect any other backports in this run)
  • [ FAILED ] ObjectStore/StoreTest.FiemapHoles/3, where GetParam() = "kstore" known bug caused by PR#15189
  • 1601664 FAILED assert(0 == "unexpected error") in FileStore::_do_transaction -> on BTRFS, ignoring
  • 1601695, 1601722 known bug http://tracker.ceph.com/issues/20981 -> ignoring

Rerunning 6 jobs:

2 fail, 4 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-07_07:08:05-rados-wip-jewel-backports-distro-basic-smithi/

  • 1604961 rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml} "AttributeError: managers" (known issue with PR#16473) -> ignoring
  • 1604962 ObjectStore/StoreTest.FiemapHoles/3, where GetParam() = "kstore" (known issue with PR#15189) -> ignoring

Ruled a pass

#49 Updated by Nathan Cutler 3 months ago

Upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-06_15:14:42-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

#50 Updated by Nathan Cutler 3 months ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

3 fail, 15 pass (18 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-06_15:15:02-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

Rerunning 3 failed jobs

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-07_06:12:40-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

#51 Updated by Nathan Cutler 3 months ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 999 -l 2 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-06_15:14:26-powercycle-wip-jewel-backports-distro-basic-smithi/

#52 Updated by Nathan Cutler 3 months ago

fs

teuthology-suite -k distro --priority 999 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

2 fail, 86 pass (88 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-06_17:19:07-fs-wip-jewel-backports-distro-basic-smithi/

  • one failure is java-related
  • the other is "cluster [ERR] unmatched rstat on 100, inode has n(v1 rc2017-09-07 00:31:31.113264 10=0+10), dirfrags have n(v0 rc2017-09-07 00:31:31.113264 11=0+11)" in the cluster log

Rerunning 2 failed jobs:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-07_02:17:01-fs-wip-jewel-backports-distro-basic-smithi/

#53 Updated by Nathan Cutler 3 months ago

rgw

teuthology-suite -k distro --priority 999 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

1 fail, 95 pass (96 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-06_17:17:25-rgw-wip-jewel-backports-distro-basic-smithi/

  • saw valgrind issues with apache frontend

Rerunning 1 failed job

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-07_01:51:44-rgw-wip-jewel-backports-distro-basic-smithi/

#54 Updated by Nathan Cutler 3 months ago

rbd

teuthology-suite -k distro --priority 999 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 4)/4

1 fail, 1 dead, 106 pass (108 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-06_16:07:51-rbd-wip-jewel-backports-distro-basic-smithi/

  • rbd/maintenance/... -> OSD fails to start because underlying XFS data store is unavailable
  • kernel crash (oops)

Rerunning 2 jobs:

1 pass, 1 fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-07_06:16:37-rbd-wip-jewel-backports-distro-basic-smithi/

#55 Updated by Nathan Cutler 3 months ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-07_07:21:58-ceph-disk-wip-jewel-backports-distro-basic-vps/

#56 Updated by Nathan Cutler 3 months ago

  • Description updated (diff)

#57 Updated by Nathan Cutler 3 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/a396dcfc519dd631a7e8bea62bd5e66b489e5ff9/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#58 Updated by Nathan Cutler 3 months ago

rados

teuthology-suite -k distro --priority 999 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

4 fail, 2 dead, 222 pass (228 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-07_18:58:43-rados-wip-jewel-backports-distro-basic-smithi/

Rerun:

2 fail, 4 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-10_07:03:27-rados-wip-jewel-backports-distro-basic-smithi/

Pushed two additional commits to https://github.com/ceph/ceph/pull/16473 to address the "managers" failure. Rerunning just that one test:

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-11_20:59:06-rados-wip-20723-jewel-distro-basic-smithi/

Pushed two more commits to https://github.com/ceph/ceph/pull/16473 and rerunning:


#59 Updated by Nathan Cutler 3 months ago

rbd

teuthology-suite -k distro --priority 999 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 4)/4

1 fail, 107 pass (108 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-07_19:03:02-rbd-wip-jewel-backports-distro-basic-smithi/

  • Log excerpt from failed job:
2017-09-10T02:24:24.308 INFO:tasks.thrashosds.thrasher:Reweighting osd 1 to 0.765611010463
2017-09-10T02:24:24.311 INFO:teuthology.orchestra.run.smithi193:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd reweight 1 0.765611010463'
2017-09-10T02:24:24.372 INFO:teuthology.orchestra.run.smithi193.stdout:Size error: expected 0x1a1200 stat 0x0
2017-09-10T02:24:24.376 INFO:teuthology.orchestra.run.smithi193.stdout:LOG DUMP (3091 total operations):

Rerun:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-10_07:04:38-rbd-wip-jewel-backports-distro-basic-smithi/

#60 Updated by Nathan Cutler 3 months ago

  • Description updated (diff)

#61 Updated by Nathan Cutler 2 months ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/34f51d2dca8149110a6a335eb865800a36ce7d1b/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#62 Updated by Nathan Cutler 2 months ago

rados

teuthology-suite -k distro --priority 999 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

3 fail, 2 dead, 223 pass (228 total) http://pulpito.ceph.com/smithfarm-2017-09-12_17:11:54-rados-wip-jewel-backports-distro-basic-smithi/

Rerun 5 jobs:

2 fail, 3 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-13_02:44:35-rados-wip-jewel-backports-distro-basic-smithi/

  • One of the failed jobs is a known test issue fixed by https://github.com/ceph/ceph/pull/17677
  • The other failed job is rados/singleton-nomsgr/{all/11429.yaml rados.yaml} - a different failure message from the last run

upgrade/client-upgrade

To test https://github.com/ceph/ceph/pull/17780

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph-repo https://github.com/ceph/ceph-ci.git --ceph wip-jewel-backports --suite-repo https://github.com/smithfarm/ceph.git --suite-branch wip-rh-74-jewel --machine-type vps --priority 101 --email ncutler@suse.com

4 fail, 9 pass (13 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-18_20:26:52-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

  • All four failures were due to the absence of CentOS 7.4 on vps; hopefully fixed yesterday by David G.

Rerunning 4 CentOS 7.4 jobs:

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-19_06:48:10-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-19_20:25:28-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

#63 Updated by Nathan Cutler 2 months ago

rgw

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-09-12_17:28:52-rgw-wip-jewel-backports-distro-basic-smithi/

#64 Updated by Nathan Cutler 2 months ago

  • Description updated (diff)

#65 Updated by Yuri Weinstein 2 months ago

QE VALIDATION (STARTED 9/19/17)

(Note: a "PASSED / FAILED" marker indicates the test is still in progress)

Re-run command lines and filters are captured in http://pad.ceph.com/p/hammer_v10.2.10_QE_validation_notes

Command line:

CEPH_QA_MAIL="ceph-qa@ceph.com"; MACHINE_NAME=smithi; CEPH_BRANCH=jewel; SHA1=189f0c6f2703758a6be917a3c4086f6a26e42366 ; teuthology-suite -v --ceph-repo https://github.com/ceph/ceph.git --suite-repo https://github.com/ceph/ceph.git -c $CEPH_BRANCH -S $SHA1 -m $MACHINE_NAME -s rados --subset 35/50 -k distro -p 100 -e $CEPH_QA_MAIL --suite-branch jewel --dry-run

Rerun command line:

teuthology-suite -v -c $CEPH_BRANCH -S $SHA1 -m $MACHINE_NAME -r $RERUN --suite-repo https://github.com/ceph/ceph.git --ceph-repo https://github.com/ceph/ceph.git --suite-branch jewel -p 90 -R fail,dead,running
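
The `-R fail,dead,running` flag on the rerun restricts rescheduling to jobs that ended (or hung) in those states. A minimal sketch of that selection logic — job IDs and statuses below are illustrative, not taken from a real run:

```shell
# Emulate teuthology's -R status filter: keep only jobs whose status
# appears in the comma-separated rerun list.
rerun_states="fail,dead,running"

filter_jobs() {
    while read -r job status; do
        case ",${rerun_states}," in
            *",${status},"*) echo "$job" ;;   # status matched the list
        esac
    done
}

# Of these four illustrative jobs, only 1002-1004 would be rescheduled.
printf '%s\n' '1001 pass' '1002 fail' '1003 dead' '1004 running' | filter_jobs
```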

Suite runs/reruns and notes/issues:

  • rados - FAILED; approved by Josh (#15454)
    http://pulpito.ceph.com/yuriw-2017-09-19_20:43:32-rados-jewel-distro-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-09-20_15:28:55-rados-jewel-distro-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-09-21_00:22:29-rados-jewel-distro-basic-smithi/
  • rgw - PASSED
    http://pulpito.ceph.com/yuriw-2017-09-19_20:47:02-rgw-jewel-distro-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-09-20_15:30:09-rgw-jewel-distro-basic-smithi/
  • rbd - PASSED
    http://pulpito.ceph.com/yuriw-2017-09-19_20:49:39-rbd-jewel-distro-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-09-20_15:30:44-rbd-jewel-distro-basic-smithi/
  • fs - FAILED; #21481 approved by Patrick
    http://pulpito.ceph.com/yuriw-2017-09-19_20:52:59-fs-jewel-distro-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-09-20_15:31:18-fs-jewel-distro-basic-smithi/
  • krbd - PASSED / FAILED; approved by Ilya
    http://pulpito.ceph.com/yuriw-2017-09-19_20:54:44-krbd-jewel-testing-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-09-20_15:32:12-krbd-jewel-testing-basic-smithi/
  • kcephfs - PASSED
    http://pulpito.ceph.com/yuriw-2017-09-19_20:55:34-kcephfs-jewel-testing-basic-smithi/
  • knfs - PASSED
    http://pulpito.ceph.com/yuriw-2017-09-19_20:56:28-knfs-jewel-testing-basic-smithi/
  • rest - PASSED
    http://pulpito.ceph.com/yuriw-2017-09-19_20:58:27-rest-jewel-distro-basic-smithi/
  • hadoop - EXCLUDED FROM THIS RELEASE
  • samba - EXCLUDED FROM THIS RELEASE
  • ceph-deploy - PASSED (#21482)
    http://pulpito.ceph.com/yuriw-2017-09-19_20:59:16-ceph-deploy-jewel-distro-basic-vps/
    http://pulpito.ceph.com/yuriw-2017-09-20_15:41:51-ceph-deploy-jewel-distro-basic-ovh/ (ovh)
  • ceph-disk - PASSED
    http://pulpito.ceph.com/yuriw-2017-09-19_20:59:46-ceph-disk-jewel-distro-basic-vps/
  • upgrade/client-upgrade - PASSED
    http://pulpito.ceph.com/yuriw-2017-09-19_21:00:44-upgrade:client-upgrade-jewel-distro-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-09-20_21:11:46-upgrade:client-upgrade-jewel-distro-basic-vps/
  • upgrade/hammer-x (jewel) - PASSED
    http://pulpito.ceph.com/yuriw-2017-09-19_21:02:23-upgrade:hammer-x-jewel-distro-basic-vps/
    http://pulpito.ceph.com/yuriw-2017-09-20_15:38:02-upgrade:hammer-x-jewel-distro-basic-vps/
    http://pulpito.ceph.com/yuriw-2017-09-21_16:14:07-upgrade:hammer-x-jewel-distro-basic-vps/
  • upgrade/jewel-x/point-to-point-x - FAILED; Nathan's fixing
    http://pulpito.ceph.com/yuriw-2017-09-19_21:03:53-upgrade:jewel-x:point-to-point-x-jewel-distro-basic-vps/
    http://pulpito.ceph.com/yuriw-2017-09-20_16:06:24-upgrade:jewel-x:point-to-point-x-jewel-distro-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-09-20_21:10:02-upgrade:jewel-x:point-to-point-x-jewel-distro-basic-vps/
    http://pulpito.ceph.com/yuriw-2017-09-20_21:10:16-upgrade:jewel-x:point-to-point-x-jewel-distro-basic-ovh/
  • powercycle - PASSED
    http://pulpito.ceph.com/yuriw-2017-09-19_20:57:15-powercycle-jewel-testing-basic-smithi/
  • ceph-ansible - FAILED; approved by Vasu; PASSED / FAILED
    http://pulpito.ceph.com/yuriw-2017-09-19_21:04:38-ceph-ansible-jewel-distro-basic-vps/
    per Vasu: "Ceph-ansible is green and thats what i told in irc as well http://pulpito.ceph.com/vasu-2017-09-20_19:51:59-ceph-ansible-jewel-distro-basic-vps/ (it uses stable-2.1 branch of ceph-ansible). The failing tests during purge-cluster at the end can be ignored for now."

#66 Updated by Nathan Cutler 2 months ago

  • Description updated (diff)

#67 Updated by Nathan Cutler 2 months ago

  • Description updated (diff)

#68 Updated by Nathan Cutler 2 months ago

  • Description updated (diff)

#69 Updated by Nathan Cutler about 2 months ago

  • Description updated (diff)
  • Status changed from In Progress to Resolved

#70 Updated by Nathan Cutler about 2 months ago

  • Target version set to v10.2.10

Also available in: Atom PDF