Tasks #19538

jewel v10.2.8

Added by Nathan Cutler about 4 years ago. Updated almost 4 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Target version:
% Done:

0%

Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

Workflow

  • Preparing the release
    • Nathan patches upgrade/jewel-x/point-to-point-x to do 10.2.0 -> current Jewel point release -> x - SKIPPED
  • Cutting the release
    • Loic asks Abhishek L. if a point release should be published - YES
    • Loic gets approval from all leads
      • Yehuda, rgw - YES
      • John, CephFS - YES
      • Jason, RBD - YES
      • Josh, rados - YES
    • Abhishek L. writes and commits the release notes - https://github.com/ceph/ceph/pull/16274 (merged)
    • Nathan informs Yuri that the branch is ready for testing - DONE June 30th on ceph-devel ML
    • Yuri runs additional integration tests - DONE
      • If Yuri discovers new bugs that need to be backported urgently (i.e. their priority is set to Urgent or Immediate), the release goes back to being prepared; it was not ready after all
    • Yuri informs Alfredo that the branch is ready for release - DONE
    • Alfredo creates the packages and sets the release tag - DONE
    • Abhishek L. posts release announcement on https://ceph.com/community/blog - DONE
    • Abhishek L. sends release announcement to community mailing lists
    • Abhishek L. informs Patrick M. about the release so he can tweet about it

Release information

History

#1 Updated by Nathan Cutler about 4 years ago

  • Target version changed from 536 to v10.2.8

#2 Updated by Nathan Cutler about 4 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#3 Updated by Nathan Cutler about 4 years ago

  • Status changed from New to In Progress

#4 Updated by Nathan Cutler about 4 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
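
A note on the subset argument: $RANDOM is a bash built-in yielding 0..32767, so $(expr $RANDOM % 50) produces a slice index in 0..49 and each scheduling run exercises a different 1/50th of the suite's job matrix:

```shell
# Pick a random 1/50th slice of the rados suite (bash assumed, since
# the idiom relies on $RANDOM); the index must land in 0..49.
subset=$(expr $RANDOM % 50)
echo "--subset ${subset}/50"
```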

Re-running 5 failed jobs:

Re-running 2 failed jobs 5 times each:

Problematic jobs are:

  • fails every time: rados/verify/{1thrash/default.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/async.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml}
  • fails almost every time: rados/singleton/{all/ec-lost-unfound-upgrade.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/async.yaml rados.yaml}

Of these, Josh thinks only the async messenger-related valgrind issues are a real issue - these might be caused by https://github.com/ceph/ceph/pull/13585 or https://github.com/ceph/ceph/pull/13212

#5 Updated by Nathan Cutler about 4 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 1000 -l 2 --email ncutler@suse.com

#6 Updated by Nathan Cutler about 4 years ago

Upgrade jewel point-to-point-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x/point-to-point-x --ceph wip-jewel-backports --machine-type vps --priority 1000 --email ncutler@suse.com

#7 Updated by Nathan Cutler about 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Re-running 2 dead jobs:

Re-running the last remaining failed job 5 times:

Pushed https://github.com/ceph/ceph-ci/commit/d49d11e714020220e49949c591b0743538212beb to fix http://tracker.ceph.com/issues/19556

Ruled a pass

#8 Updated by Nathan Cutler about 4 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

#9 Updated by Nathan Cutler about 4 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Re-running 2 failed jobs:

2017-04-09T13:40:44.797 INFO:tasks.ceph.osd.3.smithi059.stderr:os/filestore/FileStore.cc: In function 'void FileStore::_do_transaction(ObjectStore::Transaction&, uint64_t, int, ThreadPool::TPHandle*)' thread 7f65124fc700 time 2017-04-09 13:40:44.804790
2017-04-09T13:40:44.797 INFO:tasks.ceph.osd.3.smithi059.stderr:os/filestore/FileStore.cc: 2920: FAILED assert(0 == "unexpected error")

Re-running failed job 6 times:

All the same error, i.e.:

2017-04-09T14:29:48.837 INFO:tasks.ceph.osd.3.smithi063.stderr:os/filestore/FileStore.cc: In function 'void FileStore::_do_transaction(ObjectStore::Transaction&, uint64_t, int, ThreadPool::TPHandle*)' thread 7fcd5a7f8700 time 2017-04-09 14:29:48.854235
2017-04-09T14:29:48.837 INFO:tasks.ceph.osd.3.smithi063.stderr:os/filestore/FileStore.cc: 2920: FAILED assert(0 == "unexpected error")

So, fs is ruled a pass but there are no fs backports staged (correction: there is one)

#10 Updated by Nathan Cutler about 4 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
2017-04-09T00:20:37.988 INFO:teuthology.orchestra.run.smithi167.stderr:======================================================================
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:FAIL: s3tests.functional.test_s3.test_versioning_obj_suspend_versions
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:----------------------------------------------------------------------
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:Traceback (most recent call last):
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:  File "/home/ubuntu/cephtest/s3-tests/virtualenv/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:    self.test(*self.arg)
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:  File "/home/ubuntu/cephtest/s3-tests/s3tests/functional/test_s3.py", line 6385, in test_versioning_obj_suspend_versions
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:    overwrite_suspended_versioning_obj(bucket, objname, k, c, 'null content 2')
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:  File "/home/ubuntu/cephtest/s3-tests/s3tests/functional/test_s3.py", line 6243, in overwrite_suspended_versioning_obj
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:    check_obj_versions(bucket, objname, k, c)
2017-04-09T00:20:37.989 INFO:teuthology.orchestra.run.smithi167.stderr:  File "/home/ubuntu/cephtest/s3-tests/s3tests/functional/test_s3.py", line 6080, in check_obj_versions
2017-04-09T00:20:37.990 INFO:teuthology.orchestra.run.smithi167.stderr:    eq(keys[i].version_id or 'null', key.version_id)
2017-04-09T00:20:37.990 INFO:teuthology.orchestra.run.smithi167.stderr:AssertionError: u'yGSvpxjEbJRBE.JL76y4OzeJISqDtmJ' != u'null'

#11 Updated by Nathan Cutler about 4 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
2017-04-09T01:48:49.750 INFO:teuthology.orchestra.run.smithi051:Running: 'sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --journal-path /var/lib/ceph/osd/ceph-2/journal --log-file=/var/log/ceph/objectstore_tool.\\$pid.log --op export --pgid 0.7 --file /home/ubuntu/cephtest/ceph.data/exp.0.7.2'
2017-04-09T01:48:49.850 INFO:teuthology.orchestra.run.smithi051.stderr:osd/PG.cc: In function 'static int PG::peek_map_epoch(ObjectStore*, spg_t, epoch_t*, ceph::bufferlist*)' thread 7f052ae89980 time 2017-04-09 01:48:49.853858
2017-04-09T01:48:49.850 INFO:teuthology.orchestra.run.smithi051.stderr:osd/PG.cc: 2967: FAILED assert(values.size() == 2)

Re-running 1 dead job:

Ruled a pass by Jason

#12 Updated by Nathan Cutler about 4 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#13 Updated by Nathan Cutler about 4 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

#14 Updated by Nathan Cutler about 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

#15 Updated by Nathan Cutler about 4 years ago

upgrade/client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Re-running the 1 problematic job:

Ruled a pass

#16 Updated by Nathan Cutler about 4 years ago

upgrade/client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Re-running 2 failed and 1 dead jobs

Ruled a pass

#17 Updated by Nathan Cutler about 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Re-running 9 failed and 1 dead jobs:

#18 Updated by Nathan Cutler about 4 years ago

rgw

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

#19 Updated by Nathan Cutler about 4 years ago

Made an integration branch "wip-jewel-backports-rgw" consisting only of jewel PRs labeled "rgw". Will try to reproduce the s3tests.functional.test_s3.test_versioning_obj_suspend_versions failure on it.

The hypothesis is that there is a single problematic PR and it carries the label "rgw".

Reproducer: teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports-rgw --machine-type smithi --filter="rgw/verify/{clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml msgr-failures/few.yaml overrides.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml validater/lockdep.yaml}"

#20 Updated by Nathan Cutler about 4 years ago

wip-jewel-backports-rgw

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports-rgw | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#21 Updated by Nathan Cutler about 4 years ago

RGW bisect

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports-rgw --machine-type smithi --filter="rgw/verify/{clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml msgr-failures/few.yaml overrides.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml validater/lockdep.yaml}"

Bug reproduced; hypothesis confirmed!

Re-running with a new integration branch containing just PRs:

  • 13865
  • 13863
  • 13842
  • 13837
  • 13834
  • 13833
  • 13779
  • 13724
  • 13552

Bug reproduced!

Re-running with a new integration branch containing just PRs:

  • 13834
  • 13833
  • 13779
  • 13724
  • 13552

Bug reproduced!

Re-running with subset:

  • 13779
  • 13724
  • 13552

Bug reproduced!

Last test was run manually with the conclusion that PR#13552 is to blame. The test branch is https://github.com/ceph/ceph-ci/commits/wip-jewel-backports-rgw (contains just v10.2.7 plus this one PR).
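
The halving procedure used above can be sketched as follows; `reproduces` is a stand-in for the real step of rebuilding an integration branch and re-running the failing rgw job (faked here to blame 13552 so the loop can be traced end to end):

```shell
#!/usr/bin/env bash
# Sketch of the manual bisect: keep the half of the suspect list that
# still reproduces the failure until one PR remains.
suspects=(13865 13863 13842 13837 13834 13833 13779 13724 13552)
reproduces() {
  for pr in "$@"; do
    [ "$pr" = 13552 ] && return 0   # fake predicate: pretend 13552 carries the bug
  done
  return 1
}
while [ "${#suspects[@]}" -gt 1 ]; do
  half=$(( ${#suspects[@]} / 2 ))
  first=("${suspects[@]:0:half}")
  rest=("${suspects[@]:half}")
  if reproduces "${first[@]}"; then
    suspects=("${first[@]}")
  else
    suspects=("${rest[@]}")
  fi
done
echo "culprit: ${suspects[0]}"
# → culprit: 13552
```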

#22 Updated by Nathan Cutler about 4 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#23 Updated by Nathan Cutler about 4 years ago

assert no massive rgw failure

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --filter="rgw/verify/{clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml msgr-failures/few.yaml overrides.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml validater/lockdep.yaml}"

Bisect result verified.

#24 Updated by Nathan Cutler about 4 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Re-running 3 failed jobs:

#25 Updated by Nathan Cutler about 4 years ago

assert no async messenger leak

teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --filter="rados/verify/{1thrash/default.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/async.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml}"

Confirmed that the leak is (most likely) caused by https://github.com/ceph/ceph/pull/13212

#26 Updated by Nathan Cutler about 4 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 2000)/2000 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
  • 2 fail, 110 pass (112 total) http://pulpito.ceph.com:80/smithfarm-2017-04-13_13:54:30-rados-wip-jewel-backports-distro-basic-smithi/
    • Command failed on smithi001 with status 6: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph tell osd.0 flush_pg_stats' rados/singleton/{all/ec-lost-unfound-upgrade.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/async.yaml rados.yaml}
    • Command failed on smithi171 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph pg scrub 1.0'

Note that the 2 failed jobs also failed in the earlier rados run - #note-4 above

Re-running 2 failed jobs:

fail http://pulpito.ceph.com:80/smithfarm-2017-04-14_19:18:15-rados-wip-jewel-backports-distro-basic-smithi/

  • same failure in rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml}
2017-04-14T19:24:04.696 INFO:teuthology.orchestra.run.smithi161:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph pg scrub 1.0'
2017-04-14T19:24:04.838 INFO:teuthology.orchestra.run.smithi161.stderr:Error EAGAIN: pg 1.0 primary osd.1 not up
  • same failure in rados/singleton/{all/ec-lost-unfound-upgrade.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/async.yaml rados.yaml}

Test first installs infernalis (new build from tip of infernalis branch - see #18089) and then upgrades all but one OSD to wip-jewel-backports. Then it runs the "ec_lost_unfound" task, at which point we see:

2017-04-14T19:25:42.516 INFO:teuthology.orchestra.run.smithi132:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph tell osd.0 flush_pg_stats'
...
2017-04-14T19:25:42.756 INFO:teuthology.orchestra.run.smithi132.stderr:Error ENXIO: problem getting command descriptions from osd.0

Test if failure is reproducible on wip-v10.2.7:

teuthology-suite -k distro --verbose --suite rados --priority 101 --email ncutler@suse.com --ceph wip-v10.2.7 --machine-type smithi --filter="rados/singleton/{all/ec-lost-unfound-upgrade.yaml fs/xfs.yaml msgr-failures/few.yaml msgr/async.yaml rados.yaml}"

Test if failure is reproducible on wip-v10.2.7:

teuthology-suite -k distro --verbose --suite rados --priority 101 --email ncutler@suse.com --ceph wip-v10.2.7 --machine-type smithi --filter="rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml}"

Test if failure is reproducible on jewel:

teuthology-suite -k distro --verbose --suite rados --ceph jewel --ceph-repo https://github.com/ceph/ceph --suite-repo https://github.com/ceph/ceph --machine-type vps --priority 101 --email ncutler@suse.com --filter="rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml}"

#27 Updated by Nathan Cutler about 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

#28 Updated by Nathan Cutler about 4 years ago

upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Ruled a pass

#29 Updated by Nathan Cutler about 4 years ago

bisect regression in jewel

It appears we somehow managed to merge a PR that introduced a regression.

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph jewel --ceph-repo https://github.com/ceph/ceph --suite-repo https://github.com/ceph/ceph --machine-type vps --priority 101 --email ncutler@suse.com --filter="upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml}"

Running same test on smithi:

Re-running 5 times on VPS:

The next step is to run the reproducer on v10.2.7 to assert it is free of the regression. Assuming the test passes on v10.2.7, we will have to bisect :-(

Pushed wip-v10.2.7 (v10.2.7+PR#14371) to Shaman

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-v10.2.7 --machine-type vps --priority 101 --email ncutler@suse.com --filter="upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml}"

Preparing first bisect round - here are the PRs merged since v10.2.7:

$ git log --merges --oneline --no-color v10.2.7..HEAD
e31a540 Merge pull request #13834 from smithfarm/wip-18969-jewel
7c36d16 Merge pull request #13833 from smithfarm/wip-18908-jewel
0e3aa2c Merge pull request #13214 from ovh/bp-osd-updateable-throttles-jewel
8d5a5dd Merge pull request #14326 from shinobu-x/wip-15025-jewel
091aaa2 Merge pull request #13874 from smithfarm/wip-19171-jewel
3f2e4cd Merge pull request #13492 from shinobu-x/wip-18516-jewel
ea0bc6c Merge pull request #13254 from shinobu-x/wip-14609-jewel
845972f Merge pull request #13489 from shinobu-x/wip-18955-jewel
a3deef9 Merge pull request #14070 from smithfarm/wip-19339-jewel
702edb5 Merge pull request #14329 from smithfarm/wip-19493-jewel
f509ccc Merge pull request #14427 from smithfarm/wip-19140-jewel
c8c4bff Merge pull request #14324 from shinobu-x/wip-19371-jewel
349baea Merge pull request #14112 from shinobu-x/wip-19192-jewel
dd466b7 Merge pull request #14150 from smithfarm/wip-18823-jewel
b8f8bd0 Merge pull request #14152 from smithfarm/wip-18893-jewel
222916a Merge pull request #14154 from smithfarm/wip-18948-jewel
49f84b1 Merge pull request #14148 from smithfarm/wip-18778-jewel
2a232d4 Merge pull request #14083 from smithfarm/wip-19357-jewel
413ac58 Merge pull request #13154 from smithfarm/wip-18496-jewel
23d595b Merge pull request #13244 from smithfarm/wip-18775-jewel
4add6f5 Merge pull request #13809 from asheplyakov/18321-bp-jewel
37ab19c Merge pull request #13107 from smithfarm/wip-18669-jewel
f7c04e3 Merge pull request #13585 from asheplyakov/jewel-bp-16585
d2909bd Merge pull request #14371 from tchaikov/wip-19429-jewel
cd74860 Merge pull request #14325 from shinobu-x/wip-18619-jewel
1a20c12 Merge pull request #14236 from smithfarm/wip-19392-jewel
4838c4d Merge pull request #14181 from mslovy/wip-19394-jewel
e26b703 Merge pull request #14113 from shinobu-x/wip-19319-jewel
389150b Merge pull request #14047 from asheplyakov/reindex-on-pg-split
a8b1008 Merge pull request #14044 from mslovy/wip-19311-jewel
32ed9b7 Merge pull request #13932 from asheplyakov/18911-bp-jewel
6705e91 Merge pull request #13831 from jan--f/wip-19206-jewel
3d21a00 Merge pull request #13827 from tchaikov/wip-19185-jewel
8a6d643 Merge pull request #13788 from shinobu-x/wip-18235-jewel
f96392a Merge pull request #13786 from shinobu-x/wip-19129-jewel
8fe6ffc Merge pull request #13732 from liewegas/wip-19119-jewel
6f589a1 Merge pull request #13541 from shinobu-x/wip-18929-jewel
b8f2d35 Merge pull request #13477 from asheplyakov/jewel-bp-18951
40d1443 Merge pull request #13261 from shinobu-x/wip-18587-jewel

Total of 40 PRs excluding 14371 (which must be included in any case); grabbing the first 21 (starting from the bottom of the list, which is in reverse chronological order) for the bisect branch. Populating with the following script:

set -ex
reviewer='Nathan Cutler <ncutler@suse.com>'

milestone=jewel
base_branch=wip-v10.2.7
bisect_branch=${base_branch}-bisect

PRS="13261
13477
13541
13732
13786
13788
13827
13831
13932
14044
14047
14113
14181
14236
14325
13585
13107
13809
13244
13154
14083
" 

git checkout $milestone
git fetch ceph
git branch -D $bisect_branch || :
git checkout -b $bisect_branch ceph/$base_branch
git reset --hard ceph/$base_branch
for pr in $PRS ; do
    eval title=$(curl --silent https://api.github.com/repos/ceph/ceph/pulls/$pr?access_token=$github_token | jq .title)
    echo "PR $pr $title"
    git --no-pager log --oneline ceph/pull/$pr/merge^1..ceph/pull/$pr/merge^2
    git --no-pager merge --no-ff -m "$(echo -e "Merge pull request #$pr: $title\n\nReviewed-by: $reviewer")" ceph/pull/$pr/head
done
git push --force ceph-ci $bisect_branch
git --no-pager log --format='%H %s' --graph ceph-ci/wip-v10.2.7..wip-v10.2.7-bisect | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-v10.2.7-bisect --machine-type vps --priority 101 --email ncutler@suse.com --filter="upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml}"

And wip-v10.2.7 (the base branch) again for comparison:

Will continue the bisect in a new comment.

#30 Updated by Nathan Cutler about 4 years ago

jewel regression bisect, round 2

In round 1 we prepared an integration branch consisting of v10.2.7 + PR#14371 (which is required in any case) + the first 21 PRs merged into jewel after the v10.2.7 release. Although logic would dictate that the regression is one of the following PRs:

e31a540 Merge pull request #13834 from smithfarm/wip-18969-jewel
7c36d16 Merge pull request #13833 from smithfarm/wip-18908-jewel
0e3aa2c Merge pull request #13214 from ovh/bp-osd-updateable-throttles-jewel
8d5a5dd Merge pull request #14326 from shinobu-x/wip-15025-jewel
091aaa2 Merge pull request #13874 from smithfarm/wip-19171-jewel
3f2e4cd Merge pull request #13492 from shinobu-x/wip-18516-jewel
ea0bc6c Merge pull request #13254 from shinobu-x/wip-14609-jewel
845972f Merge pull request #13489 from shinobu-x/wip-18955-jewel
a3deef9 Merge pull request #14070 from smithfarm/wip-19339-jewel
702edb5 Merge pull request #14329 from smithfarm/wip-19493-jewel
f509ccc Merge pull request #14427 from smithfarm/wip-19140-jewel
c8c4bff Merge pull request #14324 from shinobu-x/wip-19371-jewel
349baea Merge pull request #14112 from shinobu-x/wip-19192-jewel
dd466b7 Merge pull request #14150 from smithfarm/wip-18823-jewel
b8f8bd0 Merge pull request #14152 from smithfarm/wip-18893-jewel
222916a Merge pull request #14154 from smithfarm/wip-18948-jewel
49f84b1 Merge pull request #14148 from smithfarm/wip-18778-jewel

I would like to get a clear reproducer, so I prepared a wip-v10.2.7-bisect-2 branch consisting of v10.2.7 + PR#14371 + these PRs.

git --no-pager log --format='%H %s' --graph ceph-ci/wip-v10.2.7..wip-v10.2.7-bisect-2 | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-v10.2.7-bisect-2 --machine-type vps --priority 101 --email ncutler@suse.com --filter="upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml}"

builds OK https://shaman.ceph.com/builds/ceph/wip-v10.2.7-bisect-2/2b15bdd5a425e2d20a146af19ad06fda24adc2d2/

Examining the test yaml again, it seems strange that a test called "upgrade:hammer-x/f-h-x-offline" should install firefly and then upgrade directly to "x" (jewel in this case), skipping hammer. Opened http://tracker.ceph.com/issues/19687 to track.

#31 Updated by Nathan Cutler about 4 years ago

jewel "regression" bisect, round 3

Pushed wip-v10.2.7-bisect-3

git --no-pager log --format='%H %s' --graph ceph-ci/wip-v10.2.7..wip-v10.2.7-bisect-3 | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

https://shaman.ceph.com/builds/ceph/wip-v10.2.7-bisect-3/0dfc1333c5ff95624e8825bb4af339b67b2a1d1d/

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-v10.2.7-bisect-3 --machine-type vps --priority 101 --email ncutler@suse.com --filter="upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml}"

jewel "regression" bisect, round 4

Pushed wip-v10.2.7-bisect-4

git --no-pager log --format='%H %s' --graph ceph-ci/wip-v10.2.7..wip-v10.2.7-bisect-4 | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

https://shaman.ceph.com/builds/ceph/wip-v10.2.7-bisect-4/77358532ce0d07ae7afc317c304c2e255058aad0/

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-v10.2.7-bisect-4 --machine-type vps --priority 101 --email ncutler@suse.com --filter="upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml}"

#32 Updated by Nathan Cutler about 4 years ago

jewel "regression" grand finale

Starting the grand finale by cherry-picking (not merging as before) the following PRs (one of which should be the cause of the "regression" according to the bisect results so far) on top of wip-v10.2.7:

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-10.2.7-13254 --machine-type vps --priority 101 --email ncutler@suse.com --filter="upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml}"

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-v10.2.7-14427 --machine-type vps --priority 101 --email ncutler@suse.com --filter="upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml}"

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-v10.2.7-14324 --machine-type vps --priority 101 --email ncutler@suse.com --filter="upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml}"

CONCLUSION: https://github.com/ceph/ceph/pull/14427 would seem to be the culprit. Opened https://github.com/ceph/ceph/pull/14643 to revert it.
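
This round used cherry-picks rather than merges; a throwaway-repo sketch of the difference (all branch names, paths, and messages below are invented for the demo):

```shell
# merge --no-ff (used for the integration branches) records a merge
# commit tying in the whole PR branch; cherry-pick (used for the
# per-PR wip branches here) copies just the commit onto the base.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m base
base=$(git rev-parse HEAD)
git checkout -q -b fix
echo patch > fix.txt
git add fix.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m 'fix: demo change'
git checkout -q -b release "$base"
git -c user.name=demo -c user.email=demo@example.com cherry-pick fix
test -f fix.txt && echo "cherry-picked fix.txt onto release"
```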

#33 Updated by Nathan Cutler about 4 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#34 Updated by Nathan Cutler about 4 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 2000)/2000 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Re-running 4 failed jobs:

#35 Updated by Nathan Cutler about 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Ruled a pass

#36 Updated by Nathan Cutler about 4 years ago

upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Ruled a pass

#37 Updated by Nathan Cutler about 4 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

#38 Updated by Nathan Cutler about 4 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#39 Updated by Nathan Cutler about 4 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

--rerun

#40 Updated by Nathan Cutler about 4 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 1000 -l 2 --email ncutler@suse.com

#41 Updated by Nathan Cutler about 4 years ago

Upgrade jewel point-to-point-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x/point-to-point-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

#42 Updated by Nathan Cutler about 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Re-running 1 failed job:

#43 Updated by Nathan Cutler about 4 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

#44 Updated by Nathan Cutler about 4 years ago

upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Re-running 1 dead job

#45 Updated by Nathan Cutler about 4 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

--rerun

Marked https://github.com/ceph/ceph/pull/14699 DNM for now.

#46 Updated by Nathan Cutler about 4 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

--rerun

#47 Updated by Nathan Cutler about 4 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 4)/4

--rerun

Marked https://github.com/ceph/ceph/pull/14663 DNM - needs another run on repopulated integration branch.

#48 Updated by Nathan Cutler about 4 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/572fb344af805709327f270fcf8743bc62ef4b3d

#49 Updated by Nathan Cutler about 4 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

--rerun

Running the failed job 4 more times:

Ruled a pass

#50 Updated by Nathan Cutler about 4 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 1000 -l 2 --email ncutler@suse.com

#51 Updated by Nathan Cutler about 4 years ago

Upgrade jewel point-to-point-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x/point-to-point-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

#52 Updated by Nathan Cutler about 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

#53 Updated by Nathan Cutler about 4 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

#54 Updated by Nathan Cutler about 4 years ago

Upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

--rerun

#55 Updated by Nathan Cutler about 4 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

--rerun

#56 Updated by Nathan Cutler about 4 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

--rerun

#57 Updated by Nathan Cutler about 4 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 4)/4

#58 Updated by Nathan Cutler about 4 years ago

rgw suite for ragweed support

Special request by Yehuda

build: https://shaman.ceph.com/builds/ceph/wip-rgw-support-ragweed-jewel/

teuthology-suite -k distro --priority 999 --suite rgw --email ncutler@suse.com --ceph wip-rgw-support-ragweed-jewel --machine-type smithi --subset $(expr $RANDOM % 2)/2

#60 Updated by Abhishek Lekshmanan about 4 years ago

Adding an integration branch scheduling only the RGW memleak fix PRs (and the above rados PR which was already merged in Jewel)

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports-rgw-fixes | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#61 Updated by Nathan Cutler about 4 years ago

rgw suite on wip-jewel-backports-rgw-fixes branch

86 pass, 10 fail (96 total) http://pulpito.ceph.com/abhi-2017-06-02_14:29:52-rgw-wip-jewel-backports-rgw-fixes-distro-basic-smithi/

#63 Updated by Nathan Cutler almost 4 years ago

rados

Using teuthology branch wip-20171 to avoid the silly regression http://tracker.ceph.com/issues/20171

teuthology-suite -k distro --priority 101 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --teuthology-branch wip-20171

2 failed, 225 pass (227 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-22_11:36:26-rados-wip-jewel-backports-distro-basic-smithi/

#64 Updated by Nathan Cutler almost 4 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 101 -l 2 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-22_06:27:32-powercycle-wip-jewel-backports-distro-basic-smithi/

#65 Updated by Nathan Cutler almost 4 years ago

Upgrade jewel point-to-point-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x/point-to-point-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-22_06:28:08-upgrade:jewel-x:point-to-point-x-wip-jewel-backports-distro-basic-vps/

#66 Updated by Nathan Cutler almost 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com --teuthology-branch wip-20171

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-22_08:13:28-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

Re-running with https://github.com/ceph/ceph/pull/15842 included:

8 failed, 10 passed (18 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-22_11:47:33-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

#67 Updated by Nathan Cutler almost 4 years ago

fs

teuthology-suite -k distro --priority 101 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --teuthology-branch wip-20171

1 failed, 87 passed (88 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-22_11:51:56-fs-wip-jewel-backports-distro-basic-smithi/

Re-running 1 failed job

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-25_07:23:04-fs-wip-jewel-backports-distro-basic-smithi/ - #20412 again

#68 Updated by Nathan Cutler almost 4 years ago

rgw

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2 --teuthology-branch wip-20171

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-22_11:53:25-rgw-wip-jewel-backports-distro-basic-smithi/

  • new bug #20392 (Most, if not all, of the failures are due to incompatible tests added to ceph/swift.git master and to the fact that we are using a single swift.py task for all Ceph versions.)

#69 Updated by Nathan Cutler almost 4 years ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/98045e76d74a57a5d859b4e2e742dc64722f70cb/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#70 Updated by Nathan Cutler almost 4 years ago

rgw

Partial run to verify the fix is viable:

teuthology-suite -k distro --priority 101 --rerun smithfarm-2017-06-22_11:53:25-rgw-wip-jewel-backports-distro-basic-smithi --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --teuthology-branch wip-20392

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-25_13:13:17-rgw-wip-jewel-backports-distro-basic-smithi/

Full rgw run:

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2 --teuthology-branch wip-20392

1 fail, 95 pass (96 total) http://pulpito.front.sepia.ceph.com/smithfarm-2017-06-25_16:38:58-rgw-wip-jewel-backports-distro-basic-smithi/

  • failed job is with apache frontend, so not a high priority to fix

Re-running 1 failed job:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-25_18:51:45-rgw-wip-jewel-backports-distro-basic-smithi/

#71 Updated by Nathan Cutler almost 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com --teuthology-branch wip-20392

3 failed, 15 passed (18 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-25_17:11:13-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

#72 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#73 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#74 Updated by Nathan Cutler almost 4 years ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/015dd1136459b15885142a76769efb360c945baf/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#75 Updated by Nathan Cutler almost 4 years ago

upgrade/hammer-x

teuthology-suite -k distro --ceph wip-jewel-backports --rerun smithfarm-2017-06-25_17:11:13-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps --machine-type vps --priority 101 --email ncutler@suse.com --teuthology-branch wip-20392

Run includes bad PR#14930, but the other two tests are valid re-runs: http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-26_20:18:43-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

#77 Updated by Nathan Cutler almost 4 years ago

fs

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --rerun smithfarm-2017-06-22_11:51:56-fs-wip-jewel-backports-distro-basic-smithi

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-27_10:16:32-fs-wip-jewel-backports-distro-basic-smithi/

Result reported in https://github.com/ceph/ceph/pull/15936 (merged)

#78 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#79 Updated by Nathan Cutler almost 4 years ago

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/ca7ab74ae7884f24983d94b729cc262108ff6aba/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#80 Updated by Nathan Cutler almost 4 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com --teuthology-branch wip-20392

2 fail, 16 pass (18 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-27_15:24:41-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

Rerun on smithi:

1 fail, 1 pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-30_09:08:18-upgrade:hammer-x-wip-jewel-backports-distro-basic-smithi/

  • failure is "ceph-objectstore-tool: exp list-pgs failure with status 1"

Rerun on vps:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-30_09:08:35-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

Ruled a pass

#81 Updated by Nathan Cutler almost 4 years ago

rados

teuthology-suite -k distro --priority 101 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --teuthology-branch wip-20392

7 fail, 220 pass (227 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-27_19:13:42-rados-wip-jewel-backports-distro-basic-smithi/

  • five of the failures are infrastructure noise
  • the sixth might be a new bug: http://tracker.ceph.com/issues/20449
  • the seventh is ENOSPC, presumably because the smithis have smaller disks (so, infrastructure noise)

Re-run:

3 pass, 4 fail (7 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-28_10:10:16-rados-wip-jewel-backports-distro-basic-smithi/

  • all four failures are ansible-related

Re-run:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-29_05:54:25-rados-jewel-distro-basic-smithi/

Ruled a pass

#82 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#84 Updated by Nathan Cutler almost 4 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-06-30_09:03:26-ceph-disk-wip-jewel-backports-distro-basic-vps/

#85 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#86 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#87 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#88 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#89 Updated by Yuri Weinstein almost 4 years ago

QE VALIDATION (STARTED 7/1/17)

(Note: a "PASSED / FAILED" marker indicates a test still in progress)

Re-run command lines and filters are captured in http://pad.ceph.com/p/hammer_v10.2.8_QE_validation_notes

Command line:

CEPH_QA_MAIL="ceph-qa@ceph.com"; MACHINE_NAME=smithi; CEPH_BRANCH=jewel; SHA1=53a3be7261cfeb12445fbdba8238eefa40ed09f5 ; teuthology-suite -v --ceph-repo https://github.com/ceph/ceph.git --suite-repo https://github.com/ceph/ceph.git -c $CEPH_BRANCH -S $SHA1 -m $MACHINE_NAME -s rados --subset 35/50 -k distro -p 100 -e $CEPH_QA_MAIL --suite-branch jewel --dry-run

Re-run command line:

teuthology-suite -v -c $CEPH_BRANCH -S $SHA1 -m $MACHINE_NAME -r $RERUN --suite-repo https://github.com/ceph/ceph.git --ceph-repo https://github.com/ceph/ceph.git --suite-branch jewel -p 90 -R fail,dead,running

Suite runs/reruns and notes/issues:

  • rados: http://pulpito.ceph.com/yuriw-2017-06-30_22:50:27-rados-jewel-distro-basic-smithi/ PASSED (#20489*)
    rerun: http://pulpito.ceph.com/yuriw-2017-07-05_00:26:35-rados-jewel-distro-basic-smithi/ (https://github.com/ceph/ceph/pull/14710)
  • rgw: http://pulpito.ceph.com/yuriw-2017-06-30_22:56:30-rgw-jewel-distro-basic-smithi/ PASSED (see one "saw valgrind issues")
    rerun: http://pulpito.ceph.com/yuriw-2017-07-01_04:05:30-rgw-jewel-distro-basic-smithi/ (passed on rerun)
  • rbd: http://pulpito.ceph.com/yuriw-2017-06-30_22:59:10-rbd-jewel-distro-basic-smithi/ PASSED
  • fs: http://pulpito.ceph.com/yuriw-2017-06-30_23:02:06-fs-jewel-distro-basic-smithi/ PASSED
    reruns: http://pulpito.front.sepia.ceph.com/yuriw-2017-07-03_14:57:31-fs-jewel-distro-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-07-03_16:49:04-fs-jewel-distro-basic-smithi/
  • krbd: http://pulpito.ceph.com/yuriw-2017-07-01_04:10:21-krbd-jewel-testing-basic-smithi/ FAILED (approved by Ilya)
    rerun per Ilya: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-07-03_15:58:06-krbd-jewel-testing-basic-smithi/
  • kcephfs: http://pulpito.ceph.com/yuriw-2017-07-01_04:11:25-kcephfs-jewel-testing-basic-smithi/ PASSED
  • knfs: http://pulpito.ceph.com/yuriw-2017-07-01_04:12:02-knfs-jewel-testing-basic-smithi/ PASSED
  • rest: http://pulpito.ceph.com/yuriw-2017-07-01_14:54:12-rest-jewel-distro-basic-smithi/ PASSED
  • hadoop: http://pulpito.ceph.com/yuriw-2017-07-01_14:54:48-hadoop-jewel-distro-basic-smithi/ FAILED (#19456) EXCLUDED FROM THIS RELEASE
  • samba: EXCLUDED FROM THIS RELEASE
  • ceph-deploy: http://pulpito.ceph.com/yuriw-2017-07-01_14:55:48-ceph-deploy-jewel-distro-basic-vps/ PASSED
    rerun: http://pulpito.ceph.com/yuriw-2017-07-03_23:00:14-ceph-deploy-jewel-distro-basic-vps/
  • ceph-disk: http://pulpito.ceph.com/yuriw-2017-07-01_14:56:06-ceph-disk-jewel-distro-basic-vps/ PASSED
  • upgrade/client-upgrade: http://pulpito.ceph.com/yuriw-2017-07-01_14:56:42-upgrade:client-upgrade-jewel-distro-basic-smithi/ PASSED
    reruns: http://pulpito.front.sepia.ceph.com/yuriw-2017-07-03_15:21:48-upgrade:client-upgrade-jewel-distro-basic-smithi/
    http://pulpito.ceph.com/yuriw-2017-07-04_15:05:26-upgrade:client-upgrade-jewel-distro-basic-vps/ (https://github.com/ceph/ceph/pull/16088)
  • upgrade/hammer-x (jewel): http://pulpito.ceph.com/yuriw-2017-07-01_14:57:44-upgrade:hammer-x-jewel-distro-basic-vps/ PASSED
  • upgrade/jewel-x/point-to-point-x: http://pulpito.ceph.com/yuriw-2017-07-01_14:58:30-upgrade:jewel-x:point-to-point-x-jewel-distro-basic-vps/ PASSED
    rerun: http://pulpito.ceph.com/yuriw-2017-07-04_15:06:27-upgrade:jewel-x:point-to-point-x-jewel-distro-basic-vps/ (https://github.com/ceph/ceph/pull/16089)
  • powercycle: http://pulpito.ceph.com/yuriw-2017-07-01_04:12:42-powercycle-jewel-testing-basic-smithi/ PASSED
  • ceph-ansible: http://pulpito.ceph.com/yuriw-2017-07-01_15:11:28-ceph-ansible-jewel-distro-basic-vps/ PASSED

PASSED / FAILED
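The `-R fail,dead,running` filter on the re-run command above restricts rescheduling to jobs whose previous status is in the given set. A minimal sketch of that selection logic (the job names and statuses here are made up, and this only emulates the filtering, not teuthology itself):

```shell
# Illustrative only: select jobs for re-run by previous status,
# mimicking the effect of "-R fail,dead,running". Jobs are hypothetical.
statuses="fail,dead,running"
printf '%s\n' "job1 pass" "job2 fail" "job3 dead" | while read -r job st; do
  case ",$statuses," in
    *",$st,"*) echo "rerun $job" ;;   # status is in the allowed set
  esac
done
# prints:
# rerun job2
# rerun job3
```

Passing jobs are left alone, so repeated re-runs converge on only the stubborn failures.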

#90 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#91 Updated by Yuri Weinstein almost 4 years ago

  • Description updated (diff)

#92 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

Updated release SHA1 to 66dbf9beef04988dbd3653591e51afa6d84e3990

#93 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#94 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#95 Updated by Nathan Cutler almost 4 years ago

  • Description updated (diff)

#96 Updated by Nathan Cutler almost 4 years ago

  • Status changed from In Progress to Resolved
