Tasks #19009

closed

kraken v11.2.1

Added by Nathan Cutler about 7 years ago. Updated over 6 years ago.

Status: Resolved
Priority: High
Assignee: Nathan Cutler
% Done: 0%

Description

Workflow

  • Preparing the release
  • Cutting the release
    • Nathan asks Abhishek L. if a point release should be published - YES 20170801
    • Nathan gets approval from all leads
      • Yehuda, Matt, Casey, Orit, rgw - YES 20170802
      • Patrick, CephFS - YES 20170801 ("go for it")
      • Jason, RBD - YES 20170801 ("long overdue")
      • Josh, rados - YES 20170801 ("sounds good")
      • Sage - YES 20170801 ("release what we have for 11.2.1")
    • Abhishek L. writes and commits the release notes - PENDING, see https://github.com/ceph/ceph/pull/16879
    • Nathan informs Yuri that the branch is ready for testing - DONE 20170802
    • Yuri runs additional integration tests - DONE 20170807
    • If Yuri discovers new bugs that need to be backported urgently (i.e. their priority is set to Urgent), the release goes back to being prepared; it was not ready after all
    • Yuri informs Alfredo that the branch is ready for release - DONE 20170807
    • Alfredo creates the packages and sets the release tag - DONE
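On the git side, the final step above (setting the release tag) comes down to an annotated tag on the release commit. The sketch below shows the mechanics in a throwaway repository; the real v11.2.1 tag is created by Ceph's release tooling and is signed, so this is illustrative only:

```shell
# Illustrative only: the actual release tag is made by the release tooling.
# Demonstrate an annotated release tag in a scratch repository.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "kraken: release commit"
git -c user.name=demo -c user.email=demo@example.com \
    tag -a v11.2.1 -m "v11.2.1"
git describe   # prints: v11.2.1
```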

Release information

Actions #1

Updated by Nathan Cutler about 7 years ago

  • Status changed from New to In Progress
  • Assignee set to Nathan Cutler
Actions #2

Updated by Nathan Cutler about 7 years ago

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #3

Updated by Nathan Cutler about 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi
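`--subset $(expr $RANDOM % 50)/50` schedules only one of 50 roughly equal slices of the suite, with the slice index chosen at random. The commands in this ticket rely on bash's `$RANDOM`; a portable sketch of the same arithmetic:

```shell
# Pick a random slice index in [0, n) and format it as teuthology's i/n --subset value.
n=50
i=$(awk -v n="$n" 'BEGIN { srand(); print int(rand() * n) }')
echo "--subset $i/$n"
```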

Re-running 1 failed and 3 dead tests:

Failure is in rados/objectstore/objectstore.yaml and appears to be reproducible. Log here: http://pulpito.ceph.com/smithfarm-2017-04-11_06:46:27-rados-wip-kraken-backports-distro-basic-smithi/1012969

Actions #4

Updated by Nathan Cutler about 7 years ago

Upgrade jewel-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 1000

Re-running the dead job:

Actions #5

Updated by Nathan Cutler about 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-kraken-backports --machine-type vps --priority 101
Actions #6

Updated by Nathan Cutler about 7 years ago

powercycle

teuthology-suite -v -c wip-kraken-backports -k distro -m smithi -s powercycle -p 1000 -l 2 --email ncutler@suse.com
Actions #7

Updated by Nathan Cutler about 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

Re-running 2 failed jobs:

Actions #8

Updated by Nathan Cutler about 7 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

Re-running 2 failed and 2 dead jobs:

Actions #9

Updated by Nathan Cutler about 7 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 10)/10

Re-running the 2 failed jobs:

Actions #10

Updated by Nathan Cutler about 7 years ago

The first attempt (1 month ago) was a false start. Now starting again with a re-populated integration branch. Since none of the first runs yielded any data, I just recycled the respective comments.

Actions #11

Updated by Nathan Cutler about 7 years ago

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #12

Updated by Nathan Cutler about 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 65)/65 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

NOTE: rados/objectstore/objectstore.yaml failure from previous rados run was not reproduced: http://pulpito.ceph.com/smithfarm-2017-04-13_17:37:26-rados-wip-kraken-backports-distro-basic-smithi/1020874/

NOTE: this integration branch was missing a backport that fixed a teuthology issue with the workunit task. This issue caused trouble with upgrade tests in particular.

Re-running 5 failed and 2 dead jobs:

Ruled a pass

Actions #13

Updated by Nathan Cutler about 7 years ago

Upgrade jewel-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101

Re-running 11 failed and 3 dead jobs:

Most (if not all) of the failures were caused by missing https://github.com/ceph/ceph/pull/14487 in the integration branch.

Actions #14

Updated by Nathan Cutler about 7 years ago

Upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-kraken-backports --machine-type vps --priority 101

Re-running 2 failed jobs:

Actions #15

Updated by Nathan Cutler about 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-kraken-backports --machine-type vps --priority 101
Actions #16

Updated by Nathan Cutler about 7 years ago

powercycle

teuthology-suite -v -c wip-kraken-backports -k distro -m smithi -s powercycle -p 1000 -l 2 --email ncutler@suse.com
Actions #17

Updated by Nathan Cutler about 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

Re-running 1 failed job:

Actions #18

Updated by Nathan Cutler about 7 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

Re-running 4 failed jobs:

Actions #19

Updated by Nathan Cutler about 7 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 10)/10
Actions #20

Updated by Nathan Cutler about 7 years ago

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #22

Updated by Nathan Cutler about 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 75)/75 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

2 fail, 1 dead, 269 pass (272 total) http://pulpito.ceph.com:80/smithfarm-2017-04-15_15:29:43-rados-wip-kraken-backports-distro-basic-smithi/

Re-running 2 failed and 1 dead jobs:

Re-running 1 failed job:

Actions #23

Updated by Nathan Cutler about 7 years ago

Upgrade jewel-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 1000

Re-running 4 failed and 1 dead job:

Re-running:

Note that there is a regression in jewel which I am in the process of bisecting: http://tracker.ceph.com/issues/19538#note-29

Re-running on smithi:

Asserting that http://tracker.ceph.com/issues/18089#note-68 fixed the one failure:

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101 --filter="upgrade:jewel-x/point-to-point-x/{distros/ubuntu_latest.yaml point-to-point-upgrade.yaml}"

Re-running again, 5 times:

Patched teuthology/task/install/packages.yaml so the apt-get install command explicitly installs all the necessary packages, including libradosstriper1=10.2.0-1xenial librgw2=10.2.0-1xenial python-rados=10.2.0-1xenial python-cephfs=10.2.0-1xenial python-rbd=10.2.0-1xenial libcephfs1=10.2.0-1xenial
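The patch amounts to making apt-get install version-pinned packages, so the whole client stack comes from the same 10.2.0 build. Assembling the pin arguments looks roughly like this (the package list is taken from the note above; the loop itself is illustrative):

```shell
# Construct pkg=version arguments for an explicit, version-pinned apt-get install.
ver="10.2.0-1xenial"
pins=""
for p in libradosstriper1 librgw2 python-rados python-cephfs python-rbd libcephfs1; do
  pins="$pins $p=$ver"
done
echo "apt-get install$pins"
```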

Re-running again with patched teuthology:

For some reason it is not seeing the updated package list; opened http://tracker.ceph.com/issues/19681

Actions #24

Updated by Nathan Cutler about 7 years ago

upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-kraken-backports --machine-type vps --priority 1000

Re-running two failed jobs:

Re-running

Actions #25

Updated by Nathan Cutler about 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-kraken-backports --machine-type vps --priority 1000

Re-running 1 dead job:

Actions #26

Updated by Nathan Cutler about 7 years ago

powercycle

teuthology-suite -v -c wip-kraken-backports -k distro -m smithi -s powercycle -p 1000 -l 2 --email ncutler@suse.com

pass http://pulpito.ceph.com:80/smithfarm-2017-04-15_16:03:01-powercycle-wip-kraken-backports-distro-basic-smithi/

Actions #27

Updated by Nathan Cutler about 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

pass http://pulpito.ceph.com:80/smithfarm-2017-04-15_16:03:52-fs-wip-kraken-backports-distro-basic-smithi/

Actions #28

Updated by Nathan Cutler about 7 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

3 fail, 1 dead, 268 pass (272 total) http://pulpito.ceph.com:80/smithfarm-2017-04-15_16:06:44-rgw-wip-kraken-backports-distro-basic-smithi/

Re-running 3 failed and 1 dead job:

Actions #29

Updated by Nathan Cutler about 7 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 10)/10

pass http://pulpito.ceph.com:80/smithfarm-2017-04-15_16:10:41-rbd-wip-kraken-backports-distro-basic-smithi/

Actions #30

Updated by Nathan Cutler about 7 years ago

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/f2fdbbe354d7acc1ca162a3179822856f672875b/

Actions #31

Updated by Nathan Cutler about 7 years ago

Upgrade jewel-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101

The failed jobs were caused by a pool filling up. Re-running them on smithi:

Actions #32

Updated by Nathan Cutler about 7 years ago

upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-kraken-backports --machine-type vps --priority 101

Re-running 1 dead and 1 fail (5 times each):

Jason opened a PR fixing issue 19692 in master; cherry-picked https://github.com/ceph/ceph/pull/14641 and will run another test.

Actions #33

Updated by Nathan Cutler about 7 years ago

Upgrade hammer-jewel-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101

Sigh

Actions #34

Updated by Nathan Cutler almost 7 years ago

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/f90b221bff5c5d31bfa4682ed36c95e28d4d59b4/

Actions #35

Updated by Nathan Cutler almost 7 years ago

Upgrade jewel-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101

Dead job appears to be fixed by https://github.com/ceph/ceph/pull/14612 \o/

Re-running all jobs:

Actions #36

Updated by Nathan Cutler almost 7 years ago

upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-kraken-backports --machine-type vps --priority 101

Re-running 2 failed jobs:

Jason writes at https://github.com/ceph/ceph/pull/14680#issuecomment-295888776:

I'm at least satisfied that the build is functional since it got to basically the end of the test.

Ruled a pass

Actions #37

Updated by Nathan Cutler almost 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 75)/75 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

Subset 27/75

Re-running 3 dead jobs:

Actions #38

Updated by Nathan Cutler almost 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-kraken-backports --machine-type vps --priority 101
Actions #39

Updated by Nathan Cutler almost 7 years ago

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/d69a70e89bbf73e1b842ca8857d78a38ce7a753f

Actions #40

Updated by Nathan Cutler almost 7 years ago

Upgrade jewel-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101 --email ncutler@suse.com
Actions #41

Updated by Nathan Cutler almost 7 years ago

upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-kraken-backports --machine-type vps --priority 101 --email ncutler@suse.com
Actions #42

Updated by Nathan Cutler almost 7 years ago

Upgrade hammer-jewel-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101 --email ncutler@suse.com
  • can't be scheduled; broken in master too
Actions #43

Updated by Nathan Cutler almost 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-kraken-backports --machine-type vps --priority 101 --email ncutler@suse.com
Actions #44

Updated by Nathan Cutler almost 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 75)/75 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

Subset 45/75

Actions #45

Updated by Nathan Cutler almost 7 years ago

powercycle

teuthology-suite -v -c wip-kraken-backports -k distro -m smithi -s powercycle -p 1000 -l 2 --email ncutler@suse.com
Actions #46

Updated by Nathan Cutler almost 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi
Actions #47

Updated by Nathan Cutler almost 7 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2
Actions #48

Updated by Nathan Cutler almost 7 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 10)/10
Actions #49

Updated by Nathan Cutler almost 7 years ago

rgw suite for ragweed support

Special request by Yehuda

build: https://shaman.ceph.com/builds/ceph/wip-rgw-support-ragweed-kraken/

teuthology-suite -k distro --priority 999 --suite rgw --email ncutler@suse.com --ceph wip-rgw-support-ragweed-kraken --machine-type smithi --subset $(expr $RANDOM % 2)/2

--rerun

Actions #50

Updated by Nathan Cutler almost 7 years ago

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/241c44b083d6e9fc82ff2d1d10379963d65db866/

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #51

Updated by Nathan Cutler almost 7 years ago

rados

Assert that the previous "massive failure" is no longer an issue:

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --rerun smithfarm-2017-05-03_08:22:47-rados-wip-kraken-backports-distro-basic-smithi --limit 10

1 fail, 9 pass (10 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-04_09:46:39-rados-wip-kraken-backports-distro-basic-smithi/
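`--rerun <run-name>` reschedules jobs from an earlier pulpito run, and `--limit 10` caps it at 10 of them. The run name itself encodes user, timestamp, suite, branch, flavor, and machine type; splitting out two of those fields with shell parameter expansion (the field layout is inferred from the run names in this ticket):

```shell
run="smithfarm-2017-05-03_08:22:47-rados-wip-kraken-backports-distro-basic-smithi"
user=${run%%-*}       # everything before the first "-": smithfarm
machine=${run##*-}    # everything after the last "-": smithi
echo "$user $machine"
```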

Full suite (311 jobs; next time we could go to subset /80)

teuthology-suite -k distro --priority 101 --suite rados --subset $(expr $RANDOM % 75)/75 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

5 fail, 1 dead, 305 pass (311 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-04_10:46:32-rados-wip-kraken-backports-distro-basic-smithi/

Failed jobs:

Dead job:

  • 1359969
--rerun

2 fail, 4 pass (6 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-05_06:41:59-rados-wip-kraken-backports-distro-basic-smithi/

Actions #52

Updated by Nathan Cutler almost 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-kraken-backports --machine-type vps --priority 101 --email ncutler@suse.com

1 fail, 2 pass (3 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-04_09:49:19-ceph-disk-wip-kraken-backports-distro-basic-vps/

  • The failure looks like infrastructure noise
--rerun

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-04_10:39:47-ceph-disk-wip-kraken-backports-distro-basic-vps/

Actions #53

Updated by Nathan Cutler almost 7 years ago

powercycle

teuthology-suite -v -c wip-kraken-backports -k distro -m smithi -s powercycle -p 102 -l 2 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-04_09:51:15-powercycle-wip-kraken-backports-distro-basic-smithi/

Actions #54

Updated by Nathan Cutler almost 7 years ago

rgw

Assert that the previous "massive failure" is no longer an issue:

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --rerun smithfarm-2017-05-03_10:00:00-rgw-wip-kraken-backports-distro-basic-smithi --limit 10

fail http://pulpito.ceph.com/smithfarm-2017-07-04_09:48:04-rgw-wip-kraken-backports-distro-basic-smithi/

bluestore memory leak (?)

(01:44:00 PM) owasserm: smithfarm, they all are osd leaks
(01:45:22 PM) owasserm: smithfarm, all are uninit condition
(01:45:45 PM) owasserm: in bstore_kv_sync
(01:46:03 PM) smithfarm: owasserm: ah, thanks - so we need a bluestore memory leak fix
(01:46:29 PM) owasserm: smithfarm, probably 
(01:46:41 PM) owasserm: smithfarm, it looks like something is not initialized

Apparently those leaks might be fixed by yet-to-be-merged https://github.com/ceph/ceph-ci/commit/42db0c70bc7ef595f0925657c043ce081799b2b9

Actions #55

Updated by Nathan Cutler almost 7 years ago

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/78cd8adc612e74743d75d15a33bb75d0136f7e4d/

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/784140670ad97359ea67547dccc566dd1c0e317a/

And a subsequent iteration dropped PR#14577 and added:

Actions #56

Updated by Nathan Cutler almost 7 years ago

rbd

teuthology-suite -k distro --priority 101 --suite rbd --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 10)/10

8 fail, 160 pass (168 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-04_20:05:04-rbd-wip-kraken-backports-distro-basic-smithi/

Cherry-picked the fix into wip-kraken-backports. Need to re-run.

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/5fbe1d9dbb453cbbb090b902dcfc75109bc81dd9/

--rerun

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-05_06:38:53-rbd-wip-kraken-backports-distro-basic-smithi/

Actions #57

Updated by Nathan Cutler almost 7 years ago

fs

teuthology-suite -k distro --priority 101 --suite fs --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

4 fail, 184 pass (188 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-04_20:10:10-fs-wip-kraken-backports-distro-basic-smithi/

Fix is https://github.com/ceph/ceph/pull/16114

--rerun with the fix included:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-05_08:53:19-fs-wip-kraken-backports-distro-basic-smithi/

Actions #58

Updated by Nathan Cutler almost 7 years ago

rgw

The SLO/DLO failures are still with us; check if bluestore valgrind issue is gone:

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --rerun smithfarm-2017-05-03_10:00:00-rgw-wip-kraken-backports-distro-basic-smithi --limit 10

3 fail, 7 pass (10 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-04_20:13:07-rgw-wip-kraken-backports-distro-basic-smithi/

Full run with subset 0/3:

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 3)/3

18 fail, 74 pass (92 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-05_06:44:30-rgw-wip-kraken-backports-distro-basic-smithi/

Actions #59

Updated by Nathan Cutler almost 7 years ago

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/30ed046cd1566b1b6b5901da709c28bc71876bec/

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #60

Updated by Nathan Cutler almost 7 years ago

rgw

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-05_22:18:24-rgw-wip-kraken-backports-distro-basic-smithi/

Actions #61

Updated by Nathan Cutler almost 7 years ago

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/fad609aa98d1a24280ce1134723ca2f0edf73c01/

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #62

Updated by Nathan Cutler almost 7 years ago

rados

teuthology-suite -k distro --priority 101 --suite rados --subset $(expr $RANDOM % 80)/80 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-07_11:59:33-rados-wip-kraken-backports-distro-basic-smithi/

Quite a few failures, OSDs dying with:

2017-07-07T12:10:27.086 INFO:tasks.ceph.osd.1.smithi008.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/11.2.0-693-gfad609a/rpm/el7/BUILD/ceph-11.2.0-693-gfad609a/src/osd/PrimaryLogPG.cc: In function 'virtual void PrimaryLogPG::snap_trimmer(epoch_t)' thread 7f8114ff6700 time 2017-07-07 12:10:27.075629
2017-07-07T12:10:27.090 INFO:tasks.ceph.osd.1.smithi008.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/11.2.0-693-gfad609a/rpm/el7/BUILD/ceph-11.2.0-693-gfad609a/src/osd/PrimaryLogPG.cc: 3845: FAILED assert(is_primary())
2017-07-07T12:10:27.093 INFO:tasks.ceph.osd.1.smithi008.stderr: ceph version 11.2.0-693-gfad609a (fad609aa98d1a24280ce1134723ca2f0edf73c01)
2017-07-07T12:10:27.096 INFO:tasks.ceph.osd.1.smithi008.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x7e) [0x7f813a2a1bde]
2017-07-07T12:10:27.099 INFO:tasks.ceph.osd.1.smithi008.stderr: 2: (PrimaryLogPG::snap_trimmer(unsigned int)+0x15f) [0x7f8139d59a4f]
2017-07-07T12:10:27.102 INFO:tasks.ceph.osd.1.smithi008.stderr: 3: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x7cc) [0x7f8139c4919c]
2017-07-07T12:10:27.104 INFO:tasks.ceph.osd.1.smithi008.stderr: 4: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x945) [0x7f813a2a7855]
2017-07-07T12:10:27.108 INFO:tasks.ceph.osd.1.smithi008.stderr: 5: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x7f813a2a99b0]
2017-07-07T12:10:27.112 INFO:tasks.ceph.osd.1.smithi008.stderr: 6: (()+0x7dc5) [0x7f813743cdc5]
2017-07-07T12:10:27.115 INFO:tasks.ceph.osd.1.smithi008.stderr: 7: (clone()+0x6d) [0x7f813632473d]
2017-07-07T12:10:27.123 INFO:tasks.ceph.osd.1.smithi008.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

Re-running the failed jobs on kraken baseline:

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-07_17:51:03-rados-kraken-distro-basic-smithi/

Pushed a new version of wip-kraken-backports after adding DNM to two or three rados PRs:

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/7451aa0dc9a2e4e3844dbe2490832a5d98ee252f/

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --rerun smithfarm-2017-07-07_11:59:33-rados-wip-kraken-backports-distro-basic-smithi

1 fail, 43 pass (44 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-18_07:53:06-rados-wip-kraken-backports-distro-basic-smithi/

Re-running one failed job:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-18_11:38:33-rados-wip-kraken-backports-distro-basic-smithi/

Actions #63

Updated by Nathan Cutler almost 7 years ago

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/7451aa0dc9a2e4e3844dbe2490832a5d98ee252f/

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #64

Updated by Nathan Cutler almost 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-kraken-backports --machine-type vps --priority 101

pass http://pulpito.ceph.com/smithfarm-2017-07-18_10:08:54-ceph-disk-wip-kraken-backports-distro-basic-vps/

Actions #65

Updated by Nathan Cutler almost 7 years ago

rgw

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

2 fail, 134 pass (136 total) http://pulpito.ceph.com/smithfarm-2017-07-18_10:13:59-rgw-wip-kraken-backports-distro-basic-smithi/

  • Both failures are "saw valgrind issues"

Rerun

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-18_16:41:51-rgw-wip-kraken-backports-distro-basic-smithi/

Actions #66

Updated by Nathan Cutler almost 7 years ago

rbd

teuthology-suite -k distro --priority 101 --suite rbd --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 10)/10

2 fail, 166 pass (168 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-18_11:48:51-rbd-wip-kraken-backports-distro-basic-smithi/

  • Failures are ???

Rerun

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-19_08:05:49-rbd-wip-kraken-backports-distro-basic-smithi/

Actions #67

Updated by Nathan Cutler almost 7 years ago

fs

teuthology-suite -k distro --priority 101 --suite fs --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-18_11:43:04-fs-wip-kraken-backports-distro-basic-smithi/

Actions #68

Updated by Nathan Cutler almost 7 years ago

powercycle

teuthology-suite -v -c wip-kraken-backports -k distro -m smithi -s powercycle -p 102 -l 2 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-18_14:20:29-powercycle-wip-kraken-backports-distro-basic-smithi/

Actions #69

Updated by Nathan Cutler almost 7 years ago

Upgrade jewel-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101 --email ncutler@suse.com

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-19_11:36:25-upgrade:jewel-x-wip-kraken-backports-distro-basic-vps/

  • 1419136, 1419142, 1419151: MONs running kraken, some OSDs on kraken while others still on jewel, MDS is restarted (completing its upgrade to kraken) -> cluster enters HEALTH_WARN and does not recover within timeout
    • Rerun with teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101 --email ncutler@suse.com --filter="jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml kraken.yaml}" --num 3

Rerun

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-20_08:08:54-upgrade:jewel-x-wip-kraken-backports-distro-basic-vps/

Actions #70

Updated by Nathan Cutler almost 7 years ago

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/a9d889422fb49f5ea109d98f2a9b2a973f0f358f/

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #71

Updated by Nathan Cutler almost 7 years ago

rados

teuthology-suite -k distro --priority 101 --suite rados --subset $(expr $RANDOM % 90)/90 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

3 fail, 242 pass (245 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-19_21:14:07-rados-wip-kraken-backports-distro-basic-smithi/

  • SELinux failures

Rerun

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-20_08:29:01-rados-wip-kraken-backports-distro-basic-smithi/

Actions #72

Updated by Nathan Cutler over 6 years ago

Upgrade jewel-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101 --email ncutler@suse.com

fail http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-20_08:53:03-upgrade:jewel-x-wip-kraken-backports-distro-basic-vps/

Rerunning the failures on smithi:

3 fail, rest pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-21_06:55:36-upgrade:jewel-x-wip-kraken-backports-distro-basic-smithi/

  • The "failed to complete snap trimming before timeout" failures are caused by PR#14597
  • Then there is what appears to be a transient SELinux failure

Rerunning 3 failed jobs:

2 fail, 1 pass (3 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-21_14:35:56-upgrade:jewel-x-wip-kraken-backports-distro-basic-smithi/

Opened https://github.com/ceph/ceph/pull/16493 to drop the two failing tests.

Actions #73

Updated by Nathan Cutler over 6 years ago

Upgrade hammer-jewel-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-jewel-x --ceph wip-kraken-backports --machine-type vps --priority 101

upgrade/hammer-jewel-x/stress-split links into upgrade/jewel-x, so it is broken by https://github.com/ceph/ceph/pull/14612; will come back to this test after that PR is merged

waiting for PR#14612 to merge

Actions #74

Updated by Nathan Cutler over 6 years ago

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/36577247a3303f5d74668cb53b1d76cac6558109/

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #75

Updated by Nathan Cutler over 6 years ago

rados

teuthology-suite -k distro --priority 102 --suite rados --subset $(expr $RANDOM % 90)/90 --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi

245 pass (245 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-07-31_20:10:08-rados-wip-kraken-backports-distro-basic-smithi/

Actions #76

Updated by Nathan Cutler over 6 years ago

Upgrade jewel-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-kraken-backports --machine-type smithi --priority 101 --email ncutler@suse.com

3 fail, 29 pass (32 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-01_09:32:53-upgrade:jewel-x-wip-kraken-backports-distro-basic-smithi/

Rerun:

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-01_14:22:03-upgrade:jewel-x-wip-kraken-backports-distro-basic-smithi/

Actions #77

Updated by Nathan Cutler over 6 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-kraken-backports --machine-type vps --priority 101

pass http://pulpito.ceph.com/smithfarm-2017-08-01_07:41:01-ceph-disk-wip-kraken-backports-distro-basic-vps/

Actions #78

Updated by Nathan Cutler over 6 years ago

rgw

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 3)/3

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-01_07:47:49-rgw-wip-kraken-backports-distro-basic-smithi/

Actions #79

Updated by Nathan Cutler over 6 years ago

rbd

teuthology-suite -k distro --priority 103 --suite rbd --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 15)/15

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-01_07:51:03-rbd-wip-kraken-backports-distro-basic-smithi/

Actions #80

Updated by Nathan Cutler over 6 years ago

upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-kraken-backports --machine-type vps --priority 101

running http://pulpito.ceph.com/smithfarm-2017-08-01_07:53:13-upgrade:client-upgrade-wip-kraken-backports-distro-basic-vps/

Actions #81

Updated by Nathan Cutler over 6 years ago

fs

teuthology-suite -k distro --priority 101 --suite fs --email ncutler@suse.com --ceph wip-kraken-backports --machine-type smithi --subset $(expr $RANDOM % 3)/3

104 pass (104 total) http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-01_07:58:06-fs-wip-kraken-backports-distro-basic-smithi/

Actions #82

Updated by Nathan Cutler over 6 years ago

  • Description updated (diff)
Actions #83

Updated by Nathan Cutler over 6 years ago

  • Description updated (diff)
Actions #84

Updated by Nathan Cutler over 6 years ago

  • Description updated (diff)
Actions #85

Updated by Nathan Cutler over 6 years ago

  • Description updated (diff)
Actions #86

Updated by Nathan Cutler over 6 years ago

smoke systemd

teuthology-suite -k distro --verbose --suite smoke/systemd --ceph wip-kraken-backports --machine-type vps --priority 101 --email ncutler@suse.com

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-08-01_18:48:26-smoke:systemd-wip-kraken-backports-distro-basic-vps/

Actions #87

Updated by Nathan Cutler over 6 years ago

  • Description updated (diff)
Actions #88

Updated by Nathan Cutler over 6 years ago

  • Description updated (diff)
Actions #89

Updated by Nathan Cutler over 6 years ago

https://shaman.ceph.com/builds/ceph/wip-kraken-backports/80d64e5679125f285a57d9638e064334f25cda1b/

git --no-pager log --format='%H %s' --graph ceph/kraken..wip-kraken-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #90

Updated by Nathan Cutler over 6 years ago

  • Description updated (diff)
Actions #91

Updated by Yuri Weinstein over 6 years ago

QE VALIDATION (STARTED 8/1/17)

(Note: a "PASSED / FAILED" entry indicates the test is still in progress)

Re-run command lines and filters are captured in http://pad.ceph.com/p/kraken_v11.2.1_QE_validation_notes

Command line: CEPH_QA_MAIL="ceph-qa@ceph.com"; MACHINE_NAME=smithi; CEPH_BRANCH=kraken; teuthology-suite -v --ceph-repo https://github.com/ceph/ceph.git --suite-repo https://github.com/ceph/ceph.git -c $CEPH_BRANCH -m $MACHINE_NAME -s rados --subset 35/90 -k distro -p 100 -e $CEPH_QA_MAIL --suite-branch kraken --dry-run

(same subset for rbd)

teuthology-suite -v -c $CEPH_BRANCH -m smithi -r $RERUN --suite-repo https://github.com/ceph/ceph.git --ceph-repo https://github.com/ceph/ceph.git --suite-branch $CEPH_BRANCH -p 90 -R fail,dead,running --dry-run
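The -R fail,dead,running flag in the rerun command above restricts rescheduling to jobs whose previous result was one of those states; a toy illustration of that selection (hypothetical job names and statuses, not the real teuthology API):

```python
# Hypothetical job results; -R fail,dead,running reruns only these states.
RERUN_STATES = {"fail", "dead", "running"}

jobs = {
    "rados/basic/1": "pass",
    "rados/basic/2": "fail",
    "rados/thrash/3": "dead",
    "rados/thrash/4": "running",
}

# select the jobs whose last known status is in the rerun set
to_rerun = sorted(name for name, status in jobs.items()
                  if status in RERUN_STATES)
print(to_rerun)
```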

Suite: result (runs/reruns, notes/issues)

rados: PASSED
run: http://pulpito.ceph.com/yuriw-2017-08-02_15:55:15-rados-kraken-distro-basic-smithi/
rerun: http://pulpito.ceph.com/yuriw-2017-08-03_15:19:23-rados-kraken-distro-basic-smithi/
rgw: PASSED
run: http://pulpito.ceph.com/yuriw-2017-08-02_16:03:13-rgw-kraken-distro-basic-smithi/
rerun: http://pulpito.ceph.com/yuriw-2017-08-03_15:20:19-rgw-kraken-distro-basic-smithi/
rbd: PASSED
run: http://pulpito.ceph.com/yuriw-2017-08-02_16:17:20-rbd-kraken-distro-basic-smithi/
rerun: http://pulpito.ceph.com/yuriw-2017-08-03_15:21:02-rbd-kraken-distro-basic-smithi/
fs: PASSED
run: http://pulpito.ceph.com/yuriw-2017-08-02_16:06:18-fs-kraken-distro-basic-smithi/
rerun: http://pulpito.ceph.com/yuriw-2017-08-03_15:26:44-fs-kraken-distro-basic-smithi/
krbd: FAILED
run: http://pulpito.ceph.com/yuriw-2017-08-02_16:27:14-krbd-kraken-testing-basic-smithi/
rerun: http://pulpito.ceph.com/yuriw-2017-08-04_16:10:42-krbd-kraken-testing-basic-smithi/
kcephfs: PASSED
run: http://pulpito.ceph.com/yuriw-2017-08-02_16:28:15-kcephfs-kraken-testing-basic-smithi/
knfs: PASSED
run: http://pulpito.ceph.com/yuriw-2017-08-02_16:29:04-knfs-kraken-testing-basic-smithi/
rest: PASSED
run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-08-03_15:30:39-rest-kraken-distro-basic-smithi/
hadoop: EXCLUDED FROM THIS RELEASE
run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-08-03_15:31:11-hadoop-kraken-distro-basic-smithi/
samba: EXCLUDED FROM THIS RELEASE
ceph-deploy: PASSED
run: http://pulpito.ceph.com/yuriw-2017-08-03_15:32:07-ceph-deploy-kraken-distro-basic-vps/
rerun: http://pulpito.ceph.com/yuriw-2017-08-04_17:14:40-ceph-deploy-kraken-distro-basic-vps/
ceph-disk: PASSED
run: http://pulpito.ceph.com/yuriw-2017-08-03_15:32:56-ceph-disk-kraken-distro-basic-vps/
upgrade/client-upgrade: FAILED
run: http://pulpito.ceph.com/yuriw-2017-08-03_15:33:55-upgrade:client-upgrade-kraken-distro-basic-vps/
upgrade/jewel-x (kraken): FAILED, similar to the PR for jewel https://github.com/ceph/ceph/pull/16089
run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-08-04_17:08:54-upgrade:jewel-x-kraken-distro-basic-smithi/
rerun: http://pulpito.ceph.com/yuriw-2017-08-04_22:53:36-upgrade:jewel-x-kraken-distro-basic-smithi/
upgrade/hammer-jewel-x (kraken): EXCLUDED FROM THIS RELEASE
powercycle: FAILED (#20917), approved by Sage and Josh
run: http://pulpito.ceph.com/yuriw-2017-08-02_16:19:18-powercycle-kraken-testing-basic-smithi/
rerun: http://pulpito.ceph.com/yuriw-2017-08-04_16:18:11-powercycle-kraken-testing-basic-smithi/
ceph-ansible: FAILED, https://github.com/ceph/ceph-ansible/issues/1737
run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-08-03_15:38:06-ceph-ansible-kraken-distro-basic-vps/
Actions #92

Updated by Nathan Cutler over 6 years ago

  • Description updated (diff)
Actions #93

Updated by Nathan Cutler over 6 years ago

  • Description updated (diff)
  • Status changed from In Progress to Resolved