Tasks #17851

jewel v10.2.6

Added by Loïc Dachary over 7 years ago. Updated about 7 years ago.

Status: Resolved
Priority: Urgent
Assignee: Abhishek Varshney
Target version:
% Done: 0%
Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

Workflow

  • Preparing the release
  • Cutting the release
    • Abhishek V. asks Abhishek L. if a point release should be published - YES
    • Abhishek V. gets approval from all leads
      • Yehuda, rgw - YES (February 20, 2017 on ceph-devel mailing list)
      • John, CephFS - YES (February 22, 2017 on ceph-devel mailing list)
      • Jason, RBD - YES (February 20, 2017 on ceph-devel mailing list)
      • Josh, rados - YES (February 21, 2017 on ceph-devel mailing list)
    • Abhishek L. writes and commits the release notes
    • Abhishek V. informs Yuri that the branch is ready for testing - DONE (February 22, 2017 on ceph-devel mailing list)
    • Yuri runs additional integration tests - DONE (March 2, 2017 on ceph-devel mailing list)
    • If Yuri discovers new bugs that need to be backported urgently (i.e. their priority is set to Urgent), the release goes back to the preparation stage; it was not ready after all
    • Yuri informs Alfredo that the branch is ready for release - DONE
    • Alfredo creates the packages and sets the release tag
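The "approval from all leads" gate above can be sketched as a trivial check (purely illustrative; the approval record below is a hypothetical stand-in for the mailing-list answers, not an API):

```shell
# Proceed with the release only if every component lead answered YES.
approvals="rgw=YES CephFS=YES RBD=YES rados=YES"   # hypothetical record
ok=yes
for a in $approvals; do
    case $a in
        *=YES) ;;        # this lead approved
        *) ok=no ;;      # any other answer blocks the release
    esac
done
echo "all leads approved: $ok"   # → all leads approved: yes
```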

Release information

Actions #1

Updated by Loïc Dachary over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel-next..jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #2

Updated by Loïc Dachary over 7 years ago

rbd

teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
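The `--subset $(expr $RANDOM % 5)/5` argument schedules a random one-fifth slice of the suite rather than the whole matrix. A minimal sketch of how that fraction is built (illustrative; `$RANDOM` is a bash built-in, given a fixed fallback here so the snippet also runs under plain sh):

```shell
# Build a --subset fraction like the one above: pick slice k of n at random.
n=5
slice=$(expr ${RANDOM:-12345} % $n)   # an integer in 0..n-1
echo "--subset $slice/$n"             # e.g. --subset 3/5
```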

Re-running failed tests

Actions #3

Updated by Loïc Dachary over 7 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi

Re-running failed tests

Actions #4

Updated by Loïc Dachary over 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 2000)/2000 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi

Re-running failed tests

Actions #5

Updated by Loïc Dachary over 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi

Re-running failed tests

Actions #6

Updated by Loïc Dachary over 7 years ago

powercycle

teuthology-suite -v -c jewel-backports --suite-branch jewel -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
Actions #7

Updated by Loïc Dachary over 7 years ago

Upgrade

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml

Re-running failed tests

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml

Re-running failed tests

Re-running the dead test with https://github.com/ceph/ceph-qa-suite/pull/1256 to set require_jewel

filter='upgrade:hammer-x/stress-split/{0-tz-eastern.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml}'
teuthology-suite -k distro --filter="$filter" --verbose --suite upgrade/hammer-x --suite-branch wip-17734-jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml
Actions #8

Updated by Loïc Dachary over 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml
Actions #9

Updated by Loïc Dachary over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel-next..jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #10

Updated by Loïc Dachary over 7 years ago

rbd

teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi

Re-running the failed jobs

Re-running failed jobs

Re-running failed jobs on jewel

Actions #12

Updated by Loïc Dachary over 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 2000)/2000 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
Actions #13

Updated by Loïc Dachary over 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi

Re-running failed jobs

Actions #14

Updated by Loïc Dachary over 7 years ago

powercycle

teuthology-suite -v -c jewel-backports --suite-branch jewel -k distro -m smithi -s powercycle -p 1000 --email loic@dachary.org

Re-running the failed job

Actions #15

Updated by Loïc Dachary over 7 years ago

Upgrade

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml

Re-running failed jobs

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml

Re-running failed jobs

Actions #16

Updated by Loïc Dachary over 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml
Actions #17

Updated by Loïc Dachary over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel-next..wip-jewel-backports-loic | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #18

Updated by Loïc Dachary over 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --suite-branch jewel --ceph wip-jewel-backports-loic --machine-type vps --priority 1000 machine_types/vps.yaml ~/shaman.yaml
Actions #19

Updated by Loïc Dachary over 7 years ago

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --suite-branch jewel --ceph wip-jewel-backports-loic --machine-type vps --priority 1000 machine_types/vps.yaml ~/shaman.yaml
Actions #20

Updated by Loïc Dachary over 7 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports-loic --suite-branch jewel -k distro -m smithi -s powercycle -p 1000 --email loic@dachary.org ~/shaman.yaml
Actions #22

Updated by Loïc Dachary over 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --suite-branch jewel --email loic@dachary.org --ceph wip-jewel-backports-loic --machine-type smithi ~/shaman.yaml
Actions #23

Updated by Loïc Dachary over 7 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --suite-branch jewel --email loic@dachary.org --ceph wip-jewel-backports-loic --machine-type smithi ~/shaman.yaml
Actions #24

Updated by Loïc Dachary over 7 years ago

rbd

teuthology-suite --priority 1000 --suite rbd --suite-branch jewel --email loic@dachary.org --ceph wip-jewel-backports-loic --machine-type smithi ~/shaman.yaml
  • fail http://pulpito.ceph.com/loic-2016-12-06_09:46:56-rbd-wip-jewel-backports-loic---basic-smithi
    • 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term /usr/libexec/qemu-kvm -enable-kvm -nographic -m 4096 -drive file=/home/ubuntu/cephtest/qemu/base.client.0.qcow2,format=qcow2,if=virtio -cdrom /home/ubuntu/cephtest/qemu/client.0.iso -drive file=rbd:rbd/client.0.0-clone:id=0,format=raw,if=virtio,cache=writeback -drive file=rbd:rbd/client.0.1-clone:id=0,format=raw,if=virtio,cache=writeback'
    • 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10328195aad09f59d5c2c382bd9241c7418f744e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/qemu-iotests.sh'
    • 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term /usr/libexec/qemu-kvm -enable-kvm -nographic -m 4096 -drive file=/home/ubuntu/cephtest/qemu/base.client.0.qcow2,format=qcow2,if=virtio -cdrom /home/ubuntu/cephtest/qemu/client.0.iso -drive file=rbd:rbd/client.0.0-clone:id=0,format=raw,if=virtio,cache=writethrough -drive file=rbd:rbd/client.0.1-clone:id=0,format=raw,if=virtio,cache=writethrough'
    • SELinux denials found on : ['type=AVC msg=audit(1481252822.592:36797): avc: denied { create } for pid=13771 comm="mandb" name="13771" scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.561:36792): avc: denied { unlink } for pid=13771 comm="mandb" name="index.db" dev="sda1" ino=29361527 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.561:36792): avc: denied { remove_name } for pid=13771 comm="mandb" name="#index.db#" dev="sda1" ino=29363155 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir', 'type=AVC msg=audit(1481252822.462:36790): avc: denied { create } for pid=13771 comm="mandb" name="#index.db#" scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252821.938:36784): avc: denied { create } for pid=13760 comm="logrotate" name="logrotate.status.tmp" scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252821.939:36785): avc: denied { setattr } for pid=13760 comm="logrotate" name="logrotate.status.tmp" dev="sda1" ino=29363169 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.561:36792): avc: denied { rename } for pid=13771 comm="mandb" name="#index.db#" dev="sda1" ino=29363155 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.182:36789): avc: denied { lock } for pid=13771 comm="mandb" path="/var/cache/man/index.db" dev="sda1" ino=29361527 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.561:36792): avc: denied { 
add_name } for pid=13771 comm="mandb" name="index.db" dev="sda1" ino=29361527 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir', 'type=AVC msg=audit(1481252821.938:36784): avc: denied { write } for pid=13760 comm="logrotate" path="/var/lib/logrotate.status.tmp" dev="sda1" ino=29363169 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.182:36788): avc: denied { open } for pid=13771 comm="mandb" path="/var/cache/man/index.db" dev="sda1" ino=29361527 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.026:36786): avc: denied { rename } for pid=13760 comm="logrotate" name="logrotate.status.tmp" dev="sda1" ino=29363169 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.462:36790): avc: denied { add_name } for pid=13771 comm="mandb" name="#index.db#" scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir', 'type=AVC msg=audit(1481252822.562:36795): avc: denied { read write } for pid=13771 comm="mandb" path="/var/cache/man/index.db" dev="sda1" ino=29363155 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.561:36792): avc: denied { write } for pid=13771 comm="mandb" name="man" dev="sda1" ino=29360274 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir', 'type=AVC msg=audit(1481252821.786:36783): avc: denied { read } for pid=13760 comm="logrotate" name="logrotate.status" dev="sda1" ino=29363155 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.561:36793): avc: denied { lock } for pid=13771 
comm="mandb" path=2F7661722F63616368652F6D616E2F23696E6465782E646223202864656C6574656429 dev="sda1" ino=29361527 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.562:36794): avc: denied { getattr } for pid=13771 comm="mandb" path="/var/cache/man/index.db" dev="sda1" ino=29363155 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252821.786:36783): avc: denied { open } for pid=13760 comm="logrotate" path="/var/lib/logrotate.status" dev="sda1" ino=29363155 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.182:36788): avc: denied { read write } for pid=13771 comm="mandb" name="index.db" dev="sda1" ino=29361527 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252821.786:36782): avc: denied { getattr } for pid=13760 comm="logrotate" path="/var/lib/logrotate.status" dev="sda1" ino=29363155 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.595:36798): avc: denied { setattr } for pid=13771 comm="mandb" name="13771" dev="sda1" ino=29361527 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.026:36786): avc: denied { unlink } for pid=13760 comm="logrotate" name="logrotate.status" dev="sda1" ino=29363155 scontext=system_u:system_r:logrotate_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252823.146:36812): avc: denied { read } for pid=13771 comm="mandb" name="man" dev="sda1" ino=29360274 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir', 'type=AVC 
msg=audit(1481252822.157:36787): avc: denied { getattr } for pid=13771 comm="mandb" path="/var/cache/man/index.db" dev="sda1" ino=29361527 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481252822.462:36790): avc: denied { write } for pid=13771 comm="mandb" name="man" dev="sda1" ino=29360274 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir', 'type=AVC msg=audit(1481252822.592:36796): avc: denied { open } for pid=13771 comm="mandb" path="/var/cache/man/index.db" dev="sda1" ino=29363155 scontext=system_u:system_r:mandb_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file']
    • }, 'changed': False, 'msg': 'Failed to update apt cache.'}}
    • 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term /usr/libexec/qemu-kvm -enable-kvm -nographic -m 4096 -drive file=/home/ubuntu/cephtest/qemu/base.client.0.qcow2,format=qcow2,if=virtio -cdrom /home/ubuntu/cephtest/qemu/client.0.iso -drive file=rbd:rbd/client.0.0-clone:id=0,format=raw,if=virtio,cache=none -drive file=rbd:rbd/client.0.1-clone:id=0,format=raw,if=virtio,cache=none'
    • 'mkdir -p -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10328195aad09f59d5c2c382bd9241c7418f744e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
    • SELinux denials found on : ['type=AVC msg=audit(1481253420.151:8286): avc: denied { open } for pid=11385 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481253937.158:8443): avc: denied { open } for pid=17743 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481253420.151:8286): avc: denied { read } for pid=11385 comm="ceph-osd" name="type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481253688.221:8387): avc: denied { read } for pid=14751 comm="ceph-osd" name="type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481253688.221:8387): avc: denied { open } for pid=14751 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481253937.158:8443): avc: denied { read } for pid=17743 comm="ceph-osd" name="type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481253440.886:8326): avc: denied { read } for pid=12332 comm="ceph-osd" name="type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481253440.886:8326): avc: denied { open } for pid=12332 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file']
    • 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=10328195aad09f59d5c2c382bd9241c7418f744e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=125 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
    • }, 'changed': False, 'msg': 'Failed to update apt cache.'}}
    • SELinux denials found on : ['type=AVC msg=audit(1481244833.591:4250): avc: denied { read } for pid=23876 comm="ceph-osd" name="type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481244813.084:4167): avc: denied { read } for pid=23274 comm="ceph-osd" name="type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481244833.591:4250): avc: denied { open } for pid=23876 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481244813.084:4167): avc: denied { open } for pid=23274 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file']
    • SELinux denials found on : ['type=AVC msg=audit(1481245626.292:4358): avc: denied { open } for pid=18438 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481245483.209:4162): avc: denied { read } for pid=23241 comm="ceph-osd" name="type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481245483.209:4162): avc: denied { open } for pid=23241 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481245626.292:4358): avc: denied { read } for pid=18438 comm="ceph-osd" name="type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481245503.709:4257): avc: denied { open } for pid=23893 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1481245503.709:4257): avc: denied { read } for pid=23893 comm="ceph-osd" name="type" dev="nvme0n1p2" ino=25 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file']
    • }, 'changed': False, 'msg': 'Failed to update apt cache.'}}
    • 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'

Re-running failed jobs:

Actions #25

Updated by Loïc Dachary over 7 years ago

  • Subject changed from jewel v10.2.5 to jewel v10.2.6
  • Target version changed from v10.2.5 to v10.2.6
Actions #26

Updated by Loïc Dachary over 7 years ago

  • Description updated (diff)
Actions #27

Updated by Loïc Dachary over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #28

Updated by Loïc Dachary over 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --suite-branch jewel --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running failed jobs

Actions #29

Updated by Loïc Dachary over 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --suite-branch jewel --ceph wip-jewel-backports --machine-type vps --priority 1000
Actions #30

Updated by Loïc Dachary over 7 years ago

Upgrade

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --suite-branch jewel --ceph wip-jewel-backports --machine-type vps --priority 1000

Re-running dead job

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --suite-branch jewel --ceph wip-jewel-backports --machine-type vps --priority 1000

Re-running failed tests

Actions #31

Updated by Loïc Dachary over 7 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports --suite-branch jewel -k distro -m smithi -s powercycle -p 1000 --email loic@dachary.org

Re-running failed jobs

Actions #32

Updated by Loïc Dachary over 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --suite-branch jewel --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running failed jobs

Actions #33

Updated by Loïc Dachary over 7 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --suite-branch jewel --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running failed jobs

Actions #34

Updated by Loïc Dachary over 7 years ago

rbd

teuthology-suite --priority 1000 --suite rbd --suite-branch jewel --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running failed jobs

Actions #35

Updated by Loïc Dachary over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #36

Updated by Loïc Dachary over 7 years ago

Upgrade

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-jewel-backports --machine-type vps --priority 1000
  • cannot be scheduled
    • IOError: /home/smithfarm/src/git.ceph.com_ceph-c_wip-jewel-backports/qa/suites/upgrade/jewel-x/point-to-point-x/distros/centos.yaml does not exist (abs /home/smithfarm/src/git.ceph.com_ceph-c_wip-jewel-backports/qa/suites/upgrade/jewel-x/point-to-point-x/distros/centos.yaml)
teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 1000
Actions #37

Updated by Loïc Dachary over 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 1000

Re-running the failed job:

Actions #38

Updated by Loïc Dachary over 7 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 1000 --email loic@dachary.org

Re-running 3 failed jobs:

Actions #39

Updated by Loïc Dachary over 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running 4 failed jobs:

Actions #40

Updated by Loïc Dachary over 7 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running 18 failed jobs:

Found and staged pending RGW valgrind-related backports:

Hopefully that will clear up the valgrind issues in the next integration run.

NEWS FLASH: rgw valgrind issues are also appearing in hammer QE testing, but only on CentOS. Hammer testing is done on CentOS 7.3 and Ubuntu 14.04 only, and upon closer examination none of the jobs with valgrind issues ran on Ubuntu 14.04.

Actions #42

Updated by Loïc Dachary over 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running the jobs that stand a chance of succeeding on re-run:

Re-running with the fix from PR#13161

teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --machine-type smithi --ceph wip-lfn-upgrade-hammer --filter="rados/upgrade/{hammer-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{ec-rados-plugin=jerasure-k=3-m=1.yaml rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml test_cache-pool-snaps.yaml}} rados.yaml}"

running http://pulpito.ceph.com:80/smithfarm-2017-01-29_21:05:27-rados-wip-lfn-upgrade-hammer-distro-basic-smithi/

Re-running the upgrade jobs to test David Galloway's fix from http://tracker.ceph.com/issues/18089#note-12

Actions #43

Updated by Nathan Cutler over 7 years ago

rados baseline (jewel)

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --email loic@dachary.org --ceph jewel --machine-type smithi --ceph-repo http://github.com/ceph/ceph.git --suite-repo http://github.com/ceph/ceph.git
Actions #44

Updated by Nathan Cutler over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #45

Updated by Nathan Cutler over 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi
  • fail http://pulpito.ceph.com:80/smithfarm-2017-01-30_11:11:11-rados-wip-jewel-backports-distro-basic-smithi/
    • false positive saw valgrind issues
      • 764136: false positive (tcmalloc)
    • known intermittent bug, succeeded on re-run http://tracker.ceph.com/issues/16236 ("racing read got wrong version")
      • 764195
    • known intermittent bug http://tracker.ceph.com/issues/16943 Error ENOENT: unrecognized pool '.rgw.control' (CRITICAL:root:IOError) in s3tests-test-readwrite under thrashosds
      • 764231: rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml hobj-sort.yaml msgr-failures/osd-delay.yaml msgr/async.yaml rados.yaml thrashers/morepggrow.yaml workloads/rgw_snaps.yaml}
    • problem with the tests, now being fixed in lfn-upgrade-{hammer,infernalis}.yaml
      • 764236 "rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml}" - "need more than 0 values to unpack" - appears to be a regression caused by smithfarm stupidity in PR#13161, fixed and re-pushed
      • 764258 "rados/singleton-nomsgr/{all/lfn-upgrade-infernalis.yaml rados.yaml}" - "'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds" - ceph osd set require_jewel_osds does not get run - same stupid smithfarm mistake

Re-running the first three failures:

tcmalloc valgrind failure

Re-running the "tcmalloc valgrind" test 10 times:

filter="rados/verify/{1thrash/default.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/simple.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml}"
./virtualenv/bin/teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --num 10 --filter="$filter"
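A back-of-the-envelope rationale for `--num 10` (illustrative figures, not from the ticket): if a flaky failure reproduces with probability p on each run, ten independent runs surface it with probability 1 - (1 - p)^10.

```shell
# With an assumed per-run reproduction probability p = 0.3, the chance of
# seeing the failure at least once in 10 runs is 1 - (1 - 0.3)^10.
awk 'BEGIN { printf "%.2f\n", 1 - 0.7 ^ 10 }'   # → 0.97
```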

16236 ("racing read got wrong version")

Succeeded on re-run.

s3tests-test-readwrite dies under thrashosds, out of order op

Test: rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml hobj-sort.yaml msgr-failures/osd-delay.yaml msgr/async.yaml rados.yaml thrashers/morepggrow.yaml workloads/rgw_snaps.yaml}

This test failed twice in two different ways.

Re-running 10 times:

(11:15:51 AM) owasserm: smithfarm, I looked at the failures and the s3_readwrite looks like a timeout issue (it happens after xfs injected stalls)

Conclusion: not a blocker

lfn-upgrade-{hammer,infernalis}.yaml failures

Re-running the last two failures with fixed PR branch (after prolonged trial-and-error):

teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --ceph wip-lfn-upgrade-hammer --machine-type smithi --filter="rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml},rados/singleton-nomsgr/{all/lfn-upgrade-infernalis.yaml rados.yaml}"

teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --ceph wip-lfn-upgrade-hammer --machine-type vps --filter="rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml},rados/singleton-nomsgr/{all/lfn-upgrade-infernalis.yaml rados.yaml}"

And again with --num 8
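The `--filter` option selects jobs whose description contains one of the comma-separated terms. A stand-alone sketch of that substring matching, using an abridged job list (the `grep` here only mimics teuthology's selection, it is not how teuthology implements it):

```shell
# Abridged job descriptions, as teuthology-suite would list them
jobs='rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml}
rados/singleton-nomsgr/{all/lfn-upgrade-infernalis.yaml rados.yaml}
rados/singleton/{all/dump-stuck.yaml rados.yaml}'

# Roughly the selection --filter="lfn-upgrade" would make
printf '%s\n' "$jobs" | grep -F 'lfn-upgrade'
# prints the two lfn-upgrade job descriptions; dump-stuck is excluded
```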

Actions #46

Updated by Nathan Cutler over 7 years ago

Upgrade

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-jewel-backports --machine-type vps --priority 1000

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 1000

CONCLUSION: nothing more to do here, needs a new integration branch

Actions #47

Updated by Nathan Cutler over 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 1000
Actions #48

Updated by Nathan Cutler over 7 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 1000 --email loic@dachary.org

Re-running the failed job 10 times:

Since there was just one failure in the entire suite, and that job only fails about 60% of the time, the overall run is ruled a pass

Actions #49

Updated by Nathan Cutler over 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Since the valgrind issues are apparently benign, ruled a pass

Actions #50

Updated by Nathan Cutler over 7 years ago

rgw

teuthology-suite -k distro --priority 1000 --suite rgw --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running all the failed jobs, expecting a largish number of tcmalloc-related valgrind failures:

Re-running just "rgw_s3tests.yaml lockdep.yaml" to try to reproduce the s3tests.functional failure:

teuthology-suite -k distro --priority 1000 --suite rgw/verify --ceph wip-jewel-backports --machine-type smithi --email ncutler@suse.com --filter="tasks/rgw_s3tests.yaml validater/lockdep.yaml" --num 5
Actions #51

Updated by Nathan Cutler over 7 years ago

Actions #52

Updated by Nathan Cutler over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #54

Updated by Nathan Cutler over 7 years ago

Upgrade

teuthology-suite -k distro --verbose --suite upgrade/jewel-x --ceph wip-jewel-backports --machine-type vps --priority 1000 --email ncutler@suse.com

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 1000 --email ncutler@suse.com

In general, both of these runs have improved significantly but still need work:

Actions #55

Updated by Nathan Cutler over 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 1000 --email ncutler@suse.com
Actions #56

Updated by Nathan Cutler over 7 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 1000 --email ncutler@suse.com

Ruled a pass

Actions #57

Updated by Nathan Cutler over 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Re-running the six failed jobs:

Re-running the one dead job:

  • pending
Actions #58

Updated by Nathan Cutler over 7 years ago

rgw

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Re-running:

Conclusion: since all 8 failures are libtcmalloc-related, Orit says we can merge all the RGW backport PRs.

Actions #59

Updated by Nathan Cutler over 7 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Ruled a pass

Actions #60

Updated by Nathan Cutler over 7 years ago

  • Description updated (diff)
Actions #61

Updated by Nathan Cutler over 7 years ago

  • Description updated (diff)
Actions #62

Updated by Nathan Cutler over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #63

Updated by Nathan Cutler over 7 years ago

rados

teuthology-suite -k distro --priority 101 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
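The `--subset $(expr $RANDOM % 50)/50` idiom schedules only one of 50 slices of the suite, with the slice index chosen at random. `$RANDOM` is a bash-ism (a pseudo-random integer in 0..32767), so with a sample value in its place:

```shell
# Stand-in for $RANDOM to show the arithmetic deterministically
expr 12345 % 50
# prints: 45

echo "$(expr 12345 % 50)/50"
# prints: 45/50  (the value that would be passed to --subset)
```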
Actions #64

Updated by Nathan Cutler over 7 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 1000 --email ncutler@suse.com
Actions #65

Updated by Nathan Cutler over 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
Actions #66

Updated by Nathan Cutler over 7 years ago

rgw

teuthology-suite -k distro --priority 101 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
Actions #67

Updated by Nathan Cutler over 7 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Re-running failed jobs

Actions #68

Updated by Loïc Dachary over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #69

Updated by Loïc Dachary over 7 years ago

rados

teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 50)/50 --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running failed jobs

Re-running the failed jobs on jewel to determine whether they represent a regression

Actions #70

Updated by Loïc Dachary over 7 years ago

Upgrade jewel point-to-point-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x/point-to-point-x --ceph wip-jewel-backports --machine-type vps --priority 1000
Actions #71

Updated by Loïc Dachary over 7 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 1000

Re-running

Actions #72

Updated by Loïc Dachary over 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 1000
Actions #73

Updated by Loïc Dachary over 7 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 1000 --email loic@dachary.org
Actions #74

Updated by Loïc Dachary over 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running failed jobs

Actions #76

Updated by Loïc Dachary over 7 years ago

rbd

teuthology-suite -k distro --priority 1000 --suite rbd --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

Re-running failed jobs

Actions #77

Updated by Loïc Dachary over 7 years ago

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
Actions #78

Updated by Nathan Cutler over 7 years ago

fs

teuthology-suite -k distro --priority 1000 --suite fs --email loic@dachary.org --ceph wip-jewel-backports --machine-type smithi

NOTE: merge https://github.com/ceph/ceph/pull/13459 when/if this run passes, then ask John to approve 10.2.6 - DONE

Actions #79

Updated by Nathan Cutler over 7 years ago

rados

teuthology-suite -k distro --priority 101 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Re-running the three other failures:

filter="rados/singleton/{all/osd-recovery.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/random.yaml rados.yaml},rados/singleton/{all/dump-stuck.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/random.yaml rados.yaml},rados/verify/{1thrash/none.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml}"
Actions #80

Updated by Nathan Cutler over 7 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 101 --email ncutler@suse.com
Actions #81

Updated by Nathan Cutler over 7 years ago

Upgrade jewel point-to-point-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x/point-to-point-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Re-running:

Actions #82

Updated by Nathan Cutler over 7 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Re-running the list-pgs failure:

filter="stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml}"
teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com --filter="$filter"
Actions #83

Updated by Nathan Cutler over 7 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 101 --email ncutler@suse.com

Because of one of:

Actions #84

Updated by Nathan Cutler over 7 years ago

Release blockers to be merged urgently upon obtaining green rados/powercycle/upgrade:

After these are merged, ask Josh to approve 10.2.6

Actions #85

Updated by Nathan Cutler over 7 years ago

  • Description updated (diff)
Actions #86

Updated by Nathan Cutler over 7 years ago

  • Description updated (diff)
Actions #87

Updated by Nathan Cutler over 7 years ago

  • Description updated (diff)
Actions #88

Updated by Nathan Cutler over 7 years ago

RADOS ON PR#13131 AND PR#13255

teuthology-suite -k distro --priority 101 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph pr-13131 --machine-type smithi

teuthology-suite -k distro --priority 101 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph pr-13255 --machine-type smithi

Re-running four failed jobs (three are btrfs, one is xfs):

The one failure is in the same test, and looks very similar:

2017-02-22T15:26:29.695 INFO:tasks.workunit.client.0.smithi114.stdout:                 api_misc: test/librados/misc.cc:71: Failure
2017-02-22T15:26:29.695 INFO:tasks.workunit.client.0.smithi114.stdout:                 api_misc: Expected: (0) != (rados_connect(cluster)), actual: 0 vs 0
2017-02-22T15:26:29.695 INFO:tasks.workunit.client.0.smithi114.stdout:                 api_misc: [  FAILED  ] LibRadosMiscConnectFailure.ConnectFailure (43 ms)
Actions #89

Updated by Nathan Cutler over 7 years ago

  • Description updated (diff)
Actions #90

Updated by Nathan Cutler over 7 years ago

  • Description updated (diff)
Actions #91

Updated by Nathan Cutler over 7 years ago

  • Description updated (diff)
Actions #92

Updated by Yuri Weinstein over 7 years ago

QE VALIDATION (STARTED 2/23/17)

(Note: "PASSED / FAILED" shown together indicates the test is still in progress)

Re-run command lines and filters are captured in http://pad.ceph.com/p/hammer_v10.2.6_QE_validation_notes

command line:

CEPH_QA_MAIL="ceph-qa@ceph.com"; MACHINE_NAME=smithi; CEPH_BRANCH=jewel; SHA1=d9eaab456ff45ae88e83bd633f0c4efb5902bf07
teuthology-suite -v --ceph-repo https://github.com/ceph/ceph.git --suite-repo https://github.com/ceph/ceph.git -c $CEPH_BRANCH -S $SHA1 -m $MACHINE_NAME -s rados --subset 35/50 -k distro -p 100 -e $CEPH_QA_MAIL --suite-branch jewel --dry-run

teuthology-suite -v -c $CEPH_BRANCH -S $SHA1 -m $MACHINE_NAME -r $RERUN --suite-repo https://github.com/ceph/ceph.git --ceph-repo https://github.com/ceph/ceph.git --suite-branch jewel -p 90 -R fail,dead,running
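The `-R fail,dead,running` option restricts the re-run to jobs whose previous result was one of those statuses. Roughly this selection, over a made-up results list (the awk here is only an illustration, not teuthology's implementation):

```shell
# Hypothetical "<job-id> <status>" results from a previous run
results='101 pass
102 fail
103 dead
104 running
105 pass'

# Keep only the jobs a `-R fail,dead,running` re-run would reschedule
printf '%s\n' "$results" | awk '$2 == "fail" || $2 == "dead" || $2 == "running" { print $1 }'
# prints 102, 103 and 104, one per line
```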

Suite - Runs/Reruns - Notes/Issues

* rados - PASSED - one job #18089, Josh approved, failed job removed https://github.com/ceph/ceph/pull/13705
  run: http://pulpito.ceph.com/yuriw-2017-02-23_17:29:38-rados-jewel-distro-basic-smithi/
  re-run: http://pulpito.ceph.com/yuriw-2017-02-24_16:57:35-rados-jewel---basic-smithi/
* rgw - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:08:36-rgw-jewel-distro-basic-smithi/
  re-run: http://pulpito.ceph.com/yuriw-2017-02-24_00:03:16-rgw-jewel---basic-smithi/
* rbd - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:19:29-rbd-jewel-distro-basic-smithi/
  re-run: http://pulpito.ceph.com/yuriw-2017-02-24_16:59:22-rbd-jewel---basic-smithi/
* fs - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:23:09-fs-jewel-distro-basic-smithi/
  re-run: http://pulpito.ceph.com/yuriw-2017-02-24_17:02:22-fs-jewel---basic-smithi/
* krbd - FAILED - #17221, Ilya approved
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:25:04-krbd-jewel-testing-basic-smithi/
  re-run: http://pulpito.ceph.com/yuriw-2017-02-24_21:11:45-krbd-jewel-testing-basic-smithi/
* kcephfs - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:26:11-kcephfs-jewel-testing-basic-smithi/
* knfs - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:28:17-knfs-jewel-testing-basic-smithi/
* rest - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:29:02-rest-jewel-distro-basic-smithi/
* hadoop - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:29:41-hadoop-jewel-distro-basic-smithi/
  re-run: http://pulpito.ceph.com/yuriw-2017-02-24_17:24:15-hadoop-jewel---basic-smithi/
* samba - FAILED - #19101, John approved
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-24_18:14:42-samba-jewel-distro-basic-smithi/
  re-run: http://pulpito.ceph.com/yuriw-2017-02-24_20:42:46-samba-jewel---basic-smithi/
* ceph-deploy - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:30:27-ceph-deploy-jewel-distro-basic-vps/
* ceph-disk - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:31:57-ceph-disk-jewel-distro-basic-vps/
* upgrade/client-upgrade - FAILED - #19080, Jason approved
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:32:48-upgrade:client-upgrade-jewel-distro-basic-smithi/
  re-run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-24_17:46:07-upgrade:client-upgrade-jewel-distro-basic-vps/
* upgrade/hammer-x (jewel) - PASSED - 2 jobs #18089
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:34:03-upgrade:hammer-x-jewel-distro-basic-vps/
  re-run: http://pulpito.ceph.com/yuriw-2017-02-27_21:22:18-upgrade:hammer-x-jewel---basic-vps/
  re-run: http://pulpito.ceph.com/yuriw-2017-03-01_23:00:18-upgrade:hammer-x-jewel---basic-vps/ (added firefly to shaman)
* upgrade/jewel-x/point-to-point-x - PASSED
  run: http://pulpito.front.sepia.ceph.com:80/yuriw-2017-02-23_21:35:13-upgrade:jewel-x:point-to-point-x-jewel-distro-basic-vps/
* powercycle - PASSED
  run: http://pulpito.front.sepia.ceph.com/yuriw-2017-02-23_21:04:26-powercycle-jewel-testing-basic-smithi/
* ceph-ansible - PASSED
  run: http://pulpito.ceph.com/yuriw-2017-02-28_22:57:14-ceph-ansible-jewel-distro-basic-ovh/
=========
PASSED / FAILED
Actions #93

Updated by Yuri Weinstein about 7 years ago

  • Description updated (diff)
Actions #94

Updated by Yuri Weinstein about 7 years ago

  • Description updated (diff)
Actions #95

Updated by Nathan Cutler about 7 years ago

  • Status changed from In Progress to Resolved
Actions #96

Updated by Nathan Cutler about 7 years ago

  • Release set to jewel
Actions

Also available in: Atom PDF