Tasks #11644

firefly v0.80.11

Added by Nathan Cutler over 4 years ago. Updated about 4 years ago.

Status: Resolved
Priority: Urgent
Assignee:
Target version:
% Done: 0%
Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

Workflow

  • Preparing the release IN PROGRESS
  • Cutting the release
    • Nathan asks Sage if a point release should be published OK
    • Nathan gets approval from all leads
      • Yehuda, rgw: OK
      • Gregory, CephFS: OK
      • Josh, RBD: OK
      • Sam, rados: OK
    • Sage writes and commits the release notes IN PROGRESS
    • Nathan informs Yuri that the branch is ready for testing DONE
    • Yuri runs additional integration tests DONE
    • If Yuri discovers new bugs of Critical severity, the release goes back to being prepared; it was not ready after all
    • Yuri informs Alfredo that the branch is ready for release DONE
    • Alfredo creates the packages and sets the release tag IN PROGRESS

Release information

git --no-pager log --format='%H %s' --graph tags/v0.80.10..ceph/firefly | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

teuthology run commit:d135e397fa182cebde4d96f52c8eb3075e51b74c (firefly-backports august 2015)

git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

rados

./virtualenv/bin/teuthology-suite --priority 1000 --subset 5/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports

Re-running dead and failed jobs with:

./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
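The re-run commands in this ticket pass `--filter="$filter"` with the variable built beforehand from the paddles API; a minimal self-contained sketch of that joining idiom, with two hypothetical job descriptions hard-coded in place of the curl | jq output:

```shell
# Join job descriptions into the comma-separated value teuthology-suite
# --filter expects. Real runs fetch the descriptions from paddles with
# curl | jq; the two descriptions below are made up for illustration.
descriptions='rados/thrash/one.yaml
rados/thrash/two.yaml'
filter=$(echo "$descriptions" | while read d ; do printf '%s,' "$d" ; done | sed -e 's/,$//')
echo "$filter"
# rados/thrash/one.yaml,rados/thrash/two.yaml
```

(`printf` is used instead of the ticket's `echo -n` for portability; the result is the same.)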

Reset integration branch, omitting PR#5360. Re-running dead and failed jobs:

./virtualenv/bin/teuthology-suite --priority 101 --subset 5/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter"

Re-run again (really omitting PR#5360 this time):

Re-run the last failed job on the assumption that missing ansible-playbook was "environmental noise":

rgw

./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18

fs

./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18

Re-running dead and failed jobs with:

./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18 --filter="$filter"

rbd

./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18

Re-running 1 failed and 1 dead job:

./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18 --filter="$filter"

powercycle

./virtualenv/bin/teuthology-suite -l2 -v -c firefly-backports -k testing -m plana,burnupi,mira -s powercycle -p 1000 --email ncutler@suse.cz

Re-run with the same command:

upgrade

./virtualenv/bin/teuthology-suite --machine-type plana,burnupi,mira --suite upgrade/firefly --filter=ubuntu_14.04 --suite-branch firefly --ceph firefly-backports

Re-running the 10 dead jobs with:

./virtualenv/bin/teuthology-suite --machine-type plana,burnupi,mira --suite upgrade/firefly --suite-branch firefly --ceph firefly-backports --filter="$filter"

Also running on CentOS 6 to verify the sanity of the RPM packages:

./virtualenv/bin/teuthology-suite --machine-type plana,burnupi,mira --suite upgrade/firefly --filter=centos_6 --suite-branch firefly --ceph firefly-backports --email ncutler@suse.cz

Re-running on CentOS with -m vps:

Re-running the four failed jobs on CentOS with -m vps:

ceph-deploy

./virtualenv/bin/teuthology-suite --machine-type vps --suite ceph-deploy --suite-branch firefly --ceph firefly-backports --email ncutler@suse.cz

teuthology run commit:b2aaddd3a06ac13c46df659e1f2b3119f5675802 (firefly-backports july 2015)

git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

rados

./virtualenv/bin/teuthology-suite --priority 1000 --subset 1/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports

rgw

run=loic-2015-07-15_14:21:51-rgw-firefly-backports---basic-multi
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 1000 --filter="$filter" --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
run=loic-2015-07-09_21:11:18-rgw-firefly-backports---basic-multi
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 1000 --filter="$filter" --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
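A self-contained sketch of the paddles query used above, with a canned JSON response standing in for curl (assumes jq is installed; `-r` is added here for clarity, whereas the original omits it and relies on `eval` to strip the surrounding quotes):

```shell
# Select the descriptions of dead/failed jobs from a (made-up) paddles
# run listing, the first half of the $filter-building pipeline.
json='{"jobs":[
  {"status":"pass","description":"rgw/ok.yaml"},
  {"status":"fail","description":"rgw/bad.yaml"},
  {"status":"dead","description":"rgw/hung.yaml"}]}'
descs=$(echo "$json" | jq -r '.jobs[] | select(.status == "dead" or .status == "fail") | .description')
echo "$descs"
# rgw/bad.yaml
# rgw/hung.yaml
```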

fs

run="loic-2015-07-16_13:13:38-fs-firefly-backports-testing-basic-multi"
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "dead") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports

rbd

run=loic-2015-07-09_16:57:58-rbd-firefly-backports---basic-multi
eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 1000 --filter="$filter" --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
  • red http://pulpito.ceph.com/loic-2015-07-09_16:57:58-rbd-firefly-backports---basic-multi
    • environmental noise adjust-ulimits ceph-coverage command fails in two different ways
      2015-07-14T02:59:32.995 INFO:tasks.qemu:starting qemu...
      2015-07-14T02:59:32.995 INFO:teuthology.orchestra.run.mira077:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term qemu-system-x86_64 -enable-kvm -nographic -m 4096 -drive file=/home/ubuntu/cephtest/qemu/base.client.0.qcow2,format=qcow2,if=virtio -cdrom /home/ubuntu/cephtest/qemu/client.0.iso -drive file=rbd:rbd/client.0.0:id=0,format=raw,if=virtio,cache=none'
      2015-07-14T02:59:32.997 DEBUG:teuthology.run_tasks:Unwinding manager qemu
      2015-07-14T02:59:32.998 INFO:tasks.qemu:waiting for qemu tests to finish...
      2015-07-14T02:59:33.096 INFO:tasks.qemu.client.0.mira077.stderr:Traceback (most recent call last):
      2015-07-14T02:59:33.096 INFO:tasks.qemu.client.0.mira077.stderr:  File "/usr/bin/daemon-helper", line 66, in <module>
      2015-07-14T02:59:33.096 INFO:tasks.qemu.client.0.mira077.stderr:    preexec_fn=os.setsid,
      

powercycle

./virtualenv/bin/teuthology-suite -l2 -v -c firefly-backports -k testing -m plana,burnupi,mira -s powercycle -p 1000 --email ncutler@suse.cz

upgrade

teuthology-openstack --verbose --key-name loic --suite upgrade/firefly --filter=ubuntu_14.04 --suite-branch firefly --ceph firefly-backports

teuthology run commit:73cad63e89ebeb9de47c7f2a2187deff8c39d590 (firefly-backports may 2015)

rados

./virtualenv/bin/teuthology-suite --priority 1000 --subset 1/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports

History

#1 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

s/Loic/Nathan/g in Workflow section

#2 Updated by Nathan Cutler over 4 years ago

  • Priority changed from Normal to Urgent

#3 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)
  • Priority changed from Urgent to Normal

#4 Updated by Loic Dachary over 4 years ago

  • Priority changed from Normal to Urgent

#5 Updated by Loic Dachary over 4 years ago

  • Release set to 8

#6 Updated by Loic Dachary over 4 years ago

  • Target version changed from 474 to v0.80.11

#7 Updated by Nathan Cutler over 4 years ago

  • Status changed from New to In Progress

#8 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#9 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#10 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#11 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#12 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#13 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#14 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#15 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#16 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#17 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#18 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#19 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#20 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#21 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#22 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#23 Updated by Loic Dachary over 4 years ago

  • Subject changed from firefly 0.80.11 to firefly v0.80.11
  • Description updated (diff)

#24 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#25 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#26 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#27 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#28 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#29 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#30 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#31 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#32 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#33 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#34 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#35 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#36 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#37 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#38 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#39 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#40 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#41 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#42 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#43 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#44 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#45 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#46 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#47 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#48 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#49 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#50 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#51 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#52 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#53 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#54 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#55 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#56 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#57 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#58 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#59 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#60 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#61 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#62 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#63 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#64 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#65 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

rgw is green

#66 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#67 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#68 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#69 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#70 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#71 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

fs is green

#72 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#73 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#74 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#75 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#76 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#77 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#78 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#79 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#80 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#81 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#82 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#83 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#84 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#85 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#86 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#87 Updated by Loic Dachary over 4 years ago

  • Description updated (diff)

#88 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#89 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#90 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#91 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#92 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#93 Updated by Nathan Cutler over 4 years ago

git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#94 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#95 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#96 Updated by Nathan Cutler over 4 years ago

  • Description updated (diff)

#97 Updated by Loic Dachary over 4 years ago

$ git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#98 Updated by Nathan Cutler over 4 years ago

git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#99 Updated by Nathan Cutler over 4 years ago

git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#100 Updated by Nathan Cutler about 4 years ago

#12942 needs to go into 0.80.11, so we will need another integration round

#101 Updated by Nathan Cutler about 4 years ago

git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#102 Updated by Loic Dachary about 4 years ago

rados

./virtualenv/bin/teuthology-suite --priority 1000 --subset 13/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
  • red http://pulpito.ceph.com/loic-2015-09-29_16:59:40-rados-firefly-backports---basic-multi/
    • known bug http://tracker.ceph.com/issues/13300
      • LibRadosTwoPoolsPP tests returning ENOTEMPTY (Directory not empty)
    • known bug: ceph.newdream.net no longer exists http://tracker.ceph.com/issues/13357
      • In, e.g., rados/thrash/{clusters/fixed-2.yaml fs/xfs.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/admin_socket_objecter_requests.yaml}
      • CommandFailedError: Command failed on burnupi64 with status 4: "wget -q -O /home/ubuntu/cephtest/admin_socket_client.0/objecter_requests -- 'http://ceph.newdream.net/git/?p=ceph.git;a=blob_plain;f=src/test/admin_socket/objecter_requests;hb=firefly-backports' && chmod u=rx -- /home/ubuntu/cephtest/admin_socket_client.0/objecter_requests"
      • Fixed by loicd in ceph-qa-suite wip branch, successfully tested with: ./virtualenv/bin/teuthology-suite --priority 101 --subset 13/18 --suite rados --suite-branch wip-git-ceph-com-firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="rados/thrash/{clusters/fixed-2.yaml fs/xfs.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/admin_socket_objecter_requests.yaml}" http://pulpito.ceph.com/smithfarm-2015-10-05_02:57:39-rados-firefly-backports---basic-multi/

Rescheduling with -s rados/verify --filter valgrind to see if valgrind failures are still there in the latest firefly branch:

./virtualenv/bin/teuthology-suite --priority 1000 --suite rados/verify --filter valgrind --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports

#103 Updated by Loic Dachary about 4 years ago

rgw

./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 5/18
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:======================================================================
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:ERROR: testContainerSerializedInfo (test.functional.tests.TestAccount)
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:----------------------------------------------------------------------
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:Traceback (most recent call last):
2015-09-30T14:32:47.552 INFO:teuthology.orchestra.run.burnupi51.stderr:  File "/home/ubuntu/cephtest/swift/test/functional/tests.py", line 233, in testContainerSerializedInfo
2015-09-30T14:32:47.553 INFO:teuthology.orchestra.run.burnupi51.stderr:    self.assertEquals(headers['content-type'],
2015-09-30T14:32:47.553 INFO:teuthology.orchestra.run.burnupi51.stderr:KeyError: 'content-type'
2015-09-30T14:50:36.368 INFO:tasks.ceph:Checking for errors in any valgrind logs...
2015-09-30T14:50:36.369 INFO:teuthology.orchestra.run.plana93:Running: "sudo zgrep '<kind>' /var/log/ceph/valgrind/* /dev/null | sort | uniq" 
2015-09-30T14:50:36.435 INFO:teuthology.orchestra.run.plana31:Running: "sudo zgrep '<kind>' /var/log/ceph/valgrind/* /dev/null | sort | uniq" 
2015-09-30T14:50:36.482 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/client.0.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.483 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/client.0.log.gz
2015-09-30T14:50:36.483 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mon.a.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.483 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/mon.a.log.gz
2015-09-30T14:50:36.483 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mon.c.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.483 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/mon.c.log.gz
2015-09-30T14:50:36.483 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.0.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.483 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.0.log.gz
2015-09-30T14:50:36.484 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.1.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.484 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.1.log.gz
2015-09-30T14:50:36.484 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.2.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.484 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.2.log.gz
2015-09-30T14:50:36.485 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mds.a.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.485 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mds.a.log.gz kind   <kind>Leak_PossiblyLost</kind>
2015-09-30T14:50:36.485 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/mon.b.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.485 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/mon.b.log.gz
2015-09-30T14:50:36.485 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.3.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.486 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.3.log.gz
2015-09-30T14:50:36.486 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.4.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.486 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.4.log.gz
2015-09-30T14:50:36.486 DEBUG:tasks.ceph:file /var/log/ceph/valgrind/osd.5.log.gz kind   <kind>Leak_DefinitelyLost</kind>
2015-09-30T14:50:36.486 ERROR:tasks.ceph:saw valgrind issue   <kind>Leak_DefinitelyLost</kind> in /var/log/ceph/valgrind/osd.5.log.gz
2015-09-30T14:50:36.486 ERROR:teuthology.run_tasks:Manager failed: ceph

#104 Updated by Loic Dachary about 4 years ago

fs

./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 17/18

#105 Updated by Loic Dachary about 4 years ago

rbd

./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 9/18

#106 Updated by Loic Dachary about 4 years ago

powercycle

./virtualenv/bin/teuthology-suite -l2 -v -c firefly-backports -k testing -m plana,burnupi,mira -s powercycle -p 1000 --email ncutler@suse.cz

#107 Updated by Loic Dachary about 4 years ago

upgrade

./virtualenv/bin/teuthology-suite --machine-type vps --suite upgrade/firefly --filter=ubuntu_14.04,centos_6 --suite-branch firefly --ceph firefly-backports
  • red http://pulpito.ceph.com/loic-2015-09-27_01:05:46-upgrade:firefly-firefly-backports---basic-vps
    • environmental problem Downloading/unpacking pip from https://pypi.python.org/packages/source/p/pip/pip-7.1.2.tar.gz#md5=3823d2343d9f3aaab21cf9c917710196 // Error <urlopen error The read operation timed out>
    • unknown problem Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph status --format=json-pretty' 2015-09-28T13:41:40.652 INFO:tasks.thrashosds.ceph_manager:no progress seen, keeping timeout for now // AssertionError: failed to recover before timeout expired
    • environmental problem (yum) Connecting to ceph.com|173.236.248.54|:80... failed: Connection timed out. // Connecting to ceph.com|2607:f298:6050:51f3:f816:3eff:fe62:31d3|:80... failed: Network is unreachable. 2015-09-28T08:57:31.474 INFO:tasks.workunit:Stopping suites/blogbench.sh on client.0...
    • environmental problem
      Running: 'sudo yum install ceph-debuginfo -y'
      Loaded plugins: fastestmirror, priorities, security
      Setting up Install Process
      Determining fastest mirrors
      Error: Cannot find a valid baseurl for repo: extras
      Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=extras error was
      14: PYCURL ERROR 56 - "Failure when receiving data from the peer" 
      
    • environmental problem
      Could not fetch URL https://pypi.python.org/simple/: There was a problem confirming the ssl certificate: <urlopen error _ssl.c:477: The handshake operation timed out>
      Will skip URL https://pypi.python.org/simple/ when looking for download links for pip in ./virtualenv/lib/python2.6/site-packages
      Cannot fetch index base URL https://pypi.python.org/simple/
      Could not find any downloads that satisfy the requirement pip in ./virtualenv/lib/python2.6/site-packages
      2015-09-28T10:40:48.635 Downloading/unpacking pip
      Cleaning up...
      No distributions at all found for pip in ./virtualenv/lib/python2.6/site-packages
      

#108 Updated by Loic Dachary about 4 years ago

ceph-deploy

./virtualenv/bin/teuthology-suite --machine-type vps --suite ceph-deploy --suite-branch firefly --ceph firefly-backports --filter=ubuntu_14,centos_6 --email ncutler@suse.cz

#109 Updated by Nathan Cutler about 4 years ago

git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#110 Updated by Nathan Cutler about 4 years ago

rados

Latest firefly-backports contains a fix for the valgrind issue. Verifying with:

./virtualenv/bin/teuthology-suite --priority 1000 --suite rados/verify --filter valgrind --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports

Also scheduled a full rados run:

./virtualenv/bin/teuthology-suite --priority 1000 --subset 7/18 --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports
filter='rados/verify/{1thrash/default.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/mon_recovery.yaml validater/valgrind.yaml}'
teuthology-suite --priority 101 --filter="$filter" --suite rados --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports

#111 Updated by Nathan Cutler about 4 years ago

rgw

./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 7/18
run=smithfarm-2015-10-12_02:33:22-rgw-firefly-backports---basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite rgw --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter" 

#112 Updated by Nathan Cutler about 4 years ago

fs

./virtualenv/bin/teuthology-suite --priority 1000 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 7/18
run=smithfarm-2015-10-12_02:34:59-fs-firefly-backports-testing-basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter" 
run=loic-2015-10-20_21:19:19-fs-firefly-backports-testing-basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite fs -k testing --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter" 

#113 Updated by Nathan Cutler about 4 years ago

rbd

./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --subset 7/18
run=smithfarm-2015-10-12_02:36:15-rbd-firefly-backports---basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter" 

#115 Updated by Nathan Cutler about 4 years ago

upgrade

./virtualenv/bin/teuthology-suite --machine-type vps --suite upgrade/firefly --filter=ubuntu_14.04,centos_6 --suite-branch firefly --ceph firefly-backports
  • red: http://pulpito.ceph.com/smithfarm-2015-10-12_02:57:09-upgrade:firefly-firefly-backports---basic-vps/
    • 13 failures, e.g.:
      • environmental problem: GPG key retrieval failed: [Errno 14] PYCURL ERROR 7 - "Failed to connect to 2607:f298:6050:51f3:f816:3eff:fe62:31d3: Network is unreachable"
      • monthrash failure in description: upgrade:firefly/newer/{4-finish-upgrade.yaml 0-cluster/start.yaml 1-install/v0.80.6.yaml 2-workload/{blogbench.yaml rbd.yaml s3tests.yaml testrados.yaml} 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 5-final/{monthrash.yaml osdthrash.yaml rbd.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
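The --filter argument used here (ubuntu_14.04,centos_6) is a comma-separated keyword list: teuthology schedules a job when any keyword occurs in its description. A minimal shell sketch of that matching rule (an illustration of the documented behavior, not teuthology's actual code):

```shell
# Sketch of teuthology-suite --filter matching: select a job when any
# comma-separated keyword is a substring of its description.
matches_filter() {
    desc=$1
    old_ifs=$IFS
    IFS=,
    set -- $2            # split the filter string on commas
    IFS=$old_ifs
    for keyword in "$@"; do
        case $desc in *"$keyword"*) return 0 ;; esac
    done
    return 1
}

desc='upgrade:firefly/newer/distros/ubuntu_14.04.yaml'
if matches_filter "$desc" 'ubuntu_14.04,centos_6'; then
    echo 'job selected'
fi
# → job selected
```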

Rescheduled failed jobs:

2015-10-12T07:51:53.544 INFO:tasks.ceph:Starting mon daemons...
2015-10-12T07:51:53.545 INFO:tasks.ceph.mon.a:Restarting daemon
2015-10-12T07:51:53.545 INFO:teuthology.orchestra.run.vpm103:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'
2015-10-12T07:51:53.624 INFO:tasks.ceph.mon.a:Started
2015-10-12T07:51:53.625 INFO:tasks.ceph.mon.b:Restarting daemon
2015-10-12T07:51:53.625 INFO:teuthology.orchestra.run.vpm056:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i b'
2015-10-12T07:51:53.656 INFO:tasks.ceph.mon.b:Started
2015-10-12T07:51:53.656 INFO:tasks.ceph.mon.c:Restarting daemon
2015-10-12T07:51:53.657 INFO:teuthology.orchestra.run.vpm056:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i c'
2015-10-12T07:51:53.685 INFO:tasks.ceph.mon.c:Started
2015-10-12T07:51:53.685 INFO:tasks.ceph:Starting osd daemons...
2015-10-12T07:51:53.686 INFO:tasks.ceph.osd.0:Restarting daemon
2015-10-12T07:51:53.686 INFO:teuthology.orchestra.run.vpm103:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 0'
2015-10-12T07:51:53.719 INFO:tasks.ceph.osd.0:Started
2015-10-12T07:51:53.719 INFO:tasks.ceph.osd.1:Restarting daemon
2015-10-12T07:51:53.720 INFO:teuthology.orchestra.run.vpm103:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 1'
2015-10-12T07:51:53.757 INFO:tasks.ceph.osd.1:Started
2015-10-12T07:51:53.757 INFO:tasks.ceph.osd.2:Restarting daemon
2015-10-12T07:51:53.757 INFO:teuthology.orchestra.run.vpm103:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 2'
2015-10-12T07:51:53.786 INFO:tasks.ceph.osd.2:Started
2015-10-12T07:51:53.786 INFO:tasks.ceph.osd.3:Restarting daemon
2015-10-12T07:51:53.786 INFO:teuthology.orchestra.run.vpm056:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 3'
2015-10-12T07:51:53.806 INFO:tasks.ceph.osd.0.vpm103.stdout:starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
2015-10-12T07:51:53.816 INFO:tasks.ceph.osd.3:Started
2015-10-12T07:51:53.816 INFO:tasks.ceph.osd.4:Restarting daemon
2015-10-12T07:51:53.817 INFO:teuthology.orchestra.run.vpm056:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 4'
2015-10-12T07:51:53.823 INFO:tasks.ceph.osd.0.vpm103.stderr:2015-10-12 10:51:53.806695 7fd7efe8a7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-10-12T07:51:53.846 INFO:tasks.ceph.osd.4:Started
2015-10-12T07:51:53.846 INFO:tasks.ceph.osd.5:Restarting daemon
2015-10-12T07:51:53.847 INFO:teuthology.orchestra.run.vpm056:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 5'
2015-10-12T07:51:53.875 INFO:tasks.ceph.osd.5:Started
2015-10-12T07:51:53.875 INFO:tasks.ceph:Starting mds daemons...
2015-10-12T07:51:53.876 INFO:tasks.ceph.mds.a:Restarting daemon
2015-10-12T07:51:53.876 INFO:teuthology.orchestra.run.vpm103:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f -i a'
2015-10-12T07:51:53.910 INFO:tasks.ceph.mds.a:Started
2015-10-12T07:51:53.910 INFO:teuthology.orchestra.run.vpm103:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph mds set_max_mds 1'
2015-10-12T07:51:53.930 INFO:tasks.ceph.osd.1.vpm103.stdout:starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
2015-10-12T07:51:53.941 INFO:tasks.ceph.osd.3.vpm056.stdout:starting osd.3 at :/0 osd_data /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal
2015-10-12T07:51:53.954 INFO:tasks.ceph.osd.1.vpm103.stderr:2015-10-12 10:51:53.938253 7fc118f8d7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-10-12T07:51:53.963 INFO:tasks.ceph.osd.3.vpm056.stderr:2015-10-12 10:51:52.653244 7f0f1ed657a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-10-12T07:51:53.986 INFO:tasks.ceph.osd.2.vpm103.stdout:starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
2015-10-12T07:51:54.022 INFO:tasks.ceph.osd.2.vpm103.stderr:2015-10-12 10:51:54.005539 7fc5b894a7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-10-12T07:51:54.054 INFO:tasks.ceph.osd.4.vpm056.stdout:starting osd.4 at :/0 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
2015-10-12T07:51:54.069 INFO:tasks.ceph.osd.4.vpm056.stderr:2015-10-12 10:51:52.759666 7f6a3a1927a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-10-12T07:51:54.086 INFO:tasks.ceph.osd.5.vpm056.stdout:starting osd.5 at :/0 osd_data /var/lib/ceph/osd/ceph-5 /var/lib/ceph/osd/ceph-5/journal
2015-10-12T07:51:54.099 INFO:tasks.ceph.osd.5.vpm056.stderr:2015-10-12 10:51:52.789645 7f0a3f67e7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-10-12T07:51:54.116 INFO:tasks.ceph.mds.a.vpm103.stdout:starting mds.a at :/0
2015-10-12T07:52:00.135 INFO:tasks.ceph.mds.a.vpm103.stderr:2015-10-12 10:52:00.118600 7f1f833087a0 -1 mds.-1.-1 *** no OSDs are up as of epoch 1, waiting
2015-10-12T07:52:00.865 INFO:teuthology.orchestra.run.vpm103.stderr:max_mds = 1
2015-10-12T07:52:00.888 INFO:tasks.ceph:Waiting until ceph is healthy...
2015-10-12T07:52:00.889 INFO:teuthology.orchestra.run.vpm103:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd dump --format=json'
2015-10-12T07:52:01.426 DEBUG:teuthology.misc:5 of 6 OSDs are up
2015-10-12T07:52:02.426 INFO:teuthology.orchestra.run.vpm103:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd dump --format=json'
2015-10-12T07:52:03.693 DEBUG:teuthology.misc:5 of 6 OSDs are up
2015-10-12T07:52:04.693 INFO:teuthology.orchestra.run.vpm103:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd dump --format=json'
2015-10-12T07:52:05.171 DEBUG:teuthology.misc:6 of 6 OSDs are up
2015-10-12T07:52:05.171 INFO:teuthology.orchestra.run.vpm103:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
2015-10-12T07:52:05.469 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN 72 pgs peering; 72 pgs stuck inactive; 72 pgs stuck unclean; clock skew detected on mon.a
2015-10-12T07:52:10.136 INFO:tasks.ceph.mds.a.vpm103.stderr:2015-10-12 10:52:10.118924 7f1f833087a0 -1 mds.-1.-1 *** no OSDs are up as of epoch 1, waiting
2015-10-12T07:52:12.469 INFO:teuthology.orchestra.run.vpm103:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
2015-10-12T07:52:12.785 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN clock skew detected on mon.a
[the ceph health poll repeats every ~7 seconds, each time returning HEALTH_WARN clock skew detected on mon.a; many, many more of these]

Rescheduled the one remaining failed test:

#116 Updated by Nathan Cutler about 4 years ago

ceph-deploy

./virtualenv/bin/teuthology-suite --machine-type vps --suite ceph-deploy --suite-branch firefly --ceph firefly-backports --filter=ubuntu_14,centos_6 --email ncutler@suse.cz

#117 Updated by Loic Dachary about 4 years ago

<ircolle> loicd - when do you anticipate releasing 0.80.11?
<loicd> ircolle: smithfarm has a better view but it's pretty much whenever we decide it's a good idea to release it
<sage> ircolle: i haven't been paying attention.. whenever the backport team thinks it's ready? 
<sage> yeah
<loicd> sage: ack
<loicd> ircolle: smithfarm is running the latest tests and it's in good shape
<ircolle> well, maybe after alfredodeza catches his breath after 9.1.0?
<sage> yeah
<loicd> cool

#118 Updated by Loic Dachary about 4 years ago

  • Description updated (diff)

#119 Updated by Loic Dachary about 4 years ago

Add https://github.com/ceph/ceph/pull/6328 for qemu-iotest

git --no-pager log --format='%H %s' --graph ceph/firefly..ceph/firefly-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#120 Updated by Loic Dachary about 4 years ago

rbd

run=loic-2015-10-21_01:20:10-rbd-firefly-backports---basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite rbd --suite-branch firefly --machine-type plana,burnupi,mira --distro ubuntu --email ncutler@suse.cz --ceph firefly-backports --filter="$filter" 

#121 Updated by Loic Dachary about 4 years ago

  • Description updated (diff)

#122 Updated by Loic Dachary about 4 years ago

  • Description updated (diff)

#123 Updated by Loic Dachary about 4 years ago

  • Description updated (diff)

#124 Updated by Yuri Weinstein about 4 years ago

QE Validation

Suite runs with notes/issues:

rados: PASSED
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-23_22:30:01-rados-firefly-distro-basic-openstack/
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-26_21:53:55-rados-firefly-distro-basic-openstack/
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-27_16:22:04-rados-firefly-distro-basic-openstack/

rbd: FAILED #11104
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-22_23:00:01-rbd-firefly-distro-basic-openstack/
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-26_21:35:38-rbd-firefly-distro-basic-openstack/ (failed in ovh because xfstests can take 6+ hrs)
  • http://pulpito.ceph.com/teuthology-2015-10-27_09:38:04-rbd-firefly-distro-basic-multi/ (rerun on bare metal)
  • http://pulpito.ceph.com/teuthology-2015-11-12_17:03:29-rbd-firefly-distro-basic-vps/ (rerun on CentOS with wip-13622-fix-wusui, the fix for #11104)
  • http://pulpito.ceph.com/teuthology-2015-11-12_20:15:04-rbd-firefly-distro-basic-multi/ (rerun on CentOS with wip-13622-fix-wusui in sepia; killed, see jdillaman's comment in https://github.com/ceph/teuthology/pull/704)

rgw: FAILED #11104
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-23_23:02:01-rgw-firefly-distro-basic-openstack/
  • http://pulpito.ceph.com/teuthology-2015-10-27_09:52:18-rgw-firefly-distro-basic-multi/
  • http://pulpito.ceph.redhat.com/teuthology-2015-11-12_20:10:28-rgw-firefly-distro-basic-magna/ (rerun in octo/rhel with wip-13622-fix-wusui, the fix for #11104; octo node setup suspected for the failures)
  • http://pulpito.ceph.com/teuthology-2015-11-12_21:24:06-rgw-firefly-distro-basic-multi/ (rerun in sepia/CentOS with wip-13622-fix-wusui, the fix for #11104)

fs: FAILED #11104, #13630
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-23_23:04:01-fs-firefly-distro-basic-openstack/
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-26_21:43:12-fs-firefly-distro-basic-openstack/ (ran 6+ h)
  • http://pulpito.ceph.com/teuthology-2015-10-27_10:24:08-fs-firefly-distro-basic-multi/ (rerun in sepia)
  • http://pulpito.ceph.redhat.com/teuthology-2015-11-13_00:03:41-fs-firefly-distro-basic-magna/ (rerun in octo/rhel with wip-13622-fix-wusui, the fix for #11104)
  • http://pulpito.ceph.com/teuthology-2015-11-12_21:16:20-fs-firefly-distro-basic-multi/ (rerun in sepia/CentOS with wip-13622-fix-wusui, the fix for #11104)

krbd: PASSED #13631
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-23_23:06:02-krbd-firefly-testing-basic-openstack/
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-26_21:38:44-krbd-firefly-testing-basic-openstack/ (ran 6+ h)
  • http://pulpito.ceph.com/teuthology-2015-10-27_10:35:10-krbd-firefly-testing-basic-multi/ (rerun in sepia)
  • http://pulpito.ceph.com/teuthology-2015-10-29_14:34:51-krbd-firefly-testing-basic-multi/
  • http://pulpito.ceph.com/teuthology-2015-10-30_08:35:27-krbd-firefly-testing-basic-multi/

kcephfs: PASSED #13631, #13630; note new #13658
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-22_23:08:02-kcephfs-firefly-testing-basic-openstack/
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-26_21:27:44-kcephfs-firefly-testing-basic-openstack/
  • http://pulpito.ceph.com/teuthology-2015-10-27_10:41:29-kcephfs-firefly-testing-basic-multi/
  • http://pulpito.ceph.com/teuthology-2015-10-30_10:48:34-kcephfs-firefly-testing-basic-multi/

samba: FAILED (same as in firefly v0.80.10: #11090, #6613)
  • http://pulpito.ceph.com/teuthology-2015-10-27_10:44:28-samba-firefly---basic-vps/

hadoop: PASSED
  • http://pulpito.ceph.com/teuthology-2015-10-14_09:45:17-hadoop-hammer---basic-multi/
  • http://pulpito.ceph.com/teuthology-2015-11-02_14:52:01-hadoop-hammer---basic-multi/

rest: PASSED
  • http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-31_18:16:02-rest-hammer---basic-openstack/

ceph-deploy(ubuntu_):
  • http://pulpito.ceph.com/teuthology-2015-10-26_15:05:25-ceph-deploy-firefly-distro-basic-vps/
  • http://pulpito.ceph.com/teuthology-2015-10-28_09:25:44-ceph-deploy-firefly-distro-basic-vps/

ceph-deploy(distros): FAILED #13367
  • http://pulpito.ceph.com/teuthology-2015-10-27_13:25:44-ceph-deploy-firefly-distro-basic-vps/

upgrade/dumpling-x (to firefly)(distros): PASSED
  • http://pulpito.ceph.com/teuthology-2015-10-25_19:13:08-upgrade:dumpling-x-firefly-distro-basic-vps/

upgrade/firefly(distros): PASSED
  • http://pulpito.ceph.com/teuthology-2015-10-23_17:00:01-upgrade:firefly-firefly-distro-basic-vps/
  • http://pulpito.ceph.com/teuthology-2015-10-26_13:58:26-upgrade:firefly-firefly-distro-basic-vps/
  • http://pulpito.ceph.com/teuthology-2015-10-27_13:30:40-upgrade:firefly-firefly-distro-basic-vps/

upgrade/dumpling-firefly-x (to giant): #13300 DEPRECATED AS UNSUPPORTED UPGRADE PATH
  • http://pulpito.ceph.com/teuthology-2015-10-24_18:15:08-upgrade:dumpling-firefly-x-giant-distro-basic-vps/

upgrade/dumpling-firefly-x (to hammer)(distros): this suite runs out of memory on VPS and is unrunnable

upgrade/firefly-x (to giant)(distros): #13300 DEPRECATED AS UNSUPPORTED UPGRADE PATH
  • http://pulpito.ceph.com/teuthology-2015-10-24_18:18:02-upgrade:firefly-x-giant-distro-basic-vps/

upgrade/firefly-x (to hammer)(distros): PASSED, almost (missing package for v0.80.4) #13788, #11104, #13632
  • http://pulpito.ceph.com/teuthology-2015-10-23_17:18:01-upgrade:firefly-x-hammer-distro-basic-vps/
  • http://pulpito.ceph.com/teuthology-2015-10-26_14:05:59-upgrade:firefly-x-hammer-distro-basic-vps/ (filtered to ubuntu because of #11104)
  • http://pulpito.ceph.com/teuthology-2015-11-12_17:21:52-upgrade:firefly-x-hammer-distro-basic-vps/ (rerun on CentOS with wip-13622-fix-wusui, the fix for #11104)

powercycle: FAILED #11104, #13631; later PASSED
  • http://pulpito.ceph.com/teuthology-2015-10-23_16:17:52-powercycle-firefly-testing-basic-multi/
  • http://pulpito.ceph.com/teuthology-2015-10-25_11:01:33-powercycle-firefly-testing-basic-multi/ (CentOS had to be filtered out because of #11104)
  • http://pulpito.ceph.com/teuthology-2015-10-26_09:39:03-powercycle-firefly-testing-basic-multi/
  • http://pulpito.ceph.com/teuthology-2015-10-27_10:53:08-powercycle-firefly-testing-basic-multi/
  • http://pulpito.ceph.com/teuthology-2015-11-12_17:15:33-powercycle-firefly-testing-basic-multi/ (PASSED: full rerun on wip-13622-fix-wusui, the fix for #11104)
  • http://pulpito.ceph.com/teuthology-2015-11-14_16:35:20-powercycle-firefly-testing-basic-multi/
  • http://pulpito.ceph.com/teuthology-2015-11-15_15:14:35-powercycle-firefly-testing-basic-multi/

Email from Tamil and Yuri on status of this release:

From: Yuri Weinstein <yweinste@redhat.com>
Date: Mon, Nov 16, 2015 at 8:16 AM
Subject: Re: regd: ceph v0.80.11 release
To: Tamil Muthamizhan <tmuthami@redhat.com>
Cc: Ian Colle <icolle@redhat.com>, Warren Usui <wusui@redhat.com>, Sage Weil <sweil@redhat.com>

Tamil, agree.

In last several days we refocused more on testing #11104 and related issues.
Just to update:

I ran several smoke tests over weekend on CentOS -

http://pulpito.ceph.com/teuthology-2015-11-14_08:30:20-smoke-firefly-distro-basic-vps/
(with os CentOS)
http://pulpito.ceph.redhat.com/teuthology-2015-11-13_20:02:11-smoke-firefly---basic-magna/
(rhel)
http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-11-14_19:43:44-smoke-firefly-distro-basic-openstack/
(ovh CentOS)

They did not all pass, but we run smoke only on master, so I think it
would be wrong to expect them to pass fully on vps, for example, and it
is probably not worthwhile to make the smoke suite run on firefly
(Sage?). They were run to test the fix for #11104
(wip-13622-fix-wusui).

The rgw and fs tests forced onto bare metal in sepia simply timed out
waiting for CentOS nodes.

However, upgrade/firefly-x (to hammer) passed on vps (one minor issue)
and powercycle passed (two jobs still waiting for nodes), including on
CentOS nodes.

I hope it gives enough data points for accepting v0.80.11 for
downstream release.

?

Thx
YuriW

On Fri, Nov 13, 2015 at 3:53 PM, Tamil Muthamizhan <tmuthami@redhat.com> wrote:
> Adding Sage in this email thread.
>
> Thanks Yuri, I looked into the logs and we have tests passing for rados/rbd/rgw - good enough for regression in firefly on centos 7.0 and rhel 7.1
>
>
> ----- Original Message -----
> From: "Yuri Weinstein" <yweinste@redhat.com>
> To: "Tamil Muthamizhan" <tmuthami@redhat.com>
> Cc: "Ian Colle" <icolle@redhat.com>, "Warren Usui" <wusui@redhat.com>, "Ken Dreyer" <kdreyer@redhat.com>, "Jason Dillaman" <dillaman@redhat.com>
> Sent: Friday, November 13, 2015 1:32:50 PM
> Subject: Re: regd: ceph v0.80.11 release
>
> Only failed tests were scheduled to rerun (many on CentOS) on
> wip-13622-fix-wusui and in addition two smoke suites are running (on
> CentOS) as requested:
>
> http://pulpito.ceph.redhat.com/teuthology-2015-11-13_15:34:29-smoke-firefly-distro-basic-magna/
> http://pulpito.ceph.com/teuthology-2015-11-13_12:31:06-smoke-firefly-distro-basic-vps/
> (we don't run smoke on vps, so errors may be related to this fact)
>
> Details are in http://tracker.ceph.com/issues/11644
> Here is the gist (focus on runs related to #11104 fix):
>
> rbd - has issues as was mentioned before
>
> rgw - waiting for centos nodes in sepia
>
> fs - waiting for centos nodes in sepia
>
> upgrade/firefly-x (to hammer) - passed, one minor missing package issue
>
> powercycle - full rerun (as CentOS was filtered out before to avoid
> the #11104 issue); running, and a couple of errors look like env noise so
> far, otherwise ok
>
> Thx
> YuriW

#125 Updated by Loic Dachary about 4 years ago

  • Description updated (diff)

#126 Updated by Loic Dachary about 4 years ago

  • Description updated (diff)

#127 Updated by Yuri Weinstein about 4 years ago

  • Description updated (diff)

#128 Updated by Loic Dachary about 4 years ago

  • Status changed from In Progress to Resolved
