Tasks #11153

Updated by Abhishek Lekshmanan about 9 years ago

h3. Workflow 

 * "Preparing the release":http://ceph.com/docs/master/dev/development-workflow/#preparing-a-new-release *IN PROGRESS* 
 * "Cutting the release":http://ceph.com/docs/master/dev/development-workflow/#cutting-a-new-stable-release 
 ** Abhishek gets approval from all leads 
 *** Yehuda, rgw:  
 *** Gregory, CephFS: 
 *** Josh, RBD: 
 *** Sam, rados: 
 ** Sage writes and commits the release notes  
 ** Abhishek informs Yuri that the branch is ready for testing  
 ** Yuri runs additional integration tests 
 ** If Yuri discovers new bugs with severity Critical, the release goes back to being prepared: it was not ready after all 
 ** Yuri informs Alfredo that the branch is ready for release 
 ** Alfredo creates the packages and sets the release tag  

 h3. Release information 

 * branch to build from: giant, commit:??? 
 * version: v0.87.2 
 * type of release: point release 
 * where to publish the release: debian/rpm-$release 

 h3. Synchronized pull requests 

 * -"rgw acl response should start with <?xml version...":http://tracker.ceph.com/issues/10106 https://github.com/ceph/ceph-qa-suite/pull/370 https://github.com/ceph/ceph/pull/4082- 

 h3. Inventory 

 * http://workbench.dachary.org/ceph/ceph-backports/wikis/giant 

 * waiting: 
 * OK: 
 * Not OK: 

 h3. teuthology run commit:538d012a30361d7fb33ee19632103a1ff242dee3 (giant-backports branch, April 2015) 

 Commits added after the commit on which the previous tests were done: 
 <pre> 
 $ git log --no-merges --oneline 02f9cdbf889071ca6fe3811d9b9a92a0b630fa55..ceph/giant-backports 
 5f4e62f mon: MDSMonitor: wait for osdmon to be writable when requesting proposal 
 257bd17 mon: MDSMonitor: have management_command() returning int instead of bool 
 9e9c3c6 osd: Get pgid ancestor from last_map when building past intervals 
 9f1f355 Objecter: failed assert(tick_event==NULL) at osdc/Objecter.cc 
 de4b087 ceph-objectstore-tool: Output only unsupported features when incomatible 
 </pre> 

 It does not seem necessary to run the rbd and rgw suites again. 

 h4. rados 

 <pre> 
 ./virtualenv/bin/teuthology-suite --filter-out btrfs,ext4 --priority 1000 --suite rados --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email abhishek.lekshmanan@gmail.com --owner abhishek.lekshmanan@gmail.com --ceph giant-backports 
 </pre> 
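
 Since the same teuthology-suite options recur throughout this ticket, here is my reading of the recurring flags (annotations are mine, not from the run logs): 

 <pre> 
 # --suite rados                     : ceph-qa-suite collection to schedule 
 # --suite-branch giant              : ceph-qa-suite branch providing the job yamls 
 # --ceph giant-backports            : ceph branch/ref under test 
 # --filter-out btrfs,ext4           : skip jobs whose description matches these 
 # --priority 1000                   : position in the lab scheduling queue 
 # --machine-type plana,burnupi,mira : machine pools the jobs may land on 
 # -k testing (where used)           : kernel branch to install on the test nodes 
 </pre> 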

 * *running* http://pulpito.ceph.com/abhi-2015-04-03_10:47:06-rados-giant-backports---basic-multi/ 
 ** *red* one hung job at  

 h4. fs 

 <pre> 
 ./virtualenv/bin/teuthology-suite -k testing --priority 1000 --suite fs --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email abhishek.lekshmanan@gmail.com --owner abhishek.lekshmanan@gmail.com --ceph giant-backports 
 </pre> 

 * *running* http://pulpito.ceph.com/abhi-2015-04-03_10:40:03-fs-giant-backports-testing-basic-multi/ 
 ** *hung* http://pulpito.ceph.com/abhi-2015-04-03_10:40:03-fs-giant-backports-testing-basic-multi/835540/, though currently the status is shown as running, both teuthworker and logs reveal the job was hung 
 Since this was the only *running* job, it was rescheduled as follows (an annotated sketch of the filter-building pipeline appears below): 

 <pre> 
 eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/?status=running | jq '.[].description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//') 
 ./virtualenv/bin/teuthology-suite -k testing --filter "$filter" --priority 1000 --suite fs --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email abhishek.lekshmanan@gmail.com --owner abhishek.lekshmanan@gmail.com --ceph giant-backports 
 </pre> 
 * *running* http://pulpito.ceph.com/abhi-2015-04-05_11:40:04-fs-giant-backports-testing-basic-multi/ 
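
 For reference, the filter used above is built by asking the paddles REST API for a run's jobs in a given state and joining their descriptions into the comma-separated list that --filter expects. A commented sketch of the same pipeline (run name and endpoint taken from the commands above): 

 <pre> 
 run=abhi-2015-04-03_10:40:03-fs-giant-backports-testing-basic-multi 
 status=running   # or: fail, dead 
 # paddles returns the run's jobs as JSON; jq extracts each description, 
 # the while loop joins them with commas, and sed strips the trailing comma 
 eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/?status=$status | jq '.[].description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//') 
 echo "$filter" 
 </pre> 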

 h3. teuthology run commit:02f9cdbf889071ca6fe3811d9b9a92a0b630fa55 (giant branch, March 2015) 

 h4. rados 

 <pre> 
 run=loic-2015-03-23_01:09:31-rados-giant---basic-multi 
 eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/?status=dead | jq '.[].description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//') 
 ./virtualenv/bin/teuthology-suite --filter "$filter" --priority 1000 --suite rados --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant 
 eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/?status=fail | jq '.[].description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//') 
 ./virtualenv/bin/teuthology-suite --filter "$filter" --priority 1000 --suite rados --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant 
 </pre> 

 * *green* http://pulpito.ceph.com/loic-2015-03-27_09:01:25-rados-giant---basic-multi/ (dead from the previous run) 
 * *green* http://pulpito.ceph.com/loic-2015-03-27_09:03:04-rados-giant---basic-multi/ (fail from the previous run) 

 <pre> 
 ./virtualenv/bin/teuthology-suite --filter-out btrfs,ext4 --priority 1000 --suite rados --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant 
 </pre> 

 * *aborted* http://pulpito.ceph.com/loic-2015-03-23_01:09:31-rados-giant---basic-multi/ (aborted so it would not compete with more important test suites) 
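
 For the record, a run can be aborted by killing its remaining jobs with teuthology-kill; a minimal sketch, assuming the same virtualenv as the scheduling commands above: 

 <pre> 
 # kill all jobs of the superseded run to free lab nodes for other suites 
 ./virtualenv/bin/teuthology-kill -r loic-2015-03-23_01:09:31-rados-giant---basic-multi 
 </pre> 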

 h4. rgw  

 -Waiting on http://tracker.ceph.com/issues/11180- 

 <pre> 
 ./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --filter-out btrfs,ext4 --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant 
 </pre> 

 * *green* http://pulpito.ceph.com/abhi-2015-03-29_17:16:17-rgw-giant---basic-multi/ 
 Re-run of the failed test suite to confirm we are not affected by environmental noise. The tests passed this time; however, comparing the logs of the "failed run":http://pulpito.ceph.com/loic-2015-03-27_08:40:06-rgw-giant---basic-multi/824286 with this run shows little that would indicate a network failure. The main differences between the two runs are: 
 In the failed run, as the test starts, we observe: 
 <pre> 
 2015-03-27T15:31:24.082 INFO:tasks.rgw.client.0.mira012.stdout:2015-03-27 15:31:24.082177 7f1bc0ff9700 -1 failed to list objects pool_iterate returned r=-2 
 </pre> 
 which probably indicates the cause of the failure. This does not appear in the successful run. In the successful run, however, apt update fails early on for one of the sources (though this should not technically affect the test run): 
 <pre>  
 2015-03-29T04:48:53.694 INFO:teuthology.orchestra.run.plana82.stderr:W: Failed to fetch http://apt-mirror.front.sepia.ceph.com/archive.ubuntu.com/ubuntu/dists/precise-backports/restricted/i18n/Index    No sections in Release file /var/lib/apt/lists/partial/apt-mirror.front.sepia.ceph.com_archive.ubuntu.com_ubuntu_dists_precise-backports_restricted_i18n_Index 
 2015-03-29T04:48:53.694 INFO:teuthology.orchestra.run.plana82.stderr: 
 2015-03-29T04:48:53.694 INFO:teuthology.orchestra.run.plana82.stderr:E: Some index files failed to download. They have been ignored, or old ones used instead. 
 </pre> 
 
 * *red* http://pulpito.ceph.com/loic-2015-03-27_08:40:06-rgw-giant---basic-multi/ 
 ** *new bug* potential regression, s3 bucket quota tests failed, reported issue http://tracker.ceph.com/issues/11259  
 <pre> 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=90b37d9bdcc044e26f978632cd68f19ece82d19a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_bucket_quota.pl' 
 </pre> 
 *** "rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/replicated.yaml tasks/rgw_bucket_quota.yaml}":http://pulpito.ceph.com/loic-2015-03-27_08:40:06-rgw-giant---basic-multi/824286 


 h4. rbd 

 <pre> 
 ./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant 
 </pre> 

 * *green* http://pulpito.ceph.com/loic-2015-03-23_01:04:10-rbd-giant---basic-multi/ 

 h4. fs 

 <pre> 
 ./virtualenv/bin/teuthology-suite -k testing --priority 1000 --suite fs --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant 
 </pre> 

 * *green* http://pulpito.ceph.com/loic-2015-03-22_23:57:16-fs-giant-testing-basic-multi/ 

 h3. teuthology run commit:b6bebbbc99540b76221aeccb3693784b414f607b (giant integration branch, March 2015) 

 There is one potential regression (it does not happen every time but appears to be new): "osdc/Objecter.cc: 405: FAILED assert(tick_event == __null)":http://tracker.ceph.com/issues/11183 

 h4. rados 

 <pre> 
 run=loic-2015-03-19_19:01:57-rados-giant-backports---basic-multi 
 eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/?status=fail | jq '.[].description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//') 
 ./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rados --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant-backports 
 </pre> 

 * *green* http://pulpito.ceph.com/loic-2015-03-20_17:38:25-rados-giant-backports---basic-multi/ 


 <code>./virtualenv/bin/teuthology-suite --filter-out btrfs,ext4 --priority 1000 --suite rados --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org    --ceph giant-backports</code> 

 * *red* http://pulpito.ceph.com/loic-2015-03-19_19:01:57-rados-giant-backports---basic-multi 
 ** *environmental noise* NTP problems: "cluster [WRN] message from mon.1 was stamped 0.897801s in the future, clocks not synchronized" in cluster log 
 *** "rados/thrash-erasure-code-isa/{clusters/fixed-2.yaml distros/rhel_7.0.yaml fs/xfs.yaml msgr-failures/few.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}":http://pulpito.ceph.com/loic-2015-03-19_19:01:57-rados-giant-backports---basic-multi/811652 
 ** *new bug* "osdc/Objecter.cc: 405: FAILED assert(tick_event == __null)":http://tracker.ceph.com/issues/11183 
 *** "rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml msgr-failures/few.yaml thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml}":http://pulpito.ceph.com/loic-2015-03-19_19:01:57-rados-giant-backports---basic-multi/811322 


 h4. rgw 

 <pre> 
 run=loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi 
 eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/?status=fail | jq '.[].description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//') 
 ./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rgw --filter-out btrfs,ext4 --suite-branch wip-rgw-regional-summary --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant-backports 
 </pre> 

 * *green* http://pulpito.ceph.com/loic-2015-03-22_20:48:29-rgw-giant-backports---basic-multi/ 

 The previous run failed because of "ensure summary is looked for the user we need (part 2)":https://github.com/ceph/ceph-qa-suite/pull/375 which was also "backported to the giant branch":https://github.com/ceph/ceph-qa-suite/pull/376 

 <pre> 
 run=loic-2015-03-20_00:17:55-rgw-giant-backports---basic-multi 
 eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/?status=fail | jq '.[].description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//') 
 ./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rgw --filter-out btrfs,ext4 --suite-branch wip-rgw-regional-summary --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant-backports 
 </pre> 

 * *red* http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/ 
 ** *environmental error* Download error on https://pypi.python.org/simple/ 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815909 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815907 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815905 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815903 
 ** *needs fixing* "Remote branch wip-rgw-regional-summary not found in upstream origin, using HEAD instead", because the wip-rgw-regional-summary branch does not exist 
 *** "rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/ec.yaml tasks/rgw_s3tests.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815901 
 *** "rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815899 
 *** "rgw/multifs/{clusters/fixed-2.yaml frontend/apache.yaml fs/xfs.yaml rgw_pool_type/ec.yaml tasks/rgw_s3tests.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815897 
 *** "rgw/multifs/{clusters/fixed-2.yaml frontend/apache.yaml fs/xfs.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815895 
 ** *needs fixing* 'git clone -b wip-rgw-regional-summary git://ceph.com/git/s3-tests.git /home/ubuntu/cephtest/s3-tests' failed because the wip-rgw-regional-summary branch does not exist 
 *** "rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/replicated.yaml tasks/rgw_s3tests.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815902 
 *** "rgw/multifs/{clusters/fixed-2.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/ec-profile.yaml tasks/rgw_s3tests.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815900 
 *** "rgw/multifs/{clusters/fixed-2.yaml frontend/apache.yaml fs/xfs.yaml rgw_pool_type/replicated.yaml tasks/rgw_s3tests.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815898 
 *** "rgw/multifs/{clusters/fixed-2.yaml frontend/apache.yaml fs/xfs.yaml rgw_pool_type/ec-profile.yaml tasks/rgw_s3tests.yaml}":http://pulpito.ceph.com/loic-2015-03-22_19:18:01-rgw-giant-backports---basic-multi/815896 

 Before v0.87.1 was released, commit:6b08a729540c61f3c8b15c5a3ce9382634bf800c was tested at http://tracker.ceph.com/issues/10501#rgw and came back OK. A few commits were merged in between: 

 <pre> 
 git log --no-merges --oneline tags/v0.87.1 ^6b08a729540c61f3c8b15c5a3ce9382634bf800c^  
 283c2e7 0.87.1 
 734e9af osd: tolerate sessionless con in fast dispatch path 
 ccb0914 qa: use correct binary path on rpm-based systems 
 df8285c fsync-tester: print info about PATH and locations of lsof lookup 
 91515e7 osdc: Constrain max number of in-flight read requests 
 </pre> 
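
 Note on the revision range: git log A ^B^ lists commits reachable from A but not from B's first parent, so unlike B..A it includes B itself. The equivalent two-dot spelling would be: 

 <pre> 
 # same range written with the two-dot syntax; the trailing ^ selects the first parent 
 git log --no-merges --oneline 6b08a729540c61f3c8b15c5a3ce9382634bf800c^..tags/v0.87.1 
 </pre> 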

 Later, commit:ccb0914f76da23acdd7374233cd1939ab80ef3c8 was tested at http://pulpito.ceph.com/teuthology-2015-02-12_17:41:04-rgw-giant-distro-basic-multi/ and was green. Two commits were merged after that and before v0.87.1 was released: 

 <pre> 
 $ git log --oneline tags/v0.87.1 ^ccb0914f76da23acdd7374233cd1939ab80ef3c8^ 
 283c2e7 0.87.1 
 4178e32 Merge pull request #3731 from liewegas/wip-10834-giant 
 734e9af osd: tolerate sessionless con in fast dispatch path 
 ccb0914 qa: use correct binary path on rpm-based systems 
 </pre> 

 <pre> 
 filter='rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/replicated.yaml}' 
 ./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rgw --filter-out btrfs,ext4 --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph v0.87.1 
 </pre> 

 The problem seems to be related to "rgwadmin tasks assumes non-regional output":http://tracker.ceph.com/issues/11180, which was fixed two days ago. But why did it manifest all of a sudden and change things for a branch that was otherwise unmodified? 

 * *red* http://pulpito.ceph.com/loic-2015-03-22_14:55:18-rgw-v0.87.1---basic-multi/ (v0.87.1) 

 <pre> 
 filter='rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/replicated.yaml}' 
 ./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rgw --filter-out btrfs,ext4 --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant 
 </pre> 

 * *red* http://pulpito.ceph.com/loic-2015-03-22_12:55:06-rgw-giant---basic-multi/ (giant branch) 

 <pre> 
 run=loic-2015-03-20_00:17:55-rgw-giant-backports---basic-multi 
 eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/?status=fail | jq '.[].description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//') 
 ./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rgw --filter-out btrfs,ext4 --suite-branch wip-rgw-acl-giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant-backports 
 </pre> 

 * *red* http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/ 
 ** *???* tasks/radosgw_admin.py assert time.time() - timestamp <= (20 x 60) 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/replicated.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815807 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815806 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/ec-profile.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815805 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/apache.yaml rgw_pool_type/replicated.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815803 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/apache.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815802 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/apache.yaml rgw_pool_type/ec-profile.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815801 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815798 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/ec-profile.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815797 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815796 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/replicated.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815795 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815794 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815792 
 ** *environmental noise* Command failed with status 1: 'cd /home/ubuntu/cephtest && cd radosgw-agent.client.0 && ./bootstrap' 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/apache.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815800 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/ec-profile.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815793 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/replicated.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815799 
 ** *environmental noise* No route to host (113): plana59 is down 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-22_10:53:10-rgw-giant-backports---basic-multi/815804 

 <pre> 
 run=loic-2015-03-20_00:17:55-rgw-giant-backports---basic-multi 
 eval filter=$(curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/?status=fail | jq '.[].description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//') 
 ./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rgw --filter-out btrfs,ext4 --suite-branch wip-rgw-acl-giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant-backports 
 </pre> 

 * *red* http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/ 
 ** the s3-tests referred to a branch wip-rgw-acl-giant which didn't exist in https://github.com/ceph/s3-tests; "pushed the wip-rgw-acl-giant branch":https://github.com/ceph/s3-tests/tree/wip-rgw-acl-giant and tried again 
 ** *tasks/radosgw_admin.py assert time.time() - timestamp <= (20 x 60)* 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/replicated.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813491 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813490 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/ec-profile.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813489 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813488 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/apache.yaml rgw_pool_type/replicated.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813487 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/apache.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813486 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/apache.yaml rgw_pool_type/ec-profile.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813485 
 *** "rgw/singleton/{all/radosgw-admin-multi-region.yaml frontend/apache.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813484 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/replicated.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813483 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813482 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/ec-profile.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813481 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813480 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/replicated.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813479 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/ec.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813478 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/ec-profile.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813477 
 *** "rgw/singleton/{all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/ec-cache.yaml}":http://pulpito.ceph.com/loic-2015-03-20_17:58:01-rgw-giant-backports---basic-multi/813476 


 <code>./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --filter-out btrfs,ext4 --suite-branch wip-rgw-acl-giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org --ceph giant-backports</code> 

 * *red* http://pulpito.ceph.com/loic-2015-03-19_18:55:59-rgw-giant-backports---basic-multi 
 ** assuming that all 24 failed jobs are environmental noise 

 h4. rbd 

 <pre> 
 filter='rbd/librbd/{cache/none.yaml cachepool/small.yaml clusters/fixed-3.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/qemu_xfstests.yaml}' 
 ./virtualenv/bin/teuthology-suite --filter="$filter" --priority 1000 --suite rbd --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org    --ceph giant-backports 
 </pre> 

 * *green* http://pulpito.ceph.com/loic-2015-03-20_17:46:39-rbd-giant-backports---basic-multi/ 

 <code>./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org    --ceph giant-backports</code> 

 * *red* http://pulpito.ceph.com/loic-2015-03-19_18:53:40-rbd-giant-backports---basic-multi 
 ** *environmental noise* plana34.stdout:[15188.076023] BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:0:12689] 
 *** "rbd/librbd/{cache/none.yaml cachepool/small.yaml clusters/fixed-3.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/qemu_xfstests.yaml}":http://pulpito.ceph.com/loic-2015-03-19_18:53:40-rbd-giant-backports---basic-multi/811021/ 

 h4. fs 

 <code>./virtualenv/bin/teuthology-suite -k testing --priority 1000 --suite fs --suite-branch giant --machine-type plana,burnupi,mira --distro ubuntu --email loic@dachary.org --owner loic@dachary.org    --ceph giant-backports</code> 

 * *green* http://pulpito.ceph.com/loic-2015-03-19_18:52:05-fs-giant-backports-testing-basic-multi 
