Tasks #12701

hammer v0.94.4

Added by Abhishek Varshney over 3 years ago. Updated about 3 years ago.

Status: Resolved
Priority: Urgent
Assignee:
Target version:
Start date: 08/15/2015
Due date:
% Done: 0%
Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

Workflow

  • Preparing the release OK
  • Cutting the release
    • Loic asks Sage if a point release should be published OK
    • Loic gets approval from all leads
    • Sage writes and commits the release notes IN PROGRESS
    • Loic informs Yuri that the branch is ready for testing DONE
    • Yuri runs additional integration tests DONE
    • If Yuri discovers new bugs that need to be backported urgently (i.e. their priority is set to Urgent), the release goes back to the preparation stage; it was not ready after all
    • Yuri informs Alfredo that the branch is ready for release DONE
    • Alfredo creates the packages and sets the release tag DONE

Release information

git --no-pager log --format='%H %s' --graph tags/v0.94.3..ceph/hammer | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

teuthology run commit e1dadd3da9e39daf669f94715c7833d2b280bbed (HAMMER BACKPORTS August-14)

git --no-pager log --format='%H %s' --graph ceph/hammer..ceph/hammer-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

rbd

./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports --machine-type plana,burnupi,mira
paddles=paddles.front.sepia.ceph.com
run=loic-2015-08-15_21:59:25-rbd-hammer-backports---basic-multi
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --filter="$filter" --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports --machine-type plana,burnupi,mira
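The filter line queries paddles for the dead/failed jobs of the run and joins their descriptions into a single comma-separated `--filter` value for the re-run. A minimal offline sketch, with hypothetical job descriptions (the `rbd/a.yaml` names are made up) standing in for the jq output:

```shell
# Hypothetical jq output from paddles: one JSON-quoted description
# per failed/dead job.
descriptions='"rbd/a.yaml"
"rbd/b {x.yaml y.yaml}"'
# Same eval/echo/sed dance as above: join into one comma-separated
# value; eval re-parses the string so the embedded double quotes
# protect spaces inside a description.
eval filter=$(echo "$descriptions" | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
echo "$filter"
# → rbd/a.yaml,rbd/b {x.yaml y.yaml}
```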

rados

Together with http://pulpito.ceph.com/loic-2015-08-29_20:19:58-rados-hammer-backports---basic-multi/ (a re-run of the failed tests after removing https://github.com/ceph/ceph/pull/5361 from the integration branch), the following makes for a successful run of the rados suite.

./virtualenv/bin/teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports --machine-type plana,burnupi,mira
paddles=paddles.front.sepia.ceph.com
run=loic-2015-08-15_22:00:55-rados-hammer-backports---basic-multi
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 1000 --suite rados --filter="$filter" --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports --machine-type plana,burnupi,mira

rgw

./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports --machine-type plana,burnupi,mira
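The `--subset` argument used throughout these commands relies on bash's `$RANDOM`: `$RANDOM % 5` evaluates to a number between 0 and 4, so each invocation schedules one randomly chosen fifth of the suite. A quick sketch (using arithmetic expansion rather than `expr`; the fallback value 12345 is only there for shells without `$RANDOM`):

```shell
# bash sets $RANDOM to 0..32767, so modulo 5 selects one of the five
# equally sized slices the suite is divided into.
n=$((${RANDOM:-12345} % 5))
echo "--subset $n/5"
```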

History

#1 Updated by Abhishek Varshney over 3 years ago

  • Start date changed from 06/12/2015 to 08/15/2015

#2 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#3 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#4 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#5 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#6 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#7 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#8 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#9 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#10 Updated by Loic Dachary over 3 years ago

http://pulpito.ceph.com/loic-2015-08-15_22:14:11-upgrade:hammer-hammer-backports---basic-multi/

Sage: every single test was stuck, and on the 3 I checked
all mon's were stuck in leveldb from scrub using 100% cpu:

#0  0x00007fa14bd164a0 in 
std::string::_Rep::_M_destroy(std::allocator<char> const&) () from 
/usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x00000000008a4c45 in _M_dispose (__a=..., this=<optimized out>) at 
/usr/include/c++/4.8/bits/basic_string.h:249
#2  ~basic_string (this=0x7fa145fe1880, __in_chrg=<optimized out>) at 
/usr/include/c++/4.8/bits/basic_string.h:539
#3  LevelDBStore::split_key (in=..., prefix=prefix@entry=0x0, 
key=key@entry=0x7fa145fe1990) at os/LevelDBStore.cc:230
#4  0x00000000008a73d6 in LevelDBStore::LevelDBWholeSpaceIteratorImpl::key 
(this=<optimized out>) at os/LevelDBStore.h:271
#5  0x000000000059a86e in KeyValueDB::IteratorImpl::key (this=<optimized 
out>) at ./os/KeyValueDB.h:146
#6  0x00000000008a5fdf in LevelDBStore::get (this=0x361b080, prefix=..., 
keys=..., out=0x7fa145fe1b60) at os/LevelDBStore.cc:195
#7  0x000000000059fc0a in MonitorDBStore::get (this=0x361b1e0, prefix=..., 
key=..., bl=...) at mon/MonitorDBStore.h:499
#8  0x00000000005b268a in Monitor::_scrub (this=this@entry=0x3792000, 
r=0x360d358) at mon/Monitor.cc:4272
#9  0x00000000005c2531 in Monitor::scrub (this=this@entry=0x3792000) at 
mon/Monitor.cc:4217
#10 0x00000000005cc6b7 in Monitor::handle_command 
(this=this@entry=0x3792000, m=m@entry=0x3ad3a00) at mon/Monitor.cc:2711
#11 0x00000000005cefa9 in Monitor::dispatch (this=this@entry=0x3792000, 
s=s@entry=0x424ae00, m=m@entry=0x3ad3a00, 
src_is_mon=src_is_mon@entry=false) at mon/Monitor.cc:3457
#12 0x00000000005cfc26 in Monitor::_ms_dispatch 
(this=this@entry=0x3792000, m=m@entry=0x3ad3a00) at mon/Monitor.cc:3376
#13 0x00000000005cea35 in Monitor::handle_forward 
(this=this@entry=0x3792000, m=m@entry=0x3c0f080) at mon/Monitor.cc:3068
#14 0x00000000005cf66d in Monitor::dispatch (this=this@entry=0x3792000, 
s=s@entry=0x3636e00, m=m@entry=0x3c0f080, 
src_is_mon=src_is_mon@entry=true) at mon/Monitor.cc:3589
#15 0x00000000005cfc26 in Monitor::_ms_dispatch 
(this=this@entry=0x3792000, m=m@entry=0x3c0f080) at mon/Monitor.cc:3376
#16 0x00000000005ede23 in Monitor::ms_dispatch (this=0x3792000, 
m=0x3c0f080) at mon/Monitor.h:833
#17 0x00000000009277c9 in ms_deliver_dispatch (m=0x3c0f080, 
this=0x37a2700) at ./msg/Messenger.h:567
#18 DispatchQueue::entry (this=0x37a28c8) at 
msg/simple/DispatchQueue.cc:185
#19 0x00000000007c7fcd in DispatchQueue::DispatchThread::entry 
(this=<optimized out>) at msg/simple/DispatchQueue.h:103
#20 0x00007fa14cf0a182 in start_thread (arg=0x7fa145fe4700) at 
pthread_create.c:312
#21 0x00007fa14b474fbd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:111

#11 Updated by Joao Eduardo Luis over 3 years ago

The stack trace appears to get stuck while dealloc'ing a string. I don't really know how this can happen.

#12 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#13 Updated by Sage Weil over 3 years ago

Joao Luis wrote:

The stack trace appears to get stuck while dealloc'ing a string. I don't really know how this can happen.

The stack traces vary slightly; they're all in get_key(), but with various other bits after that:

Thread 31 (Thread 0x7f3c24479700 (LWP 12869)):
#0  LevelDBStore::get (this=0x41231e0, prefix=..., keys=..., out=0x7f3c24477e70) at /usr/include/c++/4.8/bits/stl_set.h:299
#1  0x000000000059fc0a in MonitorDBStore::get (this=0x4123340, prefix=..., key=..., bl=...) at mon/MonitorDBStore.h:499
#2  0x00000000005b268a in Monitor::_scrub (this=this@entry=0x4282000, r=r@entry=0x4897848) at mon/Monitor.cc:4272

Thread 33 (Thread 0x7fdad6593700 (LWP 12898)):
#0  LevelDBStore::split_key (in=..., prefix=prefix@entry=0x7fdad6590890, key=key@entry=0x7fdad65908a0) at os/LevelDBStore.cc:226
#1  0x00000000008a7462 in LevelDBStore::LevelDBWholeSpaceIteratorImpl::raw_key (this=<optimized out>) at os/LevelDBStore.h:276
#2  0x000000000059aaaf in KeyValueDB::IteratorImpl::valid (this=0x5139e20) at ./os/KeyValueDB.h:132
#3  0x00000000008a5fc0 in LevelDBStore::get (this=0x50b8f20, prefix=..., keys=..., out=0x7fdad6590b60) at os/LevelDBStore.cc:195
#4  0x000000000059fc0a in MonitorDBStore::get (this=0x50b9080, prefix=..., key=..., bl=...) at mon/MonitorDBStore.h:499
#5  0x00000000005b268a in Monitor::_scrub (this=this@entry=0x521e000, r=0x50aa398) at mon/Monitor.cc:4272

Thread 31 (Thread 0x7f93f0e82700 (LWP 12899)):
#0  0x00007f93f658be70 in std::string::compare(std::string const&) const () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x00000000005b2875 in operator< <char, std::char_traits<char>, std::allocator<char> > (__rhs=..., __lhs=...) at /usr/include/c++/4.8/bits/basic_string.h:2571
#2  operator() (this=<optimized out>, __y=..., __x=...) at /usr/include/c++/4.8/bits/stl_function.h:235
#3  operator[] (__k=..., this=0x4b005f8) at /usr/include/c++/4.8/bits/stl_map.h:463
#4  Monitor::_scrub (this=this@entry=0x465a000, r=r@entry=0x4b005c8) at mon/Monitor.cc:4274

Thread 33 (Thread 0x7f0194204700 (LWP 14872)):
#0  ~basic_string (this=0x7f0194202b90, __in_chrg=<optimized out>) at /usr/include/c++/4.8/bits/basic_string.h:539
#1  raw (l=1652, this=0x3aea0b0) at common/buffer.cc:135
#2  raw_char (l=1652, this=0x3aea0b0) at common/buffer.cc:483
#3  ceph::buffer::copy (c=0x7f018bad1012 "\001\002\002\202\002", len=1652) at common/buffer.cc:589
#4  0x000000000084f888 in ceph::buffer::ptr::ptr (this=0x7f0194202be0, d=<optimized out>, l=<optimized out>) at common/buffer.cc:651
#5  0x00000000008a59d4 in LevelDBStore::to_bufferlist (in=...) at os/LevelDBStore.cc:215
#6  0x00000000008a788e in LevelDBStore::LevelDBWholeSpaceIteratorImpl::value (this=<optimized out>) at os/LevelDBStore.h:280
#7  0x000000000059a88e in KeyValueDB::IteratorImpl::value (this=<optimized out>) at ./os/KeyValueDB.h:149
#8  0x00000000008a6053 in LevelDBStore::get (this=0x38c8f20, prefix=..., keys=..., out=0x7f0194202e70) at os/LevelDBStore.cc:196
#9  0x000000000059fc0a in MonitorDBStore::get (this=0x38c9080, prefix=..., key=..., bl=...) at mon/MonitorDBStore.h:499
#10 0x00000000005b268a in Monitor::_scrub (this=this@entry=0x3a6c000, r=r@entry=0x4067c48) at mon/Monitor.cc:4272

Thread 33 (Thread 0x7fcb0a153700 (LWP 14871)):
#0  0x00000000008a73c3 in LevelDBStore::LevelDBWholeSpaceIteratorImpl::key (this=0x3b807d0) at os/LevelDBStore.h:271
#1  0x000000000059a86e in KeyValueDB::IteratorImpl::key (this=<optimized out>) at ./os/KeyValueDB.h:146
#2  0x00000000008a5fdf in LevelDBStore::get (this=0x3ba4f20, prefix=..., keys=..., out=0x7fcb0a151e70) at os/LevelDBStore.cc:195
#3  0x000000000059fc0a in MonitorDBStore::get (this=0x3ba5080, prefix=..., key=..., bl=...) at mon/MonitorDBStore.h:499
#4  0x00000000005b268a in Monitor::_scrub (this=this@entry=0x3d48000, r=r@entry=0x3fb8388) at mon/Monitor.cc:4272

and all are at 100% cpu. pretty clearly busy looping in get()!

Probably something in the caller is different on hammer vs master? :/

In any case, I'd drop that commit!

#14 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#15 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#16 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#17 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#18 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#19 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#20 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#21 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#22 Updated by Abhishek Varshney over 3 years ago

  • Description updated (diff)

#23 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#24 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#25 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#26 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#27 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#28 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#29 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#30 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#31 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#32 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#33 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#34 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#35 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#36 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#37 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#38 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#39 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#40 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#41 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#42 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#43 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#44 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#45 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#46 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#47 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#48 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#49 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#50 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#51 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#52 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#53 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#54 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#55 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#56 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)
filter="rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/replicated.yaml tasks/rgw_swift.yaml}" 
./virtualenv/bin/teuthology-suite --priority 101 --suite rgw --filter="$filter" --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports-loic --machine-type plana,burnupi,mira

#57 Updated by Loic Dachary over 3 years ago

$ git rev-parse ceph/hammer-backports
63c3d50ace54238418cec1d5ebb5a32364058cad

git --no-pager log --format='%H %s' --graph ceph/hammer..ceph/hammer-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#58 Updated by Loic Dachary over 3 years ago

./virtualenv/bin/teuthology-suite --priority 1000 --subset $(expr $RANDOM % 5)/5 --suite rgw --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports --machine-type plana,burnupi,mira

#59 Updated by Loic Dachary over 3 years ago

No need to run the rgw suite on this batch because all pull requests have already been tested in the previous batch. No need to run the fs suite either, because there is a single pull request for fs and it has already been tested; it is included merely because it is pending approval.

$ git rev-parse ceph/hammer-backports-loic
11265088285fe45c345a8771dfc2a39918a81857

git --no-pager log --format='%H %s' --graph ceph/hammer..ceph/hammer-backports-loic | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

#60 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#61 Updated by Loic Dachary over 3 years ago

rbd

./virtualenv/bin/teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports-loic --machine-type plana,burnupi,mira
paddles=paddles.front.sepia.ceph.com
run=loic-2015-09-02_13:56:47-rbd-hammer-backports-loic---basic-multi
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 101 --suite rbd --filter="$filter" --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports-loic --machine-type plana,burnupi,mira

#62 Updated by Loic Dachary over 3 years ago

rados

./virtualenv/bin/teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports-loic --machine-type plana,burnupi,mira
paddles=paddles.front.sepia.ceph.com
run=loic-2015-09-02_13:58:31-rados-hammer-backports-loic---basic-multi
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
./virtualenv/bin/teuthology-suite --priority 101 --suite rados --filter="$filter" --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports-loic --machine-type plana,burnupi,mira

Verifying that https://github.com/ceph/ceph/pull/5887 does not cause problems with the rados suite.

teuthology-suite --priority 101 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --distro ubuntu --email loic@dachary.org --filter-out=rhel --ceph wip-pr-5887 --machine-type plana,burnupi,mira
run=loic-2015-10-03_11:11:28-rados-wip-pr-5887---basic-multi
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite rados --filter="$filter" --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph wip-pr-5887 --machine-type plana,burnupi,mira

#63 Updated by Loic Dachary over 3 years ago

powercycle

./virtualenv/bin/teuthology-suite -l2 -v -c hammer-backports-loic -k testing -m plana,burnupi,mira -s powercycle -p 1000 --email loic@dachary.org

#65 Updated by Loic Dachary over 3 years ago

ceph-deploy

teuthology-suite --verbose --suite ceph-deploy --filter=centos_6,ubuntu_14 --suite-branch hammer --ceph hammer-backports-loic --machine-type vps --priority 1000

Note: CentOS 6.5 was removed from the distros

#66 Updated by Loic Dachary over 3 years ago

fs

./virtualenv/bin/teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports-loic --machine-type plana,burnupi,mira
run=loic-2015-09-06_23:31:17-fs-hammer-backports-loic---basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 1000 --suite fs --filter="$filter" --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports-loic --machine-type plana,burnupi,mira

Assuming transient / environmental errors, running again

run=loic-2015-09-06_23:31:17-fs-hammer-backports-loic---basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 1000 --suite fs --filter="$filter" --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports-loic --machine-type plana,burnupi,mira

The errors are LibCephFS.GetPoolId failures; verifying https://github.com/ceph/ceph/pull/5887

run=loic-2015-09-06_23:31:17-fs-hammer-backports-loic---basic-multi
paddles=paddles.front.sepia.ceph.com
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-suite --priority 101 --suite fs --filter="$filter" --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph wip-pr-5887 --machine-type plana,burnupi,mira

#67 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#68 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#69 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#70 Updated by Loic Dachary over 3 years ago

  • Description updated (diff)

#71 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#72 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#73 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#74 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#75 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#76 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#77 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#78 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#79 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#80 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#81 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#82 Updated by Yuri Weinstein about 3 years ago

  • Description updated (diff)

#83 Updated by Yuri Weinstein about 3 years ago

QE Validation (started 10/12/15)

Re-run command lines and filters are captured in http://pad.ceph.com/p/hammer_v0.94.4_QE_validation_notes

Suite / Runs & Reruns / Notes & Issues

  • rados: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-13_14:11:26-rados-hammer-distro-basic-multi/
    http://149.202.176.126:8081/teuthology-2015-10-14_20:38:52-rados-hammer-distro-basic-openstack/ (openstack)
    http://pulpito.ceph.com/kchai-2015-10-15_08:09:36-rados-infernalis---basic-multi/
  • rbd: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-12_17:29:20-rbd-hammer-distro-basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-13_13:00:56-rbd-hammer-distro-basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-13_13:41:05-rbd-hammer-distro-basic-vps/
  • rgw: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-12_17:31:55-rgw-hammer-distro-basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-13_13:07:07-rgw-hammer-distro-basic-multi/
  • fs: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-13_07:57:16-fs-hammer---basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-13_18:35:01-fs-hammer---basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-13_20:55:23-fs-hammer---basic-multi/
  • krbd: FAILED (env noise)
    http://pulpito.ceph.com/teuthology-2015-10-13_07:58:26-krbd-hammer-testing-basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-14_09:28:34-krbd-hammer-testing-basic-multi/
    http://149.202.176.126:8081/teuthology-2015-10-15_16:10:56-krbd-hammer-testing-basic-openstack/ (openstack)
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-19_15:55:25-krbd-hammer-testing-basic-openstack/ (openstack)
  • kcephfs: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-13_07:59:34-kcephfs-hammer-testing-basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-13_18:28:59-kcephfs-hammer-testing-basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-14_09:41:02-kcephfs-hammer-testing-basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-19_09:41:43-kcephfs-hammer-testing-basic-multi/ (https://github.com/ceph/ceph/pull/6287)
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-17_18:08:01-kcephfs-hammer-testing-basic-openstack/ (1 failed, #11482)
  • knfs: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-13_08:10:14-knfs-hammer-testing-basic-multi/
  • hadoop: FAILED (env noise; John, pls double check download.ceph.com URL in hammer)
    http://pulpito.ceph.com/teuthology-2015-10-14_09:45:17-hadoop-hammer---basic-multi/ or http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-15_21:24:32-hadoop-hammer---basic-openstack/
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-18_18:12:02-hadoop-hammer---basic-openstack/
  • multimds: not required, optional
  • samba: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-14_09:47:08-samba-hammer---basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-15_14:39:15-samba-hammer---basic-multi/ or http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-15_21:42:14-samba-hammer---basic-openstack/
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-15_22:52:30-samba-hammer---basic-openstack/
  • rest: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-14_09:48:09-rest-hammer---basic-multi/
  • ceph-deploy: FAILED (Alfredo, Sage pls review #13366)
    http://pulpito.ceph.com/teuthology-2015-10-14_09:51:21-ceph-deploy-hammer-distro-basic-vps/
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-15_21:44:53-upgrade:dumpling-firefly-x-hammer-distro-basic-openstack/
  • upgrade/client-upgrade: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-14_14:04:59-upgrade:client-upgrade-hammer-distro-basic-multi/
    http://149.202.176.126:8081/teuthology-2015-10-14_21:18:02-upgrade:client-upgrade-hammer-distro-basic-openstack/ (same as above, in openstack)
  • upgrade/dumpling-firefly-x (ubuntu14): FAILED (#11104)
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-15_21:44:53-upgrade:dumpling-firefly-x-hammer-distro-basic-openstack/
  • upgrade/dumpling-firefly-x (vps): this suite runs out of memory on vps
    http://149.202.176.126:8081/teuthology-2015-10-14_21:39:06-upgrade:dumpling-firefly-x-hammer-distro-basic-openstack/
  • upgrade/firefly-x: FAILED (ubuntu14 passed, #11104)
    http://149.202.176.126:8081/teuthology-2015-10-14_22:01:56-upgrade:firefly-x-hammer-distro-basic-openstack/ or http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-17_18:33:01-upgrade:firefly-x-hammer-distro-basic-openstack/
    http://pulpito.ceph.com/teuthology-2015-10-14_15:01:45-upgrade:firefly-x-hammer-distro-basic-multi/
  • upgrade/giant-x (ubuntu14): PASSED
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-15_21:51:54-upgrade:giant-x-hammer-distro-basic-openstack/
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-16_16:00:28-upgrade:giant-x-hammer-distro-basic-openstack/
  • upgrade/giant-x (vps): FAILED (#11104)
    http://pulpito.ceph.com/teuthology-2015-10-16_17:05:08-upgrade:giant-x-hammer-distro-basic-vps/
  • upgrade/hammer (ubuntu14): PASSED
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-15_21:50:58-upgrade:hammer-hammer-distro-basic-openstack/
    http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2015-10-16_16:02:45-upgrade:hammer-hammer-distro-basic-openstack/
  • upgrade/hammer (vps): FAILED (#11104)
    http://pulpito.ceph.com/teuthology-2015-10-15_14:13:10-upgrade:hammer-hammer-distro-basic-vps/
  • powercycle: PASSED
    http://pulpito.ceph.com/teuthology-2015-10-13_14:29:06-powercycle-hammer-testing-basic-multi/
    http://pulpito.ceph.com/teuthology-2015-10-15_14:47:41-powercycle-hammer-testing-basic-multi/


#84 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)

#85 Updated by Loic Dachary about 3 years ago

  • Description updated (diff)
  • Status changed from In Progress to Resolved
