Tasks #14692
hammer v0.94.7 (Status: Closed)
Description
Workflow¶
- Preparing the release
- Cutting the release
- Nathan asks Sage if a point release should be published YES
- Nathan gets approval from all leads
- Sage writes and commits the release notes IN PROGRESS
- Nathan informs Yuri that the branch is ready for testing
- Yuri runs additional integration tests DONE
- If Yuri discovers new bugs that need to be backported urgently (i.e. their priority is set to Urgent), the release goes back to being prepared; it was not ready after all
- Yuri informs Alfredo that the branch is ready for release DONE
- Alfredo creates the packages and sets the release tag DONE
Release information¶
- branch to build from: hammer, commit: d56bdf93ced6b80b07397d57e3fa68fe68304432
- version: v0.94.7
- type of release: point release
- where to publish the release: http://download.ceph.com/debian-hammer and http://download.ceph.com/rpm-hammer
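A minimal sketch of fetching and verifying the build target before packaging (the remote name ceph and the local branch name v0.94.7-build are assumptions, not part of the release procedure):

git fetch ceph
git checkout -b v0.94.7-build d56bdf93ced6b80b07397d57e3fa68fe68304432
git merge-base --is-ancestor d56bdf93ced6b80b07397d57e3fa68fe68304432 ceph/hammer && echo "commit is on the hammer branch"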
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/hammer..hammer-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 6551
- |\
- | + client: add InodeRef.h to make dist
- | + client: use smart pointer to track 'cwd' and 'root_parents'
- | + client: convert Inode::snapdir_parent to smart pointer
- | + client: use smart pointer to track temporary inode reference
- | + client: convert CapSnap::in to smart pointer
- | + client: convert Fh::inode to smart pointer
- | + client: use smart pointers in MetaRequest
- | + client: convert Dentry::inode to smart pointer
- | + client: hold reference for returned inode
- + Pull request 6604
- |\
- | + client/Client.cc: remove only once used variable
- | + client/Client.cc: fix realloc memory leak
- | + client: added permission check based on getgrouplist
- | + configure.ac: added autoconf check for getgrouplist
- + Pull request 6754
- |\
- | + mds: fix out-of-order messages
- + Pull request 6946
- |\
- | + osd: log inconsistent shard sizes
- + Pull request 7110
- |\
- | + fix ceph-fuse writing to stale log file after log rotation
- + Pull request 7185
- |\
- | + rgw: radosgw-admin bucket check --fix not work
- + Pull request 7188
- |\
- | + rgw: Add default quota config
- + Pull request 7414
- |\
- | + rgw: warn on suspicious civetweb frontend parameters
- + Pull request 7415
- |\
- | + PG::activate(): handle unexpected cached_removed_snaps more gracefully
- + Pull request 7456
- |\
- | + qa/workunits/post-file.sh: sudo
- | + qa/workunits/post-file: pick a dir that's readable by world
- | + qa/workunits/post-file.sh: use /etc/default
- + Pull request 7475
- |\
- | + ceph-disk: use blkid instead of sgdisk -i
- + Pull request 7485
- |\
- | + librbd: ensure librados callbacks are flushed prior to destroying image
- | + librbd: simplify IO flush handling
- | + WorkQueue: PointerWQ drain no longer waits for other queues
- | + test: new librbd flatten test case
- + Pull request 7488
- |\
- | + auth: return error code from encrypt/decrypt; make error string optional
- | + auth: optimize crypto++ key context
- | + auth/Crypto: optimize libnss key
- | + auth: refactor crypto key context
- | + auth/cephx: optimize signature check
- | + auth/cephx: move signature calc into helper
- | + auth/Crypto: avoid memcpy on libnss crypto operation
- | + auth: make CryptoHandler implementations totally private
- + Pull request 7542
- |\
- | + Fixed the ceph get mdsmap assertion.
- + Pull request 7576
- |\
- | + mon: add mon_config_key prefix when sync full
- + Pull request 7577
- + OSD::consume_map: correctly remove pg shards which are no longer acting
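To make the listing above easier to interpret, feeding the same perl filter a single hypothetical merge line (the sha1 1a2b3c4d and the branch name wip-example are made up) shows how each merge commit becomes a textile link to its pull request, while non-merge commits become links to the individual commit:

echo '* 1a2b3c4d Merge pull request #6551 from ceph/wip-example' | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'

which prints:

* + "Pull request 6551":https://github.com/ceph/ceph/pull/6551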
Updated by Loïc Dachary about 8 years ago
fs¶
teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=3/5
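The --subset $(expr $RANDOM % 5)/5 argument schedules one randomly chosen slice out of five of the suite, so successive point-release runs sample different parts of the matrix; a minimal sketch of the arithmetic, assuming bash:

subset=$(expr $RANDOM % 5)    # $RANDOM is 0..32767, so this yields 0, 1, 2, 3 or 4
echo "--subset ${subset}/5"   # e.g. --subset 3/5, matching the INFO line above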
- fail http://pulpito.ceph.com/loic-2016-02-08_23:43:36-fs-hammer-backports---basic-multi
- new bug open returns EACCES when O_TRUNC is specified and write permission is denied 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=846d83652dca59138e64bb2d71972e1e3464299c TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/suites/pjd.sh'
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/thrash/{ceph/base.yaml ceph-thrash/default.yaml clusters/mds-1active-1standby.yaml debug/mds_client.yaml fs/btrfs.yaml msgr-failures/none.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cephfs_scrub_tests.yaml}
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/thrash/{ceph/base.yaml ceph-thrash/default.yaml clusters/mds-1active-1standby.yaml debug/mds_client.yaml fs/btrfs.yaml msgr-failures/osd-mds-delay.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cephfs_scrub_tests.yaml}
- environmental noise valgrind: mmap(...) failed in UME with error 12 (Cannot allocate memory). 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mon.b.log --time-stamp=yes --tool=memcheck --leak-check=full --show-reachable=yes ceph-mon -f -i b'
- saw valgrind issues
- new bug open returns EACCES when O_TRUNC is specified and write permission is denied 'mkdir
Re-running failed jobs
- fail http://pulpito.ceph.com/loic-2016-02-09_21:25:29-fs-hammer-backports---basic-multi/
- new bug open returns EACCES when O_TRUNC is specified and write permission is denied 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=846d83652dca59138e64bb2d71972e1e3464299c TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/suites/pjd.sh'
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/thrash/{ceph/base.yaml ceph-thrash/default.yaml clusters/mds-1active-1standby.yaml debug/mds_client.yaml fs/btrfs.yaml msgr-failures/none.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cephfs_scrub_tests.yaml}
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/thrash/{ceph/base.yaml ceph-thrash/default.yaml clusters/mds-1active-1standby.yaml debug/mds_client.yaml fs/btrfs.yaml msgr-failures/osd-mds-delay.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cephfs_scrub_tests.yaml}
- environmental noise valgrind: mmap(...) failed in UME with error 12 (Cannot allocate memory). 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mon.a.log --time-stamp=yes --tool=memcheck --leak-check=full --show-reachable=yes ceph-mon -f -i a'
- new bug open returns EACCES when O_TRUNC is specified and write permission is denied 'mkdir
Updated by Loïc Dachary about 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=2/5
- fail http://pulpito.ceph.com/loic-2016-02-08_23:45:59-rgw-hammer-backports---basic-multi
- environmental noise valgrind: mmap(...) failed in UME with error 12 (Cannot allocate memory). 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mon.c.log --time-stamp=yes --tool=memcheck --leak-check=full --show-reachable=yes ceph-mon -f -i c'
Updated by Loïc Dachary about 8 years ago
powercycle¶
teuthology-suite -l2 -v -c hammer-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
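The short options here are the same knobs spelled out in the other suites; assuming current teuthology-suite option names, an equivalent long form would be:

teuthology-suite --limit 2 --verbose --ceph hammer-backports --kernel testing --machine-type smithi --suite powercycle --priority 1000 --email loic@dachary.org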
Updated by Loïc Dachary about 8 years ago
upgrade¶
teuthology-suite --verbose --suite upgrade/hammer --suite-branch hammer --ceph hammer-backports --machine-type vps --priority 1000 machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-02-08_23:44:32-upgrade:hammer-hammer-backports---basic-vps
- known bug Assertion `nlock == 0' failed in upgrade:firefly-firefly-testing-basic-vps suite Command crashed: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid bar.client.0 --purge-data'
- environmental noise '/home/ubuntu/cephtest/archive/syslog/misc.log:2016-02-09T08:51:08.462077+00:00 vpm035 crond22226: (root) INFO (Job execution of per-minute job scheduled for 08:50 delayed into subsequent minute 08:51. Skipping job run.)' in syslog
- environmental noise '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-09T09:12:31.169339+00:00 vpm047 kernel: BUG: soft lockup - CPU#0 stuck for 22s! [khugepaged:27]' in syslog
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=9/18
- environmental noise 'sudo yum install ceph-radosgw -y'
- "rados/singleton-nomsgr/{all/11429.yaml}":http://pulpito.ceph.com/loic-2016-02-08_23:46:34-rados-hammer-backports---basic-multi/1250
- environmental noise {'smithi046.front.sepia.ceph.com': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh'}
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-02-09_21:38:35-rados-hammer-backports---basic-multi/
- environmental noise {'mira121.front.sepia.ceph.com': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh'}
Updated by Loïc Dachary about 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=0/5
- fail http://pulpito.ceph.com/loic-2016-02-08_23:50:38-rbd-hammer-backports---basic-multi
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 3'
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/hammer..hammer-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 6551
- |\
- | + client: add InodeRef.h to make dist
- | + client: use smart pointer to track 'cwd' and 'root_parents'
- | + client: convert Inode::snapdir_parent to smart pointer
- | + client: use smart pointer to track temporary inode reference
- | + client: convert CapSnap::in to smart pointer
- | + client: convert Fh::inode to smart pointer
- | + client: use smart pointers in MetaRequest
- | + client: convert Dentry::inode to smart pointer
- | + client: hold reference for returned inode
- + Pull request 6604
- |\
- | + client/Client.cc: remove only once used variable
- | + client/Client.cc: fix realloc memory leak
- | + client: added permission check based on getgrouplist
- | + configure.ac: added autoconf check for getgrouplist
- + Pull request 6754
- |\
- | + mds: fix out-of-order messages
- + Pull request 6946
- |\
- | + osd: log inconsistent shard sizes
- + Pull request 7110
- |\
- | + fix ceph-fuse writing to stale log file after log rotation
- + Pull request 7185
- |\
- | + rgw: radosgw-admin bucket check --fix not work
- + Pull request 7188
- |\
- | + rgw: Add default quota config
- + Pull request 7414
- |\
- | + rgw: warn on suspicious civetweb frontend parameters
- + Pull request 7415
- |\
- | + PG::activate(): handle unexpected cached_removed_snaps more gracefully
- + Pull request 7456
- |\
- | + qa/workunits/post-file.sh: sudo
- | + qa/workunits/post-file: pick a dir that's readable by world
- | + qa/workunits/post-file.sh: use /etc/default
- + Pull request 7475
- |\
- | + ceph-disk: use blkid instead of sgdisk -i
- + Pull request 7485
- |\
- | + librbd: ensure librados callbacks are flushed prior to destroying image
- | + librbd: simplify IO flush handling
- | + WorkQueue: PointerWQ drain no longer waits for other queues
- | + test: new librbd flatten test case
- + Pull request 7488
- |\
- | + auth: return error code from encrypt/decrypt; make error string optional
- | + auth: optimize crypto++ key context
- | + auth/Crypto: optimize libnss key
- | + auth: refactor crypto key context
- | + auth/cephx: optimize signature check
- | + auth/cephx: move signature calc into helper
- | + auth/Crypto: avoid memcpy on libnss crypto operation
- | + auth: make CryptoHandler implementations totally private
- + Pull request 7542
- |\
- | + Fixed the ceph get mdsmap assertion.
- + Pull request 7576
- |\
- | + mon: add mon_config_key prefix when sync full
- + Pull request 7577
- |\
- | + OSD::consume_map: correctly remove pg shards which are no longer acting
- + Pull request 7589
- |\
- | + rados: Add new field flags for ceph_osd_op.copy_get.
- + Pull request 7590
- |\
- | + OSDMap: reset osd_primary_affinity shared_ptr when deepish_copy_from
- + Pull request 7591
- |\
- | + rgw: support admin credentials in S3-related Keystone authentication.
- + Pull request 7645
- + os/LevelDBStore:fix bug when compact_on_mount
Updated by Loïc Dachary about 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=6/18
- fail http://pulpito.ceph.com/loic-2016-02-16_22:00:52-rados-hammer-backports---basic-multi/
- known bug SELinux denials found SELinux denials found on ubuntu@smithi022.front.sepia.ceph.com: ['type=AVC msg=audit(1455698605.259:45133): avc: denied { search } for pid=27316 comm=72733A6D61696E20513A526567 name="cephtest" dev="sda1" ino=31981750 scontext=system_u:system_r:syslogd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=dir']
- new bug hammer: CentOS 7 tcmalloc::ThreadCache valgrind error libboost_thread-mt.so.1.53 saw valgrind issues
- known bug (maybe a hammer-specific regression) osd/PG.cc: 288: FAILED assert
- known bug "test hung "joining thrashosds in hammer tests
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=1/5
- fail http://pulpito.ceph.com/loic-2016-02-16_22:05:19-rbd-hammer-backports---basic-multi
- known bug qemu/tests/qemu-iotests/077 fails 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c8e54591aa45fe42f3fd53164de8f89161696097 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/qemu-iotests.sh'
- environmental noise '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-17T08:32:19.958183+00:00 mira078 kernel: [ BUG: held lock freed! ]' in syslog
- known bug qemu/tests/qemu-iotests/077 fails 'mkdir
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
upgrade¶
teuthology-suite --verbose --suite upgrade/hammer --suite-branch hammer --ceph hammer-backports --machine-type vps --priority 1000 machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-02-16_22:06:28-upgrade:hammer-hammer-backports---basic-vps
- Command crashed: "adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid foo.client.0 --display-name 'Mr. foo.client.0' --access-key LGAKCSDLUCUHURLRWWVA --secret 3D2Jk5cfLo+E1F5695wctFMbFOBnnXUni6nIk8/nA9+5Nj1uNB/NYw== --email foo.client.0+test@test.test"
- failed to reach quorum size 3 before timeout expired
- {'vpm157.front.sepia.ceph.com': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh'}
- '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-17T07:21:23.944118+00:00 vpm157 kernel: BUG: soft lockup - CPU#0 stuck for 22s! [khugepaged:27]' in syslog
- u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=0.94.5-372-gc8e5459-1trusty rbd-fuse=0.94.5-372-gc8e5459-1trusty librbd1=0.94.5-372-gc8e5459-1trusty ceph-fuse=0.94.5-372-gc8e5459-1trusty python-ceph=0.94.5-372-gc8e5459-1trusty ceph-common=0.94.5-372-gc8e5459-1trusty libcephfs-java=0.94.5-372-gc8e5459-1trusty ceph=0.94.5-372-gc8e5459-1trusty libcephfs-jni=0.94.5-372-gc8e5459-1trusty ceph-test=0.94.5-372-gc8e5459-1trusty radosgw=0.94.5-372-gc8e5459-1trusty librados2=0.94.5-372-gc8e5459-1trusty libcephfs1=0.94.5-372-gc8e5459-1trusty'
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.3.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- {'vpm144.front.sepia.ceph.com': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh'}
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c8e54591aa45fe42f3fd53164de8f89161696097 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.4.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.4.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.3.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-02-17_22:08:35-upgrade:hammer-hammer-backports---basic-vps/
- failed to recover before timeout expired
- , 'failed': True, 'parsed': False, 'msg': 'BECOME-SUCCESS-dezgifyuzwpgapnfekttmvsuurdsaxgi\nOpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014\r\ndebug1: Reading configuration data /var/lib/teuthworker/.ssh/config\r\ndebug1: /var/lib/teuthworker/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for vpm\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: mux_client_request_session: master session id: 2\r\nTraceback (most recent call last):\n File "<stdin>", line 2246, in <module>\n File "<stdin>", line 549, in main\n File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 107, in init\n self.open(progress)\n File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 151, in open\n self._cache = apt_pkg.Cache(progress)\nSystemError: E:Problem renaming the file /var/cache/apt/pkgcache.bin.gN5lYW to /var/cache/apt/pkgcache.bin - rename (2: No such file or directory), W:You may want to run apt-get update to correct these problems\n'}}*
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c8e54591aa45fe42f3fd53164de8f89161696097 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.4.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.3.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/hammer..hammer-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 6551
- |\
- | + client: add InodeRef.h to make dist
- | + client: use smart pointer to track 'cwd' and 'root_parents'
- | + client: convert Inode::snapdir_parent to smart pointer
- | + client: use smart pointer to track temporary inode reference
- | + client: convert CapSnap::in to smart pointer
- | + client: convert Fh::inode to smart pointer
- | + client: use smart pointers in MetaRequest
- | + client: convert Dentry::inode to smart pointer
- | + client: hold reference for returned inode
- + Pull request 6604
- |\
- | + client/Client.cc: remove only once used variable
- | + client/Client.cc: fix realloc memory leak
- | + client: added permission check based on getgrouplist
- | + configure.ac: added autoconf check for getgrouplist
- + Pull request 6754
- |\
- | + mds: fix out-of-order messages
- + Pull request 6946
- |\
- | + osd: log inconsistent shard sizes
- + Pull request 7110
- |\
- | + fix ceph-fuse writing to stale log file after log rotation
- + Pull request 7185
- |\
- | + rgw: radosgw-admin bucket check --fix not work
- + Pull request 7188
- |\
- | + rgw: Add default quota config
- + Pull request 7414
- |\
- | + rgw: warn on suspicious civetweb frontend parameters
- + Pull request 7415
- |\
- | + PG::activate(): handle unexpected cached_removed_snaps more gracefully
- + Pull request 7456
- |\
- | + qa/workunits/post-file.sh: sudo
- | + qa/workunits/post-file: pick a dir that's readable by world
- | + qa/workunits/post-file.sh: use /etc/default
- + Pull request 7475
- |\
- | + ceph-disk: use blkid instead of sgdisk -i
- + Pull request 7485
- |\
- | + librbd: ensure librados callbacks are flushed prior to destroying image
- | + librbd: simplify IO flush handling
- | + WorkQueue: PointerWQ drain no longer waits for other queues
- | + test: new librbd flatten test case
- + Pull request 7488
- |\
- | + auth: return error code from encrypt/decrypt; make error string optional
- | + auth: optimize crypto++ key context
- | + auth/Crypto: optimize libnss key
- | + auth: refactor crypto key context
- | + auth/cephx: optimize signature check
- | + auth/cephx: move signature calc into helper
- | + auth/Crypto: avoid memcpy on libnss crypto operation
- | + auth: make CryptoHandler implementations totally private
- + Pull request 7542
- |\
- | + Fixed the ceph get mdsmap assertion.
- + Pull request 7576
- |\
- | + mon: add mon_config_key prefix when sync full
- + Pull request 7577
- |\
- | + OSD::consume_map: correctly remove pg shards which are no longer acting
- + Pull request 7589
- |\
- | + rados: Add new field flags for ceph_osd_op.copy_get.
- + Pull request 7590
- |\
- | + OSDMap: reset osd_primary_affinity shared_ptr when deepish_copy_from
- + Pull request 7591
- |\
- | + rgw: support admin credentials in S3-related Keystone authentication.
- + Pull request 7645
- |\
- | + os/LevelDBStore:fix bug when compact_on_mount
- + Pull request 7648
- |\
- | + mon/LogMonitor: use the configured facility if log to syslog
- + Pull request 7656
- |\
- | + ceph.in: Notify user that 'tell' can't be used in interactive mode
- + Pull request 7671
- |\
- | + global: do not start two daemons with a single pid-file
- | + test: add unitest test_pidfile.sh
- | + global/pidfile: do not start two daemons with a single pid-file
- + Pull request 7672
- + common/bit_vector: use hard-coded value for block size
Updated by Loïc Dachary about 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=14/18
- fail http://pulpito.ceph.com/loic-2016-02-22_22:00:52-rados-hammer-backports---basic-multi/
- timed out waiting for admin_socket to appear after osd.5 restart
- 'sudo yum install ceph-radosgw -y'
- {'smithi005.front.sepia.ceph.com': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh'}
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
upgrade¶
teuthology-suite --verbose --suite upgrade/hammer --suite-branch hammer --ceph hammer-backports --machine-type vps --priority 1000 machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-02-22_22:03:49-upgrade:hammer-hammer-backports---basic-vps
- Fuse mount failed to populate /sys/ after 31 seconds
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
- '/home/ubuntu/cephtest/archive/syslog/misc.log:2016-02-23T07:22:57.644166+00:00 vpm156 crond17936: (root) INFO (Job execution of per-minute job scheduled for 07:20 delayed into subsequent minute 07:22. Skipping job run.)' in syslog
- '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-23T07:15:47.684866+00:00 vpm154 kernel: BUG: soft lockup - CPU#0 stuck for 22s! [ceph-mon:21650]' in syslog
- u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=0.94.4-1trusty ceph-mds=0.94.4-1trusty ceph-common=0.94.4-1trusty ceph-fuse=0.94.4-1trusty ceph-test=0.94.4-1trusty radosgw=0.94.4-1trusty python-ceph=0.94.4-1trusty libcephfs1=0.94.4-1trusty libcephfs-java=0.94.4-1trusty libcephfs-jni=0.94.4-1trusty librados2=0.94.4-1trusty librbd1=0.94.4-1trusty rbd-fuse=0.94.4-1trusty librados2=0.94.4-1trusty librbd1=0.94.4-1trusty'
- '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-23T07:17:55.571562+00:00 vpm155 kernel: INFO: rcu_sched detected stalls on CPUs/tasks: {} (detected by 0, t=60003 jiffies, g=49151, c=49150, q=0)' in syslog
- Fuse mount failed to populate /sys/ after 31 seconds
Re-running failed tests
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=0/5
- fail http://pulpito.ceph.com/loic-2016-02-22_22:05:04-rgw-hammer-backports---basic-multi
- environmental noise (possibly linked to http://tracker.ceph.com/issues/13750#note-17 ?) HTTPConnectionPool(host='smithi053.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f89cd7f5cd0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Re-running the failed test:
Updated by Loïc Dachary about 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=1/5
Updated by Loïc Dachary about 8 years ago
powercycle¶
teuthology-suite -l2 -v -c hammer-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
Updated by Loïc Dachary about 8 years ago
fs¶
teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=2/5
- fail http://pulpito.ceph.com/loic-2016-02-22_22:08:47-fs-hammer-backports---basic-multi
- Test failure: test_client_release_bug (tasks.mds_client_limits.TestClientLimits)
- new bug open returns EACCES when O_TRUNC is specified and write permission is denied 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bd7a6b6b61b049efa4b08ac0443afa01100d5168 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/suites/pjd.sh'
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/thrash/{ceph/base.yaml ceph-thrash/default.yaml clusters/mds-1active-1standby.yaml debug/mds_client.yaml fs/btrfs.yaml msgr-failures/none.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cephfs_scrub_tests.yaml}
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/no.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/thrash/{ceph/base.yaml ceph-thrash/default.yaml clusters/mds-1active-1standby.yaml debug/mds_client.yaml fs/btrfs.yaml msgr-failures/osd-mds-delay.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_pjd.yaml}
- fs/basic/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml inline/yes.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cephfs_scrub_tests.yaml}
- saw valgrind issues
- fs/verify/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/libcephfs_interface_tests.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_dbench.yaml validater/valgrind.yaml}
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/hammer..hammer-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Merge 6551: hammer: client inoderef
- |\
- | + client: add InodeRef.h to make dist
- | + client: use smart pointer to track 'cwd' and 'root_parents'
- | + client: convert Inode::snapdir_parent to smart pointer
- | + client: use smart pointer to track temporary inode reference
- | + client: convert CapSnap::in to smart pointer
- | + client: convert Fh::inode to smart pointer
- | + client: use smart pointers in MetaRequest
- | + client: convert Dentry::inode to smart pointer
- | + client: hold reference for returned inode
- + Merge 6604: hammer: client: added permission check based on getgrouplist
- |\
- | + client: use fuse_req_getgroups() to get group list
- | + client: use thread local data to track fuse request
- | + client/Client.cc: remove only once used variable
- | + client/Client.cc: fix realloc memory leak
- | + client: added permission check based on getgrouplist
- | + configure.ac: added autoconf check for getgrouplist
- + Merge 6754: hammer: mds: fix out-of-order messages
- |\
- | + mds: fix out-of-order messages
- + Merge 7542: hammer: Wrong ceph get mdsmap assertion
- |\
- | + Fixed the ceph get mdsmap assertion.
- + Merge 7589: hammer: rados cppool omap to ec pool crashes osd
- |\
- | + rados: Add new field flags for ceph_osd_op.copy_get.
- + Merge 7591: hammer: rgw: S3 authentication subsystem should be able to use admin credentials for accessing Keystone
- |\
- | + rgw: support admin credentials in S3-related Keystone authentication.
- + Merge 7671: hammer: global/pidfile: do not start two daemons with a single pid-file
- |\
- | + global: do not start two daemons with a single pid-file
- | + global/pidfile: do not start two daemons with a single pid-file
- + Merge 7672: hammer: test_bit_vector.cc uses magic numbers against #defines that vary
- |\
- | + common/bit_vector: use hard-coded value for block size
- + Merge 7797: hammer: ceph init script unconditionally sources /lib/lsb/init-functions
- + init-ceph: check if /lib/lsb/init-functions exists
Updated by Loïc Dachary about 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=?????/18
- fail http://pulpito.ceph.com/loic-2016-02-28_21:51:29-rados-hammer-backports---basic-multi
- environmental noise 'sudo yum install ceph-radosgw -y'
- environmental noise 'yes | sudo mkfs.btrfs -m single -l 32768 -n 32768 -f /dev/sdd'
- rados/singleton/{all/osd-recovery-incomplete.yaml fs/btrfs.yaml msgr-failures/many.yaml}
2016-02-29T01:25:09.166 INFO:tasks.ceph:['mkfs.btrfs', '-m', 'single', '-l', '32768', '-n', '32768'] on /dev/sdd on ubuntu@mira058.front.sepia.ceph.com
2016-02-29T01:25:09.167 INFO:teuthology.orchestra.run.mira058:Running: 'yes | sudo mkfs.btrfs -m single -l 32768 -n 32768 /dev/sdd'
2016-02-29T01:25:09.268 INFO:teuthology.orchestra.run.mira058.stderr:/dev/sdd appears to contain an existing filesystem (ddf_raid_member).
2016-02-29T01:25:09.268 INFO:teuthology.orchestra.run.mira058.stderr:Error: Use the -f option to force overwrite.
2016-02-29T01:25:09.269 INFO:tasks.ceph:['mkfs.btrfs', '-m', 'single', '-l', '32768', '-n', '32768', '-f'] on /dev/sdd on ubuntu@mira058.front.sepia.ceph.com
2016-02-29T01:25:09.269 INFO:teuthology.orchestra.run.mira058:Running: 'yes | sudo mkfs.btrfs -m single -l 32768 -n 32768 -f /dev/sdd'
2016-02-29T01:25:09.376 INFO:teuthology.orchestra.run.mira058.stderr:Error: unable to open /dev/sdd: Device or resource busy
- rados/singleton/{all/osd-recovery-incomplete.yaml fs/btrfs.yaml msgr-failures/many.yaml}
- new bug [ FAILED ] LibRadosAioEC.RoundTripSparseReadPP 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=13dc6a4c97fa336bbc0ac5844954066a393a0527 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
- new bug hammer: CentOS 7 tcmalloc::ThreadCache valgrind error libboost_thread-mt.so.1.53 saw valgrind issues
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
upgrade¶
teuthology-suite --verbose --suite upgrade/hammer --suite-branch hammer --ceph hammer-backports --machine-type vps --priority 1000 --email loic@dachary.org machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-02-28_21:54:57-upgrade:hammer-hammer-backports---basic-vps
- Host key for server vpm195.front.sepia.ceph.com does not match!
- 'sudo yum install rbd-fuse -y'
- {'vpm113.front.sepia.ceph.com': {'cmd': ['yum-complete-transaction', '--cleanup-only'], 'end': '2016-02-29 18:18:16.695609', 'stdout': 'Loaded plugins: fastestmirror\nhttp://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"\nTrying other mirror.\nfailure: repodata/repomd.xml from centos7-fcgi-ceph: [Errno 256] No more mirrors to try.\nhttp://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"', 'changed': False, 'rc': 1, 'start': '2016-02-29 18:18:16.221090', 'stderr': '', 'delta': '0:00:00.474519', 'invocation': {'module_name': 'command', 'module_args': 'yum-complete-transaction --cleanup-only'}, 'stdout_lines': ['Loaded plugins: fastestmirror', 'http://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"', 'Trying other mirror.', 'failure: repodata/repomd.xml from centos7-fcgi-ceph: [Errno 256] No more mirrors to try.', 'http://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"'], 'warnings': []}, 'vpm145.front.sepia.ceph.com': {'cmd': ['yum-complete-transaction', '--cleanup-only'], 'end': '2016-02-29 18:18:17.923544', 'stdout': 'Loaded plugins: fastestmirror\nhttp://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"\nTrying other mirror.\nfailure: repodata/repomd.xml from centos7-fcgi-ceph: [Errno 256] No more mirrors to try.\nhttp://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"', 'changed': False, 'rc': 1, 'start': '2016-02-29 18:18:17.340089', 'stderr': '', 'delta': '0:00:00.583455', 'invocation': {'module_name': 'command', 'module_args': 'yum-complete-transaction --cleanup-only'}, 'stdout_lines': ['Loaded plugins: fastestmirror', 'http://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"', 'Trying other mirror.', 'failure: repodata/repomd.xml from centos7-fcgi-ceph: [Errno 256] No more mirrors to try.', 'http://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"'], 'warnings': []}, 'vpm088.front.sepia.ceph.com': {'cmd': ['yum-complete-transaction', '--cleanup-only'], 'end': '2016-02-29 18:18:18.513707', 'stdout': 'Loaded plugins: fastestmirror\nhttp://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"\nTrying other mirror.\nfailure: repodata/repomd.xml from centos7-fcgi-ceph: [Errno 256] No more mirrors to try.\nhttp://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could 
not resolve host: gitbuilder.ceph.com; No address associated with hostname"', 'changed': False, 'rc': 1, 'start': '2016-02-29 18:18:18.041508', 'stderr': '', 'delta': '0:00:00.472199', 'invocation': {'module_name': 'command', 'module_args': 'yum-complete-transaction --cleanup-only'}, 'stdout_lines': ['Loaded plugins: fastestmirror', 'http://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"', 'Trying other mirror.', 'failure: repodata/repomd.xml from centos7-fcgi-ceph: [Errno 256] No more mirrors to try.', 'http://gitbuilder.ceph.com/mod_fastcgi-rpm-centos7-x86_64-basic/ref/master/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: gitbuilder.ceph.com; No address associated with hostname"'], 'warnings': []}}
- HTTPConnectionPool(host='gitbuilder.ceph.com', port=80): Max retries exceeded with url: /ceph-deb-trusty-x86_64-basic/sha1/13dc6a4c97fa336bbc0ac5844954066a393a0527/version (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fae533fff90>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
- HTTPConnectionPool(host='gitbuilder.ceph.com', port=80): Max retries exceeded with url: /ceph-deb-trusty-x86_64-basic/ref/v0.94/version (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f1dc0fd9dd0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
- Fuse mount failed to populate /sys/ after 31 seconds
- HTTPConnectionPool(host='gitbuilder.ceph.com', port=80): Max retries exceeded with url: /ceph-deb-trusty-x86_64-basic/ref/v0.94.6/version (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f010c16c890>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
- 'NoneType' object is not iterable
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.6.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
- 'sudo yum install ceph-fuse -y'
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-03-01_20:40:27-upgrade:hammer-hammer-backports---basic-vps/
- 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove'
- Fuse mount failed to populate /sys/ after 31 seconds
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.3.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
Updated by Loïc Dachary about 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --distro ubuntu --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=????/5
- fail http://pulpito.ceph.com/loic-2016-02-28_22:00:18-rgw-hammer-backports---basic-multi
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 1'
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=????/5
- pass http://pulpito.ceph.com/loic-2016-02-28_21:58:39-rbd-hammer-backports---basic-multi
- bug was fixed cram 0.7 regression CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram \-v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t
Updated by Loïc Dachary about 8 years ago
powercycle¶
teuthology-suite -l2 -v -c hammer-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
Updated by Loïc Dachary about 8 years ago
fs¶
teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=0/5
- fail http://pulpito.ceph.com/loic-2016-02-28_22:01:06-fs-hammer-backports---basic-multi
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 3'
- , 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'qemu-system-x86\' \'libfcgi\' \'dmapi\' \'tgt\'\' failed: E: There are problems and -y was used without --force-yes\n'}, 'mira032.front.sepia.ceph.com': {'item': 'libgoogle-perftools4,libboost-thread1.54.0,mpich,qemu-system-x86,lttng-tools,libfcgi0ldbl,python-virtualenv,python-dev,libevent-dev,perl,libwww-perl,lsb-release,build-essential,sysstat,gdb,python-configobj,python-gevent,libedit2,libssl0.9.8,cryptsetup-bin,xfsprogs,gdisk,parted,smbios-utils,libcrypto++9,libuuid1,libfcgi,btrfs-tools,libatomic-ops-dev,git-core,attr,dbench,bonnie++,iozone3,valgrind,python-nose,mpich2,libmpich2-dev,ant,libtool,automake,gettext,uuid-dev,libacl1-dev,bc,xfsdump,dmapi,xfslibs-dev,libattr1-dev,quota,libcap2-bin,libncurses5-dev,lvm2,sysprof,vim,pdsh,collectl,blktrace,python-numpy,python-matplotlib,genisoimage,libjson-xs-perl,xml-twig-tools,default-jdk,junit4,tgt,open-iscsi,smartmontools,nagios-nrpe-server,cifs-utils,ipcalc,nfs-common,nfs-kernel-server', 'failed': True, 'stderr': 'E: There are problems and -y was used without --force-yes\n', 'stdout': 'Reading package lists...\nBuilding dependency tree...\nReading state information...\nlibfcgi0ldbl is already the newest version.\nlibdm0 is already the newest version.\nThe following extra packages will be installed:\n librados2 librbd1 qemu-utils\nSuggested packages:\n samba vde2 sgabios kvm-ipxe-precise debootstrap\nThe following NEW packages will be installed:\n librados2 librbd1 qemu-system-x86 qemu-utils tgt\n0 upgraded, 5 newly installed, 0 to remove and 103 not upgraded.\nNeed to get 0 B/4313 kB of archives.\nAfter this operation, 22.6 MB of additional disk space will be used.\nWARNING: The following packages cannot be authenticated!\n librados2 librbd1 tgt\n', 'invocation': {'module_name': 'apt', 'module_args': ''}, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'qemu-system-x86\' \'libfcgi\' \'dmapi\' \'tgt\'\' failed: E: There are problems and -y was used without --force-yes\n'}}
- , 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'qemu-system-x86\' \'libfcgi\' \'dmapi\' \'tgt\'\' failed: E: There are problems and -y was used without --force-yes\n'}, 'mira108.front.sepia.ceph.com': {'item': 'libgoogle-perftools4,libboost-thread1.54.0,mpich,qemu-system-x86,lttng-tools,libfcgi0ldbl,python-virtualenv,python-dev,libevent-dev,perl,libwww-perl,lsb-release,build-essential,sysstat,gdb,python-configobj,python-gevent,libedit2,libssl0.9.8,cryptsetup-bin,xfsprogs,gdisk,parted,smbios-utils,libcrypto++9,libuuid1,libfcgi,btrfs-tools,libatomic-ops-dev,git-core,attr,dbench,bonnie++,iozone3,valgrind,python-nose,mpich2,libmpich2-dev,ant,libtool,automake,gettext,uuid-dev,libacl1-dev,bc,xfsdump,dmapi,xfslibs-dev,libattr1-dev,quota,libcap2-bin,libncurses5-dev,lvm2,sysprof,vim,pdsh,collectl,blktrace,python-numpy,python-matplotlib,genisoimage,libjson-xs-perl,xml-twig-tools,default-jdk,junit4,tgt,open-iscsi,smartmontools,nagios-nrpe-server,cifs-utils,ipcalc,nfs-common,nfs-kernel-server', 'failed': True, 'stderr': 'E: There are problems and -y was used without --force-yes\n', 'stdout': 'Reading package lists...\nBuilding dependency tree...\nReading state information...\nlibfcgi0ldbl is already the newest version.\nlibdm0 is already the newest version.\nThe following extra packages will be installed:\n cpu-checker ipxe-qemu libbluetooth3 libbrlapi0.6 libcaca0\n libconfig-general-perl libfdt1 libibverbs1 librados2 librbd1 librdmacm1\n libsdl1.2debian libseccomp2 libsgutils2-2 libspice-server1\n libusbredirparser1 libxen-4.4 libxenstore3.0 libyajl2 msr-tools qemu-keymaps\n qemu-system-common qemu-utils seabios sg3-utils sharutils\nSuggested packages:\n samba vde2 sgabios kvm-ipxe-precise debootstrap\nThe following NEW packages will be installed:\n cpu-checker ipxe-qemu libbluetooth3 libbrlapi0.6 libcaca0\n libconfig-general-perl libfdt1 libibverbs1 librados2 librbd1 librdmacm1\n libsdl1.2debian libseccomp2 libsgutils2-2 libspice-server1\n libusbredirparser1 libxen-4.4 libxenstore3.0 libyajl2 msr-tools qemu-keymaps\n qemu-system-common qemu-system-x86 qemu-utils seabios sg3-utils sharutils\n tgt\n0 upgraded, 28 newly installed, 0 to remove and 102 not upgraded.\nNeed to get 0 B/7120 kB of archives.\nAfter this operation, 33.5 MB of additional disk space will be used.\nWARNING: The following packages cannot be authenticated!\n libbluetooth3 librados2 librbd1 libsdl1.2debian libusbredirparser1\n libibverbs1 ipxe-qemu seabios tgt\n', 'invocation': {'module_name': 'apt', 'module_args': ''}, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'qemu-system-x86\' \'libfcgi\' \'dmapi\' \'tgt\'\' failed: E: There are problems and -y was used without --force-yes\n'}}
Re-running failed tests
Trying on hammer
teuthology-suite --priority 1000 --suite fs --filter="fs/recovery/{clusters/2-remote-clients.yaml debug/mds_client.yaml mounts/ceph-fuse.yaml tasks/mds-full.yaml}" --suite-branch hammer --email loic@dachary.org --ceph hammer --machine-type smithi,mira
teuthology-suite --priority 1000 --suite fs --filter="fs/recovery/{clusters/2-remote-clients.yaml debug/mds_client.yaml mounts/ceph-fuse.yaml tasks/mds-full.yaml}" --suite-branch hammer --email loic@dachary.org --ceph v0.94.6 --machine-type smithi,mira
The same test failed February 8th, 2016 http://pulpito.ceph.com/loic-2016-02-08_23:43:36-fs-hammer-backports---basic-multi/1087/
The same test passed January 29th, 2016 http://pulpito.ceph.com/loic-2016-01-29_03:02:05-fs-hammer---basic-multi/49423/
git log --merges --since 2016-01-29 --until 2016-02-07 --format='%H' ceph/hammer | \
  while read sha1 ; do \
    echo ; git log --format='** %aD "%s":https://github.com/ceph/ceph/commit/%H' ${sha1}^1..${sha1} ; \
  done | perl -p -e 'print "* \"PR $1\":https://github.com/ceph/ceph/pull/$1\n" if(/Merge pull request #(\d+)/)'
- PR 7524
- Fri, 5 Feb 2016 21:10:46 -0500 Merge pull request #7524 from ktdreyer/wip-14637-hammer-man-radosgw-admin-orphans
- Wed, 3 Feb 2016 19:51:58 -0700 doc: add orphans commands to radosgw-admin
- Thu, 4 Feb 2016 11:04:39 -0700 man: rebuild manpages
- PR 7526
- Fri, 5 Feb 2016 10:30:22 +0100 Merge pull request #7526 from ceph/wip-14516-hammer
- Mon, 1 Feb 2016 16:33:55 -0800 rgw-admin: document orphans commands in usage
- PR 7441
- Fri, 5 Feb 2016 12:47:33 +0700 Merge pull request #7441 from odivlad/backport-pr-14569
- Thu, 9 Jul 2015 16:56:07 +0800 rgw: Make RGW_MAX_PUT_SIZE configurable
- PR 7442
- Fri, 5 Feb 2016 12:46:54 +0700 Merge pull request #7442 from odivlad/backport-pr-14570
- Mon, 21 Sep 2015 20:32:29 +0200 rgw: fix wrong etag calculation during POST on S3 bucket.
- PR 7454
- Wed, 3 Feb 2016 12:41:56 +0700 Merge pull request #7454 from dachary/wip-14584-hammer
- Tue, 18 Aug 2015 15:22:55 +0800 qa/fsstress.sh: fix 'cp not writing through dangling symlink'
- PR 6918
- Wed, 3 Feb 2016 11:38:57 +0700 Merge pull request #6918 from asheplyakov/hammer-bug-12449
- Wed, 16 Dec 2015 15:31:52 +0300 Check for full before changing the cached obc
- PR 7236
- Sat, 30 Jan 2016 21:42:29 -0500 Merge pull request #7236 from athanatos/wip-14376
- Thu, 14 Jan 2016 08:35:23 -0800 config_opts: increase suicide timeout to 300 to match recovery
- PR 6450
- Sat, 30 Jan 2016 21:42:12 -0500 Merge pull request #6450 from dachary/wip-13672-hammer
- Tue, 3 Nov 2015 00:21:51 +0100 tests: test/librados/test.cc must create profile
- Mon, 2 Nov 2015 20:24:51 +0100 tests: destroy testprofile before creating one
- Mon, 2 Nov 2015 20:23:52 +0100 tests: add destroy_ec_profile{,_pp} helpers
- PR 6680
- Sat, 30 Jan 2016 21:41:39 -0500 Merge pull request #6680 from SUSE/wip-13859-hammer
- Thu, 3 Sep 2015 20:30:50 +0200 ceph.spec.in: fix License line
- PR 6791
- Sat, 30 Jan 2016 21:41:18 -0500 Merge pull request #6791 from branch-predictor/bp-5812-backport
- Mon, 6 Jul 2015 09:56:11 +0200 tools: fix race condition in seq/rand bench
- Wed, 20 May 2015 12:41:22 +0200 tools: add --no-verify option to rados bench
- PR 6973
- Sat, 30 Jan 2016 21:40:38 -0500 Merge pull request #6973 from dreamhost/wip-configure-hammer
- Tue, 5 May 2015 15:07:33 +0800 configure.ac: no use to add "+" before ac_ext=c
- PR 7206
- Sat, 30 Jan 2016 21:40:13 -0500 Merge pull request #7206 from dzafman/wip-14292
- Mon, 15 Jun 2015 17:55:41 -0700 ceph_osd: Add required feature bits related to this branch to osd_required mask
- Thu, 4 Jun 2015 18:47:42 -0700 osd: CEPH_FEATURE_CHUNKY_SCRUB feature now required
- PR 7207
- Sat, 30 Jan 2016 21:39:42 -0500 Merge pull request #7207 from rldleblanc/recency_fix_for_hammer
- Wed, 25 Nov 2015 14:40:26 -0500 osd: recency should look at newest (not oldest) hitsets
- Wed, 25 Nov 2015 14:39:08 -0500 osd/ReplicatedPG: fix promotion recency logic
- PR 7347
- Sat, 30 Jan 2016 21:39:11 -0500 Merge pull request #7347 from tchaikov/wip-hammer-10093
- Tue, 21 Apr 2015 14:04:40 +0800 tools: ceph-monstore-tool must do out_store.close()
- PR 7411
- Sat, 30 Jan 2016 21:38:35 -0500 Merge pull request #7411 from dachary/wip-14467-hammer
- Mon, 18 Jan 2016 08:24:46 -0700 osd: disable filestore_xfs_extsize by default
- PR 7412
- Sat, 30 Jan 2016 21:38:13 -0500 Merge pull request #7412 from dachary/wip-14470-hammer
- Tue, 5 Jan 2016 14:34:05 +0800 tools: monstore: add 'show-versions' command.
- Wed, 16 Sep 2015 18:28:52 +0800 tools: ceph_monstore_tool: add inflate-pgmap command
- Tue, 20 Oct 2015 15:23:49 +0800 tools:support printing the crushmap in readable fashion.
- Mon, 14 Sep 2015 19:50:47 +0800 tools:print the map infomation in human readable format.
- Mon, 14 Sep 2015 19:19:05 +0800 tools:remove the local file when get map failed.
- Mon, 13 Jul 2015 12:35:13 +0100 tools: ceph_monstore_tool: describe behavior of rewrite command
- Fri, 19 Jun 2015 22:57:57 +0800 tools/ceph-monstore-tools: add rewrite command
- Tue, 21 Apr 2015 14:04:40 +0800 tools: ceph-monstore-tool must do out_store.close()
- PR 7446
- Sat, 30 Jan 2016 21:37:46 -0500 Merge pull request #7446 from liewegas/wip-14537-hammer
- Thu, 28 Jan 2016 02:09:53 -0800 mon: compact full epochs also
- PR 7182
- Sat, 30 Jan 2016 11:45:31 -0800 Merge pull request #7182 from dachary/wip-14143-hammer
- Mon, 14 Dec 2015 17:41:49 -0500 librbd: optionally validate RBD pool configuration
- PR 7183
- Sat, 30 Jan 2016 11:45:20 -0800 Merge pull request #7183 from dachary/wip-14283-hammer
- Tue, 18 Aug 2015 16:05:29 -0400 rbd: fix bench-write
- PR 7416
- Sat, 30 Jan 2016 11:45:05 -0800 Merge pull request #7416 from dachary/wip-14466-hammer
- Thu, 21 Jan 2016 13:45:42 +0200 rbd-replay: handle EOF gracefully
- PR 7417
- Sat, 30 Jan 2016 11:44:50 -0800 Merge pull request #7417 from dachary/wip-14553-hammer
- Fri, 22 Jan 2016 11:18:40 -0800 rbd: remove canceled tasks from timer thread
- PR 7407
- Sat, 30 Jan 2016 11:44:32 -0800 Merge pull request #7407 from dillaman/wip-14543-hammer
- Thu, 28 Jan 2016 14:38:20 -0500 librbd: ImageWatcher shouldn't block the notification thread
- Thu, 28 Jan 2016 14:35:54 -0500 librados_test_stub: watch/notify now behaves similar to librados
- Thu, 28 Jan 2016 12:40:18 -0500 tests: simulate writeback flush during snap create
- PR 6980
- Sat, 30 Jan 2016 11:44:12 -0800 Merge pull request #6980 from dillaman/wip-14063-hammer
- Fri, 18 Dec 2015 15:22:13 -0500 librbd: fix merge-diff for >2GB diff-files
- PR 6353
- Fri, 29 Jan 2016 23:31:47 +0700 Merge pull request #6353 from theanalyst/wip-13513-hammer
- Wed, 19 Aug 2015 20:32:39 +0200 rgw: url_decode values from X-Object-Manifest during GET on Swift DLO.
- PR 6620
- Fri, 29 Jan 2016 23:31:16 +0700 Merge pull request #6620 from SUSE/wip-13820-hammer
- Wed, 23 Sep 2015 09:49:36 -0500 rgw: fix modification to index attrs when setting acls
- PR 7186
- Fri, 29 Jan 2016 23:30:57 +0700 Merge pull request #7186 from dachary/wip-13888-hammer
- Thu, 19 Nov 2015 13:38:40 +0300 Fixing NULL pointer dereference
- PR 5789
- Fri, 29 Jan 2016 08:52:51 -0500 Merge pull request #5789 from SUSE/wip-12928-hammer
- Mon, 1 Jun 2015 15:57:03 +0200 ceph.spec.in summary-ended-with-dot
- Mon, 1 Jun 2015 14:58:31 +0200 ceph.spec.in libcephfs_jni1 has no %post and %postun
- PR 7434
- Fri, 29 Jan 2016 08:50:56 -0500 Merge pull request #7434 from tchaikov/wip-14441-hammer
- Wed, 23 Dec 2015 11:23:38 +0800 man: document listwatchers cmd in "rados" manpage
The most likely candidate for this regression is: https://github.com/ceph/ceph/pull/6918 hammer: osd: check for full before changing the cached obc (hammer)
Trying on v0.94.5
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/hammer..hammer-backports-loic | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
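For clarity, the one-liner above rewrites each line of the git graph into a Redmine/textile link: merge commits become pull request links and everything else becomes a commit link. A minimal sketch of the transformation, using a made-up hash, pull request number and branch name rather than anything from this backport branch:
echo '* 1234567890ab Merge pull request #9999 from example/wip-example' | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
prints
* + "Pull request 9999":https://github.com/ceph/ceph/pull/9999
which textile renders as the nested "Pull request" bullets below.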
- + Pull request 6551
- |\
- | + client: add InodeRef.h to make dist
- | + client: use smart pointer to track 'cwd' and 'root_parents'
- | + client: convert Inode::snapdir_parent to smart pointer
- | + client: use smart pointer to track temporary inode reference
- | + client: convert CapSnap::in to smart pointer
- | + client: convert Fh::inode to smart pointer
- | + client: use smart pointers in MetaRequest
- | + client: convert Dentry::inode to smart pointer
- | + client: hold reference for returned inode
- + Pull request 6604
- |\
- | + client: use fuse_req_getgroups() to get group list
- | + client: use thread local data to track fuse request
- | + client/Client.cc: remove only once used variable
- | + client/Client.cc: fix realloc memory leak
- | + client: added permission check based on getgrouplist
- | + configure.ac: added autoconf check for getgrouplist
- + Pull request 6754
- |\
- | + mds: fix out-of-order messages
- + Pull request 7542
- |\
- | + Fixed the ceph get mdsmap assertion.
- + Pull request 7589
- |\
- | + rados: Add new field flags for ceph_osd_op.copy_get.
- + Pull request 7591
- |\
- | + rgw: support admin credentials in S3-related Keystone authentication.
- + Pull request 7671
- |\
- | + global: do not start two daemons with a single pid-file
- | + global/pidfile: do not start two daemons with a single pid-file
- + Pull request 7672
- |\
- | + common/bit_vector: use hard-coded value for block size
- + Pull request 7702
- |\
- | + Merge backport pull request #6545 into wip-14077-hammer
- | |\
- | | + ceph-objectstore-tool: Add dry-run checking to ops missing it
- | | + test: Remove redundant test output
- | | + test: Verify replicated PG beyond just data after vstart
- | | + test: Fix verify() used after import to also check xattr and omap
- | | + test: Add test cases for xattr and omap ceph-objectstore-tool operations
- | | + rados: Minor output changes for consistency across operations
- | + | Merge backport pull request #5783 into wip-14077-hammer
- | |\ \
- | | |/
- | | + test: osd-scrub-snaps.sh uses ceph-helpers.sh and added to make check
- | | + osd: Use boost::optional instead of snap 0 for all_clones
- | | + osd, test: When head missing a snapset, clones not an error
- | | + osd, test: Keep missing count and log number of missing clones
- | | + test: Eliminate check for bogus obj13/head snaps empty error
- | | + ceph-objectstore-tool: Add new remove-clone-metadata object op
- | | + osd: Fix trim_object() to not crash on corrupt snapset
- | | + ceph-objectstore-tool: Improve object spec error handling
- | | + ceph-objectstore-tool: Add undocumented clear-snapset command for testing
- | | + ceph-objectstore-tool: Add set-size command for objects
- | | + ceph-objectstore-tool: Enhanced dump command replaces dump-info
- | | + test: Add some clones to ceph-objectstore-tool test
- | | + ceph-objectstore-tool: For corrupt objectstores, don't abort listing on errors
- | | + ceph-objectstore-tool: Improve some error messages
- | | + ceph-objectstore-tool: White space fixes
- | | + tools/rados: Improve xattr import handling so future internal xattrs ignored
- | | + test: Test scrubbing of snapshot problems
- | | + osd: Don't crash if OI_ATTR attribute is missing or corrupt
- | | + osd: Additional _scrub() check for snapset inconsistency
- | | + osd: Better SnapSet scrub checking
- | | + osd: Make the _scrub routine produce good output and detect errors properly
- | | + osd: Fix log message name of ceph-objectstore-tool
- | + | Merge backport pull request #5031 into wip-14077-hammer
- | |\ \
- | | |/
- | | + ceph-objectstore-tool: add mark-complete operation
- | + | Merge backport pull request #5842 into wip-14077-hammer
- | |\ \
- | | |/
- | | + test: Fix failure test to find message anywhere in stderr
- | | + rados: Fix usage for notify command
- | + | Merge backport pull request #5127 into wip-14077-hammer
- | |\ \
- | | |/
- | | + test: add test for {get,set}-inc-osdmap commands.
- | | + test: add test for {get,set}-osdmap commands
- | | + tools/ceph-objectstore-tool: add get-inc-osdmap command
- | | + tools/ceph-objectstore-tool: add set-inc-osdmap command
- | | + tools/ceph-objectstore-tool: add get-osdmap command
- | | + tools/ceph-objectstore-tool: add set-osdmap command
- | + | Merge backport 6 commits from pull request #5197 into wip-14077-hammer
- | |\ \
- | | |/
- | | + test: Add debug argument to the ceph-objectstore-tool test
- | | + tools, test: Some ceph-objectstore-tool error handling fixes
- | | + tools: Check for valid --op earlier so we can get a better error message
- | | + tools: Fix newlines in output of --op list
- | | + tools: Fix dump-super which doesn't require pgid
- | | + tools: Check and specify commands that require the pgid specification
- | + | Merge backport branch 'wip-temp' into wip-14077-hammer
- | |\ \
- | | |/
- | | + hobject_t: modify operator<<
- | | + osd, tools: Always filter temp objects since not being exported
- | | + tools: Don't export temporary objects until we have persistent-temp objects
- | + | Merge backport pull request #4932 into wip-14077-hammer
- | |\ \
- | | |/
- | | + test, tools: Improve ceph-objectstore-tool import error handling and add tests
- | + | Merge backport pull request #4915 into wip-14077-hammer
- | |\ \
- | | |/
- | | + tools: For ec pools list objects in all shards if the pgid doesn't specify
- | + | Merge backport 1 commit from pull request #4863 into wip-14077-hammer
- | |\ \
- | | |/
- | | + tools: clean up errors in ceph-objectstore-tool
- | + | Merge backport 8 commits from pull request #4784 into wip-14077-hammer
- | |\ \
- | | |/
- | | + test/ceph-objectstore-tool: Don't need stderr noise
- | | + test/ceph-objectstore-tool: Show command that should have failed
- | | + test/ceph_objectstore_tool: Improve dump-journal testing
- | | + ceph-objectstore-tool: Allow --pgid specified on import
- | | + ceph-objectstore-tool: Invalidate pg stats when objects were skipped during pg import
- | | + ceph-objectstore-tool: Add dump-super to show OSDSuperblock in format specified
- | | + mds, include: Fix dump() numeric char array to include additional alpha chars
- | | + ceph-objectstore-tool: Add dump-journal as not requiring --pgid in usage
- | + | Merge backport 41 commits from pull request #4473 into wip-14077-hammer
- | |\ \
- | | |/
- | | + osd: Show number of divergent_priors in log message
- | | + test: Add config changes to all tests to avoid order dependency
- | | + test: ceph_test_filejournal: Conform to test infrastructure requirements
- | | + test: ceph_test_filejournal need to force aio because testing with a file
- | | + test: ceph_test_filejournal fix missing argument to FileJournal constructor
- | | + test: ceph_test_filejournal Add check of journalq in WriteTrim test
- | | + test: Fix ceph-objectstore-tool test missing fd.close()
- | | + test: Fix ceph-objectstore-tool test error message
- | | + test: ceph-objectstore-tool: Remove duplicate debug messages, keep cmd/log/call together
- | | + test: ceph-objectstore-tool import after split testing
- | | + test: Use CEPH_DIR where appropriate
- | | + test: Limit how long ceph-objectstore-tool test will wait for health
- | | + test: Add optional arg to vstart() to provide additional args to vstart
- | | + test: Test ceph-objectstore-tool --op dump-journal output
- | | + test: Pep8 fixes for ceph-objectstore-tool test
- | | + test: Fix ceph-objectstore-tool test, overwrite OTHERFILE so second check is meaningful
- | | + osd: FileJournal: Add _fdump() that takes Formatter instead of ostream
- | | + osd: Add simple_dump() to FileJournal for unit testing
- | | + osd: FileJournal clean-up
- | | + osd: Dump header in FileJournal::dump()
- | | + osd: FileJournal::read_entry() can't use a zero seq to check for corruption
- | | + osd: Fix flushing in FileJournal::dump()
- | | + osd: Add admin socket feature set_recovery_delay
- | | + ceph-objectstore-tool: For import/export --debug dump the log
- | | + ceph-objectstore-tool: If object re-appears after removal, just skip it
- | | + ceph-objectstore-tool: Add --no-overwrite flag for import-rados
- | | + ceph-objectstore-tool: Remove list-lost because now we have --dry-run flag
- | | + ceph-objectstore-tool: Add --dry-run option
- | | + ceph-objectstore-tool: Add dump-info command to show object info
- | | + ceph-objectstore-tool: Use empty string for <object> to specify pgmeta object
- | | + ceph-objectstore-tool: Add a couple of strategically placed prints
- | | + ceph-objectstore-tool: Clean up error handling
- | | + ceph-objectstore-tool: Create section around log/missing/divergent_priors of --op log
- | | + ceph-objectstore-tool: Add divergent_priors handling
- | | + ceph-objectstore-tool: Add --force option which is used for import only
- | | + ceph-objectstore-tool: Fix pgid scan to skip snapdirs
- | | + ceph-objectstore-tool: Add dump-journal op
- | | + ceph-objectstore-tool: On any exit release CephContext so logging can flush
- | | + ceph-objectstore-tool: Check for keyvaluestore experimental feature
- | | + ceph-objectstore-tool: Eliminate obscure Invalid params error
- | | + ceph-objectstore-tool: Check pgid validity earlier like we did before
- | + | Merge backport branch 'wip-journal-header' of git://github.com/XinzeChi/ceph into wip-14077-hammer
- | |\ \
- | | |/
- | | + Backport the merge commit of branch 'wip-journal-header' of git://github.com/XinzeChi/ceph
- | | + osd: write journal header by force when journal write close
- | + | Merge backport 1 commit of pull request #3686 into wip-14077-hammer
- | |/
- | + crushtool: send --tree to stdout
- + Pull request 7797
- + init-ceph: check if /lib/lsb/init-functions exists
Updated by Loïc Dachary about 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
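The --subset $(expr $RANDOM % 18)/18 argument schedules only one of 18 roughly equal slices of the suite's job matrix; $RANDOM merely picks which slice, and teuthology reports the choice back as "Passed subset=N/18". A minimal sketch of the same arithmetic (the variable name is illustrative):
SLICE=$(expr $RANDOM % 18)    # bash: random integer in 0..17
echo "--subset $SLICE/18"     # e.g. --subset 4/18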
- fail http://pulpito.ceph.com/loic-2016-02-28_21:22:06-rados-hammer-backports-loic---basic-multi
- timed out waiting for admin_socket to appear after osd.4 restart
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=854e7455f3ac7485b9ca77214634850942ba412f TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/stress_watch.sh'
- "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph tell osd.0 flush_pg_stats'"
- Stale /var/lib/ceph detected, aborting.
- , 'failed': True, 'attempts': 24, 'item': {'name': 'xiaoxichen', 'key': 'https://raw.githubusercontent.com/ceph/keys/master/ssh/xiaoxichen.pub'}, 'msg': 'Task failed as maximum retries was encountered'}}
- saw valgrind issues
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports-loic --machine-type smithi,mira
- fail http://pulpito.ceph.com/loic-2016-02-28_21:56:09-rbd-hammer-backports-loic---basic-multi
- 'CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v -- /home/ubuntu/cephtest/archive/cram.client.0/*.t'
- 'sudo systemctl restart nfs'
- 'CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
upgrade¶
teuthology-suite --verbose --suite upgrade/hammer --suite-branch hammer --ceph hammer-backports-loic --machine-type vps --priority 1000 machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-02-28_21:53:36-upgrade:hammer-hammer-backports-loic---basic-vps
- '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-29T15:16:36.833710+00:00 vpm156 kernel: BUG: soft lockup - CPU#0 stuck for 22s! [nosetests:30132]' in syslog
- {'vpm008.front.sepia.ceph.com': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh'}
- {'vpm068.front.sepia.ceph.com': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh'}
- , 'failed': True, 'msg': "Error from repoquery: ['/usr/bin/repoquery', '--show-duplicates', '--plugins', '--quiet', '-q', '--disablerepo', '', '--enablerepo', '', '--qf', '%{name}-%{version}-%{release}.%{arch}', 'krb5-workstation']: Cannot find a valid baseurl for repo: updates/7/x86_64\n"}}
- 'sudo yum -y install libcephfs_jni1 rbd-fuse ceph-radosgw librbd1 ceph-fuse python-ceph ceph ceph-devel ceph-test librados2 cephfs-java libcephfs1'
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.6.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
- upgrade:hammer/older/{0-cluster/start.yaml 1-install/latest_giant_release.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/centos_7.2.yaml}
- Fuse mount failed to populate /sys/ after 31 seconds
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.3.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.3.yaml 2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/centos_7.2.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.4.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- Host key for server vpm198.front.sepia.ceph.com does not match!
- '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-29T15:16:36.833710+00:00 vpm156 kernel: BUG: soft lockup - CPU#0 stuck for 22s! [nosetests:30132]' in syslog
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/hammer..ceph/hammer-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 6551
- |\
- | + client: add InodeRef.h to make dist
- | + client: use smart pointer to track 'cwd' and 'root_parents'
- | + client: convert Inode::snapdir_parent to smart pointer
- | + client: use smart pointer to track temporary inode reference
- | + client: convert CapSnap::in to smart pointer
- | + client: convert Fh::inode to smart pointer
- | + client: use smart pointers in MetaRequest
- | + client: convert Dentry::inode to smart pointer
- | + client: hold reference for returned inode
- + Pull request 6604
- |\
- | + client: use fuse_req_getgroups() to get group list
- | + client: use thread local data to track fuse request
- | + client/Client.cc: remove only once used variable
- | + client/Client.cc: fix realloc memory leak
- | + client: added permission check based on getgrouplist
- | + configure.ac: added autoconf check for getgrouplist
- + Pull request 6754
- |\
- | + mds: fix out-of-order messages
- + Pull request 7542
- |\
- | + Fixed the ceph get mdsmap assertion.
- + Pull request 7589
- |\
- | + rados: Add new field flags for ceph_osd_op.copy_get.
- + Pull request 7591
- |\
- | + rgw: support admin credentials in S3-related Keystone authentication.
- + Pull request 7797
- |\
- | + init-ceph: check if /lib/lsb/init-functions exists
- + Pull request 7817
- |\
- | + hammer: tools: fix race condition in seq/rand bench
- + Pull request 7848
- |\
- | + doc: mon-config Add information about pg_prime
- | + mon: Enable pg_prime by Default
- | + mon: do not prime pg_temp when the current acting is < min_size
- | + mon: be more careful about when we prime all pgs
- | + mon: cap the amount of time we spend priming pg_temp
- | + mon: prime pg_temp
- | + mon/PGMap: keep osd -> pg mapping in memory
- + Pull request 7876
- |\
- | + packaging: lsb_release build and runtime dependency
- + Pull request 7896
- |\
- | + hammer: tools: fix race condition in seq/rand bench
- + Pull request 7903
- |\
- | + common/obj_bencher.cc: make verify error fatal
- | + test/test_rados_tool.sh: force rados bench rand and seq
- + Pull request 7911
- |\
- | + tools, test: Add ceph-objectstore-tool to operate on the meta collection
- + Pull request 7917
- + ceph-objectstore-tool, osd: Fix import handling
Updated by Loïc Dachary about 8 years ago
upgrade¶
teuthology-suite --verbose --suite upgrade/hammer --suite-branch hammer --ceph hammer-backports --machine-type vps --priority 1000 machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-03-06_19:22:53-upgrade:hammer-hammer-backports---basic-vps
- "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.2.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
- '/home/ubuntu/cephtest/archive/syslog/misc.log:2016-03-07T06:01:50.223406+00:00 vpm160 crond[28495]: (root) INFO (Job execution of per-minute job scheduled for 06:00 delayed into subsequent minute 06:01. Skipping job run.)' in syslog
- 'connect to vpm158.front.sepia.ceph.com' reached maximum tries (10) after waiting for 10 seconds
- Fuse mount failed to populate /sys/ after 31 seconds
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.3.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.6.yaml 2-workload/blogbench.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.2.yaml 2-workload/testrados.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- upgrade:hammer/newer/{0-cluster/start.yaml 1-install/v0.94.3.yaml 2-workload/rbd.yaml 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrgw.yaml} distros/ubuntu_14.04.yaml}
- 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --mkfs --mkkey -i 4 --monmap /home/ubuntu/cephtest/monmap'
Re-running failed tests
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=4/18
- fail http://pulpito.ceph.com/loic-2016-03-06_19:45:27-rados-hammer-backports---basic-multi/
- , 'changed': False, 'results': ['xmlstarlet is not installed', 'python-jinja2 is not installed', 'python-ceph is not installed', 'python-flask is not installed', 'python-requests is not installed', 'python-urllib3 is not installed', 'python-babel is not installed', 'hdparm is not installed', 'python-markupsafe is not installed', 'python-werkzeug is not installed', 'python-itsdangerous is not installed', 'Loaded plugins: fastestmirror, langpacks, priorities\n'], 'msg': 'Traceback (most recent call last):\n File "/usr/bin/yum", line 29, in <module>\n yummain.user_main(sys.argv[1:], exit_code=True)\n File "/usr/share/yum-cli/yummain.py", line 365, in user_main\n errcode = main(args)\n File "/usr/share/yum-cli/yummain.py", line 174, in main\n result, resultmsgs = base.doCommands()\n File "/usr/share/yum-cli/cli.py", line 573, in doCommands\n return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)\n File "/usr/share/yum-cli/yumcommands.py", line 878, in doCommand\n ret = base.erasePkgs(extcmds, pos=pos, basecmd=basecmd)\n File "/usr/share/yum-cli/cli.py", line 1198, in erasePkgs\n rms = self.remove(pattern=arg)\n File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 5399, in remove\n (e,m,u) = self.rpmdb.matchPackageNames([kwargs[\'pattern\']])\n File "/usr/lib/python2.7/site-packages/yum/packageSack.py", line 304, in matchPackageNames\n exactmatch.append(self.searchPkgTuple(pkgtup)[0])\nIndexError: list index out of range\n'}}
- rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml thrashers/mapgap.yaml workloads/cache-agent-small.yaml}
- rados/basic/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/repair_test.yaml}
- rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml supported/centos_7.2.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
- rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/pool-snaps-few-objects.yaml}
- rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml thrashers/pggrow.yaml workloads/small-objects.yaml}
- rados/monthrash/{ceph/ceph.yaml clusters/9-mons.yaml fs/xfs.yaml msgr-failures/few.yaml thrashers/one.yaml workloads/rados_api_tests.yaml}
- rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr-failures/few.yaml supported/centos_7.2.yaml thrashers/mapgap.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
- rados/verify/{1thrash/none.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/mon_recovery.yaml validater/lockdep.yaml}
- rados/objectstore/filestore-idempotent-aio-journal.yaml
- rados/monthrash/{ceph/ceph.yaml clusters/9-mons.yaml fs/xfs.yaml msgr-failures/mon-delay.yaml thrashers/force-sync-many.yaml workloads/snaps-few-objects.yaml}
- rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml thrashers/pggrow.yaml workloads/ec-small-objects.yaml}
- rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr-failures/osd-delay.yaml thrashers/default.yaml workloads/cache-snaps.yaml}
- rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml thrashers/default.yaml workloads/readwrite.yaml}
- rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml thrashers/morepggrow.yaml workloads/rados_api_tests.yaml}
- rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml thrashers/mapgap.yaml workloads/pool-snaps-few-objects.yaml}
- rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr-failures/osd-delay.yaml thrashers/default.yaml workloads/pool-snaps-few-objects.yaml}
- rados/singleton-nomsgr/{all/13234.yaml}
- rados/objectstore/ceph_objectstore_tool.yaml
- 'sudo yum install ceph-radosgw -y'
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ee47361d8eb9199b9d681f634a2580fba55e34a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
- rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr-failures/osd-delay.yaml thrashers/mapgap.yaml workloads/rados_api_tests.yaml}
- rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml thrashers/pggrow.yaml workloads/rados_api_tests.yaml}
- rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml thrashers/default.yaml workloads/rados_api_tests.yaml}
- rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr-failures/osd-delay.yaml thrashers/default.yaml workloads/rados_api_tests.yaml}
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ee47361d8eb9199b9d681f634a2580fba55e34a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
- reached maximum tries (250) after waiting for 1500 seconds
- timed out waiting for admin_socket to appear after osd.0 restart
- new bug hammer: CentOS 7 tcmalloc::ThreadCache valgrind error libboost_thread-mt.so.1.53 saw valgrind issues
- , 'changed': False, 'results': ['xmlstarlet is not installed', 'python-jinja2 is not installed', 'python-ceph is not installed', 'python-flask is not installed', 'python-requests is not installed', 'python-urllib3 is not installed', 'python-babel is not installed', 'hdparm is not installed', 'python-markupsafe is not installed', 'python-werkzeug is not installed', 'python-itsdangerous is not installed', 'Loaded plugins: fastestmirror, langpacks, priorities\n'], 'msg': 'Traceback (most recent call last):\n File "/usr/bin/yum", line 29, in <module>\n yummain.user_main(sys.argv[1:], exit_code=True)\n File "/usr/share/yum-cli/yummain.py", line 365, in user_main\n errcode = main(args)\n File "/usr/share/yum-cli/yummain.py", line 174, in main\n result, resultmsgs = base.doCommands()\n File "/usr/share/yum-cli/cli.py", line 573, in doCommands\n return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)\n File "/usr/share/yum-cli/yumcommands.py", line 878, in doCommand\n ret = base.erasePkgs(extcmds, pos=pos, basecmd=basecmd)\n File "/usr/share/yum-cli/cli.py", line 1198, in erasePkgs\n rms = self.remove(pattern=arg)\n File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 5399, in remove\n (e,m,u) = self.rpmdb.matchPackageNames([kwargs[\'pattern\']])\n File "/usr/lib/python2.7/site-packages/yum/packageSack.py", line 304, in matchPackageNames\n exactmatch.append(self.searchPkgTuple(pkgtup)[0])\nIndexError: list index out of range\n'}}
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-03-07_21:21:27-rados-hammer-backports---basic-multi
- , 'changed': False, 'results': ['xmlstarlet is not installed', 'python-jinja2 is not installed', 'python-ceph is not installed', 'python-flask is not installed', 'python-requests is not installed', 'python-urllib3 is not installed', 'python-babel is not installed', 'hdparm is not installed', 'python-markupsafe is not installed', 'python-werkzeug is not installed', 'python-itsdangerous is not installed', 'Loaded plugins: fastestmirror, langpacks, priorities\n'], 'msg': 'Traceback (most recent call last):\n File "/usr/bin/yum", line 29, in <module>\n yummain.user_main(sys.argv[1:], exit_code=True)\n File "/usr/share/yum-cli/yummain.py", line 365, in user_main\n errcode = main(args)\n File "/usr/share/yum-cli/yummain.py", line 174, in main\n result, resultmsgs = base.doCommands()\n File "/usr/share/yum-cli/cli.py", line 573, in doCommands\n return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)\n File "/usr/share/yum-cli/yumcommands.py", line 878, in doCommand\n ret = base.erasePkgs(extcmds, pos=pos, basecmd=basecmd)\n File "/usr/share/yum-cli/cli.py", line 1198, in erasePkgs\n rms = self.remove(pattern=arg)\n File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 5399, in remove\n (e,m,u) = self.rpmdb.matchPackageNames([kwargs[\'pattern\']])\n File "/usr/lib/python2.7/site-packages/yum/packageSack.py", line 304, in matchPackageNames\n exactmatch.append(self.searchPkgTuple(pkgtup)[0])\nIndexError: list index out of range\n'}}
- rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml supported/centos_7.2.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
- rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr-failures/osd-delay.yaml thrashers/mapgap.yaml workloads/rados_api_tests.yaml}
- 'sudo yum install ceph-radosgw -y'
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ee47361d8eb9199b9d681f634a2580fba55e34a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
- new bug hammer: CentOS 7 tcmalloc::ThreadCache valgrind error libboost_thread-mt.so.1.53 saw valgrind issues
- , 'changed': False, 'results': ['xmlstarlet is not installed', 'python-jinja2 is not installed', 'python-ceph is not installed', 'python-flask is not installed', 'python-requests is not installed', 'python-urllib3 is not installed', 'python-babel is not installed', 'hdparm is not installed', 'python-markupsafe is not installed', 'python-werkzeug is not installed', 'python-itsdangerous is not installed', 'Loaded plugins: fastestmirror, langpacks, priorities\n'], 'msg': 'Traceback (most recent call last):\n File "/usr/bin/yum", line 29, in <module>\n yummain.user_main(sys.argv[1:], exit_code=True)\n File "/usr/share/yum-cli/yummain.py", line 365, in user_main\n errcode = main(args)\n File "/usr/share/yum-cli/yummain.py", line 174, in main\n result, resultmsgs = base.doCommands()\n File "/usr/share/yum-cli/cli.py", line 573, in doCommands\n return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)\n File "/usr/share/yum-cli/yumcommands.py", line 878, in doCommand\n ret = base.erasePkgs(extcmds, pos=pos, basecmd=basecmd)\n File "/usr/share/yum-cli/cli.py", line 1198, in erasePkgs\n rms = self.remove(pattern=arg)\n File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 5399, in remove\n (e,m,u) = self.rpmdb.matchPackageNames([kwargs[\'pattern\']])\n File "/usr/lib/python2.7/site-packages/yum/packageSack.py", line 304, in matchPackageNames\n exactmatch.append(self.searchPkgTuple(pkgtup)[0])\nIndexError: list index out of range\n'}}
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-03-08_18:36:13-rados-hammer-backports---basic-multi
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ee47361d8eb9199b9d681f634a2580fba55e34a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
- environmental error (should have been a valgrind issue as above) 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/osd.3.log --time-stamp=yes --tool=memcheck ceph-osd -f -i 3'
- rados/verify/{1thrash/default.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
osd/OSD.cc: 2414: FAILED assert(0) with 2016-03-09 03:05:35.307523 2b9bc700 -1 osd.3 746 pgid 90.4 has ref count of 5
could be because of a faulty disk that creates errors from which the OSD can't recover
- rados/verify/{1thrash/default.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
- 'mkdir
Re-running failed tests (on smithi in an attempt to fix what appears to be a timing issue)
- fail http://pulpito.ceph.com/loic-2016-03-10_18:57:11-rados-hammer-backports---basic-smithi/
- 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/osd.5.log --time-stamp=yes --tool=memcheck ceph-osd -f -i 5'
- rados/verify/{1thrash/default.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
osd.5 643 pgid 90.5 has ref count of 5 (the error above is real)
- rados/verify/{1thrash/default.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ee47361d8eb9199b9d681f634a2580fba55e34a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
- 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/osd.5.log --time-stamp=yes --tool=memcheck ceph-osd -f -i 5'
Updated by Loïc Dachary about 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=1/5
- fail http://pulpito.ceph.com/loic-2016-03-06_19:48:42-rbd-hammer-backports---basic-multi/
- , 'changed': False, 'results': ['xmlstarlet is not installed', 'python-jinja2 is not installed', 'python-ceph is not installed', 'python-flask is not installed', 'python-requests is not installed', 'python-urllib3 is not installed', 'python-babel is not installed', 'hdparm is not installed', 'python-markupsafe is not installed', 'python-werkzeug is not installed', 'python-itsdangerous is not installed', 'Loaded plugins: fastestmirror, langpacks, priorities\n'], 'msg': 'Traceback (most recent call last):\n File "/usr/bin/yum", line 29, in <module>\n yummain.user_main(sys.argv[1:], exit_code=True)\n File "/usr/share/yum-cli/yummain.py", line 365, in user_main\n errcode = main(args)\n File "/usr/share/yum-cli/yummain.py", line 174, in main\n result, resultmsgs = base.doCommands()\n File "/usr/share/yum-cli/cli.py", line 573, in doCommands\n return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)\n File "/usr/share/yum-cli/yumcommands.py", line 878, in doCommand\n ret = base.erasePkgs(extcmds, pos=pos, basecmd=basecmd)\n File "/usr/share/yum-cli/cli.py", line 1198, in erasePkgs\n rms = self.remove(pattern=arg)\n File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 5399, in remove\n (e,m,u) = self.rpmdb.matchPackageNames([kwargs[\'pattern\']])\n File "/usr/lib/python2.7/site-packages/yum/packageSack.py", line 304, in matchPackageNames\n exactmatch.append(self.searchPkgTuple(pkgtup)[0])\nIndexError: list index out of range\n'}}
- rbd/thrash/{base/install.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml thrashers/cache.yaml workloads/rbd_api_tests.yaml}
- rbd/cli/{base/install.yaml cachepool/none.yaml clusters/fixed-1.yaml features/layering.yaml fs/btrfs.yaml msgr-failures/many.yaml workloads/rbd_cli_copy.yaml}
- , 'changed': False, 'results': ['xmlstarlet is not installed', 'python-jinja2 is not installed', 'python-ceph is not installed', 'python-flask is not installed', 'python-requests is not installed', 'python-urllib3 is not installed', 'python-babel is not installed', 'hdparm is not installed', 'python-markupsafe is not installed', 'python-werkzeug is not installed', 'python-itsdangerous is not installed', 'Loaded plugins: fastestmirror, langpacks, priorities\n'], 'msg': 'Traceback (most recent call last):\n File "/usr/bin/yum", line 29, in <module>\n yummain.user_main(sys.argv[1:], exit_code=True)\n File "/usr/share/yum-cli/yummain.py", line 365, in user_main\n errcode = main(args)\n File "/usr/share/yum-cli/yummain.py", line 174, in main\n result, resultmsgs = base.doCommands()\n File "/usr/share/yum-cli/cli.py", line 573, in doCommands\n return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)\n File "/usr/share/yum-cli/yumcommands.py", line 878, in doCommand\n ret = base.erasePkgs(extcmds, pos=pos, basecmd=basecmd)\n File "/usr/share/yum-cli/cli.py", line 1198, in erasePkgs\n rms = self.remove(pattern=arg)\n File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 5399, in remove\n (e,m,u) = self.rpmdb.matchPackageNames([kwargs[\'pattern\']])\n File "/usr/lib/python2.7/site-packages/yum/packageSack.py", line 304, in matchPackageNames\n exactmatch.append(self.searchPkgTuple(pkgtup)[0])\nIndexError: list index out of range\n'}}
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
fs¶
teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=0/5
- fail http://pulpito.ceph.com/loic-2016-03-06_19:50:10-fs-hammer-backports---basic-multi
- , 'changed': False, 'results': ['xmlstarlet is not installed', 'python-jinja2 is not installed', 'python-ceph is not installed', 'python-flask is not installed', 'python-requests is not installed', 'python-urllib3 is not installed', 'python-babel is not installed', 'hdparm is not installed', 'python-markupsafe is not installed', 'python-werkzeug is not installed', 'python-itsdangerous is not installed', 'Loaded plugins: fastestmirror, langpacks, priorities\n'], 'msg': 'Traceback (most recent call last):\n File "/usr/bin/yum", line 29, in <module>\n yummain.user_main(sys.argv[1:], exit_code=True)\n File "/usr/share/yum-cli/yummain.py", line 365, in user_main\n errcode = main(args)\n File "/usr/share/yum-cli/yummain.py", line 174, in main\n result, resultmsgs = base.doCommands()\n File "/usr/share/yum-cli/cli.py", line 573, in doCommands\n return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)\n File "/usr/share/yum-cli/yumcommands.py", line 878, in doCommand\n ret = base.erasePkgs(extcmds, pos=pos, basecmd=basecmd)\n File "/usr/share/yum-cli/cli.py", line 1198, in erasePkgs\n rms = self.remove(pattern=arg)\n File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 5399, in remove\n (e,m,u) = self.rpmdb.matchPackageNames([kwargs[\'pattern\']])\n File "/usr/lib/python2.7/site-packages/yum/packageSack.py", line 304, in matchPackageNames\n exactmatch.append(self.searchPkgTuple(pkgtup)[0])\nIndexError: list index out of range\n'}}
- {'mira012.front.sepia.ceph.com': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh'}
- Test failure: test_client_pin_root (tasks.mds_client_limits.TestClientLimits)
- new bug hammer: CentOS 7 tcmalloc::ThreadCache valgrind error libboost_thread-mt.so.1.53 saw valgrind issues
Re-running failed jobs
Updated by Loïc Dachary about 8 years ago
powercycle¶
teuthology-suite -l2 -v -c hammer-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
Updated by Loïc Dachary about 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=2/5
- fail http://pulpito.ceph.com/loic-2016-03-06_19:52:12-rgw-hammer-backports---basic-multi
- , 'changed': False, 'results': ['xmlstarlet is not installed', 'python-jinja2 is not installed', 'python-ceph is not installed', 'python-flask is not installed', 'python-requests is not installed', 'python-urllib3 is not installed', 'python-babel is not installed', 'hdparm is not installed', 'python-markupsafe is not installed', 'python-werkzeug is not installed', 'python-itsdangerous is not installed', 'Loaded plugins: fastestmirror, langpacks, priorities\n'], 'msg': 'Traceback (most recent call last):\n File "/usr/bin/yum", line 29, in <module>\n yummain.user_main(sys.argv[1:], exit_code=True)\n File "/usr/share/yum-cli/yummain.py", line 365, in user_main\n errcode = main(args)\n File "/usr/share/yum-cli/yummain.py", line 174, in main\n result, resultmsgs = base.doCommands()\n File "/usr/share/yum-cli/cli.py", line 573, in doCommands\n return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)\n File "/usr/share/yum-cli/yumcommands.py", line 878, in doCommand\n ret = base.erasePkgs(extcmds, pos=pos, basecmd=basecmd)\n File "/usr/share/yum-cli/cli.py", line 1198, in erasePkgs\n rms = self.remove(pattern=arg)\n File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 5399, in remove\n (e,m,u) = self.rpmdb.matchPackageNames([kwargs[\'pattern\']])\n File "/usr/lib/python2.7/site-packages/yum/packageSack.py", line 304, in matchPackageNames\n exactmatch.append(self.searchPkgTuple(pkgtup)[0])\nIndexError: list index out of range\n'}}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/ext4.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml}
- rgw/verify/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml validater/lockdep.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/ext4.yaml rgw_pool_type/replicated.yaml tasks/rgw_readwrite.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/ec-profile.yaml tasks/rgw_bucket_quota.yaml}
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ee47361d8eb9199b9d681f634a2580fba55e34a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_bucket_quota.pl'
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ee47361d8eb9199b9d681f634a2580fba55e34a TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_user_quota.pl'
- 'git clone https://github.com/ceph/swift.git /home/ubuntu/cephtest/swift'
- , 'changed': False, 'results': ['xmlstarlet is not installed', 'python-jinja2 is not installed', 'python-ceph is not installed', 'python-flask is not installed', 'python-requests is not installed', 'python-urllib3 is not installed', 'python-babel is not installed', 'hdparm is not installed', 'python-markupsafe is not installed', 'python-werkzeug is not installed', 'python-itsdangerous is not installed', 'Loaded plugins: fastestmirror, langpacks, priorities\n'], 'msg': 'Traceback (most recent call last):\n File "/usr/bin/yum", line 29, in <module>\n yummain.user_main(sys.argv[1:], exit_code=True)\n File "/usr/share/yum-cli/yummain.py", line 365, in user_main\n errcode = main(args)\n File "/usr/share/yum-cli/yummain.py", line 174, in main\n result, resultmsgs = base.doCommands()\n File "/usr/share/yum-cli/cli.py", line 573, in doCommands\n return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)\n File "/usr/share/yum-cli/yumcommands.py", line 878, in doCommand\n ret = base.erasePkgs(extcmds, pos=pos, basecmd=basecmd)\n File "/usr/share/yum-cli/cli.py", line 1198, in erasePkgs\n rms = self.remove(pattern=arg)\n File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 5399, in remove\n (e,m,u) = self.rpmdb.matchPackageNames([kwargs[\'pattern\']])\n File "/usr/lib/python2.7/site-packages/yum/packageSack.py", line 304, in matchPackageNames\n exactmatch.append(self.searchPkgTuple(pkgtup)[0])\nIndexError: list index out of range\n'}}
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-03-07_21:25:36-rgw-hammer-backports---basic-multi/
- "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/ext4.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml} - looks like a hardware problem on mira096:
2016-03-07T21:43:45.310 INFO:tasks.ceph.osd.3.mira096.stderr:2016-03-08 05:43:45.309402 7f39ff3cd700 -1 journal FileJournal::write_bl : write_fd failed: (30) Read-only file system
conclusion: environmental noise
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/ext4.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml} - looks like a hardware problem on mira096:
- "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
Opened bug #15044 for the infrastructure issue and re-running the last remaining failed test:
./virtualenv/bin/teuthology-suite --priority 1000 --suite rgw --suite-branch hammer --email ncutler@suse.cz --ceph hammer-backports --machine-type smithi,mira --filter "rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/ext4.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_s3tests.yaml}"
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/hammer..hammer-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 7542
- |\
- | + Fixed the ceph get mdsmap assertion.
- + Pull request 7589
- |\
- | + rados: Add new field flags for ceph_osd_op.copy_get.
- + Pull request 7591
- |\
- | + rgw: support admin credentials in S3-related Keystone authentication.
- + Pull request 7848
- |\
- | + doc: mon-config Add information about pg_prime
- | + mon: Enable pg_prime by Default
- | + mon: do not prime pg_temp when the current acting is < min_size
- | + mon: be more careful about when we prime all pgs
- | + mon: cap the amount of time we spend priming pg_temp
- | + mon: prime pg_temp
- | + mon/PGMap: keep osd -> pg mapping in memory
- + Pull request 7883
- |\
- | + osd/osd_types: encode pg_pool_t the old way
- | + mon: disable gmt_hitset if not supported
- | + osd: do not let OSD_HITSET_GMT reuse the feature bit
- | + osd: Decode use_gmt_hitset with a unique version
- | + mon: print use_gmt_hitset in ceph osd pool get
- | + mon: add ceph osd pool set $pool use_gmt_hitset true cmd
- | + osd: use GMT time for the object name of hitsets
- + Pull request 7917
- |\
- | + ceph-objectstore-tool, osd: Fix import handling
- + Pull request 7922
- |\
- | + tests: Add TEST_no_segfault_for_bad_keyring to test/mon/misc.sh
- | + tests: make sure no segfault occurs when using some bad keyring
- | + auth: fix a crash issue due to CryptoHandler::create() failed
- | + auth: fix double PK11_DestroyContext() if PK11_DigestFinal() failed
- + Pull request 7961
- |\
- | + test: add test-case for repair unrecovery-ec pg.
- | + osd: Remove the duplicated func MissingLoc::get_all_missing.
- | + osd: Fix ec pg repair endless when met unrecover object.
- | + osd: For object op, first check object whether unfound.
- + Pull request 7992
- |\
- | + osdc/Objecter: call notify completion only once
- + Pull request 8011
- |\
- | + librbd: complete cache reads on cache's dedicate thread
- | + test: reproducer for writeback CoW deadlock
- + Pull request 8026
- |\
- | + test/pybind/test_ceph_argparse: fix reweight-by-utilization tests
- | + man/8/ceph.rst: remove invalid option for reweight-by-*
- | + mon: remove range=100 from reweight-by-* commands
- | + mon: make max_osds an optional arg
- | + mon: make reweight max_change default configurable
- | + mon/OSDMonitor: fix indentation
- | + qa/workunits/cephtool/test.sh: test reweight-by-x commands
- | + osd/MonCommand: add/fix up 'osd [test-]reweight-by-{pg,utilization}'
- | + mon: add 'osd utilization' command
- | + osd/OSDMap: add summarize_mapping_stats
- | + mon: make reweight-by-* max_change an argument
- | + osd: add mon_reweight_max_osds to limit reweight-by-* commands
- | + osd: add mon_reweight_max_change option which limits reweight-by-*
- | + test: add simple test for new reweight-by-* options
- | + osd: add sure and no-increasing options to reweight-by-*
- + Pull request 8049
- |\
- | + keyring permissions for mon daemon
- + Pull request 8051
- |\
- | + mon: Monitor: get rid of weighted clock skew reports
- | + mon: Monitor: adaptative clock skew detection interval
- + Pull request 8052
- + test/librados/test.cc: clean up EC pools' crush rules too
Updated by Loïc Dachary about 8 years ago
upgrade¶
teuthology-suite --verbose --suite upgrade/hammer --suite-branch hammer --ceph hammer-backports --machine-type vps --priority 1000 machine_types/vps.yaml
- red http://pulpito.ceph.com/loic-2016-03-12_17:31:24-upgrade:hammer-hammer-backports---basic-vps/
- 56440, 56456, 56501 new bug?
RuntimeError: Fuse mount failed to populate /sys/ after 31 seconds
- this looks like #12506, but Zheng says that one does not affect hammer. Zheng's analysis of the teuthology log: "seem like ceph-fuse failed to connect to monitor - no idea why"
- smithfarm analysis: "looking more closely at the log, I see: Running task ceph-fuse..., then Mounting ceph-fuse clients..., then mon_thrasher:start thrashing, then sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections, and finally mount point /sys/fs/fuse/connections does not exist"
- 56468, 56486, 56488, 56498, 56504, 56511, 56517, 56523 unresponsive mira007 reported at http://tracker.ceph.com/issues/15116
Downburst failed on ubuntu@vpm033.front.sepia.ceph.com: libvirt: XML-RPC error : Cannot recv data: ssh: connect to host mira007.front.sepia.ceph.com port 22: No route to host: Connection reset by peer
- 56470 mon daemon fails to restart, leading to dead job
- vpm149 runs workunit test suites/dbench.sh which seems to take an unusually long time in the cleanup phase, and eventually exceeds 3 hour timeout
- 56440, 56456, 56501 new bug?
Rerunning failed and dead jobs:
- fail http://pulpito.ceph.com/smithfarm-2016-03-14_06:44:23-upgrade:hammer-hammer-backports---basic-vps/
- unresponsive mira007 reported at http://tracker.ceph.com/issues/15116
- "cluster [WRN] message from mon.0 was stamped 0.652280s in the future, clocks not synchronized" in cluster log
Rerunning 2 failed jobs:
Updated by Loïc Dachary about 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=7/18
Failed for the following reasons; the results were discarded.
Updated by Loïc Dachary about 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=3/5
- fail http://pulpito.ceph.com/loic-2016-03-12_17:35:57-rbd-hammer-backports---basic-smithi/
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8cc324df4caf0d043208d9abe88a0e217e861dc3 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/qemu-iotests.sh'
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8cc324df4caf0d043208d9abe88a0e217e861dc3 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=13 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
- 'mkdir
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
fs¶
teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=0/5
- fail http://pulpito.ceph.com/loic-2016-03-12_17:36:49-fs-hammer-backports---basic-smithi
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 3'
- ignore for now see http://tracker.ceph.com/issues/15117 saw valgrind issues
Re-running the failed tests
Updated by Loïc Dachary about 8 years ago
powercycle¶
teuthology-suite -l2 -v -c hammer-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
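For reference, the short options above correspond to the long forms used elsewhere in this ticket; the same invocation spelled out (option names per teuthology-suite --help, so treat the expansion as approximate):
teuthology-suite --limit 2 --verbose --ceph hammer-backports --kernel testing \
    --machine-type smithi --suite powercycle --priority 1000 --email loic@dachary.org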
Updated by Loïc Dachary about 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=2/5
- fail http://pulpito.ceph.com/loic-2016-03-12_17:38:57-rgw-hammer-backports---basic-smithi
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8cc324df4caf0d043208d9abe88a0e217e861dc3 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_user_quota.pl'
- ignore for now see http://tracker.ceph.com/issues/15117 saw valgrind issues
- rgw/verify/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/replicated.yaml tasks/rgw_s3tests.yaml validater/valgrind.yaml}
- rgw/verify/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_swift.yaml validater/valgrind.yaml}
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-03-14_09:02:20-rgw-hammer-backports---basic-smithi
- ignore for now see http://tracker.ceph.com/issues/15117 saw valgrind issues
- rgw/verify/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_swift.yaml validater/valgrind.yaml}
- rgw/verify/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/replicated.yaml tasks/rgw_s3tests.yaml validater/valgrind.yaml}
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/hammer..hammer-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Merge pull request #6754: hammer: mds: fix out-of-order messages
- |\
- | + mds: fix out-of-order messages
- + Merge pull request #7542: hammer: Wrong ceph get mdsmap assertion
- |\
- | + Fixed the ceph get mdsmap assertion.
- + Merge pull request #7589: hammer: rados cppool omap to ec pool crashes osd
- |\
- | + rados: Add new field flags for ceph_osd_op.copy_get.
- + Merge pull request #7591: hammer: rgw: S3 authentication subsystem should be able to use admin credentials for accessing Keystone
- |\
- | + rgw: support admin credentials in S3-related Keystone authentication.
- + Merge pull request #7848: hammer: mon: Enable Pg_prime By Default
- |\
- | + doc: mon-config Add information about pg_prime
- | + mon: Enable pg_prime by Default
- | + mon: do not prime pg_temp when the current acting is < min_size
- | + mon: be more careful about when we prime all pgs
- | + mon: cap the amount of time we spend priming pg_temp
- | + mon: prime pg_temp
- | + mon/PGMap: keep osd -> pg mapping in memory
- + Merge pull request #7883: hammer: osd: use GMT time for the object name of hitsets
- |\
- | + osd/osd_types: encode pg_pool_t the old way
- | + mon: disable gmt_hitset if not supported
- | + osd: do not let OSD_HITSET_GMT reuse the feature bit
- | + osd: Decode use_gmt_hitset with a unique version
- | + mon: print use_gmt_hitset in ceph osd pool get
- | + mon: add ceph osd pool set $pool use_gmt_hitset true cmd
- | + osd: use GMT time for the object name of hitsets
- + Merge pull request #7917: hammer: ceph-objectstore-tool, osd: Fix import handling
- |\
- | + osd/PG: fix generate_past_intervals
- | + ceph-objectstore-tool, osd: Fix import handling
- + Merge pull request #7922: hammer: PK11_DestroyContext() is called twice if PK11_DigestFinal() fails
- |\
- | + tests: Add TEST_no_segfault_for_bad_keyring to test/mon/misc.sh
- | + tests: make sure no segfault occurs when using some bad keyring
- | + auth: fix a crash issue due to CryptoHandler::create() failed
- | + auth: fix double PK11_DestroyContext() if PK11_DigestFinal() failed
- + Merge pull request #7992: hammer: segfault in Objecter::handle_watch_notify
- |\
- | + osdc/Objecter: call notify completion only once
- + Merge pull request #8026: hammer: mon: implement reweight-by-utilization feature
- |\
- | + osd/OSDMap: fix typo in summarize_mapping_stats
- | + test/pybind/test_ceph_argparse: fix reweight-by-utilization tests
- | + man/8/ceph.rst: remove invalid option for reweight-by-*
- | + mon: remove range=100 from reweight-by-* commands
- | + mon: make max_osds an optional arg
- | + mon: make reweight max_change default configurable
- | + mon/OSDMonitor: fix indentation
- | + qa/workunits/cephtool/test.sh: test reweight-by-x commands
- | + osd/MonCommand: add/fix up 'osd [test-]reweight-by-{pg,utilization}'
- | + mon: add 'osd utilization' command
- | + osd/OSDMap: add summarize_mapping_stats
- | + mon: make reweight-by-* max_change an argument
- | + osd: add mon_reweight_max_osds to limit reweight-by-* commands
- | + osd: add mon_reweight_max_change option which limits reweight-by-*
- | + test: add simple test for new reweight-by-* options
- | + osd: add sure and no-increasing options to reweight-by-*
- + Merge pull request #8042: hammer: mds: fix stray purging in 'stripe_count > 1' case
- |\
- | + mds: fix stray purging in 'stripe_count > 1' case
- + Merge pull request #8049: hammer: keyring permisions for mon deamon
- |\
- | + keyring permissions for mon daemon
- + Merge pull request #8051: hammer: clock skew report is incorrect by ceph health detail command
- |\
- | + mon: Monitor: get rid of weighted clock skew reports
- | + mon: Monitor: adaptative clock skew detection interval
- + Merge pull request #8052: hammer: test/librados/tier.cc doesn't completely clean up EC pools
- |\
- | + test/librados/test.cc: clean up EC pools' crush rules too
- + Merge pull request #8113: hammer: rgw: user quota may not adjust on bucket removal
- |\
- | + rgw: user quota may not adjust on bucket removal
- + Merge pull request #8272: hammer: tests: bufferlist: do not expect !is_page_aligned() after unaligned rebuild
- |\
- | + test/bufferlist: do not expect !is_page_aligned() after unaligned rebuild
- + Merge pull request #8313: hammer: rgw: radosgw server abort when user passed bad parameters to set quota
- |\
- | + rgw: do not abort when user passed bad parameters to set quota
- | + rgw: do not abort when user passed bad parameters to set metadata
- + Merge pull request #8379: hammer: RGW shouldn't send Content-Type nor Content-Length for 304 responses
- |\
- | + rgw: Do not send a Content-Length header on a 304 response
- | + rgw: Do not send a Content-Type on a '304 Not Modified' response
- | + rgw: dump_status() uses integer
- | + rgw: move status_num initialization into constructor
- | + rgw: Do not send a Content-Length header on status 204
- + Merge pull request #8398: hammer: monclient: avoid key renew storm on clock skew
- |\
- | + hammer: monclient: avoid key renew storm on clock skew
- + Merge pull request #8440: hammer: rpm package building fails if the build machine has lttng and babeltrace development packages installed locally
- |\
- | + ceph.spec.in: disable lttng and babeltrace explicitly
- + Merge pull request #8470: hammer: tests: be more generous with test timeout
- + tests: be more generous with test timeout
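For readability, the one-liner above is the following pipeline with comments added (identical logic). Note that, as written, the Merge branch only matches subjects of the form "Merge <number>", so the "Merge pull request #NNNN" subjects fall through to the else branch and are rendered as commit links, which is why they show up with their full subjects in the list above:
git --no-pager log --format='%H %s' --graph ceph/hammer..hammer-backports | perl -p -e '
    s/"/ /g;                        # drop double quotes so the generated textile links stay valid
    if (/\w+\s+Merge (\d+)/) {      # only subjects like "Merge 1234" take this branch
        s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|;
    } else {                        # every other line becomes a commit link, with the subject as link text
        s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|;
    }
    s/\*/+/;                        # turn the git graph "*" marker into "+"
    s/^/* /;                        # prefix each line so Redmine renders a bullet list
'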
Updated by Loïc Dachary about 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 2000)/2000 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=488/2000
- fail http://pulpito.ceph.com/loic-2016-04-05_15:07:36-rados-hammer-backports---basic-smithi/
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0eb9d5598ac1dfbca0e679722f2e86f9270d2bc4 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
- 'sudo yum install ceph-radosgw -y'
Updated by Loïc Dachary about 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=4/5
- fail http://pulpito.ceph.com/loic-2016-04-05_04:41:12-rgw-hammer-backports---basic-smithi
- saw valgrind issues
- known bug Can't locate Amazon/S3.pm in @INC
- the rest is environmental noise, because there was a network outage during the run
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-04-06_01:32:41-rgw-hammer-backports---basic-smithi/
- known bug Can't locate Amazon/S3.pm in @INC: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0eb9d5598ac1dfbca0e679722f2e86f9270d2bc4 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rgw/s3_multipart_upload.pl'
- saw valgrind issues
Updated by Loïc Dachary about 8 years ago
powercycle¶
teuthology-suite -l2 -v -c hammer-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
Updated by Loïc Dachary about 8 years ago
fs¶
teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=1/5
- fail http://pulpito.ceph.com/loic-2016-04-05_04:42:58-fs-hammer-backports---basic-smithi
- 'TESTDIR=/home/ubuntu/cephtest bash -s'
- 'mkdir -- /home/ubuntu/cephtest/workunit.client.0 && git archive --remote=git://git.ceph.com/ceph.git 0eb9d5598ac1dfbca0e679722f2e86f9270d2bc4:qa/workunits | tar -C /home/ubuntu/cephtest/workunit.client.0 -x -f'
- , 'failed': True, 'item': 'http://download.ceph.com/keys/release.asc', 'msg': 'Failed to download key at http://download.ceph.com/keys/release.asc: Request failed: <urlopen error [Errno
2] Name or service not known>'}, 'smithi035.front.sepia.ceph.com': {'invocation': {'module_name': 'apt_key', 'module_args': ''}, 'failed': True, 'item': 'http://download.ceph.com/keys/release.asc', 'msg': 'Failed to download key at http://download.ceph.com/keys/release.asc: Request failed: <urlopen error [Errno -2] Name or service not known>'}, 'smithi011.front.sepia.ceph.com': {'item': '@core,@base,yum-plugin-priorities,yum-plugin-fastestmirror,redhat-lsb,sysstat,gdb,git-all,python-configobj,libedit,openssl098e,boost-thread,xfsprogs,gdisk,parted,libgcrypt,fuse,fuse-libs,python-virtualenv,openssl,libuuid,btrfs-progs,attr,valgrind,python-nose,mpich,ant,iozone,libtool,automake,gettext,libuuid-devel,libacl-devel,bc,xfsdump,xfsprogs-devel,blktrace,numpy,python-matplotlib,qemu-kvm,usbredir,genisoimage,httpd,httpd-devel,httpd-tools,mod_ssl,mod_fastcgi-2.4.7-1.ceph.el7.centos,libevent-devel,perl-XML-Twig,java-1.6.0-openjdk-devel,junit4,smartmontools,nfs-utils,ncurses-devel', 'rc': 1, 'invocation': {'module_name': 'yum', 'module_args': ''}, 'changed': True, 'results': ['yum-plugin-priorities-1.1.31-34.el7.noarch providing yum-plugin-priorities is already installed', 'yum-plugin-fastestmirror-1.1.31-34.el7.noarch providing yum-plugin-fastestmirror is already installed', 'redhat-lsb-4.1-27.el7.centos.1.x86_64 providing redhat-lsb is already installed', 'sysstat-10.1.5-7.el7.x86_64 providing sysstat is already installed', 'gdb-7.6.1-80.el7.x86_64 providing gdb is already installed', 'git-all-1.8.3.1-6.el7.noarch providing git-all is already installed', 'python-configobj-4.7.2-7.el7.noarch providing python-configobj is already installed', 'libedit-3.0-12.20121213cvs.el7.x86_64 providing libedit is already installed', 'openssl098e-0.9.8e-29.el7.centos.3.x86_64 providing openssl098e is already installed', 'boost-thread-1.53.0-25.el7.x86_64 providing boost-thread is already installed', 'xfsprogs-3.2.2-2.el7.x86_64 providing xfsprogs is already installed', 'gdisk-0.8.6-5.el7.x86_64 providing gdisk is already installed', 'parted-3.1-23.el7.x86_64 providing parted is already installed', 'libgcrypt-1.5.3-12.el7_1.1.x86_64 providing libgcrypt is already installed', 'fuse-2.9.2-6.el7.x86_64 providing fuse is already installed', 'fuse-libs-2.9.2-6.el7.x86_64 providing fuse-libs is already installed', 'python-virtualenv-1.10.1-2.el7.noarch providing python-virtualenv is already installed', 'openssl-1.0.1e-42.el7.9.x86_64 providing openssl is already installed', 'libuuid-2.23.2-26.el7.x86_64 providing libuuid is already installed', 'btrfs-progs-3.19.1-1.el7.x86_64 providing btrfs-progs is already installed', 'attr-2.4.46-12.el7.x86_64 providing attr is already installed', 'valgrind-3.10.0-16.el7.x86_64 providing valgrind is already installed', 'python-nose-1.3.0-3.el7.noarch providing python-nose is already installed', 'mpich-3.0.4-8.el7.x86_64 providing mpich is already installed', 'ant-1.9.2-9.el7.noarch providing ant is already installed', 'libtool-2.4.2-21.el7_2.x86_64 providing libtool is already installed', 'automake-1.13.4-3.el7.noarch providing automake is already installed', 'gettext-0.18.2.1-4.el7.x86_64 providing gettext is already installed', 'libuuid-devel-2.23.2-26.el7.x86_64 providing libuuid-devel is already installed', 'libacl-devel-2.2.51-12.el7.x86_64 providing libacl-devel is already installed', 'bc-1.06.95-13.el7.x86_64 providing bc is already installed', 'xfsdump-3.1.4-1.el7.x86_64 providing xfsdump is already installed', 'xfsprogs-devel-3.2.2-2.el7.x86_64 providing xfsprogs-devel is already 
installed', 'blktrace-1.0.5-6.el7.x86_64 providing blktrace is already installed', 'numpy-1.7.1-11.el7.x86_64 providing numpy is already installed', 'python-matplotlib-1.2.0-15.el7.x86_64 providing python-matplotlib is already installed', 'usbredir-0.6-7.el7.x86_64 providing usbredir is already installed', 'genisoimage-1.1.11-23.el7.x86_64 providing genisoimage is already installed', 'httpd-2.4.6-40.el7.centos.x86_64 providing httpd is already installed', 'httpd-devel-2.4.6-40.el7.centos.x86_64 providing httpd-devel is already installed', 'httpd-tools-2.4.6-40.el7.centos.x86_64 providing httpd-tools is already installed', 'mod_ssl-2.4.6-40.el7.centos.x86_64 providing mod_ssl is already installed', 'libevent-devel-2.0.21-4.el7.x86_64 providing libevent-devel is already installed', 'perl-XML-Twig-3.44-2.el7.noarch providing perl-XML-Twig is already installed', 'java-1.6.0-openjdk-devel-1.6.0.38-1.13.10.0.el7_2.x86_64 providing java-1.6.0-openjdk-devel is already installed', 'junit-4.11-8.el7.noarch providing junit4 is already installed', 'smartmontools-6.2-4.el7.x86_64 providing smartmontools is already installed', 'nfs-utils-1.3.0-0.21.el7_2.x86_64 providing nfs-utils is already installed', 'ncurses-devel-5.9-13.20130511.el7.x86_64 providing ncurses-devel is already installed', 'Loaded plugins: fastestmirror, langpacks, priorities\nLoading mirror speeds from cached hostfile\n * base: mirror.es.its.nyu.edu\n * epel: fedora-epel.mirror.lstn.net\n * extras: mirror.ash.fastserv.com\n * updates: mirror.cogentco.com\nResolving Dependencies\n-> Running transaction check\n---> Package iozone.x86_64 0:3.424-2_ceph.el7.centos will be installed\n---> Package mod_fastcgi.x86_64 0:2.4.7-1.ceph.el7.centos will be installed\n---> Package qemu-kvm.x86_64 10:1.5.3-105.el7_2.3 will be installed\n--> Processing Dependency: qemu-kvm-common = 10:1.5.3-105.el7_2.3 for package: 10:qemu-kvm-1.5.3-105.el7_2.3.x86_64\n--> Processing Dependency: qemu-img = 10:1.5.3-105.el7_2.3 for package: 10:qemu-kvm-1.5.3-105.el7_2.3.x86_64\n--> Processing Dependency: librbd.so.1()(64bit) for package: 10:qemu-kvm-1.5.3-105.el7_2.3.x86_64\n--> Processing Dependency: librados.so.2()(64bit) for package: 10:qemu-kvm-1.5.3-105.el7_2.3.x86_64\n--> Running transaction check\n---> Package librados2.x86_64 1:0.80.7-3.el7 will be installed\n---> Package librbd1.x86_64 1:0.80.7-3.el7 will be installed\n---> Package qemu-img.x86_64 10:1.5.3-105.el7_2.3 will be installed\n---> Package qemu-kvm-common.x86_64 10:1.5.3-105.el7_2.3 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n iozone x86_64 3.424-2_ceph.el7.centos lab-extras 800 k\n mod_fastcgi x86_64 2.4.7-1.ceph.el7.centos centos7-fcgi-ceph 70 k\n qemu-kvm x86_64 10:1.5.3-105.el7_2.3 updates 1.8 M\nInstalling for dependencies:\n librados2 x86_64 1:0.80.7-3.el7 base 1.5 M\n librbd1 x86_64 1:0.80.7-3.el7 base 350 k\n qemu-img x86_64 10:1.5.3-105.el7_2.3 updates 657 k\n qemu-kvm-common x86_64 10:1.5.3-105.el7_2.3 updates 391 k\n\nTransaction Summary\n================================================================================\nInstall 3 Packages (+4 Dependent packages)\n\nTotal download size: 5.6 M\nInstalled size: 17 M\nDownloading packages:\n'], 'msg': 
'http://mirror.es.its.nyu.edu/centos/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.es.its.nyu.edu; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.es.its.nyu.edu/centos/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.es.its.nyu.edu; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.cogentco.com/pub/linux/centos/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.cogentco.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.cogentco.com/pub/linux/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.cogentco.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.cogentco.com/pub/linux/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.cogentco.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.firehosted.com/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.firehosted.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.confluxtech.com/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.confluxtech.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.confluxtech.com/centos/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.confluxtech.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.steadfast.net/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.steadfast.net; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.steadfast.net/centos/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.steadfast.net; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.tzulo.com/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.tzulo.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.chi.host-engine.com/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.chi.host-engine.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://cosmos.illinois.edu/pub/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: cosmos.illinois.edu; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.millry.co/CentOS/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.millry.co; Temporary failure in name resolution"\nTrying other mirror.\nhttp://repos.dfw.quadranet.com/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: repos.dfw.quadranet.com; Temporary failure in name resolution"\nTrying other 
mirror.\nhttp://mirror.us.oneandone.net/linux/distributions/centos/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.us.oneandone.net; Temporary failure in name resolution"\nTrying other mirror.\nhttp://dallas.tx.mirror.xygenhosting.com/CentOS/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: dallas.tx.mirror.xygenhosting.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://denver.gaminghost.co/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: denver.gaminghost.co; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.mirror.ndchost.com/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.mirror.ndchost.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://dist1.800hosting.com/centos/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: dist1.800hosting.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://dist1.800hosting.com/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: dist1.800hosting.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirrors.tummy.com/mirrors/CentOS/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.tummy.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.mirror.nac.net/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.mirror.nac.net; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirrors.tummy.com/mirrors/CentOS/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.tummy.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.mirror.nac.net/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.mirror.nac.net; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.confluxtech.com/centos/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.confluxtech.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.steadfast.net/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.steadfast.net; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.firehosted.com/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.firehosted.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://cosmos.illinois.edu/pub/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: cosmos.illinois.edu; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.chi.host-engine.com/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.chi.host-engine.com; Temporary failure in name resolution"\nTrying other 
mirror.\nhttp://mirror.millry.co/CentOS/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.millry.co; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.confluxtech.com/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.confluxtech.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://dallas.tx.mirror.xygenhosting.com/CentOS/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: dallas.tx.mirror.xygenhosting.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.us.oneandone.net/linux/distributions/centos/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.us.oneandone.net; Temporary failure in name resolution"\nTrying other mirror.\nhttp://denver.gaminghost.co/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: denver.gaminghost.co; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.tzulo.com/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.tzulo.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://dist1.800hosting.com/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: dist1.800hosting.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.steadfast.net/centos/7.2.1511/os/x86_64/Packages/librados2-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.steadfast.net; Temporary failure in name resolution"\nTrying other mirror.\nhttp://repos.dfw.quadranet.com/centos/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: repos.dfw.quadranet.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://dist1.800hosting.com/centos/7.2.1511/os/x86_64/Packages/librbd1-0.80.7-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: dist1.800hosting.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.mirror.ndchost.com/7.2.1511/updates/x86_64/Packages/qemu-kvm-common-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.mirror.ndchost.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.mirror.nac.net/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#7 - "Failed connect to centos.mirror.nac.net:80; Connection timed out"\nTrying other mirror.\nhttp://cosmos.illinois.edu/pub/centos/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: cosmos.illinois.edu; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.confluxtech.com/centos/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.confluxtech.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://dallas.tx.mirror.xygenhosting.com/CentOS/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: dallas.tx.mirror.xygenhosting.com; Temporary failure in name resolution"\nTrying other 
mirror.\nhttp://mirror.tzulo.com/centos/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.tzulo.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://mirror.steadfast.net/centos/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.steadfast.net; Temporary failure in name resolution"\nTrying other mirror.\nhttp://repos.dfw.quadranet.com/centos/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: repos.dfw.quadranet.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://dist1.800hosting.com/centos/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: dist1.800hosting.com; Temporary failure in name resolution"\nTrying other mirror.\nhttp://centos.mirror.ndchost.com/7.2.1511/updates/x86_64/Packages/qemu-img-1.5.3-105.el7_2.3.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.mirror.ndchost.com; Temporary failure in name resolution"\nTrying other mirror.\n\n\nError downloading packages:\n 1:librados2-0.80.7-3.el7.x86_64: [Errno 256] No more mirrors to try.\n 10:qemu-img-1.5.3-105.el7_2.3.x86_64: [Errno 256] No more mirrors to try.\n 10:qemu-kvm-1.5.3-105.el7_2.3.x86_64: [Errno 256] No more mirrors to try.\n 10:qemu-kvm-common-1.5.3-105.el7_2.3.x86_64: [Errno 256] No more mirrors to try.\n 1:librbd1-0.80.7-3.el7.x86_64: [Errno 256] No more mirrors to try.\n\n'}} - while scanning a simple key
in "/tmp/teuth_ansible_failures_SF2WYn", line 15, column 1
could not found expected ':'
in "/tmp/teuth_ansible_failures_SF2WYn", line 22, column 216 - Stale /var/lib/ceph detected, aborting.
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 3'
- {'smithi043.front.sepia.ceph.com': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh'}
- saw valgrind issues
- fs/verify/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/libcephfs_interface_tests.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-3-cephfs.yaml debug/mds_client.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_dbench.yaml validater/valgrind.yaml}
Updated by Loïc Dachary about 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch hammer --email loic@dachary.org --ceph hammer-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=2/5
Updated by Loïc Dachary about 8 years ago
upgrade¶
teuthology-suite --verbose --suite upgrade/hammer --suite-branch hammer --ceph hammer-backports --machine-type vps --priority 1000 machine_types/vps.yaml
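The trailing machine_types/vps.yaml is an extra config fragment: teuthology-suite merges any positional yaml arguments into every scheduled job, which is how the VPS-specific overrides are applied. The same pattern with an additional, purely hypothetical fragment:
teuthology-suite --verbose --suite upgrade/hammer --suite-branch hammer \
    --ceph hammer-backports --machine-type vps --priority 1000 \
    machine_types/vps.yaml extra-overrides.yaml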
Updated by Loïc Dachary about 8 years ago
rados OpenStack¶
ceph-workbench --verbose ceph-qa-suite --suite rados --ceph-git-url http://github.com/ceph/ceph --ceph hammer-backports --ceph-qa-suite-git-url http://github.com/ceph/ceph-qa-suite --suite-branch hammer --subset $(expr $RANDOM % 1000)/1000 --upload
INFO:teuthology.suite:Passed subset=410/1000
- fail http://teuthology-logs.public.ceph.com/ubuntu-2016-04-05_07:29:49-rados-hammer-backports---basic-openstack/
- known bug: erasure coded pools are sometimes stuck when recovering, and restarting an OSD unsticks them 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=38322cd0372d335e3a928d829fcac9b1665ab826 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
- all others are environmental noise
Re-running failed tests
- fail http://167.114.242.130:8081/ubuntu-2016-04-05_13:09:53-rados-hammer-backports---basic-openstack/
- new bug (but passes on sepia, the bug is OpenStack specific): rados/test.sh workunit times out on OpenStack 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0eb9d5598ac1dfbca0e679722f2e86f9270d2bc4 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/workunit.client.0/rados/test.sh'
- rados/verify/{1thrash/default.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml}
- re-run, also failed http://167.114.242.130:8081/ubuntu-2016-04-05_21:52:49-rados-hammer-backports---basic-openstack/
- re-run, also failed http://167.114.242.130:8081/ubuntu-2016-04-06_10:58:21-rados-hammer-backports---basic-openstack/
Updated by Loïc Dachary about 8 years ago
upgrade openstack¶
ceph-workbench --verbose ceph-qa-suite --suite upgrade/hammer --ceph-git-url http://github.com/ceph/ceph --ceph hammer-backports --ceph-qa-suite-git-url http://github.com/ceph/ceph-qa-suite --suite-branch hammer --subset $(expr $RANDOM % 100)/100 --upload
Updated by Loïc Dachary about 8 years ago
rbd openstack¶
ceph-workbench --verbose ceph-qa-suite --suite rbd --ceph-git-url http://github.com/ceph/ceph --ceph hammer-backports --ceph-qa-suite-git-url http://github.com/ceph/ceph-qa-suite --suite-branch hammer --subset $(expr $RANDOM % 100)/100 --upload distros/supported/ubuntu_14.04.yaml
- fail http://teuthology-logs.public.ceph.com/ubuntu-2016-04-05_22:35:04-rbd-hammer-backports---basic-openstack/
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0eb9d5598ac1dfbca0e679722f2e86f9270d2bc4 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=13 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
- Command 'RSYNC_RSH='ssh -i teuthology/openstack/archive-key' rsync -avz --relative /usr/share/nginx/html/./ubuntu-2016-04-05_22:35:04-rbd-hammer-backports---basic-openstack/211 ubuntu@teuthology-logs.public.ceph.com:./' returned non-zero exit status 255
Updated by Loïc Dachary about 8 years ago
fs openstack¶
ceph-workbench --verbose ceph-qa-suite --suite fs --ceph-git-url http://github.com/ceph/ceph --ceph hammer-backports --ceph-qa-suite-git-url http://github.com/ceph/ceph-qa-suite --suite-branch hammer --subset $(expr $RANDOM % 100)/100 --upload
- fail http://teuthology-logs.public.ceph.com/ubuntu-2016-04-05_22:26:26-fs-hammer-backports---basic-openstack/
- environmental noise
Re-running failed jobs
Updated by Nathan Cutler almost 8 years ago
- Description updated (diff)
The last lead has approved, so QA can start testing.
Updated by Yuri Weinstein almost 8 years ago
QE VALIDATION (STARTED 4/26/16)¶
(Note: PASSED / FAILED - indicates "TEST IS IN PROGRESS")
Re-run command lines and filters are captured in http://pad.ceph.com/p/hammer_v0.94.7_QE_validation_notes
Suite | Runs/Reruns | Notes/Issues |
rados | http://pulpito.ceph.com/teuthology-2016-04-25_09:00:02-rados-hammer-distro-basic-vps/ | PASSED |
http://pulpito.ceph.com/teuthology-2016-04-26_09:47:03-rados-hammer-distro-basic-smithi/ | ||
Note from Jason: "This should not block the release.
xfs test cases 020, 206, and 229 also failed in v0.94.6
(http://tracker.ceph.com/issues/14717). The new failure is xfs/050
which appears to be a quota test case. Specifically it fails trying
to use project quotas, so this is a v4.6 kernel/xfs issue not a krbd
issue (last release was tested against kernel v4.5)."
knfs | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-04-24_20:10:01-knfs-hammer-testing-basic-openstack | PASSED |
hadoop | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-04-24_23:12:02-hadoop-hammer---basic-openstack | PASSED |
rest | http://pulpito.ceph.com/teuthology-2016-04-26_13:35:25-rest-hammer---basic-smithi/ | PASSED |
ceph-deploy | http://pulpito.ceph.com/teuthology-2016-04-26_13:36:53-ceph-deploy-hammer-distro-basic-mira/ | FAILED #14223 same as in v0.94.6 see #13356 |
vps | http://pulpito.ceph.com/teuthology-2016-04-26_14:31:03-ceph-deploy-hammer-distro-basic-vps/ | |
upgrade/client-upgrade | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-04-24_21:11:01-upgrade:client-upgrade-hammer-distro-basic-openstack | PASSED |
upgrade/dumpling-firefly-x | http://pulpito.ceph.com/yuriw-2016-04-26_14:12:28-upgrade:dumpling-firefly-x-hammer-distro-basic-vps/ | PASSED (ubuntu) packaging problem, centos failed + vps out of memory |
http://pulpito.ceph.com/teuthology-2016-05-02_13:20:27-upgrade:dumpling-firefly-x-hammer-distro-basic-mira/ | ||
Suite | Runs/Reruns | Notes/Issues |
PASSED / FAILED | ||
Updated by Loïc Dachary almost 8 years ago
- Description updated (diff)
- Status changed from In Progress to Resolved