Tasks #13750
Status: Closed
Target version: infernalis v9.2.1
% Done: 0%
Description
Workflow
- Preparing the release
- Cutting the release
- Abhishek asks Sage whether a point release should be published: YES ( [10.02 20:53] <sage> [13:51:46] loicd: abhishekvrshny: let's do it (9.2.1) )
- Abhishek gets approval from all leads
- Sage writes and commits the release notes
- Abhishek informs Yuri that the branch is ready for testing DONE
- Yuri runs additional integration tests DONE
- If Yuri discovers new bugs that need to be backported urgently (i.e. their priority is set to Urgent), the release goes back to being prepared; it was not ready after all DONE
- Yuri informs Alfredo that the branch is ready for release DONE
- Alfredo creates the packages and sets the release tag DONE
Release information
- branch to build from: infernalis, commit:71f380a81c6870466e11a74a597f847494ba23e9
- version: v9.2.1
- type of release: point release
- where to publish the release: http://download.ceph.com/debian-infernalis and http://download.ceph.com/rpm-infernalis
Updated by Loïc Dachary over 8 years ago
- Copied from Tasks #13356: hammer v0.94.6 added
Updated by Loïc Dachary over 8 years ago
- Copied from deleted (Tasks #13356: hammer v0.94.6)
Updated by Abhishek Varshney over 8 years ago
git --no-pager log --format='%H %s' --graph ceph/infernalis..abhishekvrshny/infernalis-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 6392
- |\
- | + mon: should not set isvalid = true when cephx_verify_authorizer return false
- + Pull request 6395
- |\
- | + ceph-fuse.cc: While starting ceph-fuse, start the log thread first
- + Pull request 6396
- |\
- | + init-rbdmap: fix CMDPARAMS
- + Pull request 6397
- |\
- | + OSD:shall reset primary and up_primary fields when beginning a new past_interval.
- + Pull request 6449
- |\
- | + tests: test/librados/test.cc must create profile
- | + tests: destroy testprofile before creating one
- | + tests: add destroy_ec_profile{,_pp} helpers
- + Pull request 6474
- |\
- | + rbd: fix clone issue when we specify image feature
- + Pull request 6477
- |\
- | + librbd : fix enable objectmap feature issue
- + Pull request 6490
- |\
- | + rgw:swift use Civetweb ssl cannot get right url
- + Pull request 6500
- |\
- | + rbdmap: systemd support
- | + rbdmap: Move do_map and do_unmap shell functions to rbdmap script
- + Pull request 6621
- |\
- | + rgw: fix modification to index attrs when setting acls
- + Pull request 6626
- |\
- | + crush/mapper: ensure take bucket value is valid
- | + crush/mapper: ensure bucket id is valid before indexing buckets array
- + Pull request 6627
- |\
- | + Objecter: pool_op callback may hang forever.
- + Pull request 6628
- |\
- | + build/ops: rbd-replay moved from ceph-test-dbg to ceph-common-dbg
- + Pull request 6630
- |\
- | + librbd: resize should only update image size within header
- + Pull request 6632
- |\
- | + librbd: fixed deadlock while attempting to flush AIO requests
- | + tests: new test case to catch deadlock on RBD image refresh
- + Pull request 6633
- |\
- | + WorkQueue: new PointerWQ base class for ContextWQ
- + Pull request 6634
- |\
- | + krbd: remove deprecated --quiet param from udevadm
- | + run_cmd: close parent process console file descriptors
- + Pull request 6635
- |\
- | + rgw: fix swift API returning incorrect account metadata
- + Pull request 6636
- |\
- | + rgw: Add default quota config
- + Pull request 6650
- + rgw: fix reload on non Debian systems.
Updated by Abhishek Varshney over 8 years ago
rados
teuthology-openstack --verbose --key-name myself --key-filename ~/myself.pem --suite rados --subset $(expr $RANDOM % 18)/18 --suite-branch infernalis --ceph infernalis-backports --ceph-git-url https://github.com/abhishekvrshny/ceph.git --throttle 30 --simultaneous-jobs 20
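The `--subset $(expr $RANDOM % 18)/18` argument schedules one randomly chosen slice out of 18 of the rados suite. A quick sketch of the arithmetic (bash only, since `$RANDOM` is a bash builtin):

```shell
# $RANDOM yields 0..32767, so the modulo picks a slice index in 0..17;
# teuthology then runs roughly one eighteenth of the suite.
subset="$(expr $RANDOM % 18)/18"
echo "$subset"   # prints <index>/18 with index between 0 and 17
```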
- openstack fail http://149.202.164.49:8081/ubuntu-2015-11-26_08:17:44-rados-infernalis-backports---basic-openstack/
- environmental ansible failed to apt-get update, probably a network problem '{'target183245.teuthology': {'invocation': {'module_name': 'apt', 'module_args': ''}, 'failed': True, 'msg': 'Could not fetch updated apt files'}}'
- bug rados/singleton-nomsgr/{all/valgrind-leaks.yaml} fails if archive-on-error: true expected valgrind issues and found none
- bug --ceph-git-url must warn about obsolete stable branches 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 0' (good infernalis)
- bug rados/objectstore/objectstore.yaml common/Mutex.cc: 105: FAILED assert "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkdir $TESTDIR/ostest && cd $TESTDIR/ostest && ceph_test_objectstore'"
- environmental SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh
- rados/thrash-erasure-code-big/{cluster/12-osds.yaml fs/xfs.yaml msgr-failures/fastclose.yaml thrashers/morepggrow.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml}
- rados/thrash-erasure-code-big/{cluster/12-osds.yaml fs/btrfs.yaml msgr-failures/few.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml}
- rados/thrash-erasure-code-shec/{clusters/fixed-4.yaml fs/xfs.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml}
- known bug workunit: assumes git.ceph.com "wget -q -O /home/ubuntu/cephtest/admin_socket_client.0/objecter_requests -- 'http://git.ceph.com/?p=ceph.git;a=blob_plain;f=src/test/admin_socket/objecter_requests;hb=infernalis-backports' && chmod u=rx -- /home/ubuntu/cephtest/admin_socket_client.0/objecter_requests"
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/osd-delay.yaml thrashers/pggrow.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/osd-delay.yaml thrashers/default.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/few.yaml thrashers/mapgap.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/few.yaml thrashers/morepggrow.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/few.yaml thrashers/pggrow.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/fastclose.yaml thrashers/mapgap.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/fastclose.yaml thrashers/morepggrow.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/fastclose.yaml thrashers/pggrow.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr/simple.yaml msgr-failures/fastclose.yaml thrashers/default.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/osd-delay.yaml thrashers/mapgap.yaml workloads/admin_socket_objecter_requests.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/osd-delay.yaml thrashers/morepggrow.yaml workloads/admin_socket_objecter_requests.yaml}
- bug openstack: rados/.../morepggrow.yaml may need more disk "2015-11-26 11:11:39.001542 osd.1 149.202.169.77:6809/13845 245 : cluster [WRN] OSD near full (90%)" in cluster log
- known bug flipping the overlay from forward to seems to reorder writes Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 10000 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op copy_from 50 --op write_excl 50 --op delete 50 --pool base'
- known bug rgw: DNSError on OpenStack '/home/ubuntu/cephtest/s3-tests/virtualenv/bin/s3tests-test-readwrite'
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/few.yaml thrashers/pggrow.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/fastclose.yaml thrashers/default.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/fastclose.yaml thrashers/mapgap.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/fastclose.yaml thrashers/morepggrow.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/fastclose.yaml thrashers/pggrow.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/osd-delay.yaml thrashers/default.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/osd-delay.yaml thrashers/mapgap.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/osd-delay.yaml thrashers/morepggrow.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr/simple.yaml msgr-failures/osd-delay.yaml thrashers/pggrow.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/simple.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/simple.yaml msgr-failures/few.yaml thrashers/mapgap.yaml workloads/rgw_snaps.yaml}
- rados/thrash/{hobj-sort.yaml 0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} fs/ext4.yaml msgr/simple.yaml msgr-failures/few.yaml thrashers/morepggrow.yaml workloads/rgw_snaps.yaml}
paddles=149.202.164.49:8080
run=ubuntu-2015-11-26_08:17:44-rados-infernalis-backports---basic-openstack
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | select(.description | contains("rgw") | not) | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-openstack --verbose --key-name myself --key-filename ~/myself.pem --suite rados --filter="$filter" --suite-branch infernalis --ceph infernalis-backports --ceph-git-url https://github.com/abhishekvrshny/ceph.git --throttle 30 --simultaneous-jobs 20
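The `jq` pipeline above builds the `--filter` value from the descriptions of dead or failed non-rgw jobs. Its selection logic can be exercised against a fabricated paddles-style response (job names invented for the demo; `-r` added here only to print the description unquoted):

```shell
# Fabricated run document: one passing, one failed and one dead rgw job.
# Only the failed non-rgw description should survive the two selects.
printf '%s' '{"jobs":[
  {"status":"pass","description":"rados/ok.yaml"},
  {"status":"fail","description":"rados/broken.yaml"},
  {"status":"dead","description":"rgw/hung.yaml"}]}' |
jq -r '.jobs[] | select(.status == "dead" or .status == "fail")
              | select(.description | contains("rgw") | not) | .description'
# prints: rados/broken.yaml
```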
Updated by Loïc Dachary over 8 years ago
rgw
teuthology-openstack --verbose --key-name myself --key-filename ~/myself.pem --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch infernalis --ceph infernalis-backports --ceph-git-url https://github.com/abhishekvrshny/ceph.git --throttle 30 --simultaneous-jobs 20
- openstack fail http://158.69.79.66:8081/ubuntu-2015-11-26_16:04:28-rgw-infernalis-backports---basic-openstack/
- '{'target082197.teuthology': {'invocation': {'module_name': 'apt', 'module_args': ''}, 'failed': True, 'msg': 'Could not fetch updated apt files'}}'
- '{'target082193.teuthology': {'invocation': {'module_name': 'apt', 'module_args': ''}, 'failed': True, 'msg': 'Could not fetch updated apt files'}}'
- '/home/ubuntu/cephtest/s3-tests/virtualenv/bin/s3tests-test-roundtrip'
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_roundtrip.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/ext4.yaml rgw_pool_type/ec-profile.yaml tasks/rgw_roundtrip.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/btrfs.yaml rgw_pool_type/ec.yaml tasks/rgw_roundtrip.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/xfs.yaml rgw_pool_type/replicated.yaml tasks/rgw_roundtrip.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/ext4.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_roundtrip.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml rgw_pool_type/ec-profile.yaml tasks/rgw_roundtrip.yaml}
- '{'target082198.teuthology': {'invocation': {'module_name': 'apt', 'module_args': ''}, 'failed': True, 'msg': 'Could not fetch updated apt files'}}'
- '/home/ubuntu/cephtest/s3-tests/virtualenv/bin/s3tests-test-readwrite'
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/btrfs.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_readwrite.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/xfs.yaml rgw_pool_type/ec-profile.yaml tasks/rgw_readwrite.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/ext4.yaml rgw_pool_type/ec.yaml tasks/rgw_readwrite.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml rgw_pool_type/replicated.yaml tasks/rgw_readwrite.yaml}
- rgw/multifs/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/ec-cache.yaml tasks/rgw_readwrite.yaml}
- saw valgrind issues
- rgw/verify/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec-profile.yaml tasks/rgw_s3tests.yaml validater/lockdep.yaml}
- rgw/verify/{overrides.yaml clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/replicated.yaml tasks/rgw_s3tests_multiregion.yaml validater/lockdep.yaml}
teuthology-suite --priority 101 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch infernalis --distro ubuntu --email loic@dachary.org --ceph infernalis-backports --machine-type plana,burnupi,mira
Updated by Loïc Dachary over 8 years ago
rbd
teuthology-openstack --verbose --key-name myself --key-filename ~/myself.pem --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch infernalis --ceph infernalis-backports --ceph-git-url https://github.com/abhishekvrshny/ceph.git --throttle 30 --simultaneous-jobs 20
- openstack fail http://167.114.242.172:8081/ubuntu-2015-11-26_17:48:25-rbd-infernalis-backports---basic-openstack/
- environmental '{'target243236.teuthology': {'invocation': {'module_name': 'apt', 'module_args': ''}, 'failed': True, 'msg': 'Could not fetch updated apt files'}, 'target243237.teuthology': {'invocation': {'module_name': 'apt', 'module_args': ''}, 'failed': True, 'msg': 'Could not fetch updated apt files'}}'
- bug rbd_fio.py does not check check_package_signatures 'sudo apt-get -y install librbd-dev'
- rbd/librbd/{cache/none.yaml cachepool/small.yaml clusters/fixed-3.yaml copy-on-read/off.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/rbd_fio.yaml}
- rbd/librbd/{cache/writethrough.yaml cachepool/small.yaml clusters/fixed-3.yaml copy-on-read/off.yaml fs/btrfs.yaml msgr-failures/few.yaml workloads/rbd_fio.yaml}
paddles=167.114.242.172:8080
run=ubuntu-2015-11-26_17:48:25-rbd-infernalis-backports---basic-openstack
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | select(.description | contains("rgw") | not) | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-openstack --verbose --key-name myself --key-filename ~/myself-sbg1.pem --suite rbd --filter="$filter" --suite-branch infernalis --ceph infernalis-backports --ceph-git-url https://github.com/abhishekvrshny/ceph.git --throttle 30 --simultaneous-jobs 20
Updated by Abhishek Varshney over 8 years ago
upgrade
teuthology-openstack --verbose --key-name myself --key-filename ~/myself-sbg1.pem --subset 4/5 --verbose --suite upgrade/infernalis --ceph-qa-suite-git-url https://github.com/ceph/ceph-qa-suite.git --suite-branch infernalis --ceph-git-url https://github.com/ceph/ceph.git --ceph infernalis-backports
- openstack pass http://167.114.236.49:8081/ubuntu-2015-11-29_18:07:17-upgrade:infernalis-infernalis-backports---basic-openstack/
teuthology-openstack --verbose --key-name myself --key-filename ~/myself.pem --verbose --suite upgrade/infernalis/point-to-point/point-to-point.yaml --ceph-qa-suite-git-url https://github.com/ceph/ceph-qa-suite.git --suite-branch infernalis --ceph-git-url https://github.com/ceph/ceph.git --ceph infernalis-backports distros/all/centos_7.0.yaml
Updated by Abhishek Varshney over 8 years ago
fs
teuthology-openstack --verbose --key-name myself --key-filename ~/myself-bhs1.pem --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch infernalis --ceph infernalis-backports --ceph-git-url https://github.com/abhishekvrshny/ceph.git --throttle 30 --simultaneous-jobs 40
- openstack fail http://158.69.71.40:8081/ubuntu-2015-12-01_17:18:47-fs-infernalis-backports---basic-openstack/
- "{'target075105.teuthology': {'invocation': {'module_name': 'apt', 'module_args': ''}, 'failed': True, 'msg': 'Could not fetch updated apt files'}}"
- Test failure: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap)
- Test failure: test_client_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=97293b554026cff96da59760aeb577660867a2ac TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/kernel_untar_build.sh'
paddles=158.69.71.40:8080
run=ubuntu-2015-12-01_17:18:47-fs-infernalis-backports---basic-openstack
eval filter=$(curl --silent http://$paddles/runs/$run/ | jq '.jobs[] | select(.status == "dead" or .status == "fail") | select(.description | contains("rgw") | not) | .description' | while read description ; do echo -n $description, ; done | sed -e 's/,$//')
teuthology-openstack --verbose --key-name myself --key-filename ~/myself-bhs1.pem --suite fs --filter="$filter" --suite-branch infernalis --ceph infernalis-backports --ceph-git-url https://github.com/abhishekvrshny/ceph.git --throttle 30 --simultaneous-jobs 40
- openstack fail http://158.69.71.40:8081/ubuntu-2015-12-02_06:58:41-fs-infernalis-backports---basic-openstack/
- Test failure: test_client_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/infernalis..ceph/infernalis-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge (\d+)/) { s|\w+\s+Merge (\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 6397
- |\
- | + OSD:shall reset primary and up_primary fields when beginning a new past_interval.
- + Pull request 6449
- |\
- | + tests: test/librados/test.cc must create profile
- | + tests: destroy testprofile before creating one
- | + tests: add destroy_ec_profile{,_pp} helpers
- + Pull request 6490
- |\
- | + rgw:swift use Civetweb ssl cannot get right url
- + Pull request 6621
- |\
- | + rgw: fix modification to index attrs when setting acls
- + Pull request 6626
- |\
- | + crush/mapper: ensure take bucket value is valid
- | + crush/mapper: ensure bucket id is valid before indexing buckets array
- + Pull request 6627
- |\
- | + Objecter: pool_op callback may hang forever.
- + Pull request 6628
- |\
- | + build/ops: rbd-replay moved from ceph-test-dbg to ceph-common-dbg
- + Pull request 6629
- |\
- | + osd: move misdirected op check from OSD thread to PG thread
- | + osd: ensure op rwm flags are checked before they are initialized
- + Pull request 6635
- |\
- | + rgw: fix swift API returning incorrect account metadata
- + Pull request 6636
- |\
- | + rgw: Add default quota config
- + Pull request 6694
- |\
- | + osd: fix send_failures() locking
- + Pull request 6752
- |\
- | + mds: consider client's flushing caps when choosing lock states
- | + client: cancel revoking caps when reconnecting the mds
- | + mds: choose EXCL state for filelock when client has Fb capability
- + Pull request 6833
- |\
- | + init-ceph: fix systemd-run cant't start ceph daemon sometimes
- + Pull request 6836
- |\
- | + auth/cephx: large amounts of log are produced by osd if the auth of osd is deleted when the osd is running, the osd will produce large amounts of log.
- + Pull request 6840
- |\
- | + Objecter: remove redundant result-check of _calc_target in _map_session.
- | + Objecter: potential null pointer access when do pool_snap_list.
- + Pull request 6846
- |\
- | + FileStore: potential memory leak if _fgetattrs fails
- + Pull request 6849
- |\
- | + osd: call on_new_interval on newly split child PG
- + Pull request 6851
- |\
- | + osd: Test osd_find_best_info_ignore_history_les config in another assert
- + Pull request 6852
- |\
- | + build/ops: systemd ceph-disk unit must not assume /bin/flock
- + Pull request 6853
- |\
- | + client: use null snapc to check pool permission
- + Pull request 6880
- |\
- | + ceph-disk: list accepts absolute dev names
- | + ceph-disk: display OSD details when listing dmcrypt devices
- | + tests: limit ceph-disk unit tests to test dir
- | + ceph-disk: factorize duplicated dmcrypt mapping
- | + ceph-disk: fix regression in cciss devices names
- + Pull request 6882
- |\
- | + tests: verify it is possible to reuse an OSD id
- + Pull request 6907
- |\
- | + mon/PGMonitor: MAX AVAIL is 0 if some OSDs' weight is 0
- + Pull request 6981
- |\
- | + librbd: fix merge-diff for >2GB diff-files
- + Pull request 6993
- |\
- | + log: Log.cc: Assign LOG_DEBUG priority to syslog calls
- + Pull request 7079
- |\
- | + librbd: properly handle replay of snap remove RPC message
- + Pull request 7080
- |\
- | + tests: new integration test for validating new RBD pools
- | + librbd: optionally validate RBD pool configuration
- + Pull request 7406
- |\
- | + librbd: ImageWatcher shouldn't block the notification thread
- | + librados_test_stub: watch/notify now behaves similar to librados
- | + tests: simulate writeback flush during snap create
- + Pull request 7421
- |\
- | + osd/PG: For performance start scrub scan at pool to skip temp objects
- | + osd/OSD: clear_temp_objects() include removal of Hammer temp objects
- | + osd: Improve log message which isn't about a particular shard
- + Pull request 7422
- |\
- | + rgw: add a method to purge all associate keys when removing a subuser
- + Pull request 7423
- |\
- | + rgw: radosgw-admin bucket check --fix not work
- + Pull request 7424
- |\
- | + Fixing NULL pointer dereference
- + Pull request 7426
- |\
- | + rbd: remove canceled tasks from timer thread
- + Pull request 7427
- |\
- | + rbd-replay: handle EOF gracefully
- + Pull request 7428
- |\
- | + cls_rbd: enable object map checksums for object_map_save
- + Pull request 7429
- |\
- | + fsx: checkout old version until it compiles properly on miras
- + Pull request 7431
- |\
- | + mds: properly set STATE_STRAY/STATE_ORPHAN for stray dentry/inode
- | + mon: don't require OSD W for MRemoveSnaps
- + Pull request 7484
- + librbd: ensure librados callbacks are flushed prior to destroying image
- + librbd: simplify IO flush handling
- + WorkQueue: PointerWQ drain no longer waits for other queues
- + test: new librbd flatten test case
Updated by Loïc Dachary about 8 years ago
rados
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 400)/400 --suite-branch infernalis --email loic@dachary.org --ceph infernalis-backports --machine-type smithi,mira
2016-02-04 00:33:20,525.525 INFO:teuthology.suite:Passed subset=361/400
- fail http://pulpito.ceph.com/loic-2016-02-04_00:33:20-rados-infernalis-backports---basic-multi/
- environmental noise '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-04T09:42:10.950344+00:00 smithi011 kernel: [ INFO: possible circular locking dependency detected ]' in syslog
- rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml thrashers/morepggrow.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml}
- rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml msgr/simple.yaml msgr-failures/mon-delay.yaml thrashers/sync.yaml workloads/rados_api_tests.yaml}
- rados/thrash-erasure-code-shec/{clusters/fixed-4.yaml fs/xfs.yaml msgr-failures/fastclose.yaml thrashers/default.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml}
- rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml thrashers/fastread.yaml workloads/ec-small-objects.yaml}
- rados/thrash-erasure-code-shec/{clusters/fixed-4.yaml fs/xfs.yaml msgr-failures/osd-delay.yaml thrashers/default.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml}
- rados/thrash-erasure-code-shec/{clusters/fixed-4.yaml fs/xfs.yaml msgr-failures/few.yaml thrashers/default.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml}
- rados/thrash-erasure-code-big/{cluster/12-osds.yaml fs/xfs.yaml msgr-failures/fastclose.yaml thrashers/mapgap.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml}
- known bug sudo yum install ceph-radosgw -y 'sudo yum install ceph-radosgw -y'
- environmental noise valgrind: mmap(...) failed in UME with error 12 (Cannot allocate memory). 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mon.b.log --time-stamp=yes --tool=memcheck --leak-check=full --show-reachable=yes ceph-mon -f -i b'
- rados/verify/{1thrash/default.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr/simple.yaml msgr-failures/few.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
- rados/verify/{1thrash/none.yaml clusters/fixed-2.yaml fs/btrfs.yaml msgr/simple.yaml msgr-failures/few.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml}
- environmental noise '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-04T09:42:10.950344+00:00 smithi011 kernel: [ INFO: possible circular locking dependency detected ]' in syslog
Re-running the failed tests:
- fail http://pulpito.ceph.com/loic-2016-02-04_22:33:36-rados-infernalis-backports---basic-multi/
- known bug sudo yum install ceph-radosgw -y ceph version 0.94.5-298-g2ca3c3e-1trusty was not installed, found 0.94.5-294-gf3bab8c-1trusty.
- environmental noise cannot open /dev/sdd: Device or resource busy / 'yes | sudo mkfs.xfs -f -i size=2048 -f /dev/sdd'
Re-running the failed tests:
- fail http://pulpito.ceph.com/loic-2016-02-07_19:37:09-rados-infernalis-backports---basic-multi/
- environmental noise valgrind: mmap(...) failed in UME with error 12 (Cannot allocate memory). 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/osd.4.log --time-stamp=yes --tool=memcheck ceph-osd -f -i 4'
- known bug sudo yum install ceph-radosgw -y 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'sudo grep -c \'"\'"\'load_pgs. skipping PG\'"\'"\' /var/log/ceph/ceph-osd.0.log\''
Updated by Loïc Dachary about 8 years ago
upgrade
teuthology-suite --verbose --suite upgrade/infernalis --suite-branch infernalis --ceph infernalis-backports --machine-type vps --priority 1000 machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-02-04_00:37:12-upgrade:infernalis-infernalis-backports---basic-vps/
- new bug radosgw-admin -n client.0 user create crashes Command crashed: "adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid foo.client.0 --display-name 'Mr. foo.client.0' --access-key EUHWGVDIRGMKIRDSHADQ --secret DZaXOLpwIosVlrOw6TRTpHyDoqUBeNyVFOQ5kPKuUjkL2Z74vVJh4Q== --email foo.client.0+test@test.test"
- upgrade:infernalis/older/{0-cluster/start.yaml 1-install/v9.2.0.yaml 2-workload/{blogbench.yaml rbd.yaml s3tests.yaml testrados.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/centos_7.2.yaml}
- upgrade:infernalis/older/{0-cluster/start.yaml 1-install/latest_hammer_release.yaml 2-workload/{blogbench.yaml rbd.yaml s3tests.yaml testrados.yaml} 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/centos_7.2.yaml}
- environmental noise 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --mkfs --mkkey -i 1 --monmap /home/ubuntu/cephtest/monmap' (filestore(/var/lib/ceph/osd/ceph-1) mount initial op seq is 0; something is wrong)
- new bug radosgw-admin -n client.0 user create crashes Command crashed: "adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid foo.client.0 --display-name 'Mr. foo.client.0' --access-key EUHWGVDIRGMKIRDSHADQ --secret DZaXOLpwIosVlrOw6TRTpHyDoqUBeNyVFOQ5kPKuUjkL2Z74vVJh4Q== --email foo.client.0+test@test.test"
Verifying the upgrade on the infernalis branch
teuthology-suite --verbose --suite upgrade/infernalis --suite-branch infernalis --ceph infernalis --machine-type vps --priority 1000 machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-02-05_13:57:02-upgrade:infernalis-infernalis---basic-vps/
- new bug radosgw-admin -n client.0 user create crashes Command crashed: "adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid foo.client.0 --display-name 'Mr. foo.client.0' --access-key ZOUFBFKFEQZJAAVLFYMR --secret MnYUAa94Npd56oqKFsI8YPhWdYpe9JZ6PEUBanAUQ3cE0AOdMl1gqg== --email foo.client.0+test@test.test"
- upgrade:infernalis/older/{0-cluster/start.yaml 1-install/latest_hammer_release.yaml 2-workload/{blogbench.yaml rbd.yaml s3tests.yaml testrados.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/centos_7.2.yaml}
- upgrade:infernalis/older/{0-cluster/start.yaml 1-install/latest_hammer_release.yaml 2-workload/{blogbench.yaml rbd.yaml s3tests.yaml testrados.yaml} 3-upgrade-sequence/upgrade-osd-mon-mds.yaml 4-final/{monthrash.yaml osdthrash.yaml testrados.yaml} distros/centos_7.2.yaml}
- new bug radosgw-admin -n client.0 user create crashes Command crashed: "adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user create --uid foo.client.0 --display-name 'Mr. foo.client.0' --access-key ZOUFBFKFEQZJAAVLFYMR --secret MnYUAa94Npd56oqKFsI8YPhWdYpe9JZ6PEUBanAUQ3cE0AOdMl1gqg== --email foo.client.0+test@test.test"
Updated by Loïc Dachary about 8 years ago
powercycle¶
teuthology-suite -l2 -v -c infernalis-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
- fail http://pulpito.ceph.com/loic-2016-02-04_00:38:58-powercycle-infernalis-backports-testing-basic-smithi/
- environmental noise 'sudo yum install ceph-debuginfo -y'
Re-running
- fail http://pulpito.ceph.com/loic-2016-02-04_23:17:52-powercycle-infernalis-backports-testing-basic-smithi/
- known bug BUG: held lock freed! '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-02-05T12:25:33.849462+00:00 smithi013 kernel: [ BUG: held lock freed! ]' in syslog
Re-running
- fail http://pulpito.ceph.com/loic-2016-02-05_07:14:45-powercycle-infernalis-backports-testing-basic-smithi/ (same error as above)
Updated by Loïc Dachary about 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch infernalis --distro ubuntu --email loic@dachary.org --ceph infernalis-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=4/5
- fail http://pulpito.ceph.com/loic-2016-02-04_00:40:51-rgw-infernalis-backports---basic-multi/
- known bug swift/test/functional -v -a '!fails_on_rgw' "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
- environmental noise valgrind: mmap(...) failed in UME with error 12 (Cannot allocate memory). 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mon.b.log --time-stamp=yes --tool=memcheck --leak-check=full --show-reachable=yes ceph-mon -f -i b'
- rgw/verify/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec.yaml tasks/rgw_s3tests_multiregion.yaml validater/valgrind.yaml}
- rgw/verify/{overrides.yaml clusters/fixed-2.yaml frontend/civetweb.yaml fs/btrfs.yaml msgr-failures/few.yaml rgw_pool_type/ec.yaml tasks/rgw_s3tests.yaml validater/valgrind.yaml}
- suspicious pull request rgw: the swift key remains after removing a subuser 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 subuser rm --subuser foo:foo1'
- rgw/singleton/{overrides.yaml all/radosgw-admin.yaml frontend/apache.yaml rgw_pool_type/replicated.yaml}
- rgw/singleton/{overrides.yaml all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml rgw_pool_type/ec.yaml}
- rgw/singleton/{overrides.yaml all/radosgw-admin-data-sync.yaml frontend/apache.yaml rgw_pool_type/ec-profile.yaml}
- known bug Seg fault 9.0.3-1845-gf1ead76 : RGWRESTSimpleRequest::forward_request(RGWAccessKey&, req_info&, unsigned long, ceph::buffer::list*, ceph::buffer::list*)+0x74) HTTPConnectionPool(host='smithi053.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f0674ff6c10>: Failed to establish a new connection: [Errno 111] Connection refused',))
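The `--subset $(expr $RANDOM % 5)/5` fragment in the rgw command above schedules only one randomly chosen fifth of the suite matrix (the `INFO:teuthology.suite:Passed subset=4/5` line echoes which slice was picked). A minimal sketch of the idiom, assuming bash (where `$RANDOM` is a builtin):

```shell
#!/bin/bash
# Sketch of the --subset idiom used in this ticket: build a random 1/N
# slice identifier to pass to teuthology-suite --subset.
# $RANDOM yields an integer in 0..32767, so the modulo gives 0..N-1.
N=5
subset="$(expr $RANDOM % $N)/$N"
echo "$subset"   # e.g. 4/5
```

The same idiom appears later with `% 20)/20` and `% 400)/400`; only the denominator (the number of slices) changes.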
Updated by Loïc Dachary about 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 20)/20 --suite-branch infernalis --distro ubuntu --email loic@dachary.org --ceph infernalis-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=11/20
- fail http://pulpito.ceph.com/loic-2016-02-04_00:43:44-rbd-infernalis-backports---basic-multi/
- environmental noise valgrind: mmap(...) failed in UME with error 12 (Cannot allocate memory). 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a663786655b852e8e5f22e6757a43586602cce8d TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=1 VALGRIND=memcheck adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
- rbd/valgrind/{base/install.yaml clusters/fixed-1.yaml fs/btrfs.yaml workloads/python_api_tests.yaml}
- rbd/valgrind/{base/install.yaml clusters/fixed-1.yaml fs/btrfs.yaml workloads/python_api_tests_with_object_map.yaml}
- rbd/valgrind/{base/install.yaml clusters/fixed-1.yaml fs/btrfs.yaml workloads/c_api_tests_with_object_map.yaml}
- environmental noise valgrind: mmap(...) failed in UME with error 12 (Cannot allocate memory). 'mkdir
Updated by Loïc Dachary about 8 years ago
fs¶
teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 20)/20 --suite-branch infernalis --distro ubuntu --email loic@dachary.org --ceph infernalis-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=13/20
- fail http://pulpito.ceph.com/loic-2016-02-04_00:45:49-fs-infernalis-backports---basic-multi/
- known bug Client::_fsync() on a given file does not wait unsafe requests that create/modify the file Test failure: test_client_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
Re-running the failed test
Re-running the failed test on infernalis
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/infernalis..infernalis-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 6397
- |\
- | + OSD:shall reset primary and up_primary fields when beginning a new past_interval.
- + Pull request 6449
- |\
- | + tests: test/librados/test.cc must create profile
- | + tests: destroy testprofile before creating one
- | + tests: add destroy_ec_profile{,_pp} helpers
- + Pull request 6490
- |\
- | + rgw:swift use Civetweb ssl cannot get right url
- + Pull request 6621
- |\
- | + rgw: fix modification to index attrs when setting acls
- + Pull request 6626
- |\
- | + crush/mapper: ensure take bucket value is valid
- | + crush/mapper: ensure bucket id is valid before indexing buckets array
- + Pull request 6627
- |\
- | + Objecter: pool_op callback may hang forever.
- + Pull request 6628
- |\
- | + build/ops: rbd-replay moved from ceph-test-dbg to ceph-common-dbg
- + Pull request 6629
- |\
- | + osd: move misdirected op check from OSD thread to PG thread
- | + osd: ensure op rwm flags are checked before they are initialized
- + Pull request 6635
- |\
- | + rgw: fix swift API returning incorrect account metadata
- + Pull request 6636
- |\
- | + rgw: Add default quota config
- + Pull request 6694
- |\
- | + osd: fix send_failures() locking
- + Pull request 6752
- |\
- | + mds: consider client's flushing caps when choosing lock states
- | + client: cancel revoking caps when reconnecting the mds
- | + mds: choose EXCL state for filelock when client has Fb capability
- + Pull request 6833
- |\
- | + init-ceph: fix systemd-run cant't start ceph daemon sometimes
- + Pull request 6836
- |\
- | + auth/cephx: large amounts of log are produced by osd if the auth of osd is deleted when the osd is running, the osd will produce large amounts of log.
- + Pull request 6840
- |\
- | + Objecter: remove redundant result-check of _calc_target in _map_session.
- | + Objecter: potential null pointer access when do pool_snap_list.
- + Pull request 6846
- |\
- | + FileStore: potential memory leak if _fgetattrs fails
- + Pull request 6849
- |\
- | + osd: call on_new_interval on newly split child PG
- + Pull request 6851
- |\
- | + osd: Test osd_find_best_info_ignore_history_les config in another assert
- + Pull request 6852
- |\
- | + build/ops: systemd ceph-disk unit must not assume /bin/flock
- + Pull request 6853
- |\
- | + client: use null snapc to check pool permission
- + Pull request 6882
- |\
- | + tests: verify it is possible to reuse an OSD id
- + Pull request 6907
- |\
- | + mon/PGMonitor: MAX AVAIL is 0 if some OSDs' weight is 0
- + Pull request 6981
- |\
- | + librbd: fix merge-diff for >2GB diff-files
- + Pull request 6993
- |\
- | + log: Log.cc: Assign LOG_DEBUG priority to syslog calls
- + Pull request 7079
- |\
- | + librbd: properly handle replay of snap remove RPC message
- + Pull request 7080
- |\
- | + tests: new integration test for validating new RBD pools
- | + librbd: optionally validate RBD pool configuration
- + Pull request 7406
- |\
- | + librbd: ImageWatcher shouldn't block the notification thread
- | + librados_test_stub: watch/notify now behaves similar to librados
- | + tests: simulate writeback flush during snap create
- + Pull request 7421
- |\
- | + osd/PG: For performance start scrub scan at pool to skip temp objects
- | + osd/OSD: clear_temp_objects() include removal of Hammer temp objects
- | + osd: Improve log message which isn't about a particular shard
- + Pull request 7423
- |\
- | + rgw: radosgw-admin bucket check --fix not work
- + Pull request 7424
- |\
- | + Fixing NULL pointer dereference
- + Pull request 7426
- |\
- | + rbd: remove canceled tasks from timer thread
- + Pull request 7427
- |\
- | + rbd-replay: handle EOF gracefully
- + Pull request 7428
- |\
- | + cls_rbd: enable object map checksums for object_map_save
- + Pull request 7429
- |\
- | + fsx: checkout old version until it compiles properly on miras
- + Pull request 7431
- |\
- | + mds: properly set STATE_STRAY/STATE_ORPHAN for stray dentry/inode
- | + mon: don't require OSD W for MRemoveSnaps
- + Pull request 7484
- |\
- | + librbd: ensure librados callbacks are flushed prior to destroying image
- | + librbd: simplify IO flush handling
- | + WorkQueue: PointerWQ drain no longer waits for other queues
- | + test: new librbd flatten test case
- + Pull request 7514
- |\
- | + mon: compact full epochs also
- + Pull request 7543
- + rgw-admin: document orphans commands in usage
Updated by Loïc Dachary about 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch infernalis --email loic@dachary.org --ceph infernalis-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=4/5
- fail http://pulpito.ceph.com/loic-2016-02-07_21:00:09-rgw-infernalis-backports---basic-multi/
- environmental noise valgrind: mmap(...) failed in UME with error 12 (Cannot allocate memory). 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mon.b.log --time-stamp=yes --tool=memcheck --leak-check=full --show-reachable=yes ceph-mon -f -i b'
- known bug swift/test/functional -v -a '!fails_on_rgw' "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw'"
Updated by Loïc Dachary about 8 years ago
git --no-pager log --format='%H %s' --graph ceph/infernalis..infernalis-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 6490
- |\
- | + rgw:swift use Civetweb ssl cannot get right url
- + Pull request 6621
- |\
- | + rgw: fix modification to index attrs when setting acls
- + Pull request 6629
- |\
- | + osd: move misdirected op check from OSD thread to PG thread
- | + osd: ensure op rwm flags are checked before they are initialized
- + Pull request 6635
- |\
- | + rgw: fix swift API returning incorrect account metadata
- + Pull request 6636
- |\
- | + rgw: Add default quota config
- + Pull request 7423
- |\
- | + rgw: radosgw-admin bucket check --fix not work
- + Pull request 7424
- |\
- | + Fixing NULL pointer dereference
- + Pull request 7514
- |\
- | + mon: compact full epochs also
- + Pull request 7557
- |\
- | + Compare parted output with the dereferenced path
- | + Add test cases to validate symlinks pointing to devs
- + Pull request 7558
- |\
- | + osd: clear pg_stat_queue after stopping pgs
- + Pull request 7559
- |\
- | + ReplicatedPG: fix sparse-read result code checking logic
- + Pull request 7561
- |\
- | + osd: recency should look at newest (not oldest) hitsets
- | + osd/ReplicatedPG: fix promotion recency logic
- + Pull request 7562
- |\
- | + mon: add mon_config_key prefix when sync full
- + Pull request 7563
- |\
- | + OSD::consume_map: correctly remove pg shards which are no longer acting
- + Pull request 7564
- + global: do not start two daemons with a single pid-file
- + test: add unitest test_pidfile.sh
- + global/pidfile: do not start two daemons with a single pid-file
Updated by Loïc Dachary about 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 400)/400 --suite-branch infernalis --email loic@dachary.org --ceph infernalis-backports --machine-type smithi,mira
INFO:teuthology.suite:Passed subset=38/400
- fail http://pulpito.ceph.com/loic-2016-02-12_12:43:40-rados-infernalis-backports---basic-multi/
- 'yes | sudo mkfs.btrfs -m single -l 32768 -n 32768 -f /dev/sdf'
- Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0'
- rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml supported/ubuntu_14.04.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
- rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml supported/ubuntu_14.04.yaml thrashers/mapgap.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
- rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/osd-delay.yaml supported/centos_7.2.yaml thrashers/default.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
- 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4610fd8be30a1022a42feca80884cf8e5640a9d8 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/osdc/stress_objectcacher.sh'
- Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
- '/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install qemu-system-x86 libfcgi dmapi tgt' failed: E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable) E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
- 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --mkfs --mkkey -i 10 --monmap /home/ubuntu/cephtest/monmap'
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'
- 'sudo tar czf /tmp/tmp1Kgas8 -C /var/log/ceph -- .'
Re-running failed tests
Updated by Loïc Dachary about 8 years ago
upgrade¶
teuthology-suite --verbose --suite upgrade/infernalis --suite-branch infernalis --ceph infernalis-backports --machine-type vps --priority 1000 machine_types/vps.yaml
Updated by Loïc Dachary about 8 years ago
fs¶
Verifying if the bug found at http://tracker.ceph.com/issues/13750#note-19 is in infernalis. See also http://tracker.ceph.com/issues/13583#note-5
filter='fs/recovery/{clusters/2-remote-clients.yaml debug/mds_client.yaml dirfrag/frag_enable.yaml mounts/ceph-fuse.yaml tasks/client-limits.yaml}' teuthology-suite --priority 101 --suite fs --filter="$filter" --suite-branch infernalis --email loic@dachary.org --ceph infernalis --machine-type smithi,mira
Updated by Yuri Weinstein about 8 years ago
QE VALIDATION (STARTED 2/18/16)¶
Re-run command lines and filters are captured in http://pad.ceph.com/p/infernalis_v9.2.1_QE_validation_notes
Suite | Runs/Reruns | Notes/Issues |
fs | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-19_17:43:52-fs-infernalis---basic-openstack/ | PASSED |
| http://pulpito.ceph.com/teuthology-2016-02-20_09:17:15-fs-infernalis---basic-smithi/ | |
krbd | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-19_17:44:22-krbd-infernalis-testing-basic-openstack/ | FAILED env issues #14848, #14850 Approved for release by Jason |
| http://pulpito.ceph.com/teuthology-2016-02-20_09:20:03-krbd-infernalis-testing-basic-smithi/ | |
| http://pulpito.ceph.com/teuthology-2016-02-22_08:43:11-krbd-infernalis-testing-basic-smithi/ | |
| http://pulpito.ceph.com/teuthology-2016-02-23_10:25:55-krbd-infernalis-testing-basic-smithi/ | fixed #14850 (use "--filter-out xfstests" till this fixed) |
knfs | http://pulpito.ceph.com/teuthology-2016-02-19_20:46:37-knfs-infernalis-testing-basic-smithi/ | PASSED |
| http://pulpito.ceph.com/teuthology-2016-02-20_09:25:27-knfs-infernalis-testing-basic-smithi/ | |
hadoop | http://pulpito.ceph.com/teuthology-2016-02-20_09:26:23-hadoop-infernalis---basic-mira/ | PASSED |
rest | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-24_22:13:15-ceph-disk-infernalis-testing-basic-openstack/ | PASSED |
ceph-disk | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-24_22:13:15-ceph-disk-infernalis-testing-basic-openstack/ | PASSED |
ceph-deploy | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-19_17:45:28-ceph-deploy-infernalis-distro-basic-openstack/ | FAILED #14223 Approved for release by Sage |
upgrade/client-upgrade | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-19_17:34:21-upgrade:client-upgrade-infernalis-distro-basic-openstack/ | PASSED |
upgrade/hammer-x | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-23_17:02:02-upgrade:hammer-x-infernalis-distro-basic-openstack/ | PASSED |
upgrade/firefly-hammer-x | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-18_11:18:03-upgrade:firefly-hammer-x-infernalis-distro-basic-openstack/ | PASSED |
upgrade/infernalis | http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-18_16:39:47-upgrade:infernalis-infernalis-distro-basic-openstack/ | PASSED |
Updated by Loïc Dachary about 8 years ago
- Description updated (diff)
- Status changed from In Progress to Resolved