Tasks #16344
jewel v10.2.3 (Closed)
% Done: 0%
Description
Workflow¶
- Preparing the release
- Cutting the release
- Abhishek asks Sage if a point release should be published: YES
- Abhishek gets approval from all leads
- Sage writes and commits the release notes
- Abhishek informs Yuri that the branch is ready for testing
- Yuri runs additional integration tests
- If Yuri discovers new bugs that need to be backported urgently (i.e. their priority is set to Urgent), the release goes back to being prepared (it was not ready after all) - DONE
- Yuri informs Alfredo that the branch is ready for release - DONE
- Alfredo creates the packages and sets the release tag
Release information¶
- branch to build from: jewel, commit: TBD
- version: v10.2.3
- type of release: point release
- where to publish the release: http://download.ceph.com/debian-jewel and http://download.ceph.com/rpm-jewel
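For reference, a minimal sketch of the tagging and publication targets implied by the release information above; this is not the official release tooling, the remote name is an assumption, and the repository lines follow the standard jewel install instructions for download.ceph.com.
# Hedged sketch: tag v10.2.3 on the jewel branch (remote name "ceph" is an assumption;
# the real process may use signed tags and a specific build sha1).
git checkout jewel
git tag -a v10.2.3 -m "v10.2.3"
git push ceph v10.2.3
# Consumers would then pull the published packages, e.g. on Debian/Ubuntu:
echo "deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main" | \
  sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph
# RPM-based distros would point a repo file at http://download.ceph.com/rpm-jewel/ instead.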
Updated by Loïc Dachary almost 8 years ago
- Copied from Tasks #15988: jewel v10.2.2 added
Updated by Loïc Dachary almost 8 years ago
- Copied from deleted (Tasks #15988: jewel v10.2.2)
Updated by Loïc Dachary almost 8 years ago
- Status changed from New to In Progress
Updated by Loïc Dachary almost 8 years ago
git --no-pager log --format='%H %s' --graph ceph/jewel..jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 8754
- |\
- | + admin-socket: use chown instead of fchown
- | + global-init: fixup inconsistent use of ceph ctx and conf
- | + global-init: chown pid files
- | + global-init: chown run dir
- | + common-init: chown admin socket after service thread started
- | + global-init: check init flags and set accordingly
- | + global-init: add a path chown wrapper function
- | + ceph-context: add function to set init flags
- + Pull request 8904
- |\
- | + tests: be more generous with test timeout
- + Pull request 9099
- |\
- | + rgw/s3website: Fix x-amz-website-redirect-location support.
- + Pull request 9105
- |\
- | + os/FileStore::sync_entry check for stop in after wait
- + Pull request 9242
- |\
- | + rgw: don't unregister request if request is not connected to manager
- + Pull request 9265
- |\
- | + rgw: handle errors properly during GET on Swift's DLO.
- + Pull request 9266
- |\
- | + rgw: fix realm pull and period pull for apache frontend
- + Pull request 9267
- |\
- | + rgw: camelcase names of custom attributes in Swift's responses.
- + Pull request 9268
- |\
- | + rgw: remove -EEXIST error msg for ZoneCreate
- + Pull request 9294
- |\
- | + rgw: add missing metadata_heap pool to old zones
- + Pull request 9316
- |\
- | + rgw: handle initial slashes properly in BulkDelete of Swift API.
- + Pull request 9327
- |\
- | + rgw: add_zone only clears master_zone if --master=false
- + Pull request 9390
- |\
- | + rgw : cleanup radosgw-admin temp command as it was deprecated and also implementation code for this command was removed in commit 8d7c8828b02c46e119adc4b9e8f655551512fc2d
- + Pull request 9405
- |\
- | + mds: wrongly treat symlink inode as normal file/dir when symlink inode is stale on kcephfs
- + Pull request 9425
- |\
- | + rgw: back off if error repo is empty
- | + rgw: data sync retries sync on prevously failed bucket shards
- | + rgw: store failed data sync entries in separate omap
- | + rgw: configurable window size to RGWOmapAppend
- | + rgw: add a cr for omap keys removal
- + Pull request 9453
- |\
- | + rgw: Set Access-Control-Allow-Origin to a Asterisk if allowed in a rule
- + Pull request 9542
- |\
- | + rgw: fix update of already existing account/bucket's custom attributes.
- | + rgw: fix updating account/container metadata of Swift API.
- + Pull request 9543
- |\
- | + rgw: remove unnecessary data copying in RGWPutMetadataBucket.
- | + rgw: Fix updating CORS/ACLs during POST on Swift's container.
- + Pull request 9544
- |\
- | + rgw: properly handle initial slashes in SLO's segment path.
- + Pull request 9545
- |\
- | + rgw: reduce string copy
- | + rgw: rework aws4 header parsing
- | + rgw: don't add port to aws4 canonical string if using default port
- | + rgw: use correct method to get current epoch
- | + rgw: check for aws4 headers size where needed
- + Pull request 9546
- |\
- | + rgw: can set negative max_buckets on RGWUserInfo
- + Pull request 9547
- |\
- | + mds: fix mdsmap print_summary with standby replays
- + Pull request 9557
- |\
- | + osdc: send error to recovery waiters on shutdown
- + Pull request 9558
- |\
- | + ceph_volume_client: allow read-only authorization for volumes
- + Pull request 9559
- |\
- | + mds: fix race between StrayManager::{eval_stray,reintegrate_stray}
- + Pull request 9560
- |\
- | + mds: finish lock waiters in the same order that they were added.
- + Pull request 9561
- |\
- | + mon/MDSMonitor: fix wrongly set expiration time of blacklist
- | + mon/MDSMonitor: fix wrong positive of jewel flag check
- + Pull request 9562
- |\
- | + client: fstat should take CEPH_STAT_CAP_INODE_ALL
- + Pull request 9565
- |\
- | + rados: Improve list-inconsistent json format
- | + test: Fix test to not use jq -S which isn't avail in all distributions
- | + test: Add testing of new scrub commands in rados
- | + rados: Don't bother showing list-inconsistent-* errors that aren't set
- | + osd, rados: Fixes for list-inconsistent-snapset
- | + include, rados: Fixes for list-inconsistent-obj and librados
- | + rados: Balance format sections in same do_get_inconsistent_cmd()
- | + rados: Include epoch in the list-inconsistent-* command output
- | + rados: Improve error messages for list-inconsistent commands
- + Pull request 9568
- |\
- | + rgw/s3website: whitespace style fixes
- | + rgw/s3website: Fix ErrocDoc memory leak.
- | + rgw/s3website: Fix x-amz-website-redirect-location support.
- | + rgw/s3website: Implement ErrorDoc & fix Double-Fault handler
- + Pull request 9574
- |\
- | + librados: Added declaration for rados_aio_get_version
- + Pull request 9575
- |\
- | + PG: update PGPool to detect map gaps and reset cached_removed_snaps
- + Pull request 9576
- |\
- | + ReplicatedPG: adjust num_pinned in _delete_oid
- + Pull request 9577
- |\
- | + mon: tolerate missing osd metadata
- | + mon: fix metadata dumps for empty lists
- | + mon: 'std::move` Metadata when updating it
- | + mon: enable dump all mds metadata at once
- | + mon: enable dump all mon metadata at once
- | + mon: fix 'mon metadata' for lone monitors
- + Pull request 9578
- |\
- | + osd: fix sched_time not actually randomized
- + Pull request 9581
- |\
- | + admin-socket: use chown instead of fchown
- | + global-init: fixup inconsistent use of ceph ctx and conf
- | + global-init: chown pid files
- | + global-init: chown run dir
- | + common-init: chown admin socket after service thread started
- | + global-init: check init flags and set accordingly
- | + global-init: add a path chown wrapper function
- | + ceph-context: add function to set init flags
- + Pull request 9631
- |\
- | + qa/workunits/rbd: specify source path
- | + qa/workunits/rbd: additional rbd-mirror stress tests
- | + vstart: add --nolockdep option
- + Pull request 9633
- |\
- | + msg/async: Implement smarter worker thread selection
- | + Event: fix delete_time_event while in processing list
- | + test_msgr: add delay inject test
- | + AsyncConnection: make delay message happen within original thread
- | + msg/async: add missing DelayedDelivery and delay injection
- | + Event: replace ceph_clock_now with coarse_real_clock
- | + msg/async: fix some return values and misspellings.
- | + msg/async: delete the confused comments.
- | + msg/async: add numevents statistics for external_events
- | + AsyncConnection: remove unnecessary send flag
- | + async: skip unnecessary steps when parsing simple messages
- + Pull request 9721
- + qa/workunits/rbd: respect RBD_CREATE_ARGS environment variable
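For readability, the log-formatting one-liner at the top of this update can be written out with comments; this is the same logic, just reformatted (a functionally equivalent sketch):
git --no-pager log --format='%H %s' --graph ceph/jewel..jewel-backports | \
  perl -p -e '
    s/"/ /g;                                   # drop double quotes so the Textile links stay valid
    if (/\w+\s+Merge pull request #(\d+)/) {   # merge commits become pull request links
      s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|;
    } else {                                   # all other commits become commit links
      s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|;
    }
    s/\*/+/;                                   # replace the graph marker * so Redmine does not treat it as a bullet
    s/^/* /;                                   # prefix each line so it renders as a list item
  '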
Updated by Loïc Dachary almost 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=1/5
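A note on the --subset argument used throughout this ticket: $(expr $RANDOM % 5)/5 expands to one of 0/5 through 4/5, i.e. a randomly chosen fifth of the suite, which teuthology echoes back (here "Passed subset=1/5"). A trivial illustration:
# Each scheduling run picks one of five disjoint slices of the suite at random,
# so successive backport runs eventually cover the whole suite.
echo "--subset $(expr $RANDOM % 5)/5"    # prints e.g. "--subset 1/5"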
- fail http://pulpito.ceph.com/loic-2016-06-16_02:23:32-rbd-jewel-backports---basic-smithi/
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eed6fbe90d4ff48cec776dcd5f2449c1c5f37395 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=61 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
- 'CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v /home/ubuntu/cephtest/archive/cram.client.0/*.t'
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eed6fbe90d4ff48cec776dcd5f2449c1c5f37395 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=61 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eed6fbe90d4ff48cec776dcd5f2449c1c5f37395 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- Stale jobs detected, aborting.
- rbd/librbd/{cache/none.yaml cachepool/small.yaml clusters/{fixed-3.yaml openstack.yaml} copy-on-read/on.yaml fs/xfs.yaml msgr-failures/few.yaml workloads/python_api_tests_with_defaults.yaml}
- rbd/thrash/{base/install.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml thrashers/cache.yaml workloads/rbd_api_tests_copy_on_read.yaml}
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- rbd/maintenance/{xfs.yaml base/install.yaml clusters/{fixed-3.yaml openstack.yaml} qemu/xfstests.yaml workloads/dynamic_features.yaml}
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eed6fbe90d4ff48cec776dcd5f2449c1c5f37395 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=1 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
- saw valgrind issues
- 'mkdir
Re-running failed tests:
- fail http://pulpito.ceph.com/loic-2016-06-27_00:40:39-rbd-jewel-backports---basic-smithi/
- 'CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v /home/ubuntu/cephtest/archive/cram.client.0/*.t'
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'
- saw valgrind issues
- 'CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram
Re-running failed tests:
- fail http://pulpito.ceph.com/loic-2016-06-29_06:08:03-rbd-jewel-backports---basic-smithi/
- 'CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v /home/ubuntu/cephtest/archive/cram.client.0/*.t'
- saw valgrind issues
- 'CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram
Verifying the failed tests on jewel
- fail http://pulpito.ceph.com/loic-2016-06-28_23:56:41-rbd-jewel---basic-smithi/
- new bug jewel: tests: rbd bench rbd_other/child fails 'CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram -v /home/ubuntu/cephtest/archive/cram.client.0/*.t'
- saw valgrind issues
- new bug jewel: tests: rbd bench rbd_other/child fails 'CEPH_REF=master CEPH_ID="0" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/virtualenv/bin/cram
Updated by Loïc Dachary almost 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=3/5
- fail http://pulpito.ceph.com/loic-2016-06-16_02:24:31-rgw-jewel-backports---basic-smithi/
- HTTPConnectionPool(host='smithi026.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fb796e76b90>: Failed to establish a new connection: [Errno 111] Connection refused',))
Re-running failed tests
Verifying failed tests on jewel
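Verification against plain jewel is the same scheduling command with --ceph pointed at the jewel branch instead of the integration branch; a sketch, reusing the subset reported above:
# Hedged sketch: re-run the same rgw subset against jewel itself to see whether
# the failures are pre-existing rather than introduced by the backports.
teuthology-suite --priority 1000 --suite rgw --subset 3/5 --suite-branch jewel \
  --email loic@dachary.org --ceph jewel --machine-type smithi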
Updated by Loïc Dachary almost 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 2000)/2000 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=1901/2000
- fail http://pulpito.ceph.com/loic-2016-06-16_02:25:08-rados-jewel-backports---basic-smithi/
- Need https://github.com/ceph/ceph-qa-suite/pull/1042 test_list_inconsistent_obj: assert len(objs) 1
Re-run failed test now that https://github.com/ceph/ceph-qa-suite/pull/1042 is merged
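Re-runs in this ticket reschedule only the previously failed jobs; one hedged way to do that (the --filter flag and the job description below are assumptions for illustration, not what was actually typed) is:
# Reschedule a single failed job by its description (the rados/... yaml path
# shown in the failure list), leaving the rest of the subset alone.
teuthology-suite --priority 1000 --suite rados --suite-branch jewel \
  --ceph jewel-backports --machine-type smithi --email loic@dachary.org \
  --filter 'rados/singleton/...'    # placeholder description, not an actual job name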
Updated by Loïc Dachary almost 8 years ago
powercycle¶
teuthology-suite -l2 -v -c jewel-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
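The powercycle command uses short options; spelled out with long names (flag equivalences assumed from teuthology-suite --help) it reads:
teuthology-suite --limit 2 --verbose --ceph jewel-backports --kernel testing \
  --machine-type smithi --suite powercycle --priority 1000 --email loic@dachary.org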
Updated by Loïc Dachary almost 8 years ago
fs¶
teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=0/5
- fail http://pulpito.ceph.com/loic-2016-06-16_02:28:14-fs-jewel-backports---basic-smithi/
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'
- environmental noise: saw valgrind issues
Re-running failed tests
- pass http://pulpito.ceph.com/loic-2016-06-27_00:36:26-fs-jewel-backports---basic-smithi/
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'
- environmental noise: saw valgrind issues
Verifying the failed tests on jewel
Updated by Loïc Dachary almost 8 years ago
git --no-pager log --format='%H %s' --graph ceph/jewel..jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
In the following list, pull requests shown with no commits were merged after the backport branch was created and can be ignored (a quick containment check is sketched after the list).
- + Pull request 9266
- |\
- | + rgw: fix realm pull and period pull for apache frontend
- + Pull request 9268
- + Pull request 9405
- |\
- | + mds: wrongly treat symlink inode as normal file/dir when symlink inode is stale on kcephfs
- + Pull request 9544
- |\
- | + rgw: properly handle initial slashes in SLO's segment path.
- + Pull request 9545
- + Pull request 9547
- + Pull request 9557
- + Pull request 9558
- |\
- | + ceph_volume_client: allow read-only authorization for volumes
- + Pull request 9559
- + Pull request 9560
- + Pull request 9561
- + Pull request 9562
- + Pull request 9568
- + Pull request 9577
- |\
- | + mon: tolerate missing osd metadata
- | + mon: fix metadata dumps for empty lists
- | + mon: 'std::move` Metadata when updating it
- | + mon: enable dump all mds metadata at once
- | + mon: enable dump all mon metadata at once
- | + mon: fix 'mon metadata' for lone monitors
- + Pull request 9739
- |\
- | + msg/msg_types: update sockaddr, sockaddr_storage accessors
- | + osd: add peer_addr in heartbeat_check log message
- + Pull request 9740
- + Pull request 9743
- + Pull request 9752
- |\
- | + TaskFinisher: cancel all tasks wait until finisher done
- + Pull request 9790
- |\
- | + rgw: check for -ERR_NOT_MODIFIED in rgw_rest_s3.cc
- + Pull request 9917
- |\
- | + Drop ceph Resource Agent
- + Pull request 9952
- |\
- | + librbd: potential use after free on refresh error
- + Pull request 9996
- + Pull request 9997
- + Pull request 9998
- + Pull request 10001
- + Pull request 10003
- + Pull request 10004
- + Pull request 10006
- + Pull request 10007
- + Pull request 10008
- |\
- | + packaging: move parted requirement to -osd subpkg
- + Pull request 10009
- |\
- | + librbd: removal of partially deleted image needs id lookup
- + Pull request 10010
- |\
- | + librbd: ignore missing object map during snap remove
- + Pull request 10026
- |\
- | + rgw_swift: newer versions of boost/utility no longer include in_place
- + Pull request 10036
- + Pull request 10041
- |\
- | + rbd: Skip rbd cache flush if journaling is enabled under aio_flush
- + Pull request 10042
- |\
- | + journal: do not log watch errors against deleted journal
- | + librbd: force-remove journal when disabling feature and removing image
- | + librbd: ignore ENOENT error when removing image from mirror directory
- + Pull request 10043
- |\
- | + rbd-mirror: ensure replay status formatter has completed before stopping
- + Pull request 10044
- |\
- | + librbd: fix lockdep issue when duplicate event detected
- + Pull request 10045
- |\
- | + rbd-mirror: keep events from different epochs independent
- + Pull request 10046
- |\
- | + librbd: journal callback to interrupt replay
- | + rbd-mirror: keep local pointer to image journal
- + Pull request 10047
- |\
- | + librbd: potential race when replaying journal ops
- + Pull request 10048
- |\
- | + rbd-mirror: resync: Added unit tests
- | + rbd-mirror: image-replayer: Implementation of resync operation
- | + rbd: journal: Support for listening updates on client metadata
- | + journal: Support for registering metadata listeners in the Journaler
- + Pull request 10050
- |\
- | + rbd-mirror: block proxied ops with -EROFS return code
- | + librbd: optionally block proxied requests with an error code
- + Pull request 10051
- |\
- | + librbd: fix crash while using advisory locks with R/O image
- + Pull request 10052
- |\
- | + librbd: do not propagate mirror status notification failures
- + Pull request 10053
- |\
- | + librbd: mark exclusive lock as released after journal is closed
- + Pull request 10054
- |\
- | + librbd: delete ExclusiveLock instance when switching to snapshot
- + Pull request 10055
- + librbd: memory leak possible if journal op event failed
- + librbd: ignore snap unprotect -EBUSY errors during journal replay
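A quick way to confirm that one of the empty pull requests above has indeed already landed in ceph/jewel (and is therefore excluded from the ceph/jewel..jewel-backports range) is a containment check; a sketch, with the commit sha as a placeholder:
# If the remote-tracking jewel branch already contains the commit, the range
# ceph/jewel..jewel-backports hides it and the merge shows up with no commits.
git fetch ceph
git branch -r --contains <commit-sha> | grep ceph/jewel   # <commit-sha> is a placeholder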
Updated by Loïc Dachary almost 8 years ago
rbd¶
teuthology-suite --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=1/5
- fail http://pulpito.ceph.com/loic-2016-07-07_08:15:11-rbd-jewel-backports---basic-smithi/
- 'smithi050.front.sepia.ceph.com': {'invocation': {'module_name': 'apt', 'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'force': False, 'package': None, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': None, 'update_cache': True, 'deb': None, 'only_upgrade': False, 'cache_valid_time': None, 'default_release': None, 'install_recommends': None}}, 'failed': True, 'changed': False, '_ansible_no_log': False, 'msg': 'Could not fetch updated apt files'
- 'mkdir -p /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb7b39faab8f0740a8d8b587e88539084a9d47e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- }, 'failed': True, 'changed': False, '_ansible_no_log': False, 'msg': 'Could not fetch updated apt files'}, 'smithi038.front.sepia.ceph.com': {'invocation': {'module_name': 'apt', 'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'force': False, 'package': None, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': None, 'update_cache': True, 'deb': None, 'only_upgrade': False, 'cache_valid_time': None, 'default_release': None, 'install_recommends': None}}, 'failed': True, 'changed': False, '_ansible_no_log': False, 'msg': 'Could not fetch updated apt files'}, 'smithi031.front.sepia.ceph.com': {'invocation': {'module_name': 'apt', 'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': False, 'force': False, 'package': None, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': None, 'update_cache': True, 'deb': None, 'only_upgrade': False, 'cache_valid_time': None, 'default_release': None, 'install_recommends': None}}, 'failed': True, 'changed': False, '_ansible_no_log': False, 'msg': 'Could not fetch updated apt files'}}
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb7b39faab8f0740a8d8b587e88539084a9d47e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=125 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb7b39faab8f0740a8d8b587e88539084a9d47e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_rbd_mirror.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb7b39faab8f0740a8d8b587e88539084a9d47e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=61 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
- saw valgrind issues
- }, 'failed': True, 'changed': False, '_ansible_no_log': False, 'msg': 'Could not fetch updated apt files'}}
- 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'
Re-running failed jobs
- fail http://pulpito.ceph.com/loic-2016-08-03_00:03:18-rbd-jewel-backports---basic-smithi/
- known bug rbd/mirror/workloads/rbd-mirror-stress-workunit.yaml unstable on slow clusters 'mkdir -p /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb7b39faab8f0740a8d8b587e88539084a9d47e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- known bug rbd/mirror/workloads/rbd-mirror-stress-workunit.yaml unstable on slow clusters 'mkdir
The dead jobs are because of rbd-nbd IO hang
Updated by Loïc Dachary almost 8 years ago
rgw¶
teuthology-suite --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=3/5
- fail http://pulpito.ceph.com/loic-2016-07-07_08:16:03-rgw-jewel-backports---basic-smithi/
- HTTPConnectionPool(host='smithi016.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f31d8762b50>: Failed to establish a new connection: [Errno 111] Connection refused',))
Re-running failed jobs
- fail http://pulpito.ceph.com/loic-2016-08-03_00:04:42-rgw-jewel-backports---basic-smithi
- HTTPConnectionPool(host='smithi028.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f770e592fd0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Re-running failed jobs
Running the failed job against the jewel branch to verify if it fails
Updated by Loïc Dachary almost 8 years ago
rados¶
teuthology-suite --priority 1000 --suite rados --subset $(expr $RANDOM % 2000)/2000 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=1885/2000
- fail http://pulpito.ceph.com/loic-2016-07-07_08:16:48-rados-jewel-backports---basic-smithi/
- known bug tests: test_list_inconsistent_obj: assert len(objs) 1
- SELinux denials found on ubuntu@smithi013.front.sepia.ceph.com: ['type=AVC msg=audit(1467959148.776:4408): avc: denied { read } for pid=25870 comm="ceph-osd" name="magic" dev="nvme0n1p3" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1467959148.776:4408): avc: denied { open } for pid=25870 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-1/magic" dev="nvme0n1p3" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1467959145.405:4370): avc: denied { open } for pid=25448 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-1/magic" dev="nvme0n1p3" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1467959145.405:4370): avc: denied { read } for pid=25448 comm="ceph-osd" name="magic" dev="nvme0n1p3" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file']
- 'smithi034.front.sepia.ceph.com': {'msg': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh', 'unreachable': True, 'changed': False}
Re-running failed jobs
Updated by Loïc Dachary almost 8 years ago
powercycle¶
teuthology-suite -l2 -v -c jewel-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
Updated by Loïc Dachary almost 8 years ago
fs¶
teuthology-suite --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=2/5
- fail http://pulpito.ceph.com/loic-2016-07-07_08:18:41-fs-jewel-backports---basic-smithi/
- saw valgrind issues
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_dbench.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_fsstress.yaml validater/valgrind.yaml}
- saw valgrind issues
Re-running the failed test from the previous integration branch to verify that they still fail
Re-running the failed valgrind test above
Updated by Loïc Dachary over 7 years ago
git --no-pager log --format='%H %s' --graph ceph/jewel..jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 9266
- |\
- | + rgw: fix realm pull and period pull for apache frontend
- + Pull request 9405
- |\
- | + mds: wrongly treat symlink inode as normal file/dir when symlink inode is stale on kcephfs
- + Pull request 9453
- |\
- | + rgw: Set Access-Control-Allow-Origin to a Asterisk if allowed in a rule
- + Pull request 9544
- |\
- | + rgw: properly handle initial slashes in SLO's segment path.
- + Pull request 9739
- |\
- | + msg/msg_types: update sockaddr, sockaddr_storage accessors
- | + osd: add peer_addr in heartbeat_check log message
- + Pull request 9790
- |\
- | + rgw: check for -ERR_NOT_MODIFIED in rgw_rest_s3.cc
- + Pull request 9917
- |\
- | + Drop ceph Resource Agent
- + Pull request 10008
- |\
- | + packaging: move parted requirement to -osd subpkg
- + Pull request 10026
- |\
- | + rgw_swift: newer versions of boost/utility no longer include in_place
- + Pull request 10054
- |\
- | + librbd: delete ExclusiveLock instance when switching to snapshot
- + Pull request 10073
- |\
- | + rgw: finish error_repo cr in stop_spawned_services()
- + Pull request 10074
- |\
- | + test: fix CMake build of ceph_test_objectcacher_stress
- | + ObjectCacher: fix bh_read_finish offset logic
- | + osd: provide some contents on ObjectExtent usage in testing
- | + test: build a correctness test for the ObjectCacher
- | + test: split objectcacher test into 'stress' and 'correctness'
- | + test: add a data-storing MemWriteback for testing ObjectCacher
- + Pull request 10103
- |\
- | + MDSMonitor.cc: fix mdsmap.<namespace> subscriptions
- + Pull request 10104
- |\
- | + mds: add maximum fragment size constraint
- + Pull request 10105
- |\
- | + mds: fix Session::check_access()
- + Pull request 10106
- |\
- | + client: skip executing async invalidates while umounting
- + Pull request 10107
- |\
- | + client: kill QuotaTree
- + Pull request 10108
- |\
- | + ceph-fuse: add option to disable kernel pagecache
- + Pull request 10144
- |\
- | + librbd: fix missing return statement if failed to get mirror image state
- + Pull request 10148
- |\
- | + rgw: fix double counting in RGWRados::update_containers_stats()
- + Pull request 10167
- |\
- | + rgw: aws4: fix buffer sharing issue with chunked uploads
- | + rgw: aws4: add STREAMING-AWS4-HMAC-SHA256-PAYLOAD support
- | + rgw: use std::unique_ptr for rgw_aws4_auth management.
- | + rgw: add handling of memory allocation failure in AWS4 auth.
- + Pull request 10188
- |\
- | + rgw: fix multi-delete query param parsing.
- + Pull request 10199
- |\
- | + mds: disallow 'open truncate' non-regular inode
- | + mds: only open non-regular inode with mode FILE_MODE_PIN
- + Pull request 10216
- |\
- | + RGW:add socket backlog setting for via ceph.conf http://tracker.ceph.com/issues/16406
- + Pull request 10217
- |\
- | + rgw: Add documentation for the Multi-tenancy feature
- + Pull request 10278
- |\
- | + common: fix value of CINIT_FLAG_DEFER_DROP_PRIVILEGES
- + Pull request 10303
- |\
- | + ceph-fuse: link to libtcmalloc or jemalloc
- + Pull request 10357
- |\
- | + rpm: move libatomic_ops-devel to non-distro-specific section
- | + rpm: move gperftools-devel to non-distro-specific section
- | + rpm: use new name of libatomic_ops-devel
- | + fix tcmalloc handling in spec file
- | + rpm: Fix creation of mount.ceph symbolic link for SUSE distros
- | + build/ops: build mount.ceph and mount.fuse.ceph as client binaries
- | + rpm: move mount.ceph from ceph-base to ceph-common
- | + rpm: create mount.ceph symlink in /sbin
- | + makefile: install mount.fuse.ceph,mount.ceph into /usr/sbin
- + Pull request 10364
- |\
- | + ceph-osd-prestart.sh: drop Upstart-specific code
- + Pull request 10420
- |\
- | + pybind/ceph_argparse: handle non ascii unicode args
- | + Fix tabs->whitespace in ceph_argparse
- | + Make usage of builtins in ceph_argparse compatible with Python 3
- + Pull request 10421
- |\
- | + osd: increment stas on recovery pull also
- + Pull request 10496
- |\
- | + crush: reset bucket->h.items[i] when removing tree item
- + Pull request 10497
- |\
- | + ceph-disk: partprobe should block udev induced BLKRRPART
- + Pull request 10499
- |\
- | + mds: fix SnapRealm::have_past_parents_open()
- + Pull request 10500
- |\
- | + mds: fix shutting down mds timed-out due to deadlock
- | + msg/async: remove the unnecessary checking to wakup event_wait
- + Pull request 10501
- |\
- | + mds: Kill C_SaferCond in evict_sessions()
- + Pull request 10502
- |\
- | + mds: move Finisher to unlocked shutdown
- + Pull request 10518
- |\
- | + rgw ldap: fix ldap bindpw parsing
- + Pull request 10519
- |\
- | + selinux: allow chown for self and setattr for /var/run/ceph
- + Pull request 10520
- |\
- | + rgw: add line space between inl. member function defns
- | + rgw-admin: return error on email address conflict
- | + rgw-admin: convert user email addresses to lower case
- + Pull request 10523
- |\
- | + rgw: fix error_repo segfault in data sync
- + Pull request 10524
- |\
- | + rgw: add missing master_zone when running with old default region config
- + Pull request 10525
- |\
- | + rgw: remove bucket index objects when deleting the bucket
- + Pull request 10537
- |\
- | + rgw multisite: preserve zone's extra pool
- + Pull request 10552
- |\
- | + buffer: fix iterator_impl visibility through typedef
- + Pull request 10580
- |\
- | + rgw: Fix civetweb IPv6
- + Pull request 10614
- |\
- | + ExclusiveArch for suse_version
- + Pull request 10616
- + rpm: pass --disable-static to configure
Updated by Loïc Dachary over 7 years ago
rados¶
teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 2000)/2000 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=76/2000
- fail http://pulpito.ceph.com/loic-2016-08-10_06:54:41-rados-jewel-backports---basic-smithi/
- lots of noise caused by the smithi kernel warning: possible circular locking dependency detected
Re-running failed jobs
Updated by Loïc Dachary over 7 years ago
rbd¶
teuthology-suite -k distro --priority 1000 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=1/5
- fail http://pulpito.ceph.com/loic-2016-08-10_06:59:05-rbd-jewel-backports---basic-smithi
- lots of noise caused by the smithi kernel warning: possible circular locking dependency detected
Re-running failed jobs
- pass http://pulpito.ceph.com/loic-2016-08-15_07:31:03-rbd-jewel-backports-distro-basic-smithi
- The dead jobs are because of rbd-nbd IO hang
Updated by Loïc Dachary over 7 years ago
rgw¶
teuthology-suite -k distro --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=0/5
- fail http://pulpito.ceph.com/loic-2016-08-10_07:00:21-rgw-jewel-backports---basic-smithi
- HTTPConnectionPool(host='smithi003.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fcd1cb91d10>: Failed to establish a new connection: [Errno 111] Connection refused',))
- HTTPConnectionPool(host='smithi013.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f49a0ccec90>: Failed to establish a new connection: [Errno 111] Connection refused',))
Re-running failed jobs
- fail http://pulpito.ceph.com/loic-2016-08-12_08:10:12-rgw-jewel-backports---basic-smithi/
- radosgw-admin-data-sync fails because of data_extra_pool
- rgw/singleton/{overrides.yaml xfs.yaml all/radosgw-admin-multi-region.yaml frontend/apache.yaml fs/xfs.yaml rgw_pool_type/ec.yaml}
- rgw/singleton/{overrides.yaml xfs.yaml all/radosgw-admin-data-sync.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/ec-profile.yaml}
- rgw/singleton/{overrides.yaml xfs.yaml all/radosgw-admin-multi-region.yaml frontend/civetweb.yaml fs/xfs.yaml rgw_pool_type/ec-profile.yaml}
- rgw/singleton/{overrides.yaml xfs.yaml all/radosgw-admin-data-sync.yaml frontend/apache.yaml fs/xfs.yaml rgw_pool_type/ec-cache.yaml}
- radosgw-admin-data-sync fails because of data_extra_pool
Re-running failed jobs after reverting https://github.com/ceph/ceph/pull/10537 which causes the above failures, to make sure it is not hiding another failure from another pull request
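A sketch of the revert step mentioned above, assuming the pull request was merged with a merge commit on the integration branch (the remote name and sha are placeholders):
git checkout jewel-backports
git log --oneline --merges | grep '#10537'   # locate the merge commit of pull request 10537
git revert -m 1 <merge-sha>                  # -m 1 keeps the mainline side of the merge
git push <remote> jewel-backports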
- pass http://pulpito.front.sepia.ceph.com:80/loic-2016-08-15_15:26:36-rgw-jewel-backports---basic-smithi/
Verify it passes on jewel
Updated by Loïc Dachary over 7 years ago
powercycle¶
teuthology-suite -l2 -v -c jewel-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
Updated by Loïc Dachary over 7 years ago
fs¶
teuthology-suite -k distro --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=1/5
- fail http://pulpito.ceph.com/loic-2016-08-10_07:02:27-fs-jewel-backports---basic-smithi
- '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-08-11T16:40:03.490606+00:00 smithi025 kernel: [ INFO: possible recursive locking detected ]' in syslog
- 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mds.a-s.log --time-stamp=yes --tool=memcheck ceph-mds -f --cluster ceph -i a-s'
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/libcephfs_interface_tests.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_dbench.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_fsstress.yaml validater/valgrind.yaml}
- '/home/ubuntu/cephtest/archive/syslog/kern.log:2016-08-11T16:40:03.490606+00:00 smithi025 kernel: [ INFO: possible recursive locking detected ]
Re-running failed jobs
- fail http://pulpito.ceph.com/loic-2016-08-15_07:35:11-fs-jewel-backports-distro-basic-smithi
- transient failure, see the "jewel backports: cephfs.InvalidValue: error in setxattr" thread on ceph-devel; Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeClient)
- known bug valgrind osd leak Pipe::* 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mds.a-s.log --time-stamp=yes --tool=memcheck ceph-mds -f --cluster ceph -i a-s'
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/libcephfs_interface_tests.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_dbench.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_fsstress.yaml validater/valgrind.yaml}
Running the failed job (but not the valgrind failures)
Running the failed job against jewel
Updated by Loïc Dachary over 7 years ago
Upgrade¶
teuthology-suite --verbose --suite upgrade/jewel-x --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml
teuthology-suite -k distro --verbose --suite upgrade/hammer-x --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-08-16_08:49:47-upgrade:hammer-x-jewel-backports-distro-basic-vps/
- known bug python-rados install failure 'sudo yum install python-rados -y'
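The trailing machine_types/vps.yaml argument in the upgrade commands above is an extra teuthology config fragment; teuthology-suite merges positional YAML paths into every scheduled job. A sketch of the same hammer-x invocation with the fragment called out (the explanation of the positional argument is an assumption from teuthology's usage, not stated in the ticket):
teuthology-suite -k distro --verbose --suite upgrade/hammer-x --suite-branch jewel \
  --ceph jewel-backports --machine-type vps --priority 1000 \
  machine_types/vps.yaml   # positional <config_yaml>: vps-specific overrides merged into each job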
Updated by Loïc Dachary over 7 years ago
git --no-pager log --format='%H %s' --graph ceph/jewel..jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
- + Pull request 9266
- |\
- | + rgw: fix realm pull and period pull for apache frontend
- + Pull request 9388
- |\
- | + mon : Display full flag in ceph status if full flag is set
- + Pull request 9405
- |\
- | + mds: wrongly treat symlink inode as normal file/dir when symlink inode is stale on kcephfs
- + Pull request 9544
- |\
- | + rgw: properly handle initial slashes in SLO's segment path.
- + Pull request 9739
- |\
- | + msg/msg_types: update sockaddr, sockaddr_storage accessors
- | + osd: add peer_addr in heartbeat_check log message
- + Pull request 9917
- + Pull request 10008
- + Pull request 10074
- |\
- | + test: fix CMake build of ceph_test_objectcacher_stress
- | + ObjectCacher: fix bh_read_finish offset logic
- | + osd: provide some contents on ObjectExtent usage in testing
- | + test: build a correctness test for the ObjectCacher
- | + test: split objectcacher test into 'stress' and 'correctness'
- | + test: add a data-storing MemWriteback for testing ObjectCacher
- + Pull request 10103
- |\
- | + MDSMonitor.cc: fix mdsmap.<namespace> subscriptions
- + Pull request 10104
- |\
- | + mds: add maximum fragment size constraint
- + Pull request 10105
- |\
- | + mds: fix Session::check_access()
- + Pull request 10106
- |\
- | + client: skip executing async invalidates while umounting
- + Pull request 10107
- |\
- | + client: kill QuotaTree
- + Pull request 10108
- |\
- | + ceph-fuse: add option to disable kernel pagecache
- + Pull request 10148
- + Pull request 10167
- |\
- | + rgw: aws4: fix buffer sharing issue with chunked uploads
- | + rgw: aws4: add STREAMING-AWS4-HMAC-SHA256-PAYLOAD support
- | + rgw: use std::unique_ptr for rgw_aws4_auth management.
- | + rgw: add handling of memory allocation failure in AWS4 auth.
- + Pull request 10188
- |\
- | + rgw: fix multi-delete query param parsing.
- + Pull request 10199
- |\
- | + mds: disallow 'open truncate' non-regular inode
- | + mds: only open non-regular inode with mode FILE_MODE_PIN
- + Pull request 10216
- |\
- | + RGW:add socket backlog setting for via ceph.conf http://tracker.ceph.com/issues/16406
- + Pull request 10278
- |\
- | + common: fix value of CINIT_FLAG_DEFER_DROP_PRIVILEGES
- + Pull request 10357
- |\
- | + rpm: move libatomic_ops-devel to non-distro-specific section
- | + rpm: move gperftools-devel to non-distro-specific section
- | + rpm: use new name of libatomic_ops-devel
- | + fix tcmalloc handling in spec file
- | + rpm: Fix creation of mount.ceph symbolic link for SUSE distros
- | + build/ops: build mount.ceph and mount.fuse.ceph as client binaries
- | + rpm: move mount.ceph from ceph-base to ceph-common
- | + rpm: create mount.ceph symlink in /sbin
- | + makefile: install mount.fuse.ceph,mount.ceph into /usr/sbin
- + Pull request 10364
- + Pull request 10421
- |\
- | + osd: increment stas on recovery pull also
- + Pull request 10496
- |\
- | + crush: reset bucket->h.items[i] when removing tree item
- + Pull request 10499
- |\
- | + mds: fix SnapRealm::have_past_parents_open()
- + Pull request 10500
- |\
- | + mds: fix shutting down mds timed-out due to deadlock
- | + msg/async: remove the unnecessary checking to wakup event_wait
- + Pull request 10501
- |\
- | + mds: Kill C_SaferCond in evict_sessions()
- + Pull request 10502
- |\
- | + mds: move Finisher to unlocked shutdown
- + Pull request 10519
- + Pull request 10525
- |\
- | + rgw: remove bucket index objects when deleting the bucket
- + Pull request 10580
- |\
- | + rgw: Fix civetweb IPv6
- + Pull request 10614
- + Pull request 10645
- |\
- | + librbd: journal::Replay no longer holds lock while completing callback
- + Pull request 10646
- |\
- | + rbd-mirror: gracefully fail if object map is unavailable
- + Pull request 10647
- |\
- | + librbd: failed assertion after shrinking a clone image twice
- + Pull request 10648
- |\
- | + librbd: re-register watch on old format image rename
- + Pull request 10649
- + Pull request 10650
- |\
- | + librbd: prevent creation of clone from non-primary mirrored image
- + Pull request 10652
- |\
- | + librbd: prevent creation of v2 image ids that are too large
- + Pull request 10653
- |\
- | + ceph.spec.in: fix rpm package building error as follows:
- | + udev: always populate /dev/disk/by-parttypeuuid
- + Pull request 10654
- |\
- | + mon: tolerate missing osd metadata
- | + mon: fix metadata dumps for empty lists
- | + mon: 'std::move` Metadata when updating it
- | + mon: fix 'mon metadata' for lone monitors
- + Pull request 10655
- |\
- | + rgw: can set negative max_buckets on RGWUserInfo
- + Pull request 10656
- |\
- | + rgw: fix potential memory leaks in RGWPutCORS_ObjStore_S3::get_params
- + Pull request 10657
- |\
- | + rgw: fix marker tracker completion handling
- + Pull request 10658
- |\
- | + radosgw-admin: zone[group] modify can change realm id
- + Pull request 10659
- |\
- | + rgw: use endpoints from master zone instead of zonegroup
- + Pull request 10660
- |\
- | + rgw: clear realm watch on failed watch_restart
- + Pull request 10661
- |\
- | + rgw: Have a flavor of bucket deletion to bypass GC and to trigger object deletions async.
- + Pull request 10662
- |\
- | + rgw: RGWMetaSyncCR holds refs to stacks for wakeup
- + Pull request 10663
- |\
- | + rgw: added zone rename to radosgw_admin
- + Pull request 10664
- |\
- | + rgw: Fix for using port 443 with pre-signed urls.
- + Pull request 10678
- + Pull request 10679
- + Pull request 10710
- + rgw: improve support for Swift's object versioning.
Updated by Loïc Dachary over 7 years ago
rbd¶
teuthology-suite -k distro --priority 101 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports-loic --machine-type smithi
- fail http://pulpito.ceph.com/loic-2016-08-15_07:38:06-rbd-jewel-backports-loic-distro-basic-smithi/
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=61 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_rbd_mirror.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=125 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/many.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin VALGRIND=memcheck adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_rbd_mirror.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=125 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
- saw valgrind issues
- 'mkdir
The dead jobs are because of rbd-nbd IO hang
Re-running failed jobs
- fail http://pulpito.ceph.com/loic-2016-08-15_13:56:34-rbd-jewel-backports-loic-distro-basic-smithi/
- 'mkdir -p /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/many.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- saw valgrind issues
- 'mkdir
Re-running failed jobs
- fail http://pulpito.ceph.com/loic-2016-08-15_21:11:09-rbd-jewel-backports-loic-distro-basic-smithi/
- 'mkdir -p /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/many.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- saw valgrind issues
- 'mkdir
Updated by Loïc Dachary over 7 years ago
rados¶
teuthology-suite -k distro --priority 1000 --suite rados --subset $(expr $RANDOM % 2000)/2000 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=85/2000
- fail http://pulpito.ceph.com/loic-2016-08-17_11:21:21-rados-jewel-backports-distro-basic-smithi
- known bug tests: test_list_inconsistent_obj: assert len(objs) 1
- SELinux denials found on ubuntu@smithi003.front.sepia.ceph.com: ['type=AVC msg=audit(1471695081.216:8885): avc: denied { read } for pid=23585 comm="ceph-osd" name="magic" dev="nvme0n1p2" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1471695081.216:8885): avc: denied { open } for pid=23585 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/magic" dev="nvme0n1p2" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file']
- SELinux denials found on ubuntu@smithi003.front.sepia.ceph.com: ['type=AVC msg=audit(1471696064.971:4448): avc: denied { read } for pid=25371 comm="ceph-osd" name="magic" dev="nvme0n1p2" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1471696066.003:4459): avc: denied { open } for pid=25524 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/magic" dev="nvme0n1p2" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1471696064.971:4448): avc: denied { open } for pid=25371 comm="ceph-osd" path="/var/lib/ceph/osd/ceph-0/magic" dev="nvme0n1p2" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file', 'type=AVC msg=audit(1471696066.003:4459): avc: denied { read } for pid=25524 comm="ceph-osd" name="magic" dev="nvme0n1p2" ino=26 scontext=system_u:system_r:ceph_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file']
Re-running failed tests
Updated by Loïc Dachary over 7 years ago
rgw¶
teuthology-suite -k distro --priority 1000 --suite rgw --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=2/5
- fail http://pulpito.ceph.com/loic-2016-08-17_11:24:55-rgw-jewel-backports-distro-basic-smithi
- HTTPConnectionPool(host='smithi003.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f4bf19a0490>: Failed to establish a new connection: [Errno 111] Connection refused',))
- HTTPConnectionPool(host='smithi018.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fb214c454d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-08-21_19:17:40-rgw-jewel-backports-distro-basic-smithi
- HTTPConnectionPool(host='smithi041.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f1efe004510>: Failed to establish a new connection: [Errno 111] Connection refused',))
- HTTPConnectionPool(host='smithi010.front.sepia.ceph.com', port=8000): Max retries exceeded with url: /metadata/incremental (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f7faf3214d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Updated by Loïc Dachary over 7 years ago
powercycle¶
teuthology-suite -l2 -v -c jewel-backports -k testing -m smithi -s powercycle -p 1000 --email loic@dachary.org
- fail http://pulpito.ceph.com/loic-2016-08-21_19:12:58-powercycle-jewel-backports-testing-basic-smithi/
Trying with the jewel branch
Updated by Loïc Dachary over 7 years ago
Upgrade¶
teuthology-suite -k distro --verbose --suite upgrade/jewel-x --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml
teuthology-suite -k distro --verbose --suite upgrade/hammer-x --suite-branch jewel --ceph jewel-backports --machine-type vps --priority 1000 machine_types/vps.yaml
- fail http://pulpito.ceph.com/loic-2016-08-17_11:27:50-upgrade:hammer-x-jewel-backports-distro-basic-vps
- 'sudo yum install ceph-radosgw -y'
- 'sudo yum install python-rados -y'
Updated by Loïc Dachary over 7 years ago
fs¶
teuthology-suite -k distro --priority 1000 --suite fs --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports --machine-type smithi
INFO:teuthology.suite:Passed subset=1/5
- fail http://pulpito.ceph.com/loic-2016-08-17_11:28:48-fs-jewel-backports-distro-basic-smithi
- Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
- 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mds.a-s.log --time-stamp=yes --tool=memcheck ceph-mds -f --cluster ceph -i a-s'
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/libcephfs_interface_tests.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_dbench.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_fsstress.yaml validater/valgrind.yaml}
Re-running failed tests
- fail http://pulpito.ceph.com/loic-2016-08-21_19:23:20-fs-jewel-backports-distro-basic-smithi
- 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mds.a-s.log --time-stamp=yes --tool=memcheck ceph-mds -f --cluster ceph -i a-s'
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/libcephfs_interface_tests.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_dbench.yaml validater/valgrind.yaml}
- fs/verify/{clusters/fixed-2-ucephfs.yaml debug/{mds_client.yaml mon.yaml} dirfrag/frag_enable.yaml fs/btrfs.yaml overrides/whitelist_wrongly_marked_down.yaml tasks/cfuse_workunit_suites_fsstress.yaml validater/valgrind.yaml}
- 'cd /home/ubuntu/cephtest && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term valgrind --trace-children=no --child-silent-after-fork=yes --num-callers=50 --suppressions=/home/ubuntu/cephtest/valgrind.supp --xml=yes --xml-file=/var/log/ceph/valgrind/mds.a-s.log --time-stamp=yes --tool=memcheck ceph-mds -f --cluster ceph -i a-s'
Updated by Yuri Weinstein over 7 years ago
QE VALIDATION (STARTED 8/29/16)¶
(Note: PASSED / FAILED - indicates "TEST IS IN PROGRESS")
Re-run command lines and filters are captured in http://pad.ceph.com/p/hammer_v10.2.3_QE_validation_notes
Command line: CEPH_BRANCH=9bfc0cf178dc21b0fe33e0ce3b90a18858abaf1b; MACHINE_NAME=vps; teuthology-suite -v -S $CEPH_BRANCH -m $MACHINE_NAME -k distro -s rados -e $CEPH_QA_EMAIL --suite-branch jewel
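The same scheduling command with long option names (the -S/--sha1 and -e/--email equivalences are assumed from teuthology-suite's help; a sketch):
CEPH_BRANCH=9bfc0cf178dc21b0fe33e0ce3b90a18858abaf1b
MACHINE_NAME=vps
teuthology-suite --verbose --sha1 "$CEPH_BRANCH" --machine-type "$MACHINE_NAME" \
  --kernel distro --suite rados --email "$CEPH_QA_EMAIL" --suite-branch jewel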
Suite | Runs/Reruns | Notes/Issues
rados | http://pulpito.ceph.com/yuriw-2016-08-29_20:24:43-rados-master-distro-basic-smithi/ http://pulpito.front.sepia.ceph.com:80/yuriw-2016-08-30_16:06:19-rados-master-distro-basic-smithi/ | FAILED #16516 Approved by David
rbd | http://pulpito.ceph.com/yuriw-2016-08-29_20:49:56-rbd-master-distro-basic-smithi/ | FAILED Approved by Jason; 391321 + 391453 are known and OK; 391464 was multiple OSD crashes, known issue, and OK; retesting 391479 + 391222; 391226 + 391322 + 391404 are known helgrind, and OK; 391275 is a removed test case (fixed in qa suites)
fs | http://pulpito.ceph.com/yuriw-2016-08-30_16:47:00-fs-master---basic-smithi/ http://pulpito.front.sepia.ceph.com:80/yuriw-2016-08-31_16:20:02-fs-master---basic-smithi/ | PASSED
kcephfs | http://pulpito.ceph.com/yuriw-2016-08-31_20:40:50-kcephfs-master-testing-basic-smithi/ | PASSED
knfs | http://pulpito.front.sepia.ceph.com:80/yuriw-2016-08-31_20:42:45-knfs-master-testing-basic-smithi/ | FAILED #17192 Approved by Jeff Layton
rest | http://pulpito.ceph.com/teuthology-2016-08-31_02:40:02-rest-jewel---basic-mira/ | PASSED
hadoop | http://pulpito.ceph.com/teuthology-2016-08-31_02:30:01-hadoop-jewel---basic-mira/ | PASSED
samba | http://pulpito.ceph.com/teuthology-2016-08-31_02:35:02-samba-jewel---basic-mira/ http://pulpito.ceph.com/teuthology-2016-09-11_02:35:02-samba-jewel---basic-mira/ (tip of jewel) | PASSED #17184
ceph-deploy | http://pulpito.ceph.com/teuthology-2016-08-30_02:55:01-ceph-deploy-jewel-distro-basic-vps/ | PASSED
ceph-disk | http://pulpito.ceph.com/teuthology-2016-08-28_03:10:02-ceph-disk-jewel-distro-basic-mira/ http://pulpito.front.sepia.ceph.com:80/yuriw-2016-09-07_15:00:16-ceph-disk-master-distro-basic-vps/ (must run on vps) | FAILED #15350, #17100, #17114 Approved by Loic
PASSED / FAILED
Updated by Loïc Dachary over 7 years ago
- Status changed from In Progress to Resolved