Tasks #21742 (closed)

jewel v10.2.11

Added by Nathan Cutler over 6 years ago. Updated almost 6 years ago.

Status: Resolved
Priority: Urgent
% Done: 0%

Description

Workflow

  • Preparing the release
    • Nathan patches upgrade/jewel-x/point-to-point-x to do 10.2.0 -> current Jewel point release -> x (see the sketch after this list)
  • Cutting the release
    • Nathan asks Abhishek L. if a point release should be published
    • Nathan gets approval from all leads
      • Yehuda, rgw
      • Patrick, fs
      • Jason, rbd
      • Josh, rados
    • Abhishek L. writes and commits the release notes
    • Nathan informs Yuri that the branch is ready for testing
    • Yuri runs additional integration tests
      • If Yuri discovers new bugs that need to be backported urgently (i.e. their priority is set to Urgent or Immediate), the release goes back to being prepared; it was not ready after all
    • Yuri informs Alfredo that the branch is ready for release
    • Alfredo creates the packages and sets the release tag
    • Abhishek L. posts release announcement on https://ceph.com/community/blog
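
A minimal, hypothetical sketch of the suite-patching step mentioned above (file path, branch name, and tags are placeholders, not taken from this ticket):

# assumes the suite lives in ceph.git under qa/suites/
SUITE=qa/suites/upgrade/jewel-x/point-to-point-x/point-to-point-upgrade.yaml
OLD_TAG=v10.2.9    # placeholder: intermediate tag currently in the suite
NEW_TAG=v10.2.10   # placeholder: latest published jewel point release
git checkout -b wip-point-to-point-x jewel
sed -i "s/${OLD_TAG}/${NEW_TAG}/g" "$SUITE"
git commit -am "qa/suites: point-to-point-x: upgrade via ${NEW_TAG}"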

Release information

Actions #1

Updated by Nathan Cutler over 6 years ago

  • Status changed from New to In Progress

https://shaman.ceph.com/builds/ceph/wip-jewel-backports/7c4c04a59c15d8e53f2baed03dd8f67743d1d847/

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e 's/"/ /g; if (/\w+\s+Merge pull request #(\d+)/) { s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|; } else { s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|; } s/\*/+/; s/^/* /;'
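
The same one-liner, reflowed with comments (logic unchanged): it lists every commit between ceph/jewel and the integration branch and rewrites each line as a Redmine textile bullet, linking merge commits to their pull requests and other commits to themselves.

git --no-pager log --format='%H %s' --graph ceph/jewel..wip-jewel-backports | perl -p -e '
    s/"/ /g;                                  # drop quotes that would break textile markup
    if (/\w+\s+Merge pull request #(\d+)/) {  # merge commits -> "Pull request N" links
        s|\w+\s+Merge pull request #(\d+).*|"Pull request $1":https://github.com/ceph/ceph/pull/$1|;
    } else {                                  # everything else -> commit links
        s|(\w+)\s+(.*)|"$2":https://github.com/ceph/ceph/commit/$1|;
    }
    s/\*/+/;                                  # first graph "*" -> "+" so textile nesting stays intact
    s/^/* /;                                  # prefix each line as a textile bullet
'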
Actions #2

Updated by Nathan Cutler over 6 years ago

rados

teuthology-suite -k distro --priority 999 --suite rados --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-10-10_18:25:20-rados-wip-jewel-backports-distro-basic-smithi/
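
A note on --subset: it schedules only one of N equal slices of the full suite matrix, and taking the slice index from $RANDOM spreads coverage across successive runs. A minimal sketch of the arithmetic (variable names are illustrative):

N=50                      # split the rados suite into 50 slices
M=$(expr $RANDOM % $N)    # pick one slice at random (0..49)
echo "scheduling slice $M of $N"
teuthology-suite -k distro --priority 999 --suite rados --subset $M/$N --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi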

rados baseline

https://shaman.ceph.com/builds/ceph/wip-jewel-baseline/5bebd7cdfcda93eb7755c42408fcb785bbb663fc/

teuthology-suite -k distro --priority 101 --subset $(expr $RANDOM % 50)/50 --email ncutler@suse.com --ceph wip-jewel-baseline --suite rados --machine-type smithi

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-10-19_13:55:19-rados-wip-jewel-baseline-distro-basic-smithi/

Possible issues

Actions #3

Updated by Nathan Cutler over 6 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 999 -l 2 --email ncutler@suse.com

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-10-10_18:31:20-powercycle-wip-jewel-backports-distro-basic-smithi/

Actions #4

Updated by Nathan Cutler over 6 years ago

Upgrade jewel point-to-point-x

teuthology-suite -k distro --verbose --suite upgrade/jewel-x/point-to-point-x --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

Actions #5

Updated by Nathan Cutler over 6 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-10-10_18:33:40-upgrade:hammer-x-wip-jewel-backports-distro-basic-vps/

Actions #6

Updated by Nathan Cutler over 6 years ago

fs

teuthology-suite -k distro --priority 999 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

running http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-10-10_18:31:39-fs-wip-jewel-backports-distro-basic-smithi/

Actions #7

Updated by Nathan Cutler over 6 years ago

rgw

teuthology-suite -k distro --priority 999 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

Actions #8

Updated by Nathan Cutler over 6 years ago

ceph-disk

teuthology-suite -k distro --verbose --suite ceph-disk --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-10-10_18:33:12-ceph-disk-wip-jewel-backports-distro-basic-vps/

Actions #9

Updated by Nathan Cutler over 6 years ago

Upgrade client-upgrade

teuthology-suite -k distro --verbose --suite upgrade/client-upgrade --ceph wip-jewel-backports --machine-type vps --priority 999 --email ncutler@suse.com

pass http://pulpito.front.sepia.ceph.com:80/smithfarm-2017-10-10_18:34:08-upgrade:client-upgrade-wip-jewel-backports-distro-basic-vps/

Actions #10

Updated by Nathan Cutler over 6 years ago

jewel baseline

Test Run: teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi

info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/
logs: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/
failed: 21
dead: 0
running: 0
waiting: 0
queued: 0
passed: 345

failed jobs

http://tracker.ceph.com/issues/22064

Search for "virtual void WriteOp" 

[1946733] rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/pggrow.yaml workloads/snaps-few-objects.yaml}
-----------------------------------------------------------------
time: 00:23:55
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946733/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946733/

[1946550] rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/morepggrow.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml}
-----------------------------------------------------------------
time: 00:24:32
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946550/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946550/

[1946659] rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/pggrow.yaml workloads/small-objects.yaml}
-----------------------------------------------------------------
time: 00:19:37
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946659/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946659/

[1946560] rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml supported/centos_latest.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
-----------------------------------------------------------------
time: 00:19:39
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946560/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946560/

[1946610] rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml workloads/cache.yaml}
-----------------------------------------------------------------
time: 00:07:59
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946610/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946610/

[1946540] rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/mapgap.yaml workloads/cache.yaml}
-----------------------------------------------------------------
time: 00:09:12
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946540/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946540/

[1946505] rados/upgrade/{hammer-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{ec-rados-plugin=jerasure-k=3-m=1.yaml rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml test_cache-pool-snaps.yaml}} rados.yaml}
-----------------------------------------------------------------
time: 00:48:23
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946505/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946505/

[1946624] rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml supported/centos_latest.yaml thrashers/morepggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
-----------------------------------------------------------------
time: 00:22:22
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946624/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946624/

[1946742] rados/thrash-erasure-code/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/fastread.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml}
-----------------------------------------------------------------
time: 00:10:59
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946742/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946742/

unknown bug (pg scrub)

[1946661] rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml}
-----------------------------------------------------------------
time: 00:07:59
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946661/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946661/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=9e91146071534358a11fcd801e885bb6

Command failed on smithi066 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph pg scrub 1.0'

[1946668] rados/singleton-nomsgr/{all/lfn-upgrade-hammer.yaml rados.yaml}
-----------------------------------------------------------------
time: 00:08:11
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946668/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946668/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=33e63388606641688e9bd060215eedca

Command failed on smithi097 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph pg scrub 1.0'

[1946702] rados/singleton-nomsgr/{all/lfn-upgrade-infernalis.yaml rados.yaml}
-----------------------------------------------------------------
time: 00:07:32
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946702/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946702/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=cb2ac784f453498c93c0e035291a9168

Command failed on smithi066 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph pg scrub 1.0'

unknown bug (s3tests-test-readwrite)

[1946832] rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/morepggrow.yaml workloads/rgw_snaps.yaml}
-----------------------------------------------------------------
time: 00:17:20
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946832/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946832/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=5441064ef9de493fa0ab04a706defa5a

Command failed on smithi194 with status 1: '/home/ubuntu/cephtest/s3-tests/virtualenv/bin/s3tests-test-readwrite'

[1946791] rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml workloads/rgw_snaps.yaml}
-----------------------------------------------------------------
time: 00:17:19
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946791/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946791/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=461d2f0603494ed4a17deb9a0584f507

Command failed on smithi189 with status 1: '/home/ubuntu/cephtest/s3-tests/virtualenv/bin/s3tests-test-readwrite'

unknown bug (rados/test.sh)

[1946711] rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/morepggrow.yaml workloads/rados_api_tests.yaml}
-----------------------------------------------------------------
time: 00:21:37
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946711/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946711/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=fb5a82abf33a4f2aaa71d9b356517095

Command failed (workunit test rados/test.sh) on smithi072 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

[1946818] rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/pggrow.yaml workloads/rados_api_tests.yaml}
-----------------------------------------------------------------
time: 00:18:55
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946818/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946818/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=169c9f3c3d7b480d9c9a248fc062637e

Command failed (workunit test rados/test.sh) on smithi140 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

unknown bug (ceph_test_objectstore)

[1946822] rados/objectstore/objectstore.yaml
-----------------------------------------------------------------
time: 00:29:18
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946822/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946822/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=4208171c7e2741fd858f83593bab6c94

Command failed on smithi079 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkdir $TESTDIR/ostest && cd $TESTDIR/ostest && ceph_test_objectstore --gtest_filter=*/2:-*/3'"

[1946825] rados/objectstore/objectstore.yaml
-----------------------------------------------------------------
time: 00:30:51
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946825/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946825/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=fc7c7caa0cb048c6ad81f146bf0a5632

Command failed on smithi012 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkdir $TESTDIR/ostest && cd $TESTDIR/ostest && ceph_test_objectstore --gtest_filter=*/2:-*/3'"

unknown bug (rados/test_python.sh)

[1946510] rados/basic/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_python.yaml}
-----------------------------------------------------------------
time: 00:11:23
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946510/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946510/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=dcda0e82ef434650865b17fc96c8b64b

Command failed (workunit test rados/test_python.sh) on smithi080 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'

unsynchronized clocks

[1946757] rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml workloads/cache-pool-snaps-readproxy.yaml}
-----------------------------------------------------------------
time: 00:29:30
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946757/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946757/

"2017-12-13 18:39:27.046136 mon.0 172.21.15.175:6789/0 3 : cluster [WRN]
message from mon.1 was stamped 0.565550s in the future, clocks not
synchronized" in cluster log

[1946740] rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/morepggrow.yaml workloads/cache.yaml}
-----------------------------------------------------------------
time: 00:14:27
info: http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946740/
log: http://qa-proxy.ceph.com/teuthology/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/1946740/

"2017-12-13 18:33:09.130862 mon.2 172.21.15.36:6790/0 2 : cluster [WRN]
message from mon.0 was stamped 0.626381s in the future, clocks not
synchronized" in cluster log
Actions #11

Updated by Nathan Cutler about 6 years ago

rgw integration run

PRs in wip-jewel-backports:

https://github.com/ceph/ceph/pull/20074 - jewel: rgw: user creation can overwrite existing user even if different uid is given
https://github.com/ceph/ceph/pull/20061 - jewel: rgw: automated trimming of datalog and mdlog
https://github.com/ceph/ceph/pull/20057 - jewel: rgw_file: alternate fix deadlock on lru eviction
https://github.com/ceph/ceph/pull/20039 - jewel: rgw: resharding needs to set back the bucket ACL after link
https://github.com/ceph/ceph/pull/19908 - jewel: rgw: dont log EBUSY errors in 'sync error list'
https://github.com/ceph/ceph/pull/19783 - jewel: rgw: segfaults after running radosgw-admin data sync init
https://github.com/ceph/ceph/pull/19769 - jewel: rgw: resolve Random 500 errors in Swift PutObject (22517)
https://github.com/ceph/ceph/pull/19747 - jewel: rgw: fix doubled underscore with s3/swift server-side copy
https://github.com/ceph/ceph/pull/19469 - jewel: rgw: fix chained cache invalidation to prevent cache size growth
https://github.com/ceph/ceph/pull/19194 - jewel: rgw: fix swift anonymous access.
https://github.com/ceph/ceph/pull/18305 - jewel: rgw: multisite: Get bucket location which is located in another zonegroup, will return 301 Moved Permanently
https://github.com/ceph/ceph/pull/18121 - jewel: RGW: Multipart upload may double the quota
https://github.com/ceph/ceph/pull/18116 - jewel: rgw: release cls lock if taken in RGWCompleteMultipart
https://github.com/ceph/ceph/pull/17731 - jewel: rgw: fix marker encoding problem.
https://github.com/ceph/ceph/pull/17151 - jewel: rgw: log includes zero byte sometimes
https://github.com/ceph/ceph/pull/17149 - jewel: rgw: rgw_file: recursive lane lock can occur in LRU drain
https://github.com/ceph/ceph/pull/16763 - jewel: rgw: fix the bug that part's index can't be removed after completing
https://github.com/ceph/ceph/pull/16708 - jewel: rgw: 15912 15673 (Fix duplicate tag removal during GC, cls/refcount: store and use list of retired tags)

Build: pass https://shaman.ceph.com/builds/ceph/wip-jewel-backports/47dfab2209db5731e5d2931b8a2df8068ee56bbe/

teuthology-suite -k distro --priority 99 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 3)/3
teuthology-suite -k distro --priority 99 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Conclusion: PASS

Actions #12

Updated by Nathan Cutler about 6 years ago

cephfs integration run

PRs in wip-jewel-backports:

https://github.com/ceph/ceph/pull/20067 - jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
https://github.com/ceph/ceph/pull/19975 - jewel: client: release revoking Fc after invalidate cache
https://github.com/ceph/ceph/pull/19907 - jewel: cephfs: ceph.in: pass RADOS inst to LibCephFS
https://github.com/ceph/ceph/pull/19611 - jewel: include/fs_types: fix unsigned integer overflow
https://github.com/ceph/ceph/pull/18084 - jewel: ceph_volume_client: fix setting caps for IDs
https://github.com/ceph/ceph/pull/17925 - jewel: client: set client_try_dentry_invalidate to false by default
https://github.com/ceph/ceph/pull/17188 - jewel: mds: fix integer overflow

Build: pass https://shaman.ceph.com/builds/ceph/wip-jewel-backports/0576fd69f7b956f43df0229f1360c3946e91ef93/

teuthology-suite -k distro --priority 101 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Conclusion: PASS

Actions #13

Updated by Nathan Cutler about 6 years ago

rbd integration run

PRs in wip-jewel-backports:

https://github.com/ceph/ceph/pull/20287 - jewel: class rbd.Image discard----OSError: [errno 2147483648] error discarding region
https://github.com/ceph/ceph/pull/20286 - jewel: abort in listing mapped nbd devices when running in a container
https://github.com/ceph/ceph/pull/20282 - jewel: [journal] tags are not being expired if no other clients are registered
https://github.com/ceph/ceph/pull/20281 - jewel: [rbd] image-meta list does not return all entries
https://github.com/ceph/ceph/pull/20280 - jewel: [cli] rename of non-existent image results in seg fault
https://github.com/ceph/ceph/pull/20146 - jewel: repair_test fails due to race with osd start
https://github.com/ceph/ceph/pull/20143 - jewel: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore
https://github.com/ceph/ceph/pull/19855 - jewel: rbd: librbd: filter out potential race with image rename
https://github.com/ceph/ceph/pull/19801 - jewel: rbd ls -l crashes with SIGABRT
https://github.com/ceph/ceph/pull/19644 - jewel: rbd-mirror: cluster watcher should ensure it has latest OSD map
https://github.com/ceph/ceph/pull/19186 - jewel: rbd: disk usage on empty pool no longer returns an error message
https://github.com/ceph/ceph/pull/19115 - jewel: [rbd-nbd] Fedora does not register resize events
https://github.com/ceph/ceph/pull/19098 - jewel: librbd: set deleted parent pointer to null
https://github.com/ceph/ceph/pull/18843 - jewel: rbd: rbd crashes during map

Build: pass https://shaman.ceph.com/builds/ceph/wip-jewel-backports/eef7cddfe52e3a6df8172804af17024cd606245d/

teuthology-suite -k distro --priority 101 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2
teuthology-suite -k distro --priority 999 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type mira --subset $(expr $RANDOM % 2000)/2000

Conclusion: PASS

Actions #14

Updated by Nathan Cutler about 6 years ago

rados integration run

PRs included:

https://github.com/ceph/ceph/pull/20331 - jewel: snapset xattr corruption propagated from primary to other shards
https://github.com/ceph/ceph/pull/20306 - jewel: throttle: Minimal destructor fix for Luminous
https://github.com/ceph/ceph/pull/20289 - jewel: test_health_warnings.sh can fail
https://github.com/ceph/ceph/pull/20285 - jewel: possible deadlock in various maintenance operations
https://github.com/ceph/ceph/pull/20284 - jewel: ceph-objectstore-tool: "$OBJ get-omaphdr" and "$OBJ list-omap" scan all pgs instead of using specific pg
https://github.com/ceph/ceph/pull/20146 - jewel: repair_test fails due to race with osd start
https://github.com/ceph/ceph/pull/20143 - jewel: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore
https://github.com/ceph/ceph/pull/20108 - jewel: osd: update heartbeat peers when a new OSD is added
https://github.com/ceph/ceph/pull/19978 - jewel: common: compute SimpleLRU's size with contents.size() instead of lru.…
https://github.com/ceph/ceph/pull/19927 - jewel: config: lower default omap entries recovered at once
https://github.com/ceph/ceph/pull/19906 - jewel: HashIndex: randomize split threshold by a configurable amount
https://github.com/ceph/ceph/pull/19611 - jewel: include/fs_types: fix unsigned integer overflow
https://github.com/ceph/ceph/pull/19330 - jewel: tools/ceph-conf: dump parsed config in plain text or as json
https://github.com/ceph/ceph/pull/19141 - jewel: cephfs: Processes stuck waiting for write with ceph-fuse
https://github.com/ceph/ceph/pull/19057 - jewel: rgw: add cors header rule check in cors option request
https://github.com/ceph/ceph/pull/18850 - jewel: common/config: set rocksdb_cache_size to OPT_U64
https://github.com/ceph/ceph/pull/18780 - jewel: RHEL 7.3 Selinux denials at OSD start
https://github.com/ceph/ceph/pull/18743 - jewel: Objecter::C_ObjectOperation_sparse_read throws/catches exceptions on -ENOENT
https://github.com/ceph/ceph/pull/18294 - jewel: Ubuntu amd64 client can not discover the ubuntu arm64 ceph cluster
https://github.com/ceph/ceph/pull/18207 - jewel: rgw: bi list entry count incremented on error, distorting error code
https://github.com/ceph/ceph/pull/17952 - jewel: mon: implement max pg per osd limit
https://github.com/ceph/ceph/pull/17898 - jewel: common/config_opts.h: Set Filestore rocksdb compaction readahead option.
https://github.com/ceph/ceph/pull/17883 - jewel: core: global/signal_handler.cc: fix typo
https://github.com/ceph/ceph/pull/17707 - jewel: osd: also check the exsistence of clone obc for "CEPH_SNAPDIR" requests

Build: pending https://shaman.ceph.com/builds/ceph/wip-jewel-backports/a95e2c4958b256d75ea1c732c2f2cce45f024081/

teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2000)/2000
Actions #15

Updated by Nathan Cutler about 6 years ago

rados integration run

PRs included:

https://github.com/ceph/ceph/pull/20508 - jewel: ceph.restart + ceph_manager.wait_for_clean is racy
https://github.com/ceph/ceph/pull/20463 - test: Adjust for Jewel quirk caused of differences with master
https://github.com/ceph/ceph/pull/20446 - jewel: Filestore rocksdb compaction readahead option not set by default
https://github.com/ceph/ceph/pull/20421 - rgw: bucket resharding should not update bucket ACL or user stats
https://github.com/ceph/ceph/pull/20344 - jewel: mon/OSDMonitor: fix dividing by zero in OSDUtilizationDumper
https://github.com/ceph/ceph/pull/20289 - jewel: test_health_warnings.sh can fail
https://github.com/ceph/ceph/pull/20285 - jewel: possible deadlock in various maintenance operations
https://github.com/ceph/ceph/pull/20284 - jewel: ceph-objectstore-tool: "$OBJ get-omaphdr" and "$OBJ list-omap" scan all pgs instead of using specific pg
https://github.com/ceph/ceph/pull/19611 - jewel: include/fs_types: fix unsigned integer overflow
https://github.com/ceph/ceph/pull/19141 - jewel: cephfs: Processes stuck waiting for write with ceph-fuse
https://github.com/ceph/ceph/pull/19057 - jewel: rgw: add cors header rule check in cors option request
https://github.com/ceph/ceph/pull/18743 - jewel: Objecter::C_ObjectOperation_sparse_read throws/catches exceptions on -ENOENT
https://github.com/ceph/ceph/pull/18207 - jewel: rgw: bi list entry count incremented on error, distorting error code
https://github.com/ceph/ceph/pull/17952 - jewel: mon: implement max pg per osd limit

Build: ok https://shaman.ceph.com/builds/ceph/wip-jewel-backports/15a7431ae1866f503265623ee0678a472ddb3d23/

teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2000)/2000

Conclusion: PASS

Actions #16

Updated by Nathan Cutler about 6 years ago

rgw integration run

PRs included:

https://github.com/ceph/ceph/pull/20561 - jewel: rgw: ECANCELED in rgw_get_system_obj() leads to infinite loop
https://github.com/ceph/ceph/pull/20524 - jewel: test/librbd: utilize unique pool for cache tier testing
https://github.com/ceph/ceph/pull/20518 - jewel: osd: objecter sends out of sync with pg epochs for proxied ops
https://github.com/ceph/ceph/pull/20508 - jewel: ceph.restart + ceph_manager.wait_for_clean is racy
https://github.com/ceph/ceph/pull/20496 - jewel: rgw: upldate the max-buckets when the quota is uploaded
https://github.com/ceph/ceph/pull/20479 - jewel: rgw: fix the max-uploads parameter not work
https://github.com/ceph/ceph/pull/20435 - jewel: cephfs: osdc/Journaler: make sure flush() writes enough data
https://github.com/ceph/ceph/pull/20421 - jewel: rgw: bucket resharding should not update bucket ACL or user stats
https://github.com/ceph/ceph/pull/20418 - jewel: rbd-mirror: fix potential infinite loop when formatting status message
https://github.com/ceph/ceph/pull/20381 - jewel: librados: Double free in rados_getxattrs_next
https://github.com/ceph/ceph/pull/20335 - jewel: mds: fix scrub crash
https://github.com/ceph/ceph/pull/20333 - jewel: cephfs-journal-tool: move shutdown to the deconstructor of MDSUtility
https://github.com/ceph/ceph/pull/20312 - jewel: cephfs: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18-distro-basic-smithi
https://github.com/ceph/ceph/pull/20293 - jewel: rgw: stale bucket index entry remains after object deletion
https://github.com/ceph/ceph/pull/20292 - jewel: rgw: segmentation fault when starting radosgw after reverting .rgw.root
https://github.com/ceph/ceph/pull/20291 - jewel: rgw: list bucket which enable versioning get wrong result when user marker
https://github.com/ceph/ceph/pull/20285 - jewel: rbd: possible deadlock in various maintenance operations
https://github.com/ceph/ceph/pull/20271 - jewel: cephfs: client::mkdirs not handle well when two clients send mkdir request for a same dir
https://github.com/ceph/ceph/pull/20269 - jewel: rgw: multisite: data sync status advances despite failure in RGWListBucketIndexesCR
https://github.com/ceph/ceph/pull/20262 - jewel: rgw: null instance mtime incorrect when enable versioning
https://github.com/ceph/ceph/pull/20179 - jewel: rgw: add ability to sync user stats from admin api
https://github.com/ceph/ceph/pull/19993 - jewel: cephfs: fuse client: ::rmdir() uses a deleted memory structure of dentry leads …
https://github.com/ceph/ceph/pull/19961 - jewel: mds: fix dump last_sent
https://github.com/ceph/ceph/pull/19635 - jewel: rgw: S3 POST policy should not require Content-Type
https://github.com/ceph/ceph/pull/19488 - jewel: rgw: fix GET website response error code
https://github.com/ceph/ceph/pull/19189 - jewel: rgw: radosgw-admin zonegroup get and zone get return defaults when there i…
https://github.com/ceph/ceph/pull/19141 - jewel: cephfs: Processes stuck waiting for write with ceph-fuse
https://github.com/ceph/ceph/pull/19057 - jewel: rgw: add cors header rule check in cors option request
https://github.com/ceph/ceph/pull/18772 - jewel: rgw: boto3 v4 SignatureDoesNotMatch failure due to sorting of sse-kms headers
https://github.com/ceph/ceph/pull/18743 - jewel: Objecter::C_ObjectOperation_sparse_read throws/catches exceptions on -ENOENT

Build: green https://shaman.ceph.com/builds/ceph/wip-jewel-backports/c98fd4df3abaca68ce973c1cf58c908a8f280198/

teuthology-suite -k distro --priority 99 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 3)/3

Rerunning to determine reproducibility

Actions #17

Updated by Nathan Cutler about 6 years ago

new integration branch without PR#19189 and PR#20421:

https://github.com/ceph/ceph/pull/20749 - jewel: rgw: libcurl & ssl fixes
https://github.com/ceph/ceph/pull/20622 - jewel: follow-on: osd: be_select_auth_object() sanity check oi soid
https://github.com/ceph/ceph/pull/20561 - jewel: rgw: ECANCELED in rgw_get_system_obj() leads to infinite loop
https://github.com/ceph/ceph/pull/20524 - jewel: test/librbd: utilize unique pool for cache tier testing
https://github.com/ceph/ceph/pull/20518 - jewel: osd: objecter sends out of sync with pg epochs for proxied ops
https://github.com/ceph/ceph/pull/20508 - jewel: ceph.restart + ceph_manager.wait_for_clean is racy
https://github.com/ceph/ceph/pull/20496 - jewel: rgw: upldate the max-buckets when the quota is uploaded
https://github.com/ceph/ceph/pull/20479 - jewel: rgw: fix the max-uploads parameter not work
https://github.com/ceph/ceph/pull/20435 - jewel: cephfs: osdc/Journaler: make sure flush() writes enough data
https://github.com/ceph/ceph/pull/20418 - jewel: rbd-mirror: fix potential infinite loop when formatting status message
https://github.com/ceph/ceph/pull/20381 - jewel: librados: Double free in rados_getxattrs_next
https://github.com/ceph/ceph/pull/20335 - jewel: mds: fix scrub crash
https://github.com/ceph/ceph/pull/20333 - jewel: cephfs-journal-tool: move shutdown to the deconstructor of MDSUtility
https://github.com/ceph/ceph/pull/20312 - jewel: cephfs: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18-distro-basic-smithi
https://github.com/ceph/ceph/pull/20293 - jewel: rgw: stale bucket index entry remains after object deletion
https://github.com/ceph/ceph/pull/20292 - jewel: rgw: segmentation fault when starting radosgw after reverting .rgw.root
https://github.com/ceph/ceph/pull/20291 - jewel: rgw: list bucket which enable versioning get wrong result when user marker
https://github.com/ceph/ceph/pull/20285 - jewel: rbd: possible deadlock in various maintenance operations
https://github.com/ceph/ceph/pull/20271 - jewel: cephfs: client::mkdirs not handle well when two clients send mkdir request for a same dir
https://github.com/ceph/ceph/pull/20269 - jewel: rgw: multisite: data sync status advances despite failure in RGWListBucketIndexesCR
https://github.com/ceph/ceph/pull/20262 - jewel: rgw: null instance mtime incorrect when enable versioning
https://github.com/ceph/ceph/pull/20179 - jewel: rgw: add ability to sync user stats from admin api
https://github.com/ceph/ceph/pull/19993 - jewel: cephfs: fuse client: ::rmdir() uses a deleted memory structure of dentry leads …
https://github.com/ceph/ceph/pull/19961 - jewel: mds: fix dump last_sent
https://github.com/ceph/ceph/pull/19635 - jewel: rgw: S3 POST policy should not require Content-Type
https://github.com/ceph/ceph/pull/19488 - jewel: rgw: fix GET website response error code
https://github.com/ceph/ceph/pull/19141 - jewel: cephfs: Processes stuck waiting for write with ceph-fuse
https://github.com/ceph/ceph/pull/19057 - jewel: rgw: add cors header rule check in cors option request
https://github.com/ceph/ceph/pull/18772 - jewel: rgw: boto3 v4 SignatureDoesNotMatch failure due to sorting of sse-kms headers
https://github.com/ceph/ceph/pull/18743 - jewel: core: Objecter::C_ObjectOperation_sparse_read throws/catches exceptions on -ENOENT

Build: green https://shaman.ceph.com/builds/ceph/wip-jewel-backports/ff281b3758cabe133a0a217330389069936be42e/

teuthology-suite -k distro --priority 99 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 3)/3
teuthology-suite -k distro --priority 99 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Re-running 1 dead job:

Actions #18

Updated by Nathan Cutler about 6 years ago

cephfs

teuthology-suite -k distro --priority 101 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
Actions #19

Updated by Nathan Cutler about 6 years ago

rbd

Integration branch includes the following rbd and core PRs:

https://github.com/ceph/ceph/pull/20524 - jewel: test/librbd: utilize unique pool for cache tier testing
https://github.com/ceph/ceph/pull/20518 - jewel: osd: objecter sends out of sync with pg epochs for proxied ops
https://github.com/ceph/ceph/pull/20508 - jewel: ceph.restart + ceph_manager.wait_for_clean is racy
https://github.com/ceph/ceph/pull/20418 - jewel: rbd-mirror: fix potential infinite loop when formatting status message
https://github.com/ceph/ceph/pull/20381 - jewel: librados: Double free in rados_getxattrs_next
https://github.com/ceph/ceph/pull/20285 - jewel: rbd: possible deadlock in various maintenance operations
https://github.com/ceph/ceph/pull/18743 - jewel: core: Objecter::C_ObjectOperation_sparse_read throws/catches exceptions on -ENOENT

teuthology-suite -k distro --priority 101 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2

Mykola Golub analyzed all three failures here: https://github.com/ceph/ceph/pull/20285#issuecomment-377152873

Fail =================================================================
[2330958] rbd/qemu/{cache/writeback.yaml cachepool/none.yaml clusters/{fixed-3.yaml openstack.yaml} features/journaling.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml workloads/qemu_xfstests.yaml}
-----------------------------------------------------------------
time: 01:36:02
info: http://pulpito.ceph.com/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smithi/2330958/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smithi/2330958/

Command failed on smithi198 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd deep-scrub 3'

[2330979] rbd/maintenance/{base/install.yaml clusters/{fixed-3.yaml openstack.yaml} qemu/xfstests.yaml workloads/rebuild_object_map.yaml xfs.yaml}
-----------------------------------------------------------------
time: 00:53:23
info: http://pulpito.ceph.com/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smithi/2330979/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smithi/2330979/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=24c4ba668f814b0e9458c3dda62f54d1

Command failed (workunit test rbd/qemu_rebuild_object_map.sh) on smithi195 with status 30: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-jewel-backports TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 IMAGE_NAME=client.0.1-clone adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/qemu_rebuild_object_map.sh'

[2330879] rbd/singleton/{all/admin_socket.yaml openstack.yaml}
-----------------------------------------------------------------
time: 00:07:28
info: http://pulpito.ceph.com/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smithi/2330879/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_12:50:32-rbd-wip-jewel-backports-distro-basic-smithi/2330879/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=e53ed784c56a4754bb0a39534537d233

Command failed (workunit test rbd/test_admin_socket.sh) on smithi149 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-jewel-backports TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_admin_socket.sh'

Re-running 3 failed jobs:

Fail =================================================================
[2331461] rbd/qemu/{cache/writeback.yaml cachepool/none.yaml clusters/{fixed-3.yaml openstack.yaml} features/journaling.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml workloads/qemu_xfstests.yaml}
-----------------------------------------------------------------
time: 01:36:16
info: http://pulpito.ceph.com/smithfarm-2018-03-28_16:37:04-rbd-wip-jewel-backports-distro-basic-smithi/2331461/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-03-28_16:37:04-rbd-wip-jewel-backports-distro-basic-smithi/2331461/

Command failed on smithi006 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd deep-scrub 3'

Re-running one failed job:

Conclusion: ruled a PASS

Actions #20

Updated by Nathan Cutler about 6 years ago

rados

Integration branch includes the following core PRs:

https://github.com/ceph/ceph/pull/20518 - jewel: osd: objecter sends out of sync with pg epochs for proxied ops
https://github.com/ceph/ceph/pull/20508 - jewel: ceph.restart + ceph_manager.wait_for_clean is racy
https://github.com/ceph/ceph/pull/20381 - jewel: librados: Double free in rados_getxattrs_next
https://github.com/ceph/ceph/pull/18743 - jewel: core: Objecter::C_ObjectOperation_sparse_read throws/catches exceptions on -ENOENT

teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2000)/2000

Subset=1073/2000

Re-running 11 failed jobs:

Ruled a PASS, but found a new bug #23529 which partially blocks further integration testing.


Testing #23529 reproducer on the integration branch

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --suite rados --filter="rados/basic/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml}"

Testing reproducer on jewel baseline:

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph-repo https://github.com/ceph/ceph.git --ceph jewel --suite-repo https://github.com/ceph/ceph.git --suite-branch jewel --machine-type smithi --suite rados --filter="rados/basic/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml}" --newest 5

--newest supplied, backtracked 1 commits to d2e85794d94abcc6427e8c7f272c3d1f92149c65


Testing reproducer on jewel minus the last 10 merged PRs - 556a0b20ca Merge pull request #20271

Build pass https://shaman.ceph.com/builds/ceph/wip-jewel-minus-10/

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-jewel-minus-10 --machine-type smithi --suite rados --filter="rados/basic/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml}"

Testing reproducer on jewel minus the last 5 merged PRs - 1c7b6b0004 Merge pull request #20333

Build pass https://shaman.ceph.com/builds/ceph/wip-jewel-minus-5/

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-jewel-minus-5 --machine-type smithi --suite rados --filter="rados/basic/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml}"

Testing reproducer on jewel minus the last 7 merged PRs - 1c7b6b0004 Merge pull request #19993

Build pass https://shaman.ceph.com/builds/ceph/wip-jewel-minus-7/

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-jewel-minus-7 --machine-type smithi --suite rados --filter="rados/basic/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml}"
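
For context, one plausible way to produce a partial branch like wip-jewel-minus-10 (a sketch, not necessarily how it was done; the remote name "ceph" is an assumption): branch at the quoted merge commit and push it so shaman builds it.

git checkout -b wip-jewel-minus-10 556a0b20ca   # "Merge pull request #20271", quoted above
git push ceph wip-jewel-minus-10                # shaman picks up the pushed branch
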
Actions #21

Updated by Nathan Cutler about 6 years ago

rados

These PRs were included:
https://github.com/ceph/ceph/pull/21234 - jewel: rocksdb: disable jemalloc explicitly
https://github.com/ceph/ceph/pull/21208 - jewel: cephfs: fix tmap_upgrade crash
https://github.com/ceph/ceph/pull/21184 - jewel: osd: OSDMap cache assert on shutdown
https://github.com/ceph/ceph/pull/21175 - jewel: mds: session reference leak
https://github.com/ceph/ceph/pull/21172 - jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
https://github.com/ceph/ceph/pull/21163 - jewel: client reconnect gather race
https://github.com/ceph/ceph/pull/21158 - jewel: cli/crushtools/build.t sometimes fails in jenkins' make check run
https://github.com/ceph/ceph/pull/21156 - jewel: mds: FAILED assert(get_version() < pv) in CDir::mark_dirty
https://github.com/ceph/ceph/pull/21153 - jewel: client: dual client segfault with racing ceph_shutdown
https://github.com/ceph/ceph/pull/21152 - jewel: qa: failures from pjd fstest
https://github.com/ceph/ceph/pull/21135 - jewel: tests: unittest_pglog timeout
https://github.com/ceph/ceph/pull/21125 - jewel: test_admin_socket.sh may fail on wait_for_clean
https://github.com/ceph/ceph/pull/21100 - jewel: rgw: s3website error handler uses original object name
https://github.com/ceph/ceph/pull/21098 - jewel: rgw: inefficient buffer usage for PUTs
https://github.com/ceph/ceph/pull/21084 - jewel: log: Fix AddressSanitizer: new-delete-type-mismatch
https://github.com/ceph/ceph/pull/21073 - jewel: rgw: tcmalloc
https://github.com/ceph/ceph/pull/20999 - jewel: legal: remove doc license ambiguity
https://github.com/ceph/ceph/pull/20882 - jewel: ceph-objectstore-tool command to trim the pg log
https://github.com/ceph/ceph/pull/20800 - jewel: rgw: make init env methods return an error
https://github.com/ceph/ceph/pull/20763 - jewel: ceph.in: bypass codec when writing raw binary data
https://github.com/ceph/ceph/pull/20639 - jewel: rgw: core dump, recursive lock of RGWKeystoneTokenCache
https://github.com/ceph/ceph/pull/20627 - jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
https://github.com/ceph/ceph/pull/20622 - jewel: follow-on: osd: be_select_auth_object() sanity check oi soid
https://github.com/ceph/ceph/pull/20421 - jewel: rgw: bucket resharding should not update bucket ACL or user stats
https://github.com/ceph/ceph/pull/20111 - jewel: cephfs-journal-tool: add "set pool_id" option
https://github.com/ceph/ceph/pull/20076 - jewel: rgw: file deadlock on lru evicting
https://github.com/ceph/ceph/pull/19887 - jewel: rgw: add xml output header in RGWCopyObj_ObjStore_S3 response msg
https://github.com/ceph/ceph/pull/18304 - jewel: rgw: file write error
https://github.com/ceph/ceph/pull/18207 - jewel: rgw: bi list entry count incremented on error, distorting error code
https://github.com/ceph/ceph/pull/17818 - jewel: tests: make upgrade tests use supported distros

Build pass https://shaman.ceph.com/builds/ceph/wip-jewel-backports/c1b09de47f94934698b0c725aa871969ed0c0407/

teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2000)/2000

Fail =================================================================
[2352515] rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml clusters/{fixed-2.yaml openstack.yaml} hobj-sort.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml workloads/cache-pool-snaps.yaml}
-----------------------------------------------------------------
time: 00:13:16
info: http://pulpito.ceph.com/smithfarm-2018-04-04_11:30:23-rados-wip-jewel-backports-distro-basic-smithi/2352515/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-04-04_11:30:23-rados-wip-jewel-backports-distro-basic-smithi/2352515/

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool base'

[2352483] rados/objectstore/objectstore.yaml
-----------------------------------------------------------------
time: 00:05:53
info: http://pulpito.ceph.com/smithfarm-2018-04-04_11:30:23-rados-wip-jewel-backports-distro-basic-smithi/2352483/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-04-04_11:30:23-rados-wip-jewel-backports-distro-basic-smithi/2352483/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=86ff1cdabc05412a878190214f412356

Command failed on smithi101 with status 139: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkdir $TESTDIR/ostest && cd $TESTDIR/ostest && ceph_test_objectstore --gtest_filter=*/2:-*/3'"

[2352499] rados/basic/{clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_python.yaml}
-----------------------------------------------------------------
time: 00:11:07
info: http://pulpito.ceph.com/smithfarm-2018-04-04_11:30:23-rados-wip-jewel-backports-distro-basic-smithi/2352499/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-04-04_11:30:23-rados-wip-jewel-backports-distro-basic-smithi/2352499/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=999c597c68b0499f9d3f4de6802ce929

Command failed (workunit test rados/test_python.sh) on smithi202 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-jewel-backports TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'

[2352539] rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml}
-----------------------------------------------------------------
time: 00:00:00
info: http://pulpito.ceph.com/smithfarm-2018-04-04_11:30:23-rados-wip-jewel-backports-distro-basic-smithi/2352539/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-04-04_11:30:23-rados-wip-jewel-backports-distro-basic-smithi/2352539/
sentry: http://sentry.ceph.com/sepia/teuthology/?q=f1a07817298540119c85c33e3a5a50f3

Command failed on smithi064 with status 100: 'sudo apt-get update'

Re-running 4 failed jobs:

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --rerun smithfarm-2018-04-04_11:30:23-rados-wip-jewel-backports-distro-basic-smithi
Actions #22

Updated by Nathan Cutler about 6 years ago

rgw

Integration branch includes these PRs:

https://github.com/ceph/ceph/pull/21208 - jewel: cephfs: fix tmap_upgrade crash
https://github.com/ceph/ceph/pull/21184 - jewel: osd: OSDMap cache assert on shutdown
https://github.com/ceph/ceph/pull/21175 - jewel: mds: session reference leak
https://github.com/ceph/ceph/pull/21172 - jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
https://github.com/ceph/ceph/pull/21163 - jewel: client reconnect gather race
https://github.com/ceph/ceph/pull/21156 - jewel: mds: FAILED assert(get_version() < pv) in CDir::mark_dirty
https://github.com/ceph/ceph/pull/21152 - jewel: qa: failures from pjd fstest
https://github.com/ceph/ceph/pull/21125 - jewel: test_admin_socket.sh may fail on wait_for_clean
https://github.com/ceph/ceph/pull/21100 - jewel: rgw: s3website error handler uses original object name
https://github.com/ceph/ceph/pull/21098 - jewel: rgw: inefficient buffer usage for PUTs
https://github.com/ceph/ceph/pull/21073 - jewel: rgw: tcmalloc
https://github.com/ceph/ceph/pull/20800 - jewel: rgw: make init env methods return an error
https://github.com/ceph/ceph/pull/20639 - jewel: rgw: core dump, recursive lock of RGWKeystoneTokenCache
https://github.com/ceph/ceph/pull/20627 - jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
https://github.com/ceph/ceph/pull/20421 - jewel: rgw: bucket resharding should not update bucket ACL or user stats
https://github.com/ceph/ceph/pull/20111 - jewel: cephfs-journal-tool: add "set pool_id" option
https://github.com/ceph/ceph/pull/20076 - jewel: rgw: file deadlock on lru evicting
https://github.com/ceph/ceph/pull/19887 - jewel: rgw: add xml output header in RGWCopyObj_ObjStore_S3 response msg
https://github.com/ceph/ceph/pull/18304 - jewel: rgw: file write error
https://github.com/ceph/ceph/pull/18207 - jewel: rgw: bi list entry count incremented on error, distorting error code
https://github.com/ceph/ceph/pull/17818 - jewel: tests: make upgrade tests use supported distros

Build pass https://shaman.ceph.com/builds/ceph/wip-jewel-backports/759bc63864e50c7e15b397f082d1a5839fb8a3be/

teuthology-suite -k distro --priority 102 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi

Running reproducer on baseline jewel

teuthology-suite -k distro --priority 101 --email ncutler@suse.com --ceph-repo https://github.com/ceph/ceph.git --ceph jewel --suite-repo https://github.com/ceph/ceph.git --suite-branch jewel --machine-type smithi --suite rgw --filter="rgw/multifs/{clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml objectstore/filestore-xfs.yaml overrides.yaml rgw_pool_type/replicated.yaml tasks/rgw_swift.yaml}"

Running reproducer on just PR#21098

Build pass https://shaman.ceph.com/builds/ceph/wip-jewel-pr21098/

teuthology-suite -k distro --priority 99 --email ncutler@suse.com --ceph wip-jewel-pr21098 --machine-type smithi --suite rgw --filter="rgw/multifs/{clusters/fixed-2.yaml frontend/apache.yaml fs/btrfs.yaml objectstore/filestore-xfs.yaml overrides.yaml rgw_pool_type/replicated.yaml tasks/rgw_swift.yaml}"
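
A hypothetical sketch of how a single-PR branch like wip-jewel-pr21098 could be assembled (remote name and commit message are assumptions):

git fetch ceph pull/21098/head                        # PR #21098 head from GitHub
git checkout -b wip-jewel-pr21098 ceph/jewel
git merge --no-ff -m "Merge PR #21098 for testing" FETCH_HEAD
git push ceph wip-jewel-pr21098                       # shaman builds the pushed branch
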
Actions #23

Updated by Nathan Cutler about 6 years ago

cephfs

Integration branch same as #21742-22

teuthology-suite -k distro --priority 101 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
Actions #24

Updated by Nathan Cutler about 6 years ago

rgw

These PRs:

https://github.com/ceph/ceph/pull/21184 - jewel: osd: OSDMap cache assert on shutdown
https://github.com/ceph/ceph/pull/21156 - jewel: mds: FAILED assert(get_version() < pv) in CDir::mark_dirty
https://github.com/ceph/ceph/pull/21125 - jewel: test_admin_socket.sh may fail on wait_for_clean
https://github.com/ceph/ceph/pull/21100 - jewel: rgw: s3website error handler uses original object name
https://github.com/ceph/ceph/pull/21073 - jewel: rgw: tcmalloc
https://github.com/ceph/ceph/pull/20800 - jewel: rgw: make init env methods return an error
https://github.com/ceph/ceph/pull/20639 - jewel: rgw: core dump, recursive lock of RGWKeystoneTokenCache
https://github.com/ceph/ceph/pull/20421 - jewel: rgw: bucket resharding should not update bucket ACL or user stats
https://github.com/ceph/ceph/pull/20076 - jewel: rgw: file deadlock on lru evicting
https://github.com/ceph/ceph/pull/19887 - jewel: rgw: add xml output header in RGWCopyObj_ObjStore_S3 response msg
https://github.com/ceph/ceph/pull/18304 - jewel: rgw: file write error
https://github.com/ceph/ceph/pull/18207 - jewel: rgw: bi list entry count incremented on error, distorting error code
https://github.com/ceph/ceph/pull/17818 - jewel: tests: make upgrade tests use supported distros

Build pass https://shaman.ceph.com/builds/ceph/wip-jewel-backports/b6f9fd62843c1702fed57ce656af0b1a977fbf63/

teuthology-suite -k distro --priority 102 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
Actions #25

Updated by Nathan Cutler about 6 years ago

rados

Done. These PRs were included:
https://github.com/ceph/ceph/pull/21232 - jewel: rbd-nbd: fix ebusy when do map
https://github.com/ceph/ceph/pull/21228 - jewel: librbd: cannot clone all image-metas if we have more than 64 key/value pairs
https://github.com/ceph/ceph/pull/21227 - jewel: rbd: clean up warnings when mirror commands used on non-setup pool
https://github.com/ceph/ceph/pull/21225 - jewel: rbd-mirror: ignore permission errors on rbd_mirroring object
https://github.com/ceph/ceph/pull/21224 - jewel: librbd: list_children should not attempt to refresh image
https://github.com/ceph/ceph/pull/21223 - jewel: rbd-mirror: strip environment/CLI overrides for remote cluster
https://github.com/ceph/ceph/pull/21220 - jewel: librbd: object map batch update might cause OSD suicide timeout
https://github.com/ceph/ceph/pull/21219 - jewel: librbd: create+truncate for whole-object layered discards
https://github.com/ceph/ceph/pull/21215 - jewel: journal: Message too long error when appending journal
https://github.com/ceph/ceph/pull/21213 - jewel: rgw: data sync of versioned objects, note updating bi marker
https://github.com/ceph/ceph/pull/21210 - jewel: rgw: add radosgw-admin sync error trim to trim sync error log
https://github.com/ceph/ceph/pull/21207 - jewel: rbd: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features.sh may return false positive
https://github.com/ceph/ceph/pull/21206 - jewel: rbd: [journal] allocating a new tag after acquiring the lock should use on-disk committed position
https://github.com/ceph/ceph/pull/21205 - jewel: rbd: rbd-mirror split brain test case can have a false-positive failure until teuthology
https://github.com/ceph/ceph/pull/21203 - jewel: librbd: cannot copy all image-metas if we have more than 64 key/value pairs
https://github.com/ceph/ceph/pull/21200 - jewel: osd/PrimaryLogPG: dump snap_trimq size
https://github.com/ceph/ceph/pull/21199 - jewel: osd: replica read can trigger cache promotion
https://github.com/ceph/ceph/pull/21197 - jewel: ceph_authtool: add mode option
https://github.com/ceph/ceph/pull/21190 - jewel: rpm: bump epoch ahead of ceph-common in RHEL base
https://github.com/ceph/ceph/pull/21189 - jewel: cephfs: client: prevent fallback to remount when dentry_invalidate_cb is true but root->dir is NULL
https://github.com/ceph/ceph/pull/21185 - jewel: mds: underwater dentry check in CDir::_omap_fetched is racy
https://github.com/ceph/ceph/pull/21125 - jewel: test_admin_socket.sh may fail on wait_for_clean
https://github.com/ceph/ceph/pull/21098 - jewel: rgw: inefficient buffer usage for PUTs
https://github.com/ceph/ceph/pull/20627 - jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
https://github.com/ceph/ceph/pull/20381 - jewel: librados: Double free in rados_getxattrs_next
https://github.com/ceph/ceph/pull/18010 - jewel: core: enable rocksdb for filestore
https://github.com/ceph/ceph/pull/17818 - jewel: tests: make upgrade tests use supported distros

Build pass https://shaman.ceph.com/builds/ceph/wip-jewel-backports/64695235ff1ba4b33491329af2dc51863c1b3707/

teuthology-suite -k distro --priority 101 --suite rados --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2000)/2000
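The --subset N/2000 argument schedules only one 1/2000 slice of the full rados matrix; the $(expr $RANDOM % 2000) part just picks which slice at random. A hedged sketch that pins the choice so the same slice can be scheduled again later (SLICE is an illustrative variable name):

# Sketch: capture the randomly chosen slice for reproducible rescheduling.
SLICE=$(expr $RANDOM % 2000)
echo "scheduling rados slice $SLICE/2000"
teuthology-suite -k distro --priority 101 --suite rados \
  --email ncutler@suse.com --ceph wip-jewel-backports \
  --machine-type smithi --subset "$SLICE/2000"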

Re-running 6 failed jobs:

[2371464] rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml supported/centos_latest.yaml thrashers/mapgap.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml}
-----------------------------------------------------------------
time: 00:14:33
info: http://pulpito.ceph.com/smithfarm-2018-04-08_15:19:43-rados-wip-jewel-backports-distro-basic-smithi/2371464/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-04-08_15:19:43-rados-wip-jewel-backports-distro-basic-smithi/2371464/

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage
/home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops
4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size
400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op
snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op
copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50
--pool unique_pool_0'

[2371460] rados/upgrade/{hammer-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{ec-rados-plugin=jerasure-k=3-m=1.yaml rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml test_cache-pool-snaps.yaml}} rados.yaml}
-----------------------------------------------------------------
time: 01:13:34
info: http://pulpito.ceph.com/smithfarm-2018-04-08_15:19:43-rados-wip-jewel-backports-distro-basic-smithi/2371460/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-04-08_15:19:43-rados-wip-jewel-backports-distro-basic-smithi/2371460/

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage
/home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000
--objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000
--max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op
snap_create 50 --op rollback 50 --op read 100 --op write 100 --op delete 50
--pool unique_pool_1'

Re-running 2 failed jobs

[2371518] rados/upgrade/{hammer-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{ec-rados-plugin=jerasure-k=3-m=1.yaml rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml test_cache-pool-snaps.yaml}} rados.yaml}
-----------------------------------------------------------------
time: 01:06:17
info: http://pulpito.ceph.com/smithfarm-2018-04-08_16:53:21-rados-wip-jewel-backports-distro-basic-smithi/2371518/
log: http://qa-proxy.ceph.com/teuthology/smithfarm-2018-04-08_16:53:21-rados-wip-jewel-backports-distro-basic-smithi/2371518/

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage
/home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000
--objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000
--max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op
snap_create 50 --op rollback 50 --op read 100 --op write 100 --op delete 50
--pool unique_pool_8'

Re-running 1 failed job
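These re-runs were scheduled individually; as a hedged sketch, one way to do that by hand is to reuse a failed job's description string as a --filter, the same way the rgw reproducer was scheduled in an earlier comment. The description below is copied verbatim from the hammer-x-singleton failure above:

# Sketch: reschedule a single failed job by filtering on its description.
teuthology-suite -k distro --priority 99 --suite rados \
  --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi \
  --filter "rados/upgrade/{hammer-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{ec-rados-plugin=jerasure-k=3-m=1.yaml rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml test_cache-pool-snaps.yaml}} rados.yaml}"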

Actions #26

Updated by Nathan Cutler about 6 years ago

rbd

Integration branch: same as #21742-25

teuthology-suite -k distro --priority 101 --suite rbd --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi --subset $(expr $RANDOM % 2)/2
Actions #27

Updated by Nathan Cutler about 6 years ago

fs

Integration branch: same as in #21742-25

teuthology-suite -k distro --priority 102 --suite fs --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
Actions #28

Updated by Nathan Cutler about 6 years ago

powercycle

teuthology-suite -v -c wip-jewel-backports -k distro -m smithi -s powercycle -p 99 -l 2 --email ncutler@suse.com
Actions #29

Updated by Nathan Cutler about 6 years ago

Upgrade hammer-x

teuthology-suite -k distro --verbose --suite upgrade/hammer-x --ceph wip-jewel-backports --machine-type smithi --priority 99 --email ncutler@suse.com
Actions #30

Updated by Nathan Cutler about 6 years ago

rgw

Integration branch: same as #21742-25

teuthology-suite -k distro --priority 102 --suite rgw --email ncutler@suse.com --ceph wip-jewel-backports --machine-type smithi
Actions #32

Updated by Josh Durgin almost 6 years ago

The logs for those #23290 failures are gone now, but since it's a pre-existing issue in jewel, I don't think it should block the 10.2.11 release.

Actions #33

Updated by Yuri Weinstein almost 6 years ago

QE VALIDATION (STARTED 6/27/18)

CEPH_QA_MAIL=""; MACHINE_NAME=smithi; CEPH_BRANCH=wip-jewel-20180627

teuthology-suite -v --ceph-repo https://github.com/ceph/ceph-ci.git --suite-repo https://github.com/ceph/ceph-ci.git -c $CEPH_BRANCH -m $MACHINE_NAME -s rados --subset 35/50 -k distro -p 60 -e $CEPH_QA_MAIL --suite-branch $CEPH_BRANCH --dry-run
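The --dry-run flag only prints the list of jobs that would be scheduled; the pulpito run linked below was presumably scheduled by repeating the command without it. A minimal sketch of that two-step pattern, mirroring the variables above (the -e email flag is omitted here to keep the sketch self-contained):

# Sketch: preview the job list first, then schedule for real.
MACHINE_NAME=smithi; CEPH_BRANCH=wip-jewel-20180627
teuthology-suite -v --ceph-repo https://github.com/ceph/ceph-ci.git \
  --suite-repo https://github.com/ceph/ceph-ci.git \
  -c $CEPH_BRANCH -m $MACHINE_NAME -s rados --subset 35/50 \
  -k distro -p 60 --suite-branch $CEPH_BRANCH --dry-run
# inspect the printed job list, then re-run the same command without --dry-run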

http://pulpito.ceph.com/yuriw-2018-06-27_16:36:18-rados-wip-jewel-20180627-distro-basic-smithi/

Josh, Sage, Neha pls review ^

http://pulpito.ceph.com/yuriw-2018-06-27_16:43:03-rgw-wip-jewel-20180627-distro-basic-smithi/ - approved by Casey

http://pulpito.ceph.com/yuriw-2018-06-27_16:47:10-rbd-wip-jewel-20180627-distro-basic-smithi - approved by Jason, see http://tracker.ceph.com/issues/24017

http://pulpito.ceph.com/yuriw-2018-06-27_16:52:11-fs-wip-jewel-20180627-distro-basic-smithi

http://pulpito.ceph.com/yuriw-2018-06-27_19:54:08-krbd-wip-jewel-20180627-testing-basic-smithi
Ilya: Can’t approve. Lots of failed jobs with:
[ERR] : OSD full dropping all updates 97% full
mon.b@1(peon).data_health(4) reached critical levels of available space on local monitor storage -- shutdown!
and some FileStore crashes too.
Needs to be re-run to separate environmental failures from real failures.

One more rerun
http://pulpito.ceph.com/yuriw-2018-07-05_16:48:41-krbd-wip-jewel-20180627-testing-basic-smithi/

http://pulpito.ceph.com/yuriw-2018-06-27_19:54:54-kcephfs-wip-jewel-20180627-testing-basic-smithi

Patrick pls review ^ (possibly http://tracker.ceph.com/issues/24017)

http://pulpito.ceph.com/yuriw-2018-06-27_19:55:30-knfs-wip-jewel-20180627-testing-basic-smithi PASSED

http://pulpito.ceph.com/yuriw-2018-06-27_19:55:47-rest-wip-jewel-20180627-distro-basic-smithi/ PASSED

http://pulpito.ceph.com/yuriw-2018-06-27_19:55:51-ceph-deploy-wip-jewel-20180627-distro-basic-ovh/ ALL FAILED

Will fix: we were using the wrong branch for testing; the latest ceph-deploy branch works only with luminous+ builds.

Testing: https://github.com/ceph/ceph/pull/22881

Sage, Vasu pls review ^

http://pulpito.ceph.com/yuriw-2018-06-27_20:04:37-ceph-disk-wip-jewel-20180627-distro-basic-ovh/ PASSED

http://pulpito.ceph.com/yuriw-2018-06-27_20:00:53-upgrade:client-upgrade-wip-jewel-20180627-distro-basic-smithi/
http://pulpito.ceph.com/yuriw-2018-06-28_20:27:42-upgrade:client-upgrade-wip-jewel-20180627-distro-basic-smithi/
Failed on CentOS 7.2, which we don't have, so PASSED

http://pulpito.ceph.com/yuriw-2018-06-27_20:01:13-upgrade:hammer-x-wip-jewel-20180627-distro-basic-smithi/
http://pulpito.ceph.com/yuriw-2018-06-28_20:29:18-upgrade:hammer-x-wip-jewel-20180627-distro-basic-smithi/

Sage pls review ^; maybe we can deprecate this suite since hammer is EOL

http://pulpito.ceph.com/yuriw-2018-06-27_20:01:37-upgrade:jewel-x:point-to-point-x-wip-jewel-20180627-distro-basic-smithi/ FAILED, same as in the previous point
Sage approved? ^

http://pulpito.ceph.com/yuriw-2018-06-27_20:01:43-ceph-ansible-wip-jewel-20180627-distro-basic-ovh/

Check with Josh: the cls_rgw workunit failed, outside of ceph-ansible.

Sebastian, Vasu pls review ^, see http://tracker.ceph.com/issues/24640
(Is that link a different tracker issue?)

http://pulpito.ceph.com/yuriw-2018-06-27_20:03:10-powercycle-wip-jewel-20180627-distro-basic-smithi/
http://pulpito.ceph.com/yuriw-2018-06-28_20:32:43-powercycle-wip-jewel-20180627-distro-basic-smithi/

Sage pls review ^

=======================================================

QE VALIDATION (STARTED 5/22/18 and is ON HOLD NOW as we are testing more PRs)

(Note: "PASSED / FAILED" indicates the test is still in progress)

Re-run command lines and filters are captured in http://pad.ceph.com/p/hammer_v10.2.11_QE_validation_notes

Command line:

CEPH_QA_MAIL="ceph-qa@ceph.com"; MACHINE_NAME=smithi; CEPH_BRANCH=jewel; SHA1=98aa3b683021b0d7329319bbc74736d777603968 ; teuthology-suite -v --ceph-repo https://github.com/ceph/ceph.git --suite-repo https://github.com/ceph/ceph.git -c $CEPH_BRANCH -S $SHA1 -m $MACHINE_NAME -s rados --subset 35/50 -k distro -p 100 -e $CEPH_QA_MAIL --suite-branch jewel --dry-run

teuthology-suite -v -c $CEPH_BRANCH -S $SHA1 -m $MACHINE_NAME -r $RERUN --suite-repo https://github.com/ceph/ceph.git --ceph-repo https://github.com/ceph/ceph.git --suite-branch jewel -p 90 -R fail,dead,running -N 3
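The rerun command above references $RERUN without defining it; it is the name of the original run, i.e. the yuriw-... directory component of the pulpito URL. A hedged sketch filling in an illustrative value taken from the rados row in the table below:

# Sketch: RERUN names the run to repeat; -R limits the rerun to jobs that
# ended in the listed states. The RERUN value here is illustrative.
CEPH_BRANCH=jewel; SHA1=98aa3b683021b0d7329319bbc74736d777603968; MACHINE_NAME=smithi
RERUN=yuriw-2018-05-22_16:34:34-rados-jewel-distro-basic-smithi
teuthology-suite -v -c $CEPH_BRANCH -S $SHA1 -m $MACHINE_NAME -r $RERUN \
  --suite-repo https://github.com/ceph/ceph.git --ceph-repo https://github.com/ceph/ceph.git \
  --suite-branch jewel -p 90 -R fail,dead,running -N 3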

Suite | Runs/Reruns | Notes/Issues

rados: http://pulpito.ceph.com/yuriw-2018-05-22_16:34:34-rados-jewel-distro-basic-smithi/ PASSED / FAILED
rgw: http://pulpito.ceph.com/yuriw-2018-05-22_16:47:07-rgw-jewel-distro-basic-smithi/ PASSED / FAILED
    rerun: http://pulpito.front.sepia.ceph.com/yuriw-2018-05-22_23:06:31-rgw-jewel-distro-basic-smithi
rbd: http://pulpito.ceph.com/yuriw-2018-05-22_16:49:35-rbd-jewel-distro-basic-smithi/ PASSED / FAILED
fs: http://pulpito.ceph.com/yuriw-2018-05-22_16:53:42-fs-jewel-distro-basic-smithi/ PASSED / FAILED
krbd: http://pulpito.ceph.com/yuriw-2018-05-22_16:56:08-krbd-jewel-testing-basic-smithi PASSED / FAILED
kcephfs: http://pulpito.ceph.com/yuriw-2018-05-22_16:57:08-kcephfs-jewel-testing-basic-smithi/ PASSED / FAILED
knfs: http://pulpito.ceph.com/yuriw-2018-05-22_16:57:56-knfs-jewel-testing-basic-smithi PASSED / FAILED
rest: http://pulpito.ceph.com/yuriw-2018-05-22_16:58:38-rest-jewel-distro-basic-smithi PASSED / FAILED
hadoop: EXCLUDED FROM THIS RELEASE
samba: EXCLUDED FROM THIS RELEASE
ceph-deploy: http://pulpito.ceph.com/yuriw-2018-05-22_16:59:39-ceph-deploy-jewel-distro-basic-ovh PASSED / FAILED
    rerun: http://pulpito.front.sepia.ceph.com:80/yuriw-2018-05-22_23:10:19-ceph-deploy-jewel-distro-basic-mira/
ceph-disk: http://pulpito.ceph.com/yuriw-2018-05-22_17:00:30-ceph-disk-jewel-distro-basic-vps/ PASSED / FAILED
    rerun: http://pulpito.front.sepia.ceph.com:80/yuriw-2018-05-22_23:12:05-ceph-disk-jewel-distro-basic-ovh/
upgrade/client-upgrade: http://pulpito.ceph.com/yuriw-2018-05-22_17:01:45-upgrade:client-upgrade-jewel-distro-basic-ovh PASSED / FAILED
upgrade/hammer-x (jewel): http://pulpito.front.sepia.ceph.com:80/yuriw-2018-05-22_23:13:57-upgrade:hammer-x-jewel-distro-basic-smithi/ PASSED / FAILED
upgrade/jewel-x/point-to-point-x: http://pulpito.ceph.com/yuriw-2018-05-22_17:06:51-upgrade:jewel-x:point-to-point-x-jewel-distro-basic-ovh PASSED / FAILED
    rerun: http://pulpito.front.sepia.ceph.com:80/yuriw-2018-05-22_23:15:11-upgrade:jewel-x:point-to-point-x-jewel-distro-basic-smithi/
powercycle: http://pulpito.ceph.com/yuriw-2018-05-22_17:07:42-powercycle-jewel-distro-basic-smithi PASSED / FAILED
ceph-ansible: http://pulpito.ceph.com/yuriw-2018-05-22_17:08:59-ceph-ansible-jewel-distro-basic-ovh PASSED
===========
PASSED / FAILED
Actions #34

Updated by Nathan Cutler almost 6 years ago

  • Status changed from In Progress to Resolved
Actions
