Backport #16701
jewel: rbd-mirror: image sync throttle needs to use pool id + image id to form unique key
Status: closed
Release: jewel
Updated by Nathan Cutler almost 8 years ago
- Copied from Bug #16536: rbd-mirror: image sync throttle needs to use pool id + image id to form unique key added
Updated by Loïc Dachary over 7 years ago
Updated by Jason Dillaman over 7 years ago
- Description updated (diff)
- Status changed from New to In Progress
Updated by Loïc Dachary over 7 years ago
teuthology-suite -k distro --priority 101 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports-loic --machine-type smithi
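The `--subset $(expr $RANDOM % 5)/5` argument in the command above is plain shell arithmetic: it picks one of five slices of the rbd suite at random, so each scheduling runs roughly a fifth of the jobs. A minimal sketch of just that expansion (independent of teuthology itself):

```shell
# $RANDOM yields 0..32767, so the modulus selects a slice index 0..4,
# producing a --subset value such as "3/5".
slice=$(expr $RANDOM % 5)
subset="${slice}/5"
echo "$subset"
```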
- fail http://pulpito.ceph.com/loic-2016-08-15_07:38:06-rbd-jewel-backports-loic-distro-basic-smithi/
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=61 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_rbd_mirror.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=125 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/many.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin VALGRIND=memcheck adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_rbd_mirror.sh'
- 'mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=92c554af6421a3d9b9d5cb58dac91f15995178ef TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=125 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
- saw valgrind issues
- 'mkdir
The dead jobs are due to an rbd-nbd IO hang.
Re-running the failed jobs
- fail http://pulpito.ceph.com/loic-2016-08-15_13:56:34-rbd-jewel-backports-loic-distro-basic-smithi/
- known bug, rbd/mirror/workloads/rbd-mirror-stress-workunit.yaml is unstable on slow clusters: 'mkdir -p /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb7b39faab8f0740a8d8b587e88539084a9d47e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/many.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- saw valgrind issues
- known bug, rbd/mirror/workloads/rbd-mirror-stress-workunit.yaml is unstable on slow clusters: 'mkdir
Re-running the failed jobs
- fail http://pulpito.ceph.com/loic-2016-08-15_21:11:09-rbd-jewel-backports-loic-distro-basic-smithi/
- known bug, rbd/mirror/workloads/rbd-mirror-stress-workunit.yaml is unstable on slow clusters: 'mkdir -p /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbb7b39faab8f0740a8d8b587e88539084a9d47e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/many.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- saw valgrind issues
- known bug, rbd/mirror/workloads/rbd-mirror-stress-workunit.yaml is unstable on slow clusters: 'mkdir
Updated by Loïc Dachary over 7 years ago
- Status changed from In Progress to Resolved
- Target version set to v10.2.3