Backport #16904 (Closed)
jewel: journal should prefetch small chunks of the object during replay
Release: jewel
Pull request ID:
Crash signature (v1):
Crash signature (v2):
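For context on what is being backported: per Bug #16223, journal replay previously fetched an entire journal object before replaying its entries; the fix bounds memory use by fetching the object in small chunks. Below is a minimal, hypothetical Python sketch of the chunked-fetch idea only — the real implementation is C++ in Ceph's src/journal, and `CHUNK_SIZE`, `decode_entries`, `read_at`, and the toy entry format are illustrative stand-ins, not Ceph APIs:

```python
import struct

CHUNK_SIZE = 4096  # hypothetical fetch window; the real prefetch size is tunable

def decode_entries(buf):
    """Toy length-prefixed format: 4-byte big-endian size, then payload.
    Returns (complete_entries, leftover_bytes)."""
    entries, off = [], 0
    while off + 4 <= len(buf):
        (size,) = struct.unpack_from(">I", buf, off)
        if off + 4 + size > len(buf):
            break  # entry straddles the next chunk; wait for more data
        entries.append(buf[off + 4 : off + 4 + size])
        off += 4 + size
    return entries, buf[off:]

def replay_object(read_at, object_size, handle_entry):
    """Replay by prefetching bounded chunks instead of the whole object.
    read_at(offset, length) -> bytes stands in for a RADOS object read."""
    offset, pending = 0, b""
    while offset < object_size:
        length = min(CHUNK_SIZE, object_size - offset)
        pending += read_at(offset, length)
        offset += length
        # Replay every complete entry in the buffer; keep any partial tail.
        complete, pending = decode_entries(pending)
        for entry in complete:
            handle_entry(entry)

# Toy usage: replay a fake in-memory "object" of two entries.
blob = b"".join(struct.pack(">I", len(p)) + p for p in [b"entry-1", b"entry-2"])
replay_object(lambda o, l: blob[o:o + l], len(blob), print)
```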
Updated by Nathan Cutler over 7 years ago
- Copied from Bug #16223: journal should prefetch small chunks of the object during replay added
Updated by Loïc Dachary over 7 years ago
Updated by Jason Dillaman over 7 years ago
- Description updated (diff)
- Status changed from New to In Progress
Updated by Loïc Dachary over 7 years ago
teuthology-suite -k distro --priority 101 --suite rbd --subset $(expr $RANDOM % 5)/5 --suite-branch jewel --email loic@dachary.org --ceph jewel-backports-loic --machine-type smithi
- fail http://pulpito.ceph.com/loic-2016-08-17_22:11:27-rbd-jewel-backports-loic-distro-basic-smithi/
- fixed by https://github.com/ceph/ceph-qa-suite/pull/1123: 'Spec did not match any workunits: rbd/rbd_mirror_image_replay.sh'
- known bug, rbd/mirror/workloads/rbd-mirror-stress-workunit.yaml unstable on slow clusters: 'mkdir -p -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && cd -- /home/ubuntu/cephtest/mnt.cluster1.mirror/client.mirror/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3dbc21ff3f5a92fab4f7bbd69746efff546f06e4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster cluster1" CEPH_ID="mirror" PATH=$PATH:/usr/sbin CEPH_ARGS=\'\' RBD_MIRROR_USE_RBD_MIRROR=1 RBD_MIRROR_USE_EXISTING_CLUSTER=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.cluster1.client.mirror/rbd/rbd_mirror_stress.sh'
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/many.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- rbd/mirror/{base/install.yaml cluster/{2-node.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml rbd-mirror/one-per-cluster.yaml workloads/rbd-mirror-stress-workunit.yaml}
- known issue, see the "helgrind incorrectly reports lock order violated" thread on ceph-devel: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3dbc21ff3f5a92fab4f7bbd69746efff546f06e4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_FEATURES=125 VALGRIND=helgrind adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh' - saw valgrind issues
The dead jobs are due to an rbd-nbd IO hang.
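As an aside on the teuthology-suite invocation above: `--subset $(expr $RANDOM % 5)/5` schedules only one of five slices of the rbd suite, chosen at random, which keeps each verification run small while repeated runs spread coverage over the whole suite. A rough Python sketch of the arithmetic — the job names and the strided slicing are illustrative assumptions, not teuthology's exact partitioning:

```python
import random

# Rough model of `--subset i/n`: split the suite's job matrix into n
# slices and run only slice i; `$(expr $RANDOM % 5)/5` picks i at random.
jobs = [f"job-{k}" for k in range(20)]  # stand-in for the rbd suite matrix
i, n = random.randrange(5), 5           # shell: $RANDOM % 5, out of 5
print(f"--subset {i}/{n} runs:", jobs[i::n])  # every 5th job, offset by i
```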
Updated by Loïc Dachary over 7 years ago
- Status changed from In Progress to Resolved
- Target version set to v10.2.3