Fix #11441
Status: Closed
Subject: TEST_bench: 69: ./ceph tell osd.0 bench 3145728001 1048577
% Done: 0%
Source: other
Description
Seen at https://github.com/ceph/ceph/pull/4411 on commit 518ede705b739d9bbc33bb9b10200a116e4a3eb5
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.0 up in weight 1 up_from 4 up_thru 4 down_at 0 last_clean_interval [0,0) 127.0.0.1:6830/30214 127.0.0.1:6831/30214 127.0.0.1:6878/30214 127.0.0.1:6879/30214 exists,up 02877f63-391e-4959-89f4-db544839b868
run_osd: 67: status=0
run_osd: 68: break
run_osd: 72: return 0
TTEST_bench: 42: CEPH_ARGS=
TTEST_bench: 42: ./ceph-conf --show-config-value osd_bench_small_size_max_iops
TEST_bench: 42: local osd_bench_small_size_max_iops=100
TTEST_bench: 44: CEPH_ARGS=
TTEST_bench: 44: ./ceph-conf --show-config-value osd_bench_large_size_max_throughput
TEST_bench: 44: local osd_bench_large_size_max_throughput=104857600
TTEST_bench: 46: CEPH_ARGS=
TTEST_bench: 46: ./ceph-conf --show-config-value osd_bench_max_block_size
TEST_bench: 46: local osd_bench_max_block_size=67108864
TTEST_bench: 48: CEPH_ARGS=
TTEST_bench: 48: ./ceph-conf --show-config-value osd_bench_duration
TEST_bench: 48: local osd_bench_duration=30
TEST_bench: 53: ./ceph tell osd.0 bench 1024 67108865
TEST_bench: 54: grep osd_bench_max_block_size testdir/osd-bench/out
Error EINVAL: block 'size' values are capped at 65536 kB. If you wish to use a higher value, please adjust 'osd_bench_max_block_size'
TEST_bench: 59: local bsize=1024
TEST_bench: 60: local max_count=3072000
TEST_bench: 61: ./ceph tell osd.0 bench 3072001 1024
TEST_bench: 62: grep osd_bench_small_size_max_iops testdir/osd-bench/out
Error EINVAL: 'count' values greater than 3072000 for a block size of 1024 bytes, assuming 100 IOPS, for 30 seconds, can cause ill effects on osd. Please adjust 'osd_bench_small_size_max_iops' with a higher value if you wish to use a higher 'count'.
TEST_bench: 67: local bsize=1048577
TEST_bench: 68: local max_count=3145728000
TEST_bench: 69: ./ceph tell osd.0 bench 3145728001 1048577
TEST_bench: 70: grep osd_bench_large_size_max_throughput testdir/osd-bench/out
TEST_bench: 70: return 1
call_TEST_functions: 100: return 1
run: 31: return 1
main: 120: code=1
main: 122: teardown testdir/osd-bench
Updated by Loïc Dachary about 9 years ago
- Status changed from In Progress to Fix Under Review
This really is a won't fix, but the test should be more verbose to clarify the false negative. The command failed for a reason unrelated to the test, and the output presumably contains information about the failure. But the grep only fails with "did not find what was expected" and hides that output. https://github.com/ceph/ceph/pull/4421 will help when/if that happens again.
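A hypothetical sketch of the kind of helper that would make such a failure diagnosable (not the actual change in PR 4421): dump the command output when the expected pattern is absent, instead of failing silently. A temp file stands in for testdir/osd-bench/out here:

```shell
# Simulate the test's output file with an unrelated error in it.
out=$(mktemp)
echo "Error EINVAL: some unrelated failure" > "$out"

# grep_or_dump: like the plain grep assertion in the test, but on a miss
# it prints the full file to stderr before returning failure, so an
# unrelated error is visible in the test log.
grep_or_dump() {
    pattern=$1 file=$2
    if ! grep -q "$pattern" "$file"; then
        echo "pattern '$pattern' not found; full output follows:" >&2
        cat "$file" >&2
        return 1
    fi
}

grep_or_dump osd_bench_large_size_max_throughput "$out" || true
rm -f "$out"
```

With this shape, the false negative above would have shown the actual EINVAL text instead of a bare "return 1".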
Updated by Loïc Dachary about 9 years ago
- Status changed from Fix Under Review to Need More Info
Now waiting for this to happen again.
Updated by Loïc Dachary almost 9 years ago
- Status changed from Need More Info to Rejected
Let's reopen if it shows again.
Updated by Loïc Dachary over 8 years ago
- Copied from Fix #14556: man: document listwatchers cmd in "rados" manpage added
Updated by Loïc Dachary over 8 years ago
- Copied from deleted (Fix #14556: man: document listwatchers cmd in "rados" manpage)