Bug #20909 (closed)
Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
Status:
Can't reproduce
Priority:
High
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
<pre>
2017-08-04T08:19:51.319 INFO:tasks.workunit.client.0.smithi003.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:267: test_tiering_agent: local slow=slow_eviction
2017-08-04T08:19:51.319 INFO:tasks.workunit.client.0.smithi003.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:268: test_tiering_agent: local fast=fast_eviction
2017-08-04T08:19:51.319 INFO:tasks.workunit.client.0.smithi003.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:269: test_tiering_agent: ceph osd pool create slow_eviction 1 1
2017-08-04T08:19:51.320 INFO:tasks.workunit.client.0.smithi003.stdout:{
2017-08-04T08:19:51.320 INFO:tasks.workunit.client.0.smithi003.stdout: "success": "filestore_merge_threshold = '-1' (not observed, change may require restart) "
2017-08-04T08:19:51.320 INFO:tasks.workunit.client.0.smithi003.stdout:}
2017-08-04T08:19:52.207 INFO:tasks.workunit.client.0.smithi003.stderr:pool 'slow_eviction' already exists
2017-08-04T08:19:52.224 INFO:tasks.workunit.client.0.smithi003.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:270: test_tiering_agent: ceph osd pool application enable slow_eviction rados
2017-08-04T08:19:54.898 INFO:tasks.workunit.client.0.smithi003.stderr:enabled application 'rados' on pool 'slow_eviction'
2017-08-04T08:19:54.918 INFO:tasks.workunit.client.0.smithi003.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:271: test_tiering_agent: ceph osd pool create fast_eviction 1 1
2017-08-04T08:20:00.224 INFO:tasks.ceph.mon.a.smithi003.stderr:: timed out (5 sec)
2017-08-04T08:20:00.241 INFO:tasks.workunit.client.0.smithi003.stderr:Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
2017-08-04T08:20:00.350 INFO:tasks.workunit.client.0.smithi003.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1: test_tiering_agent: rm -fr /tmp/cephtool.Zl8
2017-08-04T08:20:00.350 INFO:tasks.workunit:Stopping ['cephtool'] on client.0...
</pre>
/a/sage-2017-08-04_05:26:53-rados-wip-sage-testing2-20170803b-distro-basic-smithi/1482101
rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml}
Updated by Sage Weil over 6 years ago
<pre>
2017-08-11T20:25:12.174 INFO:tasks.workunit.client.0.smithi081.stderr:+ ceph osd pool create fooo 123 123 erasure default
2017-08-11T20:25:12.191 INFO:tasks.workunit.client.0.smithi081.stderr:2017-08-11 20:25:12.109369 7fb4413ac700 -1 WARNING: all dangerous and experimental features are enabled.
2017-08-11T20:25:12.208 INFO:tasks.workunit.client.0.smithi081.stderr:2017-08-11 20:25:12.141612 7fb4413ac700 -1 WARNING: all dangerous and experimental features are enabled.
2017-08-11T20:25:17.252 INFO:tasks.ceph.mon.a.smithi081.stderr:: timed out (5 sec)
2017-08-11T20:25:17.396 INFO:tasks.workunit.client.0.smithi081.stderr:Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
</pre>
/a/sage-2017-08-11_17:22:37-rados-wip-sage-testing-20170811a-distro-basic-smithi/1512023
Updated by Kefu Chai over 6 years ago
/a//kchai-2017-08-20_09:42:12-rados-wip-kefu-testing-distro-basic-mira/1545387/
Updated by Neha Ojha over 6 years ago
- Status changed from 12 to Fix Under Review
Updated by Kefu Chai over 6 years ago
Updated by huang jun over 6 years ago
With https://github.com/ceph/ceph/pull/17179, we also hit this error:
mon.b@0(leader).osd e15 tester.test_with_fork returns -110: timed out during smoke test (3 seconds)
Updated by Kefu Chai over 6 years ago
<pre>
2017-10-11T10:32:56.454 INFO:tasks.workunit.client.0.mira042.stderr:+ ceph osd pool create foooo 123
2017-10-11T10:32:56.549 INFO:tasks.workunit.client.0.mira042.stderr:2017-10-11 10:32:56.542 7faff5d9d700 -1 WARNING: all dangerous and experimental features are enabled.
2017-10-11T10:32:56.566 INFO:tasks.workunit.client.0.mira042.stderr:2017-10-11 10:32:56.558 7faff5d9d700 -1 WARNING: all dangerous and experimental features are enabled.
2017-10-11T10:33:01.776 INFO:tasks.ceph.mon.a.mira042.stderr:: timed out (5 sec)
2017-10-11T10:33:01.803 INFO:tasks.workunit.client.0.mira042.stderr:Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
</pre>
http://pulpito.ceph.com/kchai-2017-10-11_05:43:26-rados-wip-kefu-testing-2017-10-11-1203-distro-basic-mira/1725753/
Updated by Sage Weil over 6 years ago
- Status changed from Fix Under Review to Can't reproduce
Updated by Josh Durgin over 6 years ago
Haven't seen this since the mons were moved to NVMe on smithi.
Updated by Sage Weil about 5 years ago
- Status changed from Can't reproduce to 12
- Assignee deleted (Neha Ojha)
<pre>
2019-02-16T20:53:04.003 INFO:tasks.workunit.client.0.smithi129.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:306: test_tiering_agent: ceph osd pool create fast_eviction 1 1
2019-02-16T20:53:09.367 INFO:tasks.ceph.mon.a.smithi129.stderr:: timed out (5 sec)
2019-02-16T20:53:09.413 INFO:tasks.workunit.client.0.smithi129.stderr:Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
2019-02-16T20:53:09.429 INFO:tasks.workunit.client.0.smithi129.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1: test_tiering_agent: rm -fr /tmp/cephtool.Pm4
2019-02-16T20:53:09.432 DEBUG:teuthology.orchestra.run:got remote process result: 110
</pre>
/a/sage-2019-02-16_18:46:49-rados-wip-sage-testing-2019-02-16-0946-distro-basic-smithi/3601799
Still pops up periodically.
Updated by Sage Weil about 5 years ago
<pre>
2019-02-20T04:51:56.493 INFO:tasks.workunit.client.0.smithi055.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:933: test_mon_mds: rm /tmp/cephtool.ilp/mdsmap.14349
2019-02-20T04:51:56.493 INFO:tasks.workunit.client.0.smithi055.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:935: test_mon_mds: ceph osd pool create data2 10
2019-02-20T04:51:57.565 INFO:tasks.workunit.client.0.smithi055.stderr:pool 'data2' already exists
2019-02-20T04:51:57.576 INFO:tasks.workunit.client.0.smithi055.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:936: test_mon_mds: ceph osd pool create data3 10
2019-02-20T04:52:02.857 INFO:tasks.ceph.mon.a.smithi055.stderr:: timed out (5 sec)
2019-02-20T04:52:02.879 INFO:tasks.workunit.client.0.smithi055.stderr:Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
2019-02-20T04:52:02.905 INFO:tasks.workunit.client.0.smithi055.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1: test_mon_mds: rm -fr /tmp/cephtool.ilp
2019-02-20T04:52:02.906 DEBUG:teuthology.orchestra.run:got remote process result: 110
</pre>
/a/sage-2019-02-19_23:03:51-rados-wip-sage3-testing-2019-02-19-1008-distro-basic-smithi/3614198
Updated by Josh Durgin over 4 years ago
- Status changed from 12 to Can't reproduce
Updated by Greg Farnum over 4 years ago
We've made some improvements and fixed some bad inefficiencies in the CRUSH code and map update handling.
Updated by Deepika Upadhyay over 3 years ago
<pre>
2020-09-24T00:32:11.586 INFO:tasks.workunit.client.0.smithi183.stderr:Error EINVAL: invalid command
2020-09-24T00:32:11.595 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:49: expect_false: return 0
2020-09-24T00:32:11.595 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:984: test_mon_mds: ceph fs set cephfs allow_new_snaps true
2020-09-24T00:32:13.124 INFO:tasks.workunit.client.0.smithi183.stderr:enabled new snapshots
2020-09-24T00:32:13.136 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:985: test_mon_mds: ceph fs set cephfs allow_new_snaps 0
2020-09-24T00:32:15.348 INFO:tasks.workunit.client.0.smithi183.stderr:disabled new snapshots
2020-09-24T00:32:15.360 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:986: test_mon_mds: ceph fs set cephfs allow_new_snaps false
2020-09-24T00:32:17.580 INFO:tasks.workunit.client.0.smithi183.stderr:disabled new snapshots
2020-09-24T00:32:17.594 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:987: test_mon_mds: ceph fs set cephfs allow_new_snaps no
2020-09-24T00:32:19.809 INFO:tasks.workunit.client.0.smithi183.stderr:disabled new snapshots
2020-09-24T00:32:19.823 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:988: test_mon_mds: expect_false ceph fs set cephfs allow_new_snaps taco
2020-09-24T00:32:19.823 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:48: expect_false: set -x
2020-09-24T00:32:19.824 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:49: expect_false: ceph fs set cephfs allow_new_snaps taco
2020-09-24T00:32:20.104 INFO:tasks.workunit.client.0.smithi183.stderr:Error EINVAL: value must be false|no|0 or true|yes|1
2020-09-24T00:32:20.119 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:49: expect_false: return 0
2020-09-24T00:32:20.120 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:992: test_mon_mds: ceph osd pool create mds-ec-pool 10 10 erasure
2020-09-24T00:32:21.151 INFO:tasks.workunit.client.0.smithi183.stderr:pool 'mds-ec-pool' already exists
2020-09-24T00:32:21.165 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:993: test_mon_mds: set +e
2020-09-24T00:32:21.166 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:994: test_mon_mds: ceph fs add_data_pool cephfs mds-ec-pool
2020-09-24T00:32:21.501 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:995: test_mon_mds: check_response erasure-code 22 22
2020-09-24T00:32:21.502 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:116: check_response: expected_string=erasure-code
2020-09-24T00:32:21.502 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:117: check_response: retcode=22
2020-09-24T00:32:21.502 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:118: check_response: expected_retcode=22
2020-09-24T00:32:21.502 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:119: check_response: '[' 22 -a 22 '!=' 22 ']'
2020-09-24T00:32:21.503 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:124: check_response: grep --quiet -- erasure-code /tmp/cephtool.m8Z/test_invalid.xKt
2020-09-24T00:32:21.503 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:996: test_mon_mds: set -e
2020-09-24T00:32:21.504 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:997: test_mon_mds: ceph osd dump
2020-09-24T00:32:21.504 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:997: test_mon_mds: grep 'pool.* '\''mds-ec-pool'
2020-09-24T00:32:21.505 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:997: test_mon_mds: awk '{print $2;}'
2020-09-24T00:32:21.916 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:997: test_mon_mds: ec_poolnum=44
2020-09-24T00:32:21.917 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:998: test_mon_mds: ceph osd dump
2020-09-24T00:32:21.918 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:998: test_mon_mds: grep 'pool.* '\''fs_data'
2020-09-24T00:32:21.918 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:998: test_mon_mds: awk '{print $2;}'
2020-09-24T00:32:22.349 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:998: test_mon_mds: data_poolnum=40
2020-09-24T00:32:22.351 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:999: test_mon_mds: ceph osd dump
2020-09-24T00:32:22.351 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:999: test_mon_mds: grep 'pool.* '\''fs_metadata'
2020-09-24T00:32:22.351 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:999: test_mon_mds: awk '{print $2;}'
2020-09-24T00:32:22.778 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:999: test_mon_mds: metadata_poolnum=41
2020-09-24T00:32:22.779 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1001: test_mon_mds: fail_all_mds cephfs
2020-09-24T00:32:22.779 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:822: fail_all_mds: fs_name=cephfs
2020-09-24T00:32:22.779 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:823: fail_all_mds: ceph fs set cephfs cluster_down true
2020-09-24T00:32:25.297 INFO:tasks.workunit.client.0.smithi183.stderr:cephfs marked not joinable; MDS cannot join as newly active. WARNING: cluster_down flag is deprecated and will be removed in a future version. Please use "joinable".
2020-09-24T00:32:25.312 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:824: fail_all_mds: get_mds_gids cephfs
2020-09-24T00:32:25.312 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:816: get_mds_gids: fs_name=cephfs
2020-09-24T00:32:25.313 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:817: get_mds_gids: ceph fs get cephfs --format=json
2020-09-24T00:32:25.313 INFO:tasks.workunit.client.0.smithi183.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:817: get_mds_gids: python -c 'import json; import sys; print '\'' '\''.join([m['\''gid'\''].__str__() for m in json.load(sys.stdin)['\''mdsmap'\'']['\''info'\''].values()])'
2020-09-24T00:32:25.710 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:824: fail_all_mds: mds_gids=
2020-09-24T00:32:25.711 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:828: fail_all_mds: check_mds_active cephfs
2020-09-24T00:32:25.711 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:795: check_mds_active: fs_name=cephfs
2020-09-24T00:32:25.711 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:796: check_mds_active: ceph fs get cephfs
2020-09-24T00:32:25.711 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:796: check_mds_active: grep active
2020-09-24T00:32:26.113 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1003: test_mon_mds: set +e
2020-09-24T00:32:26.113 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1005: test_mon_mds: expect_false ceph mds rmfailed 0
2020-09-24T00:32:26.113 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:48: expect_false: set -x
2020-09-24T00:32:26.113 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:49: expect_false: ceph mds rmfailed 0
2020-09-24T00:32:26.427 INFO:tasks.workunit.client.0.smithi183.stderr:Error EPERM: WARNING: this can make your filesystem inaccessible! Add --yes-i-really-mean-it if you are sure you wish to continue.
2020-09-24T00:32:26.440 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:49: expect_false: return 0
2020-09-24T00:32:26.440 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1006: test_mon_mds: ceph mds rmfailed 0 --yes-i-really-mean-it
2020-09-24T00:32:26.779 INFO:tasks.workunit.client.0.smithi183.stderr:Error EINVAL: Rank '0' not foundinvalid role '0'
2020-09-24T00:32:26.792 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1007: test_mon_mds: set -e
2020-09-24T00:32:26.793 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1010: test_mon_mds: expect_false ceph fs new cephfs 41 40 --yes-i-really-mean-it
2020-09-24T00:32:27.101 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1013: test_mon_mds: ceph fs reset cephfs --yes-i-really-mean-it
2020-09-24T00:32:29.672 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1016: test_mon_mds: ceph osd pool create fs_metadata2 10
2020-09-24T00:32:30.913 INFO:tasks.workunit.client.0.smithi183.stderr:pool 'fs_metadata2' already exists
2020-09-24T00:32:30.926 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1017: test_mon_mds: ceph osd pool create fs_data2 10
2020-09-24T00:32:36.243 INFO:tasks.ceph.mon.a.smithi183.stderr:: timed out (5 sec)
2020-09-24T00:32:36.263 INFO:tasks.workunit.client.0.smithi183.stderr:Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
2020-09-24T00:32:36.284 INFO:tasks.workunit.client.0.smithi183.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1: test_mon_mds: rm -fr /tmp/cephtool.m8Z
2020-09-24T00:32:36.287 DEBUG:teuthology.orchestra.run:got remote process result: 110
</pre>
Updated by Deepika Upadhyay about 3 years ago
- Backport set to nautilus
Not seen in octopus and pacific so far, but still pops up sometimes in nautilus:
<pre>
2021-03-02T21:14:40.417 INFO:tasks.ceph.mon.a.smithi152.stderr:: timed out (5 sec)
2021-03-02T21:14:40.463 INFO:tasks.workunit.client.0.smithi152.stderr:Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
2021-03-02T21:14:40.487 DEBUG:teuthology.orchestra.run:got remote process result: 110
</pre>
/ceph/teuthology-archive/yuriw-2021-03-01_23:47:21-rados-wip-yuri2-testing-2021-03-01-1417-nautilus-distro-basic-smithi/5925989/teuthology.log