Bug #14391 (closed): reached maximum tries (41) after waiting for 246 seconds

Added by 云辉 陈 over 8 years ago. Updated over 7 years ago.

Status: Rejected
Priority: Normal
Assignee:
Category: -
% Done: 0%
Source: other
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite: rados
Crash signature (v1):
Crash signature (v2):

Description

os_type: ubuntu
overrides:
  admin_socket:
    branch: master
  ceph:
    conf:
      client.0:
        debug ms: 1
        debug objecter: 20
        debug rados: 20
      global:
        ms inject delay max: 1
        ms inject delay probability: 0.005
        ms inject delay type: osd
        ms inject internal delays: 0.002
        ms inject socket failures: 2500
        osd_max_pg_log_entries: 200
        osd_min_pg_log_entries: 100
        osd_pool_default_min_size: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
        osd sloppy crc: true
    fs: xfs
    log-whitelist:
    - slow request
    sha1: 6a8648811103835d57927149b419b0e037cc53c6
  ceph-deploy:
    branch:
      dev-commit: 6a8648811103835d57927149b419b0e037cc53c6
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 6a8648811103835d57927149b419b0e037cc53c6
  workunit:
    sha1: 6a8648811103835d57927149b419b0e037cc53c6
owner: rados.9.14.chenyunhui
priority: 1000
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
  - client.0
- - mon.b
  - osd.3
  - osd.4
  - osd.5
  - client.1
sha1: 6a8648811103835d57927149b419b0e037cc53c6
suite: rados
suite_branch: master
suite_path: /home/teuthworker/src/ceph-qa-suite_master
tasks:
- ansible.cephlab: null
- clock.check: null
- install: null
- ceph:
    conf:
      osd:
        osd debug reject backfill probability: 0.3
        osd max backfills: 1
        osd scrub max interval: 120
        osd scrub min interval: 60
    log-whitelist:
    - wrongly marked me down
    - objects unfound and apparently lost
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    timeout: 1200
- radosbench:
    clients:
    - client.0
    time: 50
teuthology_branch: master
tube: multi
verbose: false
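
The job runs a 50-second rados bench from client.0 (the radosbench task above) while thrashosds takes OSDs in and out underneath it, with messenger delay and socket-failure injection enabled. For orientation, the bench workload amounts to roughly the following (pool name taken from the log below; any extra flags the task adds, such as object size or --no-cleanup, are omitted, so treat this as an illustration rather than the task's exact command line):

import subprocess

# 50-second write benchmark against the job's unique pool, per "time: 50" above.
subprocess.check_call([
    "rados", "-p", "unique_pool_0",
    "bench", "50", "write",
])

The relevant part of the teuthology log follows.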
2016-01-15T18:04:24.698 INFO:tasks.thrashosds.thrasher:Testing ceph-objectstore-tool on down osd
2016-01-15T18:04:24.699 INFO:teuthology.orchestra.run.ott003:Running: 'sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --journal-path /var/lib/ceph/osd/ceph-2/journal --log-file=/var/log/ceph/objectstore_tool.\\$pid.log --op list-pgs'
2016-01-15T18:04:25.485 INFO:teuthology.orchestra.run.ott003:Running: 'sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --journal-path /var/lib/ceph/osd/ceph-2/journal --log-file=/var/log/ceph/objectstore_tool.\\$pid.log --op export --pgid 0.19 --file /home/ubuntu/cephtest/data/exp.0.19.2'
2016-01-15T18:04:25.663 INFO:teuthology.orchestra.run.ott003.stderr:Exporting 0.19
2016-01-15T18:04:25.672 INFO:teuthology.orchestra.run.ott003.stderr:Export successful
2016-01-15T18:04:25.673 INFO:teuthology.orchestra.run.ott003:Running: 'sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --journal-path /var/lib/ceph/osd/ceph-2/journal --log-file=/var/log/ceph/objectstore_tool.\\$pid.log --op remove --pgid 0.19'
2016-01-15T18:04:25.837 INFO:teuthology.orchestra.run.ott003.stdout:finish_remove_pgs 1.6_TEMP clearing temp
2016-01-15T18:04:25.846 INFO:teuthology.orchestra.run.ott003.stdout:finish_remove_pgs 1.0_TEMP clearing temp
2016-01-15T18:04:25.851 INFO:teuthology.orchestra.run.ott003.stdout:finish_remove_pgs 1.b_TEMP clearing temp
2016-01-15T18:04:25.859 INFO:teuthology.orchestra.run.ott003.stdout:finish_remove_pgs 1.18_TEMP clearing temp
2016-01-15T18:04:25.864 INFO:teuthology.orchestra.run.ott003.stdout: marking collection for removal
2016-01-15T18:04:25.864 INFO:teuthology.orchestra.run.ott003.stdout:setting '_remove' omap key
2016-01-15T18:04:25.883 INFO:teuthology.orchestra.run.ott003.stdout:finish_remove_pgs removing 0.19_head pgid is 0.19
2016-01-15T18:04:25.883 INFO:teuthology.orchestra.run.ott003.stdout:remove_coll 0.19_head
2016-01-15T18:04:25.883 INFO:teuthology.orchestra.run.ott003.stdout:remove 19//head//0
2016-01-15T18:04:25.896 INFO:teuthology.orchestra.run.ott003.stdout:Remove successful
2016-01-15T18:04:25.939 INFO:teuthology.orchestra.run.ott003:Running: 'sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 --journal-path /var/lib/ceph/osd/ceph-2/journal --log-file=/var/log/ceph/objectstore_tool.\\$pid.log --op import --file /home/ubuntu/cephtest/data/exp.0.19.2'
2016-01-15T18:04:26.175 INFO:teuthology.orchestra.run.ott003.stdout:Importing pgid 0.19
2016-01-15T18:04:26.183 INFO:teuthology.orchestra.run.ott003.stdout:Import successful
2016-01-15T18:04:26.214 INFO:teuthology.orchestra.run.ott003:Running: 'rm -f /home/ubuntu/cephtest/data/exp.0.19.2'
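The block above is the thrasher exercising ceph-objectstore-tool on the down osd.2: it lists the PGs, exports pg 0.19 to a file, removes it from the store, then imports it back and deletes the export. Condensed to its essentials (sudo, adjust-ulimits and the --log-file argument dropped), the sequence is roughly:

import subprocess

OSD = "/var/lib/ceph/osd/ceph-2"
TOOL = ["ceph-objectstore-tool", "--data-path", OSD,
        "--journal-path", OSD + "/journal"]
EXPORT = "/home/ubuntu/cephtest/data/exp.0.19.2"

subprocess.check_call(TOOL + ["--op", "list-pgs"])                                     # enumerate PGs on the OSD
subprocess.check_call(TOOL + ["--op", "export", "--pgid", "0.19", "--file", EXPORT])   # dump pg 0.19
subprocess.check_call(TOOL + ["--op", "remove", "--pgid", "0.19"])                     # delete it from the store
subprocess.check_call(TOOL + ["--op", "import", "--file", EXPORT])                     # re-import it
subprocess.check_call(["rm", "-f", EXPORT])                                            # clean up the export file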
2016-01-15T18:04:31.245 INFO:tasks.thrashosds.thrasher:in_osds: [1, 3, 5, 4, 2] out_osds: [0] dead_osds: [2] live_osds: [0, 3, 5, 4, 1]
2016-01-15T18:04:31.245 INFO:tasks.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2016-01-15T18:04:31.245 INFO:tasks.thrashosds.thrasher:Removing osd 5, in_osds are: [1, 3, 5, 4, 2]
2016-01-15T18:04:31.246 INFO:teuthology.orchestra.run.ott003:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd out 5'
2016-01-15T18:04:31.628 INFO:teuthology.orchestra.run.ott003.stderr:marked out osd.5.
2016-01-15T18:04:36.553 INFO:tasks.ceph.osd.3.plana010.stderr:2016-01-15 18:04:36.545977 7fa3bb5ae700 -1 osd.3 57 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:16.336650 front 2016-01-15 18:04:16.336650 (cutoff 2016-01-15 18:04:16.545974)
2016-01-15T18:04:36.641 INFO:tasks.thrashosds.thrasher:in_osds: [1, 3, 4, 2] out_osds: [0, 5] dead_osds: [2] live_osds: [0, 3, 5, 4, 1]
2016-01-15T18:04:36.641 INFO:tasks.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2016-01-15T18:04:36.641 INFO:tasks.thrashosds.thrasher:Growing pool unique_pool_0
2016-01-15T18:04:36.642 INFO:teuthology.orchestra.run.ott003:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format=json'
2016-01-15T18:04:36.838 INFO:teuthology.orchestra.run.ott003.stderr:dumped all in format json
2016-01-15T18:04:36.949 INFO:tasks.ceph.ceph_manager:increase pool size by 10
2016-01-15T18:04:36.949 INFO:teuthology.orchestra.run.ott003:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool set unique_pool_0 pg_num 36'
2016-01-15T18:04:37.268 INFO:tasks.ceph.osd.4.plana010.stderr:2016-01-15 18:04:37.256735 7f6cb49c5700 -1 osd.4 57 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:15.850837 front 2016-01-15 18:04:15.850837 (cutoff 2016-01-15 18:04:17.256732)
2016-01-15T18:04:37.983 INFO:tasks.ceph.osd.5.plana010.stderr:2016-01-15 18:04:37.970913 7f1f858d2700 -1 osd.5 57 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:16.866259 front 2016-01-15 18:04:16.866259 (cutoff 2016-01-15 18:04:17.970912)
2016-01-15T18:04:38.148 INFO:teuthology.orchestra.run.ott003.stderr:set pool 1 pg_num to 36
2016-01-15T18:04:39.105 INFO:tasks.ceph.osd.1.ott003.stderr:2016-01-15 18:04:39.088974 7f22aeceb700 -1 osd.1 58 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:18.289336 front 2016-01-15 18:04:18.289336 (cutoff 2016-01-15 18:04:19.088973)
2016-01-15T18:04:39.498 INFO:tasks.ceph.osd.0.ott003.stderr:2016-01-15 18:04:39.483484 7fc7c08d6700 -1 osd.0 59 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:18.580130 front 2016-01-15 18:04:18.580130 (cutoff 2016-01-15 18:04:19.483482)
2016-01-15T18:04:39.571 INFO:tasks.ceph.osd.4.plana010.stderr:2016-01-15 18:04:39.558235 7f6cb49c5700 -1 osd.4 59 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:15.850837 front 2016-01-15 18:04:15.850837 (cutoff 2016-01-15 18:04:19.558233)
2016-01-15T18:04:40.215 INFO:tasks.ceph.osd.1.ott003.stderr:2016-01-15 18:04:40.193279 7f229735d700 -1 osd.1 59 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:18.289336 front 2016-01-15 18:04:18.289336 (cutoff 2016-01-15 18:04:20.193277)
2016-01-15T18:04:40.240 INFO:tasks.ceph.osd.3.plana010.stderr:2016-01-15 18:04:40.224331 7fa3d2f3c700 -1 osd.3 59 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:16.336650 front 2016-01-15 18:04:16.336650 (cutoff 2016-01-15 18:04:20.224329)
2016-01-15T18:04:40.240 INFO:tasks.ceph.osd.4.plana010.stderr:2016-01-15 18:04:40.225059 7f6ccc353700 -1 osd.4 59 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:15.850837 front 2016-01-15 18:04:15.850837 (cutoff 2016-01-15 18:04:20.225058)
2016-01-15T18:04:40.251 INFO:tasks.ceph.osd.5.plana010.stderr:2016-01-15 18:04:40.235585 7f1f9d260700 -1 osd.5 59 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:16.866259 front 2016-01-15 18:04:16.866259 (cutoff 2016-01-15 18:04:20.235582)
2016-01-15T18:04:40.375 INFO:tasks.ceph.osd.0.ott003.stderr:2016-01-15 18:04:40.353453 7fc7d8264700 -1 osd.0 59 heartbeat_check: no reply from osd.2 since back 2016-01-15 18:04:18.580130 front 2016-01-15 18:04:18.580130 (cutoff 2016-01-15 18:04:20.353451)
2016-01-15T18:04:43.164 INFO:tasks.thrashosds.thrasher:in_osds: [1, 3, 4, 2] out_osds: [0, 5] dead_osds: [2] live_osds: [0, 3, 5, 4, 1]
2016-01-15T18:04:43.164 INFO:tasks.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2016-01-15T18:04:43.164 INFO:tasks.thrashosds.thrasher:Growing pool unique_pool_0
2016-01-15T18:04:43.165 INFO:teuthology.orchestra.run.ott003:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format=json'
2016-01-15T18:04:43.368 INFO:teuthology.orchestra.run.ott003.stderr:dumped all in format json
2016-01-15T18:04:48.504 INFO:tasks.thrashosds.thrasher:in_osds: [1, 3, 4, 2] out_osds: [0, 5] dead_osds: [2] live_osds: [0, 3, 5, 4, 1]
2016-01-15T18:04:48.504 INFO:tasks.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2016-01-15T18:04:48.504 INFO:tasks.thrashosds.thrasher:Removing osd 4, in_osds are: [1, 3, 4, 2]
2016-01-15T18:04:48.504 INFO:teuthology.orchestra.run.ott003:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd out 4'
2016-01-15T18:04:49.045 INFO:teuthology.orchestra.run.ott003.stderr:marked out osd.4.
2016-01-15T18:04:54.065 INFO:tasks.thrashosds.thrasher:in_osds: [1, 3, 2] out_osds: [0, 5, 4] dead_osds: [2] live_osds: [0, 3, 5, 4, 1]
2016-01-15T18:04:54.065 INFO:tasks.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2016-01-15T18:04:54.065 INFO:tasks.thrashosds.thrasher:inject_pause on 0
2016-01-15T18:04:54.065 INFO:tasks.thrashosds.thrasher:Testing filestore_inject_stall pause injection for duration 3
2016-01-15T18:04:54.065 INFO:tasks.thrashosds.thrasher:Checking after 0, should_be_down=False
2016-01-15T18:04:54.065 INFO:teuthology.orchestra.run.ott003:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config set filestore_inject_stall 3'
2016-01-15T18:04:59.167 INFO:tasks.thrashosds.thrasher:in_osds: [1, 3, 2] out_osds: [0, 5, 4] dead_osds: [2] live_osds: [0, 3, 5, 4, 1]
2016-01-15T18:04:59.168 INFO:tasks.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2016-01-15T18:04:59.168 INFO:tasks.thrashosds.thrasher:Reweighting osd 3 to 0.202319357123
2016-01-15T18:04:59.168 INFO:teuthology.orchestra.run.ott003:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd reweight 3 0.202319357123'
2016-01-15T18:05:00.877 INFO:teuthology.orchestra.run.ott003.stderr:reweighted osd.3 to 0.202319 (33cb)
2016-01-15T18:05:05.901 INFO:tasks.thrashosds.thrasher:in_osds: [1, 3, 2] out_osds: [0, 5, 4] dead_osds: [2] live_osds: [0, 3, 5, 4, 1]
2016-01-15T18:05:05.901 INFO:tasks.thrashosds.thrasher:choose_action: min_in 3 min_out 0 min_live 2 min_dead 0
2016-01-15T18:05:05.901 INFO:tasks.thrashosds.thrasher:Reviving osd 2
2016-01-15T18:05:05.902 INFO:tasks.ceph.osd.2:Restarting daemon
2016-01-15T18:05:05.902 INFO:teuthology.orchestra.run.ott003:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 2'
2016-01-15T18:05:05.957 INFO:tasks.ceph.osd.2:Started
2016-01-15T18:05:05.957 INFO:teuthology.orchestra.run.ott003:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok dump_ops_in_flight'
2016-01-15T18:05:05.996 INFO:tasks.ceph.osd.2.ott003.stdout:starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
2016-01-15T18:05:06.080 INFO:teuthology.orchestra.run.ott003.stderr:no valid command found; 10 closest matches:
2016-01-15T18:05:06.080 INFO:teuthology.orchestra.run.ott003.stderr:config set <var> <val> [<val>...]
2016-01-15T18:05:06.080 INFO:teuthology.orchestra.run.ott003.stderr:version
2016-01-15T18:05:06.080 INFO:teuthology.orchestra.run.ott003.stderr:perfcounters_schema
2016-01-15T18:05:06.081 INFO:teuthology.orchestra.run.ott003.stderr:git_version
2016-01-15T18:05:06.093 INFO:teuthology.orchestra.run.ott003.stderr:help
2016-01-15T18:05:06.093 INFO:teuthology.orchestra.run.ott003.stderr:config show
2016-01-15T18:05:06.093 INFO:teuthology.orchestra.run.ott003.stderr:get_command_descriptions
2016-01-15T18:05:06.093 INFO:teuthology.orchestra.run.ott003.stderr:config get <var>
2016-01-15T18:05:06.093 INFO:teuthology.orchestra.run.ott003.stderr:perf schema
2016-01-15T18:05:06.093 INFO:teuthology.orchestra.run.ott003.stderr:2
2016-01-15T18:05:06.093 INFO:teuthology.orchestra.run.ott003.stderr:admin_socket: invalid command
2016-01-15T18:05:06.094 INFO:tasks.ceph.ceph_manager:waiting on admin_socket for osd-2, ['dump_ops_in_flight']
2016-01-15T18:05:06.137 INFO:tasks.ceph.osd.2.ott003.stderr:2016-01-15 18:05:06.130793 7f4d81bb0900 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2016-01-15T18:05:06.177 INFO:tasks.ceph.osd.2.ott003.stderr:2016-01-15 18:05:06.169823 7f4d81bb0900 -1 osd.2 55 log_to_monitors {default=true}
2016-01-15T18:05:11.095 INFO:teuthology.orchestra.run.ott003:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok dump_ops_in_flight'
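The "no valid command found" error above is not fatal by itself: the thrasher asked the freshly restarted osd.2 for dump_ops_in_flight before the daemon had registered that command on its admin socket, and ceph_manager simply retries, as in the second attempt above. A rough sketch of that kind of wait (hypothetical helper, not the ceph_manager code):

import subprocess
import time

def wait_for_admin_command(asok, command, sleep=5, tries=60):
    """Poll an admin socket until `command` is accepted (hypothetical helper)."""
    for _ in range(tries):
        if subprocess.call(["ceph", "--admin-daemon", asok, command]) == 0:
            return
        time.sleep(sleep)
    raise RuntimeError("%s never accepted %s" % (asok, command))

wait_for_admin_command("/var/run/ceph/ceph-osd.2.asok", "dump_ops_in_flight")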
2016-01-15T18:05:11.721 ERROR:teuthology.run_tasks:Manager failed: radosbench
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 125, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/home/teuthworker/src/ceph-qa-suite_master/tasks/radosbench.py", line 97, in task
    run.wait(radosbench.itervalues(), timeout=timeout)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 395, in wait
    check_time()
  File "/home/teuthworker/src/teuthology_master/teuthology/contextutil.py", line 134, in __call__
    raise MaxWhileTries(error_msg)
MaxWhileTries: reached maximum tries (41) after waiting for 246 seconds
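
The actual failure is in the radosbench task: while waiting for the rados bench client to finish, teuthology's bounded wait exhausted its retry budget and raised MaxWhileTries. The numbers in the message are consistent with a poll interval of about 6 seconds (41 tries x 6 s = 246 s). A minimal sketch of that kind of bounded wait, with hypothetical names rather than the real contextutil/run.wait implementation:

import time

class MaxWhileTries(Exception):
    """Raised when a bounded poll loop gives up."""

def wait_until(done, sleep=6, tries=41):
    """Poll done() up to `tries` times, sleeping `sleep` seconds between checks;
    give up with a message like the one in this failure (hypothetical sketch)."""
    for _ in range(tries):
        if done():
            return
        time.sleep(sleep)
    raise MaxWhileTries(
        "reached maximum tries (%d) after waiting for %d seconds"
        % (tries, sleep * tries))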

#1 Updated by Sage Weil over 7 years ago

  • Status changed from New to Rejected