Bug #10601

closed

thrashosd shutdown: No JSON object could be decoded

Added by David Zafman over 9 years ago. Updated about 9 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
teuthology
Target version:
-
% Done:
0%

Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Found in

dzafman-2015-01-20_13:03:28-rados:thrash-wip-zafman-testing---basic-multi/715179
dzafman-2015-01-20_13:03:28-rados:thrash-wip-zafman-testing---basic-multi/715051

Not sure what is causing this.

2015-01-21T01:53:42.873 INFO:teuthology.orchestra.run.plana80:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rados rmpool unique_pool_0 unique_pool_0 --yes-i-really-really-mean-it'
2015-01-21T01:53:43.323 INFO:teuthology.orchestra.run.plana80.stdout:successfully deleted pool unique_pool_0
2015-01-21T01:53:43.326 INFO:tasks.thrashosds:joining thrashosds
2015-01-21T01:53:43.326 ERROR:teuthology.run_tasks:Manager failed: thrashosds
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 122, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_wip-9780-9781/tasks/thrashosds.py", line 174, in task
    thrash_proc.do_join()
  File "/var/lib/teuthworker/src/ceph-qa-suite_wip-9780-9781/tasks/ceph_manager.py", line 415, in do_join
    self.thread.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 308, in get
    raise self._exception
ValueError: No JSON object could be decoded
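
This exact message is what Python 2's json module raises when json.loads() is handed empty or otherwise non-JSON text, so the thrasher most likely got empty output from a command it expected to emit JSON while the cluster was being torn down. A minimal reproduction of just the error, not the teuthology code path:

    # Python 2 only; Python 3 raises json.decoder.JSONDecodeError instead.
    import json
    json.loads('')   # ValueError: No JSON object could be decoded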


Related issues 1 (0 open, 1 closed)

Is duplicate of Ceph - Bug #10630: failed to decode json output from thrashosds (Resolved, 01/24/2015)

Actions #1

Updated by Sage Weil about 9 years ago

  • Status changed from New to Duplicate
  • Source changed from other to Q/A
Actions #2

Updated by Kefu Chai about 9 years ago

I am trying to reproduce this issue before diving into it. The tests reported by Sage and David are slightly different, so I started with the one from Sage:

$ teuthology-schedule --name `date +'ubuntu-%Y-%m-%d_%H:%M:%S-thrash-test-master---basic-multi'` \
 --num 1 --worker multi --owner tchaikov@gmail.com \
 --description 'rados/thrash-erasure-code/{clusters/fixed-2.yaml fs/ext4.yaml msgr-failures/few.yaml thrashers/morepggrow.yaml workloads/ec-radosbench.yaml}' \
 -- \
 /tmp/schedule_suite_ZQQTIx /home/ubuntu/src/ceph-qa-suite_master/suites/rados/thrash-erasure-code/{clusters/fixed-2.yaml,fs/ext4.yaml,msgr-failures/few.yaml,thrashers/morepggrow.yaml,workloads/ec-radosbench.yaml}

Is there any way I can check the status of the job?
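
For context while reproducing: the exception is re-raised from the thrasher greenlet when do_join() collects the thread result, so the actual failure is wherever the thrasher parses ceph CLI output as JSON. A rough sketch of that pattern, with assumed names (the helper and command shown are illustrative, not the exact ceph_manager.py code):

    import json

    def get_osd_dump_json(manager):
        # Assumed pattern: run a ceph command with --format=json and parse stdout.
        # If the command returns empty output (e.g. while the cluster is shutting
        # down), Python 2's json.loads() raises "No JSON object could be decoded".
        out = manager.raw_cluster_cmd('osd', 'dump', '--format=json')  # assumed helper
        return json.loads(out)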
