Ceph : Issues
https://tracker.ceph.com/
Feed generated 2018-10-10T17:11:17Z
teuthology - Feature #36383 (New): support for nfs-ganesha in teuthology
https://tracker.ceph.com/issues/36383
2018-10-10T17:11:17Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
It would be nice to have nfs-ganesha testing added in teuthology.

teuthology - Feature #21606 (Closed): implement simple tests for rgw multisite feature
https://tracker.ceph.com/issues/21606
2017-09-29T20:27:31Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
Reference tracker ticket: http://tracker.ceph.com/issues/20251
Test case: set up an rgw multisite cluster with master and secondary zones.
1. On the master, create ~20 buckets to generate some metadata sync load.
2. On the secondary, use systemctl to restart radosgw a few times, roughly every ~60 seconds.
Expected result: no crash in RGWMetaSyncShardCR::incremental_sync completion.
Note: the fix for this ticket was shipped as a customer hotfix in ceph version 10.2.7-28.0.hotfix.bz1476888.el7cp.
We already have test scripts to run on the primary and secondary setups; all that remains is to include the test in the rgw qa suite.
On the master zone - createbuckets.py [run this script on the master zone]:
<pre>
# createbuckets.py: create one bucket per second on the master zone
# to generate metadata sync load for the secondary.
import sys
from time import sleep
import boto
import boto.s3.connection

access_key = '7ZYL6IDWBC1C1K3WBI6E'
secret_key = 'loL0logoEpqsKMynNlreeNDCpxkTvQS2ay2sXzw1'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='magna057.ceph.redhat.com',
    is_secure=False, port=80,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

count = 0
while True:
    count += 1
    bname = 'bzthursbucketnumber%d' % count
    bucket = conn.create_bucket(bname)
    sleep(1)
</pre>
On the secondary zone - rgwstart.sh & [run the script in the background so it restarts rgw every minute]:
<pre>
#! /bin/bash -fv
# Restart the local radosgw instance once a minute.
while true; do
    systemctl restart ceph-radosgw@rgw.`hostname -s`
    sleep 60
done
</pre>
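After the soak period described below, a quick check for the crash could look like this (a minimal sketch; the log path is from this ticket, the grep pattern is an assumption):

<pre>
#! /bin/bash
# Scan the secondary's rgw log for crash markers after the 10-hour soak.
log=/var/log/ceph/ceph-rgw.$(hostname -s).log
if grep -Ei 'segv|segmentation fault|caught signal' "$log"; then
    echo "rgw crash detected in $log" >&2
    exit 1
fi
echo "no crash markers found in $log"
</pre>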
Leave the test scripts running for 10 hours and ensure there is no segfault in the rgw log [/var/log/ceph/ceph-rgw.<hostname>.log].

ceph-ansible - Bug #18981 (New): Don't try to install ceph-fs-common in kraken or later
https://tracker.ceph.com/issues/18981
2017-02-17T21:16:19Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
ceph-ansible version: 2.2.1
ceph version: kraken
Distro: Xenial
It looks like broken packages are causing the following error:
<pre>
2017-02-16T11:34:52.654 INFO:teuthology.orchestra.run.vpm027.stdout:TASK [ceph.ceph-common : install ceph] *****************************************
2017-02-16T11:34:52.654 INFO:teuthology.orchestra.run.vpm027.stdout:task path: /home/ubuntu/ceph-ansible/roles/ceph-common/tasks/installs/install_on_debian.yml:18
2017-02-16T11:34:54.269 INFO:teuthology.orchestra.run.vpm027.stdout:failed: [vpm137.front.sepia.ceph.com] (item=[u'ceph', u'ceph-common', u'ceph-fs-common', u'ceph-fuse']) => {"cache_update_time": 1487244892, "cache_updated": false, "failed": true, "item": ["ceph", "ceph-common", "ceph-fs-common", "ceph-fuse"], "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'ceph' 'ceph-common' 'ceph-fs-common' 'ceph-fuse' -t 'xenial'' failed: E: Unable to correct problems, you have held broken packages.\n", "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n ceph-common : Breaks: ceph-fs-common (< 11.0) but 10.2.5-0ubuntu0.16.04.1 is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " ceph-common : Breaks: ceph-fs-common (< 11.0) but 10.2.5-0ubuntu0.16.04.1 is to be installed"]}
2017-02-16T11:34:54.439 INFO:teuthology.orchestra.run.vpm027.stdout:failed: [vpm013.front.sepia.ceph.com] (item=[u'ceph', u'ceph-common', u'ceph-fs-common', u'ceph-fuse']) => {"cache_update_time": 1487244892, "cache_updated": false, "failed": true, "item": ["ceph", "ceph-common", "ceph-fs-common", "ceph-fuse"], "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'ceph' 'ceph-common' 'ceph-fs-common' 'ceph-fuse' -t 'xenial'' failed: E: Unable to correct problems, you have held broken packages.\n", "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n ceph-common : Breaks: ceph-fs-common (< 11.0) but 10.2.5-0ubuntu0.16.04.1 is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " ceph-common : Breaks: ceph-fs-common (< 11.0) but 10.2.5-0ubuntu0.16.04.1 is to be installed"]}
2017-02-16T11:34:54.603 INFO:teuthology.orchestra.run.vpm027.stdout:failed: [vpm027.front.sepia.ceph.com] (item=[u'ceph', u'ceph-common', u'ceph-fs-common', u'ceph-fuse']) => {"cache_update_time": 1487244892, "cache_updated": false, "failed": true, "item": ["ceph", "ceph-common", "ceph-fs-common", "ceph-fuse"], "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'ceph' 'ceph-common' 'ceph-fs-common' 'ceph-fuse' -t 'xenial'' failed: E: Unable to correct problems, you have held broken packages.\n", "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n ceph-common : Breaks: ceph-fs-common (< 11.0) but 10.2.5-0ubuntu0.16.04.1 is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " ceph-common : Breaks: ceph-fs-common (< 11.0) but 10.2.5-0ubuntu0.16.04.1 is to be installed"]}
2017-02-16T11:34:54.605 INFO:teuthology.orchestra.run.vpm027.stdout: to retry, use: --limit @/home/ubuntu/ceph-ansible/site.retry
2017-02-16T11:34:54.605 INFO:teuthology.orchestra.run.vpm027.stdout:
2017-02-16T11:34:54.605 INFO:teuthology.orchestra.run.vpm027.stdout:PLAY RECAP *********************************************************************
2017-02-16T11:34:54.606 INFO:teuthology.orchestra.run.vpm027.stdout:vpm013.front.sepia.ceph.com : ok=17 changed=5 unreachable=0 failed=1
2017-02-16T11:34:54.606 INFO:teuthology.orchestra.run.vpm027.stdout:vpm027.front.sepia.ceph.com : ok=17 changed=5 unreachable=0 failed=1
2017-02-16T11:34:54.606 INFO:teuthology.orchestra.run.vpm027.stdout:vpm137.front.sepia.ceph.com : ok=17 changed=5 unreachable=0 failed=1
2017-02-16T11:34:54.606 INFO:teuthology.orchestra.run.vpm027.stdout:
2017-02-16T11:34:54.706 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 121, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 196, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 149, in execute_playbook
self.run_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 357, in run_playbook
run.Raw(str_args)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 192, in run
r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 403, in run
r.wait()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 166, in wait
label=self.label)
CommandFailedError: Command failed on vpm027 with status 2: "cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install 'setuptools>=11.3' ansible==2.2.1 ; ansible-playbook -vv -i inven.yml site.yml"
2017-02-16T11:34:54.722 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/sepia/teuthology/?q=59e9d1f2b24f4fe08097519f9b8ad649
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 121, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 196, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 149, in execute_playbook
self.run_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 357, in run_playbook
run.Raw(str_args)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 192, in run
r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 403, in run
r.wait()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 166, in wait
label=self.label)
CommandFailedError: Command failed on vpm027 with status 2: "cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install 'setuptools>=11.3' ansible==2.2.1 ; ansible-playbook -vv -i inven.yml site.yml"
2017-02-16T11:34:54.723 DEBUG:teuthology.run_tasks:Unwinding manager ceph_ansible
2017-02-16T11:34:54.735 INFO:teuthology.task.ceph_ansible:Cleaning up temporary files
2017-02-16T11:34:54.735 DEBUG:teuthology.run_tasks:Unwinding manager ssh-keys
2017-02-16T11:34:54.748 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/contextutil.py", line 32, in nested
yield vars
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ssh_keys.py", line 206, in task
yield
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 121, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 196, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 149, in execute_playbook
self.run_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 357, in run_playbook
run.Raw(str_args)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 192, in run
r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 403, in run
r.wait()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 166, in wait
label=self.label)
CommandFailedError: Command failed on vpm027 with status 2: "cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install 'setuptools>=11.3' ansible==2.2.1 ; ansible-playbook -vv -i inven.yml site.yml"
</pre>

teuthology - Bug #18858 (Resolved): ceph-ansible fails during monitor creation
https://tracker.ceph.com/issues/18858
2017-02-08T19:03:28Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
ceph branch: jewel
ansible version: 2.2.1
The ceph-ansible task run to deploy ceph [jewel branch] fails during mon creation.
<pre>
2017-02-08 00:10:42,190.190 INFO:teuthology.orchestra.run.mira109.stdout:TASK [ceph-mon : start the monitor service] ************************************
2017-02-08 00:10:42,190.190 INFO:teuthology.orchestra.run.mira109.stdout:task path: /home/ubuntu/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2
2017-02-08 00:10:42,441.441 INFO:teuthology.orchestra.run.mira109.stdout:fatal: [mira109.front.sepia.ceph.com]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service ceph-mon@mira109: Job for ceph-mon@mira109.service failed because the control process exited with error code. See \"systemctl status ceph-mon@mira109.service\" and \"journalctl -xe\" for details.\n"}
2017-02-08 00:10:42,444.444 INFO:teuthology.orchestra.run.mira109.stdout:fatal: [mira090.front.sepia.ceph.com]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to start service ceph-mon@mira090: Job for ceph-mon@mira090.service failed because the control process exited with error code. See \"systemctl status ceph-mon@mira090.service\" and \"journalctl -xe\" for details.\n"}
2017-02-08 00:10:42,445.445 INFO:teuthology.orchestra.run.mira109.stdout:
2017-02-08 00:10:42,445.445 INFO:teuthology.orchestra.run.mira109.stdout:RUNNING HANDLER [ceph.ceph-common : restart ceph mons] *************************
2017-02-08 00:10:42,446.446 INFO:teuthology.orchestra.run.mira109.stdout:
2017-02-08 00:10:42,446.446 INFO:teuthology.orchestra.run.mira109.stdout:RUNNING HANDLER [ceph.ceph-common : restart ceph osds] *************************
2017-02-08 00:10:42,447.447 INFO:teuthology.orchestra.run.mira109.stdout:
2017-02-08 00:10:42,447.447 INFO:teuthology.orchestra.run.mira109.stdout:RUNNING HANDLER [ceph.ceph-common : restart ceph mdss] *************************
2017-02-08 00:10:42,447.447 INFO:teuthology.orchestra.run.mira109.stdout:
2017-02-08 00:10:42,447.447 INFO:teuthology.orchestra.run.mira109.stdout:RUNNING HANDLER [ceph.ceph-common : restart ceph rgws] *************************
2017-02-08 00:10:42,447.447 INFO:teuthology.orchestra.run.mira109.stdout:
2017-02-08 00:10:42,447.447 INFO:teuthology.orchestra.run.mira109.stdout:RUNNING HANDLER [ceph.ceph-common : restart ceph nfss] *************************
2017-02-08 00:10:42,448.448 INFO:teuthology.orchestra.run.mira109.stdout: to retry, use: --limit @/home/ubuntu/ceph-ansible/site.retry
2017-02-08 00:10:42,448.448 INFO:teuthology.orchestra.run.mira109.stdout:
2017-02-08 00:10:42,449.449 INFO:teuthology.orchestra.run.mira109.stdout:PLAY RECAP *********************************************************************
2017-02-08 00:10:42,449.449 INFO:teuthology.orchestra.run.mira109.stdout:mira090.front.sepia.ceph.com : ok=49 changed=14 unreachable=0 failed=1
2017-02-08 00:10:42,449.449 INFO:teuthology.orchestra.run.mira109.stdout:mira109.front.sepia.ceph.com : ok=48 changed=12 unreachable=0 failed=1
2017-02-08 00:10:42,449.449 INFO:teuthology.orchestra.run.mira109.stdout:
2017-02-08 00:10:42,553.553 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/tamil/new/teuthology/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/tamil/new/teuthology/teuthology/task/__init__.py", line 121, in __enter__
self.begin()
File "/home/tamil/new/teuthology/teuthology/task/ceph_ansible.py", line 197, in begin
self.execute_playbook()
File "/home/tamil/new/teuthology/teuthology/task/ceph_ansible.py", line 150, in execute_playbook
self.run_playbook()
File "/home/tamil/new/teuthology/teuthology/task/ceph_ansible.py", line 353, in run_playbook
run.Raw(str_args)
File "/home/tamil/new/teuthology/teuthology/orchestra/remote.py", line 192, in run
r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
File "/home/tamil/new/teuthology/teuthology/orchestra/run.py", line 403, in run
r.wait()
File "/home/tamil/new/teuthology/teuthology/orchestra/run.py", line 166, in wait
label=self.label)
CommandFailedError: Command failed on mira109 with status 2: 'cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install setuptools==32.3.1 ansible==2.2.1 ; ansible-playbook -vv -i inven.yml site.yml'
</pre>
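The failure message itself suggests the next triage step; a minimal sketch of what to run on one of the failing monitor hosts (host name taken from the log above, log path is the standard Ceph default):

<pre>
#! /bin/bash
# Triage a ceph-mon unit that failed to start, as suggested by the error above.
systemctl status ceph-mon@mira109.service
journalctl -xe -u ceph-mon@mira109.service
# The monitor's own log, if the daemon got far enough to write one:
tail -n 100 /var/log/ceph/ceph-mon.mira109.log
</pre>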
Detailed logs can be found at teuthology:/home/tamil/new/teuthology/today1/teuthology.log

teuthology - Bug #18790 (Won't Fix): ceph-ansible task fails with error "no module named six.moves"
https://tracker.ceph.com/issues/18790
2017-02-02T01:04:04Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
Running the ceph-ansible task against the kraken branch fails with the following error:
<pre>
2017-01-26T00:47:58.429 INFO:teuthology.orchestra.run.vpm135:Running: "cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install 'setuptools>=11.3' ansible==2.1 ; ansible-playbook -vv -i inven.yml site.yml"
2017-01-26T00:47:58.835 INFO:teuthology.orchestra.run.vpm135.stdout:New python executable in venv/bin/python
2017-01-26T00:47:59.437 INFO:teuthology.orchestra.run.vpm135.stdout:Installing Setuptools..............................................................................................................................................................................................................................done.
2017-01-26T00:48:00.047 INFO:teuthology.orchestra.run.vpm135.stdout:Installing Pip.....................................................................................................................................................................................................................................................................................................................................done.
2017-01-26T00:48:01.570 INFO:teuthology.orchestra.run.vpm135.stdout:Downloading/unpacking setuptools>=11.3
2017-01-26T00:48:01.570 INFO:teuthology.orchestra.run.vpm135.stdout: Running setup.py egg_info for package setuptools
2017-01-26T00:48:01.570 INFO:teuthology.orchestra.run.vpm135.stdout: Traceback (most recent call last):
2017-01-26T00:48:01.571 INFO:teuthology.orchestra.run.vpm135.stdout: File "<string>", line 3, in <module>
2017-01-26T00:48:01.571 INFO:teuthology.orchestra.run.vpm135.stdout: File "setuptools/__init__.py", line 10, in <module>
2017-01-26T00:48:01.571 INFO:teuthology.orchestra.run.vpm135.stdout: from six.moves import filter, map
2017-01-26T00:48:01.571 INFO:teuthology.orchestra.run.vpm135.stdout: ImportError: No module named six.moves
2017-01-26T00:48:01.571 INFO:teuthology.orchestra.run.vpm135.stdout: Complete output from command python setup.py egg_info:
2017-01-26T00:48:01.572 INFO:teuthology.orchestra.run.vpm135.stdout: Traceback (most recent call last):
2017-01-26T00:48:01.572 INFO:teuthology.orchestra.run.vpm135.stdout:
2017-01-26T00:48:01.572 INFO:teuthology.orchestra.run.vpm135.stdout: File "<string>", line 3, in <module>
2017-01-26T00:48:01.572 INFO:teuthology.orchestra.run.vpm135.stdout:
2017-01-26T00:48:01.572 INFO:teuthology.orchestra.run.vpm135.stdout: File "setuptools/__init__.py", line 10, in <module>
2017-01-26T00:48:01.573 INFO:teuthology.orchestra.run.vpm135.stdout:
2017-01-26T00:48:01.573 INFO:teuthology.orchestra.run.vpm135.stdout: from six.moves import filter, map
2017-01-26T00:48:01.573 INFO:teuthology.orchestra.run.vpm135.stdout:
2017-01-26T00:48:01.573 INFO:teuthology.orchestra.run.vpm135.stdout:ImportError: No module named six.moves
2017-01-26T00:48:01.573 INFO:teuthology.orchestra.run.vpm135.stdout:
2017-01-26T00:48:01.574 INFO:teuthology.orchestra.run.vpm135.stdout:----------------------------------------
2017-01-26T00:48:01.574 INFO:teuthology.orchestra.run.vpm135.stdout:Cleaning up...
2017-01-26T00:48:01.574 INFO:teuthology.orchestra.run.vpm135.stdout:Command python setup.py egg_info failed with error code 1 in /home/ubuntu/ceph-ansible/venv/build/setuptools
2017-01-26T00:48:01.574 INFO:teuthology.orchestra.run.vpm135.stdout:Storing complete log in /tmp/tmp1qN5HK
2017-01-26T00:48:01.597 INFO:teuthology.orchestra.run.vpm135.stderr:bash: ansible-playbook: command not found
2017-01-26T00:48:01.597 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 121, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 196, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 149, in execute_playbook
self.run_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 352, in run_playbook
run.Raw(str_args)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 192, in run
r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 403, in run
r.wait()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 166, in wait
label=self.label)
CommandFailedError: Command failed on vpm135 with status 127: "cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install 'setuptools>=11.3' ansible==2.1 ; ansible-playbook -vv -i inven.yml site.yml"
2017-01-26T00:48:01.634 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/sepia/teuthology/?q=00745e0e57a3485ea4d25de4eb4cea40
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 121, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 196, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 149, in execute_playbook
self.run_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ceph_ansible.py", line 352, in run_playbook
run.Raw(str_args)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 192, in run
r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 403, in run
r.wait()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 166, in wait
label=self.label)
CommandFailedError: Command failed on vpm135 with status 127: "cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install 'setuptools>=11.3' ansible==2.1 ; ansible-playbook -vv -i inven.yml site.yml"
</pre>
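The later ceph-ansible runs captured in #18981 upgrade pip before installing setuptools and ansible; a sketch of that bootstrap ordering, which appears to sidestep the setup.py egg_info path that fails here:

<pre>
#! /bin/bash
# Bootstrap the ceph-ansible virtualenv, upgrading pip first so setuptools can be
# installed without the setup.py egg_info step where the six.moves import fails above.
cd ~/ceph-ansible
virtualenv venv
source venv/bin/activate
pip install --upgrade pip
pip install 'setuptools>=11.3' ansible==2.1
ansible-playbook -vv -i inven.yml site.yml
</pre>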
Logs: http://qa-proxy.ceph.com/teuthology/yuriw-2017-01-25_18:41:00-ceph-ansible-kraken-distro-basic-vps/747665/teuthology.log

sepia - Bug #18702 (Resolved): mira009 crashed and so all the jobs that tried to use mira009 failed
https://tracker.ceph.com/issues/18702
2017-01-27T18:22:45Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
An excerpt from the teuthology log:
<pre>
2017-01-26T00:30:55.005 DEBUG:teuthology.lock:locked ubuntu@vpm053.front.sepia.ceph.com
2017-01-26T00:30:55.032 INFO:teuthology.provision.downburst:Provisioning a ubuntu 14.04 vps
2017-01-26T00:30:55.620 INFO:teuthology.provision.downburst:Downburst failed on ubuntu@vpm053.front.sepia.ceph.com: libvirt: XML-RPC error : Cannot recv data: ssh: connect to host mira009.front.sepia.ceph.com port 22: No route to host: Connection reset by peer
Traceback (most recent call last):
File "/home/teuthworker/src/downburst/virtualenv/bin/downburst", line 9, in <module>
load_entry_point('downburst', 'console_scripts', 'downburst')()
File "/home/teuthworker/src/downburst/downburst/cli.py", line 64, in main
return args.func(args)
File "/home/teuthworker/src/downburst/downburst/create.py", line 22, in create
conn = libvirt.open(args.connect)
File "/home/teuthworker/src/downburst/virtualenv/local/lib/python2.7/site-packages/libvirt.py", line 255, in open
if ret is None:raise libvirtError('virConnectOpen() failed')
libvirt.libvirtError: Cannot recv data: ssh: connect to host mira009.front.sepia.ceph.com port 22: No route to host: Connection reset by peer
2017-01-26T00:30:55.621 ERROR:teuthology.lock:Unable to create virtual machine: ubuntu@vpm053.front.sepia.ceph.com
2017-01-26T00:30:58.964 ERROR:teuthology.provision.downburst:Error destroying vpm053.front.sepia.ceph.com: libvirt: XML-RPC error : Cannot recv data: ssh: connect to host mira009.front.sepia.ceph.com port 22: No route to host: Connection reset by peer
Traceback (most recent call last):
File "/home/teuthworker/src/downburst/virtualenv/bin/downburst", line 9, in <module>
load_entry_point('downburst', 'console_scripts', 'downburst')()
File "/home/teuthworker/src/downburst/downburst/cli.py", line 64, in main
return args.func(args)
File "/home/teuthworker/src/downburst/downburst/destroy.py", line 42, in destroy
conn = libvirt.open(args.connect)
File "/home/teuthworker/src/downburst/virtualenv/local/lib/python2.7/site-packages/libvirt.py", line 255, in open
if ret is None:raise libvirtError('virConnectOpen() failed')
libvirt.libvirtError: Cannot recv data: ssh: connect to host mira009.front.sepia.ceph.com port 22: No route to host: Connection reset by peer
2017-01-26T00:30:58.964 ERROR:teuthology.lock:destroy failed for vpm053.front.sepia.ceph.com
2017-01-26T00:30:59.002 INFO:teuthology.lock:unlocked vpm053.front.sepia.ceph.com
</pre>
Full logs available at http://qa-proxy.ceph.com/teuthology/yuriw-2017-01-25_18:41:00-ceph-ansible-kraken-distro-basic-vps/747666/teuthology.log

teuthology - Bug #18528 (New): ceph-ansible task does not create client keyrings yet
https://tracker.ceph.com/issues/18528
2017-01-13T23:16:57Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
The ceph-ansible task currently assumes client.admin.keyring is the only client keyring file.
Rather, it should be able to create client keyrings [like client.0.keyring, ...] just as the ceph and other tasks do.
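For illustration, creating such a keyring by hand might look like the sketch below; the capability set is only an assumption, not necessarily what the task should use:

<pre>
#! /bin/bash
# Create a client.0 keyring next to client.admin so ceph-fuse can authenticate.
# The caps below are just an example and would need to match the cluster.
sudo ceph auth get-or-create client.0 \
    mon 'allow r' mds 'allow' osd 'allow rw' \
    -o /etc/ceph/ceph.client.0.keyring
sudo chmod 644 /etc/ceph/ceph.client.0.keyring
</pre>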
Since no client keyring files are created, ceph-fuse workunits fail with the following error:
<pre>
2017-01-13T00:41:27.736 INFO:teuthology.run_tasks:Running task ceph-fuse...
2017-01-13T00:41:27.826 INFO:tasks.ceph_fuse:Mounting ceph-fuse clients...
2017-01-13T00:41:27.826 INFO:tasks.cephfs.fuse_mount:Client client.0 config is {}
2017-01-13T00:41:27.826 INFO:tasks.cephfs.fuse_mount:Mounting ceph-fuse client.0 at ubuntu@vpm153.front.sepia.ceph.com /home/ubuntu/cephtest/mnt.0...
2017-01-13T00:41:27.826 INFO:teuthology.orchestra.run.vpm153:Running: 'mkdir -- /home/ubuntu/cephtest/mnt.0'
2017-01-13T00:41:27.832 INFO:teuthology.orchestra.run.vpm153:Running: 'sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections'
2017-01-13T00:41:27.913 INFO:teuthology.orchestra.run.vpm153.stderr:mount: /sys/fs/fuse/connections already mounted or /sys/fs/fuse/connections busy
2017-01-13T00:41:27.913 INFO:teuthology.orchestra.run.vpm153.stderr:mount: according to mtab, none is already mounted on /sys/fs/fuse/connections
2017-01-13T00:41:27.914 INFO:teuthology.orchestra.run.vpm153:Running: 'ls /sys/fs/fuse/connections'
2017-01-13T00:41:27.983 INFO:tasks.cephfs.fuse_mount:Pre-mount connections: []
2017-01-13T00:41:27.983 INFO:teuthology.orchestra.run.vpm153:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --name client.0 /home/ubuntu/cephtest/mnt.0'
2017-01-13T00:41:28.049 INFO:teuthology.orchestra.run.vpm153:Running: 'sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections'
2017-01-13T00:41:28.067 INFO:teuthology.orchestra.run.vpm153.stderr:mount: /sys/fs/fuse/connections already mounted or /sys/fs/fuse/connections busy
2017-01-13T00:41:28.068 INFO:teuthology.orchestra.run.vpm153.stderr:mount: according to mtab, none is already mounted on /sys/fs/fuse/connections
2017-01-13T00:41:28.068 INFO:teuthology.orchestra.run.vpm153:Running: 'ls /sys/fs/fuse/connections'
2017-01-13T00:41:28.095 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vpm153.stdout:ceph-fuse[25023]: starting ceph client
2017-01-13T00:41:28.096 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vpm153.stderr:2017-01-13 00:41:28.094804 7f0ae3701ec0 -1 init, newargv = 0x7f0aed10c900 newargc=11
2017-01-13T00:41:28.096 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vpm153.stderr:2017-01-13 00:41:28.095409 7f0ae3701ec0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.0.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2017-01-13T00:41:28.097 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vpm153.stderr:ceph-fuse[25023]: ceph mount failed with (95) Operation not supported
2017-01-13T00:41:28.097 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vpm153.stderr:2017-01-13 00:41:28.096634 7f0ae3701ec0 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2017-01-13T00:41:28.293 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.vpm153.stderr:daemon-helper: command failed with exit status 161
2017-01-13T00:41:29.106 INFO:teuthology.orchestra.run.vpm153:Running: 'sudo mount -t fusectl /sys/fs/fuse/connections /sys/fs/fuse/connections'
2017-01-13T00:41:29.120 INFO:teuthology.orchestra.run.vpm153.stderr:mount: /sys/fs/fuse/connections already mounted or /sys/fs/fuse/connections busy
2017-01-13T00:41:29.120 INFO:teuthology.orchestra.run.vpm153.stderr:mount: according to mtab, none is already mounted on /sys/fs/fuse/connections
2017-01-13T00:41:29.121 INFO:teuthology.orchestra.run.vpm153:Running: 'ls /sys/fs/fuse/connections'
2017-01-13T00:41:29.191 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/ceph_fuse.py", line 126, in task
mount.mount()
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/cephfs/fuse_mount.py", line 121, in mount
self.fuse_daemon.wait()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 166, in wait
label=self.label)
CommandFailedError: Command failed on vpm153 with status 161: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --name client.0 /home/ubuntu/cephtest/mnt.0'
2017-01-13T00:41:29.205 ERROR:teuthology.run_tasks: Sentry event: http://sentry.ceph.com/sepia/teuthology/?q=cba893a0153a4ef98eeebc9f3c3600b3
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/ceph_fuse.py", line 126, in task
mount.mount()
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/cephfs/fuse_mount.py", line 121, in mount
self.fuse_daemon.wait()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 166, in wait
label=self.label)
CommandFailedError: Command failed on vpm153 with status 161: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --name client.0 /home/ubuntu/cephtest/mnt.0'
</pre>

Ceph - Documentation #17626 (Closed): installation and upgrade docs are way out of date
https://tracker.ceph.com/issues/17626
2016-10-19T20:33:36Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
It looks like the upstream documentation [http://docs.ceph.com/docs/master/] is not up to date for all new features; it is important, and would be great, to have updated installation and upgrade procedures upstream.
We need upgrade procedures for "firefly to hammer to jewel" and "hammer to jewel".

RADOS - Documentation #16356 (Resolved): doc: manual deployment of ceph monitor needs fix
https://tracker.ceph.com/issues/16356
2016-06-17T01:37:05Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
http://docs.ceph.com/docs/master/install/manual-deployment/
While setting up ceph-mon, step 8 in the doc says to create a mon keyring:

ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
After the keyring is created, /tmp/ceph.mon.keyring has to be accessible by the ceph user for mkfs to succeed.
So please add a step here like:
sudo chown ceph:ceph /tmp/ceph.mon.keyring
And in step 11, the command needs to be run as the root user or it will error out during mkfs.
ex: sudo monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
Otherwise, the ceph-mon is never created and the deployment gets stuck at ceph-create-keys.
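Putting both corrections together, the affected portion of the procedure would read roughly as follows (placeholders as in the doc; the final mkfs command is how the doc's later step typically reads and is included only for context):

<pre>
#! /bin/bash
# Steps 8-11 of the manual deployment, with the proposed ownership fix and sudo added.
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
sudo chown ceph:ceph /tmp/ceph.mon.keyring                                         # proposed new step
sudo monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap   # step 11, run as root
sudo -u ceph ceph-mon --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
</pre>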
The permissions of the files should be as follows:
<pre>
ceph ceph  214 Jun 16 22:44 /tmp/ceph.mon.keyring
root root  194 Jun 16 22:45 /tmp/monmap
ceph ceph 4096 Jun 17 01:28 /var/lib/ceph/mon/ceph-magna103
</pre>

Ceph - Documentation #15438 (Closed): package to be installed needs correction
https://tracker.ceph.com/issues/15438
2016-04-08T18:53:36Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
In http://docs.ceph.com/docs/master/rbd/rbd-openstack/, under the section "Configure Openstack Ceph Clients", the yum equivalent of the ceph-common package is misspelled as "ceph" instead of "ceph-common". It has to be corrected, as it is misleading.
Pasting below the exact sentences:

sudo apt-get install ceph-common
sudo yum install ceph

teuthology - Feature #9317 (New): run tests with a mix of distros
https://tracker.ceph.com/issues/9317
2014-09-02T12:00:04Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
Ability to run teuthology tests on a mix of distros: rhel7, rhel6, trusty, precise.

devops - Bug #9232 (Closed): disk zap doesn't remove the dmcrypt settings on disk
https://tracker.ceph.com/issues/9232
2014-08-25T18:08:07Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
Well, I am really not sure how this is supposed to behave.
I deployed a cluster using ceph-deploy and enabled dmcrypt on the disks when creating OSDs. After destroying the cluster [purge and purgedata], the disks that were dmcrypt-enabled remain the same; there is no change even after I zap them.
While I think zap officially deals with the data, I am not sure whether zap could also clear the dmcrypt setting.
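For reference, the manual cleanup that zap currently leaves to the user looks roughly like this (device and mapping names are placeholders, not taken from this ticket):

<pre>
#! /bin/bash
# Release any leftover dm-crypt mapping on the old OSD disk, then wipe its signatures.
sudo dmsetup ls                       # find the leftover mapping for the OSD device
sudo cryptsetup luksClose {mapping}   # close it
sudo wipefs -a /dev/sdb               # remove LUKS and filesystem signatures
sudo sgdisk --zap-all /dev/sdb        # clear GPT/MBR structures
</pre>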
This makes me think that dmcrypt is a one-time setting I can never dare to play with, and that the only way to reset it is by reformatting the disk? That doesn't sound like a good idea.

Ceph-deploy - Bug #9122 (Closed): ceph-deploy: disk list throws an exception
https://tracker.ceph.com/issues/9122
2014-08-14T15:00:20Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
ceph-deploy version: 1.5.11
ceph version: firefly [v0.80.5]
<pre>
ubuntu@vpm070:~/ceph-deploy$ ./ceph-deploy disk list vpm070
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.11): ./ceph-deploy disk list vpm070
[vpm070][DEBUG ] connected to host: vpm070
[vpm070][DEBUG ] detect platform information from remote host
[vpm070][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Listing disks on vpm070...
[vpm070][DEBUG ] find the location of an executable
[vpm070][INFO ] Running command: sudo /usr/sbin/ceph-disk list
[vpm070][DEBUG ] /dev/sr0 other, iso9660
[vpm070][DEBUG ] /dev/vda :
[vpm070][DEBUG ] /dev/vda1 other, ext4, mounted on /
[vpm070][DEBUG ] /dev/vdb other, unknown
[vpm070][DEBUG ] /dev/vdc other, unknown
[vpm070][DEBUG ] /dev/vdd other, unknown
Unhandled exception in thread started by <bound method WorkerPool._perform_spawn of <execnet.gateway_base.WorkerPool object at 0x2ab4c10>>
Traceback (most recent call last):
File "/home/ubuntu/ceph-deploy/ceph_deploy/lib/vendor/remoto/lib/vendor/execnet/gateway_base.py", line 257, in _perform_spawn
with self._running_lock:
File "/usr/lib/python2.7/threading.py", line 167, in acquire
me = _get_ident()
TypeError: 'NoneType' object is not callable
</pre>

rgw - Bug #6621 (Resolved): quota: the max-size and max-objects value when zero
https://tracker.ceph.com/issues/6621
2013-10-23T17:19:37Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
ceph version: 0.71-249-g31a9492 (31a94922a9ada132bea06be308484ead84e4d879)
While setting a quota, the max-size field has to be validated so it does not accept 0 or any negative values.
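A sketch of the expected behaviour with such validation in place (the positive limits are arbitrary examples; the exact error text is an assumption):

<pre>
# A positive limit should be accepted:
sudo radosgw-admin quota set --bucket=kftestbucket2013_09_23__17_12_02 --max-size=1024 --max-objects=100
# Zero (or a negative value) should be rejected with an error rather than stored:
sudo radosgw-admin quota set --bucket=kftestbucket2013_09_23__17_12_02 --max-size=0
</pre>

The current behaviour, shown below, silently accepts the zero value and leaves the quota enabled with max_size_kb: 0.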
<pre>
ubuntu@mira026:~$ sudo radosgw-admin quota set --bucket=kftestbucket2013_09_23__17_12_02 --max-size=0
ubuntu@mira026:~$ sudo radosgw-admin bucket stats --bucket=kftestbucket2013_09_23__17_12_02
{ "bucket": "kftestbucket2013_09_23__17_12_02",
"pool": ".rgw.buckets",
"index_pool": ".rgw.buckets",
"id": "ny-ny.5511.30",
"marker": "ny-ny.5511.30",
"owner": "qa_user",
"ver": 5,
"master_ver": 0,
"mtime": 1382573765,
"max_marker": "",
"usage": { "rgw.main": { "size_kb": 20480,
"size_kb_actual": 20480,
"num_objects": 2}},
"bucket_quota": { "enabled": true,
"max_size_kb": 0,
"max_objects": -1}}
</pre>

CephFS - Bug #4721 (Resolved): libcephfs tests fail when using ceph-deploy
https://tracker.ceph.com/issues/4721
2013-04-12T17:41:46Z - Tamilarasi Muthamizhan <tamil.muthamizhan@inktank.com>
ceph version: 0.60-467-g6b98162-1precise
config.yaml used to reproduce:
<pre>
tamil@ubuntu:~/test_logs_cuttlefish/apr12_cdep_libcephfs$ cat orig.config.yaml
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.c
  - osd.2
  - osd.3
- - client.0
targets:
  ubuntu@plana56.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCiHA0zb9WJvp1awABMKJDad6u+wnobe01aTfD8YSHiydDuzxClecFiX1yX2007aVQNxAW9AVPH+avGVA5PHzcaweOLVosjll3FuJ4C+jmDmM5jy2xLHamBXLCAcEeM0VUFyUled38pjCZiZwlxGReHh7C4p5msAi/fObLTqqKO62cX7omoH/INmVoVA+4FztOpPH5CJ/NUzYtBrtp2j2BEYCHHTav6sjJfOFINvbaLGtfPQCus8Zx1u+acdoeGKs6vRS4V/dftMi42SGJ8XuNyUSxiUa0kfgk2lC/069nJHBGZ5roTXWA0feJ4zVFLqPYENaXsHqcJbvctuCVq1tRL
  ubuntu@plana57.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCuMOcu2XPQovy/Qzmwyvc9tvGP9JZVJ6cqiJ3RPOSGgAifKLTxe2ramHpD8AKcdthu8VAfouFpZK4CtBWKJowurR+4yZKgEugzvYuZ/nK/np56vreBQmRBWD1vLPtxPsTT3YGu5qx+ixdSwrSxexxc0/7+EW9x1D6knL+OGUNWksoGIRlXxjh9qafbw/1XKeQQF28vxBXHofXUFY8USMUcq5HDuaFfmgKzufH6vk84oqyr/jtGej6b4g6tbGiHPYR+o5tmTQHyxpOxqLZP2RFFqHlQ/QaOmRvSNIoOo+1UbqdcWsLk16/lXIS1mI+BZsZouk1H+fGeMTEUDGktiPW7
  ubuntu@plana58.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSrEdvZzLOAbSb6/bmBSAazRmmiN+zGTF6PoS2K83VPDk5CkrpMjxOnHhOC+FnOBo9tss/XZ/UaBx6BOmzjMqVvCZaS81LQ2hM7fR4qrBOBjFATab2XgaU0sbS1T4ihFrEgN+j5FTBSAiR+F6mEJhxiLaf3IJSDarp9aD5c7pij29pKjzp8FcoKsmLyMbdW+6AzAVwKcdEsawcXEQL8P6CE04hj5r2NdLGKOohSDZYNrXWozQ0sq3l+MMFjQKJKGPzSKvQCVklngex6XgfNTssNI1Se0WqpPlpSJNeajbUegGVEEbYFn5au3eQvrttl78zghEtENwCBCij063zNlkf
tasks:
- install:
    extras: true
- ssh_keys: null
- ceph-deploy:
    branch:
      dev: next
- ceph-fuse: null
- workunit:
    clients:
      client.0:
      - libcephfs/test.sh
</pre>
<pre>
2013-04-12T17:21:00.137 DEBUG:teuthology.orchestra.run:Running [10.214.132.22]: 'mkdir -p -- /tmp/cephtest/None/mnt.0/client.0/tmp && cd -- /tmp/cephtest/None/mnt.0/client.0/tmp && CEPH_REF=master TESTDIR="/tmp/cephtest/None" CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/tmp/cephtest/None/binary/usr/local/lib/python2.7/dist-packages:/tmp/cephtest/None/binary/usr/local/lib/python2.6/dist-packages" /tmp/cephtest/None/enable-coredump ceph-coverage /tmp/cephtest/None/archive/coverage /tmp/cephtest/None/workunit.client.0/libcephfs/test.sh'
2013-04-12T17:21:00.259 INFO:teuthology.task.workunit.client.0.out:Running main() from gtest_main.cc
2013-04-12T17:21:00.259 INFO:teuthology.task.workunit.client.0.out:[==========] Running 33 tests from 2 test cases.
2013-04-12T17:21:00.259 INFO:teuthology.task.workunit.client.0.out:[----------] Global test environment set-up.
2013-04-12T17:21:00.260 INFO:teuthology.task.workunit.client.0.out:[----------] 32 tests from LibCephFS
2013-04-12T17:21:00.260 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.OpenEmptyComponent
2013-04-12T17:21:00.262 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:31: Failure
2013-04-12T17:21:00.263 INFO:teuthology.task.workunit.client.0.out:Value of: ceph_mount(cmount, "/")
2013-04-12T17:21:00.263 INFO:teuthology.task.workunit.client.0.out: Actual: -2
2013-04-12T17:21:00.263 INFO:teuthology.task.workunit.client.0.out:Expected: 0
2013-04-12T17:21:00.264 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.OpenEmptyComponent (3 ms)
2013-04-12T17:21:00.264 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.MountNonExist
2013-04-12T17:21:00.265 INFO:teuthology.task.workunit.client.0.out:[ OK ] LibCephFS.MountNonExist (3 ms)
2013-04-12T17:21:00.265 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.MountDouble
2013-04-12T17:21:00.267 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:77: Failure
2013-04-12T17:21:00.268 INFO:teuthology.task.workunit.client.0.out:Value of: ceph_mount(cmount, "/")
2013-04-12T17:21:00.268 INFO:teuthology.task.workunit.client.0.out: Actual: -2
2013-04-12T17:21:00.268 INFO:teuthology.task.workunit.client.0.out:Expected: 0
2013-04-12T17:21:00.269 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.MountDouble (2 ms)
2013-04-12T17:21:00.269 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.MountRemount
2013-04-12T17:21:00.270 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:90: Failure
2013-04-12T17:21:00.270 INFO:teuthology.task.workunit.client.0.out:Value of: ceph_mount(cmount, "/")
2013-04-12T17:21:00.270 INFO:teuthology.task.workunit.client.0.out: Actual: -2
2013-04-12T17:21:00.271 INFO:teuthology.task.workunit.client.0.out:Expected: 0
2013-04-12T17:21:00.271 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.MountRemount (3 ms)
2013-04-12T17:21:00.271 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.UnmountUnmounted
2013-04-12T17:21:00.272 INFO:teuthology.task.workunit.client.0.out:[ OK ] LibCephFS.UnmountUnmounted (1 ms)
2013-04-12T17:21:00.274 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.ReleaseUnmounted
2013-04-12T17:21:00.275 INFO:teuthology.task.workunit.client.0.out:[ OK ] LibCephFS.ReleaseUnmounted (1 ms)
2013-04-12T17:21:00.275 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.ReleaseMounted
2013-04-12T17:21:00.277 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:123: Failure
2013-04-12T17:21:00.277 INFO:teuthology.task.workunit.client.0.out:Value of: ceph_mount(cmount, "/")
2013-04-12T17:21:00.277 INFO:teuthology.task.workunit.client.0.out: Actual: -2
2013-04-12T17:21:00.277 INFO:teuthology.task.workunit.client.0.out:Expected: 0
2013-04-12T17:21:00.278 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.ReleaseMounted (3 ms)
2013-04-12T17:21:00.278 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.UnmountRelease
2013-04-12T17:21:00.279 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:134: Failure
2013-04-12T17:21:00.279 INFO:teuthology.task.workunit.client.0.out:Value of: ceph_mount(cmount, "/")
2013-04-12T17:21:00.279 INFO:teuthology.task.workunit.client.0.out: Actual: -2
2013-04-12T17:21:00.279 INFO:teuthology.task.workunit.client.0.out:Expected: 0
2013-04-12T17:21:00.280 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.UnmountRelease (3 ms)
2013-04-12T17:21:00.280 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.Mount
2013-04-12T17:21:00.281 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:143: Failure
2013-04-12T17:21:00.282 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.282 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.282 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.283 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Mount (2 ms)
2013-04-12T17:21:00.283 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.OpenLayout
2013-04-12T17:21:00.284 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:156: Failure
2013-04-12T17:21:00.284 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.286 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.286 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.286 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.OpenLayout (3 ms)
2013-04-12T17:21:00.287 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.DirLs
2013-04-12T17:21:00.287 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:202: Failure
2013-04-12T17:21:00.288 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.288 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, "/")
2013-04-12T17:21:00.288 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.289 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.DirLs (2 ms)
2013-04-12T17:21:00.289 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.ManyNestedDirs
2013-04-12T17:21:00.290 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:360: Failure
2013-04-12T17:21:00.290 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.290 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.290 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.291 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.ManyNestedDirs (3 ms)
2013-04-12T17:21:00.291 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.Xattrs
2013-04-12T17:21:00.292 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:404: Failure
2013-04-12T17:21:00.292 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.292 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.293 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.293 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Xattrs (3 ms)
2013-04-12T17:21:00.293 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.LstatSlashdot
2013-04-12T17:21:00.294 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:463: Failure
2013-04-12T17:21:00.294 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.295 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.295 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.295 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.LstatSlashdot (2 ms)
2013-04-12T17:21:00.295 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.DoubleChmod
2013-04-12T17:21:00.297 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:477: Failure
2013-04-12T17:21:00.297 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.297 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.298 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.298 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.DoubleChmod (3 ms)
2013-04-12T17:21:00.298 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.Fchmod
2013-04-12T17:21:00.299 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:531: Failure
2013-04-12T17:21:00.299 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.300 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.300 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.300 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Fchmod (2 ms)
2013-04-12T17:21:00.300 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.Fchown
2013-04-12T17:21:00.301 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:574: Failure
2013-04-12T17:21:00.302 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.302 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.302 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.302 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Fchown (3 ms)
2013-04-12T17:21:00.302 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.Symlinks
2013-04-12T17:21:00.304 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:599: Failure
2013-04-12T17:21:00.304 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.304 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.304 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.304 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Symlinks (2 ms)
2013-04-12T17:21:00.304 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.DirSyms
2013-04-12T17:21:00.306 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:653: Failure
2013-04-12T17:21:00.307 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.307 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.307 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.307 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.DirSyms (3 ms)
2013-04-12T17:21:00.307 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.LoopSyms
2013-04-12T17:21:00.309 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:684: Failure
2013-04-12T17:21:00.309 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.309 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.309 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.310 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.LoopSyms (2 ms)
2013-04-12T17:21:00.310 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.HardlinkNoOriginal
2013-04-12T17:21:00.312 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:727: Failure
2013-04-12T17:21:00.312 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.312 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.312 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.312 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.HardlinkNoOriginal (3 ms)
2013-04-12T17:21:00.312 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.BadFileDesc
2013-04-12T17:21:00.314 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:763: Failure
2013-04-12T17:21:00.314 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.314 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.315 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.315 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.BadFileDesc (2 ms)
2013-04-12T17:21:00.315 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.ReadEmptyFile
2013-04-12T17:21:00.317 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:791: Failure
2013-04-12T17:21:00.317 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.317 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.318 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.318 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.ReadEmptyFile (3 ms)
2013-04-12T17:21:00.318 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.StripeUnitGran
2013-04-12T17:21:00.319 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:819: Failure
2013-04-12T17:21:00.320 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.320 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.320 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.320 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.StripeUnitGran (3 ms)
2013-04-12T17:21:00.320 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.Rename
2013-04-12T17:21:00.322 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:828: Failure
2013-04-12T17:21:00.322 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.322 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.322 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.323 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Rename (2 ms)
2013-04-12T17:21:00.323 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.UseUnmounted
2013-04-12T17:21:00.324 INFO:teuthology.task.workunit.client.0.out:[ OK ] LibCephFS.UseUnmounted (2 ms)
2013-04-12T17:21:00.324 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.GetPoolId
2013-04-12T17:21:00.327 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:942: Failure
2013-04-12T17:21:00.327 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.327 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.327 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.327 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetPoolId (3 ms)
2013-04-12T17:21:00.327 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.GetPoolReplication
2013-04-12T17:21:00.329 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:955: Failure
2013-04-12T17:21:00.330 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.330 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.330 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.330 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetPoolReplication (2 ms)
2013-04-12T17:21:00.330 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.GetExtentOsds
2013-04-12T17:21:00.334 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:975: Failure
2013-04-12T17:21:00.334 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.335 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.335 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.335 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetExtentOsds (3 ms)
2013-04-12T17:21:00.335 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.GetOsdCrushLocation
2013-04-12T17:21:00.336 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:1025: Failure
2013-04-12T17:21:00.336 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.337 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.337 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.337 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetOsdCrushLocation (2 ms)
2013-04-12T17:21:00.338 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.GetOsdAddr
2013-04-12T17:21:00.338 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/test.cc:1074: Failure
2013-04-12T17:21:00.339 INFO:teuthology.task.workunit.client.0.out:Value of: 0
2013-04-12T17:21:00.339 INFO:teuthology.task.workunit.client.0.out:Expected: ceph_mount(cmount, __null)
2013-04-12T17:21:00.339 INFO:teuthology.task.workunit.client.0.out:Which is: -2
2013-04-12T17:21:00.339 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetOsdAddr (3 ms)
2013-04-12T17:21:00.340 INFO:teuthology.task.workunit.client.0.out:[ RUN ] LibCephFS.ReaddirRCB
2013-04-12T17:21:00.341 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/readdir_r_cb.cc:24: Failure
2013-04-12T17:21:00.341 INFO:teuthology.task.workunit.client.0.out:Value of: ceph_mount(cmount, "/")
2013-04-12T17:21:00.341 INFO:teuthology.task.workunit.client.0.out: Actual: -2
2013-04-12T17:21:00.342 INFO:teuthology.task.workunit.client.0.out:Expected: 0
2013-04-12T17:21:00.342 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.ReaddirRCB (2 ms)
2013-04-12T17:21:00.342 INFO:teuthology.task.workunit.client.0.out:[----------] 32 tests from LibCephFS (80 ms total)
2013-04-12T17:21:00.343 INFO:teuthology.task.workunit.client.0.out:
2013-04-12T17:21:00.343 INFO:teuthology.task.workunit.client.0.out:[----------] 1 test from Caps
2013-04-12T17:21:00.343 INFO:teuthology.task.workunit.client.0.out:[ RUN ] Caps.ReadZero
2013-04-12T17:21:00.346 INFO:teuthology.task.workunit.client.0.out:test/libcephfs/caps.cc:35: Failure
2013-04-12T17:21:00.347 INFO:teuthology.task.workunit.client.0.out:Value of: ceph_mount(cmount, "/")
2013-04-12T17:21:00.347 INFO:teuthology.task.workunit.client.0.out: Actual: -2
2013-04-12T17:21:00.347 INFO:teuthology.task.workunit.client.0.out:Expected: 0
2013-04-12T17:21:00.348 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] Caps.ReadZero (3 ms)
2013-04-12T17:21:00.350 INFO:teuthology.task.workunit.client.0.out:[----------] 1 test from Caps (3 ms total)
2013-04-12T17:21:00.350 INFO:teuthology.task.workunit.client.0.out:
2013-04-12T17:21:00.350 INFO:teuthology.task.workunit.client.0.out:[----------] Global test environment tear-down
2013-04-12T17:21:00.351 INFO:teuthology.task.workunit.client.0.out:[==========] 33 tests from 2 test cases ran. (83 ms total)
2013-04-12T17:21:00.351 INFO:teuthology.task.workunit.client.0.out:[ PASSED ] 4 tests.
2013-04-12T17:21:00.351 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] 29 tests, listed below:
2013-04-12T17:21:00.352 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.OpenEmptyComponent
2013-04-12T17:21:00.352 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.MountDouble
2013-04-12T17:21:00.352 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.MountRemount
2013-04-12T17:21:00.352 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.ReleaseMounted
2013-04-12T17:21:00.353 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.UnmountRelease
2013-04-12T17:21:00.353 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Mount
2013-04-12T17:21:00.353 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.OpenLayout
2013-04-12T17:21:00.354 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.DirLs
2013-04-12T17:21:00.354 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.ManyNestedDirs
2013-04-12T17:21:00.354 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Xattrs
2013-04-12T17:21:00.354 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.LstatSlashdot
2013-04-12T17:21:00.355 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.DoubleChmod
2013-04-12T17:21:00.355 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Fchmod
2013-04-12T17:21:00.355 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Fchown
2013-04-12T17:21:00.355 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Symlinks
2013-04-12T17:21:00.356 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.DirSyms
2013-04-12T17:21:00.356 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.LoopSyms
2013-04-12T17:21:00.356 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.HardlinkNoOriginal
2013-04-12T17:21:00.357 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.BadFileDesc
2013-04-12T17:21:00.357 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.ReadEmptyFile
2013-04-12T17:21:00.357 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.StripeUnitGran
2013-04-12T17:21:00.357 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.Rename
2013-04-12T17:21:00.357 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetPoolId
2013-04-12T17:21:00.357 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetPoolReplication
2013-04-12T17:21:00.358 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetExtentOsds
2013-04-12T17:21:00.358 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetOsdCrushLocation
2013-04-12T17:21:00.358 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.GetOsdAddr
2013-04-12T17:21:00.358 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] LibCephFS.ReaddirRCB
2013-04-12T17:21:00.358 INFO:teuthology.task.workunit.client.0.out:[ FAILED ] Caps.ReadZero
2013-04-12T17:21:00.358 INFO:teuthology.task.workunit.client.0.out:
2013-04-12T17:21:00.358 INFO:teuthology.task.workunit.client.0.out:29 FAILED TESTS
2013-04-12T17:21:00.359 INFO:teuthology.task.workunit:Stopping libcephfs/test.sh on client.0...
2013-04-12T17:21:00.359 DEBUG:teuthology.orchestra.run:Running [10.214.132.22]: 'rm -rf -- /tmp/cephtest/None/workunits.list /tmp/cephtest/None/workunit.client.0'
2013-04-12T17:21:00.376 ERROR:teuthology.run_tasks:Saw exception from tasks
Traceback (most recent call last):
File "/home/tamil/test_teuth_latest/teuthology/teuthology/run_tasks.py", line 25, in run_tasks
manager = _run_one_task(taskname, ctx=ctx, config=config)
File "/home/tamil/test_teuth_latest/teuthology/teuthology/run_tasks.py", line 14, in _run_one_task
return fn(**kwargs)
File "/home/tamil/test_teuth_latest/teuthology/teuthology/task/workunit.py", line 90, in task
all_spec = True
File "/home/tamil/test_teuth_latest/teuthology/teuthology/parallel.py", line 83, in __exit__
for result in self:
File "/home/tamil/test_teuth_latest/teuthology/teuthology/parallel.py", line 100, in next
resurrect_traceback(result)
File "/home/tamil/test_teuth_latest/teuthology/teuthology/parallel.py", line 19, in capture_traceback
return func(*args, **kwargs)
File "/home/tamil/test_teuth_latest/teuthology/teuthology/task/workunit.py", line 302, in _run_tests
args=args,
File "/home/tamil/test_teuth_latest/teuthology/teuthology/orchestra/remote.py", line 42, in run
r = self._runner(client=self.ssh, **kwargs)
File "/home/tamil/test_teuth_latest/teuthology/teuthology/orchestra/run.py", line 266, in run
r.exitstatus = _check_status(r.exitstatus)
File "/home/tamil/test_teuth_latest/teuthology/teuthology/orchestra/run.py", line 262, in _check_status
raise CommandFailedError(command=r.command, exitstatus=status, node=host)
CommandFailedError: Command failed on 10.214.132.22 with status 1: 'mkdir -p -- /tmp/cephtest/None/mnt.0/client.0/tmp && cd -- /tmp/cephtest/None/mnt.0/client.0/tmp && CEPH_REF=master TESTDIR="/tmp/cephtest/None" CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/tmp/cephtest/None/binary/usr/local/lib/python2.7/dist-packages:/tmp/cephtest/None/binary/usr/local/lib/python2.6/dist-packages" /tmp/cephtest/None/enable-coredump ceph-coverage /tmp/cephtest/None/archive/coverage /tmp/cephtest/None/workunit.client.0/libcephfs/test.sh'
</pre>
<p>The logs from the teuthology run are copied to /home/ubuntu/apr12_cdep_libcephfs</p>
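<p>For context, each failure above is a gtest assertion that ceph_mount() returns 0; the observed -2 is -ENOENT, meaning the client-side mount itself failed before the individual test body could run, so the 29 failures share a single root cause. The snippet below is a minimal sketch (not the actual test code; it assumes the standard libcephfs C API and the default ceph.conf search path) of the call pattern the tests rely on:</p>
<pre>
/* Minimal sketch of the libcephfs mount sequence the failing tests use.
 * Assumptions: libcephfs headers installed, default ceph.conf/keyring
 * discoverable; not part of test/libcephfs/test.cc itself. */
#include &lt;stdio.h&gt;
#include &lt;cephfs/libcephfs.h&gt;

int main(void)
{
    struct ceph_mount_info *cmount;
    int r;

    r = ceph_create(&amp;cmount, NULL);      /* NULL = default client id */
    if (r &lt; 0) {
        fprintf(stderr, "ceph_create failed: %d\n", r);
        return 1;
    }

    ceph_conf_read_file(cmount, NULL);   /* NULL = default ceph.conf locations */

    r = ceph_mount(cmount, "/");         /* the call returning -2 (-ENOENT) in the log */
    if (r &lt; 0) {
        fprintf(stderr, "ceph_mount failed: %d\n", r);
        ceph_release(cmount);
        return 1;
    }

    /* ... per-test operations would run here ... */

    ceph_unmount(cmount);
    ceph_release(cmount);
    return 0;
}
</pre>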