Feature #10091

segregate client configuration from ceph task in teuthology

Added by Tamilarasi muthamizhan over 9 years ago. Updated over 9 years ago.

Status: New
Priority: High
Assignee:
Category: -
% Done: 0%
Source: other
Tags:
Backport:
Reviewed:
Affected Versions:

Description

Segregate client configuration from the ceph task in order to be able to run tests on an already existing cluster.

An example of what currently happens when we try to do that:

2014-11-10 18:29:34,284.284 WARNING:teuthology.report:No job_id found; not reporting results
2014-11-10 18:29:34,292.292 DEBUG:teuthology.run:Config:
  interactive-on-error: true
  roles:
  - - mon.a
    - mds.a
    - osd.0
    - osd.1
    - osd.2
    - client.0
  targets:
    vpm120.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDY7h5WAc/C0L4oyAYd67gtvUbuSLJ2UJfg7ylsEo5uCneiYxLRzVENkRBgbXwb7k52XLx3SCeVZC5gJFUO7xkFIzhvIT3Ad+f2gfrNml/csh2lPQskeSgSw1Jjyl7YYWtEi534m1M+0R2jE8uB7bVr59p1+5tbYA5IeLkslu7U4zZiDEWrL8x91BKJx890sAaW1icQO4gPLC0Bz5MhQQrhPQgVBryHwJzKlHc3wJ0ua9EJ1+IpcoRTffqu8Q/7Za1SLZeoWpLDzVrAVyqxvy43zkYExYlM+7mR1Oz+uscEGRn4Yb36BHhC9BZunG88SwvvGX+nj7wRlGk9dlsYooh7
  tasks:
  - rgw:
    - client.0
  - s3tests:
      client.0:
        rgw_server: client.0
  use_existing_cluster: true
2014-11-10 18:29:34,300.300 INFO:teuthology.run:Tasks not found; will attempt to fetch
2014-11-10 18:29:34,300.300 INFO:teuthology.repo_utils:Fetching from upstream into /home/tamil/src/ceph-qa-suite_master
2014-11-10 18:29:35,777.777 INFO:teuthology.repo_utils:Resetting repo at /home/tamil/src/ceph-qa-suite_master to branch master
2014-11-10 18:29:35,889.889 INFO:teuthology.run_tasks:Running task internal.save_config...
2014-11-10 18:29:35,892.892 INFO:teuthology.task.internal:Saving configuration
2014-11-10 18:29:35,912.912 INFO:teuthology.run_tasks:Running task internal.check_lock...
2014-11-10 18:29:35,916.916 INFO:teuthology.task.internal:Checking locks...
2014-11-10 18:29:36,415.415 DEBUG:teuthology.task.internal:machine status is {u'is_vm': True, u'locked': True, u'locked_since': u'2014-11-12 19:51:33.556075', u'locked_by': u'tamil@tamil-VirtualBox', u'up': True, u'mac_address': u'52:54:00:e2:1b:a9', u'name': u'vpm120.front.sepia.ceph.com', u'os_version': u'7.0', u'machine_type': u'vps', u'vm_host': {u'is_vm': False, u'locked': True, u'locked_since': u'2013-03-14 19:29:52', u'locked_by': u'VPSHOST@VPSHOST', u'up': True, u'mac_address': u'00:25:90:09:e1:c0', u'name': u'mira020.front.sepia.ceph.com', u'os_version': None, u'machine_type': u'mira', u'vm_host': None, u'os_type': u'ubuntu', u'arch': u'x86_64', u'ssh_pub_key': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1hIFcoxSVdHnv++JJjxIASwI+Gq8m+ghCQN+Dy3tkA4B5+opZ8vg3mPp5GX9vFvKm4XMJ7edrORw8zFufciJNd9b5WhzpxRbBjP+yB0cmO9/rGZZmeNtudgQdgUzLuIv8tjA/rgnaGttcpRBnBH3dJt+KV64bLVyfbXn8MZZ2iZlg1mdXGaWWEr5xO2qfPd3IzT/PRIoislFVZmUrvhHohrpCdzvjSqznisxrP0fg/6b/yxGeHP+tEddZsOGkzq4QKGGRm+HJH4NI6yIMIeL95iIfbSztAZxgEafRGE1WJVQfS3ckL6F3jDdxrvHj6VInMMgj8tHW6QNwFpyKvzZx', u'description': u''}, u'os_type': u'rhel', u'arch': u'x86_64', u'ssh_pub_key': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDY7h5WAc/C0L4oyAYd67gtvUbuSLJ2UJfg7ylsEo5uCneiYxLRzVENkRBgbXwb7k52XLx3SCeVZC5gJFUO7xkFIzhvIT3Ad+f2gfrNml/csh2lPQskeSgSw1Jjyl7YYWtEi534m1M+0R2jE8uB7bVr59p1+5tbYA5IeLkslu7U4zZiDEWrL8x91BKJx890sAaW1icQO4gPLC0Bz5MhQQrhPQgVBryHwJzKlHc3wJ0ua9EJ1+IpcoRTffqu8Q/7Za1SLZeoWpLDzVrAVyqxvy43zkYExYlM+7mR1Oz+uscEGRn4Yb36BHhC9BZunG88SwvvGX+nj7wRlGk9dlsYooh7', u'description': None}
2014-11-10 18:29:36,420.420 INFO:teuthology.run_tasks:Running task internal.connect...
2014-11-10 18:29:36,421.421 INFO:teuthology.task.internal:Opening connections...
2014-11-10 18:29:36,421.421 DEBUG:teuthology.task.internal:connecting to ubuntu@vpm120.front.sepia.ceph.com
2014-11-10 18:29:36,440.440 INFO:teuthology.orchestra.connection:{'username': 'ubuntu', 'hostname': 'vpm120.front.sepia.ceph.com', 'timeout': 60}
2014-11-10 18:29:37,572.572 INFO:teuthology.task.internal:roles: ubuntu@vpm120.front.sepia.ceph.com - ['mon.a', 'mds.a', 'osd.0', 'osd.1', 'osd.2', 'client.0']
2014-11-10 18:29:37,572.572 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2014-11-10 18:29:37,575.575 INFO:teuthology.orchestra.run.vpm120:Running: 'uname -m'
2014-11-10 18:29:37,668.668 INFO:teuthology.orchestra.run.vpm120:Running: 'cat /etc/os-release'
2014-11-10 18:29:37,836.836 INFO:teuthology.lock:Updating vpm120.front.sepia.ceph.com on lock server
2014-11-10 18:29:42,984.984 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2014-11-10 18:29:42,999.999 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2014-11-10 18:29:43,000.000 INFO:teuthology.task.internal:Checking for old test directory...
2014-11-10 18:29:43,000.000 INFO:teuthology.orchestra.run.vpm120:Running: "test '!' -e /home/ubuntu/cephtest" 
2014-11-10 18:29:43,188.188 INFO:teuthology.run_tasks:Running task internal.base...
2014-11-10 18:29:43,188.188 INFO:teuthology.task.internal:Creating test directory...
2014-11-10 18:29:43,188.188 INFO:teuthology.orchestra.run.vpm120:Running: 'mkdir -m0755 -- /home/ubuntu/cephtest'
2014-11-10 18:29:43,444.444 INFO:teuthology.run_tasks:Running task internal.archive...
2014-11-10 18:29:43,447.447 INFO:teuthology.task.internal:Creating archive directory...
2014-11-10 18:29:43,448.448 INFO:teuthology.orchestra.run.vpm120:Running: 'install -d -m0755 -- /home/ubuntu/cephtest/archive'
2014-11-10 18:29:43,751.751 INFO:teuthology.run_tasks:Running task internal.coredump...
2014-11-10 18:29:43,752.752 INFO:teuthology.task.internal:Enabling coredump saving...
2014-11-10 18:29:43,752.752 INFO:teuthology.orchestra.run.vpm120:Running: 'install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core'
2014-11-10 18:29:44,028.028 INFO:teuthology.orchestra.run.vpm120.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2014-11-10 18:29:44,028.028 INFO:teuthology.run_tasks:Running task internal.sudo...
2014-11-10 18:29:44,029.029 INFO:teuthology.task.internal:Configuring sudo...
2014-11-10 18:29:44,031.031 INFO:teuthology.orchestra.run.vpm120:Running: "sudo sed -i.orig.teuthology -e 's/^\\([^#]*\\) \\(requiretty\\)/\\1 !\\2/g' -e 's/^\\([^#]*\\) !\\(visiblepw\\)/\\1 \\2/g' /etc/sudoers" 
2014-11-10 18:29:44,197.197 INFO:teuthology.run_tasks:Running task internal.syslog...
2014-11-10 18:29:44,199.199 INFO:teuthology.task.internal:Starting syslog monitoring...
2014-11-10 18:29:44,200.200 INFO:teuthology.orchestra.run.vpm120:Running: 'mkdir -m0755 -- /home/ubuntu/cephtest/archive/syslog'
2014-11-10 18:29:44,354.354 INFO:teuthology.orchestra.run.vpm120:Running: 'sudo python -c \'import shutil, sys; shutil.copyfileobj(sys.stdin, file(sys.argv[1], "wb"))\' /etc/rsyslog.d/80-cephtest.conf'
2014-11-10 18:29:44,573.573 INFO:teuthology.orchestra.run.vpm120:Running: 'sudo service rsyslog restart'
2014-11-10 18:29:44,731.731 INFO:teuthology.orchestra.run.vpm120.stderr:Redirecting to /bin/systemctl restart  rsyslog.service
2014-11-10 18:29:44,744.744 INFO:teuthology.run_tasks:Running task internal.timer...
2014-11-10 18:29:44,745.745 INFO:teuthology.task.internal:Starting timer...
2014-11-10 18:29:44,748.748 INFO:teuthology.run_tasks:Running task rgw...
2014-11-10 18:29:44,776.776 INFO:tasks.rgw:Using apache as radosgw frontend
2014-11-10 18:29:44,780.780 INFO:tasks.rgw:Creating apache directories...
2014-11-10 18:29:44,785.785 INFO:teuthology.orchestra.run.vpm120:Running: 'mkdir -p /home/ubuntu/cephtest/apache/htdocs.client.0 /home/ubuntu/cephtest/apache/tmp.client.0/fastcgi_sock && mkdir /home/ubuntu/cephtest/archive/apache.client.0'
2014-11-10 18:29:44,943.943 INFO:tasks.rgw:Configuring users...
2014-11-10 18:29:44,943.943 INFO:tasks.rgw:creating data pools
2014-11-10 18:29:44,943.943 INFO:teuthology.orchestra.run.vpm120:Running: 'ceph osd pool create .rgw.buckets 64 64'
2014-11-10 18:29:45,324.324 INFO:teuthology.orchestra.run.vpm120.stderr:2014-11-12 17:06:29.295464 7fd3099b1700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2014-11-10 18:29:45,325.325 INFO:teuthology.orchestra.run.vpm120.stderr:2014-11-12 17:06:29.295466 7fd3099b1700  0 librados: client.admin initialization error (2) No such file or directory
2014-11-10 18:29:45,328.328 INFO:teuthology.orchestra.run.vpm120.stderr:Error connecting to cluster: ObjectNotFound
2014-11-10 18:29:45,334.334 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/tamil/test_review/teuthology/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/tamil/src/ceph-qa-suite_master/tasks/rgw.py", line 538, in create_nonregion_pools
    create_replicated_pool(remote, data_pool, 64)
  File "/home/tamil/src/ceph-qa-suite_master/tasks/util/rados.py", line 36, in create_replicated_pool
    'ceph', 'osd', 'pool', 'create', name, str(pgnum), str(pgnum),
  File "/home/tamil/test_review/teuthology/teuthology/orchestra/remote.py", line 128, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/tamil/test_review/teuthology/teuthology/orchestra/run.py", line 365, in run
    r.wait()
  File "/home/tamil/test_review/teuthology/teuthology/orchestra/run.py", line 106, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on vpm120 with status 1: 'ceph osd pool create .rgw.buckets 64 64'
2014-11-10 18:29:45,371.371 INFO:tasks.rgw:Cleaning up apache directories...
2014-11-10 18:29:45,376.376 INFO:teuthology.orchestra.run.vpm120:Running: 'rm -rf /home/ubuntu/cephtest/apache/tmp.client.0 && rmdir /home/ubuntu/cephtest/apache/htdocs.client.0'
2014-11-10 18:29:45,630.630 INFO:teuthology.orchestra.run.vpm120:Running: 'rmdir /home/ubuntu/cephtest/apache'
2014-11-10 18:29:45,842.842 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/tamil/test_review/teuthology/teuthology/run_tasks.py", line 55, in run_tasks
    manager.__enter__()
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/tamil/src/ceph-qa-suite_master/tasks/rgw.py", line 845, in task
    with contextutil.nested(*subtasks):
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/tamil/test_review/teuthology/teuthology/contextutil.py", line 27, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/tamil/src/ceph-qa-suite_master/tasks/rgw.py", line 538, in create_nonregion_pools
    create_replicated_pool(remote, data_pool, 64)
  File "/home/tamil/src/ceph-qa-suite_master/tasks/util/rados.py", line 36, in create_replicated_pool
    'ceph', 'osd', 'pool', 'create', name, str(pgnum), str(pgnum),
  File "/home/tamil/test_review/teuthology/teuthology/orchestra/remote.py", line 128, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/tamil/test_review/teuthology/teuthology/orchestra/run.py", line 365, in run
    r.wait()
  File "/home/tamil/test_review/teuthology/teuthology/orchestra/run.py", line 106, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on vpm120 with status 1: 'ceph osd pool create .rgw.buckets 64 64'

And the config file that was used:

interactive-on-error: true
use_existing_cluster: true

roles:
- [mon.a, mds.a, osd.0, osd.1, osd.2, client.0]

targets:
  vpm120.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDY7h5WAc/C0L4oyAYd67gtvUbuSLJ2UJfg7ylsEo5uCneiYxLRzVENkRBgbXwb7k52XLx3SCeVZC5gJFUO7xkFIzhvIT3Ad+f2gfrNml/csh2lPQskeSgSw1Jjyl7YYWtEi534m1M+0R2jE8uB7bVr59p1+5tbYA5IeLkslu7U4zZiDEWrL8x91BKJx890sAaW1icQO4gPLC0Bz5MhQQrhPQgVBryHwJzKlHc3wJ0ua9EJ1+IpcoRTffqu8Q/7Za1SLZeoWpLDzVrAVyqxvy43zkYExYlM+7mR1Oz+uscEGRn4Yb36BHhC9BZunG88SwvvGX+nj7wRlGk9dlsYooh7

tasks:
- rgw: [client.0]
- s3tests:
    client.0:
      rgw_server: client.0

#1

Updated by Zack Cerza over 9 years ago

  • Status changed from New to Need More Info
  • Assignee set to Tamilarasi muthamizhan

I don't see the ceph task being run at all here.

I see a keyring error, and something about ObjectNotFound. What exactly needs to happen to make this work?

#2

Updated by Tamilarasi muthamizhan over 9 years ago

Ah, I see. What happened here is that the cluster was deployed manually, but the keyrings were never copied to the client (the client configuration part was not done manually), hence the error when we use teuthology to run workloads on a client that has not been configured.

The original idea was to have the client configuration part (creating/copying keyrings) segregated from the ceph task and moved into a separate task (say, config_client), so we can simply run that task before any workloads (a rough sketch follows the example below).

For example:
tasks:
- config_client
- rgw: [client.0]
- s3tests:
    client.0:
      rgw_server: client.0
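
A minimal sketch of what such a config_client task could look like, assuming teuthology's usual task conventions (a task(ctx, config) context manager) and that a mon node of the manually deployed cluster already holds the admin keyring. The capability strings and the use of the sudo_write_file helper are illustrative assumptions here, not a final design:

import contextlib
from cStringIO import StringIO

from teuthology import misc as teuthology


@contextlib.contextmanager
def task(ctx, config):
    """
    Create a keyring for each client role and install it on that
    client's remote, so later tasks (rgw, s3tests, ...) can talk to
    an already existing cluster. Assumes /etc/ceph/ceph.conf on the
    client already points at the existing cluster's monitors.
    """
    clients = config or ['client.0']
    # Run 'ceph auth' on a mon node, which already has the admin
    # keyring of the manually deployed cluster.
    (mon_remote,) = ctx.cluster.only('mon.a').remotes.keys()
    for role in clients:
        keyring = StringIO()
        # The capabilities below are placeholders; a real task would
        # take them from the job config.
        mon_remote.run(
            args=[
                'sudo', 'ceph', 'auth', 'get-or-create', role,
                'mon', 'allow *', 'osd', 'allow *', 'mds', 'allow',
            ],
            stdout=keyring,
        )
        # Push the generated keyring to the client's node.
        (client_remote,) = ctx.cluster.only(role).remotes.keys()
        teuthology.sudo_write_file(
            client_remote,
            '/etc/ceph/ceph.{role}.keyring'.format(role=role),
            keyring.getvalue(),
            perms='0644',
        )
    yield

With something like that running before rgw, the client would already have its keyring in place and the 'missing keyring' failure shown above should not occur.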

#3

Updated by Tamilarasi muthamizhan over 9 years ago

  • Status changed from Need More Info to New
  • Assignee changed from Tamilarasi muthamizhan to Zack Cerza