Bug #22318


ceph_ansible installs fail.

Added by Anonymous over 6 years ago. Updated over 6 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
ansible
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

Many tests fail while trying to install using ceph-ansible:

Command failed on vpm081 with status 2: 'cd ~/ceph-ansible ; virtualenv --system-site-packages venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install setuptools>=11.3 ansible==2.3.2 ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -i inven.yml site.yml'

See: http://pulpito.ceph.com/teuthology-2017-10-17_05:15:02-ceph-ansible-luminous-distro-basic-vps/
I ran into a similar problem while attempting to add some new workunits for rgw. I believe the problem
is that the ceph-ansible code is not being cloned into the right directory. I have a fix for this that I will push for review shortly.
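The failure mode described above can be guarded against in a task: the install command assumes a ceph-ansible checkout at a fixed path, so if the repo lands anywhere else, `ansible-playbook ... site.yml` exits with status 2. A minimal sketch of such a guard follows; the repo URL, branch default, and function names (`find_playbook`, `ensure_checkout`) are illustrative, not the actual fix in wip-22318-wusui:

```python
import os
import subprocess

# Illustrative only: not the real teuthology code.
CEPH_ANSIBLE_REPO = "https://github.com/ceph/ceph-ansible.git"

def find_playbook(workdir):
    """Return the playbook path inside a ceph-ansible checkout.

    Raises RuntimeError if the directory does not look like a
    ceph-ansible checkout -- the symptom of cloning into the
    wrong place.
    """
    for name in ("site.yml", "site.yml.sample"):
        candidate = os.path.join(workdir, name)
        if os.path.isfile(candidate):
            return candidate
    raise RuntimeError(
        "no site.yml in %s; was ceph-ansible cloned elsewhere?" % workdir)

def ensure_checkout(workdir, branch="master"):
    """Clone ceph-ansible into `workdir` if it is not already there,
    then verify the checkout before anything tries to run it."""
    if not os.path.isdir(os.path.join(workdir, ".git")):
        subprocess.check_call(
            ["git", "clone", "--depth", "1", "-b", branch,
             CEPH_ANSIBLE_REPO, workdir])
    return find_playbook(workdir)
```

Failing fast on a missing playbook turns the opaque "status 2" into an error that names the real problem.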

Actions #1

Updated by Anonymous over 6 years ago

I ran the following command under the python virtual environment in ~wusui/teuthology on teuthology.front.sepia.ceph.com:

~/teuthology/virtualenv/bin/teuthology -v --owner wusui@teuthology /home/wusui/rgw_tests/ans1.yaml

The contents of ans1.yaml are:

branch: luminous
interactive-on-error: true
kernel:
  kdb: true
  sha1: distro
meta:
- desc: 3-node cluster
- desc: Build the ceph cluster using ceph-ansible
- desc: without dmcrypt
nuke-on-error: true
openstack:
- volumes:
    count: 3
    size: 10
os_type: ubuntu
os_version: '16.04'
overrides:
  admin_socket:
    branch: luminous
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 25
    log-whitelist:
    - slow request
    sha1: bf5f5ec7cf0e06125515866acedcc04c393f90b9
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        osd default pool size: 2
  ceph_ansible:
    vars:
      ceph_conf_overrides:
        global:
          mon pg warn min per osd: 2
          osd default pool size: 2
      ceph_origin: repository
      ceph_repository: dev
      ceph_stable_release: luminous
      ceph_test: true
      dmcrypt: false
      journal_size: 1024
      osd_auto_discovery: false
      osd_scenario: collocated
      cephfs_pools:
        - name: "cephfs_data" 
          pgs: "64" 
        - name: "cephfs_metadata" 
          pgs: "64" 
      osd pool default pg num: 64
      osd pool default pgp num: 64
      pg per osd: 1024
  install:
    ceph:
      sha1: bf5f5ec7cf0e06125515866acedcc04c393f90b9
priority: 100
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mgr.x
  - osd.3
  - osd.4
  - osd.5
- - mon.c
  - mgr.y
  - osd.6
  - osd.7
  - osd.8
  - client.0
sha1: bf5f5ec7cf0e06125515866acedcc04c393f90b9
suite: ceph-ansible
suite_path: /home/teuthworker/src/github.com_ceph_ceph_master/qa
suite_relpath: qa
suite_repo: https://github.com/ceph/ceph.git
suite_sha1: 25e60f042bd380afda62b494e47655a9830965e6
tasks:
- ssh-keys: null
- ceph_ansible: null
- install.ship_utilities: null
- interactive:
teuthology_branch: wip-22318-wusui
verbose: true
targets:
  vpm083.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcOV2mJJO0QNS7TXxqEuN5SGcDhTNFzXwG4R8aY9sibs2xn2mzutLPam58hfGgX/HAOuOpwhLrTM9Ua8CaoPJOS00cTWTxkpkcOcy6vCd5Eh15Koy3rTWvvPiuWGzXUz2lKmdKUk1ySXhrTDeztL6wad6b/o0nAwu0ECLpO4r0KEo2dfWOo5QPTUDRYgNF59A7A467TfBx6KWHDTtiLuP+IiqW1hoF24wH7GnKHQa9VVjn7BeIS497r8nP6yWzvuS66EEXzw95vm6skaGZic3gZVArzVt1ILCbJjIu5fgi2nFVOWczwVNiujFKfaM+AGdxDuvxQdDaPf2cR5hBc+cP
  vpm091.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMxlB7NLl309UnTbbTHxw05/t/cUsVyaYqkztte8HgkiF5ogRWniQVjMzoKUz4w/i0HS2d8ClbYOL5xANKQPxHNx4DW6mRwkkEkl9sN/F+mIcXw0xQqzpDb0bE6dD9aWDzFx5pSheL0RYvo8kqhyahlabBuD1NXmReZFWV+Fw858pWNqigHbQy2mthgU35rDnDxEqKD1nSQ+aNG/hEf9ujRwbJVBeEtXy39qP687xBtcWIA2Zc/pya5K39ZxJPJdK7/YI8Wvb5wAx/CwWj9CJbIbmvTV1qcx+fRtPfYz/DXFZk6TJk+eLohmIPsKRRnwhZaWM2pNPhY/Rb3NEsaONf
  vpm181.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDr3YdOuwIwSK8n5uAo7jaoC67VK1UTJaXtgIYaWB7Jo2uhBozGh6tiKeAvyl2dr1fcuQZscyrnT0jro3o2vH/VEauss0ZnrBGcXISRQ2gpqK+xAlv9a/jJpYq6/E/jvTLNwHVumzvji/ZBhUsUL3jTs8662hfENCkU4YisAPPFmVRRe55QyK0G0760SIsBXhj1yA9bHHkQMCztE0V6ORdShvSolTHba5pAlg1dTNbjpmo2T6PESfOD/NzI2D0Hoj2WJL9kxbzyhIu9TlukBe1e4Iaiuhbx9loc1yf9/zdl+FcThQKYBZz5w3ORiupWvrmkopEN7j4mkg4Vbe/7RzZl

Before making the changes in wip-22318-wusui, the teuthology run would terminate with the same
errors as those in the overnight runs (using master instead of wip-22318-wusui).

With the change implemented, after the teuthology run hit the interactive breakpoint, I SSHed
to the vpm machines and verified the ceph cluster.
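Rather than eyeballing `ceph -s` on each node, the verification can be scripted. A minimal sketch, assuming the JSON field layout shown below (the exact layout of `ceph -s --format json` varies across Ceph releases, and the sample status is hand-written, not captured from this run):

```python
import json

def cluster_ok(status_json, expected_osds):
    """Check that health is HEALTH_OK and every expected OSD is up and in.

    `status_json` is the output of `ceph -s --format json`; the field
    names assumed here are illustrative and differ between releases.
    """
    status = json.loads(status_json)
    osdmap = status["osdmap"]
    return (status["health"]["status"] == "HEALTH_OK"
            and osdmap["num_osds"] == expected_osds
            and osdmap["num_up_osds"] == expected_osds
            and osdmap["num_in_osds"] == expected_osds)

# Hand-written sample matching the 9-OSD layout in ans1.yaml above.
SAMPLE = json.dumps({
    "health": {"status": "HEALTH_OK"},
    "osdmap": {"num_osds": 9, "num_up_osds": 9, "num_in_osds": 9},
})
```

A check like this could run right after the ceph_ansible task instead of relying on an interactive breakpoint.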

I created pull request https://github.com/ceph/teuthology/pull/1132

