Bug #22684


ceph-ansible installs may be broken.

Added by Anonymous over 6 years ago. Updated about 6 years ago.

Status: In Progress
Priority: Normal
Assignee: -
Category: -
% Done: 0%
Spent time:
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

Then again, maybe it's me. Teuthology ansible-install tasks that ran last week are not running today.

The following is the command that I ran:

teuthology -v --suite-path ~/ceph-qa-suite --owner wusui@teuthology /home/wusui/rgw_tests/rgw_test1.yaml 2>&1 | tee /home/wusui/logfile.w

The following is a copy of the YAML file:

branch: luminous
kernel:
  kdb: true
  sha1: distro
meta:
- desc: 4-node cluster
- desc: Build the ceph cluster using ceph-ansible
- desc: without dmcrypt
nuke-on-error: true
openstack:
- volumes:
    count: 3
    size: 10
os_type: ubuntu
os_version: '16.04'
overrides:
  admin_socket:
    branch: luminous
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 25
    log-whitelist:
    - slow request
    sha1: bf5f5ec7cf0e06125515866acedcc04c393f90b9
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        osd default pool size: 2
  ceph_ansible:
    vars:
      ceph_conf_overrides:
        global:
          mon pg warn min per osd: 2
          osd default pool size: 2
      ceph_origin: repository
      ceph_repository: dev
      ceph_stable_release: luminous
      ceph_test: true
      dmcrypt: false
      journal_size: 1024
      osd_auto_discovery: false
      osd_scenario: collocated
      cephfs_pools:
        - name: "cephfs_data" 
          pgs: "64" 
        - name: "cephfs_metadata" 
          pgs: "64" 
      osd pool default pg num: 64
      osd pool default pgp num: 64
      pg per osd: 1024
  install:
    ceph:
      sha1: bf5f5ec7cf0e06125515866acedcc04c393f90b9
  rgw:
    frontend: civetweb
priority: 100
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
  - mgr.w
  - client.0
- - mon.b
  - mgr.x
  - osd.3
  - osd.4
  - osd.5
- - mon.c
  - mgr.y
  - osd.6
  - osd.7
  - osd.8
- - installer.0
sha1: bf5f5ec7cf0e06125515866acedcc04c393f90b9
suite: ceph-ansible
suite_path: /home/teuthworker/src/github.com_ceph_ceph_master/qa
suite_relpath: qa
suite_repo: https://github.com/ceph/ceph.git
suite_sha1: 25e60f042bd380afda62b494e47655a9830965e6
tasks:
- print: "installer.0 is a separate machine" 
- ssh-keys: null
- ceph_ansible: null
- install.ship_utilities: null
- rgw:
    client.0:
- workunit:
    clients:
      client.0:
        - ceph-tests/ceph-admin-commands.sh
teuthology_branch: master
verbose: true
targets:
  vpm019.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXVFBL+3jLXC21Y3CZQrzzSt82ayLwWu5tLzoN6/uUisJwcjLutoadczuwYfAdEDMza3pnk26MV2cr7gM3CMv6uxbk2kmrAZmgJwbYxvyGpDPsGvhlSfxtJvvBimxde+3Irqm9SjCcsiTH+naEBQVvO8brfyR8BGGAF72jeBpWtELqGtmL5NfIISb2cdt7hHbZh1gErFB6ihknnVvJhd6I7Ti1oGBP44z7mQCdDG3jRmFcqDdsr/zErvgsmkP9B9UUl8bXQBzrhn8wwowJ+7H/moH/nasVKNhSV7DUEwdRbZ5F6PKsEcb/9bBGGwIjlmUXOSTFlQ5Sq5a1rRVQtVel
  vpm083.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDcOV2mJJO0QNS7TXxqEuN5SGcDhTNFzXwG4R8aY9sibs2xn2mzutLPam58hfGgX/HAOuOpwhLrTM9Ua8CaoPJOS00cTWTxkpkcOcy6vCd5Eh15Koy3rTWvvPiuWGzXUz2lKmdKUk1ySXhrTDeztL6wad6b/o0nAwu0ECLpO4r0KEo2dfWOo5QPTUDRYgNF59A7A467TfBx6KWHDTtiLuP+IiqW1hoF24wH7GnKHQa9VVjn7BeIS497r8nP6yWzvuS66EEXzw95vm6skaGZic3gZVArzVt1ILCbJjIu5fgi2nFVOWczwVNiujFKfaM+AGdxDuvxQdDaPf2cR5hBc+cP
  vpm091.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMxlB7NLl309UnTbbTHxw05/t/cUsVyaYqkztte8HgkiF5ogRWniQVjMzoKUz4w/i0HS2d8ClbYOL5xANKQPxHNx4DW6mRwkkEkl9sN/F+mIcXw0xQqzpDb0bE6dD9aWDzFx5pSheL0RYvo8kqhyahlabBuD1NXmReZFWV+Fw858pWNqigHbQy2mthgU35rDnDxEqKD1nSQ+aNG/hEf9ujRwbJVBeEtXy39qP687xBtcWIA2Zc/pya5K39ZxJPJdK7/YI8Wvb5wAx/CwWj9CJbIbmvTV1qcx+fRtPfYz/DXFZk6TJk+eLohmIPsKRRnwhZaWM2pNPhY/Rb3NEsaONf
  vpm181.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDr3YdOuwIwSK8n5uAo7jaoC67VK1UTJaXtgIYaWB7Jo2uhBozGh6tiKeAvyl2dr1fcuQZscyrnT0jro3o2vH/VEauss0ZnrBGcXISRQ2gpqK+xAlv9a/jJpYq6/E/jvTLNwHVumzvji/ZBhUsUL3jTs8662hfENCkU4YisAPPFmVRRe55QyK0G0760SIsBXhj1yA9bHHkQMCztE0V6ORdShvSolTHba5pAlg1dTNbjpmo2T6PESfOD/NzI2D0Hoj2WJL9kxbzyhIu9TlukBe1e4Iaiuhbx9loc1yf9/zdl+FcThQKYBZz5w3ORiupWvrmkopEN7j4mkg4Vbe/7RzZl

The command that ends up failing appears to be an apt-get update.
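
For reference, this is roughly how to re-run that step by hand on one of the targets to see the actual apt error (the ubuntu login and the ceph*.list file name are assumptions; ceph-ansible may name the repo file differently):

# log in to one of the target VPMs from the yaml above
ssh ubuntu@vpm019.front.sepia.ceph.com

# look at the apt source ceph-ansible wrote and which repo URL it points at
ls /etc/apt/sources.list.d/
cat /etc/apt/sources.list.d/ceph*.list

# reproduce the failing step and see what apt would install from the dev repo
sudo apt-get update
apt-cache policy ceph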

#1

Updated by Anonymous over 6 years ago

  • Status changed from New to Closed

Okay, I tried this on four reimaged VPMs and did not see the problem. I will close this and reopen it if it comes up again.

#2

Updated by Anonymous about 6 years ago

  • Status changed from Closed to New

Ick. This happened again. I am not sure what's causing it.

See ~wusui/log.rgw on teuthology.front.sepia.ceph.com

The command was:

teuthology -v --suite-path /home/wusui/ceph-qa-suite /home/wusui/rgw_tests/newtest2.yaml 2>&1 | tee /home/wusui/log.rgw
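
A rough way to pull the failing task out of that log, assuming the usual ansible "fatal:"/"failed:" markers show up in the teuthology output:

# show failed ansible tasks with some context
grep -n -B2 -A10 -E 'fatal:|failed:' /home/wusui/log.rgw | less

# or jump straight to the apt step mentioned in the description
grep -n -A10 'apt-get update' /home/wusui/log.rgw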

#3

Updated by Anonymous about 6 years ago

Note that I was running happy as a clam, and then this suddenly popped up for a second time.

#4

Updated by Anonymous about 6 years ago

  • Status changed from New to Rejected

Probable user error. I am closing this again.

#5

Updated by Anonymous about 6 years ago

  • Status changed from Rejected to In Progress
  • Assignee set to Anonymous

Reopened yet again. I have been bitten by this a few times. Since I am currently running on VPMs, as a workaround I just reimage a different set of VPMs and free the set that the problem occurs on. I will hold this for the moment.
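
For the record, the workaround looks roughly like the following; the teuthology-lock invocations are from memory, and the exact options (and the machine names, which are just examples here) may differ:

# give back the VPMs that keep hitting the failure
teuthology-lock --unlock --owner wusui@teuthology vpm019.front.sepia.ceph.com vpm083.front.sepia.ceph.com

# lock and reimage a fresh set of four vps-type machines with the same OS as the job
teuthology-lock --lock-many 4 --machine-type vps --os-type ubuntu --os-version 16.04 --owner wusui@teuthology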

