Bug #9496


mon: pg scrub timestamps must be populated at pg creation

Added by Tamilarasi muthamizhan over 9 years ago. Updated over 9 years ago.

Status: Resolved
Priority: High
Assignee: Joao Eduardo Luis
Category: Monitor
Target version: -
% Done: 0%
Source: Q/A
Severity: 3 - minor

Description

logs: ubuntu@teuthology:/a/teuthology-2014-09-15_16:05:01-upgrade:firefly-giant-x:parallel-giant-distro-basic-multi/485164


config.yaml

ubuntu@teuthology:/a/teuthology-2014-09-15_16:05:01-upgrade:firefly-giant-x:parallel-giant-distro-basic-multi/485164$ cat orig.config.yaml 
archive_path: /var/lib/teuthworker/archive/teuthology-2014-09-15_16:05:01-upgrade:firefly-giant-x:parallel-giant-distro-basic-multi/485164
branch: giant
description: upgrade:firefly-giant-x:parallel/{0-cluster/start.yaml 1-firefly-install/firefly.yaml
  2-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml}
  3-giant-upgrade/giant.yaml 4-workload/{rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml
  test_rbd_python.yaml} 5-upgrade-sequence/upgrade-by-type.yaml 6-final-workload/{ec-rados-default.yaml
  ec-rados-plugin=jerasure-k=3-m=1.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml
  rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml rgw_swift.yaml}
  distros/ubuntu_12.04.yaml}
email: ceph-qa@ceph.com
job_id: '485164'
kernel:
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: plana,burnupi,mira
name: teuthology-2014-09-15_16:05:01-upgrade:firefly-giant-x:parallel-giant-distro-basic-multi
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: giant
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - scrub mismatch
    - ScrubResult
    sha1: 8c23ef09491dbb9814f028765356e570dfcc6ee8
  ceph-deploy:
    branch:
      dev: giant
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 8c23ef09491dbb9814f028765356e570dfcc6ee8
  s3tests:
    branch: giant
  workunit:
    sha1: 8c23ef09491dbb9814f028765356e570dfcc6ee8
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
  - client.1
suite: upgrade:firefly-giant-x:parallel
suite_branch: wip_9398
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_wip_9398
tasks:
- chef: null
- clock.check: null
- install:
    branch: firefly
- print: '**** done firefly install'
- ceph:
    fs: xfs
- parallel:
  - workload
- print: '**** done parallel'
- install.upgrade:
    client.0:
      branch: giant
    mon.a:
      branch: giant
    mon.b:
      branch: giant
- print: '**** done install.upgrade'
- ceph.restart: null
- print: '**** done restart'
- parallel:
  - workload2
  - upgrade-sequence
- print: '**** done parallel 2'
- install.upgrade:
    client.0: null
- print: '**** done install.upgrade client.0 to the version from teuthology-suite
    arg'
- rados:
    clients:
    - client.0
    ec_pool: true
    objects: 50
    op_weights:
      append: 100
      copy_from: 50
      delete: 50
      read: 100
      rmattr: 25
      rollback: 50
      setattr: 25
      snap_create: 50
      snap_remove: 50
      write: 0
    ops: 4000
- rados:
    clients:
    - client.0
    ec_pool: true
    erasure_code_profile:
      k: 3
      m: 1
      name: jerasure31profile
      plugin: jerasure
      ruleset-failure-domain: osd
      technique: reed_sol_van
    objects: 50
    op_weights:
      append: 100
      copy_from: 50
      delete: 50
      read: 100
      rmattr: 25
      rollback: 50
      setattr: 25
      snap_create: 50
      snap_remove: 50
      write: 0
    ops: 4000
- rados:
    clients:
    - client.1
    objects: 50
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
- workunit:
    clients:
      client.1:
      - rados/load-gen-mix.sh
- sequential:
  - mon_thrash:
      revive_delay: 20
      thrash_delay: 1
  - workunit:
      clients:
        client.1:
        - rados/test.sh
  - print: '**** done rados/test.sh - 6-final-workload'
- workunit:
    clients:
      client.1:
      - cls/test_cls_rbd.sh
- workunit:
    clients:
      client.1:
      - rbd/import_export.sh
    env:
      RBD_CREATE_ARGS: --new-format
- rgw:
  - client.1
- s3tests:
    client.1:
      rgw_server: client.1
- swift:
    client.1:
      rgw_server: client.1
teuthology_branch: master
tube: multi
upgrade-sequence:
  sequential:
  - install.upgrade:
      mon.a: null
  - print: '**** done install.upgrade mon.a to the version from teuthology-suite arg'
  - install.upgrade:
      mon.b: null
  - print: '**** done install.upgrade mon.b to the version from teuthology-suite arg'
  - ceph.restart:
      daemons:
      - mon.a
      - mon.b
      - mon.c
      wait-for-healthy: true
  - sleep:
      duration: 60
  - ceph.restart:
      daemons:
      - osd.0
      - osd.1
      - osd.2
      - osd.3
      wait-for-healthy: true
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - exec:
      mon.a:
      - ceph osd crush tunables firefly
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.multi.4142
workload:
  sequential:
  - workunit:
      branch: firefly
      clients:
        client.0:
        - rados/test.sh
        - cls
  - print: '**** done rados/test.sh &  cls'
  - workunit:
      branch: firefly
      clients:
        client.0:
        - rados/load-gen-big.sh
  - print: '**** done rados/load-gen-big.sh'
  - workunit:
      branch: firefly
      clients:
        client.0:
        - rbd/test_librbd.sh
  - print: '**** done rbd/test_librbd.sh'
  - workunit:
      branch: firefly
      clients:
        client.0:
        - rbd/test_librbd_python.sh
  - print: '**** done rbd/test_librbd_python.sh'
workload2:
  sequential:
  - workunit:
      branch: giant
      clients:
        client.0:
        - rados/test.sh
        - cls
  - print: '**** done #rados/test.sh and cls 2'
  - workunit:
      branch: giant
      clients:
        client.0:
        - rados/load-gen-big.sh
  - print: '**** done rados/load-gen-big.sh 2'
  - workunit:
      branch: giant
      clients:
        client.0:
        - rbd/test_librbd.sh
  - print: '**** done rbd/test_librbd.sh 2'
  - workunit:
      branch: giant
      clients:
        client.0:
        - rbd/test_librbd_python.sh
  - print: '**** done rbd/test_librbd_python.sh 2'

2014-09-15T18:10:02.762 INFO:tasks.ceph:Scrubbing osd osd.1
2014-09-15T18:10:02.762 INFO:teuthology.orchestra.run.mira070:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd deep-scrub osd.1'
2014-09-15T18:10:02.956 INFO:teuthology.orchestra.run.mira070.stderr:osd.1 instructed to deep-scrub
2014-09-15T18:10:02.968 INFO:teuthology.orchestra.run.mira070:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2014-09-15T18:10:03.155 INFO:teuthology.orchestra.run.mira070.stderr:dumped all in format json
2014-09-15T18:10:03.178 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/contextutil.py", line 29, in nested
    yield vars
  File "/var/lib/teuthworker/src/ceph-qa-suite_wip_9398/tasks/ceph.py", line 1277, in task
    osd_scrub_pgs(ctx, config)
  File "/var/lib/teuthworker/src/ceph-qa-suite_wip_9398/tasks/ceph.py", line 891, in osd_scrub_pgs
    pgtm = time.strptime(tmval[0:tmval.find('.')], '%Y-%m-%d %H:%M:%S')
  File "/usr/lib/python2.7/_strptime.py", line 454, in _strptime_time
    return _strptime(data_string, format)[0]
  File "/usr/lib/python2.7/_strptime.py", line 325, in _strptime
    (data_string, format))
ValueError: time data u'0' does not match format '%Y-%m-%d %H:%M:%S'
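
For reference, the failing check in osd_scrub_pgs boils down to the sketch below: the task reads last_scrub_stamp for every pg out of ceph pg dump --format json and feeds it to time.strptime. A pg whose stamps were never populated reports the zero utime_t, which renders as '0.000000'; the slice before the '.' is then '0', matching the u'0' in the ValueError above. The helper name and the pg_stats JSON layout here are assumptions for illustration, not the exact tasks/ceph.py code.

import json
import time

# Sketch (not the actual tasks/ceph.py code) of the parse that fails above,
# assuming `ceph pg dump --format json` returns a top-level 'pg_stats' list.
def newest_scrub_stamp(pg_dump_json):
    newest = None
    for pg in json.loads(pg_dump_json)['pg_stats']:
        tmval = pg['last_scrub_stamp']   # e.g. '2014-09-15 18:09:59.123456'
        # A never-populated stamp prints as '0.000000', so this slice yields
        # '0' and strptime raises the ValueError seen in the traceback.
        pgtm = time.strptime(tmval[0:tmval.find('.')], '%Y-%m-%d %H:%M:%S')
        if newest is None or pgtm > newest:
            newest = pgtm
    return newest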

#1

Updated by Samuel Just over 9 years ago

  • Assignee set to Joao Eduardo Luis
  • Priority changed from Normal to High

Kind of odd; the last scrub timestamp should never be 0.
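
The fix belongs on the monitor side (populate the scrub stamps with the pg's creation time when the pg is created), but the invariant is easy to check from a client. A hypothetical verification snippet, assuming the same pg_stats layout and stamp fields as the dump parsed above:

import json
import subprocess
import time

# Hypothetical check: with the fix in place, a freshly created pg should
# report its creation time here rather than the zero utime_t ('0.000000').
out = subprocess.check_output(['ceph', 'pg', 'dump', '--format', 'json'])
for pg in json.loads(out)['pg_stats']:
    for field in ('last_scrub_stamp', 'last_deep_scrub_stamp'):
        tmval = pg[field]
        assert not tmval.startswith('0.'), \
            '%s: %s never populated (%s)' % (pg['pgid'], field, tmval)
        time.strptime(tmval.split('.')[0], '%Y-%m-%d %H:%M:%S')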

#2

Updated by Joao Eduardo Luis over 9 years ago

  • Subject changed from failure in the nightlies to mon: pg scrub timestamps must be populated at pg creation
#3

Updated by Joao Eduardo Luis over 9 years ago

  • Category set to Monitor
  • Status changed from New to Fix Under Review
#4

Updated by Samuel Just over 9 years ago

  • Status changed from Fix Under Review to Resolved
