Bug #8803 (closed)

"[ERR] 3.1s0 scrub stat mismatch" in rados-firefly-distro-basic-plana suite

Added by Yuri Weinstein almost 10 years ago. Updated almost 10 years ago.

Status:
Duplicate
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%

Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-07-08_02:30:04-rados-firefly-distro-basic-plana/349535/

description: rados/thrash/{clusters/fixed-2.yaml fs/ext4.yaml msgr-failures/few.yaml
  thrashers/morepggrow.yaml workloads/cache-agent-big.yaml}
duration: 2612.19629406929
failure_reason: '"2014-07-09 18:15:27.716898 osd.4 10.214.131.34:6805/7359 46 : [ERR]
  3.1s0 scrub stat mismatch, got 2005/2006 objects, 0/0 clones, 2005/2006 dirty, 0/0
  omap, 0/0 hit_set_archive, 0/0 whiteouts, 3989047104/3990584848 bytes." in cluster
  log'
flavor: basic
owner: scheduled_teuthology@teuthology
success: false
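
For triage, a minimal sketch (standard Ceph CLI, not commands taken from this job) of how the affected placement group could be inspected and re-scrubbed; the pg id 3.1 comes from the error message above, and the s0 suffix only identifies the erasure-coded shard that reported it:

    ceph pg 3.1 query        # detailed PG state, including last scrub information
    ceph pg deep-scrub 3.1   # schedule another deep scrub of the PG
    ceph pg repair 3.1       # only if the stat mismatch persists across scrubs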
archive_path: /var/lib/teuthworker/archive/teuthology-2014-07-08_02:30:04-rados-firefly-distro-basic-plana/349535
branch: firefly
description: rados/thrash/{clusters/fixed-2.yaml fs/ext4.yaml msgr-failures/few.yaml
  thrashers/morepggrow.yaml workloads/cache-agent-big.yaml}
email: null
job_id: '349535'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: plana
name: teuthology-2014-07-08_02:30:04-rados-firefly-distro-basic-plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: firefly
  ceph:
    conf:
      global:
        ms inject socket failures: 5000
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: ext4
    log-whitelist:
    - slow request
    - must scrub before tier agent can activate
    sha1: dbee797d1344aec47fedc8d3fdd6c9460e0ffa16
  ceph-deploy:
    branch:
      dev: firefly
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: dbee797d1344aec47fedc8d3fdd6c9460e0ffa16
  s3tests:
    branch: firefly
  workunit:
    sha1: dbee797d1344aec47fedc8d3fdd6c9460e0ffa16
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mon.c
  - osd.0
  - osd.1
  - osd.2
  - client.0
- - mon.b
  - mds.a
  - osd.3
  - osd.4
  - osd.5
  - client.1
suite: rados
targets:
  ubuntu@plana06.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDY948mb19kgHh3VRik9j+j6PubvwO2jxr+ZDBkGCyglrimqLFjGCYA0HZW0f93/zRlfGyAMK5DnLTqPVEu5aWi7DGR2AjVCKsAFRqatR/ZrtGtLxqTVs0KT366U9L3OBRV83rEyXNOvNEaP/c9f4neg8mvYJ6pPowpV7o76Eagc6C9QPBR7wz1k3tG/owr5lXnHo9DDc4Q4Nqr/Xjtmq4Y5mYhX3a7vhtJDP4IuE9C6X/uBKu1Diw6kSyzhYkkD18puj8HTRAwVXOkPR9FMy7MxeAPbFKNJctCaeEIIXN75KrcoL+yNmSZ4uvnzGdI0u14jpALN2/PFRCUQ7gPi++x
  ubuntu@plana08.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmrPbNrKBRrF2sFfhWGBXBrxP1c9wi06uF6USYU0ubQXxm+xTpudp52IT+VnIL12Lnkg/552C6VJEQhZbWyH1Y/0Udx6lkzW+jedLgQVjeK8gPVxRl3/xtP0C5b9Spao/iX8EiBZ1ijq+CIb6hAej90nEUrfh8dBrpSwX3d2b5ECpkjypoboF7OOWYOEOtUrxVxXzTC6VysQeVoJh15u3lMa1otYbOvlFvdeFE5fQqfg7yPsQqX48CptUT3H6UZXMfpXY5axu69Wqhpj4wQdAFGWW6oZtGeRYyUyowK+aFJRWQKHXIZGi4KOvH8Bgf87u+BOg/t3bFJGuUUZYokOPJ
tasks:
- internal.lock_machines:
  - 2
  - plana
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install: null
- ceph:
    log-whitelist:
    - wrongly marked me down
    - objects unfound and apparently lost
- thrashosds:
    chance_pgnum_grow: 3
    chance_pgpnum_fix: 1
    timeout: 1200
- exec:
    client.0:
    - ceph osd erasure-code-profile set teuthologyprofile ruleset-failure-domain=osd
      m=1 k=2
    - ceph osd pool create base 4 erasure teuthologyprofile
    - ceph osd pool create cache 4
    - ceph osd tier add base cache
    - ceph osd tier cache-mode cache writeback
    - ceph osd tier set-overlay base cache
    - ceph osd pool set cache hit_set_type bloom
    - ceph osd pool set cache hit_set_count 8
    - ceph osd pool set cache hit_set_period 60
    - ceph osd pool set cache target_max_objects 5000
- rados:
    clients:
    - client.0
    objects: 10000
    op_weights:
      copy_from: 50
      delete: 50
      read: 100
      write: 100
    ops: 4000
    pools:
    - base
    size: 1024
teuthology_branch: master
tube: plana
verbose: false
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.30266
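
For context on the numbers in the failure (2005/2006 objects, 2005/2006 dirty), scrub is comparing the PG's stored stat totals against what it actually finds on disk. On a live cluster, the pool-level aggregates of those stored stats can be read with standard tools; a hedged sketch, assuming the base and cache pools from the job above still exist:

    ceph df detail              # per-pool usage and object counts
    rados df                    # per-pool object and byte totals as seen by librados
    rados -p cache ls | wc -l   # raw object listing of the cache pool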

Related issues 1 (0 open, 1 closed)

Is duplicate of Ceph - Bug #7986: "3.1s0 scrub stat mismatch, got 2041/2044 objects, 0/0 clones, 2041/2044 dirty, 0/0 ..." (Can't reproduce, 04/04/2014)

#1

Updated by Yuri Weinstein almost 10 years ago

  • Status changed from New to Duplicate