
Bug #9364

"Assertion: osd/Watch.cc: 290: FAILED assert(!cb)" in upgrade:dumpling-dumpling-distro-basic-vps suite

Added by Yuri Weinstein over 9 years ago. Updated over 9 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Logs are at http://qa-proxy.ceph.com/teuthology/teuthology-2014-09-04_15:40:01-upgrade:dumpling-dumpling-distro-basic-vps/468789/

Assertion: osd/Watch.cc: 290: FAILED assert(!cb)
ceph version 0.67.10-8-g5315cf0 (5315cf0a47e0a21e514df0d85be170dbca7ffc92)
 1: (Watch::get_delayed_cb()+0xc3) [0x6d3dc3]
 2: (ReplicatedPG::handle_watch_timeout(std::tr1::shared_ptr<Watch>)+0x9e9) [0x5e5029]
 3: (ReplicatedPG::check_blacklisted_obc_watchers(ObjectContext*)+0x3ba) [0x5e569a]
 4: (ReplicatedPG::populate_obc_watchers(ObjectContext*)+0x60b) [0x5e5f7b]
 5: (ReplicatedPG::get_object_context(hobject_t const&, bool)+0x1b1) [0x5e6911]
 6: (ReplicatedPG::prep_object_replica_pushes(hobject_t const&, eversion_t, int, std::map<int, std::vector<PushOp, std::allocator<PushOp> >, std::less<int>, std::allocator<std::pair<int const, std::vector<PushOp, std::allocator<PushOp> > > > >*)+0x10e) [0x5f6b4e]
 7: (ReplicatedPG::recover_replicas(int, ThreadPool::TPHandle&)+0x657) [0x5f80a7]
 8: (ReplicatedPG::start_recovery_ops(int, PG::RecoveryCtx*, ThreadPool::TPHandle&)+0x736) [0x616d46]
 9: (OSD::do_recovery(PG*, ThreadPool::TPHandle&)+0x1b8) [0x680068]
 10: (OSD::RecoveryWQ::_process(PG*, ThreadPool::TPHandle&)+0x11) [0x6c0091]
 11: (ThreadPool::worker(ThreadPool::WorkThread*)+0x4e6) [0x8b6f76]
 12: (ThreadPool::WorkThread::entry()+0x10) [0x8b8f90]
 13: (()+0x7e9a) [0x7fb5b7b49e9a]
 14: (clone()+0x6d) [0x7fb5b5e3931d]
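
For context, Watch::get_delayed_cb() hands out the watch's delayed-timeout callback and asserts that none is already registered; the backtrace shows it being reached from handle_watch_timeout() while populate_obc_watchers() rebuilds watch state during recovery, consistent with the callback race tracked in #8315. A minimal sketch of the invariant the assert enforces (simplified, hypothetical names; not the actual Ceph source):

#include <cassert>
#include <cstdio>

// Illustrative sketch only: models the invariant behind the failed
// assert. A Watch may hold at most one outstanding delayed-timeout
// callback at a time.
struct Context {                       // stand-in for Ceph's Context type
  virtual ~Context() {}
  virtual void finish(int r) = 0;
};

struct TimeoutCb : Context {
  void finish(int r) { std::printf("watch timeout fired (%d)\n", r); }
};

class Watch {
  Context* cb;                         // at most one registered callback
public:
  Watch() : cb(0) {}
  // Modeled on Watch::get_delayed_cb(): register and return the timeout
  // callback. If one is already pending, the precondition is violated
  // and assert(!cb) aborts -- the crash reported in this ticket.
  Context* get_delayed_cb() {
    assert(!cb);                       // FAILED assert(!cb) fires here
    cb = new TimeoutCb;
    return cb;
  }
  void clear_delayed_cb() { delete cb; cb = 0; }
};

int main() {
  Watch w;
  Context* c = w.get_delayed_cb();     // first registration succeeds
  c->finish(0);
  w.clear_delayed_cb();
  // Requesting a second callback while one is still registered would
  // trip the assert, as handle_watch_timeout() did during recovery here.
  return 0;
}
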
archive_path: /var/lib/teuthworker/archive/teuthology-2014-09-04_15:40:01-upgrade:dumpling-dumpling-distro-basic-vps/468789
branch: dumpling
description: upgrade:dumpling/rbd/{0-cluster/start.yaml 1-dumpling-install/cuttlefish.v0.67.1.yaml
  2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mds-mon-osd.yaml 4-final/osdthrash.yaml}
email: ceph-qa@ceph.com
job_id: '468789'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: vps
name: teuthology-2014-09-04_15:40:01-upgrade:dumpling-dumpling-distro-basic-vps
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: dumpling
  ceph:
    conf:
      global:
        osd heartbeat grace: 100
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: xfs
    log-whitelist:
    - slow request
    - scrub
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: 5315cf0a47e0a21e514df0d85be170dbca7ffc92
  ceph-deploy:
    branch:
      dev: dumpling
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: 5315cf0a47e0a21e514df0d85be170dbca7ffc92
  rgw:
    default_idle_timeout: 1200
  s3tests:
    branch: dumpling
    idle_timeout: 1200
  workunit:
    sha1: 5315cf0a47e0a21e514df0d85be170dbca7ffc92
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mon.c
  - osd.3
  - osd.4
  - osd.5
  - client.0
suite: upgrade:dumpling
suite_branch: dumpling
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_dumpling
targets:
  ubuntu@vpm035.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDO8K681P5zNZ81+S45FHD3l9XJoiBOTaJWBlJ5QJDe8TxJtMnpoYmGnXyBU5k8Xh5McpmjLdUa5AZpW1FxSqKkjIiCvqmZQbvRxGJnR1vmLqBOJDLHYQYU6p3dHHzjoX+MSGqqMzpGxMrSjbBSLPTH1jVH7jKmNn3SPZJ+K4kcF8aDi0TZrDHkctCFgMKFa+6ykEdkWmUl2/lAWCRCZd86XzO6geIU1OnaTsJ1rcGQ1jt20/I2UaS7NHetD3ecKf8q/Zs/pOLTBWbDDLl3v1WgsW6RgpihCV3KqPTKUXFyUn6RiHdIlaIK+Fddg6pIPWFUyCqKr5sP6BI43uSnyq0J
  ubuntu@vpm199.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCgVNVHFtAoppyRKVe+uSpw000Gb839baKCjAd2q8ChNzvvEuEhYEhnIpz6BYyYAkiuNUdR3I4m+yjO752eaVPzQeJuMbkAp0Eg1E0iakujN5P1DW841Cxc7BzzrFV6jOKsv+P4Y+tnjNV9FQXY7Q8Arpl4l8y7iWw/WVp+T93+/KrBRKc0qnNermZLKhy0wcrDjQlWa2iBDZmlXC3eO5B0Op5/sZvjQkLEhq7p2pqeUZZ6vcy5A3riwAZ8dOTMwrvvQVhbzB3JH13SeFhW3giIfNE+vGD1iTX1RNYbAwm1zmBJS6gORUuVIanHj4DaZml7Ll7kQ84a72XUSttFw57j
tasks:
- internal.lock_machines:
  - 2
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: cuttlefish
- ceph: null
- install.upgrade:
    all:
      tag: v0.67.1
- ceph.restart: null
- install.upgrade:
    all:
      branch: dumpling
- parallel:
  - workload
  - upgrade-sequence
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    thrash_primary_affinity: false
    timeout: 1200
- workunit:
    clients:
      client.0:
      - rbd/test_lock_fence.sh
teuthology_branch: master
tube: vps
upgrade-sequence:
  sequential:
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.15658
workload:
  sequential:
  - workunit:
      clients:
        client.0:
        - rbd/import_export.sh
      env:
        RBD_CREATE_ARGS: --new-format
  - workunit:
      clients:
        client.0:
        - cls/test_cls_rbd.sh
description: upgrade:dumpling/rbd/{0-cluster/start.yaml 1-dumpling-install/cuttlefish.v0.67.1.yaml
  2-workload/rbd.yaml 3-upgrade-sequence/upgrade-mds-mon-osd.yaml 4-final/osdthrash.yaml}
duration: 1481.8956589698792
failure_reason: 'Command failed on vpm199 with status 16: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
  && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
  CEPH_REF=5315cf0a47e0a21e514df0d85be170dbca7ffc92 TESTDIR="/home/ubuntu/cephtest" 
  CEPH_ID="0" PYTHONPATH="$PYTHONPATH:/home/ubuntu/cephtest/binary/usr/local/lib/python2.7/dist-packages:/home/ubuntu/cephtest/binary/usr/local/lib/python2.6/dist-packages" 
  adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /home/ubuntu/cephtest/workunit.client.0/rbd/test_lock_fence.sh'''
flavor: basic
owner: scheduled_teuthology@teuthology
success: false

Related issues

Duplicates: Ceph - Bug #8315: osd: watch callback vs callback funky (Resolved, 05/08/2014)

History

#1 Updated by Samuel Just over 9 years ago

  • Status changed from New to Duplicate
