Bug #9205 (closed): osd: notify ops reordered

Added by Yuri Weinstein over 9 years ago. Updated over 9 years ago.

Status: Resolved
Priority: Urgent
Assignee:
Category: OSD
Target version: -
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2014-08-21_11:40:02-upgrade:dumpling-x:stress-split-master-distro-basic-vps/439539/

Coredump in /a/teuthology-2014-08-21_11:40:02-upgrade:dumpling-x:stress-split-master-distro-basic-vps/439539/*/ceph-osd.4.log.gz
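
For anyone triaging from the archive, the backtrace below can be pulled straight out of that log. A minimal sketch, assuming the log sits at remote/vpm126/log/ceph-osd.4.log.gz under the run URL above (the path is inferred from the prefixes in the excerpt, not confirmed):

import gzip
import urllib.request

RUN = ("http://qa-proxy.ceph.com/teuthology/"
       "teuthology-2014-08-21_11:40:02-upgrade:dumpling-x:stress-split-master-distro-basic-vps/"
       "439539")
LOG = RUN + "/remote/vpm126/log/ceph-osd.4.log.gz"   # assumed path under the run archive

with urllib.request.urlopen(LOG) as resp:
    text = gzip.decompress(resp.read()).decode("utf-8", errors="replace")

lines = text.splitlines()
for i, line in enumerate(lines):
    if "Caught signal (Aborted)" in line:
        # print the signal line plus the backtrace that follows it
        print("\n".join(lines[i:i + 25]))
        break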

remote/vpm126/log/ceph-osd.4.log.gz:2014-08-21 21:47:57.275862 7fa973a7a700 -1 *** Caught signal (Aborted) **
remote/vpm126/log/ceph-osd.4.log.gz: in thread 7fa973a7a700
remote/vpm126/log/ceph-osd.4.log.gz:
remote/vpm126/log/ceph-osd.4.log.gz: ceph version 0.84-372-gb0aa846 (b0aa846b3f81225a779de00100e15334fb8156b3)
remote/vpm126/log/ceph-osd.4.log.gz: 1: ceph-osd() [0x9a8a0a]
remote/vpm126/log/ceph-osd.4.log.gz: 2: (()+0xfcb0) [0x7fa9916f6cb0]
remote/vpm126/log/ceph-osd.4.log.gz: 3: (gsignal()+0x35) [0x7fa98ffe14f5]
remote/vpm126/log/ceph-osd.4.log.gz: 4: (abort()+0x17b) [0x7fa98ffe4c5b]
remote/vpm126/log/ceph-osd.4.log.gz: 5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7fa99093469d]
remote/vpm126/log/ceph-osd.4.log.gz: 6: (()+0xb5846) [0x7fa990932846]
remote/vpm126/log/ceph-osd.4.log.gz: 7: (()+0xb5873) [0x7fa990932873]
remote/vpm126/log/ceph-osd.4.log.gz: 8: (()+0xb596e) [0x7fa99093296e]
remote/vpm126/log/ceph-osd.4.log.gz: 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1df) [0xa8cf7f]
remote/vpm126/log/ceph-osd.4.log.gz: 10: (ReplicatedPG::execute_ctx(ReplicatedPG::OpContext*)+0x1ab8) [0x81cf48]
remote/vpm126/log/ceph-osd.4.log.gz: 11: (ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>&)+0x2a5c) [0x826bcc]
remote/vpm126/log/ceph-osd.4.log.gz: 12: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x63f) [0x7c21ef]
remote/vpm126/log/ceph-osd.4.log.gz: 13: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x1a2) [0x64ac22]
remote/vpm126/log/ceph-osd.4.log.gz: 14: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x6c1) [0x64b6e1]
remote/vpm126/log/ceph-osd.4.log.gz: 15: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x6fc) [0xa7de9c]
remote/vpm126/log/ceph-osd.4.log.gz: 16: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xa7f780]
remote/vpm126/log/ceph-osd.4.log.gz: 17: (()+0x7e9a) [0x7fa9916eee9a]

Traceback:

2014-08-21T14:54:53.999 ERROR:teuthology.misc:Saw exception from osd.4
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/misc.py", line 1044, in stop_daemons_of_type
    daemon.stop()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/daemon.py", line 45, in stop
    run.wait([self.proc], timeout=timeout)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 424, in wait
    proc.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 102, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on vpm126 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 4'
2014-08-21T14:54:54.025 INFO:teuthology.misc:Shutting down mon daemons...
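
Note that the CommandFailedError here is just the harness surfacing the earlier abort: daemon-helper exits with the daemon's status, so once osd.4 dies on the assert, the later stop() call fails with status 1. A simplified, hypothetical sketch of that wait-and-raise pattern (not teuthology's actual code):

import subprocess

class CommandFailedError(Exception):
    """Simplified stand-in for the error raised in the traceback above."""
    def __init__(self, command, exitstatus, node):
        super().__init__(
            "Command failed on %s with status %d: %r" % (node, exitstatus, command))

def stop_daemon(command, node):
    # daemon-helper runs ceph-osd in the foreground and exits with its status,
    # so an osd that already aborted makes this wait() return non-zero.
    proc = subprocess.Popen(command, shell=True)
    status = proc.wait()
    if status != 0:
        raise CommandFailedError(command, status, node)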

archive_path: /var/lib/teuthworker/archive/teuthology-2014-08-21_11:40:02-upgrade:dumpling-x:stress-split-master-distro-basic-vps/439539
branch: master
description: upgrade:dumpling-x:stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/rbd-cls.yaml
  6-next-mon/monb.yaml 7-workload/radosbench.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml
  rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/ubuntu_12.04.yaml}
email: ceph-qa@ceph.com
job_id: '439539'
kernel: &id001
  kdb: true
  sha1: distro
last_in_suite: false
machine_type: vps
name: teuthology-2014-08-21_11:40:02-upgrade:dumpling-x:stress-split-master-distro-basic-vps
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: master
  ceph:
    conf:
      global:
        osd heartbeat grace: 100
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on legacy crush tunables: false
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    log-whitelist:
    - slow request
    - wrongly marked me down
    - objects unfound and apparently lost
    - log bound mismatch
    sha1: b0aa846b3f81225a779de00100e15334fb8156b3
  ceph-deploy:
    branch:
      dev: master
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: b0aa846b3f81225a779de00100e15334fb8156b3
  rgw:
    default_idle_timeout: 1200
  s3tests:
    branch: master
  workunit:
    sha1: b0aa846b3f81225a779de00100e15334fb8156b3
owner: scheduled_teuthology@teuthology
priority: 1000
roles:
- - mon.a
  - mon.b
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - osd.3
  - osd.4
  - osd.5
  - mon.c
- - client.0
suite: upgrade:dumpling-x:stress-split
suite_branch: master
suite_path: /var/lib/teuthworker/src/ceph-qa-suite_master
targets:
  ubuntu@vpm126.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCfbQRWnvxmwbgai9ELh6/tNjbWMPp4Ckjg9F6/F9YG1yYBM7Hpyy6MFgV4aUFXsvqvwqpa6Mz6P49WLHQ+AdfuCFoQ2wNmvPUWhVuddmoVwDasoseiS1mH9IBlTinxOCDxSQlczezr0SltyRNJYIbabyCmupb+oAftgpxRMA0Lbmmhan1kZSEHKAK4E0OS3hgvDwtw9kylMXUz0Km72pAvq0jfY3vqUbI7FLDpZN/p7BK3eUwx1H1gFaMpzTtWNZ+keRr41nz042oxYEijhpG/zPq2+zfcd7qedHa2V1sIZRc5L12fuoywkPSc4XbzL5mCrXcxxtl4mHnOU2o8wWz/
  ubuntu@vpm127.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC35kg1Z1yLwYoic/dcB7cYTTrD0bCRd7Wl7zxMtg24xPsGHOqn1iuCBYi0LnXSlDM3OxP4ziPVEuMj5jvUXQdINA6VQJDmrTNg+JA0lgiVDVwk7MLGK4dSpzxLeMSAMwWuKmvjCLhlKElmGfyHY7wBt1X61StiUXT+YzlQMzFzJafftO367ZrLZgZT1pzD/DbT/tu2rpitKDcOc6dEPYp3A8O3AvjFe6ZIuYDy5ppRTO1SSlLK/qbybvPq0lwxmFqWksHJhhDAoyISpPIJJvdS8Y2DPuc0sxXjvcepN0vTsFGxtJcAAq+VY4LfkqOjuadZy6OOUL62vNKU77PSnOhj
  ubuntu@vpm128.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/KLph+KmwNuX9PIKpea4V31ca2CCp+ymms56ZivTLTz+gwxoqoDauCEEOxrttz/EPZiwgLa+EICBuUfQpJ5rAdYcYN4zQ6SebSIjMTa0I0vNrqlYuIK822rMYVJDK1zsB/b3qq9KcxhydcBGi4UZuCHZhtKL6ggHmUW5FCvY8pP2LvuRjcB3xFoFiV0EiBGVTddAT6rgiiWPXVzuJW8FJpZCq2YFiD0U4vctcNeQ5LizWl2Dj3ViGGSvajJ4D4K4IB1v3xUiA9/3UhggkFeFBm2RgkZjMwAboStSzginBcniLOt8MSpbMcBb+qA2d8vF30+NnXK5JIExqf5kbJm3X
tasks:
- internal.lock_machines:
  - 3
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.serialize_remote_roles: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.sudo: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- install.upgrade:
    osd.0: null
- ceph.restart:
    daemons:
    - osd.0
    - osd.1
    - osd.2
- thrashosds:
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
    thrash_primary_affinity: false
    timeout: 1200
- ceph.restart:
    daemons:
    - mon.a
    wait-for-healthy: false
    wait-for-osds-up: true
- workunit:
    branch: dumpling
    clients:
      client.0:
      - cls/test_cls_rbd.sh
- ceph.restart:
    daemons:
    - mon.b
    wait-for-healthy: false
    wait-for-osds-up: true
- radosbench:
    clients:
    - client.0
    time: 1800
- install.upgrade:
    mon.c: null
- ceph.restart:
    daemons:
    - mon.c
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum:
  - a
  - b
  - c
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rados/test-upgrade-firefly.sh
- workunit:
    branch: dumpling
    clients:
      client.0:
      - rbd/test_librbd_python.sh
- rgw:
    client.0: null
    default_idle_timeout: 300
- swift:
    client.0:
      rgw_server: client.0
- rados:
    clients:
    - client.0
    objects: 500
    op_weights:
      delete: 50
      read: 100
      rollback: 50
      snap_create: 50
      snap_remove: 50
      write: 100
    ops: 4000
teuthology_branch: master
tube: vps
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.4613
description: upgrade:dumpling-x:stress-split/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/rbd-cls.yaml
  6-next-mon/monb.yaml 7-workload/radosbench.yaml 8-next-mon/monc.yaml 9-workload/{rados_api_tests.yaml
  rbd-python.yaml rgw-s3tests.yaml snaps-many-objects.yaml} distros/ubuntu_12.04.yaml}
duration: 13344.698642015457
failure_reason: 'Command failed on vpm126 with status 1: ''sudo adjust-ulimits ceph-coverage
  /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f -i 4'''
flavor: basic
owner: scheduled_teuthology@teuthology
success: false
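
For reference, the snaps-many-objects workload at the end of the job (the rados task with op_weights in the config above) amounts to 4000 ops drawn over 500 objects with those relative weights. A rough illustration of how such weights are conventionally interpreted as a weighted random draw per op (an assumption for illustration, not the ceph_test_rados implementation):

import random
from collections import Counter

# Weights copied from the rados task in the job description above.
op_weights = {
    "delete": 50,
    "read": 100,
    "rollback": 50,
    "snap_create": 50,
    "snap_remove": 50,
    "write": 100,
}
ops, weights = zip(*op_weights.items())

def draw_ops(n=4000, objects=500, seed=0):
    rng = random.Random(seed)
    for _ in range(n):
        op = rng.choices(ops, weights=weights, k=1)[0]
        obj = "obj%d" % rng.randrange(objects)   # one of the 500 test objects
        yield op, obj

print(Counter(op for op, _ in draw_ops()))       # rough op mix over a full run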