Bug #6796


ceph mons interpreting pg splits very wrong

Added by Tamilarasi muthamizhan over 10 years ago. Updated over 10 years ago.

Status:
Resolved
Priority:
Urgent
Category:
Monitor
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Emperor
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

logs are copied to mira055.front.sepia.ceph.com:/home/ubuntu/bug_6776_1

To reproduce the bug, run the following teuthology job configuration:

roles:
- [mon.a, mon.b, mds.a, osd.0, osd.1, osd.2]
- [mon.c, osd.3, osd.4, client.0]

targets:
  ubuntu@mira055.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCuGXr9u0bZulWME/jgz9DahtJZoS6GQtyck22cSVLQ62NRidREQwSlJ/1VQTdfhiFehKQVw/U9mVt1ljlF/xbrbqpq+p7cdrZLnePGUau7d12cF4yedp/BV4xWdIT2MBjY1tX2a3G4OJc9fQdep96BtkdKhSGm5TUOcu/Avwhr9rbcSwQM/ZmbKGwIXEZJUHtUY+dH4IZmBoxwieF+bQmfTpk1E/VFg8VaApcdBTA3vJ7FtsB5mu+ktYAuiCpAHCW4eBDkaZ7kQoBg1WguzqYW6N1GGUuVstGazEgNaJ3A7dTt212phZbLZVJRcJnX5F+poMVMo4OgwJszZs/6gsb5
  ubuntu@mira080.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGnNwnG4C8IU3gTmWaNB9YU0gbNeoBAcyD18JRuLJlLMKaZgvD2qvDgjis/4n1Fn0w7yY9YILNAI+fRlifaZRjg0nNyjIB3MpYbK/7oB12sO3R/fpNhA8FU6bt0V/9XQWrdWLe2s1PlTVgucMOVEJnp+3eFSR+3thfR9XXHqjdpOJ7Q1Ra/dLzjk/SP94i0EshvBlPl4kClyEWqKLlkMvZNGzKeJj+J9g8jagvsSJ65fdyi9qcaLVvicuOeL6T4ZGRypPaYfUNLRsPGRjTlqE2IZxC8RBtuyL4sVdFl2hBGT3HhOF4IQuFYTWgWzh9fUDJdwCVI7FH8O3id0QLirE7

tasks:
#- chef:
- install:
    branch: dumpling
- ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20  
      client:
        debug objecter: 20 
- install.upgrade:
    osd.0:
      branch: emperor
- ceph.restart: [osd.0, osd.1, osd.2]
- thrashosds:
    timeout: 1200
    chance_pgnum_grow: 1
    chance_pgpnum_fix: 1
- ceph.restart:
    daemons: [mon.a]
    wait-for-healthy: false
    wait-for-osds-up: true
- rados:
    clients: [client.0]
    ops: 4000
    objects: 50
    op_weights:
      read: 100
      write: 100
      delete: 50
      snap_create: 50
      snap_remove: 50
      rollback: 50
      copy_from: 50
- ceph.restart:
    daemons: [mon.b]
    wait-for-healthy: false
    wait-for-osds-up: true
- ceph.wait_for_mon_quorum: [a, b]
- workunit:
    branch: dumpling
    clients:
      client.0:
        - rados/test.sh
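
Outside of teuthology, the thrashosds step above (chance_pgnum_grow / chance_pgpnum_fix) corresponds to raising pg_num and then pgp_num on a pool. A minimal sketch of triggering a PG split by hand with the standard ceph CLI; the pool name "rbd" and the target value 128 are assumptions for illustration:

```shell
# Check the pool's current placement-group count (pool "rbd" assumed).
ceph osd pool get rbd pg_num

# Split: raise pg_num first. This creates the new PGs.
ceph osd pool set rbd pg_num 128

# Then raise pgp_num to match, so the new PGs are actually
# redistributed across OSDs (this is what chance_pgpnum_fix does).
ceph osd pool set rbd pgp_num 128

# Watch the cluster settle back to HEALTH_OK.
ceph -s
```

This two-step sequence matters for the bug: the monitors track the split when pg_num changes, before pgp_num catches up.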


Related issues 1 (0 open, 1 closed)

Has duplicate: Ceph - Bug #6849: Inconsistent action of ceph osd pool set commands (e.g. volume size) (Duplicate, Dan Mick, 11/22/2013)
