Bug #13988: new OSD re-using old OSD id fails to boot

Added by Loïc Dachary over 8 years ago. Updated over 8 years ago.

Status: Resolved
Priority: Urgent
Source: other
Severity: 3 - minor
Regression: No
% Done: 0%

Description

Steps to reproduce

teuthology-openstack --verbose --key-filename ~/Downloads/myself --key-name loic --teuthology-git-url http://github.com/dachary/teuthology --teuthology-branch wip-suite --ceph-qa-suite-git-url http://github.com/dachary/ceph-qa-suite --suite-branch wip-ceph-disk --ceph-git-url http://github.com/dachary/ceph --ceph master --suite ceph-disk --filter ubuntu_14.04

It will sleep forever with two targets provisioned and ready to be used.

  • ssh to the target that runs the monitor
  • git clone http://github.com/ceph/ceph
  • cd ceph/qa/workunits/ceph-disk
  • sudo bash
  • bash ceph-disk.sh
  • Control-C when it starts to run the tests (see the consolidated sketch below)
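The same steps, condensed into one non-interactive sketch. The $TARGET variable, the ubuntu login, and the timeout stand-in for the manual Control-C are assumptions, not taken from the report:

ssh ubuntu@$TARGET '
  git clone http://github.com/ceph/ceph
  cd ceph/qa/workunits/ceph-disk
  # hypothetical stand-in for interrupting ceph-disk.sh by hand
  sudo timeout 600 bash ceph-disk.sh || true
'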

Although the problem shows up when running the tests, it is easier to reproduce directly:

ceph version 10.0.0-855-g15a81bb (15a81bb7121799ba1b71b88b356998ebc8effec9)

[root@target167114226249 ceph-disk]# uuid=$(uuidgen) ; ceph-disk prepare --osd-uuid $uuid /dev/vdd
[root@target167114226249 ceph-disk]# id=$(ceph osd create $uuid)
[root@target167114226249 ceph-disk]# echo $id
4
[root@target167114226249 ceph-disk]# ceph osd tree
ID WEIGHT  TYPE NAME                   UP/DOWN REWEIGHT PRIMARY-AFFINITY
...
 4 0.00969         osd.4                    up  1.00000          1.00000
[root@target167114226249 ceph-disk]# ceph-disk deactivate --deactivate-by-id $id ; ceph-disk destroy --zap --destroy-by-id $id
[root@target167114226249 ceph-disk]# ceph osd tree
ID WEIGHT  TYPE NAME                   UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.01938 root default
-3       0     rack localrack
-2       0         host localhost
-4 0.01938     host target167114226249
 2 0.00969         osd.2                  down  1.00000          1.00000
 3 0.00969         osd.3                  down  1.00000          1.00000
[root@target167114226249 ceph-disk]# ceph-disk list /dev/vdd
/dev/vdd other, unknown
[root@target167114226249 ceph-disk]# ceph-disk prepare --osd-uuid $uuid /dev/vdd
[root@target167114226249 ceph-disk]# sleep 300 ; ceph osd tree
...
 4       0 osd.4                          down  1.00000          1.00000
[root@target167114226249 ceph-disk]# 
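The interactive session above can be scripted end to end. A minimal sketch, assuming a spare disk in $DISK (hypothetical variable; /dev/vdd in the transcript) and client.admin access on the node:

#!/bin/bash
# Reproduce the id re-use failure: a fresh OSD prepared with the uuid of a
# destroyed OSD gets the old id back but never boots.
set -ex
DISK=${DISK:-/dev/vdd}                     # hypothetical spare device; it will be zapped

uuid=$(uuidgen)
ceph-disk prepare --osd-uuid $uuid $DISK
sleep 60                                   # let the first incarnation activate and boot
id=$(ceph osd create $uuid)                # same uuid, so this returns the existing id
ceph osd tree | grep "osd\.$id"            # first incarnation is up

ceph-disk deactivate --deactivate-by-id $id
ceph-disk destroy --zap --destroy-by-id $id

ceph-disk prepare --osd-uuid $uuid $DISK   # same uuid, so the new OSD re-uses $id
sleep 300
ceph osd tree | grep "osd\.$id"            # second incarnation stays down

The last grep showing osd.$id still down after five minutes is the failure this report describes; in a healthy run the re-created OSD would come back up.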

Files

ceph-mon.a.log.gz (249 KB) - Loïc Dachary, 12/09/2015 04:03 PM
ceph-osd.2.log.gz (87.2 KB) - Loïc Dachary, 12/09/2015 04:04 PM
osdmap.15.plain (1.11 KB) - before removal - Loïc Dachary, 12/09/2015 05:15 PM
osdmap.16.plain (904 Bytes) - after removal - Loïc Dachary, 12/09/2015 05:15 PM
osdmap.17.plain (1.12 KB) - after adding the osd.2 again - Loïc Dachary, 12/09/2015 05:16 PM
l.out (175 KB) - git bisect log output - Loïc Dachary, 12/11/2015 08:10 AM

Related issues (4 total: 0 open, 4 closed)

Related to Ceph - Bug #13989: OSD boot fails with os/FileJournal.cc: 1907: FAILED assert(0) (Duplicate, 12/05/2015)
Related to Ceph - Bug #19119: pre-jewel "osd rm" incrementals are misinterpreted (Resolved, Ilya Dryomov, 03/01/2017)
Blocks Ceph - Bug #14080: ceph-disk: use blkid instead of sgdisk -i (Resolved, Loïc Dachary, 12/14/2015)
Blocks Ceph - Bug #13970: ceph-disk list fails on /dev/cciss!c0d0 (Resolved, Loïc Dachary, 12/03/2015)
