Bug #9765 (closed): CachePool flush -> OSD Failed

Added by Irek Fasikhov over 9 years ago. Updated over 9 years ago.

Status: Duplicate
Priority: Urgent
Assignee: -
Category: OSD
Target version: -
% Done: 0%
Source: Community (user)
Tags:
Backport:
Regression:
Severity: 1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi, all.

I encountered a problem while flushing data before deleting the cache pool.
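
As far as I understand the cache tiering documentation, the full removal sequence is the one sketched below; steps 6 and 7 in my actions further down correspond to the first two commands, and the OSD died before I could run the last two (rbd is the backing pool, cache is the cache pool):

# switch the cache tier to forward mode so new writes go to the backing pool
ceph osd tier cache-mode cache forward
# flush and evict everything still held in the cache pool
rados -p cache cache-flush-evict-all
# once the cache pool is empty, detach it from the backing pool
ceph osd tier remove-overlay rbd
ceph osd tier remove rbd cache
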
My CRUSH map:

[root@ct01 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-8      3       root cache.bank-hlynov.ru
-5      1               host cache1
13      1                       osd.13  up      1
-6      1               host cache2
31      1                       osd.31  up      1
-7      1               host cache3
30      1                       osd.30  up      1
-1      109.2   root bank-hlynov.ru
-2      36.4            host ct01
2       3.64                    osd.2   up      1
4       3.64                    osd.4   up      1
5       3.64                    osd.5   up      1
6       3.64                    osd.6   up      1
7       3.64                    osd.7   up      1
8       3.64                    osd.8   up      1
9       3.64                    osd.9   up      1
10      3.64                    osd.10  up      1
11      3.64                    osd.11  up      1
0       3.64                    osd.0   up      1
-3      36.4            host ct2
12      3.64                    osd.12  up      1
14      3.64                    osd.14  up      1
15      3.64                    osd.15  up      1
16      3.64                    osd.16  up      1
17      3.64                    osd.17  up      1
18      3.64                    osd.18  up      1
19      3.64                    osd.19  up      1
20      3.64                    osd.20  up      1
21      3.64                    osd.21  up      1
32      3.64                    osd.32  up      1
-4      36.4            host ct3
1       3.64                    osd.1   up      1
3       3.64                    osd.3   up      1
22      3.64                    osd.22  up      1
23      3.64                    osd.23  up      1
24      3.64                    osd.24  up      1
25      3.64                    osd.25  up      1
26      3.64                    osd.26  up      1
27      3.64                    osd.27  up      1
28      3.64                    osd.28  up      1
29      3.64                    osd.29  up      1

My actions:
1. ceph osd tier add rbd cache
2. ceph osd tier cache-mode cache writeback
3. ceph osd tier set-overlay rbd cache
4. Set hit_set_count, hit_set_period, etc. (example settings are sketched after the second tree output below).
5. Write data from a virtual machine:
rados -p cache ls | wc -l
28685
6. ceph osd tier cache-mode cache forward

7. rados -p cache cache-flush-evict-all
rbd_data.53b752ae8944a.000000000000bebe
rbd_data.53b752ae8944a.000000000000bd11
rbd_data.53b752ae8944a.000000000000c286
rbd_data.53b752ae8944a.000000000000c3f7
rbd_data.45132ae8944a.0000000000001a2c
rbd_data.c1362ae8944a.0000000000017a4c
2014-10-14 11:39:18.183180 7fac98c34700  0 -- 192.168.50.1:0/1032885 >> 192.168.50.1:6807/32153 pipe(0x7fac90012f30 sd=5 :0 s=1 pgs=0 cs=0 l=1 c=0x7fac900131c0).fault
[root@ct01 ceph]# ceph osd stat
     osdmap e25538: 33 osds: 32 up, 33 in
rados -p cache ls | wc -l
   28679

8. The OSD died :(.

[root@ct01 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-8      3       root cache.bank-hlynov.ru
-5      1               host cache1
*13      1                      osd.13  down    0*
-6      1               host cache2
31      1                       osd.31  up      1
-7      1               host cache3
30      1                       osd.30  up      1
-1      109.2   root bank-hlynov.ru
-2      36.4            host ct01
2       3.64                    osd.2   up      1
4       3.64                    osd.4   up      1
5       3.64                    osd.5   up      1
6       3.64                    osd.6   up      1
7       3.64                    osd.7   up      1
8       3.64                    osd.8   up      1
9       3.64                    osd.9   up      1
10      3.64                    osd.10  up      1
11      3.64                    osd.11  up      1
0       3.64                    osd.0   up      1
-3      36.4            host ct2
12      3.64                    osd.12  up      1
14      3.64                    osd.14  up      1
15      3.64                    osd.15  up      1
16      3.64                    osd.16  up      1
17      3.64                    osd.17  up      1
18      3.64                    osd.18  up      1
19      3.64                    osd.19  up      1
20      3.64                    osd.20  up      1
21      3.64                    osd.21  up      1
32      3.64                    osd.32  up      1
-4      36.4            host ct3
1       3.64                    osd.1   up      1
3       3.64                    osd.3   up      1
22      3.64                    osd.22  up      1
23      3.64                    osd.23  up      1
24      3.64                    osd.24  up      1
25      3.64                    osd.25  up      1
26      3.64                    osd.26  up      1
27      3.64                    osd.27  up      1
28      3.64                    osd.28  up      1
29      3.64                    osd.29  up      1
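
For reference, the pool settings I mean in step 4 are of this kind; the values below are placeholders for illustration, not necessarily the exact numbers I used:

# hit set tracking on the cache pool (placeholder values)
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache hit_set_count 1
ceph osd pool set cache hit_set_period 3600
# flush/evict thresholds for the cache tier (placeholder values)
ceph osd pool set cache target_max_bytes 100000000000
ceph osd pool set cache cache_target_dirty_ratio 0.4
ceph osd pool set cache cache_target_full_ratio 0.8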

OS: CentOS 6.5
Kernel: 2.6.32-431.el6.x86_64
Ceph --version: ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)


Files

ceph.conf (2.2 KB), Irek Fasikhov, 10/14/2014 01:19 AM
ceph-osd.13.log.tar.gz (14.2 MB), Irek Fasikhov, 10/14/2014 01:19 AM