Bug #24511


osd crashed at thread_name:safe_timer

Added by Lei Liu almost 6 years ago. Updated almost 6 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS): OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

ENV

ceph version

ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)

system info

Distributor ID:    Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:    16.04
Codename:    xenial
Linux bj1-ceph-host6 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

ERROR LOG

2018-06-13 19:46:44.073938 7fc5178eb700  4 rocksdb: (Original Log Time 2018/06/13-19:46:44.073814) EVENT_LOG_v1 {"time_micros": 1528890404073807, "job": 4807, "event": "compaction_finished", "compaction_time_micros": 1040518, "output_level": 4, "num_output_files": 4, "total_output_size": 207144659, "num_input_records": 1158039, "num_output_records": 722315, "num_subcompactions": 1, "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 4, 49, 590, 1421, 0, 0]}
2018-06-13 19:46:44.074331 7fc5178eb700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1528890404074328, "job": 4807, "event": "table_file_deletion", "file_number": 477797}
2018-06-13 19:46:44.074342 7fc5178eb700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1528890404074341, "job": 4807, "event": "table_file_deletion", "file_number": 477793}
2018-06-13 19:46:44.074349 7fc5178eb700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1528890404074348, "job": 4807, "event": "table_file_deletion", "file_number": 477792}
2018-06-13 19:46:44.074354 7fc5178eb700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1528890404074354, "job": 4807, "event": "table_file_deletion", "file_number": 477791}
2018-06-13 19:54:14.105879 7fc522100700 -1 *** Caught signal (Segmentation fault) **
 in thread 7fc522100700 thread_name:safe_timer

 ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
 1: (()+0xa7cab4) [0x55ead4720ab4]
 2: (()+0x11390) [0x7fc529c71390]
 3: [0x55eb00010000]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
-10000> 2018-06-13 19:53:54.978803 7fc5188ed700  1 -- 172.31.134.211:6802/728916 --> 172.31.134.214:6800/4141156 -- osd_repop_reply(client.4594839.0:83001191 9.c e1157/1022 ondisk, result = 0) v2 -- 0x55eb1346b200 con 0
 -9999> 2018-06-13 19:53:54.981654 7fc5268f2700  5 -- 172.31.134.211:6802/728916 >> 172.31.134.212:6802/4151468 conn(0x55eae8009000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=5 cs=1 l=0). rx osd.6 seq 58422788 0x55eae0a75100 osd_repop(client.4474297.0:124347876 9.8 e1157/1010) v2
 -9998> 2018-06-13 19:53:54.981680 7fc5268f2700  1 -- 172.31.134.211:6802/728916 <== osd.6 172.31.134.212:6802/4151468 58422788 ==== osd_repop(client.4474297.0:124347876 9.8 e1157/1010) v2 ==== 974+0+652 (3228446549 0 3686730744) 0x55eae0a75100 con 0x55eae8009000
 -9997> 2018-06-13 19:53:54.981780 7fc50f0da700  5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 1157'475939390, trimmed: , trimmed_dups: , clear_divergent_priors: 0
 -9996> 2018-06-13 19:53:54.982099 7fc5188ed700  1 -- 172.31.134.211:6802/728916 --> 172.31.134.212:6802/4151468 -- osd_repop_reply(client.4474297.0:124347876 9.8 e1157/1010 ondisk, result = 0) v2 -- 0x55eb19b5cf00 con 0
 -9995> 2018-06-13 19:53:54.984684 7fc5268f2700  5 -- 172.31.134.211:6802/728916 >> 172.31.134.212:6802/4151468 conn(0x55eae8009000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=5 cs=1 l=0). rx osd.6 seq 58422789 0x55eb6ebeae00 osd_repop(client.4474297.0:124347878 9.8 e1157/1010) v2
 -9994> 2018-06-13 19:53:54.984710 7fc5268f2700  1 -- 172.31.134.211:6802/728916 <== osd.6 172.31.134.212:6802/4151468 58422789 ==== osd_repop(client.4474297.0:124347878 9.8 e1157/1010) v2 ==== 974+0+580 (2506888730 0 285585307) 0x55eb6ebeae00 con 0x55eae8009000
 -9993> 2018-06-13 19:53:54.984810 7fc50f0da700  5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 1157'475939391, trimmed: , trimmed_dups: , clear_divergent_priors: 0
 -9992> 2018-06-13 19:53:54.985113 7fc5188ed700  1 -- 172.31.134.211:6802/728916 --> 172.31.134.212:6802/4151468 -- osd_repop_reply(client.4474297.0:124347878 9.8 e1157/1010 ondisk, result = 0) v2 -- 0x55eb1a5b2f80 con 0
 -9991> 2018-06-13 19:53:54.991405 7fc5268f2700  5 -- 172.31.134.211:6802/728916 >> 172.31.134.215:6802/4023785 conn(0x55eb06f9c000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=6 cs=1 l=0). rx osd.19 seq 87717432 0x55eb12bd8300 osd_repop(client.4474297.0:124347882 9.4 e1157/1026) v2
 -9990> 2018-06-13 19:53:54.991434 7fc5268f2700  1 -- 172.31.134.211:6802/728916 <== osd.19 172.31.134.215:6802/4023785 87717432 ==== osd_repop(client.4474297.0:124347882 9.4 e1157/1026) v2 ==== 974+0+652 (2813667917 0 1694256806) 0x55eb12bd8300 con 0x55eb06f9c000
 -9989> 2018-06-13 19:53:54.991503 7fc50e8d9700  5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 1157'720490268, trimmed: , trimmed_dups: , clear_divergent_priors: 0
 -9988> 2018-06-13 19:53:54.991813 7fc5188ed700  1 -- 172.31.134.211:6802/728916 --> 172.31.134.215:6802/4023785 -- osd_repop_reply(client.4474297.0:124347882 9.4 e1157/1026 ondisk, result = 0) v2 -- 0x55eb582c2c80 con 0
 -9987> 2018-06-13 19:53:54.994267 7fc5268f2700  5 -- 172.31.134.211:6802/728916 >> 172.31.134.215:6802/4023785 conn(0x55eb06f9c000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=6 cs=1 l=0). rx osd.19 seq 87717433 0x55eb66ed4a00 osd_repop(client.4474297.0:124347884 9.4 e1157/1026) v2
 -9986> 2018-06-13 19:53:54.994293 7fc5268f2700  1 -- 172.31.134.211:6802/728916 <== osd.19 172.31.134.215:6802/4023785 87717433 ==== osd_repop(client.4474297.0:124347884 9.4 e1157/1026) v2 ==== 974+0+580 (405967268 0 2603678333) 0x55eb66ed4a00 con 0x55eb06f9c000
 -9985> 2018-06-13 19:53:54.994394 7fc50e8d9700  5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 1157'720490269, trimmed: , trimmed_dups: , clear_divergent_priors: 0
 -9984> 2018-06-13 19:53:54.994712 7fc5188ed700  1 -- 172.31.134.211:6802/728916 --> 172.31.134.215:6802/4023785 -- osd_repop_reply(client.4474297.0:124347884 9.4 e1157/1026 ondisk, result = 0) v2 -- 0x55eb35930800 con 0
 -9983> 2018-06-13 19:53:54.997536 7fc5268f2700  5 -- 172.31.134.211:6802/728916 >> 172.31.134.215:6802/4023785 conn(0x55eb06f9c000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=6 cs=1 l=0). rx osd.19 seq 87717434 0x55eb08e2e700 osd_repop(client.4594839.0:83001204 9.4 e1157/1026) v2
 -9982> 2018-06-13 19:53:54.997556 7fc5268f2700  1 -- 172.31.134.211:6802/728916 <== osd.19 172.31.134.215:6802/4023785 87717434 ==== osd_repop(client.4594839.0:83001204 9.4 e1157/1026) v2 ==== 976+0+654 (2419606517 0 1952545648) 0x55eb08e2e700 con 0x55eb06f9c000
 -9981> 2018-06-13 19:53:54.997654 7fc50e8d9700  5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 1157'720490270, trimmed: , trimmed_dups: , clear_divergent_priors: 0
 -9980> 2018-06-13 19:53:54.997943 7fc5188ed700  1 -- 172.31.134.211:6802/728916 --> 172.31.134.215:6802/4023785 -- osd_repop_reply(client.4594839.0:83001204 9.4 e1157/1026 ondisk, result = 0) v2 -- 0x55eb239a1e00 con 0
 -9979> 2018-06-13 19:53:54.999540 7fc5268f2700  5 -- 172.31.134.211:6802/728916 >> 172.31.134.215:6802/4023785 conn(0x55eb06f9c000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=6 cs=1 l=0). rx osd.19 seq 87717435 0x55eade8cf100 osd_repop(client.4594839.0:83001206 9.4 e1157/1026) v2
 -9978> 2018-06-13 19:53:54.999560 7fc5268f2700  1 -- 172.31.134.211:6802/728916 <== osd.19 172.31.134.215:6802/4023785 87717435 ==== osd_repop(client.4594839.0:83001206 9.4 e1157/1026) v2 ==== 976+0+582 (1760582345 0 867630678) 0x55eade8cf100 con 0x55eb06f9c000
 -9977> 2018-06-13 19:53:54.999657 7fc50e8d9700  5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 1157'720490271, trimmed: , trimmed_dups: , clear_divergent_priors: 0
[... lots of events elided ...]
    -6> 2018-06-13 19:54:14.103456 7fc5188ed700  1 -- 172.31.134.211:6802/728916 --> 172.31.134.215:6802/4023785 -- pg_update_log_missing_reply(9.4 epoch 1157/1026 rep_tid 204356241 lcod 1157'720491348) v3 -- 0x55eb30e7fb00 con 0
    -5> 2018-06-13 19:54:14.103538 7fc5188ed700  1 -- 172.31.134.211:6802/728916 --> 172.31.134.212:6802/4151468 -- osd_repop_reply(client.4484443.0:124567132 9.8 e1157/1010 ondisk, result = 0) v2 -- 0x55eb11ff9c00 con 0
    -4> 2018-06-13 19:54:14.104667 7fc5268f2700  5 -- 172.31.134.211:6802/728916 >> 172.31.134.215:6802/4023785 conn(0x55eb06f9c000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=6 cs=1 l=0). rx osd.19 seq 87718515 0x55eb4afba300 osd_repop(client.4474322.0:128518819 9.4 e1157/1026) v2
    -3> 2018-06-13 19:54:14.104691 7fc5268f2700  1 -- 172.31.134.211:6802/728916 <== osd.19 172.31.134.215:6802/4023785 87718515 ==== osd_repop(client.4474322.0:128518819 9.4 e1157/1026) v2 ==== 976+0+582 (1647542364 0 1353653543) 0x55eb4afba300 con 0x55eb06f9c000
    -2> 2018-06-13 19:54:14.104784 7fc50e8d9700  5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 1157'720491349, trimmed: , trimmed_dups: , clear_divergent_priors: 0
    -1> 2018-06-13 19:54:14.105115 7fc5188ed700  1 -- 172.31.134.211:6802/728916 --> 172.31.134.215:6802/4023785 -- osd_repop_reply(client.4474322.0:128518819 9.4 e1157/1026 ondisk, result = 0) v2 -- 0x55eb1dd66f00 con 0
     0> 2018-06-13 19:54:14.105879 7fc522100700 -1 *** Caught signal (Segmentation fault) **
 in thread 7fc522100700 thread_name:safe_timer

 ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
 1: (()+0xa7cab4) [0x55ead4720ab4]
 2: (()+0x11390) [0x7fc529c71390]
 3: [0x55eb00010000]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
   0/ 5 none
   0/ 1 lockdep
   0/ 1 context
   1/ 1 crush
   1/ 5 mds
   1/ 5 mds_balancer
   1/ 5 mds_locker
   1/ 5 mds_log
   1/ 5 mds_log_expire
   1/ 5 mds_migrator
   0/ 1 buffer
   0/ 1 timer
   0/ 1 filer
   0/ 1 striper
   0/ 1 objecter
   0/ 5 rados
   0/ 5 rbd
   0/ 5 rbd_mirror
   0/ 5 rbd_replay
   0/ 5 journaler
   0/ 5 objectcacher
   0/ 5 client
   1/ 5 osd
   0/ 5 optracker
   0/ 5 objclass
   1/ 3 filestore
   1/ 3 journal
   0/ 5 ms
   1/ 5 mon
   0/10 monc
   1/ 5 paxos
   0/ 5 tp
   1/ 5 auth
   1/ 5 crypto
   1/ 1 finisher
   1/ 1 reserver
   1/ 5 heartbeatmap
   1/ 5 perfcounter
   1/ 5 rgw
   1/10 civetweb
   1/ 5 javaclient
   1/ 5 asok
   1/ 1 throttle
   0/ 0 refs
   1/ 5 xio
   1/ 5 compressor
   1/ 5 bluestore
   1/ 5 bluefs
   1/ 3 bdev
   1/ 5 kstore
   4/ 5 rocksdb
   4/ 5 leveldb
   4/ 5 memdb
   1/ 5 kinetic
   1/ 5 fuse
   1/ 5 mgr
   1/ 5 mgrc
   1/ 5 dpdk
   1/ 5 eventtrace
  -2/-2 (syslog threshold)
  -1/-1 (stderr threshold)
  max_recent     10000
  max_new         1000
  log_file /var/log/ceph/ceph-osd.5.log
--- end dump of recent events ---
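
As the crash dump notes, the two anonymous frames in the backtrace can only be interpreted against the binary. A minimal sketch of how the addresses could be resolved, assuming the OSD binary is at /usr/bin/ceph-osd and the matching debug symbols (e.g. the ceph-osd-dbg package on Ubuntu) are installed:

# Full annotated disassembly, as suggested by the NOTE in the crash dump
objdump -rdS /usr/bin/ceph-osd > ceph-osd.objdump

# Frame 1 is reported as an offset into the ceph-osd binary (()+0xa7cab4),
# so with debug symbols it can also be resolved directly:
addr2line -Cfe /usr/bin/ceph-osd 0xa7cab4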

Restart OSD

I then restarted the ceph-osd daemon (see the sketch below for a typical restart) and the cluster recovered, but another set of errors appeared in the OSD log.
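
A minimal sketch of how such a restart is typically done on a systemd-managed Ubuntu 16.04 node; the unit name ceph-osd@5 is an assumption based on the osd.5 log file shown above:

# Restart only the crashed daemon (assumed systemd unit for osd.5)
sudo systemctl restart ceph-osd@5

# Confirm the OSD rejoined and the cluster recovered
ceph osd tree
ceph -s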

The new log entries are below.

2018-06-13 20:03:16.226373 7fb819a95700  0 osd.5 1162 crush map has features 1009107927421960192, adjusting msgr requires for osds
2018-06-13 20:03:18.777488 7fb819a95700  0 osd.5 1163 crush map has features 1009107927421960192, adjusting msgr requires for osds
2018-06-13 20:03:18.779649 7fb813288700  1 osd.5 pg_epoch: 1163 pg[9.8( v 1162'475959115 (1159'475957615,1162'475959115] local-lis/les=1161/1162 n=29 ec=282/282 lis/c 1161/1
010 les/c/f 1162/1011/0 1160/1163/1010) [6,3,5] r=2 lpr=1163 pi=[1010,1163)/1 luod=0'0 crt=1162'475959115 active] start_peering_interval up [6,3,5] -> [6,3,5], acting [6,3]
-> [6,3,5], acting_primary 6 -> 6, up_primary 6 -> 6, role -1 -> 2, features acting 2305244844532236283 upacting 2305244844532236283
2018-06-13 20:03:18.779761 7fb813288700  1 osd.5 pg_epoch: 1163 pg[9.8( v 1162'475959115 (1159'475957615,1162'475959115] local-lis/les=1161/1162 n=29 ec=282/282 lis/c 1161/1
010 les/c/f 1162/1011/0 1160/1163/1010) [6,3,5] r=2 lpr=1163 pi=[1010,1163)/1 crt=1162'475959115 unknown NOTIFY] state<Start>: transitioning to Stray
2018-06-13 20:03:19.802985 7fb819a95700  0 osd.5 1164 crush map has features 1009107927421960192, adjusting msgr requires for osds
2018-06-13 20:03:20.320251 7fb819a95700  0 osd.5 1165 crush map has features 1009107927421960192, adjusting msgr requires for osds
2018-06-13 20:03:20.323437 7fb813288700  1 osd.5 pg_epoch: 1165 pg[9.4( v 1163'720520007 (1159'720518507,1163'720520007] local-lis/les=1161/1162 n=41 ec=282/282 lis/c 1161/1
026 les/c/f 1162/1027/0 1160/1165/1026) [19,5,17] r=1 lpr=1165 pi=[1026,1165)/1 luod=0'0 crt=1163'720520007 active] start_peering_interval up [19,5,17] -> [19,5,17], acting
[19,17] -> [19,5,17], acting_primary 19 -> 19, up_primary 19 -> 19, role -1 -> 1, features acting 2305244844532236283 upacting 2305244844532236283
2018-06-13 20:03:20.323541 7fb813288700  1 osd.5 pg_epoch: 1165 pg[9.4( v 1163'720520007 (1159'720518507,1163'720520007] local-lis/les=1161/1162 n=41 ec=282/282 lis/c 1161/1
026 les/c/f 1162/1027/0 1160/1165/1026) [19,5,17] r=1 lpr=1165 pi=[1026,1165)/1 crt=1163'720520007 unknown NOTIFY] state<Start>: transitioning to Stray
2018-06-13 20:03:21.331317 7fb819a95700  0 osd.5 1166 crush map has features 1009107927421960192, adjusting msgr requires for osds
2018-06-13 20:21:55.581872 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0x3ff000; fallback to bdev 1
2018-06-13 20:22:04.553117 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0x2ff000; fallback to bdev 1
2018-06-13 20:22:13.558186 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0x1ff000; fallback to bdev 1
2018-06-13 20:22:22.519058 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:22:22.854714 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:22:30.793396 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:22:30.910438 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:22:39.084362 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:22:40.107201 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:22:47.192079 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:22:49.118530 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:22:54.829491 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:22:57.664175 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:03.063218 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:06.737683 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:11.354847 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:15.919222 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:19.571696 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:25.063593 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:28.080117 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:34.163783 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:36.406861 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:43.027518 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:44.164378 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:52.247792 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:23:52.319036 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:00.895881 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:01.410707 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:09.168447 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:10.650552 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:19.858387 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:22.084777 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:30.314184 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:32.792516 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:39.511815 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:42.807225 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:47.508285 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:51.736707 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:24:55.952630 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:00.810038 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:04.357150 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:09.929831 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:12.931532 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:19.069930 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:21.249352 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:28.211989 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:29.403614 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:37.410379 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:38.310596 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:46.436329 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:46.581235 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:54.546368 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:25:55.789055 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:26:02.775244 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:26:05.205811 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:26:11.091965 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:26:14.424950 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:26:19.531835 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:26:23.364812 7fb81a296700  1 bluefs _allocate failed to allocate 0x400000 on bdev 0, free 0xff000; fallback to bdev 1
2018-06-13 20:26:27.657921 7fb81a296700  1 bluefs _allocate failed to allocate 0x100000 on bdev 0, free 0xff000; fallback to bdev 1

Note: the cluster health is OK. Is this a bug in ceph-osd?
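
The repeated bluefs _allocate lines are informational (level 1): BlueFS cannot find a 4 MiB (0x400000) or 1 MiB (0x100000) extent on bdev 0, whose free space has dropped to under 1 MiB (0xff000), and falls back to bdev 1. One way to see how the BlueFS devices are being used, assuming the default admin socket for osd.5 is available on the host:

# Dump the osd.5 perf counters and inspect the "bluefs" section
# (total vs. used bytes for the wal/db/slow devices)
ceph daemon osd.5 perf dump

# Overall OSD utilization for context
ceph osd df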


Related issues: 1 (0 open, 1 closed)

Is duplicate of RADOS - Bug #23352: osd: segfaults under normal operation (Resolved, Brad Hubbard, 03/14/2018)

#1 - Updated by Josh Durgin almost 6 years ago

  • Is duplicate of Bug #23352: osd: segfaults under normal operation added

#2 - Updated by Josh Durgin almost 6 years ago

  • Status changed from New to Duplicate
