Bug #51758 (closed): SeaStore Vstart fail

Added by dcslab snu almost 3 years ago. Updated 2 months ago.

Status: Can't reproduce
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 1 - critical
Reviewed: 07/21/2021
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

vstart with the options '--crimson' and '--seastore' doesn't work properly (git commit: 43d0e2c21852e732c1d19c93a104de3b5b8314d7).
It never finishes; the startup just hangs.

In the log there are some errors:

ERROR on 'filestore - mkfs path ~/~/dev/osd0/block'
ERROR on 'filestore - check_create_device'
ERROR on 'filestore - open_device'

ERROR on 'ms - ms_dispatch unhandled message ping magic'
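
For reference, a minimal reproduction sketch in Python. The only details taken from this report are the --crimson and --seastore options; the script path, working directory and the -n (new cluster) flag are assumptions about a typical crimson build tree, not something stated here.

# Hedged sketch: drive vstart.sh with the options named in this report.
# Path, working directory and the -n flag are assumptions.
import subprocess

subprocess.run(
    ["../src/vstart.sh", "-n", "--crimson", "--seastore"],
    cwd="build",   # assumption: ceph checkout built in ./build
    check=True,    # raise if vstart exits non-zero
)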

Actions #1

Updated by dcslab snu almost 3 years ago

Some important log entries from "mon.a.log":

2021-07-20T08:49:51.491+0000 7fabc27fc700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] >> conn(0x7fabbc02b040 msgr2=0x7fabbc035450 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1)._try_send send error: (32) Broken pipe
2021-07-20T08:49:51.491+0000 7fabc27fc700 1 --2- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] >> conn(0x7fabbc02b040 0x7fabbc035450 crc :-1 s=AUTH_ACCEPTING_SIGN pgs=0 cs=0 l=1 rev1=1 rx=0 tx=0).write auth signature write failed r=-32 ((32) Broken pipe)
2021-07-20T08:49:51.491+0000 7fabc27fc700 1 --2- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] >> conn(0x7fabbc02b040 0x7fabbc035450 crc :-1 s=AUTH_ACCEPTING_SIGN pgs=0 cs=0 l=1 rev1=1 rx=0 tx=0).stop
2021-07-20T08:49:51.491+0000 7fabc3fff700 10 mon.a@0(leader) e1 ms_handle_reset 0x7fabbc02b040 -
2021-07-20T08:49:51.491+0000 7fabc3fff700 10 mon.a@0(leader) e1 reset/close on session unknown.0

2021-07-20T08:50:12.642+0000 7fabc3fff700 10 mon.a@0(leader).log v15 logging 2021-07-20T08:50:12.280410+0000 mgr.x (mgr.4131) 4 : cluster [ERR] Unhandled exception from module 'devicehealth' while running on mgr.x: unknown error
2021-07-20T08:50:13.178+0000 7fabceffd700 10 mon.a@0(leader).osd e6 encode_pending e 7
2021-07-20T08:50:13.178+0000 7fabceffd700 1 mon.a@0(leader).osd e6 do_prune osdmap full prune enabled
2021-07-20T08:50:13.178+0000 7fabceffd700 10 mon.a@0(leader).osd e6 should_prune currently holding only 5 epochs (min osdmap epochs: 500); do not prune.

2021-07-20T08:50:13.246+0000 7fabc17fa700 7 mon.a@0(leader).log v16 update_from_paxos applying incremental log 16 2021-07-20T08:50:12.208936+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.4131 ' entity='mgr.x' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2021-07-20T08:50:13.246+0000 7fabc17fa700 7 mon.a@0(leader).log v16 update_from_paxos applying incremental log 16 2021-07-20T08:50:12.209590+0000 mon.a (mon.0) 59 : cluster [DBG] osdmap e6: 1 total, 1 up, 1 in
2021-07-20T08:50:13.246+0000 7fabc17fa700 7 mon.a@0(leader).log v16 update_from_paxos applying incremental log 16 2021-07-20T08:50:12.280410+0000 mgr.x (mgr.4131) 4 : cluster [ERR] Unhandled exception from module 'devicehealth' while running on mgr.x: unknown error
2021-07-20T08:50:13.246+0000 7fabc17fa700 10 mon.a@0(leader).log v16 summary.channel_info {audit=0,51,cluster=0,36}

2021-07-20T08:50:15.198+0000 7fabc3fff700 10 mon.a@0(leader) e1 log_health updated 1 previous 0
2021-07-20T08:50:15.198+0000 7fabc3fff700 1 log_channel(cluster) log [ERR] : Health check failed: 2 mgr modules have failed (MGR_MODULE_ERROR)
2021-07-20T08:50:15.198+0000 7fabc3fff700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] --> [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] -- log(1 entries from seq 61 at 2021-07-20T08:50:15.203762+0000) v1 -- 0x7fab9808ca20 con 0x557f7a37acb0
2021-07-20T08:50:15.198+0000 7fabc3fff700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] send_to--> mon [v2:127.0.0.1:40406/0,v1:127.0.0.1:40407/0] -- paxos(begin lc 46 fc 0 pn 300 opn 0) v4 -- ?+0 0x7fab980407a0
2021-07-20T08:50:15.198+0000 7fabc3fff700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] --> [v2:127.0.0.1:40406/0,v1:127.0.0.1:40407/0] -- paxos(begin lc 46 fc 0 pn 300 opn 0) v4 -- 0x7fab980407a0 con 0x557f7a492d90
2021-07-20T08:50:15.198+0000 7fabc3fff700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] send_to--> mon [v2:127.0.0.1:40408/0,v1:127.0.0.1:40409/0] -- paxos(begin lc 46 fc 0 pn 300 opn 0) v4 -- ?+0 0x7fab9803ae50
2021-07-20T08:50:15.198+0000 7fabc3fff700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] --> [v2:127.0.0.1:40408/0,v1:127.0.0.1:40409/0] -- paxos(begin lc 46 fc 0 pn 300 opn 0) v4 -- 0x7fab9803ae50 con 0x557f7a4929b0
2021-07-20T08:50:15.198+0000 7fabc3fff700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] <== mon.0 v2:127.0.0.1:40404/0 0 ==== log(1 entries from seq 61 at 2021-07-20T08:50:15.203762+0000) v1 ==== 0+0+0 (unknown 0 0 0) 0x7fab9808ca20 con 0x557f7a37acb0
2021-07-20T08:50:15.198+0000 7fabc3fff700 10 mon.a@0(leader).log v17 preprocess_query log(1 entries from seq 61 at 2021-07-20T08:50:15.203762+0000) v1 from mon.0 v2:127.0.0.1:40404/0
2021-07-20T08:50:15.198+0000 7fabc3fff700 10 mon.a@0(leader).log v17 preprocess_log log(1 entries from seq 61 at 2021-07-20T08:50:15.203762+0000) v1 from mon.0
2021-07-20T08:50:15.198+0000 7fabc3fff700 10 mon.a@0(leader).log v17 prepare_update log(1 entries from seq 61 at 2021-07-20T08:50:15.203762+0000) v1 from mon.0 v2:127.0.0.1:40404/0
2021-07-20T08:50:15.198+0000 7fabc3fff700 10 mon.a@0(leader).log v17 prepare_log log(1 entries from seq 61 at 2021-07-20T08:50:15.203762+0000) v1 from mon.0
2021-07-20T08:50:15.198+0000 7fabc3fff700 10 mon.a@0(leader).log v17 logging 2021-07-20T08:50:15.203762+0000 mon.a (mon.0) 61 : cluster [ERR] Health check failed: 2 mgr modules have failed (MGR_MODULE_ERROR)
2021-07-20T08:50:15.206+0000 7fabc3fff700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] <== mon.1 v2:127.0.0.1:40406/0 177 ==== paxos(accept lc 46 fc 0 pn 300 opn 0) v4 ==== 84+0+0 (crc 0 0 0) 0x7fab980407a0 con 0x557f7a492d90
2021-07-20T08:50:15.206+0000 7fabc3fff700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] <== mon.2 v2:127.0.0.1:40408/0 210 ==== paxos(accept lc 46 fc 0 pn 300 opn 0) v4 ==== 84+0+0 (crc 0 0 0) 0x7fab9803ae50 con 0x557f7a4929b0
2021-07-20T08:50:15.214+0000 7fabc17fa700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] send_to--> mon [v2:127.0.0.1:40406/0,v1:127.0.0.1:40407/0] -- paxos(commit lc 47 fc 0 pn 300 opn 0) v4 -- ?+0 0x7fab94170780
2021-07-20T08:50:15.214+0000 7fabc17fa700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] --> [v2:127.0.0.1:40406/0,v1:127.0.0.1:40407/0] -- paxos(commit lc 47 fc 0 pn 300 opn 0) v4 -- 0x7fab94170780 con 0x557f7a492d90
2021-07-20T08:50:15.214+0000 7fabc17fa700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] send_to--> mon [v2:127.0.0.1:40408/0,v1:127.0.0.1:40409/0] -- paxos(commit lc 47 fc 0 pn 300 opn 0) v4 -- ?+0 0x7fab9408c340
2021-07-20T08:50:15.214+0000 7fabc17fa700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] --> [v2:127.0.0.1:40408/0,v1:127.0.0.1:40409/0] -- paxos(commit lc 47 fc 0 pn 300 opn 0) v4 -- 0x7fab9408c340 con 0x557f7a4929b0

2021-07-20T08:59:59.995+0000 7fabceffd700 1 log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 2 mgr modules have failed; Reduced data availability: 127 pgs inactive; 1 pool(s) have no replicas configured
2021-07-20T08:59:59.995+0000 7fabceffd700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] --> [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] -- log(1 entries from seq 89 at 2021-07-20T09:00:00.000304+0000) v1 -- 0x7fab88009c10 con 0x557f7a37acb0
2021-07-20T08:59:59.995+0000 7fabceffd700 1 log_channel(cluster) log [ERR] : [ERR] MGR_MODULE_ERROR: 2 mgr modules have failed
2021-07-20T08:59:59.995+0000 7fabceffd700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] --> [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] -- log(1 entries from seq 90 at 2021-07-20T09:00:00.000382+0000) v1 -- 0x7fab88015ae0 con 0x557f7a37acb0
2021-07-20T08:59:59.995+0000 7fabceffd700 1 log_channel(cluster) log [ERR] : Module 'dashboard' has failed: No module named 'routes'
2021-07-20T08:59:59.995+0000 7fabceffd700 1 -- [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] --> [v2:127.0.0.1:40404/0,v1:127.0.0.1:40405/0] -- log(1 entries from seq 91 at 2021-07-20T09:00:00.000431+0000) v1 -- 0x7fab88016610 con 0x557f7a37acb0
2021-07-20T08:59:59.995+0000 7fabceffd700 -1 log_channel(cluster) log [ERR] : Module 'devicehealth' has failed: unknown error

Actions #2

Updated by dcslab snu almost 3 years ago

"mgr.x.log"

2021-07-20T08:50:05.618+0000 7f05ce7fc700 -1 Traceback (most recent call last):
  File "/home/cephlab/mnt/ceph_psd/ceph/src/pybind/mgr/dashboard/module.py", line 331, in serve
    mapper, parent_urls = generate_routes(self.url_prefix)
  File "/home/cephlab/mnt/ceph_psd/ceph/src/pybind/mgr/dashboard/controllers/__init__.py", line 385, in generate_routes
    mapper = cherrypy.dispatch.RoutesDispatcher()
  File "/usr/lib/python3/dist-packages/cherrypy/_cpdispatch.py", line 515, in __init__
    import routes
ModuleNotFoundError: No module named 'routes'

2021-07-20T08:50:06.526+0000 7f05e0e7c700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
2021-07-20T08:50:07.970+0000 7f05e267f700 0 [progress INFO root] Processing OSDMap change 2..3
2021-07-20T08:50:08.034+0000 7f060f7fe700 1 mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
2021-07-20T08:50:08.526+0000 7f05e0e7c700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
2021-07-20T08:50:09.086+0000 7f05e267f700 0 [progress INFO root] Processing OSDMap change 3..4
2021-07-20T08:50:09.090+0000 7f05e1e7e700 2 mgr.server handle_open ignoring open from osd.0 v2:127.0.0.1:6803/171706; not ready for session (expect reconnect)
2021-07-20T08:50:10.526+0000 7f05e0e7c700 1 mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
2021-07-20T08:50:10.610+0000 7f05cdffb700 0 [devicehealth INFO root] creating mgr pool
2021-07-20T08:50:11.170+0000 7f05e267f700 0 [progress INFO root] Processing OSDMap change 4..5
2021-07-20T08:50:12.198+0000 7f05e267f700 0 [progress INFO root] Processing OSDMap change 5..6
2021-07-20T08:50:12.274+0000 7f05cdffb700 -1 log_channel(cluster) log [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.x: unknown error
2021-07-20T08:50:12.274+0000 7f05cdffb700 -1 devicehealth.serve:
2021-07-20T08:50:12.274+0000 7f05cdffb700 -1 Traceback (most recent call last):
  File "/home/cephlab/mnt/ceph_psd/ceph/src/pybind/mgr/devicehealth/module.py", line 338, in serve
    if self.db_ready() and self.enable_monitoring:
  File "/home/cephlab/mnt/ceph_psd/ceph/src/pybind/mgr/mgr_module.py", line 1132, in db_ready
    return self.db is not None
  File "/home/cephlab/mnt/ceph_psd/ceph/src/pybind/mgr/mgr_module.py", line 1144, in db
    self._db = self.open_db()
  File "/home/cephlab/mnt/ceph_psd/ceph/src/pybind/mgr/mgr_module.py", line 1125, in open_db
    db = sqlite3.connect(uri, check_same_thread=False, uri=True)
sqlite3.DatabaseError: unknown error
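
The two mgr failures above are separable: the dashboard module fails on a missing Python dependency ('routes'), while devicehealth fails inside sqlite3.connect(). A hedged diagnostic sketch follows; run it with the same python3 the mgr uses. The file: URI below is only a placeholder assumption, not the URI the mgr itself builds (the mgr goes through its own SQLite VFS into the .mgr pool).

# Check whether the packages the dashboard's RoutesDispatcher imports are available,
# and whether a plain sqlite3 URI connect works at all with this interpreter.
import importlib.util
import sqlite3

for mod in ("routes", "cherrypy"):
    spec = importlib.util.find_spec(mod)
    print(f"{mod}: {'found at ' + str(spec.origin) if spec else 'MISSING'}")

# Same style of call as mgr_module.open_db(); the URI here is a placeholder.
db = sqlite3.connect("file:mgr_test.db?mode=rwc", check_same_thread=False, uri=True)
print("sqlite3 URI connect OK")
db.close()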

Actions #3

Updated by Sebastian Wagner over 2 years ago

  • Project changed from Ceph to crimson

Actions #4

Updated by Matan Breizman 2 months ago

  • Status changed from New to Can't reproduce

Please re-open if still relevant.
