Bug #5482

closed

cephx: verify_reply coudln't decrypt with error: error decoding block for decryption

Added by chen atrmat almost 11 years ago. Updated almost 11 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Category:
OSD
Target version:
-
% Done:

0%

Source:
Community (user)
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Hi all,
I found an OSD down. My environment is Ceph 0.56.3; the relevant part of the log is below:

==== 47+0+0 (2545873131 0 0) 0x627a8c0 con 0x57bb160
-208> 2013-06-29 20:10:35.914689 7f3e34094700 1 -- 192.168.4.103:6827/16315 --> 192.168.4.102:0/30728 -- osd_ping(ping_reply e1978 stamp 2013-06-29 20:10:41.732185) v2 -- ?+0 0x4199e00 con 0x57bb160
-207> 2013-06-29 20:10:35.914705 7f3e34094700 20 osd.16 1978 _share_map_outgoing 0x9b95420 already has epoch 1978
-206> 2013-06-29 20:10:35.972598 7f3e245ee700 10 osd.16 1978 OSD::ms_get_authorizer type=osd
-205> 2013-06-29 20:10:35.973050 7f3e245ee700 0 cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
-204> 2013-06-29 20:10:35.973061 7f3e245ee700 0 -- 192.168.4.103:6826/16315 >> 192.168.4.104:6806/4338 pipe(0x2f91280 sd=70 :36979 s=1 pgs=690 cs=4 l=0).failed verifying authorize reply
-203> 2013-06-29 20:10:35.973076 7f3e245ee700 2 -- 192.168.4.103:6826/16315 >> 192.168.4.104:6806/4338 pipe(0x2f91280 sd=70 :36979 s=1 pgs=690 cs=4 l=0).fault 107: Transport endpoint is not connected
Other OSDs on the same node were working fine at the same time, so could this be caused by the OSD's keyring (keyring.osd)?
Does anyone know about this issue?

BR
chen
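
For what it's worth, cephx decryption failures like this usually come down to either clock skew between the nodes or a mismatch between the key registered with the monitors and the key in the daemon's keyring file. A minimal set of checks, assuming osd.16 and the default keyring location (adjust the OSD id and path to your setup):

date; ntpq -p                          # compare clocks on both nodes; cephx is sensitive to skew
ceph auth get osd.16                   # the key the monitors have on record for this OSD
cat /var/lib/ceph/osd/ceph-16/keyring  # the key the daemon is actually using (default path)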

Actions #1

Updated by Sage Weil almost 11 years ago

  • Subject changed from Ceph OSD cephx error to cephx: verify_reply coudln't decrypt with error: error decoding block for decryption
  • Status changed from New to Need More Info
  • Priority changed from Urgent to Normal
  • Source changed from other to Community (user)

I haven't seen this particular message before. 0.56.3 is a bit out of date, though; please upgrade to 0.56.6 to get all of the bug fixes for the 0.56.x series. If you see this pop up again, let us know!

Actions #2

Updated by chen atrmat almost 11 years ago

OK, I'll try to upgrade.
BTW, if I upgrade from 0.56.3 to 0.61.4, will the data stored on my OSDs be lost? Or how can I back up my data and import it into a new cluster?
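
For reference, an in-place upgrade does not wipe the OSD data stores, so nothing has to be re-imported. If you still want a separate copy of specific data before upgrading and that data lives in RBD images, rbd export/import can be used; a rough sketch with example pool/image names:

rbd -p rbd ls                                 # list images in the example pool "rbd"
rbd export rbd/myimage /backup/myimage.img    # dump one image to a plain file
rbd import /backup/myimage.img rbd/myimage    # re-import it later if ever needed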

Actions #3

Updated by chen atrmat almost 11 years ago

Sage Weil wrote:

I haven't seen this particular message before. 0.56.3 is a bit out of date, though; please upgrade to 0.56.6 to get all of the bug fixes for the 0.56.x series. If you see this pop up again, let us know!

OK, I'll try to upgrade.
BTW, if I upgrade from 0.56.3 to 0.61.4, will the data stored on my OSDs be lost? Or how can I back up my data and import it into a new cluster?

Actions #4

Updated by Sage Weil almost 11 years ago

  • Status changed from Need More Info to Can't reproduce

I actually meant 0.56.6, but you can move to 0.61.x (cuttlefish) too. See ceph.com/docs/master in the upgrade section. You don't need to reimport your data.
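
The documented rolling-upgrade order for bobtail to cuttlefish is monitors first, then OSDs, then MDS, upgrading the packages and restarting each daemon type in turn. A condensed per-node sketch, assuming Debian/Ubuntu packages from the ceph.com repository and the sysvinit scripts:

apt-get update && apt-get install ceph ceph-common   # pull in the 0.61.x packages
service ceph restart mon                             # on monitor nodes: restart the mons first
service ceph restart osd                             # on OSD nodes: restart the OSDs next
service ceph restart mds                             # finally the MDS, if CephFS is in use
ceph -s                                              # confirm health returns to HEALTH_OK between steps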

Actions #5

Updated by chen atrmat almost 11 years ago

Sage Weil wrote:

I actually meant 0.56.6, but you can move to 0.61.x (cuttlefish) too. See ceph.com/docs/master in the upgrade section. You don't need to reimport your data.

My kernel is currently 3.2.0-29-generic or 3.5.0-23-generic; would it be necessary to upgrade to 3.9.2 when upgrading to Ceph 0.61.x?

Actions #6

Updated by Sage Weil almost 11 years ago

The kernel version doesn't matter if you are just running the ceph userspace.

If you are mounting cephfs via the kernel client (mount -t ceph ...) or using the kernel rbd driver (rbd map ...), you should be running a newer kernel at a bare minimum.
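
A quick way to tell whether any kernel clients are actually in use on a host, and therefore whether the kernel version matters at all:

uname -r                          # running kernel version
grep ceph /proc/mounts            # any kernel cephfs mounts?
rbd showmapped                    # any rbd devices mapped via 'rbd map'?
ls /sys/bus/rbd/devices/ 2>/dev/null  # same information from sysfs (only present if the rbd module is loaded)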
