Bug #20951

rgw: send data-log list infinitely

Added by fang yuxiang over 6 years ago. Updated over 6 years ago.

Status:
Resolved
Priority:
High
Assignee:
Matt Benjamin
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
jewel luminous kraken
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

radosgw sends the data-log list infinitely when the opposite end has trimmed the data log and is in quiescence.

The sending side always sends the data-log list request, with this log:
2017-08-08 16:59:01.979654 7f7a11fe3700 20 data sync: incremental_sync:1340: shard_id=63 datalog_marker=1_1502160434.853416_118147.1 sync_marker.marker=

And on the opposite end, the related data log has already been trimmed.

Running rados -p master.rgw.log listomapkeys data_log.63 produces no output.
And the omap header of data_log.63 is: (1_1502160434.853416_118147.12&Y
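
To illustrate the failure mode, here is a minimal, self-contained C++ sketch. It is not the actual RGW source: the types datalog_info and log_entry and the helpers fetch_remote_datalog_info() and fetch_remote_datalog_entries() are hypothetical stand-ins, and the loop is deliberately simplified. It only shows why an incremental sync loop keeps re-sending the list request when the shard's omap header still advertises a marker but the entries behind it have been trimmed:

#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-ins for the remote data-log shard state.
struct datalog_info { std::string marker; };  // what the shard's omap header advertises
struct log_entry    { std::string id; };

// Simulate a shard whose omap header still carries the last marker...
static datalog_info fetch_remote_datalog_info(int /*shard_id*/) {
  return {"1_1502160434.853416_118147.1"};
}

// ...but whose omap entries have already been trimmed away (listomapkeys is empty).
static std::vector<log_entry> fetch_remote_datalog_entries(int /*shard_id*/,
                                                           const std::string& /*from_marker*/) {
  return {};
}

int main() {
  const int shard_id = 63;
  std::string sync_marker;  // sync_marker.marker= (empty, as in the log line above)

  // Capped at a few attempts only so the demo terminates; the daemon has no such cap.
  for (int attempt = 0; attempt < 5; ++attempt) {
    datalog_info info = fetch_remote_datalog_info(shard_id);
    if (info.marker.empty() || info.marker == sync_marker) {
      std::cout << "caught up, nothing to do\n";
      return 0;
    }
    // The header says there is more to fetch, so list the shard again...
    std::vector<log_entry> entries =
        fetch_remote_datalog_entries(shard_id, sync_marker);
    if (entries.empty()) {
      // ...but nothing comes back and sync_marker never advances, so the
      // list request is immediately re-sent, forever, pinning the CPU on
      // the opposite end.
      std::cout << "shard_id=" << shard_id
                << " datalog_marker=" << info.marker
                << " sync_marker.marker=" << sync_marker << " -> retrying\n";
      continue;
    }
    for (const log_entry& e : entries) {
      sync_marker = e.id;  // would advance past the entries just processed
    }
  }
  std::cout << "(the real sync loop would still be spinning here)\n";
  return 0;
}

The real code path is the incremental_sync step referenced in the log line above; the PR linked in the history below contains the actual fix.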


Related issues

Copied to rgw - Backport #21109: jewel: rgw: send data-log list infinitely Resolved
Copied to rgw - Backport #21110: luminous: rgw: send data-log list infinitely Resolved
Copied to rgw - Backport #21111: kraken: rgw: send data-log list infinitely Rejected

History

#1 Updated by fang yuxiang over 6 years ago

radosgw sends the data-log list infinitely when the opposite end has trimmed the data log and is in quiescence.

The sending side always sends the data-log list request, with this log:
2017-08-08 16:59:01.979654 7f7a11fe3700 20 data sync: incremental_sync:1340: shard_id=63 datalog_marker=1_1502160434.853416_118147.1 sync_marker.marker=

And on the opposite end, the related data log has already been trimmed.

Running rados -p master.rgw.log listomapkeys data_log.63 produces no output.
And the omap header of data_log.63 is: (1_1502160434.853416_118147.12&Y

#2 Updated by fang yuxiang over 6 years ago

Sending the data-log list infinitely caused the opposite end to get stuck with high CPU usage.

#3 Updated by Orit Wasserman over 6 years ago

Can you provide the rgw logs?

#4 Updated by fang yuxiang over 6 years ago

Orit Wasserman wrote:

Can you provide the rgw logs?

The useful info from the rgw logs is:
2017-08-08 16:59:01.979654 7f7a11fe3700 20 data sync: incremental_sync:1340: shard_id=63 datalog_marker=1_1502160434.853416_118147.1 sync_marker.marker=

I have opened a PR to fix this issue:
https://github.com/ceph/ceph/pull/16926

#5 Updated by Matt Benjamin over 6 years ago

  • Status changed from New to In Progress
  • Assignee set to Matt Benjamin
  • Priority changed from Normal to High

https://github.com/ceph/ceph/pull/16926

(assigning to myself to shepherd this)

#6 Updated by Matt Benjamin over 6 years ago

  • Status changed from In Progress to Fix Under Review

My issue with the PR has been addressed; waiting for review by Casey.

#7 Updated by Matt Benjamin over 6 years ago

  • Status changed from Fix Under Review to Pending Backport
  • Backport set to luminous

#8 Updated by Matt Benjamin over 6 years ago

  • Backport changed from luminous to jewel luminous kraken

#9 Updated by Nathan Cutler over 6 years ago

  • Copied to Backport #21109: jewel: rgw: send data-log list infinitely added

#10 Updated by Nathan Cutler over 6 years ago

  • Copied to Backport #21110: luminous: rgw: send data-log list infinitely added

#11 Updated by Nathan Cutler over 6 years ago

  • Copied to Backport #21111: kraken: rgw: send data-log list infinitely added

#12 Updated by Nathan Cutler over 6 years ago

  • Status changed from Pending Backport to Resolved
