Bug #61300

RGWPutObj accessing two 'bucket sync policy' objects without multisite configured

Added by Casey Bodley 10 months ago. Updated 10 months ago.

Status:
Pending Backport
Priority:
Normal
Assignee:
Target version:
-
% Done:

0%

Source:
Tags:
multisite backport_processed
Backport:
reef
Regression:
Yes
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
51734
Crash signature (v1):
Crash signature (v2):

Description

2023-05-19T11:22:59.001-0400 427676c0  1 -- 192.168.245.128:0/2890409376 --> [v2:192.168.245.128:6800/2322402219,v1:192.168.245.128:6801/2322402219] -- osd_op(unknown.0.0:2040 6.0 6:25af4ead:::ec68f463-21c8-4252-8a5c-61f14aff422c.4163.37_foo:head [create,setxattr user.rgw.idtag (62) in=76b,setxattr user.rgw.tail_tag (62) in=79b,writefull 0~3 in=3b,setxattr user.rgw.manifest (364) in=381b,setxattr user.rgw.acl (147) in=159b,setxattr user.rgw.etag (32) in=45b,setxattr user.rgw.x-amz-content-sha256 (65) in=94b,setxattr user.rgw.x-amz-date (17) in=36b,call rgw.obj_store_pg_ver in=44b,setxattr user.rgw.source_zone (4) in=24b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e19) v8 -- 0x179f81c0 con 0x17ba6930
2023-05-19T11:22:59.003-0400 129806c0  1 -- 192.168.245.128:0/2890409376 <== osd.0 v2:192.168.245.128:6800/2322402219 2476 ==== osd_op_reply(2040 ec68f463-21c8-4252-8a5c-61f14aff422c.4163.37_foo [create,setxattr (62),setxattr (62),writefull 0~3,setxattr (364),setxattr (147),setxattr (32),setxattr (65),setxattr (17),call,setxattr (4)] v19'577 uv577 ondisk = 0) v8 ==== 612+0+0 (crc 0 0 0) 0xab2d550 con 0x17ba6930
2023-05-19T11:22:59.004-0400 497756c0  1 -- 192.168.245.128:0/2890409376 --> [v2:192.168.245.128:6800/2322402219,v1:192.168.245.128:6801/2322402219] -- osd_op(unknown.0.0:2041 5.0 5:c65f3293:::.dir.ec68f463-21c8-4252-8a5c-61f14aff422c.4163.37.0:head [stat,call rgw.guard_bucket_resharding in=36b,call rgw.bucket_complete_op in=362b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e19) v8 -- 0x17b56cf0 con 0x17ba6930
2023-05-19T11:22:59.005-0400 497756c0 10 req 6976501944535964825 0.022000648s s3:put_obj cache get: name=default.rgw.log++bucket.sync-source-hints.yournamehere-c4tl7erzua05m62o-37 : miss
2023-05-19T11:22:59.005-0400 497756c0 20 req 6976501944535964825 0.022000648s s3:put_obj rados->read ofs=0 len=0
2023-05-19T11:22:59.005-0400 497756c0  1 -- 192.168.245.128:0/2890409376 --> [v2:192.168.245.128:6800/2322402219,v1:192.168.245.128:6801/2322402219] -- osd_op(unknown.0.0:2042 2.0 2:93465414:::bucket.sync-source-hints.yournamehere-c4tl7erzua05m62o-37:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e19) v8 -- 0x17ab61b0 con 0x17ba6930
2023-05-19T11:22:59.007-0400 129806c0  1 -- 192.168.245.128:0/2890409376 <== osd.0 v2:192.168.245.128:6800/2322402219 2477 ==== osd_op_reply(2042 bucket.sync-source-hints.yournamehere-c4tl7erzua05m62o-37 [call,read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 243+0+0 (crc 0 0 0) 0x6550d210 con 0x17ba6930
2023-05-19T11:22:59.008-0400 129806c0  1 -- 192.168.245.128:0/2890409376 <== osd.0 v2:192.168.245.128:6800/2322402219 2478 ==== osd_op_reply(2041 .dir.ec68f463-21c8-4252-8a5c-61f14aff422c.4163.37.0 [stat,call,call] v19'1395 uv1395 ondisk = 0) v8 ==== 279+0+0 (crc 0 0 0) 0x17abad80 con 0x17ba6930
2023-05-19T11:22:59.009-0400 48f746c0 20 req 6976501944535964825 0.026000768s s3:put_obj rados_obj.operate() r=-2 bl.length=0
2023-05-19T11:22:59.009-0400 48f746c0 10 req 6976501944535964825 0.026000768s s3:put_obj cache put: name=default.rgw.log++bucket.sync-source-hints.yournamehere-c4tl7erzua05m62o-37 info.flags=0x0
2023-05-19T11:22:59.009-0400 48f746c0 10 req 6976501944535964825 0.026000768s s3:put_obj adding default.rgw.log++bucket.sync-source-hints.yournamehere-c4tl7erzua05m62o-37 to cache LRU end
2023-05-19T11:22:59.009-0400 48f746c0 10 req 6976501944535964825 0.026000768s s3:put_obj cache get: name=default.rgw.log++bucket.sync-target-hints.yournamehere-c4tl7erzua05m62o-37 : miss
2023-05-19T11:22:59.009-0400 48f746c0 20 req 6976501944535964825 0.026000768s s3:put_obj rados->read ofs=0 len=0
2023-05-19T11:22:59.009-0400 48f746c0  1 -- 192.168.245.128:0/2890409376 --> [v2:192.168.245.128:6800/2322402219,v1:192.168.245.128:6801/2322402219] -- osd_op(unknown.0.0:2043 2.0 2:745384af:::bucket.sync-target-hints.yournamehere-c4tl7erzua05m62o-37:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e19) v8 -- 0xab63be0 con 0x17ba6930
2023-05-19T11:22:59.010-0400 1b5916c0 20 handle_completion(): completion ok for obj=foo
2023-05-19T11:22:59.010-0400 129806c0  1 -- 192.168.245.128:0/2890409376 <== osd.0 v2:192.168.245.128:6800/2322402219 2479 ==== osd_op_reply(2043 bucket.sync-target-hints.yournamehere-c4tl7erzua05m62o-37 [call,read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 243+0+0 (crc 0 0 0) 0x17c7c310 con 0x17ba6930

The log above shows RGWPutObj performing two extra cache lookups and rados reads during a single s3:put_obj request, for the bucket.sync-source-hints and bucket.sync-target-hints objects, both of which return -ENOENT on a cluster with no multisite configuration. The fix from https://github.com/ceph/ceph/pull/45357 that avoided these reads was removed during the multisite reshard work by https://github.com/ceph/ceph/commit/6f83f07d7f1d5301b9bb99e557565087b4ccf1a3.
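
For illustration only, a minimal standalone sketch of the kind of guard such a fix implies: skip the sync-hint reads unless replication could actually apply. All names here (ZoneGroup, BucketInfo, need_sync_hints) are hypothetical stand-ins, not actual RGW types or the code from the pull request.

// Standalone sketch (not RGW code): model the decision to skip the
// bucket.sync-{source,target}-hints reads when multisite cannot apply.
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct SyncPolicy {};            // stand-in for a bucket/zonegroup sync policy

struct ZoneGroup {
  std::vector<std::string> zones;        // zone names in this zonegroup
  std::optional<SyncPolicy> sync_policy; // zonegroup-level policy, if any
};

struct BucketInfo {
  std::optional<SyncPolicy> sync_policy; // bucket-level policy, if any
};

// Only consult the sync-hint objects when replication could actually apply:
// more than one zone, or an explicit sync policy somewhere.
bool need_sync_hints(const ZoneGroup& zg, const BucketInfo& bucket) {
  return zg.zones.size() > 1 ||
         zg.sync_policy.has_value() ||
         bucket.sync_policy.has_value();
}

int main() {
  ZoneGroup single_zone{{"default"}, std::nullopt};
  BucketInfo plain_bucket{std::nullopt};

  if (!need_sync_hints(single_zone, plain_bucket)) {
    std::cout << "single-zone setup: skip sync-hint reads on PutObj\n";
  }

  ZoneGroup multisite{{"us-east", "us-west"}, std::nullopt};
  if (need_sync_hints(multisite, plain_bucket)) {
    std::cout << "multisite setup: read sync hints as before\n";
  }
  return 0;
}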


Related issues

Related to rgw - Bug #54531: RGWPutObj accessing two 'bucket sync policy' objects without multisite configured Pending Backport
Copied to rgw - Backport #61455: reef: RGWPutObj accessing two 'bucket sync policy' objects without multisite configured New

History

#1 Updated by Casey Bodley 10 months ago

  • Related to Bug #54531: RGWPutObj accessing two 'bucket sync policy' objects without multisite configured added

#2 Updated by Casey Bodley 10 months ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 51734

#3 Updated by Casey Bodley 10 months ago

  • Status changed from Fix Under Review to Pending Backport

#4 Updated by Backport Bot 10 months ago

  • Copied to Backport #61455: reef: RGWPutObj accessing two 'bucket sync policy' objects without multisite configured added

#5 Updated by Backport Bot 10 months ago

  • Tags changed from multisite to multisite backport_processed
