Bug #45878 (closed)
RGW crash after two OSD failures
% Done:
0%
Source:
Development
Regression:
No
Severity:
1 - critical
Description
Steps to reproduce:
1) s3cmd mb s3://hello
2) s3cmd put /etc/hosts s3://hello
3) pkill osd
4) bin/ceph-osd -i 0 -c ./build/ceph.conf
5) s3cmd put /etc/hosts s3://hello
6) pkill osd -> RGW crashes
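The sequence above can be sketched as a small driver script. This is a hypothetical illustration: the command lines are taken from the report, while the `dry_run` wrapper and the explicit `s3://hello` upload target are assumptions for the sketch.

```python
# Hypothetical sketch of the reproduction sequence from this report.
# Commands are run against a local vstart-style cluster; dry_run=True
# only lists the steps instead of executing them.
import subprocess

REPRO_STEPS = [
    ["s3cmd", "mb", "s3://hello"],                           # 1) create bucket
    ["s3cmd", "put", "/etc/hosts", "s3://hello"],            # 2) upload object
    ["pkill", "osd"],                                        # 3) kill the OSD
    ["bin/ceph-osd", "-i", "0", "-c", "./build/ceph.conf"],  # 4) restart OSD 0
    ["s3cmd", "put", "/etc/hosts", "s3://hello"],            # 5) upload again
    ["pkill", "osd"],                                        # 6) kill OSD again -> RGW crashes
]

def run_repro(dry_run=True):
    """Run (or just list) the reproduction steps in order."""
    executed = []
    for cmd in REPRO_STEPS:
        executed.append(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=False)
    return executed
```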
Updated by Or Friedmann almost 4 years ago
It looks like it is correlated with this PR:
Updated by Matt Benjamin almost 4 years ago
Or indicated that he has core files :)
Matt
Updated by Or Friedmann almost 4 years ago
It can also be reproduced by:
1) pkill osd
2) bin/ceph-osd -i 0 -c ./build/ceph.conf
3) pkill osd
Updated by Or Friedmann almost 4 years ago
- File radosgw-crash.log.zip radosgw-crash.log.zip added
Logs of the RGW with debug_objecter = 20
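For reference, a debug level like this can be set in the ceph.conf used above. This is a minimal sketch; the section name assumes a default vstart-style RGW client instance:

```ini
[client.rgw]
        debug objecter = 20     ; verbose Objecter logging, as in the attached log
        debug rgw = 20          ; optional: matching RGW-side verbosity
```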
Updated by Adam Emerson almost 4 years ago
- Status changed from New to Fix Under Review
Have a fix in https://github.com/ceph/ceph/pull/35422
Updated by Kefu Chai almost 4 years ago
- Status changed from Fix Under Review to Pending Backport
Updated by Nathan Cutler almost 4 years ago
- Status changed from Pending Backport to Resolved
https://github.com/ceph/ceph/pull/32601 (which caused this bug, right?) went in post-Octopus and is not being backported
@Kefu Chai - I'm assuming you set this to "Pending Backport" by mistake. Please correct me if I'm wrong.