
Feature #6029

cachepool: osd: separate object version from pg version

Added by Sage Weil over 10 years ago. Updated over 10 years ago.

Status:
Resolved
Priority:
Normal
% Done:

0%

Source:
other

History

#1 Updated by Sage Weil over 10 years ago

  • Subject changed from osd: separate object version from pg version to osd: cachepool: separate object version from pg version

#2 Updated by Samuel Just over 10 years ago

The librados-visible version must be kept separate from the PG version. There must also be an Objecter interface, usable (in the future) from the OSD, for specifying the new version on a write, so that the cache pool OSD can update the slow pool with the actual new version on demotion.
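The split described above can be sketched roughly as follows. This is a hypothetical illustration of the concept, not Ceph's actual data structures or API: `ObjectState`, `apply_write`, and the field names are invented for clarity. The key point is that every write bumps the internal PG version, while the client-visible version can either auto-increment or be pinned to an explicit value supplied by the caller (as the cache tier would do when demoting an object to the base pool).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectState:
    user_version: int = 0  # version visible to librados clients
    pg_version: int = 0    # internal PG (replication/log) version

def apply_write(obj: ObjectState, new_pg_version: int,
                explicit_user_version: Optional[int] = None) -> ObjectState:
    """Apply a write: always advance the internal PG version, but let the
    caller optionally pin the user-visible version (the demotion case)."""
    obj.pg_version = new_pg_version
    if explicit_user_version is not None:
        # Demotion path: carry over the version already assigned
        # to this object while it lived in the cache pool.
        obj.user_version = explicit_user_version
    else:
        # Ordinary client write: user-visible version advances normally.
        obj.user_version += 1
    return obj
```

With this separation, the base pool's PG log can keep its own versioning for recovery while clients still observe the version history established in the cache tier.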

#3 Updated by Sage Weil over 10 years ago

  • Subject changed from osd: cachepool: separate object version from pg version to cachepool: osd: separate object version from pg version

#4 Updated by Sage Weil over 10 years ago

  • Story points set to 3.00

#5 Updated by Sage Weil over 10 years ago

  • Assignee set to Greg Farnum

#6 Updated by Greg Farnum over 10 years ago

  • Status changed from New to In Progress

Huh, I thought I had updated this already. Sage went over it and liked what he saw, but there were some test failures I need to look into.

#7 Updated by Greg Farnum over 10 years ago

  • Status changed from In Progress to 7

A-hah, I think I found it. I'm running an updated branch through a short set of tests and updating the documentation; then I will clean up the bug fixes and docs and do final tests for merging.

#8 Updated by Greg Farnum over 10 years ago

  • Status changed from 7 to Fix Under Review

There's a pull request at https://github.com/ceph/ceph/pull/549. I'd also like to schedule another suite run on it, but I don't expect any issues, since the only failure we saw has already passed through the rbd suite (including a previous trigger).

#9 Updated by Greg Farnum over 10 years ago

  • Status changed from Fix Under Review to Resolved

I merged this this morning: be9a39b766ba825ef348ca6e2de1f4db7c091dff

A suite run saw a lot of the btrfs locking failures, but the only other thing that turned up was a scrub failure which, I realized this morning (after the run went all night), was actually on the monitors. :D
