Bug #18586

osd map update sending -1 in flags when pool hits quota

Added by Jeff Layton over 7 years ago. Updated about 7 years ago.

Status: Rejected
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
#1

Updated by Jeff Layton over 7 years ago

  • Severity changed from 3 - minor to 2 - major

I've been working on adding the new -ENOSPC handling to kcephfs (http://tracker.ceph.com/issues/17204) and am hitting a problem that I suspect is an OSD or monitor bug.

What I'm doing is setting the pool quota on the cephfs data pool to ~10M and then running dd with O_DIRECT writes to fill it up.
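
For reference, here's a rough C++ sketch of that fill step. The mount point, file path, pool name, and block counts are placeholders, not taken from this report, and the quota itself would be set beforehand with something like `ceph osd pool set-quota cephfs_data max_bytes 10485760`:

```
// Rough equivalent of the dd step: O_DIRECT writes into a file on the
// kernel CephFS mount until the ~10M pool quota is exceeded.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // for O_DIRECT on glibc
#endif
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    const char *path = "/mnt/cephfs/fillfile";  // placeholder path
    const size_t blksize = 1 << 20;             // 1 MiB, O_DIRECT-aligned

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    void *buf;
    if (posix_memalign(&buf, 4096, blksize) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    memset(buf, 0xab, blksize);

    // Write well past the quota; the write that crosses it is expected to
    // block until the client sees the full flag in an OSD map update.
    for (int i = 0; i < 20; i++) {
        ssize_t n = write(fd, buf, blksize);
        if (n < 0) {
            perror("write");
            break;
        }
        printf("wrote block %d (%zd bytes)\n", i, n);
    }

    free(buf);
    close(fd);
    return 0;
}
```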

When the quota is hit, the last OSD write hangs as expected, and an incremental OSD map update arrives soon afterward that should pass along the CEPH_OSDMAP_FULL flag for the pool. That map update has -1 in its new_flags field, though, which causes the client to ignore it, so it never takes any steps to complete the outstanding OSD writes.

So far I'm relying on debugging output in the kclient to determine this. I've poked around in the ceph code and can see that the Incremental constructor initializes the new_flags field to -1. My suspicion is that something should be updating that field to the new flags value before the map is sent out, but I'm not sure where that should be happening.
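
To illustrate the convention I think is in play here, a simplified sketch (these are not the actual Ceph classes; the names, types, and flag bit are illustrative only): an incremental map field left at its -1 constructor default means "no change", so a consumer only applies new_flags when it is non-negative.

```
// Illustrative-only sketch of the "-1 means unchanged" convention for
// incremental OSD map flags; the real structures live in Ceph's OSDMap code.
#include <cstdint>
#include <cstdio>

static const uint32_t OSDMAP_FULL_FLAG = 1u << 1;  // placeholder bit value

struct IncrementalSketch {
    int64_t new_flags = -1;  // constructor default: "no change to flags"
};

struct OSDMapSketch {
    uint32_t flags = 0;

    void apply_incremental(const IncrementalSketch &inc)
    {
        // Only a non-negative new_flags replaces the current flags; an
        // incremental still carrying the default -1 leaves them alone,
        // which is why the client appeared to ignore the update.
        if (inc.new_flags >= 0)
            flags = static_cast<uint32_t>(inc.new_flags);
    }
};

int main()
{
    OSDMapSketch map;

    IncrementalSketch unchanged;          // new_flags left at -1
    map.apply_incremental(unchanged);
    printf("after default inc: flags=0x%x\n", (unsigned)map.flags);  // still 0x0

    IncrementalSketch full;
    full.new_flags = OSDMAP_FULL_FLAG;    // what the sender would need to set
    map.apply_incremental(full);
    printf("after full inc:    flags=0x%x\n", (unsigned)map.flags);  // full bit set
    return 0;
}
```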

#2

Updated by Jeff Layton about 7 years ago

  • Status changed from New to Rejected

NOTABUG. I got confused here by some protocol changes that occurred between John's original kernel patchset posting and what went into mainline ceph code. Once I got that straightened out, it worked correctly.
