Bug #4652
libceph: messages from pool not initialized
Description
This may not have been a problem until some of my recent
changes to the messenger (and osd client), but...
An osd request is allocated in the osd client using
ceph_osdc_alloc_request(). One of its parameters
indicates whether a memory pool should be used to
satisfy the request (so it's "guaranteed" to succeed).
If use_mempool is set, the osd request is allocated from
a request memory pool associated with the osd client,
its request message is allocated from a request ("op")
message pool (a ceph construct backed--sort of--by a
mempool), and its response message is allocated from
a response mempool.
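The two allocation paths can be illustrated with a minimal userspace sketch. The struct, pool, and function names below are simplified stand-ins, not the real ceph_osdc_alloc_request() or struct ceph_osd_request definitions; the point is only that a pool slot may come back with stale contents while the non-pool path (kzalloc()) hands back zeroed memory:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for the request structure; the fields are
 * illustrative, not the real kernel layout. */
struct osd_request {
    int r_num_ops;
    bool from_pool;
};

/* Hypothetical fixed-size pool standing in for the osd client's
 * request mempool. */
struct req_pool {
    struct osd_request slab[4];
    int next;
};

/* Hands back a slot that may still hold a previous user's contents --
 * this mirrors mempool behaviour, which does not zero on allocation. */
static struct osd_request *pool_alloc(struct req_pool *p)
{
    return &p->slab[p->next++ % 4];
}

/* Sketch of the two allocation paths described above. */
static struct osd_request *alloc_request(struct req_pool *p, bool use_mempool)
{
    struct osd_request *req;

    if (use_mempool) {
        req = pool_alloc(p);
        memset(req, 0, sizeof(*req)); /* request struct IS zeroed */
        req->from_pool = true;
    } else {
        req = calloc(1, sizeof(*req)); /* kzalloc() equivalent */
    }
    return req;
}
```

Note that the explicit memset() covers only the request structure itself; the request and response messages drawn from their own pools get no equivalent treatment, which is the gap this issue describes.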
The request structure itself is zeroed after allocation
(matching the kzalloc() used when use_mempool is not set).
The request and response messages, however, are not
zeroed after allocation. The result has triggered
an assertion in my new code. That's OK--what needs to
happen is when a message pool-allocated message is
returned in ceph_msgpool_put(), I need to re-set the
fields that I count on being in a reset state in the
next allocation. And I'm going to do that as a normal
part of what I'm developing (this code is not yet
committed).
This issue is to just suggest that we should probably
re-initialize a few more of the fields when they
are released back to the pool. For example:
- some fields in the header might be better re-initialized
(like middle_len, data_len, and so on).
- the crc's in the footer could be zeroed
- more_to_follow
- needs_out_seq (could probably just go away)
- ack_stamp, which is not really used, but a non-zero
value might be misleading
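The reset suggested above can be sketched as follows. These are simplified userspace stand-ins for struct ceph_msg and its header/footer; only the fields named in the list are modeled, the layout is illustrative rather than the real wire format, and msgpool_reset() is a hypothetical name for the work ceph_msgpool_put() would do:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative header/footer stand-ins, not the real wire format. */
struct msg_header {
    uint32_t middle_len;
    uint32_t data_len;
};

struct msg_footer {
    uint32_t front_crc;
    uint32_t middle_crc;
    uint32_t data_crc;
};

struct msg {
    struct msg_header hdr;
    struct msg_footer footer;
    bool more_to_follow;
    bool needs_out_seq;
    unsigned long ack_stamp;
};

/* Hypothetical reset performed when a message is returned to the
 * pool, so the next allocation sees these fields in a known state. */
static void msgpool_reset(struct msg *m)
{
    m->hdr.middle_len = 0;
    m->hdr.data_len = 0;
    memset(&m->footer, 0, sizeof(m->footer)); /* zero the crc's */
    m->more_to_follow = false;
    m->needs_out_seq = false;
    m->ack_stamp = 0; /* not really used, but stale values mislead */
}
```

Doing the reset at put time rather than at allocation time keeps the allocation path cheap and makes the invariant easy to state: any message sitting in the pool is in the reset state.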
I am not aware that these are causing problems, but
it seems like some of them might.
History
#1 Updated by Sage Weil over 10 years ago
- Project changed from Ceph to rbd
#2 Updated by Ilya Dryomov over 9 years ago
- Project changed from rbd to Linux kernel client