
Re: [Xen-devel] [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.



>>> On 15.06.16 at 11:50, <george.dunlap@xxxxxxxxxx> wrote:
> On 14/06/16 14:31, Jan Beulich wrote:
>>>>> On 14.06.16 at 15:13, <george.dunlap@xxxxxxxxxx> wrote:
>>> On 14/06/16 11:45, Jan Beulich wrote:
>>>> Locking is somewhat strange here: You protect against the "set"
>>>> counterpart altering state while you retrieve it, but you don't
>>>> protect against the returned data becoming stale by the time
>>>> the caller can consume it. Is that not a problem? (The most
>>>> concerning case would seem to be a race of hvmop_set_mem_type()
>>>> with de-registration of the type.)
>>>
>>> How is that different from calling set_mem_type() first, and then
>>> de-registering without first unmapping all the types?
>> 
>> Didn't we all agree this is something that should be disallowed
>> anyway (not that I've seen this implemented, i.e. just being
>> reminded of it by your reply)?
> 
> I think I suggested it as a good idea, but Paul and Yang both thought it
> wasn't necessary.  Do you think it should be a requirement?

I think things shouldn't be left in a half-adjusted state.

> We could have the de-registering operation fail in those circumstances;
> but probably a more robust thing to do would be to have Xen go change
> all the ioreq_server entries back to ram_rw (since if the caller just
> ignores the failure, things are in an even worse state).

If that's reasonable to do without undue delay (e.g. by using
the usual "recalculate everything" approach, forced to trickle down
through the page table levels), then that's just as good.
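
(As an illustration only, and not part of the patch under review: a
minimal sketch of that idea might look like the following, assuming
the existing p2m_change_entry_type_global() recalculation path can be
invoked at de-registration time. The helper name is made up for the
example, and the locking question raised above still applies.)

/*
 * Hypothetical helper: on ioreq server de-registration, sweep any
 * remaining p2m_ioreq_server entries back to ordinary RAM.  This
 * leans on the lazy "recalculate everything" machinery rather than
 * walking every p2m entry eagerly.
 */
static void reset_ioreq_server_mem_type(struct domain *d)
{
    p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
}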

>>>>> +    uint32_t flags;     /* IN - types of accesses to be forwarded to the
>>>>> +                           ioreq server. flags with 0 means to unmap the
>>>>> +                           ioreq server */
>>>>> +#define _HVMOP_IOREQ_MEM_ACCESS_READ 0
>>>>> +#define HVMOP_IOREQ_MEM_ACCESS_READ \
>>>>> +    (1u << _HVMOP_IOREQ_MEM_ACCESS_READ)
>>>>> +
>>>>> +#define _HVMOP_IOREQ_MEM_ACCESS_WRITE 1
>>>>> +#define HVMOP_IOREQ_MEM_ACCESS_WRITE \
>>>>> +    (1u << _HVMOP_IOREQ_MEM_ACCESS_WRITE)
>>>>
>>>> Is there any use for these _HVMOP_* values? The more that they
>>>> violate standard C name space rules?
>>>
>>> I assume he's just going along with what he sees in params.h.
>>> "Violating standard C name space rules" by having #defines which start
>>> with a single _ seems to be a well-established policy for Xen. :-)
>> 
>> Sadly, and I'm trying to prevent matters becoming worse.
>> Speaking of which - there are XEN_ prefixes missing here too.
> 
> Right, so in that case I think I would have said, "I realize that lots
> of other places in the Xen interface use this sort of template for
> flags, but I think it's a bad idea and I'm trying to stop it expanding.
>  Is there any actual need to have the bit numbers defined separately?
> If not, please just define each flag as (1u << 0), &c."

Actually my coding-style-related comment wasn't about these two-stage
definitions - for those I simply questioned whether they're
needed. My style complaint was about the <underscore><uppercase>
name pattern (which would simply be avoided by not having the
individual bit-number #define-s).
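
(For illustration only, not taken from the actual patch: the naming
being asked for would roughly look like the following, with a XEN_
prefix and each flag defined directly as a shifted value, so no
leading-underscore bit-number macros are needed.)

#define XEN_HVMOP_IOREQ_MEM_ACCESS_READ   (1u << 0)
#define XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE  (1u << 1)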

> I think you've tripped over "changing coding styles" in unfamiliar code
> before too, so you know how frustrating it is to try to follow the
> existing coding style only to be told that you did it wrong. :-)

Agreed, you caught me on this one. Albeit with the slight
difference that in the public interface we can't as easily correct
old mistakes to aid people who simply clone surrounding code
when adding new bits (the possibility of adding #ifdef-ery doesn't
seem very attractive to me there, unless we got reports of actual
name space collisions).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
