
Re: [Xen-devel] [PATCH V3] tools/libxc, xen/x86: Added xc_set_mem_access_multi()



On 09/06/2016 01:26 PM, Jan Beulich wrote:
>>>> On 06.09.16 at 12:16, <ian.jackson@xxxxxxxxxxxxx> wrote:
>> Razvan Cojocaru writes ("[PATCH V3] tools/libxc, xen/x86: Added 
>> xc_set_mem_access_multi()"):
>>> Currently it is only possible to set mem_access restrictions for a
>>> contiguous range of GFNs (or, as a particular case, for a single GFN).
>>> This patch introduces a new libxc function taking an array of GFNs.
>>> The alternative would be to set each page in turn, paying a
>>> userspace/hypervisor round trip per call and triggering a TLB flush
>>> for each page set.
>>>
>>> Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>>> Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
>>
>> I have no objection with my tools maintainer hat on.  But I have a
>> question for you and/or the hypervisor maintainers:
>>
>> Could this aim be achieved with a multicall?  (Can multicalls defer
>> the TLB flush?)
> 
> No, they can't, but making them do so is not entirely out of the
> question.  Also, IIRC there are no multicalls available to HVM (and
> hence PVHv2) guests right now.

Oh, right, Ian was talking about the Xen multicall mechanism, not simply
chaining userspace xc_set_mem_access() calls.

In any case, each individual xc_set_mem_access() call triggers a TLB
flush on the hypervisor side, and AFAIK there is currently no hypercall
available for requesting just that flush separately (e.g. once at the
end of a batch). Other than that, I've never used the multicall
mechanism, so I'm not sure what else it would imply in this case.
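
To make the trade-off concrete, here is a minimal sketch (not part of
the patch) of the two approaches from the toolstack's point of view.
It assumes the xc_set_mem_access_multi() prototype proposed in this
series (an array of GFNs plus one access byte per GFN); the helper
names and the choice of XENMEM_access_r are purely illustrative.

#include <stdint.h>
#include <xenctrl.h>

/* Today: one hypercall, and one hypervisor-side TLB flush, per GFN. */
static int restrict_one_by_one(xc_interface *xch, uint32_t domid,
                               const uint64_t *gfns, uint32_t nr)
{
    for ( uint32_t i = 0; i < nr; i++ )
    {
        int rc = xc_set_mem_access(xch, domid, XENMEM_access_r,
                                   gfns[i], 1);
        if ( rc )
            return rc;
    }
    return 0;
}

/* With the patch: a single hypercall covering a (possibly
 * non-contiguous) set of GFNs, each with its own access type, and a
 * single flush at the end. */
static int restrict_in_one_go(xc_interface *xch, uint32_t domid,
                              uint64_t *gfns, uint8_t *access,
                              uint32_t nr)
{
    return xc_set_mem_access_multi(xch, domid, access, gfns, nr);
}

Either helper would be called with an xc_interface handle from
xc_interface_open(); the point is only that the second form lets the
hypervisor batch the p2m updates and flush once.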


Thanks,
Razvan
