
Re: [Xen-devel] [PATCH v2 4/4] x86: fix pinned cache attribute handling



>>> On 04.04.14 at 16:30, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> On Mon, 31 Mar 2014, Jan Beulich wrote:
>> >>> On 28.03.14 at 19:00, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>> > I miss some context here.
>> > What is the issue with xc_domain_pin_memory_cacheattr_range and how does
>> > it affect QEMU (that uses the xc_domain_pin_memory_cacheattr variety)?
>> 
>> The issue is that the hypervisor (and hence libxc) interface expects the
>> passed range to be inclusive, yet the ending page number all the QEMU
>> versions pass is one past the intended range.
> 
> Thanks for the clear explanation.
> Is this patch what you are looking for?
> 
> ---
> 
> diff --git a/xen-all.c b/xen-all.c
> index ba34739..027e7a8 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -323,7 +323,7 @@ go_physmap:
>  
>      xc_domain_pin_memory_cacheattr(xen_xc, xen_domid,
>                                     start_addr >> TARGET_PAGE_BITS,
> -                                   (start_addr + size) >> TARGET_PAGE_BITS,
> +                                   (start_addr + size - 1) >> TARGET_PAGE_BITS,
>                                     XEN_DOMCTL_MEM_CACHEATTR_WB);
>  
>      snprintf(path, sizeof(path),

Something along these lines, yes, for all maintained trees. The main thing
I'm not sure about is whether size can ever be zero or not page-aligned -
in either case further care would need to be taken.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

