
Re: [Xen-devel] [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory



On 06/01/2017 11:38 AM, Julien Grall wrote:
> Hi Boris,
>
> On 01/06/17 16:16, Boris Ostrovsky wrote:
>> On 06/01/2017 10:01 AM, Julien Grall wrote:
>>> Hi Boris,
>>>
>>> On 01/06/17 14:33, Boris Ostrovsky wrote:
>>>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>>>> Hi Boris,
>>>>>
>>>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>>>> granularity" did
>>>>>>> not go far enough to support 64KB in mmap_batch_fn.
>>>>>>>
>>>>>>> The variable 'nr' is the number of 4KB chunks to map. However,
>>>>>>> when Linux is using 64KB page granularity, the array of pages
>>>>>>> (vma->vm_private_data) contains one page per 64KB. Fix it by
>>>>>>> incrementing st->index correctly.
>>>>>>>
>>>>>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>>>>>> XEN_PAGE_SIZE.
>>>>>>>
>>>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>>>>>> granularity")
>>>>>>> CC: stable@xxxxxxxxxxxxxxx
>>>>>>> Reported-by: Feng Kan <fkan@xxxxxxx>
>>>>>>> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
>>>>>>> ---
>>>>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>>>> --- a/drivers/xen/privcmd.c
>>>>>>> +++ b/drivers/xen/privcmd.c
>>>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr,
>>>>>>> void *state)
>>>>>>>                  st->global_error = 1;
>>>>>>>          }
>>>>>>>      }
>>>>>>> -    st->va += PAGE_SIZE * nr;
>>>>>>> -    st->index += nr;
>>>>>>> +    st->va += XEN_PAGE_SIZE * nr;
>>>>>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>>>>>
>>>>>>>      return 0;
>>>>>>>  }
>>>>>>
>>>>>>
>>>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>>>
>>>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so
>>>>> there are no use of PAGE_MASK as the code has been converted to
>>>>> support 64K page granularity.
>>>>>
>>>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>>>> don't really care as x86 only uses 4KB page granularity.
>>>>
>>>>
>>>> I meant right above the change that you made. Should it also be
>>>> replaced
>>>> with XEN_PAGE_MASK? (Sorry for being unclear.)
>>>
>>> Oh. The code in xen_remap_domain_gfn_array is relying on st->va to be
>>> page aligned. So I think we want to keep PAGE_MASK here.
>>
>> Does this imply then that 'nr' 4K pages is an integral number of
>> PAGE_SIZEs (i.e. (nr*XEN_PAGE_SIZE) % PAGE_SIZE == 0) and if yes --- do
>> we test this somewhere? I don't see it.
>

I now see that this should (obviously) stay as PAGE_MASK, so

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>

but

> nr might be smaller for the last batch. But all the intermediate
> batches should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).

how can we have nr not covering full PAGE_SIZEs? If you are using 64K
pages, how can you map, say, only 4K (if nr==1)?

-boris

>
> I think the BUILD_BUG_ON in privcmd_ioctl_mmap_batch ensures that all
> the intermediate batches will always be an integral number of PAGE_SIZE.
>
> Cheers,
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
