
Re: [Xen-devel] [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory

On 01/06/2017 21:41, Boris Ostrovsky wrote:
On 06/01/2017 11:38 AM, Julien Grall wrote:
Hi Boris,

On 01/06/17 16:16, Boris Ostrovsky wrote:
On 06/01/2017 10:01 AM, Julien Grall wrote:
Hi Boris,

On 01/06/17 14:33, Boris Ostrovsky wrote:
On 06/01/2017 08:50 AM, Julien Grall wrote:
Hi Boris,

On 31/05/17 14:54, Boris Ostrovsky wrote:
On 05/31/2017 09:03 AM, Julien Grall wrote:
Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
granularity" did not go far enough to support 64KB in mmap_batch_fn.

The variable 'nr' is the number of 4KB chunks to map. However, when
Linux is using 64KB page granularity the array of pages
(vma->vm_private_data) contains one page per 64KB. Fix it by
incrementing st->index correctly.

Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
XEN_PAGE_SIZE.

Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
granularity")
CC: stable@xxxxxxxxxxxxxxx
Reported-by: Feng Kan <fkan@xxxxxxx>
Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
---
 drivers/xen/privcmd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 7a92a5e1d40c..feca75b07fdd 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr, void *state)
                 st->global_error = 1;
         }
     }
-    st->va += PAGE_SIZE * nr;
-    st->index += nr;
+    st->va += XEN_PAGE_SIZE * nr;
+    st->index += nr / XEN_PFN_PER_PAGE;

     return 0;
 }
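
To see the unit mismatch concretely, here is a minimal user-space
sketch, assuming a 64KB-page build; the constants mirror the
definitions in include/xen/page.h, but the program itself is
illustrative, not the driver code:

    #include <stdio.h>

    /* Stand-ins for the kernel's constants on a 64KB-page build. */
    #define PAGE_SIZE        65536UL    /* Linux page size   */
    #define XEN_PAGE_SIZE     4096UL    /* Xen ABI page size */
    #define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)    /* = 16 */

    int main(void)
    {
        unsigned long va = 0, index = 0;
        unsigned long nr = 32;          /* one batch of 32 4KB frames */

        /* The old code advanced va by PAGE_SIZE * nr (2MB here) and
         * index by nr (32), although pages[] holds one entry per
         * 64KB page. The fixed code, matching the hunk above: */
        va    += XEN_PAGE_SIZE * nr;    /* 32 * 4KB = 128KB  */
        index += nr / XEN_PFN_PER_PAGE; /* 32 / 16  = 2 pages */

        printf("va += %lu bytes, index += %lu pages\n", va, index);
        return 0;
    }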


Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?

Do you mean in the xen_xlate_remap_gfn_array implementation? If so,
there is no use of PAGE_MASK there, as the code has been converted to
support 64K page granularity.

If you mean the x86 version of xen_remap_domain_gfn_array, then we
don't really care, as x86 only uses 4KB page granularity.


I meant right above the change that you made. Should it also be
replaced
with XEN_PAGE_MASK? (Sorry for being unclear.)

Oh. The code in xen_remap_domain_gfn_array relies on st->va being
page-aligned, so I think we want to keep PAGE_MASK here.
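
For context, the PAGE_MASK use being discussed is the address argument
passed to xen_remap_domain_gfn_array() a few lines above the hunk,
which (quoting from memory, so worth checking against the tree) looks
roughly like:

    ret = xen_remap_domain_gfn_array(st->vma, st->va & PAGE_MASK, ...);

Since st->va starts at vma->vm_start and, per the discussion below,
advances by a multiple of PAGE_SIZE on every intermediate batch, the
masking there stays a no-op.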

Does this imply then that 'nr' 4K pages always add up to an integral
number of PAGE_SIZE pages (i.e. (nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0),
and if yes, do we test this somewhere? I don't see it.


I now see that this should (obviously) stay as PAGE_MASK, so

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>

Thank you!


but

nr might be smaller for the last batch. But all the intermediate
batches should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).

how can we have nr not covering full PAGE_SIZEs? If you are using 64K
pages, how can you map, say, only 4K (if nr==1)?

Xen is using 4K page granularity, and so do the toolstack and the
hypercall interface. They are unaware of the page size of the kernel.

So even if Linux is using 64K pages, the resulting mapping will be
broken down into 4K chunks to issue the hypercall.

To give you a concrete example: if the toolstack requests to map 4K,
Linux will allocate a 64K page. Only the first 4K chunk (0-4K) will
have an effective mapping to host RAM; the rest (4-64K) will not be
mapped.
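
Worked numbers (invented for illustration, with XEN_PFN_PER_PAGE = 16
on a 64K-page build): a request for 40 4KB frames could reach
mmap_batch_fn as batches of 16, 16 and 8:

    batch 0: nr = 16 -> st->index += 16 / 16 = 1   (pages[0] fully used)
    batch 1: nr = 16 -> st->index += 16 / 16 = 1   (pages[1] fully used)
    batch 2: nr =  8 -> st->index +=  8 / 16 = 0   (pages[2] half used; last batch)

Only the final, short batch fails (nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0,
so the truncating division never skips a page that a later batch would
still need.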

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
