
Re: [PATCH V4] xen/gnttab: Store frame GFN in struct page_info on Arm




On 04.01.22 10:36, Jan Beulich wrote:

Hi Jan


On 22.12.2021 13:44, Oleksandr wrote:
On 22.12.21 14:33, Julien Grall wrote:
Hi Jan,

Hi Julien, Jan



On 22/12/2021 13:05, Jan Beulich wrote:
On 22.12.2021 11:01, Julien Grall wrote:
On 14/12/2021 17:45, Jan Beulich wrote:
On 14.12.2021 17:26, Oleksandr wrote:
On 14.12.21 15:37, Jan Beulich wrote:
On 03.12.2021 21:33, Oleksandr Tyshchenko wrote:
@@ -2177,14 +2181,22 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
 
 void free_xenheap_pages(void *v, unsigned int order)
 {
+    struct page_info *pg;
+    unsigned int i;
+
     ASSERT(!in_irq());
 
     if ( v == NULL )
         return;
 
+    pg = virt_to_page(v);
+
     memguard_guard_range(v, 1 << (order + PAGE_SHIFT));
... this really wants to (logically) move into the new arch hooks.
That'll effectively mean to simply drop the Arm stubs afaict (and I
notice there's some dead code there on x86, which I guess I'll make
a patch to clean up). But first of all this suggests that you want
to call the hooks with base page and order, putting the loops there.
I see your point and agree ... However, I see the on-list patches that
remove the common memguard_* invocations and the x86 bits.
So I assume this request no longer applies, or do I still need to pass
an order to the new arch hooks? Please clarify.
Well, that patch (really just the Arm one) effectively takes care of
part of what I did say above. Irrespective I continue to think that
the hook should take a (page,order) tuple instead of getting invoked
once for every order-0 page. And the hook invocations should be placed
such that they could fulfill the (being removed) memguard function
(iirc that was already the case, at least mostly).
IIUC, with your approach alloc_xenheap_pages() would look like:

        for ( i = 0; i < (1u << order); i++ )
            pg[i].count_info |= PGC_xen_heap;

        arch_alloc_xenheap_pages(pg, 1U << order);
Like Oleksandr said, the 2nd argument would be just "order".

The Arm implementation for arch_alloc_xenheap_pages() would also
contain
a loop.

This could turn out to be quite expensive for a large allocation (a 1GB
allocation is 262144 4KB pages; at 64 bytes per struct page_info that is
16MB of cache) because the cache may not have enough space to contain
all the page_info of that range. So you would have to pull each
page_info into the cache twice.
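For illustration, the hook shape being discussed could be sketched as below. This is a minimal self-contained mock, not Xen code: struct page_info is reduced to the two fields touched here, and the PGC_xen_heap / PGT_gfn_mask values are placeholders. The point it shows is that the per-page loop moves inside the arch hook, and that common code then walks the same page_info range twice.

```c
#include <assert.h>

/* Reduced stand-in for Xen's struct page_info -- only the fields
 * touched in this discussion; the real layout differs. */
struct page_info {
    unsigned long count_info;
    unsigned long type_info;     /* stands in for u.inuse.type_info */
};

#define PGC_xen_heap (1UL << 0)          /* placeholder bit position */
#define PGT_gfn_mask ((1UL << 28) - 1)   /* placeholder gfn field mask */

/* Arm-flavoured hook taking (base page, order): the per-page loop
 * lives inside the hook, which is the variant costed out above. */
static void arch_alloc_xenheap_pages(struct page_info *pg, unsigned int order)
{
    for ( unsigned long i = 0; i < (1UL << order); i++ )
        pg[i].type_info |= PGT_gfn_mask;   /* "invalid GFN" pattern */
}

/* Common-code tail of alloc_xenheap_pages(): two passes over the same
 * page_info range -- the double cache pull mentioned above. */
static void mark_xenheap(struct page_info *pg, unsigned int order)
{
    for ( unsigned long i = 0; i < (1UL << order); i++ )
        pg[i].count_info |= PGC_xen_heap;

    arch_alloc_xenheap_pages(pg, order);   /* note: order, not 1U << order */
}
```

An order-0 variant would instead call the hook once per page from inside the first loop, which avoids the second pass at the cost of a function call per page.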
Hmm, that's a fair point. I assume you realize that a similar issue of
higher overhead would occur when using your approach, and when some
memguard-like thing was to reappear: Such mapping operations typically
are more efficient when done on a larger range.
Yes, I was aware of that when I wrote my message. However, they are
not necessary at the moment. So I think we can defer the discussion.

  Since that's only a
hypothetical use at this point, I'm willing to accept your preference.
I'd like us to consider one more aspect though: All you need on Arm is
the setting of the exact same bits to the exact same pattern for every
struct page_info involved. Can't we simply have an arch hook returning
that pattern, for generic code to then OR it in alongside PGC_xen_heap?
arch_alloc_xenheap_pages() will modify inuse.type_info, so we can't
simply OR the value in alongside PGC_xen_heap.
I wonder whether we can apply the pattern here in alloc_heap_pages() when
initializing type_info:
https://xenbits.xen.org/gitweb/?p=xen.git;f=xen/common/page_alloc.c;hb=refs/heads/master#l1027
If yes, the next question would be what indicator to use here to make
sure that a page really is a xenheap page.
Technically that would seem to be possible, by way of passing yet another
argument into alloc_heap_pages(). I'm not (yet) convinced, though, of this
being desirable.

Yes, adding an extra argument would work, but I also don't like that idea much. I meant determining somehow from the existing arguments (zone_*, memflags) that a particular page is a xenheap page.



I also wonder whether we could apply the pattern to all types of pages
here (without differentiating)?
I'm afraid I don't understand this part: How could we get along without
differentiating Xen heap and domain heap pages?

I was thinking about what bad could happen if we simply used the following:

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 50334a0..97cf0d8 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1024,7 +1024,7 @@ static struct page_info *alloc_heap_pages(
                                 &tlbflush_timestamp);
 
         /* Initialise fields which have other uses for free pages. */
-        pg[i].u.inuse.type_info = 0;
+        pg[i].u.inuse.type_info = PGT_TYPE_INFO_INIT_PATTERN;
         page_set_owner(&pg[i], NULL);
     }


on Arm:
#define PGT_TYPE_INFO_INIT_PATTERN   PGT_gfn_mask
or
#define PGT_TYPE_INFO_INIT_PATTERN   gfn_x(PGT_INVALID_XENHEAP_GFN)

on x86:
#define PGT_TYPE_INFO_INIT_PATTERN   0


Yes, we apply this pattern to *all* pages, although the gfn portion is only used for xenheap pages. I might be mistaken, but I think this pattern (which doesn't set any bits outside of the gfn portion) is harmless for non-xenheap pages, albeit an extra action.
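That harmlessness argument can be sketched as below. The field width and macro names are placeholders mirroring the thread, not Xen's actual layout: the Arm pattern only sets bits inside the gfn portion, so any code that looks at the type bits still sees the same zero it saw before the change.

```c
/* Placeholder layout: the low 28 bits are the gfn portion of
 * type_info; the width is illustrative, not Xen's actual value. */
#define PGT_gfn_width 28
#define PGT_gfn_mask  ((1UL << PGT_gfn_width) - 1)

/* Arm flavour: all-ones gfn portion encodes "no GFN associated".
 * The x86 flavour would simply be 0. */
#define PGT_TYPE_INFO_INIT_PATTERN PGT_gfn_mask

/* What alloc_heap_pages() would store for every page under the
 * blanket-initialization proposal above. */
static unsigned long init_type_info(void)
{
    return PGT_TYPE_INFO_INIT_PATTERN;
}
```

Since the pattern is confined to the gfn field, a type-bit check such as `type_info & ~PGT_gfn_mask` still evaluates to 0 for a freshly allocated page, which is why initializing domheap pages with the same pattern looks like a no-op apart from the extra store.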

Well, if that is not appropriate, I will follow the advice to OR the pattern in alloc_xenheap_pages(). For !CONFIG_SEPARATE_XENHEAP the loop is already there, but for CONFIG_SEPARATE_XENHEAP a new loop is needed.


Thank you.



Jan

--
Regards,

Oleksandr Tyshchenko




 

