
Re: [PATCH V4] xen/gnttab: Store frame GFN in struct page_info on Arm




On 14.12.21 15:37, Jan Beulich wrote:

Hi Jan, Julien.

On 03.12.2021 21:33, Oleksandr Tyshchenko wrote:
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1382,8 +1382,10 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
      spin_lock(&d->page_alloc_lock);
/* The incremented type count pins as writable or read-only. */
-    page->u.inuse.type_info =
-        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
+    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
+    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
+                                                  : PGT_writable_page) |
+                                MASK_INSR(1, PGT_count_mask);
It's certainly up to the Arm maintainers to judge, but I would have
deemed it better (less risky going forward) if PGT_count_mask
continued to use the bottom bits. (I guess I may have said so before.)

If I am not mistaken, the request was to make sure (re-check) that moving the count portion up is compatible with all present uses. The code above is the only place on Arm that needs updating.
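
For context, here is a minimal standalone sketch of the MASK_INSR()/MASK_EXTR() idea (the mask placement and the definitions below are illustrative, not copied from the Xen headers): the count is shifted into whatever field PGT_count_mask describes, so the code above works regardless of whether the count sits in the bottom or the top bits.

#include <stdio.h>

/* Illustrative re-statements of the mask helpers. */
#define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))
#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))

/* Hypothetical placement of the type count in the upper bits. */
#define PGT_count_mask  (0x3fUL << 58)

int main(void)
{
    unsigned long type_info = 0;

    /* Pin the page: store a type count of 1 in the count field. */
    type_info |= MASK_INSR(1UL, PGT_count_mask);

    printf("count = %lu\n", MASK_EXTR(type_info, PGT_count_mask));
    return 0;
}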



@@ -1487,7 +1489,23 @@ int xenmem_add_to_physmap_one(
      }
/* Map at new location. */
-    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
+    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
+        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
+    else
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        p2m_write_lock(p2m);
+        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
+        {
+            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
+            if ( !rc )
+                page_set_xenheap_gfn(mfn_to_page(mfn), gfn);
+        }
+        else
+            rc = -EBUSY;
May I suggest to avoid failing here when page_get_xenheap_gfn(mfn_to_page(mfn))
matches the passed in GFN?


Good question...
There was an explicit request to fail here if page_get_xenheap_gfn() returns a valid GFN. On the other hand, if the old GFN matches the new GFN we do not remove the mapping in gnttab_set_frame_gfn(), so we could probably avoid failing here in that particular case. @Julien, what do you think?
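
For illustration, the relaxation Jan suggests could look something like this on top of the hunk above (only a sketch, subject to Julien's opinion): a stored GFN equal to the passed-in one is treated the same as INVALID_GFN.

        struct p2m_domain *p2m = p2m_get_hostp2m(d);
        gfn_t ogfn;

        p2m_write_lock(p2m);
        ogfn = page_get_xenheap_gfn(mfn_to_page(mfn));
        /* Allow re-inserting at the same GFN; only a different live mapping is an error. */
        if ( gfn_eq(ogfn, INVALID_GFN) || gfn_eq(ogfn, gfn) )
        {
            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
            if ( !rc )
                page_set_xenheap_gfn(mfn_to_page(mfn), gfn);
        }
        else
            rc = -EBUSY;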



@@ -2169,6 +2170,9 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
      if ( unlikely(pg == NULL) )
          return NULL;
+    for ( i = 0; i < (1u << order); i++ )
+        arch_alloc_xenheap_page(&pg[i]);
+
      memguard_unguard_range(page_to_virt(pg), 1 << (order + PAGE_SHIFT));
I think this and ...

@@ -2177,14 +2181,22 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
void free_xenheap_pages(void *v, unsigned int order)
  {
+    struct page_info *pg;
+    unsigned int i;
+
      ASSERT(!in_irq());
if ( v == NULL )
          return;
+    pg = virt_to_page(v);
+
      memguard_guard_range(v, 1 << (order + PAGE_SHIFT));
... this really want to (logically) move into the new arch hooks.
That'll effectively mean to simply drop the Arm stubs afaict (and I
notice there's some dead code there on x86, which I guess I'll make
a patch to clean up). But first of all this suggests that you want
to call the hooks with base page and order, putting the loops there.

I see your point and agree ... However, I see the on-list patches that remove the common memguard_* invocations and the x86 bits. So I assume this request no longer applies, or do I still need to pass an order to the new arch hooks? Please clarify.
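
In case the order-based variant is still wanted, a rough sketch of what the Arm side might look like (the names and the INVALID_GFN initialisation are my assumption, not taken from the series):

/* Sketch only: hooks taking the base page and the order, so the
 * per-page loop lives in the arch code rather than in common
 * page_alloc.c. Assumes the Arm hooks merely (re)initialise the
 * stored xenheap GFN. */
static inline void arch_alloc_xenheap_pages(struct page_info *pg,
                                            unsigned int order)
{
    unsigned int i;

    for ( i = 0; i < (1u << order); i++ )
        page_set_xenheap_gfn(pg + i, INVALID_GFN);
}

static inline void arch_free_xenheap_pages(struct page_info *pg,
                                           unsigned int order)
{
    unsigned int i;

    for ( i = 0; i < (1u << order); i++ )
        page_set_xenheap_gfn(pg + i, INVALID_GFN);
}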



@@ -166,6 +173,32 @@ extern unsigned long xenheap_base_pdx;
 #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
 
+static inline gfn_t page_get_xenheap_gfn(struct page_info *p)
const please wherever possible.

ok, will do.
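
i.e. the parameter becomes const-qualified, something like (declaration only, body as in the patch):

static inline gfn_t page_get_xenheap_gfn(const struct page_info *p);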


Thank you.



Jan

--
Regards,

Oleksandr Tyshchenko




 

