
Re: [PATCH V4] xen/gnttab: Store frame GFN in struct page_info on Arm


  • To: Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 14 Dec 2021 14:37:12 +0100
  • Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Bob Eshleman <bobbyeshleman@xxxxxxxxx>, Alistair Francis <alistair.francis@xxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 14 Dec 2021 13:37:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 03.12.2021 21:33, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1382,8 +1382,10 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
>      spin_lock(&d->page_alloc_lock);
>  
>      /* The incremented type count pins as writable or read-only. */
> -    page->u.inuse.type_info =
> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
> +    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
> +    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
> +                                                  : PGT_writable_page) |
> +                                MASK_INSR(1, PGT_count_mask);

It's certainly up to the Arm maintainers to judge, but I would have
deemed it better (less risky going forward) if PGT_count_mask
continued to use the bottom bits. (I guess I may have said so before.)

> @@ -1487,7 +1489,23 @@ int xenmem_add_to_physmap_one(
>      }
>  
>      /* Map at new location. */
> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
> +    else
> +    {
> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        p2m_write_lock(p2m);
> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
> +        {
> +            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
> +            if ( !rc )
> +                page_set_xenheap_gfn(mfn_to_page(mfn), gfn);
> +        }
> +        else
> +            rc = -EBUSY;

May I suggest avoiding the failure here when page_get_xenheap_gfn(mfn_to_page(mfn))
matches the passed-in GFN?
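
Purely as a sketch of that relaxation (reusing the helpers from the
quoted hunk; not meant as the exact change):

    struct page_info *page = mfn_to_page(mfn);
    gfn_t oldgfn = page_get_xenheap_gfn(page);

    /* Re-inserting the same GFN is harmless; only refuse a different one. */
    if ( gfn_eq(oldgfn, INVALID_GFN) || gfn_eq(oldgfn, gfn) )
    {
        rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
        if ( !rc )
            page_set_xenheap_gfn(page, gfn);
    }
    else
        rc = -EBUSY;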

> @@ -2169,6 +2170,9 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>      if ( unlikely(pg == NULL) )
>          return NULL;
>  
> +    for ( i = 0; i < (1u << order); i++ )
> +        arch_alloc_xenheap_page(&pg[i]);
> +
>      memguard_unguard_range(page_to_virt(pg), 1 << (order + PAGE_SHIFT));

I think this and ...

> @@ -2177,14 +2181,22 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>  
>  void free_xenheap_pages(void *v, unsigned int order)
>  {
> +    struct page_info *pg;
> +    unsigned int i;
> +
>      ASSERT(!in_irq());
>  
>      if ( v == NULL )
>          return;
>  
> +    pg = virt_to_page(v);
> +
>      memguard_guard_range(v, 1 << (order + PAGE_SHIFT));

... this really want to (logically) move into the new arch hooks.
That'll effectively mean simply dropping the Arm stubs afaict (and I
notice there's some dead code there on x86, which I guess I'll make
a patch to clean up). But first of all this suggests that you want
to call the hooks with the base page and order, putting the loops there.
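
As a rough sketch of that shape (the per-order hook name and the exact
memguard placement are my assumptions, not taken from the patch):

    /* Arch hook covering the whole allocation, with the per-page loop inside. */
    static inline void arch_alloc_xenheap_pages(struct page_info *pg,
                                                unsigned int order)
    {
        unsigned int i;

        for ( i = 0; i < (1u << order); i++ )
            arch_alloc_xenheap_page(&pg[i]);

        memguard_unguard_range(page_to_virt(pg), 1 << (order + PAGE_SHIFT));
    }

with free_xenheap_pages() then calling a matching arch_free_xenheap_pages()
(doing memguard_guard_range() plus the per-page cleanup) instead of
open-coding the loop.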

> @@ -166,6 +173,32 @@ extern unsigned long xenheap_base_pdx;
>  
>  #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
>  
> +static inline gfn_t page_get_xenheap_gfn(struct page_info *p)

const please wherever possible.
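
I.e. for a pure accessor like this one, something along the lines of
(signature only, as an illustration):

    static inline gfn_t page_get_xenheap_gfn(const struct page_info *p);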

Jan




 

