
[Xen-devel] Re: [PATCH 4/6] xen-gntdev: Support mapping in HVM domains



On 03/04/2011 10:57 AM, Ian Campbell wrote:
> On Thu, 2011-01-27 at 18:52 +0000, Konrad Rzeszutek Wilk wrote:
>>> @@ -179,11 +184,32 @@ static void gntdev_put_map(struct grant_map *map)
>>>  
>>>       atomic_sub(map->count, &pages_mapped);
>>>  
>>> -     if (map->pages)
>>> +     if (map->pages) {
>>> +             if (!use_ptemod)
>>> +                     unmap_grant_pages(map, 0, map->count);
>>> +
>>>               for (i = 0; i < map->count; i++) {
>>> -                     if (map->pages[i])
>>> +                     uint32_t check, *tmp;
>>> +                     if (!map->pages[i])
>>> +                             continue;
>>> +                     /* XXX When unmapping in an HVM domain, Xen will
>>> +                      * sometimes end up mapping the GFN to an invalid MFN.
>>> +                      * In this case, writes will be discarded and reads will
>>> +                      * return all 0xFF bytes.  Leak these unusable GFNs
>>
>> I forgot to ask, under what version of Xen did you run this? I want to add
>> that to the comment so when it gets fixed we know what the failing version 
>> is.
>>
>>> +                      * until Xen supports fixing their p2m mapping.
>>> +                      */
>>> +                     tmp = kmap(map->pages[i]);
>>> +                     *tmp = 0xdeaddead;
> 
> I've just tripped over this check, which faults in my PV guest. Seems to
> be related to the handling of map_grant_pages() failures?
> 
> Was the underlying Xen issue here reported somewhere more obvious than
> this comment buried in a patch to the kernel?
> 
> If not please can you raise it as a separate thread clearly marked as a
> hypervisor issue/question, all I can find is bits and pieces spread
> through the threads associated with this kernel patch. I don't think
> I've got a clear enough picture of the issue from those fragments to
> pull it together into a sensible report.
> 
> Ian.
> 

I think there may be other bugs lurking here with these freed pages; I have
observed that even pages that pass this kmap check can become bad at a later
time. This might be due to TLB issues; I haven't had a chance to debug it.
I do have a patch that prevents pages that have been granted from being
recycled into general use by the kernel; I hadn't posted it yet because it
doesn't resolve the issue completely.
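
For reference, the kmap check being discussed boils down to a
write/read-back probe along these lines (a standalone sketch for
clarity; page_is_usable() is an illustrative name, not a helper that
exists in gntdev):

#include <linux/highmem.h>
#include <linux/types.h>

/* Probe whether a just-unmapped page still has a usable GFN->MFN
 * mapping: write a marker through a temporary kernel mapping and read
 * it back.  On a bad mapping, writes are discarded and reads return
 * all 0xFF bytes, so the marker does not survive the round trip.
 * (Illustrative only; this helper does not exist in the driver.)
 */
static bool page_is_usable(struct page *page)
{
        uint32_t check;
        uint32_t *tmp = kmap(page);

        *tmp = 0xdeaddead;
        mb();                   /* order the write before the read back */
        check = *tmp;
        kunmap(page);

        return check == 0xdeaddead;
}

As noted above, even a page that passes this probe has been observed to
go bad later, which is what the patch below works around by never
freeing such pages at all.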

8<---------------------------------------------------------
xen-gntdev: avoid reuse of possibly-bad pages
    
In HVM domains, the unmap hypercall does not reliably leave the
just-unmapped GFN pointing at a valid MFN. A simple validity test of the
page is not sufficient to determine whether it will stay valid: pages
that initially pass the check have been observed to become invalid later.
    
Instead of releasing the pages to the allocator, keep them in a list to
reuse their GFNs for future mappings, which should always produce a valid
mapping.

** Note: this patch is an RFC, not for a stable patch queue **
    
Signed-off-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index d43ff30..b9b1577 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -54,6 +54,12 @@ MODULE_PARM_DESC(limit, "Maximum number of grants that may be mapped by "
 
 static atomic_t pages_mapped = ATOMIC_INIT(0);
 
+/* Pages that are unsafe to use except as gntdev pages due to bad PFN/MFN
+ * mapping after an unmap.
+ */
+static LIST_HEAD(page_reuse);
+static DEFINE_SPINLOCK(page_reuse_lock);
+
 static int use_ptemod;
 
 struct gntdev_priv {
@@ -122,13 +128,26 @@ static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
            NULL == add->pages)
                goto err;
 
        for (i = 0; i < count; i++) {
-               add->pages[i] = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);
-               if (add->pages[i] == NULL)
-                       goto err;
+               /* Prefer a page from the reuse list; fall back to the
+                * allocator when the list is empty.  Drop the lock
+                * before alloc_page(), which may sleep (GFP_KERNEL).
+                */
+               spin_lock(&page_reuse_lock);
+               if (list_empty(&page_reuse)) {
+                       spin_unlock(&page_reuse_lock);
+                       add->pages[i] = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);
+                       if (add->pages[i] == NULL)
+                               goto err;
+               } else {
+                       add->pages[i] = list_entry(page_reuse.next,
+                                                  struct page, lru);
+                       list_del(&add->pages[i]->lru);
+                       spin_unlock(&page_reuse_lock);
+               }
                add->map_ops[i].handle = -1;
                add->unmap_ops[i].handle = -1;
        }
 
        add->index = 0;
        add->count = count;
@@ -136,12 +155,18 @@ static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
 
        return add;
 
 err:
-       if (add->pages)
-               for (i = 0; i < count; i++) {
-                       if (add->pages[i])
-                               __free_page(add->pages[i]);
-               }
+       /* Pages taken from the reuse list must never go back to the
+        * page allocator, so on error return everything to the list.
+        */
+       if (add->pages) {
+               spin_lock(&page_reuse_lock);
+               for (i = 0; i < count; i++) {
+                       if (add->pages[i])
+                               list_add(&add->pages[i]->lru, &page_reuse);
+               }
+               spin_unlock(&page_reuse_lock);
+       }
        kfree(add->pages);
        kfree(add->grants);
        kfree(add->map_ops);
@@ -202,29 +227,13 @@ static void gntdev_put_map(struct grant_map *map)
                if (!use_ptemod)
                        unmap_grant_pages(map, 0, map->count);
 
+               spin_lock(&page_reuse_lock);
                for (i = 0; i < map->count; i++) {
-                       uint32_t check, *tmp;
                        if (!map->pages[i])
                                continue;
-                       /* XXX When unmapping in an HVM domain, Xen will
-                        * sometimes end up mapping the GFN to an invalid MFN.
-                        * In this case, writes will be discarded and reads will
-                        * return all 0xFF bytes.  Leak these unusable GFNs
-                        * until Xen supports fixing their p2m mapping.
-                        *
-                        * Confirmed present in Xen 4.1-RC3 with HVM source
-                        */
-                       tmp = kmap(map->pages[i]);
-                       *tmp = 0xdeaddead;
-                       mb();
-                       check = *tmp;
-                       kunmap(map->pages[i]);
-                       if (check == 0xdeaddead)
-                               __free_page(map->pages[i]);
-                       else
-                               pr_debug("Discard page %d=%ld\n", i,
-                                       page_to_pfn(map->pages[i]));
+                       list_add(&map->pages[i]->lru, &page_reuse);
                }
+               spin_unlock(&page_reuse_lock);
        }
        kfree(map->pages);
        kfree(map->grants);
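
For context, these pages reach gntdev_put_map() when a userspace mapping
is torn down.  A minimal sketch of that lifecycle (assuming the ioctl
ABI from the kernel's xen/gntdev.h header; DOMID and GNTREF are
placeholders for a real granting domain and grant reference):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xen/gntdev.h>

#define DOMID  1        /* placeholder: ID of the granting domain */
#define GNTREF 8        /* placeholder: grant ref from that domain */

int main(void)
{
        struct ioctl_gntdev_map_grant_ref map = { .count = 1 };
        struct ioctl_gntdev_unmap_grant_ref unmap = { .count = 1 };
        void *addr;
        int fd = open("/dev/xen/gntdev", O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        map.refs[0].domid = DOMID;
        map.refs[0].ref   = GNTREF;
        if (ioctl(fd, IOCTL_GNTDEV_MAP_GRANT_REF, &map)) {
                perror("map ioctl");
                return 1;
        }

        /* mmap() at the offset returned by the ioctl creates the
         * mapping backed by the pages gntdev_alloc_map() set up. */
        addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
                    fd, map.index);
        if (addr == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* ... use the shared page ... */

        /* Tearing the mapping down is what eventually drops the last
         * reference and runs gntdev_put_map() in the driver. */
        munmap(addr, 4096);
        unmap.index = map.index;
        ioctl(fd, IOCTL_GNTDEV_UNMAP_GRANT_REF, &unmap);
        close(fd);
        return 0;
}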

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

