
[PATCH v5 2/3] ioreq: Indent ioreq_server_{alloc,free}_mfn() body one level deeper



Wrap the bodies of ioreq_server_alloc_mfn() and ioreq_server_free_mfn()
in a block and indent them one level deeper. This prepares for the loop
that the subsequent patch introduces to handle multiple ioreq pages.
No functional change.

Signed-off-by: Julian Vetter <julian.vetter@xxxxxxxxxx>
---
Changes in v5:
- Added proper commit message and fixed commit title
---
 xen/common/ioreq.c | 40 ++++++++++++++++++++++------------------
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 5b026fc1b2..b22f656701 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -277,22 +277,24 @@ static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf)
         return 0;
     }
 
-    page = alloc_domheap_page(s->target, MEMF_no_refcount);
+    {
+        page = alloc_domheap_page(s->target, MEMF_no_refcount);
 
-    if ( !page )
-        return -ENOMEM;
+        if ( !page )
+            return -ENOMEM;
 
-    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(s->emulator);
-        return -ENODATA;
-    }
+        if ( !get_page_and_type(page, s->target, PGT_writable_page) )
+        {
+            /*
+             * The domain can't possibly know about this page yet, so failure
+             * here is a clear indication of something fishy going on.
+             */
+            domain_crash(s->emulator);
+            return -ENODATA;
+        }
 
-    mfn = page_to_mfn(page);
+        mfn = page_to_mfn(page);
+    }
     iorp->va = vmap(&mfn, 1);
     if ( !iorp->va )
         goto fail;
@@ -315,12 +317,14 @@ static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf)
     if ( !iorp->va )
         return;
 
-    page = vmap_to_page(iorp->va);
-    vunmap(iorp->va);
-    iorp->va = NULL;
+    {
+        page = vmap_to_page(iorp->va);
+        vunmap(iorp->va);
+        iorp->va = NULL;
 
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
+        put_page_alloc_ref(page);
+        put_page_and_type(page);
+    }
 }
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
-- 
2.51.0



--
Julian Vetter | Vates Hypervisor & Kernel Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech




 

