
Re: [Xen-devel] [PATCH v4 4/9] xen: introduce XEN_DOMCTL_devour

On 04/12/2014 10:19, David Vrabel wrote:
> On 04/12/14 00:50, Julien Grall wrote:
>> Hi Vitaly,
>>
>> On 03/12/2014 17:16, Vitaly Kuznetsov wrote:
>>> New operation sets the 'recipient' domain which will recieve all
>>
>> s/recieve/receive/
>>
>>> memory pages from a particular domain and kills the original domain.
>>>
>>> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
>>> ---
>>> @@ -1764,13 +1765,32 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
>>
>> [..]
>>
>>> +        else
>>> +        {
>>> +            mfn = page_to_mfn(pg);
>>> +            gmfn = mfn_to_gmfn(d, mfn);
>>> +
>>> +            page_set_owner(pg, NULL);
>>> +            if ( assign_pages(d->recipient, pg, order, 0) )
>>> +                /* assign_pages reports the error by itself */
>>> +                goto out;
>>> +
>>> +            if ( guest_physmap_add_page(d->recipient, gmfn, mfn, order) )
>>
>> On ARM, mfn_to_gmfn will always return the mfn. This would result in
>> adding a 1:1 mapping in the recipient domain.
>>
>> But ... only DOM0 has its memory mapped 1:1. So this code may blow up
>> the P2M of the recipient domain.
>>
>> I'm not an x86 expert, but this may also happen when the recipient
>> domain is using translated paging mode (i.e. HVM/PVH).
>
> mfn_to_gmfn() does the correct thing on x86 as it does a m2p lookup.

Is it because machine_to_phys_mapping caches the translation for the
dying domain?

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
