
Re: [Xen-devel] [PATCH v10 5/6] x86/ioreq server: Asynchronously reset outstanding p2m_ioreq_server entries.





On 4/6/2017 2:02 AM, Yu Zhang wrote:


On 4/6/2017 1:28 AM, Yu Zhang wrote:


On 4/6/2017 1:18 AM, Yu Zhang wrote:


On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:

On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
asynchronously with the current p2m_change_entry_type_global()
interface.

A new field, entry_count, is introduced in struct p2m_domain to record the number
of p2m_ioreq_server p2m page table entries. One property of these entries is that
they only point to 4K-sized page frames, because all p2m_ioreq_server entries
originate from p2m_ram_rw ones in p2m_change_type_one(). We therefore do not need
to worry about counting 2M/1G-sized pages.
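
As a rough illustration of that counting, here is a minimal sketch with simplified
stand-in types (not the actual patch code):

/* Minimal sketch only: shows the kind of bookkeeping the commit message
 * describes.  struct p2m_domain and p2m_type_t below are simplified
 * stand-ins for the real Xen definitions. */
#include <stdio.h>

typedef enum { p2m_ram_rw, p2m_ioreq_server, p2m_invalid } p2m_type_t;

struct p2m_domain {
    struct {
        unsigned long entry_count;   /* # of p2m_ioreq_server 4K entries */
    } ioreq;
};

/* Called whenever a 4K entry changes type, e.g. from a helper used by
 * p2m_change_type_one(); 2M/1G entries never carry p2m_ioreq_server. */
void account_ioreq_entry(struct p2m_domain *p2m, p2m_type_t ot, p2m_type_t nt)
{
    if ( ot == nt )
        return;
    if ( nt == p2m_ioreq_server )
        p2m->ioreq.entry_count++;
    else if ( ot == p2m_ioreq_server )
        p2m->ioreq.entry_count--;
}

int main(void)
{
    struct p2m_domain p2m = { .ioreq = { 0 } };

    account_ioreq_entry(&p2m, p2m_ram_rw, p2m_ioreq_server);  /* map   */
    account_ioreq_entry(&p2m, p2m_ioreq_server, p2m_ram_rw);  /* reset */
    printf("entry_count = %lu\n", p2m.ioreq.entry_count);     /* prints 0 */
    return 0;
}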
Assuming that all p2m_ioreq_server entries are *created* by
p2m_change_type_one() may be valid, but can you assume that they are only
ever *removed* by p2m_change_type_one() (or by recalculation)?

What happens, for instance, if a guest balloons out one of the ram
pages? I don't immediately see anything preventing a p2m_ioreq_server
page from being ballooned out, nor anything on the
decrease_reservation() path decreasing p2m->ioreq.entry_count. Or did
I miss something?

Other than that, only one minor comment...
Thanks for your thorough consideration, George. But I do not think we
need to worry about this:

While the emulation is in progress, the balloon driver cannot get hold of a
p2m_ioreq_server page, because that page is already allocated.
In theory, yes, the guest *shouldn't* do this. But what if the guest OS
makes a mistake?  Or, what if the ioreq server makes a mistake and
places a watch on a page that *isn't* allocated by the device driver, or forgets to change a page type back to ram when the device driver frees
it back to the guest kernel?
Then the lazy p2m change code will be triggered, and this page is reset
to p2m_ram_rw
before being set to p2m_invalid, just like the normal path. Will this be
a problem?
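
(As a rough illustration of what that lazy path does conceptually - a minimal
sketch with stand-in types and a hypothetical server_mapped flag, not the real
recalculation code:)

/* Minimal sketch of the lazy recalculation idea, not the real Xen code:
 * when an entry of type p2m_ioreq_server is looked at again after the
 * ioreq server has been unmapped, it is resolved back to p2m_ram_rw
 * (and the counter dropped) before anything else happens to it. */
typedef enum { p2m_ram_rw, p2m_ioreq_server, p2m_invalid } p2m_type_t;

struct p2m_domain {
    int server_mapped;                            /* stand-in flag */
    struct { unsigned long entry_count; } ioreq;
};

p2m_type_t recalc_entry_type(struct p2m_domain *p2m, p2m_type_t t)
{
    if ( t == p2m_ioreq_server && !p2m->server_mapped )
    {
        p2m->ioreq.entry_count--;
        return p2m_ram_rw;
    }
    return t;
}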
No, I'm talking about before the ioreq server detaches.
Sorry, I do not get it. Take scenario 1 for example:
Scenario 1: Bug in driver
1. Guest driver allocates page A
2. dm marks A as p2m_ioreq_server
Here, in step 2, the ioreq.entry_count increases;
3. Guest driver accidentally frees A to the kernel
4. guest kernel balloons out page A; ioreq.entry_count is wrong

Here, in step 4, the ioreq.entry_count decreases.

Oh, I see now: this entry is not invalidated yet, because the ioreq server has not been unmapped. Sorry.

Isn't this what we are expecting?

Yu

Scenario 2: Bug in the kernel
1. Guest driver allocates page A
2. dm marks A as p2m_ioreq_server
3. Guest kernel tries to balloon out page B, but makes a calculation
mistake and balloons out A instead; now ioreq.entry_count is wrong

Scenario 3: Off-by-one bug in devicemodel
1. Guest driver allocates pages A-D
2. dm makes a mistake and marks pages A-E as p2m_ioreq_server (one extra
page)
3. guest kernel balloons out page E; now ioreq.entry_count is wrong

Scenario 4: "Leak" in devicemodel
1. Guest driver allocates page A
2. dm marks A as p2m_ioreq_server
3. Guest driver is done with page A, but DM forgets to reset it to
p2m_ram_rw
4. Guest driver frees A to guest kernel
5. Guest kernel balloons out page A; now ioreq.entry_count is wrong

I could keep going; there are *lots* of possible bugs in the driver, the
kernel, or the devicemodel which could cause pages marked
p2m_ioreq_server to end up being ballooned out, which under the current
code would make ioreq.entry_count wrong.

It's the hypervisor's job to do the right thing even when other
components have bugs in them. This is why I initially suggested keeping count in atomic_write_ept_entry() -- no matter how the entry is changed, we always know exactly how many entries of type p2m_ioreq_server we have.
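
(Purely as an illustration of that suggestion; the signature, the explicit p2m
parameter and the ept_entry_t layout below are simplified stand-ins, not the
real Xen interface:)

/* Sketch of counting at the point where an EPT entry is written, so that
 * every path which modifies an entry is covered, whatever the caller.
 * Types and the explicit p2m parameter are simplified stand-ins. */
typedef unsigned int p2m_type_t;
#define p2m_ioreq_server 1u

typedef struct {
    p2m_type_t sa_p2mt;      /* software-available p2m type bits */
    /* hardware-defined EPT fields elided */
} ept_entry_t;

struct p2m_domain {
    struct { unsigned long entry_count; } ioreq;
};

void atomic_write_ept_entry(struct p2m_domain *p2m,
                            ept_entry_t *entryptr, ept_entry_t new)
{
    /* Adjust the counter from old vs. new type before the write. */
    if ( entryptr->sa_p2mt == p2m_ioreq_server &&
         new.sa_p2mt != p2m_ioreq_server )
        p2m->ioreq.entry_count--;
    else if ( entryptr->sa_p2mt != p2m_ioreq_server &&
              new.sa_p2mt == p2m_ioreq_server )
        p2m->ioreq.entry_count++;

    *entryptr = new;         /* stands in for the real atomic update */
}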


Well, counting in atomic_write_ept_entry() only works for EPT. Besides, it requires
interface changes - we would need to pass in the p2m.
Another thought: in XenGT, PoD is already disabled to make sure the gfn->mfn mapping does not change. So how about we disable ballooning while ioreq.entry_count is not 0?
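
A rough sketch of that alternative, reduced to the check itself (simplified
stand-in types; in real code this would have to sit somewhere on the
decrease_reservation() path):

#include <errno.h>

/* Sketch only: refuse to balloon out pages while any p2m_ioreq_server
 * entries remain, i.e. while an ioreq server is still using them. */
struct p2m_domain {
    struct { unsigned long entry_count; } ioreq;
};

/* Return 0 if removal may proceed, -EBUSY otherwise. */
int ioreq_balloon_check(const struct p2m_domain *p2m)
{
    return p2m->ioreq.entry_count ? -EBUSY : 0;
}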

Or maybe we could just change p2m_ioreq_server entries back to p2m_ram_rw before they are set to p2m_invalid?
Something like the code below:

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7dbddda..40e5f63 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -288,6 +288,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
         put_gfn(d, gmfn);
         return 1;
     }
+    if ( unlikely(p2mt == p2m_ioreq_server) )
+        p2m_change_type_one(d, gmfn,
+                            p2m_ioreq_server, p2m_ram_rw);
+
 #else
     mfn = gfn_to_mfn(d, _gfn(gmfn));
 #endif

Yu
  -George




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

