
Re: [Xen-devel] [PATCH v10 5/6] x86/ioreq server: Asynchronously reset outstanding p2m_ioreq_server entries.



On 05/04/17 17:32, Yu Zhang wrote:
> 
> 
> On 4/6/2017 12:35 AM, George Dunlap wrote:
>> On 05/04/17 17:22, Yu Zhang wrote:
>>>
>>> On 4/5/2017 10:41 PM, George Dunlap wrote:
>>>> On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
>>>> wrote:
>>>>> After an ioreq server has been unmapped, the remaining
>>>>> p2m_ioreq_server entries need to be reset back to p2m_ram_rw. This
>>>>> patch does this asynchronously with the current
>>>>> p2m_change_entry_type_global() interface.
>>>>>
>>>>> A new field, entry_count, is introduced in struct p2m_domain to
>>>>> record the number of p2m_ioreq_server p2m page table entries. One
>>>>> property of these entries is that they only point to 4K-sized page
>>>>> frames, because all p2m_ioreq_server entries originate from
>>>>> p2m_ram_rw ones in p2m_change_type_one(), so we do not need to
>>>>> worry about counting 2M/1G-sized pages.
>>>> Assuming that all p2m_ioreq_server entries are *created* by
>>>> p2m_change_type_one() may be valid, but can you assume that they are
>>>> only ever *removed* by p2m_change_type_one() (or recalculation)?
>>>>
>>>> What happens, for instance, if a guest balloons out one of the ram
>>>> pages?  I don't immediately see anything preventing a p2m_ioreq_server
>>>> page from being ballooned out, nor anything on the
>>>> decrease_reservation() path decreasing p2m->ioreq.entry_count.  Or did
>>>> I miss something?
>>>>
>>>> Other than that, only one minor comment...
>>> Thanks for your thorough consideration, George. But I do not think we
>>> need to worry about this:
>>>
>>> While the emulation is in progress, the balloon driver cannot get a
>>> p2m_ioreq_server page, because it is already allocated.
>> In theory, yes, the guest *shouldn't* do this.  But what if the guest OS
>> makes a mistake?  Or, what if the ioreq server makes a mistake and
>> places a watch on a page that *isn't* allocated by the device driver, or
>> forgets to change a page type back to ram when the device driver frees
>> it back to the guest kernel?
> 
> Then the lazy p2m change code will be triggered, and this page is
> reset to p2m_ram_rw before being set to p2m_invalid, just like the
> normal path. Will this be a problem?

No, I'm talking about before the ioreq server detaches.

Scenario 1: Bug in driver
1. Guest driver allocates page A
2. dm marks A as p2m_ioreq_server
3. Guest driver accidentally frees A to the kernel
4. Guest kernel balloons out page A; now ioreq.entry_count is wrong

Scenario 2: Bug in the kernel
1. Guest driver allocates page A
2. dm marks A as p2m_ioreq_server
3. Guest kernel tries to balloon out page B, but makes a calculation
mistake and balloons out A instead; now ioreq.entry_count is wrong

Scenario 3: Off-by-one bug in devicemodel
1. Guest driver allocates pages A-D
2. dm makes a mistake and marks pages A-E as p2m_ioreq_server (one extra
page)
3. Guest kernel balloons out page E; now ioreq.entry_count is wrong

Scenario 4: "Leak" in devicemodel
1. Guest driver allocates page A
2. dm marks A as p2m_ioreq_server
3. Guest driver is done with page A, but DM forgets to reset it to
p2m_ram_rw
4. Guest driver frees A to guest kernel
5. Guest kernel balloons out page A; now ioreq.entry_count is wrong

I could keep going; there are *lots* of possible bugs in the driver,
the kernel, or the devicemodel that could cause pages marked
p2m_ioreq_server to end up being ballooned out, and under the current
code any of them would leave ioreq.entry_count wrong.
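
To make the failure mode concrete, here is a stand-alone toy model
(all names here are mine, not the real Xen ones): the count is only
maintained at the type-change interface, so any path that rewrites the
entry directly leaves it stale.

    #include <stdio.h>

    enum type { RAM_RW, IOREQ_SERVER, INVALID };

    static enum type entry = RAM_RW; /* one p2m entry, starts as RAM */
    static unsigned long count;      /* models p2m->ioreq.entry_count */

    static void change_type_one(enum type nt)
    {
        /* The only place the count is maintained, as in the patch. */
        if ( nt == IOREQ_SERVER && entry != IOREQ_SERVER )
            count++;
        else if ( nt != IOREQ_SERVER && entry == IOREQ_SERVER )
            count--;
        entry = nt;
    }

    static void decrease_reservation(void)
    {
        /* Balloon-out path: rewrites the entry, never touches the count. */
        entry = INVALID;
    }

    int main(void)
    {
        change_type_one(IOREQ_SERVER);   /* dm marks page A */
        decrease_reservation();          /* guest balloons page A out */
        printf("ioreq entries: %d, count: %lu\n",
               entry == IOREQ_SERVER, count);
        /* Prints "ioreq entries: 0, count: 1" -- the count is stale. */
        return 0;
    }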

It's the hypervisor's job to do the right thing even when other
components have bugs in them.  This is why I initially suggested keeping
count in atomic_write_ept_entry() -- no matter how the entry is changed,
we always know exactly how many entries of type p2m_ioreq_server we have.
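
A minimal sketch of that idea, on the EPT side (the real
atomic_write_ept_entry() takes no p2m argument and also handles
foreign-mapping refcounts, so treat the signature and the omissions
here as illustrative):

    static int atomic_write_ept_entry(struct p2m_domain *p2m,
                                      ept_entry_t *entryptr,
                                      ept_entry_t new, int level)
    {
        /* Both the old and the new type are visible at the single
         * point where the entry is written, whoever the caller is:
         * p2m_change_type_one(), ballooning, recalculation, ...
         * level can be ignored for the count, since the series
         * guarantees p2m_ioreq_server entries are 4K-only. */
        if ( entryptr->sa_p2mt == p2m_ioreq_server &&
             new.sa_p2mt != p2m_ioreq_server )
            p2m->ioreq.entry_count--;
        else if ( entryptr->sa_p2mt != p2m_ioreq_server &&
                  new.sa_p2mt == p2m_ioreq_server )
            p2m->ioreq.entry_count++;

        write_atomic(&entryptr->epte, new.epte);
        return 0;
    }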

 -George



 

