
Re: [Xen-devel] Alternate p2m design specification



On 06/11/2015 05:06 AM, Tim Deegan wrote:
> At 00:09 +0100 on 11 Jun (1433981379), Andrew Cooper wrote:
>> On 10/06/15 20:41, Ed White wrote:
>>> On 06/10/2015 11:23 AM, Andrew Cooper wrote:
>>>> Also, hardware accelerated altp2m is mutually exclusive with EPT PML, as
>>>> we have no way of determining which translation was in use when a gpa
>>>> was appended to the buffer.  We are going to have to maintain a feature
>>>> compatibility matrix.  Even for non-accelerated altp2m, the cost of
>>>> working out the real gpa is likely prohibitive, and we should probably
>>>> resort to declaring logdirty and altp2m as exclusive features.
>>>>
>>>> ~Andrew
>>>>
>>> I haven't investigated the PML code, but just to be clear, log-dirty
>>> without PML is compatible with altp2m.
>>
>> The logdirty code is built around the notion of a single linear idea of
>> a guest's physical address space.  Altp2m, by its very nature, introduces
>> non-linearities into a guest's physical address space.
> 
> All current users of the log-dirty code, except for PML, operate on
> MFNs and use the M2P to get a PFN to use for the dirty-bitmap
> operation.  Those paths should all be able to operate with altp2m
> active.  (IOW the single linear address space is the 'host' p2m).
> 

I was going to write something this morning to point this out.

The nested page fault handler does a gfn->mfn translation using
the p2m the hardware is currently using, and then the non-PML
log-dirty code does an mfn->gfn translation that is valid for the
host p2m, using those gfns to update the log-dirty bitmap and
change the state of the pages in the host p2m.

However, migration still won't work with altp2m active, because
the migration code knows nothing about the extra altp2m state.

Ed


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

