
Re: [Xen-devel] PML (Page Modification Logging) design for Xen



>>> On 12.02.15 at 03:39, <kai.huang@xxxxxxxxxxxxxxx> wrote:
> On 02/11/2015 07:52 PM, Andrew Cooper wrote:
>> On 11/02/15 08:28, Kai Huang wrote:
>>> Design
>>> ======
>>>
>>> - PML feature is used globally
>>>
>>> A new Xen boot parameter, say 'opt_enable_pml', will be introduced to
>>> control PML feature detection; the PML feature will only be detected
>>> if opt_enable_pml = 1. Once the PML feature is detected, it will be used
>>> for dirty logging for all domains globally. Currently we don't support
>>> using PML on a per-domain basis, as that would require additional
>>> control from the XL tool.
>> Rather than adding a new top-level command line option for an EPT
>> subfeature, it would be preferable to add an "ept=" option which has
>> "pml" as a sub-boolean.
> That is fine with me, if Jan agrees.
> 
> Jan, which do you prefer here?

A single "ept=" option as Andrew suggested.

>>> Currently, PML will be used as long as there is guest memory in dirty
>>> logging mode, whether globally or partially. In the case of partial
>>> dirty logging, we need to check whether each GPA logged in the PML
>>> buffer falls within a dirty logging range.
>> I am not sure this is a problem.  HAP vram tracking already leaks
>> non-vram frames into the dirty bitmap, due to calls to
>> paging_mark_dirty() from paths not triggered by a p2m_logdirty fault.
> Hmm. Seems right. Probably this also depends on how userspace uses the 
> dirty bitmap.
> 
> If this is not a problem, we can avoid checking whether logged
> GPAs are in log-dirty ranges and instead unconditionally record them in
> the log-dirty radix tree.
> 
> Jan, what are your comments here?

I agree with Andrew, but Tim's confirmation would be nice to have.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

