
Re: [Xen-devel] [PATCH 00/11] Alternate p2m: support multiple copies of host p2m



On 01/13/2015 12:56 AM, Jan Beulich wrote:
>>>> On 12.01.15 at 18:36, <edmund.h.white@xxxxxxxxx> wrote:
>> On 01/12/2015 02:00 AM, Jan Beulich wrote:
>>>>>> On 10.01.15 at 00:04, <edmund.h.white@xxxxxxxxx> wrote:
>>>> On 01/09/2015 02:41 PM, Andrew Cooper wrote:
>>>>> Having some non-OS part of the guest swap the EPT tables and
>>>>> accidentally turn a DMA buffer read-only is not going to end well.
>>>>>
>>>>
>>>> The agent can certainly do bad things, and at some level you have to
>>>> assume it is sensible enough not to. However, I'm not sure this is
>>>> fundamentally more dangerous than what a privileged domain can do today
>>>> using the MEMOP... operations, and people are already using those for
>>>> very similar purposes.
>>>
>>> I don't follow - how is what a privileged domain can do related to the
>>> proposed changes here (which are - via VMFUNC - at least partially
>>> guest controllable, and that's also the case Andrew mentioned in his
>>> reply)? I'm having a hard time understanding how a P2M stripped of
>>> anything that's not plain RAM can be very useful to a guest. IOW
>>> without such fundamental aspects clarified I don't see a point in
>>> looking at the individual patches (which btw, according to your
>>> wording elsewhere, should have been marked RFC).
>>>
>> In this patch series, none of the new hypercalls are protected by XSM
>> policies. Earlier in the process of working on this code, I added such
>> checks to all the hypercalls, but then removed them all because it
>> dawned on me that I didn't actually understand what I was doing and
>> my code only worked because I only ever built the dummy permit-everything
>> policy.
>>
>> Should some version of this patch series be accepted, my hope is that
>> someone who does understand XSM policies would put the appropriate checks
>> in place, and at that point I maintain that these extra capabilities
>> would not be fundamentally more dangerous than existing mechanisms
>> available to privileged domains, because policy can prevent the guest
>> from using VMFUNC. That's obviously not true today.
> 
> Please simply consult with the XSM maintainer on questions/issues
> like this. Proposing a partial (insecure) patch set isn't appropriate.
> 
>> The alternate p2ms only contain entries for RAM pages with valid MFNs.
>> All other page types are still handled in the nested page fault handler
>> for the host p2m. Those pages (at least the ones I've encountered) don't
>> require the hardware to have a valid EPTE for the page.
> 
> I.e. the functionality requiring e.g. p2m_ram_logdirty and
> p2m_mmio_direct is then incompatible with your proposed additions
> (which I think was also already noted by Andrew). That's imo not
> a basis to think about accepting (or even reviewing) the series.

Andrew raised that question, and I answered that pages needing
special handling are compatible with these changes. Unless I
misunderstood him, he accepted that.

If the hardware is never intended to be able to satisfy an access to
a page without generating an EPT violation, then all the hardware
needs is a set of EPTs that guarantee that behaviour. These changes
take advantage of that to avoid copying any of the EPTEs for special
pages into the alternate p2ms. Instead, the nested page fault handler
for the alternate p2m returns a status to indicate that the host p2m
nested page fault handler should handle the violation using the data
in the host p2m.

If the result is that the page becomes RAM in the host p2m and the
instruction is restarted, the hardware will generate another violation,
and this time the EPTE will be copied.
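
To make that hand-off concrete, the dispatch is roughly the following.
This is only a self-contained sketch of the idea; the names below
(altp2m_nested_fault, lookup_host_p2m, copy_epte_to_altp2m, and the
stand-in types) are illustrative and are not the identifiers used in
the patches.

/*
 * Sketch only: stand-in types and helpers, not the Xen structures
 * or functions from the patch series.
 */
typedef unsigned long gfn_t;
typedef unsigned long mfn_t;
#define INVALID_MFN   (~0UL)

enum p2m_type { p2m_ram_rw, p2m_ram_logdirty, p2m_mmio_direct /* ... */ };

enum altp2m_fault_rc {
    ALTP2M_HANDLED,      /* EPTE copied into the alternate p2m; retry insn */
    ALTP2M_FALLTHROUGH,  /* defer to the host p2m nested fault handler     */
};

/* Stand-ins for the real host p2m lookup and alternate p2m update. */
mfn_t lookup_host_p2m(gfn_t gfn, enum p2m_type *t);
void copy_epte_to_altp2m(gfn_t gfn, mfn_t mfn, enum p2m_type t);

enum altp2m_fault_rc altp2m_nested_fault(gfn_t gfn)
{
    enum p2m_type t;
    mfn_t mfn = lookup_host_p2m(gfn, &t);

    /*
     * Only plain RAM with a valid mfn is ever mirrored into an
     * alternate p2m.  Anything else (log-dirty, mmio, paged-out, ...)
     * deliberately has no EPTE here, so the violation is punted to the
     * host p2m handler, which already knows how to deal with it.
     */
    if ( mfn == INVALID_MFN || t != p2m_ram_rw )
        return ALTP2M_FALLTHROUGH;

    /* Plain RAM: copy the host EPTE and let the instruction retry. */
    copy_epte_to_altp2m(gfn, mfn, t);
    return ALTP2M_HANDLED;
}

For a VRAM log-dirty page, the first violation takes the fallthrough
path, the host p2m handler logs the page and flips it back to plain
RAM, and the violation after the restart copies the EPTE into the
alternate p2m.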

This works. I have VRAM log-dirty working, something that does not work
with the nestedhvm nested EPT code.

Ed


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

