[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-devel] RE: [PATCH] Unmmap guest's EPT mapping for poison memory


  • To: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
  • From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
  • Date: Fri, 16 Jul 2010 14:45:21 +0800
  • Accept-language: en-US
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 15 Jul 2010 23:47:32 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcskMC0hFXxJVW1+QYWfa2YCXFqEBAAfo+HQ
  • Thread-topic: [PATCH] Unmmap guest's EPT mapping for poison memory


>-----Original Message-----
>From: Tim Deegan [mailto:Tim.Deegan@xxxxxxxxxx]
>Sent: Thursday, July 15, 2010 11:13 PM
>To: Jiang, Yunhong
>Cc: Keir Fraser; xen-devel
>Subject: Re: [PATCH] Unmmap guest's EPT mapping for poison memory
>
>Hi,
>
>At 14:56 +0100 on 14 Jul (1279119388), Jiang, Yunhong wrote:
>> >Or any of the other types?  This should be called for ram_ro, and
>> >ram_logdirty certainly, and probably mmio_direct too.
>>
>> Yes, we need to consider rw/ro/logdirty. Thanks for the reminder; we
>> will fix it. But why should we cover mmio_direct? Could you give some
>> hints?
>
>I've seen cases where people use mmio_direct to point at actual memory,
>in order to allow uncached mappings.

Thanks for pointing this out.

>
>> For ram_shared, it deserves more consideration; it seems the shared
>> memory case is currently not handled anywhere in the offline-page flow.
>
>Or vice versa.  I'm happy to ack an initial patch that deals with the
>easy cases, though.
>
>> >I'm not sure that it's safe to nobble other types - e.g. changing from
>> >grant_map_*, paging_* or ram_shared might break state-machines/refcounts
>> >elsewhere.
>>
>> I think this code does not change anything for the refcounts; we simply
>> destroy the guest. Or do you mean a race could happen while other
>> components are also changing the p2m table? I assume that should be OK,
>> since we only query the type and destroy the guest. Did I miss anything?
>
>No, I was just suggesting that if you do handle other p2m types here
>it might not be safe to change a page from shared, grant-map &c to
>broken because it would cause bugs in the sharing/granting code.

Yes, agreed. Maybe I need to take a look at grant-map for PV guests in the future.

>
>> The background here is: on some platforms, the system can find poisoned
>> memory through e.g. memory scrubbing or explicit L3 cache write-back
>> (i.e. asynchronous memory checking, not in the current context).
>> However, whenever the poisoned memory is accessed, it causes a fatal
>> MCE and a system crash. So we need to make sure the guest can't access
>> the broken memory.
>
>OK - you're protecting against a _host_ crash here?  Now I understand,
>thanks.
>
>In that case I definitely suggest that you move the domain_crash()
>into the p2m lookup functions - all p2m lookups of a broken page should
>return type=broken, and non-"query" p2m lookups should call
>domain_crash() too.   That will catch all the MMIO-emulator and shadow
>paths for free.

Ok, I will try this way.

>
>If you also signal qemu-dm to blow its mapcache that will catch DMA too
>(since it won't be able to re-map the broken page) though it's
>unfortunate to have to rely on good behaviour from qemu-dm for safety.
>Presumably the PV patches will solve that in a better way.

Yes, I will defer that to the PV patches.

>
>Cheers,
>
>Tim.
>

>P.S. Another thing I forgot - please wrap the code that sets the type to
>     broken in a "#ifdef __x86_64__"; it won't work on 32-bit.

I'm not sure. It should at least work for EPT on 32-bit, since per my
understanding the EPT table can support more than 8 p2m types.

Thanks
--jyh

>
>--
>Tim Deegan <Tim.Deegan@xxxxxxxxxx>
>Principal Software Engineer, XenServer Engineering
>Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

