
Re: [Xen-devel] [V1 PATCH] PVH: avoid call to handle_mmio



>>> On 05.06.14 at 01:52, <mukesh.rathor@xxxxxxxxxx> wrote:
> On Wed, 04 Jun 2014 08:24:15 +0100
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> 
>> >>> On 04.06.14 at 00:00, <mukesh.rathor@xxxxxxxxxx> wrote:
>> > handle_mmio() is currently unsafe for PVH guests. A call to it would
>> > result in a call to vioapic_range(), which would crash Xen since the
>> > vioapic pointer in struct hvm_domain is not initialized for PVH guests.
>> > 
>> > However, one path to such a call exists. If a PVH guest, dom0 or
>> > domU, unintentionally touches non-existent memory, an EPT violation
>> > would occur. This would result in an unconditional call to
>> > hvm_hap_nested_page_fault(). In that function, because
>> > get_gfn_type_access() returns p2m_mmio_dm for non-existent mfns by
>> > default, handle_mmio() would get called. This would result in a Xen
>> > crash instead of a guest crash. This patch addresses that.
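For reference, a minimal sketch of the kind of guard being discussed,
assuming it sits in hvm_hap_nested_page_fault() just ahead of the
p2m_mmio_dm handling; the placement and the local names used here
(curr, gfn, p2mt, rc, and the out label) are illustrative rather than
the committed change:

    /* Illustrative sketch only -- not the actual patch. */
    if ( is_pvh_vcpu(curr) && p2mt == p2m_mmio_dm )
    {
        /*
         * PVH guests have no device model and no vioapic, so calling
         * handle_mmio() here would dereference uninitialized state and
         * bring down the hypervisor.  Crash the offending guest instead.
         */
        gdprintk(XENLOG_WARNING,
                 "PVH: unhandled EPT violation for gfn %#lx\n", gfn);
        domain_crash(curr->domain);
        rc = 1;
        goto out;
    }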
>> 
>> Yes, we definitely want this until it is handled properly, even though
>> crashing the guest here doesn't seem to be the right thing either
>> (normal x86 behavior would be to drop writes and return all ones for
>> reads).
> 
> How about doing the same thing we do for HVM, which is to inject #GP?
> Then handle_mmio() would just return 0 for PVH, and
> hvm_hap_nested_page_fault() would not need to be modified.

The fundamental problem is that real hardware wouldn't surface #GP for
any bad physical address. With one exception, the only possibility
would be #MC, and I don't think that would ever happen for truly
unpopulated ranges. (The exception is AMD, where #PF gets surfaced when
trying to access a page falling in the HT reserved address range.) IOW
even on HVM it is wrong for us to inject #GP in cases like this.
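
To make that behavior concrete, a small standalone sketch (plain C, not
Xen code) of how an access to a truly unpopulated physical range
behaves: writes are silently dropped and reads come back as all ones.
The helper names are made up for illustration.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Model of a write hitting an unpopulated physical range: dropped. */
    static void unpopulated_write(const void *src, size_t bytes)
    {
        (void)src;
        (void)bytes;
    }

    /* Model of a read from an unpopulated range: returns all ones. */
    static void unpopulated_read(void *dst, size_t bytes)
    {
        memset(dst, 0xff, bytes);
    }

    int main(void)
    {
        uint32_t val = 0x12345678;

        unpopulated_write(&val, sizeof(val));  /* silently discarded */
        unpopulated_read(&val, sizeof(val));   /* val becomes 0xffffffff */
        printf("read back: %#x\n", val);
        return 0;
    }

That matches the "drop writes and return all ones for reads" behavior
mentioned earlier in the thread.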

Jan




 

