
Re: [Xen-devel] [PATCH] x86/HVM: extend LAPIC shortcuts around P2M lookups



At 08:12 +0100 on 04 Aug (1407136337), Jan Beulich wrote:
> >>> On 01.08.14 at 21:15, <tim@xxxxxxx> wrote:
> > If Xen does its own instruction fetch and decode, then we have to be
> > careful about reusing any state from the original exit because of
> > self-modifying code.  (And yes, that is a serious concern -- I once
> > spent months trying to debug occasional memory corruption in the
> > self-modifying license-enforcement code on a system stress test
> > utility.)
> > 
> > So it would be OK to reuse the GPA from the exit if we could verify
> > that the GVA we see is the same as the original fault (since there can't
> > have been a TLB flush).  But IIRC the exit doesn't tell us the
> > original GVA. :(
> 
> I don't think it needs to be as strict as this: For one, I wouldn't
> intend to use the known GPA for instruction fetches at all. And
> then I think if the instruction got modified between the exit and us
> doing the emulation, using the known GPA with the wrong
> instruction is as good or as bad as emulating an instruction that
> didn't originally cause the exit.

Not at all -- as I said, in the shadow code we did see the case where
we emulated a different instruction, and we do our best to handle it.
And at least there we have a clean failure mode: if we can't emulate
we crash.
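
(By "handle it" I mean something shaped roughly like this -- a sketch
with invented names, not the real shadow code: re-fetch and re-decode
at emulation time, and fail closed if the result isn't something we
can safely emulate.)

    /* Illustrative only: hvm_fetch_decode(), insn_is_emulable() and
     * crash_domain_clean() are made-up stand-ins for the real
     * fetch/decode and failure paths. */
    static int emulate_on_fault(struct vcpu *v, unsigned long rip)
    {
        struct insn_state insn;

        /* Always re-fetch and re-decode at emulation time, so any
         * self-modification since the original exit is picked up. */
        if ( hvm_fetch_decode(v, rip, &insn) != 0 )
            return crash_domain_clean(v);   /* clean failure, no guessing */

        if ( !insn_is_emulable(&insn) )
            return crash_domain_clean(v);   /* crash rather than mis-emulate */

        return do_emulation(v, &insn);
    }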

Using the wrong GPA will silently corrupt memory and carry on, which
is about the worst failure mode a VMM can have (esp. if skipping the
GVA->GPA walk could allow a guest process to write to a read-only
mapping).  
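To spell out the check I'd want before anyone trusts the exit GPA (a
hand-wavy sketch with made-up helper names, not actual Xen code):

    /* Illustrative sketch only: guest_translate_gva() and the pfec
     * bits stand in for whatever walker/flags the real code uses. */
    static bool exit_gpa_still_usable(struct vcpu *v, unsigned long gva,
                                      paddr_t exit_gpa, uint32_t pfec)
    {
        paddr_t gpa;

        /* Redo the GVA->GPA walk with the access type of the emulated
         * op, so a write can't be smuggled through a read-only mapping
         * just because the original exit happened to allow it. */
        if ( guest_translate_gva(v, gva, pfec, &gpa) != 0 )
            return false;           /* walk faulted: inject, don't reuse */

        return gpa == exit_gpa;     /* reuse only if it still matches */
    }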

I'd be extremely uncomfortable with anything like this unless there's a
way to get either the ifetch buffer or a partial decode out of the CPU
(which IIRC can't be done on x86, though it can on ARM).

Tim.



 

