
Re: [Xen-devel] [PATCH 1/1 V3] x86/AMD: Fix nested svm crash due to assertion in __virt_to_maddr



At 10:24 +0200 on 18 Jul (1374143076), Egger, Christoph wrote:
> On 18.07.13 10:14, Egger, Christoph wrote:
> > On 17.07.13 21:43, Tim Deegan wrote:
> >>> I'm not clear about the need for this new wrapper: does the caller
> >>> really not care what type, access, and order get returned here?
> >>> Is it really too much of a burden to have the two call sites do
> >>> the call here directly? All the more so since (see above) you'd
> >>> really need to give the caller control over the access requested.
> >>
> >> Yeah, I'm not sure the wrapper is needed.  Can the callers just use
> >> get_page_from_gfn() to do the translation from guest-MFN -- i.e. will we
> >> always be in non-nested mode when we're emulating VMLOAD/VMSAVE?
> > 
> > When you run an L2 hypervisor then you are in nested mode.
> 
> Thinking about this some more...
> in this case it is the L1 hypervisor's VMLOAD/VMSAVE that gets emulated.
> The L1 hypervisor itself runs in non-nested mode. When the L1 hypervisor
> uses the VMLOAD/VMSAVE instructions, they get intercepted and emulated
> by the host hypervisor, which is in non-nested mode at that point.
> 
> Tim: the answer to your question is yes, we are always in non-nested
> mode when we're emulating VMLOAD/VMSAVE.

Good -- so in that case we can use get_page_from_gfn(P2M_ALLOC|P2M_UNSHARE).
The callers should also check p2m_is_ram() && !p2m_is_readonly() on the
returned type.
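
Something along these lines ought to do it -- a minimal sketch only, not
the actual patch; it assumes the usual helpers from asm-x86/p2m.h, and
the helper name nsvm_get_nvmcb_page() is just for illustration:

static struct page_info *nsvm_get_nvmcb_page(struct vcpu *v,
                                             uint64_t vmcbaddr)
{
    p2m_type_t p2mt;
    struct page_info *page;

    /* Translate the L1 guest-physical VMCB address to a page,
     * populating/unsharing as necessary.  This takes a reference
     * that the caller must drop with put_page(). */
    page = get_page_from_gfn(v->domain, vmcbaddr >> PAGE_SHIFT,
                             &p2mt, P2M_ALLOC | P2M_UNSHARE);
    if ( !page )
        return NULL;

    /* Only plain, writable RAM is acceptable for VMLOAD/VMSAVE. */
    if ( !p2m_is_ram(p2mt) || p2m_is_readonly(p2mt) )
    {
        put_page(page);
        return NULL;
    }

    return page;
}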

Cheers,

Tim.
