
Re: [Xen-devel] Fwd: [PATCH 0/18] Nested Virtualization: Overview



At 15:57 +0100 on 15 Apr (1271347060), Keir Fraser wrote:
> > patch 04: obsolete gfn_to_mfn_current and remove it.
> >                   gfn_to_mfn_current is redundant with
> >                   gfn_to_mfn(current->domain, ...)
> >                   This patch reduces the size of patch 17.
> 
> This one (at least -- there may be others) needs an ack from Tim.

I've already asked for some measurement to show the effect of removing
gfn_to_mfn_current() on shadow pagetable performance.

The other patches that I was CC'd on look mostly OK, except for
introducing some clunky (and wide) y_to_z(x_to_y(foo_to_x(foo)))
patterns that I'm sure could be done a bit more neatly.

I'll read the PDFs tomorrow and have a proper look at the patches then.

Cheers,

Tim.

> > patch 05: hvm_set_cr0: Allow guest to switch into paged real mode.
> >                   This makes hvmloader boot when we run Xen inside Xen.
> 
> What if we are not running a nestedhvm guest, or otherwise on a system not
> supporting paged real mode? Is it wise to remove the check in that case?
> Even where we *do* support nestedhvm, should all guest writes to CR0 be
> allowed to bypass that check (Isn't paged real mode architecturally only
> allowed to be entered via VMRUN)?
> 
> More generally, I will allow these patches to sit for a week or two to give
> time for potential reviewers to digest them.
> 
>  Thanks,
>  Keir

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

