
RE: [Xen-devel] [PATCH][SVM] tlb control enable


  • To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
  • From: "Woller, Thomas" <thomas.woller@xxxxxxx>
  • Date: Wed, 8 Feb 2006 14:25:35 -0600
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 08 Feb 2006 20:37:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcYs02TTlGbpbeT2R7SGqurdOS222QACnWyQ
  • Thread-topic: [Xen-devel] [PATCH][SVM] tlb control enable

> 
> Is this needed because some shadow pagetable updates do not 
> flush the TLB, instead relying on return to guest context 
> (and consequent CR3 change) to flush the TLB? 

AMD SVM can tag TLB entries with an Address Space Identifier (ASID), so
entries belonging to different host and/or guest address spaces can
coexist in the TLB.  Ideally a fresh ASID is allocated at various
points, including when an SPT is created.  When a guest CR3 write
occurs, in theory the hv only has to retire the current ASID for that
core and allocate a new one, and therefore does not have to perform any
TLB flushing at all.  At the moment, though, we always have to perform
a tlb_local_flush() on an intercepted (VMEXIT) CR3 write, which might
be due to some SPT updates not flushing the TLB - not sure yet.
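
Roughly, the ideal CR3-write path we are describing would look
something like this (a sketch only - svm_asid_new(), update_shadow_cr3()
and the structure/field names are illustrative, not the actual code):

    /* Hypothetical "no flush" handling of an intercepted guest CR3
     * write: retire the current ASID and take a fresh one instead of
     * flushing the TLB. */
    static void svm_set_cr3_ideal(struct vcpu *v, unsigned long new_cr3)
    {
        struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;

        update_shadow_cr3(v, new_cr3);       /* repoint the SPT base */
        vmcb->guest_asid = svm_asid_new(v);  /* stale entries keep the old ASID tag */

        /* Today we still call tlb_local_flush() at this point; with
         * working ASID retirement that call should become unnecessary. */
    }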

On vcpu migration to a new core, the old ASID can be "retired"
(deallocated) on the old core, the SPT entries should remain valid, and
a new ASID is then assigned on the new core.  Only when all of the
ASIDs on a given core have been used (retired) does a TLB flush have to
occur.  We are seeing guest hangs when testing vcpu core switching.  We
have also noticed the recent vcpu migration patch and are looking at
adapting it for SVM.
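
Continuing the sketch above, the hypothetical svm_asid_new() could keep
one counter per core and only force a full flush when the pool wraps
(again, MAX_ASID, the array and the field names are assumptions for
illustration, not the real implementation):

    #define MAX_ASID 64                    /* assumed per-core pool size */

    static uint32_t next_asid[NR_CPUS];    /* 0 == not yet initialised */

    static uint32_t svm_asid_new(struct vcpu *v)
    {
        unsigned int cpu = smp_processor_id();
        struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;

        if ( next_asid[cpu] == 0 || next_asid[cpu] > MAX_ASID )
        {
            vmcb->tlb_control = 1;         /* pool exhausted: full flush at VMRUN */
            next_asid[cpu] = 1;            /* ASID 0 is reserved for the host */
        }
        return next_asid[cpu]++;
    }

On migration the destination core simply hands out one of its own
ASIDs; nothing on the old core needs to be touched beyond marking the
old ASID reusable.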

There is also a new instruction, INVLPGA, which allows the hv to
selectively invalidate the TLB mapping for a given virtual page in a
given ASID.  We might be able to squeeze a bit more performance out by
using it, but for now we do not.
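
For reference, a minimal wrapper for it might look like the following;
INVLPGA takes the virtual address in rAX and the ASID in ECX, and we
emit the opcode bytes directly in case the assembler does not know the
mnemonic (the function name is just illustrative):

    /* Invalidate the TLB entry for one virtual address in one ASID.
     * INVLPGA: rAX = virtual address, ECX = ASID (opcode 0F 01 DF). */
    static inline void svm_invlpga(unsigned long vaddr, uint32_t asid)
    {
        asm volatile ( ".byte 0x0f, 0x01, 0xdf"  /* invlpga */
                       : /* no outputs */
                       : "a" (vaddr), "c" (asid)
                       : "memory" );
    }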

So to finally answer your question concerning the patch...

Setting the tlb_control field to 1 in the VMCB causes a complete TLB
flush on this core at the next VMRUN.  We have seen issues (blue
screens) when relying on ASID granularity with WinXP SP1 running on
cores > 0, and we have found that flushing the TLBs on each VMRUN
alleviates the WinXP SP1 crashes.  We have also sometimes seen a
substantial performance improvement(!) when flushing on each VMRUN,
which was completely unexpected.  We are continuing to investigate the
root cause, but for the moment we would like to just flush on each
VMRUN.  Digging around in the SPT code might be necessary here as well.
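
In effect, the behaviour we want for now amounts to no more than this
before every VMRUN (field name as in the VMCB control area; the helper
name is only illustrative):

    /* Request a complete TLB flush on this core at the next VMRUN by
     * setting the VMCB TLB_CONTROL byte (1 = flush entire TLB). */
    static inline void svm_flush_on_vmrun(struct vmcb_struct *vmcb)
    {
        vmcb->tlb_control = 1;
    }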


Cheers,
Tom



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

