xen-devel

Re: [Xen-devel] Re: Poor performance on HVM (kernbench)

George Dunlap wrote:
So, the problem appears to be with a ton of brute-force searches to
remove writable mappings, both during resync and promotion.  My
analysis tool is reporting that of the 30 seconds or so in the trace
from xen-unstable, the guest spent a whopping 67% in the hypervisor:
 * 26% doing resyncs as a result of marking another page out-of-sync
 * 9% promoting pages
 * 27% resyncing as a result of cr3 switches
And almost the entirety of all of those can be attributed to
brute-force searches to remove writable mappings.

Fantastic (well, sort of)!

If I understand it correctly, Todd is using PV drivers in his Linux HVM guests, so the brute-force searches come from former L1 page tables that are now used as I/O pages: they never get unshadowed, so writable mappings to them keep appearing and triggering the searches. In short, it is an unshadowing problem, and it should be `easy` to fix. I wasn't using PV drivers, so I was not seeing this behaviour.
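
(For anyone not following the shadow code: the expensive path being discussed is, conceptually, just a linear scan over every shadow L1 entry looking for writable mappings of a single guest frame, which the fast-path heuristics normally let us avoid. A minimal, hypothetical sketch of that idea follows; this is NOT the actual Xen shadow code, and every name and type in it is made up for illustration.)

    /* Toy sketch of a "brute-force" writable-mapping search.
     * Hypothetical simplification; not Xen source. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define SHADOW_L1_ENTRIES 512          /* entries per shadow L1 table */
    #define PTE_FLAG_PRESENT  (1ULL << 0)  /* x86 PTE present bit */
    #define PTE_FLAG_RW       (1ULL << 1)  /* x86 PTE writable bit */

    typedef struct {
        uint64_t entries[SHADOW_L1_ENTRIES];
    } shadow_l1_table_t;

    /* Walk every entry of every shadow L1 table and strip the RW bit from
     * any present mapping of the target frame.  Cost is O(total shadow
     * PTEs) per call, which is why hitting this on every resync or
     * promotion hurts so much. */
    static size_t
    remove_writable_mappings_bruteforce(shadow_l1_table_t *tables,
                                        size_t ntables,
                                        uint64_t target_mfn)
    {
        size_t cleared = 0;

        for (size_t t = 0; t < ntables; t++) {
            for (size_t i = 0; i < SHADOW_L1_ENTRIES; i++) {
                uint64_t pte = tables[t].entries[i];

                if ((pte & PTE_FLAG_PRESENT) && (pte & PTE_FLAG_RW) &&
                    (pte >> 12) == target_mfn) {
                    tables[t].entries[i] = pte & ~PTE_FLAG_RW;
                    cleared++;
                }
            }
        }
        return cleared;
    }

    int main(void)
    {
        /* Two toy shadow L1 tables; entry 7 of table 1 maps frame 0x1234 RW. */
        static shadow_l1_table_t tables[2];
        tables[1].entries[7] = (0x1234ULL << 12) | PTE_FLAG_PRESENT | PTE_FLAG_RW;

        size_t n = remove_writable_mappings_bruteforce(tables, 2, 0x1234);
        printf("cleared %zu writable mapping(s)\n", n);
        return 0;
    }

The real code has heuristics to guess where the writable mapping lives so it can skip the full scan; the trace numbers suggest those fast paths are not helping for the pages in question here.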

Or, it could be a fixup table bug, but I doubt it.

George, did you see excessive fixup faults in the trace?

Todd, could you try without PV drivers (plain qemu emulation) and see if the results get better?

Thanks,
Gianluca


