xen-devel

Re: [Xen-devel] windows domU disk performance graph comparing hvm vs st

On Fri, 2010-02-19 at 19:50 -0500, Keith Coleman wrote:
> On Fri, Feb 19, 2010 at 7:08 PM, Daniel Stodden
> <daniel.stodden@xxxxxxxxxx> wrote:
> > On Fri, 2010-02-19 at 17:41 -0500, Keith Coleman wrote:
> >
> >> This graph shows the performance under a webserver disk IO workload at
> >> different queue depths. It compares the 4 main IO methods for windows
> >> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
> >> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
> >> drivers.
> >
> > Cool, thanks. If I may ask, what exactly did you run?
> 
> iometer
> 
> >> The gplpv and xcp winpv drivers have comparable performance with gplpv
> >> being slightly faster. Both pv drivers are considerably faster than
> >> pure hvm or stub domains. Stub domain performance was about even with
> >> HVM which is lower than we were expecting. We tried a different cpu
> >> pinning in "Stubdom B" with little impact.
> >
> > Is this an SMP dom0? A single guest?
> 
> Dual core server with dom0 pinned to core 0 and a single domU pinned
> to core 1. Stubdom was pinned to core 0 then core 1.

I don't see why stubdom would be faster in either configuration. Once
you're through DM emulation, there are plenty of cycles to spend waiting
for I/O completion, so dom0 won't mind spending them on qemu either.

Daniel
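
For anyone trying to reproduce a comparison like the one above with the
xm toolstack of that era: plain HVM and the stub-domain case differ only
in which device model the guest config names, while gplpv and the XCP
winpv drivers are installed inside the Windows guest rather than
configured on the host. A minimal config sketch, assuming a hypothetical
guest called "winguest" (paths follow the stock xmexample.hvm and the
stubdom README, not anything posted in this thread):

# /etc/xen/winguest -- plain HVM: qemu-dm runs as a process in dom0
builder      = 'hvm'
kernel       = '/usr/lib/xen/boot/hvmloader'
name         = 'winguest'
memory       = 1024
vcpus        = 1
cpus         = "1"                               # keep the guest on core 1
disk         = [ 'phy:/dev/vg0/winguest,hda,w' ] # illustrative backend only
device_model = '/usr/lib/xen/bin/qemu-dm'

# Stub-domain variant: point device_model at stubdom-dm instead, so the
# device model runs in its own MiniOS domain rather than as a dom0 process.
# device_model = '/usr/lib/xen/bin/stubdom-dm'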
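
The numbers behind the graph came from iometer run inside the Windows
guest at several queue depths. iometer is driven from its GUI or an .icf
file, so as a rough stand-in a similar queue-depth sweep can be sketched
with fio instead (a different tool than the one used in the test; the
target device, block size and runtime below are illustrative assumptions,
and iometer's web-server pattern additionally mixes block sizes):

# sweep the number of outstanding I/Os from 1 to 32 against the test disk
# (inside a Windows guest, --ioengine=windowsaio would replace libaio)
for qd in 1 2 4 8 16 32; do
    fio --name=websrv-qd$qd --filename=/dev/xvdb --direct=1 \
        --rw=randread --bs=8k --ioengine=libaio --iodepth=$qd \
        --runtime=60 --time_based --output=websrv-qd$qd.log
done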
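
The pinning described in the last quoted paragraph maps onto xm vcpu-pin
roughly as follows; a sketch assuming single-vCPU domains and the
hypothetical guest name from above (a stub domain shows up in xm list as
"<guest>-dm"):

# dom0 on physical core 0
xm vcpu-pin Domain-0 0 0

# the Windows guest on core 1
xm vcpu-pin winguest 0 1

# the stub domain (device model) pinned to core 0 in one run...
xm vcpu-pin winguest-dm 0 0
# ...and to core 1 in the other
xm vcpu-pin winguest-dm 0 1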


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
