
RE: [Xen-devel] Odd blkdev throughput results



> The fun (for me, fun is probably a personal thing) part is that
> throughput is higher than with TCP. May be due to the block layer being
> much thinner than TCP/IP networking, or the fact that transfers utilize
> the whole 4KB page size for sequential reads. Possibly some of both, I
> didn't try.

The big thing is that on network RX it is currently dom0 that does the copy. In 
the CMP case this leaves the data in the shared cache ready to be accessed by 
the guest. In the SMP case it doesn't help at all. In netchannel2 we're moving 
the copy to the guest CPU, and trying to eliminate it with smart hardware.
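For illustration, the per-packet work is roughly one GNTTABOP_copy issued by dom0 into
a page the guest granted. This is only my sketch against the public grant-table
interface, not the actual netback code, and it assumes a dom0 kernel-module context
with the usual Linux Xen headers:

/* Sketch only: copy 'len' bytes from a local dom0 frame into a page the
 * guest granted us (grant ref 'gref' owned by domain 'domid').  Because a
 * dom0 CPU drives the copy, the destination lines end up warm in that
 * CPU's cache, which only helps the guest if it shares that cache. */
#include <linux/errno.h>
#include <xen/interface/xen.h>
#include <xen/interface/grant_table.h>
#include <asm/xen/hypercall.h>

static int copy_to_guest_page(unsigned long dom0_gmfn, grant_ref_t gref,
                              domid_t domid, uint16_t len)
{
        struct gnttab_copy op = {
                .source.u.gmfn = dom0_gmfn,      /* local source frame */
                .source.domid  = DOMID_SELF,
                .dest.u.ref    = gref,           /* guest-provided grant */
                .dest.domid    = domid,
                .len           = len,
                .flags         = GNTCOPY_dest_gref,
        };

        if (HYPERVISOR_grant_table_op(GNTTABOP_copy, &op, 1))
                return -EFAULT;
        return op.status == GNTST_okay ? 0 : -EIO;
}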

Block IO doesn't require a copy at all. 
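blkback just grant-maps the guest's pages and does the I/O directly on them. Again,
only a sketch of the mapping step against the public interface, not the real blkback
(which batches these, manages a pool of pages to map into, and unmaps with
GNTTABOP_unmap_grant_ref when the request completes):

/* Sketch only: map one guest-granted page at a dom0 kernel virtual
 * address so the backend can do the disk I/O directly on the guest's own
 * memory, with no intermediate copy.  'host_vaddr' is assumed to be a
 * suitable dom0 mapping area. */
#include <linux/errno.h>
#include <xen/interface/xen.h>
#include <xen/interface/grant_table.h>
#include <asm/xen/hypercall.h>

static int map_guest_page(unsigned long host_vaddr, grant_ref_t gref,
                          domid_t domid, grant_handle_t *handle)
{
        struct gnttab_map_grant_ref op = {
                .host_addr = host_vaddr,
                .flags     = GNTMAP_host_map,    /* CPU-visible mapping */
                .ref       = gref,
                .dom       = domid,
        };

        if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
                return -EFAULT;
        if (op.status != GNTST_okay)
                return -EIO;
        *handle = op.handle;    /* needed later to unmap */
        return 0;
}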

> This is not my question. What strikes me is that for the blkdev
> interface, the CMP setup is 13% *slower* than SMP, at 661.99 MB/s.
> 
> Now, any ideas? I'm mildly familiar with both netback and blkback, and
> I'd never expected something like that. Any hint appreciated.

How stable are your results with hdparm? I've never really trusted it as a 
benchmarking tool.
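If you want a second opinion on the numbers, something along these lines gives a more
controlled sequential read: it uses O_DIRECT so the domU page cache stays out of the
picture, and you can vary the request size. Rough sketch only; the device path and
sizes are just examples, compile with gcc -O2:

/* Minimal O_DIRECT sequential-read timer, as a cross-check on hdparm. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        const char *dev = argc > 1 ? argv[1] : "/dev/xvda";  /* example path */
        const size_t blk = 1 << 20;                          /* 1 MiB reads */
        const size_t total = 256 << 20;                      /* stop after 256 MiB */
        void *buf;
        if (posix_memalign(&buf, 4096, blk))                 /* O_DIRECT alignment */
                return 1;

        int fd = open(dev, O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        size_t done = 0;
        while (done < total) {
                ssize_t n = read(fd, buf, blk);
                if (n <= 0)                                  /* EOF or error */
                        break;
                done += (size_t)n;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.1f MB/s over %zu MiB\n", done / secs / 1e6, done >> 20);
        close(fd);
        return 0;
}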

The ramdisk isn't going to be able to DMA data into the domU's buffer on a 
read, so it will have to copy it. The hdparm running in domU probably doesn't 
actually look at any of the data it requests, so it stays local to the dom0 
CPU's cache (unlike a real app). Doing all that copying in dom0 is going to 
beat up the domU in the shared cache in the CMP case, but won't affect it as 
much in the SMP case.
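
If you want to see that effect on its own, here's a toy (nothing to do with Xen, just
my sketch): pin a "copier" thread and a "worker" thread to two CPUs of your choosing,
run it once on cache-sharing siblings and once on CPUs in different sockets, and
compare the worker's time. CPU numbers and sizes are arbitrary; build with
gcc -O2 -pthread:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static volatile int stop;

/* Pin the calling thread to one CPU. */
static void pin(int cpu)
{
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* Stand-in for dom0's copying: move big buffers in a loop, flooding the cache. */
static void *copier(void *arg)
{
        size_t sz = 64u << 20;                          /* 64 MiB per copy */
        char *src = malloc(sz), *dst = malloc(sz);
        if (!src || !dst)
                return NULL;
        pin(*(int *)arg);
        memset(src, 1, sz);
        while (!stop)
                memcpy(dst, src, sz);
        free(src);
        free(dst);
        return NULL;
}

int main(int argc, char **argv)
{
        int copier_cpu = argc > 1 ? atoi(argv[1]) : 0;  /* example CPU numbers */
        int worker_cpu = argc > 2 ? atoi(argv[2]) : 1;
        size_t work = 512u << 10;                       /* 512 KiB working set */
        char *ws = malloc(work);
        struct timespec t0, t1;
        unsigned long sum = 0;
        pthread_t t;

        if (!ws)
                return 1;
        pthread_create(&t, NULL, copier, &copier_cpu);
        pin(worker_cpu);
        memset(ws, 2, work);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int pass = 0; pass < 20000; pass++) {      /* cache-resident reads */
                for (size_t i = 0; i < work; i += 64)
                        sum += ws[i];
                asm volatile("" ::: "memory");          /* keep the reads in every pass */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        stop = 1;
        pthread_join(t, NULL);
        printf("worker: %.2f s (checksum %lu)\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9, sum);
        return 0;
}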
 

Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

