
Re: [Xen-devel] Load increase after memory upgrade (part2)



On Wed, Dec 14, 2011 at 04:23:51PM -0400, Konrad Rzeszutek Wilk wrote:
> On Mon, Dec 05, 2011 at 10:26:21PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Sun, Dec 04, 2011 at 01:09:28PM +0100, Carsten Schiers wrote:
> > > Here with two cards enabled, creating a bit of "work" by watching TV with 
> > > one of them:
> > > 
> > > [   23.842720] Starting SWIOTLB debug thread.
> > > [   23.842750] swiotlb_start_thread: Go!
> > > [   23.842838] xen_swiotlb_start_thread: Go!
> > > [   28.841451] 0 [budget_av 0000:00:01.0] bounce: from:435596(slow:0)to:0 
> > > map:658 unmap:0 sync:435596
> > > [   28.841592] SWIOTLB is 4% full
> > > [   33.840147] 0 [budget_av 0000:00:01.0] bounce: from:127652(slow:0)to:0 
> > > map:0 unmap:0 sync:127652
> > > [   33.840283] SWIOTLB is 4% full
> > > [   33.844222] 0 budget_av 0000:00:01.0 alloc coherent: 8, free: 0
> > > [   38.840227] 0 [budget_av 0000:00:01.0] bounce: from:128310(slow:0)to:0 
> > > map:0 unmap:0 sync:128310
> > 
> > Whoa. Yes. You are definitely using the bounce buffer :-)
> > 
> > Now it is time to look at why the driver is not using those coherent ones -
> > it appears to allocate just eight of them but does not use them... Unless
> > it is using them _and_ bouncing them (which would be odd).
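
For anybody following along, this is the distinction those debug counters are
making. Just a sketch using the generic DMA API - budget_av really goes through
the older pci_* wrappers, and the sizes here are made up:

    #include <linux/dma-mapping.h>

    static void dma_paths_demo(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t bus, dma;

            /* Coherent buffer: set up once, guaranteed reachable by the
             * device, never goes through the bounce pool. */
            void *ring = dma_alloc_coherent(dev, PAGE_SIZE, &bus, GFP_KERNEL);

            /* Streaming mapping: if 'buf' sits above what the device can
             * address, swiotlb copies it through the bounce pool on the map
             * and on every sync - the 'bounce:'/'sync:' counters above. */
            dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
            dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
            dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);

            dma_free_coherent(dev, PAGE_SIZE, ring, bus);
    }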
> > 
> > And BTW, you can lower your 'swiotlb=XX' value. The 4% is how much of the
> > default size you are using.
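
Back-of-the-envelope, assuming the stock pool of 64MB (32768 slabs of 2KB
each): 4% is only about 2.6MB in flight, so something like swiotlb=16384,
i.e. a 32MB pool, would still leave a very comfortable margin.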
> 
> So I was able to see this with an atl1c ethernet driver on my SandyBridge i3
> box. It looks as if the card is truly 32-bit, so on a box with 8GB it
> bounces the data. If I boot the Xen hypervisor with 'mem=4GB' I get no
> bounces (no surprise there).
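
The decision that makes the 8GB box bounce and the 4GB one not is, roughly,
this check in swiotlb_map_page() (paraphrased, not the literal code):

    if (dma_capable(dev, dev_addr, size))
            return dev_addr;   /* device can reach the buffer - no copy */
    /* otherwise grab a slot in the <4GB bounce pool, copy the data, and
     * hand the device the bounce buffer's address instead */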
> 
> In other words - I see the same behavior you are seeing. Now off to:
> > 
> > I should find out _why_ the old Xen kernels do not use the bounce buffer
> > so much...
> 
> which will require some fiddling around.

And I am not seeing any difference - the swiotlb shows the same usage whether
booting a classic (old-style XenoLinux) 2.6.32 or a brand new pvops (3.2).
Obviously, if I limit the amount of physical memory (so 'mem=4GB' on the Xen
hypervisor line), the bounce usage disappears. Hmm, I wonder if there is a
nice way to tell the hypervisor - hey, please stuff dom0 under 4GB.
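
For completeness, the 'mem=4GB' test is nothing fancier than tacking it onto
the hypervisor line in GRUB - the kernel/initrd names below are placeholders,
use whatever your distro installs:

    title Xen (mem=4GB test)
            kernel /boot/xen.gz mem=4GB
            module /boot/vmlinuz console=hvc0
            module /boot/initrd.img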

Here is the patch I used against classic XenLinux. Any chance you could run
it with your classic guests and see what numbers you get?


Attachment: swiotlb-against-old-type.patch
Description: Text document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

