
Re: [Xen-devel] Load increase after memory upgrade (part2)



On Tue, Nov 29, 2011 at 10:23:18AM +0000, Ian Campbell wrote:
> On Mon, 2011-11-28 at 16:45 +0000, Konrad Rzeszutek Wilk wrote:
> > On Mon, Nov 28, 2011 at 03:40:13PM +0000, Ian Campbell wrote:
> > > On Mon, 2011-11-28 at 15:28 +0000, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Nov 25, 2011 at 11:11:55PM +0100, Carsten Schiers wrote:
> > > 
> > > > > I looked through my old mails from you, and you already explained the
> > > > > necessity of double bounce buffering (PCI->below 4GB->above 4GB).
> > > > > What I don't understand is: why does the Xenified kernel not have
> > > > > this kind of issue?
> > > > 
> > > > That is a puzzle. It should not. The code is very much the same - both
> > > > use the generic SWIOTLB which has not changed for years.
> > > 
> > > The swiotlb-xen used by classic-xen kernels (which I assume is what
> > > Carsten means by "Xenified") isn't exactly the same as the code in
> > > mainline Linux; it has been heavily refactored, for one thing. It's not
> > > impossible that mainline is bouncing something it doesn't really need
> > > to.
> > 
> > The usage, at least with 'pci_alloc_coherent', is that there is no bouncing
> > being done. The alloc_coherent call will allocate a page underneath the 4GB
> > mark and hand it to the driver. The driver can use it as it wishes, and
> > there is no need for a bounce buffer.
> 
> Oh, I didn't realise dma_alloc_coherent was part of swiotlb now. Only a
> subset of swiotlb is in use then; all the bouncing stuff _should_ be
> idle/unused -- but has that been confirmed?

Nope. I hope that the diagnostic patch I have in mind will prove/disprove that.
Now I just need to find a moment to write it :-)
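Roughly the kind of thing I have in mind (an untested sketch only -- the exact
hook point in the swiotlb-xen map path and the names are made up for
illustration): count how often a streaming mapping actually falls back to the
bounce pool versus being mapped in place.

/*
 * Sketch of a bounce-accounting diagnostic.  The idea is to call
 * account_map() from the streaming-DMA map path, right where the
 * decision to use the swiotlb bounce pool is made.  Names here are
 * illustrative, not from any existing tree.
 */
#include <linux/atomic.h>
#include <linux/printk.h>

static atomic_t xen_swiotlb_bounced;   /* went through the bounce pool */
static atomic_t xen_swiotlb_in_place;  /* mapped directly, no copying  */

static inline void account_map(bool bounced)
{
	if (bounced)
		atomic_inc(&xen_swiotlb_bounced);
	else
		atomic_inc(&xen_swiotlb_in_place);

	/* Rate-limited summary so dmesg does not get flooded. */
	if (printk_ratelimit())
		pr_info("swiotlb-xen: bounced=%d in-place=%d\n",
			atomic_read(&xen_swiotlb_bounced),
			atomic_read(&xen_swiotlb_in_place));
}

If the counters show essentially everything being mapped in place, the
bouncing code really is idle and the extra load has to come from somewhere
else.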
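For reference, the pci_alloc_coherent/dma_alloc_coherent point above, seen
from the driver side (a minimal sketch with made-up names, not Carsten's
actual DVB driver): the buffer comes back already usable by a 32-bit device,
so only streaming mappings (dma_map_single/dma_map_page) can ever need a
bounce.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/pci.h>

static void *buf;
static dma_addr_t buf_dma;

static int example_setup_dma(struct pci_dev *pdev, size_t len)
{
	/* The device can only address the low 4GB. */
	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))
		return -EIO;

	/*
	 * On a PV guest using swiotlb-xen this ends up in
	 * xen_swiotlb_alloc_coherent(), which returns memory below the
	 * 4GB mark that the device can DMA into directly -- no bounce
	 * buffer is ever involved for this buffer.
	 */
	buf = dma_alloc_coherent(&pdev->dev, len, &buf_dma, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	return 0;
}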

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel