
[Xen-devel] Re: [ivtv-devel] Problems loading ivtv in Xen - DMA issues?


  • To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
  • From: David Muench <davemuench@xxxxxxxxx>
  • Date: Thu, 7 Jul 2005 12:07:26 -0400
  • Cc: ivtv-devel@xxxxxxxxxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 07 Jul 2005 16:06:15 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi Keir,

I received this reply to my request for help with ivtv on Xen on the
ivtv-devel list. I don't feel very well qualified to respond to
Andrew. Can I ask what your thoughts are? Am I just out of luck?

Thanks,
Dave

On 7/7/05, Andrew May <acmay@xxxxxxxxxxxxxxxx> wrote:
> On Wed, 2005-07-06 at 10:20 -0400, David Muench wrote:
> > "I think the ivtv driver is probably not calculating dma addresses in
> > the way that xen requires. On native Linux, if you allocate a
> > multi-page chunk of physical memory, you can pass the start address of
> > that buffer to hardware and it can dma the entire buffer given just
> > that address. In Xen, because we give guests 'pseudo-physical' memory,
> > that physical buffer may not be really physically contiguous. So we
> > need drivers to dma_alloc_coherent or pci_alloc_consistent the memory
> > they will use for dma --- we modified those functions to ensure they
> > return suitable contiguous physical memory."
> 
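> What they are describing would be roughly this (a sketch only; the
> pci_dev pointer and the helper name are made up, this is not the
> actual ivtv code):
> 
>     #include <linux/pci.h>
> 
>     /* Allocate a buffer the hardware can DMA to/from given a single
>      * bus address.  Under Xen the modified allocator hands back
>      * machine-contiguous pages; dma_alloc_coherent is the generic
>      * equivalent of this PCI wrapper. */
>     static int alloc_capture_buffer(struct pci_dev *pdev, size_t size,
>                                     void **cpu_addr, dma_addr_t *bus_addr)
>     {
>             *cpu_addr = pci_alloc_consistent(pdev, size, bus_addr);
>             if (!*cpu_addr)
>                     return -ENOMEM;
>             /* program *bus_addr into the device; release later with
>              * pci_free_consistent(pdev, size, *cpu_addr, *bus_addr) */
>             return 0;
>     }
> 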
> The Xen people seem to be a bit off here. You may want to look at
> linux/Documentation/DMA-mapping.txt.
> 
> But dma_alloc_coherent and pci_alloc_consistent are for uncached memory
> for DMA, and that is very different from normal memory for DMA. That
> could really slow things down when getting the data out of the buffers
> later.
> 
> pci_map_single is the correct call for doing bulk DMA to normal
> kmalloc'd data.
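> 
> For a kmalloc'ed buffer that fits in one page, the streaming map looks
> about like this (names made up, error checks skipped):
> 
>     void *data = kmalloc(PAGE_SIZE, GFP_KERNEL);
>     dma_addr_t bus;
> 
>     /* map for the device to read; on non-snooping archs this is
>      * where the CPU cache gets flushed for the region */
>     bus = pci_map_single(pdev, data, PAGE_SIZE, PCI_DMA_TODEVICE);
>     /* ... kick off the transfer using 'bus' ... */
>     pci_unmap_single(pdev, bus, PAGE_SIZE, PCI_DMA_TODEVICE);
>     kfree(data);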
> 
> The coherent/consistent memory doesn't matter much on x86, but on other
> archs, like some PPC parts that don't snoop the cache, it becomes a big
> issue. There the CPU and the DMA device cannot write to the same
> cacheline of data without causing havoc: the CPU works on the data in
> its cache while the DMA goes straight to DRAM, and when the cache flush
> does happen it overwrites half the work already done.
> 
> For most hardware this only comes into play for the small ring of
> descriptors.
> 
> If the bulk data is only touched by one side at a time, then the
> map_single function just needs to do a cache flush (along with its
> other work) before the transfer happens.
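> 
> The explicit form of that flush is the sync pair in DMA-mapping.txt,
> used when a buffer stays mapped and gets reused (again just a sketch
> with made-up names):
> 
>     /* hand the buffer to the device before it DMAs into it */
>     pci_dma_sync_single_for_device(pdev, bus, PAGE_SIZE,
>                                    PCI_DMA_FROMDEVICE);
>     /* ... device writes new data via DMA ... */
>     /* take it back before the CPU touches the new data */
>     pci_dma_sync_single_for_cpu(pdev, bus, PAGE_SIZE,
>                                 PCI_DMA_FROMDEVICE);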
> 
> I am much more familiar with network drivers, where it is common to use
> one consistent allocation for the descriptors while all the packets just
> go through map_single(). And since jumbo frames aren't typical yet, the
> packets all fit in one page.
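> 
> The usual split looks something like this (hypothetical descriptor
> struct and sizes, not any particular driver):
> 
>     #define RX_RING_SIZE 256
>     #define RX_BUF_SIZE  1536
> 
>     struct my_rx_desc {             /* hypothetical HW descriptor */
>             __le32 addr;
>             __le32 status;
>     };
> 
>     struct my_rx_desc *ring;
>     struct sk_buff *skb;
>     dma_addr_t ring_bus, buf_bus;
> 
>     /* descriptor ring: one consistent allocation shared with the NIC */
>     ring = pci_alloc_consistent(pdev, RX_RING_SIZE * sizeof(*ring),
>                                 &ring_bus);
> 
>     /* packet buffers: normal memory, streaming-mapped per packet */
>     skb = dev_alloc_skb(RX_BUF_SIZE);
>     buf_bus = pci_map_single(pdev, skb->data, RX_BUF_SIZE,
>                              PCI_DMA_FROMDEVICE);
>     ring[0].addr = cpu_to_le32(buf_bus);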
> 
> So it looks like the Xen people have some work to do if they really want
> to support any type of DMA device.
> 
> 
> _______________________________________________
> ivtv-devel mailing list
> ivtv-devel@xxxxxxxxxxxxxxxxxxxxx
> https://lists.sourceforge.net/lists/listinfo/ivtv-devel
> 


-- 
David Muench - davemuench@xxxxxxxxx
Jabber ID: dave@xxxxxxxxxxxxxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

