On Wed, Nov 10, 2010 at 05:16:14PM -0800, Dante Cinco wrote:
> We have Fibre Channel HBA devices that we PCI passthrough to our pvops
> domU kernel. Without swiotlb=force in the domU's kernel command line,
> both domU and dom0 lock up after loading the kernel module drivers for
> the HBA devices. With swiotlb=force, the domU and dom0 are stable
Whoa. That is not good - what happens if you just pass in iommu=soft?
Does the PCI-DMA: Using.. show up if you don't pass in any of those parameters?
(I don't think it does, but just doing 'iommu=soft' should enable it).
> after loading the kernel module drivers but the I/O performance is at
> least an order of magnitude worse than what we were seeing with the
> HVM kernel. I see the following in /var/log/kern.log in the pvops domU:
> PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
> Placing 64MB software IO TLB between ffff880005800000 - ffff880009800000
> software IO TLB at phys 0x5800000 - 0x9800000
> Is swiotlb=force responsible for the I/O performance degradation? I
> don't understand what swiotlb=force does so I would appreciate an
> explanation or a pointer.
So, you should only need to use 'iommu=soft'. It will enable the Linux kernel
to translate the pseudo-PFNs to the real machine frame numbers (bus addresses).
If your card is 64-bit capable, then that is all it would do. If however your card
is 32-bit and you are DMA-ing data from above the 32-bit limit, it would copy the
data to memory below 4GB, DMA that, and when done, copy it back to where the page
is. This is called bounce-buffering, and this is why you would use a mix of
pci_map_page and pci_sync_single_for_[cpu|device] calls in your driver.
However, I think your cards are 64-bit capable, so you don't need this. But
if you say 'swiotlb=force' it will force _all_ DMAs to go through the bounce
buffer, which would explain the performance drop you are seeing.
So, try just 'iommu=soft' and see what happens.
Xen-devel mailing list