
Re: [Xen-devel] x86 swiotlb questions



One thing is that we change the value of PCI_BUS_IS_PHYS (or some similar
macro whose name I can't quite remember), which I believe turns off some
bounce-buffer logic contained within the block-device subsystem. That means
we will get highmem requests hitting the DMA interfaces, whereas on native
they would have been filtered earlier by the highmem/lowmem bounce-buffer
logic that is specific to block-device requests.

Obviously we want to turn off that lowmem/highmem bounce-buffer logic on Xen,
as it is nonsense when there is no direct correspondence between low/high
pseudo-physical addresses and machine addresses.

 -- Keir

On 19/12/06 14:39, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

>>> Not yet - because of the highmem handling needed for i386. I wonder,
>>> however, how native Linux gets away with not handling this through
>>> swiotlb, and why nevertheless Xen needs to special-case this. Any ideas?
>> 
>> Probably because GFP_KERNEL and GFP_DMA allocations are guaranteed to be
>> DMAable by 30-bit-capable devices on native, but not on Xen.
> 
> Not sure I understand your thinking here. Nothing prevents user pages (or
> anything else that I/O may happen against) from coming from highmem, so the
> bounce logic in mm/highmem.c needs to control this anyway (as I understand
> it). And since all we're talking about here are physical addresses (and
> their translations to virtual ones), I would rather conclude that we'll
> never see a page in the I/O path for which page_address() would return
> NULL; but if that is the case, then there's no need to kmap such pages or
> to favor page_to_bus() over virt_to_bus(page_address()).


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel



