
RE: [Xen-devel] IOMMU support: __direct_remap_pfn_range() fails


  • To: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: "Langsdorf, Mark" <mark.langsdorf@xxxxxxx>
  • Date: Mon, 3 Oct 2005 11:24:28 -0500
  • Delivery-date: Mon, 03 Oct 2005 16:22:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcXGAjR4vN7kD1l3SNyUFvnONZrrbQAA058wAACUK0AAAIAVYAAAOVBAAADRFcAAijJOEA==
  • Thread-topic: [Xen-devel] IOMMU support: __direct_remap_pfn_range() fails

> See include/asm-xen/asm-i386/agp.h
> 
> /* GATT allocation. Returns/accepts GATT kernel virtual address. */
> #define alloc_gatt_pages(order) ({                                        \
>         char *_t; dma_addr_t _d;                                          \
>         _t = dma_alloc_coherent(NULL,PAGE_SIZE<<(order),&_d,GFP_KERNEL);  \
>         _t; })
> #define free_gatt_pages(table, order)                                     \
>         dma_free_coherent(NULL,PAGE_SIZE<<(order),(table),virt_to_bus(table))
> There may be other changes in the file too.

Thanks, that fixed it.
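
For the archives: the native x86_64 include/asm-x86_64/agp.h (quoting
from memory, so double-check against your tree) just pulls the GATT
straight off the free list:

        #define alloc_gatt_pages(order)         \
                ((char *)__get_free_pages(GFP_KERNEL, (order)))
        #define free_gatt_pages(table, order)   \
                free_pages((unsigned long)(table), (order))

Presumably that's the difference: dma_alloc_coherent() under Xen hands
back machine-contiguous memory, whereas __get_free_pages() memory is
only pseudo-physically contiguous, which the remap path can't handle.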

Should asm-xen/agp.h be replaced by the code in
asm-xen/asm-i386/agp.h?  It looks like it's generic
across all architectures.
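
For anyone who finds this thread later: the failing pattern in the
call stack quoted below boils down to allocating normal RAM and then
ioremapping it as if it were MMIO.  A rough sketch (hypothetical
helper, not the literal driver code):

        /* Sketch of the anti-pattern from agp_generic_create_gatt_table(). */
        static u32 *broken_gatt_alloc(int page_order)
        {
                /* Allocate ordinary RAM for the GATT... */
                char *table = (char *)__get_free_pages(GFP_KERNEL, page_order);

                /* ...then map it uncached as though it were device
                 * memory.  Native x86 tolerates this; Xen rejects it
                 * because these pages are domain RAM, not MMIO. */
                return (u32 *)ioremap_nocache(virt_to_phys(table),
                                              PAGE_SIZE << page_order);
        }

Switching the allocator to dma_alloc_coherent(), as above, avoids it.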

-Mark Langsdorf
AMD, Inc.

> > -----Original Message-----
> > From: Langsdorf, Mark [mailto:mark.langsdorf@xxxxxxx]
> > Sent: 30 September 2005 23:12
> > To: Ian Pratt; xen-devel@xxxxxxxxxxxxxxxxxxx
> > Subject: RE: [Xen-devel] IOMMU support: __direct_remap_pfn_range() fails
> > 
> > > What's calling direct_remap_page_range?
> > 
> > The call stack is:
> >     __direct_remap_page_range
> >     __ioremap
> >     ioremap_nocache
> >     agp_generic_create_gatt_table
> >     agp_backend_initialization
> >     etc, etc.
> > 
> > The memory being remapped is allocated in
> > drivers/char/agp/generic.c:agp_generic_create_gatt_table()
> > by a call to alloc_gatt_pages() (which is just a #define of
> > __get_free_pages() on x86_64).
> >  
> > > My suspicion is that the driver is allocating some memory and
> > > then calling ioremap on it, which isn't a good thing to do as
> > > it's not MMIO memory. You can get away with this on native, but
> > > not Xen.
> > > 
> > > Can you point us at the appropriate section of the driver code?
> > 
> > See above.  How do I fix this?
> > 
> > Thanks for the help.
> > 
> > -Mark Langsdorf
> > AMD, Inc.
> > 
> > > > > > I am working on getting IOMMU support for AMD64. ...the 
> > > > > > agpgart code is still failing.
> > > > > > 
> > > > > > I have tracked the problem down to line 92 of ioremap.c, in
> > > > > > __direct_remap_pfn_range().
> > > > > > The failing instruction is a call to HYPERVISOR_mmu_update().
> > > > > 
> > > > > What are the arguments to direct_remap_pfn_range?
> > > > 
> > > > struct mm_struct *mm = 0x804c9540
> > > > unsigned long address = 80000
> > > > unsigned long mfn = 4280
> > > > unsigned long size = 80000
> > > > pgprot_t prot = 77
> > > > domid_t domid = 7ff1
> > > > 
> > > > The arguments to the HYPERVISOR_mmu_update() call are
> > > >         u = 43418
> > > >         v - u = 800
> > > >         domid = 7ff1
> > > > 
> > > > > With a verbose=y build of Xen, what debug output do you
> > > > > get from Xen (on the serial line)?
> > > > 
> > > > (XEN) (file=/usr/src/xen-unstable/xen-source/xen/include/asm/mm.h, line=202)
> > > > Error pfn 4280: rd=ffff8300001c7080, od=0000000000000000,
> > > > caf=00000000, taf=0000000000000006
> > > > 
> > > > If I'm reading asm/mm.h:get_page() right, it's failing because
> > > > the page->count_info is 0, but I don't know who set that value
> > > > or if that's meaningful.
> > > > 
> > > > -Mark Langsdorf
> > > > AMD, Inc.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

