
Re: [Xen-devel] Nouveau on dom0



On Fri, Mar 05, 2010 at 01:16:13PM +0530, Arvind R wrote:
> On Thu, Mar 4, 2010 at 11:55 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx> wrote:
> > On Thu, Mar 04, 2010 at 02:47:58PM +0530, Arvind R wrote:
> >> On Wed, Mar 3, 2010 at 11:43 PM, Konrad Rzeszutek Wilk
> >> <konrad.wilk@xxxxxxxxxx> wrote:
> >> >> > aio-write -
> >> >>
> >> >> which triggers do_page_fault, handle_mm_fault, do_linear_fault, 
> >> >> __do_fault
> >> >> and finally ttm_bo_vm_fault.
> >>
> >> > I've attached a simple patch I wrote some time ago to get the real MFNs
> >> Have patched - did not apply cleanly. Will compile and get some info.
> > take the print_data function and just jam it into the ttm_bo_vm_fault code
> Linking problems, but it compiled and ran.
> !!! CANNOT lookup_address()!!! Returns NULL on bare metal AND Xen,
> before AND after vm_insert/remap_pfn. The address looked up is what the

The "after" is a bit surprise. I would have thought it would would have
update the page-table with the new PFN. But maybe it did, but for a
different address (since it does not actually use the 'address' field
but __va(pfn)<< PAGE_SHIFT as the address).
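
For what it is worth, here is a minimal sketch of how one could compare
the two lookups from inside the fault handler (the helper name is made
up; pass in the fault address and the PFN the handler is about to
insert). One caveat: lookup_address() walks the kernel page tables (it
starts from pgd_offset_k), so getting NULL back for a user-space fault
address is not by itself an error.

    /* Sketch: compare lookup_address() on the user fault address vs.
     * the direct-map address the debug patch actually used. The user
     * VA is expected to come back NULL, since lookup_address() only
     * walks init_mm's page tables. */
    static void compare_lookups(unsigned long address, unsigned long pfn)
    {
            unsigned int level;
            pte_t *user_pte, *kern_pte;

            user_pte = lookup_address(address, &level);
            kern_pte = lookup_address((unsigned long)__va(pfn << PAGE_SHIFT),
                                      &level);
            printk(KERN_INFO "[TTM] user va %lx pte %p, pfn %lx kernel pte %p\n",
                   address, user_pte, pfn, kern_pte);
    }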

> fault_handler passes in. Had to add a NULL check in print_data.
> 
> Bare-metal boot log:
>  [TTM] ttm_bo_vm_fault: faulting-in pages, TTM_PAGE_FLAGS=0x0
>  [         Before:]PFN: Failed lookup_address of 0x7fd82e9aa000
>  [            After :]PFN: Failed lookup_address of 0x7fd82e9aa000
> 
>  Ring any bells?

Yeah... Can you also instrument the code to print the PFN? The code goes
through insert_pfn->pfn_pte, which calls xen_make_pte, which ends up
doing pte_pfn_to_mfn. That routine does a pfn_to_mfn, which does a
get_phys_to_machine(pfn). That last routine looks the PFN up in the
PFN->MFN translation table and finds the MFN that corresponds to it.
Since the memory was allocated from ... well, this is the big question.
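
Something along these lines would show what the P2M lookup returns for
the PFN in question (just a sketch; the helper name is made up, and the
header that provides xen_pv_domain() varies a bit between kernel
versions):

    #include <asm/xen/page.h>       /* pfn_to_mfn() */
    #include <asm/xen/hypervisor.h> /* xen_pv_domain() */

    /* Sketch: print the PFN and, when running as a PV guest, the MFN
     * the P2M table maps it to. */
    static void print_pfn_mfn(unsigned long pfn)
    {
            if (xen_pv_domain())
                    printk(KERN_INFO "[TTM] pfn=0x%lx mfn=0x%lx\n",
                           pfn, pfn_to_mfn(pfn));
            else
                    printk(KERN_INFO "[TTM] pfn=0x%lx (not a PV guest)\n",
                           pfn);
    }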

Is the memory allocated from normal kernel space, or is it really backed
by the video card? In your previous e-mails you mentioned that is_iomem
is set to zero, which implies that the memory behind these mappings is
NOT I/O-memory backed (i.e. it is ordinary system RAM).


> 
> >> > There is an extra flag that the PTE can have when running under Xen: 
> >> > _PAGE_IOMAP.
> >> > This signifies that the PFN is actually the MFN. In this case, though,
> >> > it shouldn't be enabled b/c the memory is actually gathered from
> >> > alloc_page. But if it is, it might be the culprit.
> 
> >> I think the problem lies in the vm_insert_pfn/page/mixed family of 
> >> functions.
> >> These are used in only a few places (I grep'ed the kernel tree), and
> >> invariably for mmap'ing: scsi-tgt, mspec, some media/video drivers,
> >> poch and android in staging, and ttm
> >> - and, surprise - xen/blktap/ring.c and device.c
> >> - which both check XENFEAT_auto_translated_physmap
> >>
> >> Pls. look at xen/blktap/ring.c - it looks to be what we need
> >
> > Let me take a look at it tomorrow. Bit swamped.
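
For reference, the blktap pattern that the quoted mail points at is
roughly the following (a sketch with a made-up function name, not
blktap's literal code):

    #include <linux/mm.h>
    #include <xen/features.h>

    /* Sketch: pick the VMA insertion primitive depending on whether
     * the domain is auto-translated. */
    static int insert_ring_page(struct vm_area_struct *vma,
                                unsigned long addr, struct page *page)
    {
            if (xen_feature(XENFEAT_auto_translated_physmap))
                    return vm_insert_page(vma, addr, page);
            return remap_pfn_range(vma, addr, page_to_pfn(page),
                                   PAGE_SIZE, vma->vm_page_prot);
    }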

I started going through the allocations that are done and found this
in ttm_bo_mmap:

vma->vm_flags |= VM_RESERVED | VM_IO | VM_MIXEDMAP | VM_DONTEXPAND;

The VM_IO is OK if the memory being referenced is the video card's
memory. _BUT_ if the memory is allocated through alloc_page
(ttm_tt_alloc_page) or kmalloc, then this will cause us headaches. You
might want to check in ttm_bo_vm_fault what the vma->vm_flags are and
whether VM_IO is set.
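
Something as simple as this at the top of ttm_bo_vm_fault would tell us
(a sketch):

    /* Sketch: dump the VMA flags at fault time to see whether the
     * mmap path marked this mapping as I/O memory. */
    printk(KERN_INFO "[TTM] vm_flags=0x%lx VM_IO=%d VM_MIXEDMAP=%d\n",
           vma->vm_flags,
           !!(vma->vm_flags & VM_IO),
           !!(vma->vm_flags & VM_MIXEDMAP));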

(FYI, look at
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=e84db8b7136d1b4a393dbd982201d0c5a3794333)

If VM_IO is set, change ttm_bo_mmap to not set it and see how that
works.
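
That is, for the experiment the line quoted above would become (a
sketch only - don't drop VM_IO for mappings that really are the card's
aperture):

    /* Experimental sketch: the same flags, minus VM_IO, for the case
     * where the pages come from alloc_page()/kmalloc rather than the
     * card's aperture. */
    vma->vm_flags |= VM_RESERVED | VM_MIXEDMAP | VM_DONTEXPAND;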


Though I am not sure whether ttm_bo_mmap is used by the nvidia driver.

Attached is a re-write of the debug patch I sent earlier. I
compile-tested it but haven't run it yet (just doing that now).

Attachment: debug-print-pte.patch
Description: Text document
