[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] dom0 boot failure: dma_reserve in reserve_bootmem_generic()



On Mon, 28 Jun 2010 10:21:09 +0100
"Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
> I think the comment immediately before set_dma_reserve() explains
> it quite well:

I'm actually looking at the 2.6.18-164* kernel; it looks like
set_dma_reserve() and its comments were added later.

 
> In all our post-2.6.18 kernels we indeed have this disabled, and
> didn't have any issue with it so far. Nevertheless I'm not convinced
> us really doing a good thing with disabling it after the change (a
> pretty long while ago) to no longer put all memory in the DMA zone.

I may also just disable it for now. I'm not sure I understand the
reason behind putting all memory in the DMA zone.


> For your issue, I rather wonder why dma_reserve reaches this high
> a value only with the particular dom0_mem= you're stating. Did
> you check where those reservations come from, and how they
> differ from when using smaller or larger dom0_mem= values?

Yeah, I checked two values which boot fine:
  dom0_mem = 500M
       reserve_bootmem_generic(phys = 0, len = e91000)
           if (phys+len <= MAX_DMA_PFN*PAGE_SIZE)
               dma_reserve += len / PAGE_SIZE;
 
  dom0_mem = 930M
       reserve_bootmem_generic(phys = 0, len = 1040000)

with dom0_mem = 830M, failing to boot:
       reserve_bootmem_generic(phys = 0, len = fdb000)

Add to that the statically allocated pages, and with 500M it appears
only a few pages are left in the DMA zone. With 930M the check is
skipped altogether, so there's no problem with a driver allocating
from GFP_DMA later.


The start_pfn in dom0 is 0xe34, resulting in table_end == fdb000:

(XEN)  Dom0 alloc.:   000000021d000000->000000021e000000

(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff80000000->ffffffff80531eec
(XEN)  Init. ramdisk: ffffffff80532000->ffffffff80c88200
(XEN)  Phys-Mach map: ffffffff80c89000->ffffffff80e28000
(XEN)  Start info:    ffffffff80e28000->ffffffff80e284b4
(XEN)  Page tables:   ffffffff80e29000->ffffffff80e34000
(XEN)  Boot stack:    ffffffff80e34000->ffffffff80e35000
(XEN)  TOTAL:         ffffffff80000000->ffffffff81000000
(XEN)  ENTRY ADDRESS: ffffffff80000000


So, now that I've stumbled on this, I'm confused why the PAGE_OFFSET+
VAs, i.e. gpfns 0 - 16M, are not mapped to MFNs below 16M. Wouldn't
that be needed for ISA DMA?

thanks a lot Jan,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

