
To: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] dom0 boot failure: dma_reserve in reserve_bootmem_generic()
From: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
Date: Mon, 28 Jun 2010 20:19:54 -0700
Cc: "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 28 Jun 2010 20:21:43 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C2885A502000078000085D7@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Oracle Corporation
References: <20100625184046.73890d00@xxxxxxxxxxxxxxxxxxxx> <4C2885A502000078000085D7@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Mon, 28 Jun 2010 10:21:09 +0100
"Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
> I think the comment immediately before set_dma_reserve() explains
> it quite well:

I'm actually looking at the 2.6.18-164* kernel; it looks like
set_dma_reserve() and the comments were added later.

 
> In all our post-2.6.18 kernels we indeed have this disabled, and
> didn't have any issue with it so far. Nevertheless I'm not convinced
> we're really doing a good thing by disabling it after the change (a
> pretty long while ago) to no longer put all memory in the DMA zone.

I may also just disable it for now. I'm not sure I understand the
reason behind putting all memory in the DMA zone in the first place.
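
For reference, here's roughly what I mean by disabling it -- just a
sketch of reserve_bootmem_generic() with the dma_reserve accounting
compiled out for Xen builds. I'm guessing CONFIG_XEN as the guard; the
actual hunk in your trees may well look different:

/*
 * Sketch only: keep the bootmem reservation, but skip the DMA-zone
 * accounting when building for Xen (CONFIG_XEN guard is my assumption).
 */
void __init reserve_bootmem_generic(unsigned long phys, unsigned len)
{
#ifdef CONFIG_NUMA
	int nid = phys_to_nid(phys);
	reserve_bootmem_node(NODE_DATA(nid), phys, len);
#else
	reserve_bootmem(phys, len);
#endif
#ifndef CONFIG_XEN
	/* Don't count early reservations against the 16MB DMA zone;
	 * dom0's pseudo-physical 0-16MB isn't machine 0-16MB anyway. */
	if (phys + len <= MAX_DMA_PFN * PAGE_SIZE)
		dma_reserve += len / PAGE_SIZE;
#endif
}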


> For your issue, I rather wonder why dma_reserve reaches this high
> a value only with the particular dom0_mem= you're stating. Did
> you check where those reservations come from, and how they
> differ from when using smaller or larger dom0_mem= values?

Yeah, I checked two values which boot fine:
  dom0_mem = 500M
       reserve_bootmem_generic(phys = 0, len = e91000)
           if (phys+len <= MAX_DMA_PFN*PAGE_SIZE)
               dma_reserve += len / PAGE_SIZE;
 
  dom0_mem = 930M
       reserve_bootmem_generic(phys = 0, len = 1040000)

with dom0_mem = 830M, failing to boot:
       reserve_bootmem_generic(phys = 0, len = fdb000)

Add to that the statically allocated pages, and with 500M it appears
only a few pages are left in the DMA zone. With 930M the check is
skipped altogether (phys+len exceeds MAX_DMA_PFN*PAGE_SIZE, so
dma_reserve stays 0), and there's no problem with a driver allocating
from GFP_DMA later.
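
For the record, here's the arithmetic spelled out as a stand-alone
userspace check (not kernel code; the len values are taken from the
traces above, and MAX_DMA_PFN*PAGE_SIZE is the 16MB / 4096-page DMA
zone on x86_64):

#include <stdio.h>

#define PAGE_SIZE    0x1000UL
#define MAX_DMA_PFN  (0x1000000UL / PAGE_SIZE)	/* 16MB => 4096 pfns */

int main(void)
{
	/* len values from the reserve_bootmem_generic() calls above */
	unsigned long lens[] = { 0xe91000UL, 0xfdb000UL, 0x1040000UL };
	const char   *cfgs[] = { "dom0_mem=500M", "dom0_mem=830M",
				 "dom0_mem=930M" };
	unsigned long phys = 0, dma_reserve;
	int i;

	for (i = 0; i < 3; i++) {
		dma_reserve = 0;
		if (phys + lens[i] <= MAX_DMA_PFN * PAGE_SIZE)
			dma_reserve += lens[i] / PAGE_SIZE;
		printf("%s: dma_reserve = %lu of %lu DMA-zone pages\n",
		       cfgs[i], dma_reserve, MAX_DMA_PFN);
	}
	return 0;
}

That gives 3729 pages reserved for 500M, 4059 of the 4096 for 830M, and
0 for 930M -- which matches the boot failing only at 830M.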


The start_pfn in dom0 is 0xe34, resulting in table_end == fdb000:

(XEN)  Dom0 alloc.:   000000021d000000->000000021e000000

(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff80000000->ffffffff80531eec
(XEN)  Init. ramdisk: ffffffff80532000->ffffffff80c88200
(XEN)  Phys-Mach map: ffffffff80c89000->ffffffff80e28000
(XEN)  Start info:    ffffffff80e28000->ffffffff80e284b4
(XEN)  Page tables:   ffffffff80e29000->ffffffff80e34000
(XEN)  Boot stack:    ffffffff80e34000->ffffffff80e35000
(XEN)  TOTAL:         ffffffff80000000->ffffffff81000000
(XEN)  ENTRY ADDRESS: ffffffff80000000


So, now that I've stumbled on this, I'm confused: why are the
PAGE_OFFSET+ VAs, i.e. gpfns 0 - 16M, not mapped to MFNs below 16M?
Wouldn't that be needed for ISA DMA?
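
If it's useful, this is how I'd go about confirming where the low
gpfns actually land -- a sketch only, assuming pfn_to_mfn() is the p2m
lookup helper in this tree:

/* Dump where dom0's low pseudo-physical pages live in machine memory.
 * pfn_to_mfn() is assumed to be the p2m lookup; name may differ here. */
static void __init dump_low_p2m(void)
{
	unsigned long pfn;

	for (pfn = 0; pfn < 0x1000; pfn += 0x100)	/* gpfns 0 - 16M */
		printk(KERN_INFO "gpfn %#lx -> mfn %#lx\n",
		       pfn, pfn_to_mfn(pfn));
}

Judging from the "Dom0 alloc." line above, I'd expect those MFNs to be
well above 16M, hence the ISA DMA question.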

thanks a lot Jan,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel