
Re: [Xen-devel] [Patch] Make memory hole for PCI Express bigger and prevent roll-over


  • To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
  • From: "David Stone" <unclestoner@xxxxxxxxx>
  • Date: Mon, 21 Jan 2008 14:28:22 -0500
  • Cc: Xen Developers <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 21 Jan 2008 11:28:50 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> > My only hesitation is that 0xF0000000-0xF4FFFFFF = 80MB is smallish,
> > especially considering the wasteful algorithm the Xen HVM BIOS
> > currently uses to assign addresses (it can waste a lot of space).
>
> How is it wasteful? We could only do better if we assigned PCI resources in
> descending order of size (and hence alignment requirement). Which we *could*
> do, I suppose. Certainly the resource assignment code is going to get rather
> more exciting anyway, to fully support the dynamic PCI hole.

Pretty much that.  The way it is now, if the order in which BARs are
enumerated is such that the first BAR wants a small amount (say 1KB,
and gets 0xF0000000-0xF00003FF) and the second BAR wants a large
amount (say 64MB, and so must start at its alignment boundary,
getting 0xF4000000-0xF7FFFFFF), then there is a big waste
(0xF0000400-0xF3FFFFFF == ~64MB).  In the various configurations I
was running, almost as much PCI device address space got wasted as
was assigned.
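To make the numbers concrete, here is a minimal standalone sketch
(not the actual hvmloader code; the two BAR sizes and the 0xF0000000
base are just the example above) comparing enumeration-order
assignment with the largest-first assignment Keir mentions.  BAR
bases must be aligned to the BAR size, which is where the gap comes
from:

#include <stdint.h>
#include <stdio.h>

#define ALIGN_UP(a, s) (((a) + (s) - 1) & ~((uint64_t)(s) - 1))

/* Place each BAR at the next size-aligned address; return the bytes
 * lost to alignment gaps along the way. */
static uint64_t assign(const uint64_t *sizes, int n, uint64_t base)
{
    uint64_t wasted = 0;
    for (int i = 0; i < n; i++) {
        uint64_t aligned = ALIGN_UP(base, sizes[i]);
        wasted += aligned - base;   /* hole skipped to satisfy alignment */
        printf("  BAR of 0x%08llx bytes at 0x%08llx\n",
               (unsigned long long)sizes[i], (unsigned long long)aligned);
        base = aligned + sizes[i];
    }
    return wasted;
}

int main(void)
{
    uint64_t enum_order[] = { 0x400, 0x4000000 };  /* 1KB BAR first, 64MB second */
    uint64_t desc_order[] = { 0x4000000, 0x400 };  /* same BARs, largest first */
    uint64_t w;

    printf("enumeration order:\n");
    w = assign(enum_order, 2, 0xF0000000ULL);
    printf("  wasted 0x%llx bytes (~%llu MB)\n",
           (unsigned long long)w, (unsigned long long)(w >> 20));

    printf("largest first:\n");
    w = assign(desc_order, 2, 0xF0000000ULL);
    printf("  wasted 0x%llx bytes\n", (unsigned long long)w);
    return 0;
}

The first ordering loses ~64MB to the alignment hole; sorting by
descending size first makes every subsequent base naturally aligned,
so nothing is lost.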

Thanks,
Dave

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

