
Re: [Xen-devel] Difference between alloc_domheap_pages vs. alloc_xenheap_pages?


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>, Xinxin Jin <xinxinjin89@xxxxxxxxx>
  • From: Keir Fraser <keir.xen@xxxxxxxxx>
  • Date: Thu, 27 Jun 2013 09:19:58 +0100
  • Cc: Xen-devel@xxxxxxxxxxxxx
  • Delivery-date: Thu, 27 Jun 2013 08:20:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac5zDxx9C6FXvUtkVkiE7/yDRwJP5w==
  • Thread-topic: [Xen-devel] Difference between alloc_domheap_pages vs. alloc_xenheap_pages?

On 27/06/2013 09:11, "Ian Campbell" <Ian.Campbell@xxxxxxxxxx> wrote:

> On Wed, 2013-06-26 at 15:41 -0700, Xinxin Jin wrote:
> 
> 
>> I noticed the only difference between the two is that alloc_xenheap_pages
>> sets the PGC_xen_heap flag on the allocated pages. So does it matter
>> which of the two functions is used when allocating a heap page?
> 
> xenheap pages are always mapped, domheap pages are only mapped on demand
> with (un)map_domain_page. The big clue is that alloc_xenheap_pages
> returns a void* while alloc_domheap_pages returns a struct page_info*.
> 
> On architectures with a large virtual address space (i.e. 64-bit ones)
> the two heaps can be combined and the distinction becomes somewhat moot.
> It remains, though, for the benefit of common code, and because even a
> 64-bit architecture with a large amount of RAM can still end up with
> memory that is not permanently mapped -- e.g. on x86_64 with >5TB of RAM
> you end up with a split again.

Also, xenheap-allocated pages must be explicitly freed by Xen, usually
during domain destruction.

> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

