[Xen-ia64-devel] [PATCH] increase hv memory reservation
Not sure if this is actually relevant for hg tip anymore, but with Red Hat's
rebase to xen 3.1.2, we found that if we try to allocate "all" memory to dom0,
we're no longer leaving enough for the hypervisor. Our quick fix was to simply
increase the reservation guestimate by 128M.
----8<----
Recent additions to the xen codebase have inflated the memory requirements
of the hypervisor a bit. To compensate, we need to increase the amount of
memory we try to reserve for the hypervisor when allocating "all" system
memory to dom0.
Fixes kernel panics on a 2GB rx2600 in our lab, as well as on misc. test
systems with between 16GB and 128GB of memory when they're configured to
allocate "all" memory to dom0.
Signed-off-by: Jarod Wilson <jwilson@xxxxxxxxxx>
---
diff -r 8c921adf4833 xen/arch/ia64/xen/domain.c
--- a/xen/arch/ia64/xen/domain.c Fri Mar 14 15:07:45 2008 -0600
+++ b/xen/arch/ia64/xen/domain.c Tue Mar 18 17:04:02 2008 -0400
@@ -1943,10 +1943,10 @@ static void __init calc_dom0_size(void)
* for DMA and PCI mapping from the available domheap pages. The
* chunk for DMA, PCI, etc., is a guestimate, as xen doesn't seem
* to have a good idea of what those requirements might be ahead
- * of time, calculated at 1MB per 4GB of system memory */
+ * of time, calculated at 128MB + 1MB per 4GB of system memory */
domheap_pages = avail_domheap_pages();
p2m_pages = domheap_pages / PTRS_PER_PTE;
- spare_hv_pages = domheap_pages / 4096;
+ spare_hv_pages = 8192 + (domheap_pages / 4096);
max_dom0_size = (domheap_pages - (p2m_pages + spare_hv_pages))
* PAGE_SIZE;
printk("Maximum permitted dom0 size: %luMB\n",
--
Jarod Wilson
jwilson@xxxxxxxxxx
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel