Re: [Xen-devel] [PATCH 5/7] xen: support RAM at addresses 0 and 4096
On Thu, 2013-09-12 at 14:25 +0100, Jan Beulich wrote:
> >>> On 12.09.13 at 14:42, Ian Campbell <ian.campbell@xxxxxxxxxx> wrote:
> > Currently the mapping from pages to zones causes the page at zero to go into
> > zone -1 and the page at 4096 to go into zone 0, which is the Xen zone
> > (confusing various assertions).
>
> So that's a problem on ARM only, right? Because x86 avoids passing
> the first Mb to the allocator. I wonder whether ARM shouldn't at
> least avoid making the page at 0 available for allocation too, which
> would address half of the problem. Avoiding MFN 1 would be less
> natural, I agree.
[...]
> Overall I'm really uncertain whether it wouldn't be better for ARM to
> play by the x86 rules in this respect, or alternatively to further
> generalize what you're trying to do here by allowing x86 to specify a
> bias for the shift to skip all zones currently covering the low Mb,
> which on ARM would end up being 1.
Actually, assuming I haven't made a mess of my arithmetic, I don't
think this is needed, at least not for correctness.
page_to_zone() is still wrong for page 0, but that was true with the
previous version too, hence the checks to avoid adding page 0 to any
heap.
The difference is that it now ends up in zone 0 (Xen, bad) instead of
zone -1 (even worse!). Even that could be solved with this extra hunk
(which would also mean we could drop all the init_*_pages checks from
the patch below):
@@ -268,7 +267,7 @@ unsigned long __init alloc_boot_pages(
 #define bits_to_zone(b) (((b) < (PAGE_SHIFT + 1)) ? 1 : ((b) - PAGE_SHIFT))
 #define page_to_zone(pg) (is_xen_heap_page(pg) ? MEMZONE_XEN :  \
-                          (fls(page_to_mfn(pg))))
+                          (fls(page_to_mfn(pg)) ? : 1))
 
 typedef struct page_list_head heap_by_zone_and_order_t[NR_ZONES][MAX_ORDER+1];
 static heap_by_zone_and_order_t *_heap[MAX_NUMNODES];
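To convince myself of the arithmetic, here is a throwaway user-space
sketch (not Xen code: fls() is approximated with __builtin_clzl
assuming a 64-bit long, and mfn_to_zone is just an illustrative name):

/* Throwaway user-space sketch, not Xen code: a stand-in fls()
 * (1-based "find last set", fls(0) == 0) built on __builtin_clzl,
 * assuming a 64-bit long; MEMZONE_XEN is 0 as in the patch. */
#include <stdio.h>

#define MEMZONE_XEN 0

static int fls(unsigned long x)
{
    return x ? 64 - __builtin_clzl(x) : 0;
}

/* The proposed mapping: zone = fls(mfn), with MFN 0 folded into
 * zone 1 via the GNU ?: extension so it cannot alias MEMZONE_XEN. */
static int mfn_to_zone(unsigned long mfn)
{
    return fls(mfn) ? : 1;
}

int main(void)
{
    static const unsigned long mfns[] = { 0, 1, 2, 3, 4, 255, 256 };
    unsigned int i;

    for ( i = 0; i < sizeof(mfns) / sizeof(mfns[0]); i++ )
        printf("MFN %5lu -> zone %d\n", mfns[i], mfn_to_zone(mfns[i]));
    return 0;
}

Built with gcc (the ?: with an omitted middle operand is a GNU
extension) this puts MFNs 0 and 1 in zone 1, MFNs 2-3 in zone 2 and so
on, leaving zone 0 exclusively to MEMZONE_XEN.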
What do you think?
8<--------------------------------------------------------
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 41251b2..0e3055c 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -159,6 +160,8 @@ void __init init_boot_pages(paddr_t ps, paddr_t pe)
     ps = round_pgup(ps);
     pe = round_pgdown(pe);
+    if ( ps < PAGE_SIZE )
+        ps = PAGE_SIZE; /* Always leave page 0 free */
     if ( pe <= ps )
         return;
@@ -257,11 +263,11 @@ unsigned long __init alloc_boot_pages(
  */
 #define MEMZONE_XEN 0
-#define NR_ZONES    (PADDR_BITS - PAGE_SHIFT)
+#define NR_ZONES    (PADDR_BITS - PAGE_SHIFT + 1)
 
-#define bits_to_zone(b) (((b) < (PAGE_SHIFT + 1)) ? 0 : ((b) - PAGE_SHIFT - 1))
+#define bits_to_zone(b) (((b) < (PAGE_SHIFT + 1)) ? 1 : ((b) - PAGE_SHIFT))
 #define page_to_zone(pg) (is_xen_heap_page(pg) ? MEMZONE_XEN :  \
-                          (fls(page_to_mfn(pg)) - 1))
+                          (fls(page_to_mfn(pg))))
 
 typedef struct page_list_head heap_by_zone_and_order_t[NR_ZONES][MAX_ORDER+1];
 static heap_by_zone_and_order_t *_heap[MAX_NUMNODES];
@@ -1311,6 +1332,8 @@ void init_xenheap_pages(paddr_t ps, paddr_t pe)
 {
     ps = round_pgup(ps);
     pe = round_pgdown(pe);
+    if ( ps < PAGE_SIZE )
+        ps = PAGE_SIZE; /* Always leave page 0 free */
     if ( pe <= ps )
         return;
@@ -1429,6 +1451,8 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
     smfn = round_pgup(ps) >> PAGE_SHIFT;
     emfn = round_pgdown(pe) >> PAGE_SHIFT;
+    if ( smfn == 0 )
+        smfn = 1; /* Always leave page 0 free; smfn is a frame number */
     init_heap_pages(mfn_to_page(smfn), emfn - smfn);
 }
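
For completeness, a quick standalone sanity check of the renumbering
itself (again not Xen code; PADDR_BITS == 40 and PAGE_SHIFT == 12 are
purely illustrative assumptions, and fls() is the same stand-in as
above):

/* Standalone sketch, not Xen code: checks that the shifted zone
 * numbering stays within the enlarged NR_ZONES.  PADDR_BITS and
 * PAGE_SHIFT values are illustrative assumptions only. */
#include <assert.h>
#include <stdio.h>

#define PADDR_BITS  40
#define PAGE_SHIFT  12
#define MEMZONE_XEN 0
#define NR_ZONES    (PADDR_BITS - PAGE_SHIFT + 1)

static int fls(unsigned long x)
{
    return x ? 64 - __builtin_clzl(x) : 0;
}

int main(void)
{
    /* Highest MFN representable in PADDR_BITS of physical address. */
    unsigned long max_mfn = (1UL << (PADDR_BITS - PAGE_SHIFT)) - 1;
    int max_zone = fls(max_mfn); /* == PADDR_BITS - PAGE_SHIFT == 28 */

    assert(max_zone == NR_ZONES - 1);        /* top zone fits the array */
    assert((fls(0UL) ? : 1) != MEMZONE_XEN); /* with the ?: hunk, page 0
                                              * can't alias the Xen zone */
    printf("zones 0..%d, NR_ZONES == %d\n", max_zone, NR_ZONES);
    return 0;
}

i.e. with the "+ 1" the top zone index still fits in the array, and
(given the extra hunk) nothing but the Xen heap can land in zone 0.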
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel