>>> Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> 01.09.06 17:58 >>>
>On 1/9/06 4:49 pm, "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx> wrote:
>
>>>> Correct. I suggest either a clean BUG_ON() in virt_to_xen_l2e(), or
>>>> allocate-and-map the new l2 in that same function (though that raises the
>>>> question of how to test the new code path).
>>>
>>> Why in virt_to_xen_l2e()? We likely wouldn't make it there on a system
>>> this big, due to earlier memory corruption.
>>
>> Why? The memory is only discovered and mapped after the e820 map is parsed.
>> The mapping occurs via map_pages_to_xen(). That function discovers the l2e
>> by using virt_to_xen_l2e(). So I think it ought to work.
>>
>> You can test it out by doing a map_pages_to_xen() call on an area of virtual
>> address space that currently has no l2e. Should crash now, and work with the
>> modified virt_to_xen_l2e().
>
>Actually virt_to_xen_l2e() already allocates pagetables on demand, it turns
>out. So I think there is no issue here that needs fixing.
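
For reference, virt_to_xen_l2e() indeed allocates missing intermediate page
tables on demand. A rough sketch of its logic (from memory - the exact
allocator and helper names in the real function may differ):

l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
{
    l4_pgentry_t *pl4e = &idle_pg_table[l4_table_offset(v)];
    l3_pgentry_t *pl3e;

    if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
    {
        /* No L3 table behind this L4 slot yet - allocate and hook one in. */
        pl3e = alloc_xenheap_page();
        clear_page(pl3e);
        *pl4e = l4e_from_page(virt_to_page(pl3e), __PAGE_HYPERVISOR);
    }

    pl3e = l4e_to_l3e(*pl4e) + l3_table_offset(v);
    if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
    {
        /* Likewise for the L2 table underneath it. */
        l2_pgentry_t *pl2e = alloc_xenheap_page();
        clear_page(pl2e);
        *pl3e = l3e_from_page(virt_to_page(pl2e), __PAGE_HYPERVISOR);
    }

    return l3e_to_l2e(*pl3e) + l2_table_offset(v);
}

So Keir's suggested test boils down to a map_pages_to_xen() call on a virtual
address whose L3/L2 tables don't exist yet - the missing levels should then
be created on the fly instead of crashing.
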
Here's the change I was hinting at - it replaces the BUG_ON() with proper
code: l2_ro_mpt now starts out NULL, and whenever the pointer is found to be
page-aligned (true initially, and again each time the 512 slots of the
previous L2 page have been consumed) a fresh L2 page gets allocated and wired
into the read-only M2P region's L3 table.
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
Index: 2006-09-11/xen/arch/x86/x86_64/mm.c
===================================================================
--- 2006-09-11.orig/xen/arch/x86/x86_64/mm.c	2006-09-12 10:39:39.000000000 +0200
+++ 2006-09-11/xen/arch/x86/x86_64/mm.c	2006-09-12 10:41:24.000000000 +0200
@@ -78,7 +78,7 @@ void __init paging_init(void)
 {
     unsigned long i, mpt_size;
     l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt;
+    l2_pgentry_t *l2_ro_mpt = NULL;
     struct page_info *pg;
 
     /* Create user-accessible L2 directory to map the MPT for guests. */
@@ -87,12 +87,6 @@ void __init paging_init(void)
     idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)] =
         l4e_from_page(
             virt_to_page(l3_ro_mpt), __PAGE_HYPERVISOR | _PAGE_USER);
-    l2_ro_mpt = alloc_xenheap_page();
-    clear_page(l2_ro_mpt);
-    l3_ro_mpt[l3_table_offset(RO_MPT_VIRT_START)] =
-        l3e_from_page(
-            virt_to_page(l2_ro_mpt), __PAGE_HYPERVISOR | _PAGE_USER);
-    l2_ro_mpt += l2_table_offset(RO_MPT_VIRT_START);
 
     /*
      * Allocate and map the machine-to-phys table.
@@ -110,10 +104,20 @@ void __init paging_init(void)
             PAGE_HYPERVISOR);
         memset((void *)(RDWR_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT)), 0x55,
                1UL << L2_PAGETABLE_SHIFT);
+        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
+        {
+            unsigned long va = RO_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT);
+
+            l2_ro_mpt = alloc_xenheap_page();
+            clear_page(l2_ro_mpt);
+            l3_ro_mpt[l3_table_offset(va)] =
+                l3e_from_page(
+                    virt_to_page(l2_ro_mpt), __PAGE_HYPERVISOR | _PAGE_USER);
+            l2_ro_mpt += l2_table_offset(va);
+        }
         /* NB. Cannot be GLOBAL as shadow_mode_translate reuses this area. */
         *l2_ro_mpt++ = l2e_from_page(
             pg, /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT);
-        BUG_ON(((unsigned long)l2_ro_mpt & ~PAGE_MASK) == 0);
     }
 
     /* Set up linear page table mapping. */
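
To be explicit about the new check: alloc_xenheap_page() returns a
page-aligned pointer and *l2_ro_mpt++ advances it by one 8-byte entry at a
time, so !((unsigned long)l2_ro_mpt & ~PAGE_MASK) becomes true exactly when
the 512 slots of the previous L2 page have been consumed - and also on the
first loop iteration, since l2_ro_mpt now starts out NULL. A standalone
illustration of the alignment test (made-up constants, not Xen code):

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  (~((1UL << PAGE_SHIFT) - 1))

int main(void)
{
    unsigned long p = 0;          /* stands in for l2_ro_mpt == NULL */
    int allocs = 0, i;

    for ( i = 0; i < 1024; i++ )  /* write 1024 eight-byte entries */
    {
        if ( !(p & ~PAGE_MASK) )  /* page-aligned: previous page is full */
            p = 0x100000UL + allocs++ * (1UL << PAGE_SHIFT); /* fake alloc */
        p += 8;                   /* mimics *l2_ro_mpt++ */
    }
    printf("%d pages for %d entries\n", allocs, i); /* 2 pages for 1024 */
    return 0;
}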