[PATCH v2 2/2] x86/mm: drop unmapping from marking-as-I/O in arch_init_memory()
The unmapping part would have wanted to cover UNUSABLE regions as well,
and it would now have been necessary for space outside the low 16Mb
(wherever Xen is placed). However, with everything up to the next 2Mb
boundary now properly backed by RAM, we don't need to unmap anything
anymore: Space up to __2M_rwdata_end[] is properly reserved, whereas
space past that mark (up to the next 2Mb boundary) is ordinary RAM.

While there, limit the scopes of involved variables.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v2: Drop unmapping code altogether.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -275,8 +275,6 @@ static void __init assign_io_page(struct
 
 void __init arch_init_memory(void)
 {
-    unsigned long i, pfn, rstart_pfn, rend_pfn, iostart_pfn, ioend_pfn;
-
     /*
      * Basic guest-accessible flags:
      *   PRESENT, R/W, USER, A/D, AVAIL[0,1,2], AVAIL_HIGH, NX (if available).
@@ -292,12 +290,17 @@ void __init arch_init_memory(void)
      * case the low 1MB.
      */
     BUG_ON(pvh_boot && trampoline_phys != 0x1000);
-    for ( i = 0; i < 0x100; i++ )
+    for ( unsigned int i = 0; i < MB(1) >> PAGE_SHIFT; i++ )
         assign_io_page(mfn_to_page(_mfn(i)));
 
-    /* Any areas not specified as RAM by the e820 map are considered I/O. */
-    for ( i = 0, pfn = 0; pfn < max_page; i++ )
+    /*
+     * Any areas not specified as RAM or UNUSABLE by the e820 map are
+     * considered I/O.
+     */
+    for ( unsigned long i = 0, pfn = 0; pfn < max_page; i++ )
     {
+        unsigned long rstart_pfn, rend_pfn;
+
         while ( (i < e820.nr_map) &&
                 (e820.map[i].type != E820_RAM) &&
                 (e820.map[i].type != E820_UNUSABLE) )
@@ -317,17 +320,6 @@ void __init arch_init_memory(void)
                                PFN_DOWN(e820.map[i].addr + e820.map[i].size));
         }
 
-        /*
-         * Make sure any Xen mappings of RAM holes above 1MB are blown away.
-         * In particular this ensures that RAM holes are respected even in
-         * the statically-initialised 1-16MB mapping area.
-         */
-        iostart_pfn = max_t(unsigned long, pfn, 1UL << (20 - PAGE_SHIFT));
-        ioend_pfn = min(rstart_pfn, 16UL << (20 - PAGE_SHIFT));
-        if ( iostart_pfn < ioend_pfn )
-            destroy_xen_mappings((unsigned long)mfn_to_virt(iostart_pfn),
-                                 (unsigned long)mfn_to_virt(ioend_pfn));
-
         /* Mark as I/O up to next RAM region. */
         for ( ; pfn < rstart_pfn; pfn++ )
         {
@@ -365,6 +357,7 @@ void __init arch_init_memory(void)
             const l3_pgentry_t *l3idle = map_l3t_from_l4e(
                             idle_pg_table[l4_table_offset(split_va)]);
             l3_pgentry_t *l3tab = map_domain_page(l3mfn);
+            unsigned int i;
 
             for ( i = 0; i < l3_table_offset(split_va); ++i )
                 l3tab[i] = l3idle[i];
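
[Editorial addendum, not part of the patch: for readers without the tree to
hand, here is a minimal, self-contained C sketch of the classification the
reworked loop performs, i.e. every page frame the e820 map does not describe
as RAM or UNUSABLE gets treated as I/O. The table contents, mark_io_page(),
and the region layout are made up for illustration; only the skip/clamp/mark
structure mirrors the hunk above.]

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
#define PFN_UP(x)   (((x) + (1UL << PAGE_SHIFT) - 1) >> PAGE_SHIFT)

enum { E820_RAM = 1, E820_RESERVED = 2, E820_UNUSABLE = 5 };

struct e820_entry { uint64_t addr, size; unsigned int type; };

/* Hypothetical three-entry map: low RAM, a reserved hole, more RAM. */
static const struct e820_entry my_e820[] = {
    { 0x000000, 0x100000, E820_RAM },
    { 0x100000, 0x080000, E820_RESERVED }, /* hole: becomes I/O */
    { 0x180000, 0x080000, E820_RAM },
};
#define NR_E820 (sizeof(my_e820) / sizeof(my_e820[0]))

/* Stand-in for assign_io_page(): just report the classification. */
static void mark_io_page(unsigned long pfn)
{
    printf("pfn %#lx -> I/O\n", pfn);
}

int main(void)
{
    unsigned long max_page = PFN_UP(0x200000);

    for ( unsigned long i = 0, pfn = 0; pfn < max_page; i++ )
    {
        unsigned long rstart_pfn, rend_pfn;

        /* Skip map entries which don't contribute usable memory. */
        while ( i < NR_E820 &&
                my_e820[i].type != E820_RAM &&
                my_e820[i].type != E820_UNUSABLE )
            i++;

        if ( i >= NR_E820 )
            /* No further usable region: everything up to max_page is I/O. */
            rstart_pfn = rend_pfn = max_page;
        else
        {
            rstart_pfn = PFN_UP(my_e820[i].addr);
            if ( rstart_pfn < pfn )        /* never walk backwards */
                rstart_pfn = pfn;
            rend_pfn = PFN_DOWN(my_e820[i].addr + my_e820[i].size);
            if ( rend_pfn < rstart_pfn )
                rend_pfn = rstart_pfn;
        }

        /* Mark as I/O up to the next usable region ... */
        for ( ; pfn < rstart_pfn; pfn++ )
            mark_io_page(pfn);

        /* ... then skip over the usable region itself. */
        pfn = rend_pfn;
    }

    return 0;
}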
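[A second standalone check, assuming the usual x86 PAGE_SHIFT of 12: the new
loop bound MB(1) >> PAGE_SHIFT equals the 0x100 the old code hard-coded, and
rounding a made-up __2M_rwdata_end stand-in up to the next 2Mb boundary shows
the reserved-space / ordinary-RAM split the description relies on.]

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT 12                     /* 4k pages, as on x86 */
#define MB(x) ((unsigned long)(x) << 20)

int main(void)
{
    /* The rewritten bound is the same count the old code spelled as 0x100. */
    assert((MB(1) >> PAGE_SHIFT) == 0x100);

    /* Made-up value for __2M_rwdata_end; the real one is build-specific. */
    unsigned long rwdata_end = 0x5c3000;

    /* Round up to the next 2Mb boundary the description refers to. */
    unsigned long boundary = (rwdata_end + MB(2) - 1) & ~(MB(2) - 1);

    printf("reserved up to %#lx, ordinary RAM up to the boundary at %#lx\n",
           rwdata_end, boundary);
    return 0;
}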