
[Xen-devel] Re: [PATCH 5/5] xen: release all pages within 1-1 p2m mappings



On 06/09/11 22:20, Konrad Rzeszutek Wilk wrote:
> On Fri, Aug 19, 2011 at 03:57:20PM +0100, David Vrabel wrote:
>> 
>> --- a/arch/x86/xen/setup.c
>> +++ b/arch/x86/xen/setup.c
>> @@ -123,73 +123,33 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
>>      return len;
>>  }
>>  
>> -static unsigned long __init xen_return_unused_memory(unsigned long max_pfn,
>> -                                                 const struct e820entry *map,
>> -                                                 int nr_map)
>> +static unsigned long __init xen_set_identity_and_release(const struct e820entry *list,
>> +                                                     ssize_t map_size,
>> +                                                     unsigned long nr_pages)
>>  {
>> -    phys_addr_t max_addr = PFN_PHYS(max_pfn);
>> -    phys_addr_t last_end = ISA_END_ADDRESS;
>> +    phys_addr_t avail_end = PFN_PHYS(nr_pages);
>> +    phys_addr_t last_end = 0;
>>      unsigned long released = 0;
>> -    int i;
>> -
>> -    /* Free any unused memory above the low 1Mbyte. */
>> -    for (i = 0; i < nr_map && last_end < max_addr; i++) {
>> -            phys_addr_t end = map[i].addr;
>> -            end = min(max_addr, end);
>> -
>> -            if (last_end < end)
>> -                    released += xen_release_chunk(last_end, end);
>> -            last_end = max(last_end, map[i].addr + map[i].size);
>> -    }
>> -
>> -    if (last_end < max_addr)
>> -            released += xen_release_chunk(last_end, max_addr);
>> -
>> -    printk(KERN_INFO "released %lu pages of unused memory\n", released);
>> -    return released;
>> -}
>> -
>> -static unsigned long __init xen_set_identity(const struct e820entry *list,
>> -                                         ssize_t map_size)
>> -{
>> -    phys_addr_t last = xen_initial_domain() ? 0 : ISA_END_ADDRESS;
>> -    phys_addr_t start_pci = last;
>>      const struct e820entry *entry;
>> -    unsigned long identity = 0;
>>      int i;
>>  
>>      for (i = 0, entry = list; i < map_size; i++, entry++) {
>> -            phys_addr_t start = entry->addr;
>> -            phys_addr_t end = start + entry->size;
>> -
>> -            if (start < last)
>> -                    start = last;
>> +            phys_addr_t begin = last_end;
> 
> The "begin" is a bit confusing. You are using the previous E820 entry's
> end - not the beginning of this E820 entry. Doing a s/begin/last_end/
> makes the code a bit easier to understand.

Really?  It seems pretty clear to me that they're the beginning and end
of the memory range we're considering releasing or mapping.

That loop went through a number of variations and what's there is what I
think is the most readable.

>> +            phys_addr_t end = entry->addr + entry->size;
>>  
>> -            if (end <= start)
>> -                    continue;
>> +            last_end = end;
> 
> Please include the comment:
> /* This entry end. */

Not really in favour of little comments like this.  I'll put a comment
above the loop.

/*
 * For each memory region consider whether to release and map the
 * region and the preceding gap (if any).
 */

> OK, but you have ripped out the nice printk's that existed before. So add them
> back in:
> 
> 
>       printk(KERN_INFO "released %lu pages of unused memory\n", released);
>       printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity);
> 
> as they are quite useful in the field.

What problem are these useful for diagnosing that the remaining messages
(and the e820 map) don't tell you already?

 xen_release_chunk: looking at area pfn 9e-100: 98 pages freed
 1-1 mapping on 9e->100
 1-1 mapping on bf699->bf6af
 1-1 mapping on bf6af->bf6ce
 1-1 mapping on bf6ce->c0000
 1-1 mapping on c0000->f0000
 1-1 mapping on f0000->100000

The total count just doesn't seem useful.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

