
Re: [Xen-devel] [PATCH 1/2] x86: separate out sanitize_e820_map return codes



On 10/14/2014 02:33 AM, David Vrabel wrote:
> On 14/10/14 03:30, Martin Kelly wrote:
>> Previously, sanitize_e820_map returned -1 in all cases in which it did
>> nothing. However, sanitize_e820_map can do nothing either because the
>> input map has size 1 (this is ok) or because the input map passed in is
>> invalid (likely an issue). It is nice for the caller to be able to
>> distinguish the two cases and treat them separately.
> 
> Wouldn't it be more sensible to return 0 (success) in the case of a
> single entry map?  IMO, a 1 entry map is by definition sanitized.
> 
> David
> 

I had that thought as I was writing the patch, but I was worried about breaking
callers. Luckily, it appears there are only 11 callers in the kernel, and all
except one either:
(1) Don't check the return value of sanitize_e820_map at all, or
(2) Check for a return of exactly 0 rather than < 0 (an illustrative sketch of
    this pattern is below).
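
Pattern (2) looks roughly like this. This is not any specific call site; the
function name, parameters, and message are placeholders I made up, and the
sanitize_e820_map() arguments just mirror how the caller quoted further down
invokes it (map, max entries, pointer to entry count):

	/* Illustrative only -- not a real kernel call site. */
	static void example_pattern_two(struct e820entry *map, int max_entries,
					u32 *nr)
	{
		/* Pattern (2): only a return of exactly 0 counts as success. */
		if (sanitize_e820_map(map, max_entries, nr) == 0)
			printk(KERN_INFO "e820: using sanitized map, %u entries\n",
			       *nr);
		/* Today a single-entry map returns -1 and skips the branch above;
		 * if the size-1 case returned 0 instead, such callers would keep
		 * working unchanged. */
	}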

The one caller that does check for < 0 is finish_e820_parsing() in arch/x86/kernel/e820.c:
        if (userdef) {
                u32 nr = e820.nr_map;

                if (sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &nr) < 0)
                        early_panic("Invalid user supplied memory map");
                e820.nr_map = nr;

                printk(KERN_INFO "e820: user-defined physical RAM map:\n");
                e820_print_map("user");
        }

This seems like a bug: if the user-defined memory map has only a single entry, 
sanitize_e820_map returns -1 and the kernel panics even though the map is fine.

I will issue a new revision that changes the return values to 0 or -1, with the 
size-1 case counted as success (0). In addition, I will add a patch that either 
changes all the callers to actually check the return value or makes 
sanitize_e820_map itself panic in the error case. Which do you think is the 
cleaner approach?
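
For the return value change itself, here is a rough standalone sketch of the
contract I have in mind. This is not the real function: struct e820entry is cut
down to the two fields the checks need, the merge/trim logic is left out, and a
wrapped-range check simply stands in for "the input map is invalid":

	#include <stdio.h>

	/* Cut-down stand-in for the kernel's struct e820entry. */
	struct e820entry {
		unsigned long long addr;
		unsigned long long size;
	};

	/*
	 * Proposed contract: return 0 if the map is sane (including the
	 * single-entry case), -1 only if the input map is actually invalid.
	 */
	static int sanitize_e820_map(struct e820entry *map, int max_nr_map,
				     unsigned int *pnr_map)
	{
		unsigned int i, old_nr = *pnr_map;

		(void)max_nr_map;	/* kept only to mirror the real signature */

		if (old_nr < 2)
			return 0;	/* nothing to do, but not an error */

		for (i = 0; i < old_nr; i++)
			if (map[i].addr + map[i].size < map[i].addr)
				return -1;	/* wrapped range: genuinely invalid */

		/*
		 * ... the real code merges/trims overlapping entries here and
		 * writes the new count back through *pnr_map ...
		 */
		return 0;
	}

	int main(void)
	{
		struct e820entry one[] = { { 0x0, 0x9fc00 } };
		unsigned int nr = 1;

		/* Prints 0 with the proposed behaviour (currently -1). */
		printf("%d\n", sanitize_e820_map(one, 1, &nr));
		return 0;
	}

The actual v2 patch would of course only adjust the early return in
arch/x86/kernel/e820.c; the rest of the function stays as it is.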

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

