
[Xen-devel] Re: [PATCH] e820: fix clip_to_limit()




Keir Fraser wrote:
> On 10/11/2009 02:13, "Xiao Guangrong" <xiaoguangrong@xxxxxxxxxxxxxx> wrote:
> 
>> Hi Keir,
>>
>> Keir Fraser wrote:
>>> I think the 'break' is in the wrong place. Actually also I think the case of
>> Why can't we break out of the loop when we meet the "large" end address? What am I
>> missing?
> 
> Firstly, your 'break' was not inside that if-else block; it was right at the
> end of the for loop. Secondly, just because we found one RAM region entirely
> beyond the end of the clip boundary, does not mean there isn't another. We
> can't just bail -- we have to iterate all the way to the end of the e820
> map.
> 

I think sanitize_e820_map() sorts the e820 entries from low address
to high address, so if we meet one entry beyond the end of the clip
boundary, all subsequent entries are also beyond it.

Maybe I misunderstand sanitize_e820_map()? I'll reread it :-)
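
For reference, the early exit I had in mind would look something like the
sketch below. This is only an illustration, not the real Xen code: it assumes
the map really is kept sorted by start address, and the function, struct and
field names are made up so the fragment stays self-contained.

#include <stdint.h>

struct e820entry { uint64_t addr, size; uint32_t type; };

/*
 * Assuming entries are sorted by addr ascending, the scan can stop at the
 * first entry starting at or above the limit -- but only as long as nothing
 * in the loop body re-merges or re-orders the map.
 */
static void clip_sorted(struct e820entry *map, unsigned int nr, uint64_t limit)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
    {
        if ( map[i].addr >= limit )
            break;          /* every later entry starts above the limit too */
        if ( (map[i].addr + map[i].size) > limit )
            map[i].size = limit - map[i].addr;   /* straddles: truncate */
    }
}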

>> Your patch works well, but IMHO the double loop is inefficient.
> 
> Well, possibly. But really a typical e820 map will not have more than a
> small handful of offending RAM regions, hence there should be very few
> iterations of the outer loop. Also we already re-set the loop variable in
> the e820_change_range_type() case, so we effectively had the same double
> loop there already (and change_range_type will be by far the common case
> when we find an e820 region to clip).
> 

Yeah, you are right, I missed it before :-)

Thanks,
Xiao

>> I think we don't need to restart the loop here, because e820_change_range_type() does not
>> touch the preceding entries (it may merge with e820.map[i+1], but that does not hurt us).
> 
> It also does a full e820 merge operation at the end. I wouldn't really like
> to make assumptions about how much that modifies e820.
> 
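
Putting Keir's two points together, here is a minimal self-contained sketch of
the rescan-from-the-start pattern being discussed. It is not the actual
clip_to_limit() nor the posted patch: e820_change_range_type() and the final
merge pass are elided, offending entries are simply truncated or deleted in
place, and the struct and field names only loosely mirror the real ones.

#include <stdint.h>
#include <string.h>

#define E820_RAM 1
#define E820_MAX 128

struct e820entry { uint64_t addr, size; uint32_t type; };

struct e820map {
    unsigned int nr_map;
    struct e820entry map[E820_MAX];
};

static void clip_to_limit(struct e820map *e820, uint64_t limit)
{
    unsigned int i;

 again:
    for ( i = 0; i < e820->nr_map; i++ )
    {
        struct e820entry *e = &e820->map[i];

        if ( (e->type != E820_RAM) || ((e->addr + e->size) <= limit) )
            continue;                      /* nothing to clip in this entry */

        if ( e->addr >= limit )
        {
            /* Region lies entirely above the limit: delete the entry. */
            memmove(e, e + 1, (e820->nr_map - i - 1) * sizeof(*e));
            e820->nr_map--;
        }
        else
        {
            /* Region straddles the limit: truncate it. */
            e->size = limit - e->addr;
        }

        /*
         * The real code would retype/merge entries at this point, which can
         * change their order and count, so instead of 'break'ing we rescan
         * the whole map from the start: there may be further offending
         * regions later in the map.
         */
        goto again;
    }
}

The goto-based rescan is the "double loop" mentioned above; since a typical
e820 map only has a handful of offending RAM regions, the extra scans are
cheap in practice.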

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

