WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
Xen 
 
Home Products Support Community News
 
   
 

xen-devel

Re: [Xen-devel] Re: PoD issue

To: Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] Re: PoD issue
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Thu, 4 Feb 2010 11:12:18 -0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 04 Feb 2010 11:12:38 -0800
Dkim-signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:received:in-reply-to :references:date:x-google-sender-auth:message-id:subject:from:to:cc :content-type:content-transfer-encoding; bh=T5MjbUqoz4rgBccc6Axqla2AedKEswgDoJYAkxYjxvU=; b=fLPxJTxPxtizn5A1h8c04RRDz4ndEuOhc5DWv6GM27WxMn1W2pAZX4tzLn9uLHWgj8 SNkFHbywKUY1o0A59vV389jbOkdIPGn9h1bznphJB5mPxbHv7bWsb5u4gsplwWaWXGs5 oWx8IIJUO7cHJm77KWjEgx6J6QqIl8hMoQfAc=
Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=xiAHxK3JvPUvh+Uy99uMhXO9VO4huvj1yJwFvga9mYSt4gXItyDCyTCEFORT0A2cc7 C/53Tt/IYHQU4npnB97GO7BUOGq1LUUJ8xpFF9oXzn4QX03KNCxoCY1XAweTqCQ+oX/v mXSUVpfaAFoo+pjfh7nihWhSpt5mTc5ypkV0g=
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4B6A90B1020000780002DA3B@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4B65C25E02000078000584AB@xxxxxxxxxxxxxxxxxx> <4B69C381.10005@xxxxxxxxxxxxx> <4B6A90B1020000780002DA3B@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Yeah, the OSS tree doesn't get the kind of regression testing it
really needs at the moment.  I was using the OSS balloon drivers when
I implemented and submitted the PoD code last year.  I didn't have any
trouble then, and I was definitely using up all of the memory.  But I
haven't done any testing on OSS since then, basically.

 -George

On Thu, Feb 4, 2010 at 12:17 AM, Jan Beulich <JBeulich@xxxxxxxxxx> wrote:
> It was in the balloon driver's interaction with xenstore - see 2.6.18 c/s
> 989.
>
> I have to admit that I cannot see how this issue could slip attention
> when the PoD code was introduced - any guest with PoD in use and
> an unfixed balloon driver is set to crash sooner or later (implying the
> unfortunate effect of requiring an update of the pv drivers in HVM
> guests when upgrading Xen from a PoD-incapable to a PoD-capable
> version).
>
> Jan
>
>>>> George Dunlap <george.dunlap@xxxxxxxxxxxxx> 03.02.10 19:42 >>>
> So did you track down where the math error is?  Do we have a plan to fix
> this going forward?
>  -George
>
> Jan Beulich wrote:
>>>>> George Dunlap  01/29/10 7:30 PM >>>
>>>>>
>>> PoD is not critical to balloon out guest memory.  You can boot with mem
>>> == maxmem and then balloon down afterwards just as you could before,
>>> without involving PoD.  (Or at least, you should be able to; if you
>>> can't then it's a bug.)  It's just that with PoD you can do something
>>> you've always wanted to do but never knew it: boot with 1GiB with the
>>> option of expanding up to 2GiB later. :-)
>>>
>>
>> Oh, no, that's not what I meant. What I really wanted to say is that
>> with PoD, a properly functioning balloon driver in the guest is crucial
>> for it to stay alive long enough.
>>
>>
>>> With the 54 megabyte difference: It's not like a GiB vs GB thing, is
>>> it?  (i.e., 2^30 vs 10^9?)  The difference between 1GiB (2^30) and 1 GB
>>> (10^9) is about 74 megs, or 18,000 pages.
>>>
>>
>> No, that's not the problem. As I understand it now, the problem is
>> that totalram_pages (which the balloon driver bases its calculations
>> on) reflects all memory available after all bootmem allocations were
>> done (i.e. includes neither the static kernel image nor any memory
>> allocated before or from the bootmem allocator).
>>
>>
>>> I guess that is a weakness of PoD in general: we can't control the guest
>>> balloon driver, but we rely on it to have the same model of how to
>>> translate "target" into # pages in the balloon as the PoD code.
>>>
>>
>> I think this isn't a weakness of PoD, but a design issue in the balloon
>> driver's xenstore interface: While a target value shown in or obtained
>> from the /proc and /sys interfaces naturally can be based on (and
>> reflect) any internal kernel state, the xenstore interface should only
>> use numbers in terms of full memory amount given to the guest.
>> Hence a target value read from the memory/target node should be
>> adjusted before put in relation to totalram_pages. And I think this
>> is a general misconception in the current implementation (i.e. it
>> should be corrected not only for the HVM case, but for the pv one
>> as well).
>>
>> The bad aspect of this is that it will require a fixed balloon driver
>> in any HVM guest that has maxmem>mem when the underlying Xen
>> gets updated to a version that supports PoD. I cannot, however,
>> see an OS and OS-version independent alternative (i.e. something
>> to be done in the PoD code or the tools).
>>
>> Jan
>>
>>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

<Prev in Thread] Current Thread [Next in Thread>