Yes, sorry, I was thinking about the upstream balloon driver
which fails to init if (!xen_pv_domain()).
The only other problem I can think of in the RH5 balloon
driver is that, I believe, there is no minimum-size check... i.e. if
you try to balloon to a very small size (which can happen
accidentally if you use the wrong units when writing to
/proc/xen/balloon), the guest kernel will crash.
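
Something like the following clamp in the driver's set-target path
would prevent that. A sketch only -- the function and variable names
approximate the 2.6.18-era driver and may differ in the RH5 sources,
and the 32MB floor is an arbitrary choice:

    /* Refuse to shrink the guest below a survivable floor. */
    #define MIN_TARGET_PAGES ((32UL << 20) >> PAGE_SHIFT)  /* 32MB, arbitrary */

    static void set_new_target(unsigned long target)
    {
        if (target < MIN_TARGET_PAGES)
            target = MIN_TARGET_PAGES;
        target_pages = target;              /* picked up by balloon_worker */
        schedule_work(&balloon_worker);
    }
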
> -----Original Message-----
> From: George Dunlap [mailto:George.Dunlap@xxxxxxxxxxxxx]
> Sent: Monday, November 29, 2010 3:12 AM
> To: Dan Magenheimer
> Cc: cloudroot; tinnycloud; xen devel
> Subject: [Xen-devel] Re: Xen balloon driver discuss
>
> FYI, the balloon driver in 2.6.18 was meant to be working at some point.
> The xen tree has some drivers which will compile for 2.6.18 externally
> and will run in HVM mode. More modern kernels need Stefano's pv-on-hvm
> patch series to be able to access xenstore (which is a prerequisite
> for a working balloon driver).
>
> -George
>
> On 28/11/10 02:36, Dan Magenheimer wrote:
> > Am I understanding correctly that you are running each linux-2.6.18
> > as HVM (not PV)? I didn't think that the linux-2.6.18 balloon driver
> > worked at all in an HVM guest.
> >
> > You also didn't say what version of Xen you are using. If you are
> > running xen-unstable, you should also provide the changeset number.
> >
> > In any case, any load of HVM guests should never crash Xen itself,
> > but if you are running HVM guests, I probably can't help much as I
> > almost never run HVM guests.
> >
> > *From:* cloudroot [mailto:cloudroot@xxxxxxxx]
> > *Sent:* Friday, November 26, 2010 11:55 PM
> > *To:* tinnycloud; Dan Magenheimer; xen devel
> > *Cc:* george.dunlap@xxxxxxxxxxxxx
> > *Subject:* re: Xen balloon driver discuss
> >
> > Hi Dan:
> >
> > I have set up a benchmark to test the balloon driver, but
> > unfortunately Xen crashed with a memory panic.
> >
> > Before I attach the detailed output from the serial port (which will
> > take time, as it requires another run), I am afraid I might have
> > missed something in the test environment.
> >
> > My dom0 kernel is 2.6.31, pvops.
> >
> > Currently there is no drivers/xen/balloon.c in this kernel source
> > tree, so I built xen-balloon.ko and xen-platform-pci.ko from
> > linux-2.6.18.x86_64 and installed them in domU, which is Red Hat 5.4.
> >
> > What I did is put a C program in each domU (24 HVM guests in total);
> > the program allocates memory and fills it with random strings
> > repeatedly.
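> >
> > A minimal sketch of such an allocate-and-fill loop (the chunk size
> > and delays here are arbitrary choices, not the exact program used):
> >
> >     /* Allocate memory and dirty it with pseudo-random bytes, forever. */
> >     #include <stdlib.h>
> >     #include <string.h>
> >     #include <unistd.h>
> >
> >     #define CHUNK (16 * 1024 * 1024)    /* 16MB per allocation */
> >
> >     int main(void)
> >     {
> >         for (;;) {
> >             char *p = malloc(CHUNK);
> >             if (!p) {                   /* back off when allocation fails */
> >                 sleep(5);
> >                 continue;
> >             }
> >             /* Touch every page so the allocation is really backed. */
> >             memset(p, rand() & 0xff, CHUNK);
> >             usleep(100 * 1000);
> >         }
> >     }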
> >
> > And in dom0, a Python monitor collects the meminfo from xenstore
> > and calculates the balloon target from Committed_AS.
> >
> > The panic happens when the program is running in just one domU.
> >
> > I am writing to ask whether my balloon driver is out of date, or
> > where I can get the latest source code.
> >
> > I've googled a lot, but am still confused about the various source
> > trees.
> >
> > Many thanks.
> >
> > *From:* tinnycloud [mailto:tinnycloud@xxxxxxxxxxx]
> > *Date:* 2010.11.23 22:58
> > *To:* 'Dan Magenheimer'; 'xen devel'
> > *CC:* 'george.dunlap@xxxxxxxxxxxxx'
> > *Subject:* re: Xen balloon driver discuss
> >
> > Hi Dan:
> >
> > Thanks for your presentation summarizing memory overcommit; it was
> > really vivid and a great help.
> >
> > Well, I guess these days the strategy in my mind falls under
> > solution Set C in the PDF.
> >
> > The tmem solution you worked out for memory overcommit is both
> > efficient and effective.
> >
> > I guess I will give it a try on Linux guests.
> >
> > The real situation I have is that most of the VMs running on the
> > host are Windows, so I had to come up with these policies to balance
> > the memory.
> >
> > Policies are all workload dependent, but the good news is that the
> > host workload is configurable and not very heavy.
> >
> > So I will try to figure out a favorable policy; the policies
> > referred to in the PDF are a good start for me.
> >
> > Today, instead of trying to implement "/proc/meminfo" with shared
> > pages, I hacked the balloon driver to have another workqueue
> > periodically write meminfo into xenstore through xenbus, which
> > solves the problem of xenstored's high CPU utilization.
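> >
> > The shape of that hack is roughly the following (a sketch against
> > the 2.6.18-era workqueue/xenbus APIs; read_committed_as_kb() is a
> > hypothetical helper standing in for whatever the driver publishes):
> >
> >     static void meminfo_work_fn(void *unused);
> >     static DECLARE_WORK(meminfo_work, meminfo_work_fn, NULL);
> >
> >     static void meminfo_work_fn(void *unused)
> >     {
> >         char buf[32];
> >
> >         snprintf(buf, sizeof(buf), "%lu", read_committed_as_kb());
> >         /* Relative paths resolve under /local/domain/<domid>. */
> >         xenbus_write(XBT_NIL, "memory", "meminfo", buf);
> >
> >         /* Re-arm ourselves: publish once per second. */
> >         schedule_delayed_work(&meminfo_work, HZ);
> >     }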
> >
> > Later I will try to google more on how Citrix does it.
> >
> > Thanks for your help. Or do you have any better ideas for Windows
> > guests?
> >
> > *From:* Dan Magenheimer [mailto:dan.magenheimer@xxxxxxxxxx]
> > *Date:* 2010.11.23 1:47
> > *To:* MaoXiaoyun; xen devel
> > *CC:* george.dunlap@xxxxxxxxxxxxx
> > *Subject:* RE: Xen balloon driver discuss
> >
> > Xenstore IS slow, and you could improve xenballoond performance by
> > only sending the single Committed_AS value from xenballoond in domU
> > to dom0 instead of all of /proc/meminfo. But you are making an
> > assumption that getting memory utilization information from domU to
> > dom0 FASTER (e.g. with a shared page) will provide better ballooning
> > results. I have not found this to be the case, which is what led to
> > my investigation into self-ballooning, which led to Transcendent
> > Memory. See the 2010 Xen Summit for more information.
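> >
> > Extracting just that one value in the guest is cheap; a userspace
> > sketch (illustrative, not taken from xenballoond):
> >
> >     /* Return the Committed_AS line of /proc/meminfo, in kB. */
> >     #include <stdio.h>
> >
> >     unsigned long read_committed_as_kb(void)
> >     {
> >         char line[128];
> >         unsigned long kb = 0;
> >         FILE *f = fopen("/proc/meminfo", "r");
> >
> >         if (!f)
> >             return 0;
> >         while (fgets(line, sizeof(line), f))
> >             if (sscanf(line, "Committed_AS: %lu kB", &kb) == 1)
> >                 break;
> >         fclose(f);
> >         return kb;
> >     }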
> >
> > In your last paragraph below ("Regarding balloon strategy"), the
> > problem is that it is not easy to define "enough memory" and
> > "shortage of memory" within any guest, and almost impossible to
> > define them and effectively load balance across many guests. See my
> > Linux Plumber's Conference presentation (with complete speaker
> > notes) here:
> >
> > http://oss.oracle.com/projects/tmem/dist/documentation/presentations/MemMgmtVirtEnv-LPC2010-Final.pdf
> >
> > http://oss.oracle.com/projects/tmem/dist/documentation/presentations/MemMgmtVirtEnv-LPC2010-SpkNotes.pdf
> >
> > *From:* MaoXiaoyun [mailto:tinnycloud@xxxxxxxxxxx]
> > *Sent:* Sunday, November 21, 2010 9:33 PM
> > *To:* xen devel
> > *Cc:* Dan Magenheimer; george.dunlap@xxxxxxxxxxxxx
> > *Subject:* RE: Xen balloon driver discuss
> >
> >
> > Currently /proc/meminfo is sent to domain 0 via xenstore, which in
> > my opinion is slow.
> > What I want to do is have a shared page between domU and dom0: domU
> > periodically updates the meminfo in that page, while on the other
> > side dom0 retrieves the updated data to calculate the target, which
> > the guest then uses for ballooning.
> >
> > The problem I've met is that I currently don't know how to implement
> > a shared page between dom0 and domU.
> > Should dom0 allocate an unbound event channel and wait for the guest
> > to connect, then transfer data through the grant table?
> > Or does someone have a more efficient way?
> > Many thanks.
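> >
> > A minimal guest-side sketch of the grant-table approach (untested;
> > the calls are as I remember them from the 2.6.18-era headers, so
> > treat the names as illustrative):
> >
> >     #include <linux/mm.h>
> >     #include <xen/grant_table.h>
> >     #include <xen/xenbus.h>
> >
> >     static void *shared_page;
> >     static grant_ref_t gref;
> >
> >     static int share_meminfo_page(void)
> >     {
> >         shared_page = (void *)__get_free_page(GFP_KERNEL);
> >         if (!shared_page)
> >             return -ENOMEM;
> >
> >         /* Grant dom0 (domid 0) read/write access to this frame. */
> >         gref = gnttab_grant_foreign_access(0, virt_to_mfn(shared_page), 0);
> >         if ((int)gref < 0)
> >             return (int)gref;
> >
> >         /* Advertise the grant ref so dom0 can find and map the page. */
> >         return xenbus_printf(XBT_NIL, "memory", "meminfo-gref", "%u", gref);
> >     }
> >
> > Dom0 can then map the page from userspace (e.g. via libxc's
> > xc_gnttab_map_grant_ref()) and poll it; an event channel is only
> > needed if you want notification instead of polling.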
> >
> >> From: tinnycloud@xxxxxxxxxxx
> >> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> >> CC: dan.magenheimer@xxxxxxxxxx; George.Dunlap@xxxxxxxxxxxxx
> >> Subject: Xen balloon driver discuss
> >> Date: Sun, 21 Nov 2010 14:26:01 +0800
> >>
> >> Hi:
> >> Greetings first.
> >>
> >> I am trying to run about 24 HVM guests (currently only Linux; later
> >> this will involve Windows) on one physical server with 24GB memory
> >> and 16 CPUs.
> >> Each VM is configured with 2GB memory, and I reserved 8GB of memory
> >> for dom0.
> >> For safety reasons, only domain U's memory is allowed to balloon.
> >>
> >> Inside domain U, I used the xenballoond provided by XenSource to
> >> periodically write /proc/meminfo into xenstore in dom0
> >> (/local/domain/<did>/memory/meminfo).
> >> In domain 0, I wrote a Python script to read the meminfo and, like
> >> the Xen-provided strategy, use Committed_AS to calculate the domain
> >> U balloon target.
> >> The time interval is 1 second.
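> >>
> >> In outline the dom0 side does something like this each interval
> >> (shown in C against libxenstore for concreteness -- mine is Python
> >> -- and it assumes the guest publishes just the Committed_AS value;
> >> the 10% slack is arbitrary):
> >>
> >>     #include <stdio.h>
> >>     #include <stdlib.h>
> >>     #include <xs.h>
> >>
> >>     /* Read the guest-published value and derive a balloon target (kB). */
> >>     static long balloon_target_kb(struct xs_handle *xsh, int domid)
> >>     {
> >>         char path[64], *val;
> >>         unsigned int len;
> >>         long committed_kb;
> >>
> >>         snprintf(path, sizeof(path),
> >>                  "/local/domain/%d/memory/meminfo", domid);
> >>         val = xs_read(xsh, XBT_NULL, path, &len);
> >>         if (!val)
> >>             return -1;
> >>         committed_kb = atol(val);
> >>         free(val);
> >>
> >>         /* Leave some slack above what the guest has committed. */
> >>         return committed_kb + committed_kb / 10;
> >>     }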
> >>
> >> Inside each VM, I set up an Apache server for testing. Well, I'd
> >> have to say the result is not so good.
> >> It appears there is too much reading/writing of xenstore: when I
> >> apply some stress (using ab) to the guest domains, the CPU usage of
> >> xenstored goes up to 100%. Thus the monitor running in dom0 also
> >> responds quite slowly.
> >> Also, in the ab test, Committed_AS grows very fast and reaches
> >> maxmem in a short time, but in fact the guest really needs only a
> >> small amount of memory, so I guess there is more to be taken into
> >> consideration for ballooning.
> >>
> >> For the xenstore issue, I first plan to write a C program inside
> >> domain U to replace xenballoond, to see whether the situation
> >> improves. If not, how about setting up an event channel directly
> >> between domU and dom0; would that be faster?
> >>
> >> Regarding balloon strategy, I would do it like this: when there is
> >> enough memory, just fulfill the guest balloon request, and when
> >> memory is short, distribute memory evenly among the guests that
> >> request inflation.
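> >>
> >> In pseudo-C the shortage case might look like this (a sketch; the
> >> bookkeeping struct is hypothetical):
> >>
> >>     struct guest {
> >>         long requested_pages;   /* pages the guest asked to inflate by */
> >>         long target_pages;      /* balloon target we will write back */
> >>     };
> >>
> >>     /* Split the free host pages evenly among requesting guests. */
> >>     static void distribute_evenly(struct guest *g, int n, long free_pages)
> >>     {
> >>         long share;
> >>         int i, requesters = 0;
> >>
> >>         for (i = 0; i < n; i++)
> >>             if (g[i].requested_pages > 0)
> >>                 requesters++;
> >>         if (requesters == 0)
> >>             return;
> >>
> >>         share = free_pages / requesters;
> >>         for (i = 0; i < n; i++)
> >>             if (g[i].requested_pages > 0)
> >>                 g[i].target_pages += (g[i].requested_pages < share)
> >>                                    ? g[i].requested_pages : share;
> >>     }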
> >>
> >> Does anyone have a better suggestion? Thanks in advance.
> >>
> >
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel