
Re: [Xen-devel] [PATCH v3 1/3] libxl: xl mem-max et consortes must update static-max in xenstore too [and 1 more messages]

On Thu, Apr 11, 2013 at 04:28:43PM +0200, Daniel Kiper wrote:
> On Thu, Apr 11, 2013 at 02:47:40PM +0100, Ian Campbell wrote:
> > On Thu, 2013-04-11 at 13:24 +0100, Daniel Kiper wrote:
> [...]
> > > Now we have two options:
> > >   - we could allow the user to change static-max for a given guest by
> > >     calling xl mem-max (my current solution); this way we change the
> > >     meaning of static-max a bit, from "maximum amount of memory allowed
> > >     for the guest (usually all guest OS structures were prepared for
> > >     this amount of memory but they do not need to be filled at boot
> > >     time)" to "maximum amount of memory allowed for the guest at a
> > >     given moment",
> > >   - we could leave static-max as is and use "xen maximum" as "maximum
> > >     amount of memory allowed for the guest at a given moment"; however,
> > >     in this case the comparison with static-max in
> > >     libxl_set_memory_target() should be changed to a comparison with
> > >     "xen maximum".
> >
> > How does this stuff work for physical memory hotplug? Understanding that
> > might help us decide what admins expect (and is also directly relevant
> > to HVM memory hotplug).
> Memory hotplug works in the same way in PV and HVM guests.
> > In the e820 of a physical system memory which is actually present at
> > boot is obviously represented as E820_MEMORY, but how are the holes in
> > the memory map where DIMMs could subsequently be physically inserted
> > represented? Are they just "reserved" or is there a special "unpopulated
> > memory" type?
> The region for hotplugged memory is not reserved in any way. Only
> memory available at boot has e820 entries. After memory is hotplugged,
> the hardware places it at the relevant address and informs the system
> about the new memory and its configuration, usually via ACPI. The
> system admin must then online the new memory via the sysfs interface.
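The sysfs onlining step mentioned above looks like this on Linux (the
block number 42 below is illustrative, not from the thread):

```shell
# List memory blocks and their current state (read-only; these paths
# exist only on kernels built with memory hotplug support).
grep -H . /sys/devices/system/memory/memory*/state 2>/dev/null | head

# Online a newly hotplugged block (as root; "memory42" is illustrative):
# echo online > /sys/devices/system/memory/memory42/state
```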
> The current memory hotplug implementation in the balloon driver does
> not use ACPI and establishes the placement for new memory itself
> (simply above max_pfn; the algorithm is not perfect and fails in some
> cases; I am going to fix it). Everything else works as in the
> physical case.
> > And on the PV kernel side how does this appear to the guest? If you boot
> > a "massively ballooned" guest (e.g. 1G out of max 1TB) does it
> > automatically switch to making the bulk of the difference a hotpluggable
> > region rather than a balloon region? What does "pre-ballooned" even mean
> > to a guest which supports memory hotplug?
> The guest simply stays "massively ballooned". There is no automatic
> switch from ballooning to memory hotplug.
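For reference, a guest is "pre-ballooned" when its configuration sets a
boot allocation below the static maximum; the 1G-out-of-1TB case above
would look roughly like this xl guest config fragment (values
illustrative):

```
# Boot with 1 GiB populated, static maximum of 1 TiB.
# The guest starts ballooned down by the difference.
memory = 1024      # MiB actually populated at boot
maxmem = 1048576   # MiB static maximum (1 TiB)
```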
> > Do the kernels support memory unplug?
> The Linux kernel supports memory unplug (hot remove) on bare metal.
> In the Xen guest case, memory is ballooned down after memory hotplug.

Ian, any additional thoughts or comments?
I would like to prepare the next version of the patch ASAP.
I hope it will be included in Xen 4.3.

