
Re: Increasing domain memory beyond initial maxmem



On Wed, Apr 06, 2022 at 07:13:18AM +0200, Juergen Gross wrote:
> On 05.04.22 18:24, Marek Marczykowski-Górecki wrote:
> > On Tue, Apr 05, 2022 at 01:03:57PM +0200, Juergen Gross wrote:
> > > Hi Marek,
> > > 
> > > On 31.03.22 14:36, Marek Marczykowski-Górecki wrote:
> > > > On Thu, Mar 31, 2022 at 02:22:03PM +0200, Juergen Gross wrote:
> > > > > Maybe some kernel config differences, or other udev rules (memory onlining
> > > > > is done via udev in my guest)?
> > > > > 
> > > > > I'm seeing:
> > > > > 
> > > > > # zgrep MEMORY_HOTPLUG /proc/config.gz
> > > > > CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
> > > > > CONFIG_MEMORY_HOTPLUG=y
> > > > > # CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
> > > > > CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y
> > > > > CONFIG_XEN_MEMORY_HOTPLUG_LIMIT=512
> > > > 
> > > > I have:
> > > > # zgrep MEMORY_HOTPLUG /proc/config.gz
> > > > CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
> > > > CONFIG_MEMORY_HOTPLUG=y
> > > > CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y
> > > > CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y
> > > > CONFIG_XEN_MEMORY_HOTPLUG_LIMIT=512
> > > > 
> > > > Not sure if relevant, but I also have:
> > > > CONFIG_XEN_UNPOPULATED_ALLOC=y
> > > > 
> > > > On top of that, I have a similar udev rule too:
> > > > 
> > > > SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
> > > > 
> > > > But I don't think they are conflicting.
> > > > 
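As an aside, it is easy to verify from inside the guest whether hotplugged
blocks actually came online, independent of which mechanism
(CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE or the udev rule) is supposed to do it.
A minimal sketch, assuming util-linux is installed so lsmem is available:

    # kernel's auto-onlining policy for newly added memory blocks
    # (offline / online / online_kernel / online_movable)
    cat /sys/devices/system/memory/auto_online_blocks

    # per-block state after the hotplug/balloon operation
    grep -H . /sys/devices/system/memory/memory*/state

    # summary of online vs. offline memory ranges
    lsmem

Any block still reporting "offline" can be onlined by hand with
"echo online > /sys/devices/system/memory/memoryN/state" (N being the block
number).
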
> > > > > What type of guest are you using? Mine was a PVH guest.
> > > > 
> > > > PVH here too.
> > > 
> > > Would you like to try the attached patch? It seemed to work for me.
> > 
> > Unfortunately it doesn't help; the behavior is now different:
> > 
> > The guest initially started with 800M:
> > 
> >      [root@personal ~]# free -m
> >                    total        used        free      shared  buff/cache   available
> >      Mem:            740         223         272           2         243         401
> >      Swap:          1023           0        1023
> > 
> > Then increased:
> > 
> >      [root@dom0 ~]$ xl mem-max personal 2048
> >      [root@dom0 ~]$ xenstore-write /local/domain/$(xl domid personal)/memory/static-max $((2048*1024))
> >      [root@dom0 ~]$ xl mem-set personal 2000
> > 
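For reference, the three dom0 steps quoted above can be wrapped into a small
helper so the units stay consistent: plain numbers are MiB for the xl
commands, while memory/static-max in xenstore is in kiB, which is what the
$((2048*1024)) conversion above does. A rough, hypothetical sketch (the
script name and arguments are made up, and the units are inferred from the
quoted commands):

    #!/bin/sh
    # raise-maxmem.sh <domain> <new-max-MiB> [target-MiB]  (hypothetical helper)
    set -e
    dom="$1"
    max_mib="$2"
    target_mib="${3:-$max_mib}"

    # raise the hypervisor-side maximum (plain numbers are MiB for xl)
    xl mem-max "$dom" "$max_mib"

    # update the static-max seen by the guest's balloon driver (xenstore uses kiB)
    xenstore-write "/local/domain/$(xl domid "$dom")/memory/static-max" \
        $((max_mib * 1024))

    # finally balloon the guest up to the new target
    xl mem-set "$dom" "$target_mib"
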
> > And the guest now shows only a little more memory, not the full 2000M:
> > 
> >      [root@personal ~]# [   37.657046] xen:balloon: Populating new zone
> >      [   37.658206] Fallback order for Node 0: 0
> >      [   37.658219] Built 1 zonelists, mobility grouping on.  Total pages: 175889
> >      [   37.658233] Policy zone: Normal
> > 
> >      [root@personal ~]#
> >      [root@personal ~]# free -m
> >                    total        used        free      shared  buff/cache   available
> >      Mem:            826         245         337           2         244         462
> >      Swap:          1023           0        1023
> > 
> > 
> > I've applied the patch on top of 5.16.18. If you think 5.17 would make a
> > difference, I can try that too.
> 
> Hmm, weird.
> 
> Can you please post the output of
> 
> cat /proc/buddyinfo
> cat /proc/iomem
> 
> in the guest before and after the operations?

Ok, that was a stupid mistake on my side - I had run out of host memory.
With that fixed, it seems to work on 5.16.18 too.
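
In case anyone else trips over the same thing: the available host memory can
be checked in dom0 before ballooning the guest up, for example:

    # both values are reported in MiB
    xl info | grep -E 'total_memory|free_memory'

    # current per-domain allocations
    xl list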

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
