
Re: [PATCH] x86/pod: Do not fragment PoD memory allocations

  • To: Elliott Mitchell <ehem+xen@xxxxxxx>
  • From: George Dunlap <George.Dunlap@xxxxxxxxxx>
  • Date: Mon, 1 Feb 2021 10:35:15 +0000
  • Accept-language: en-US
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>, "open list:X86" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Delivery-date: Mon, 01 Feb 2021 10:35:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH] x86/pod: Do not fragment PoD memory allocations

> On Jan 31, 2021, at 6:13 PM, Elliott Mitchell <ehem+xen@xxxxxxx> wrote:
> On Thu, Jan 28, 2021 at 10:42:27PM +0000, George Dunlap wrote:
>>> On Jan 28, 2021, at 6:26 PM, Elliott Mitchell <ehem+xen@xxxxxxx> wrote:
>>> type = "hvm"
>>> memory = 1024
>>> maxmem = 1073741824
>>> I suspect maxmem > free Xen memory may be sufficient.  The instances I
>>> can be certain of have been maxmem = total host memory *7.
>> Can you include your Xen version and dom0 command-line?
>> This is on staging-4.14 from a month or two ago (i.e., what I happened to 
>> have on a random test  box), and `dom0_mem=1024M,max:1024M` in my 
>> command-line.  That rune will give dom0 only 1GiB of RAM, but also prevent 
>> it from auto-ballooning down to free up memory for the guest.
> As this is a server, not a development target, Debian's build of 4.11 is
> in use.  Your domain 0 memory allocation is extremely generous compared
> to mine.  One thing which is on the command-line though is
> "watchdog=true".

staging-4.14 is just the stable 4.14 branch which our CI loop tests before 
pushing to stable-4.14, which is essentially tagged 3 times a year for point 
releases.  It’s quite stable.  I’ll give 4.11 a try if I get a chance.

It’s not clear from your response — are you allocating a fixed amount to dom0?  
How much is it?  In fact, probably the simplest thing to do would be to attach 
the output of `xl info` and `xl dmesg`; that will save a lot of potential 
future back-and-forth.

1GiB isn’t particularly generous if you’re running a large number of guests.  
My understanding is that XenServer now defaults to 4GiB of RAM for dom0.

> I've got 3 candidates which presently concern me:
> 1> There is a limited range of maxmem values where this occurs.  Perhaps
> 1TB is too high on your machine for the problem to reproduce.  As
> previously stated my sample configuration has maxmem being roughly 7
> times actual machine memory.

In fact I did a number of binary-search-style experiments to try to find the 
boundary behavior.  I don’t think I did 7x host memory, but I certainly did 2x 
or 3x, and the exact number you gave that caused you problems.  In all cases 
for me, it either worked or failed with a cryptic error message (the specific 
message depending on whether I had a fixed dom0 memory allocation or let dom0 
autoballoon).
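(For the curious, that kind of boundary search can be sketched as below.  This is just an illustration; `boots` is a hypothetical predicate standing in for an actual `xl create` attempt at a given maxmem, not any real xl binding.)

```python
# Rough sketch of a binary search for the largest maxmem (in MiB) at
# which a guest still starts.  `boots(maxmem_mib)` is a hypothetical
# stand-in for attempting `xl create` with that maxmem.
def find_boundary(lo_mib, hi_mib, boots):
    """Largest maxmem in (lo_mib, hi_mib) for which boots() succeeds,
    assuming boots(lo_mib) is True and boots(hi_mib) is False."""
    while hi_mib - lo_mib > 1:
        mid = (lo_mib + hi_mib) // 2
        if boots(mid):
            lo_mib = mid   # still boots: boundary is at or above mid
        else:
            hi_mib = mid   # fails: boundary is below mid
    return lo_mib
```

For example, with a synthetic cutoff at 12288 MiB, `find_boundary(1024, 1 << 20, lambda m: m <= 12288)` converges to 12288 in about 20 probes instead of a linear scan.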

> 2> Between issuing the `xl create` command and the machine rebooting a
> few moments of slow response have been observed.  Perhaps the memory
> allocator loop is hogging processor cores long enough for the watchdog to
> trigger?

I don’t know the balloon driver very well, but I’d hope it yielded pretty 
regularly.  It seems more likely to me that your dom0 is swapping due to low 
memory / struggling with having to work with no file cache.  Or the OOM killer 
is doing its calculation trying to figure out which process to shoot?  

> 3> Perhaps one of the patches on Debian broke things?  This seems
> unlikely since nearly all of Debian's patches are either strictly for
> packaging or else picks from Xen's main branch, but this is certainly
> possible.

Indeed, I’d consider that unlikely.  Some things I’d consider more likely to 
cause the difference:

1. The amount of host memory (my test box had only 6GiB)

2. The amount of memory assigned to dom0 

3. The number of other VMs running in the background

4. A difference in the version of Linux (I’m also running Debian, but 
possibly a different kernel version)

5. A bug in 4.11 that was fixed by 4.14.

If you’re already allocating a fixed amount of memory to dom0, but it’s 
significantly less than 1GiB, the first thing I’d try is increasing that to 
1GiB.  Also make sure that you’re specifying a ‘max’ for dom0 memory: If you 
simply put `dom0_mem=X`, dom0 will start with X amount of memory, but allocate 
enough frame tables such that it could balloon up to the full host memory if 
requested.  (And frame tables are not free.)  `dom0_mem=X,max:X` will cause 
dom0 to only make frame tables for X memory.  (At least, so I’m guessing; I 
haven’t checked.)
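(To ballpark why frame tables aren’t free: if you assume, hypothetically, something like 32 bytes of per-page metadata per 4 KiB page — the real `struct page_info` size depends on the Xen build and architecture — the overhead scales linearly with the memory covered.)

```python
# Back-of-the-envelope frame-table overhead.
# PAGE_INFO_SIZE is an assumed per-page metadata size, not checked
# against any particular Xen build.
PAGE_SIZE = 4096
PAGE_INFO_SIZE = 32  # assumed

def frame_table_bytes(ram_bytes):
    """Metadata needed to cover ram_bytes of RAM, one entry per page."""
    return (ram_bytes // PAGE_SIZE) * PAGE_INFO_SIZE

GiB = 1 << 30
# Covering 6 GiB costs ~48 MiB of frame tables under these assumptions.
print(frame_table_bytes(6 * GiB) // (1 << 20))
```

So sizing frame tables for full host memory rather than for dom0’s actual allocation can quietly eat a noticeable slice of a small dom0.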

If that doesn’t work, please include the output of `xl info` and `xl dmesg`; 
that will give us a lot more information to work with.




Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.