
Re: [Xen-devel] [PATCH v3 0/1] xen/blkback: Squeeze page pools if a memory pressure



On 09.12.19 11:23, SeongJae Park wrote:
On Mon, 9 Dec 2019 10:39:02 +0100 Juergen <jgross@xxxxxxxx> wrote:

On 09.12.19 09:58, SeongJae Park wrote:
Each `blkif` has a free pages pool for grant mappings.  The size of
the pool starts from zero and is increased on demand while processing
I/O requests.  When the handling of the current I/O requests is
finished, or 100 milliseconds have passed since the last I/O requests
were handled, the pool is checked and shrunk so that it does not
exceed the size limit, `max_buffer_pages`.

Therefore, guests running `blkfront` can create memory pressure in the
guest running `blkback` by attaching a large number of block devices
and inducing I/O on them.
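
For readers following along, here is a minimal userspace sketch of the
grow-on-demand / shrink-to-limit life cycle described above.  All names
are illustrative and the data structure is simplified; the real
xen-blkback driver keeps actual pages on per-device state and shrinks
from its scheduler loop, so treat this only as a model of the behavior:

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative model of a per-device free pages pool (not driver code). */
    struct free_page {
        struct free_page *next;
    };

    struct page_pool {
        struct free_page *head;
        unsigned int nr_free;           /* pages currently cached in the pool */
        unsigned int max_buffer_pages;  /* shrink target, cf. `max_buffer_pages` */
    };

    /* Take a page for a grant mapping; allocate on demand if the pool is empty. */
    static struct free_page *pool_get_page(struct page_pool *pool)
    {
        struct free_page *pg = pool->head;

        if (pg) {
            pool->head = pg->next;
            pool->nr_free--;
            return pg;
        }
        return malloc(sizeof(*pg));  /* pool grows on demand */
    }

    /* Return a page to the pool after the grant mapping is torn down. */
    static void pool_put_page(struct page_pool *pool, struct free_page *pg)
    {
        pg->next = pool->head;
        pool->head = pg;
        pool->nr_free++;
    }

    /*
     * Called when the current requests are finished or ~100 ms have passed
     * since the last request: drop cached pages until the limit is respected.
     */
    static void pool_shrink(struct page_pool *pool)
    {
        while (pool->nr_free > pool->max_buffer_pages) {
            struct free_page *pg = pool->head;

            pool->head = pg->next;
            pool->nr_free--;
            free(pg);
        }
    }

    int main(void)
    {
        struct page_pool pool = { .head = NULL, .nr_free = 0,
                                  .max_buffer_pages = 4 };
        struct free_page *in_flight[16];
        int i;

        /* A burst of I/O takes 16 pages, growing the pool's footprint. */
        for (i = 0; i < 16; i++)
            in_flight[i] = pool_get_page(&pool);
        for (i = 0; i < 16; i++)
            pool_put_page(&pool, in_flight[i]);

        printf("cached after burst: %u\n", pool.nr_free);  /* 16 */
        pool_shrink(&pool);
        printf("cached after shrink: %u\n", pool.nr_free); /* 4  */
        return 0;
    }

Until the shrink step runs, each device can cache as many pages as its
busiest burst required, which is what lets many busy devices multiply
the backend's memory footprint.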

I'm having trouble understanding how a guest can attach a large number
of block devices without those having been configured by the host admin
beforehand.

If those devices have been configured, dom0 should be prepared for that
number of devices, e.g. by having enough spare memory for ballooned
pages.

As mentioned in the original message, quoted below, administrators _can_
avoid this problem, but finding the optimal configuration is hard,
especially when the number of guests is large.

        System administrators can avoid such problematic situations by limiting
        the maximum number of devices each guest can attach.  However, finding
        the optimal limit is not easy.  An improperly chosen limit can result
        in memory pressure or resource underutilization.
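
To illustrate why the limit is hard to pick, consider the worst case in
which every pool of every device of every guest fills up to the cap at
the same time.  A back-of-the-envelope sketch, assuming 4 KiB pages and
a per-device cap of 1024 pages (the guest and device counts below are
made-up deployment parameters, not values from this thread):

    #include <stdio.h>

    /* Hypothetical deployment parameters -- adjust to your setup. */
    #define NR_GUESTS        32      /* guests served by this blkback domain  */
    #define DEVS_PER_GUEST   16      /* block devices attached per guest      */
    #define MAX_BUFFER_PAGES 1024    /* assumed per-device pool cap           */
    #define PAGE_SIZE_KIB    4       /* assumed page size in KiB              */

    int main(void)
    {
        /* Worst case: every per-device pool grows to its cap at once. */
        unsigned long pages = (unsigned long)NR_GUESTS * DEVS_PER_GUEST
                              * MAX_BUFFER_PAGES;

        printf("worst-case pool memory: %lu pages = %lu MiB\n",
               pages, pages * PAGE_SIZE_KIB / 1024);
        return 0;
    }

With these illustrative numbers the pools alone can pin 2 GiB in the
backend domain, so a limit low enough to be safe for memory may also be
low enough to underutilize the devices.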

This sounds as if the admin would set a device limit.  But it is the
other way round: the admin needs to configure each possible device
with all its parameters (e.g. the backing dom0 resource) to enable the
frontend to use it.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

