
[Xen-devel] [PATCH 0/2] xen/blkback: Aggressively shrink page pools if a memory pressure is detected



Each `blkif` has a free pages pool for grant mappings.  The size of
the pool starts from zero and is increased on demand while processing
I/O requests.  When handling of the current I/O requests is finished,
or 100 milliseconds have passed since the last I/O requests were
handled, `blkback` checks the pool and shrinks it so that it does not
exceed the size limit, `max_buffer_pages`.
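
Below is a rough C sketch of that existing behaviour, only for
illustration; `maybe_shrink_pool()`, `shrink_free_pagepool()`, and the
other names are simplified stand-ins rather than the actual
xen-blkback symbols.

    /*
     * Rough sketch of the shrink policy described above; names are
     * simplified stand-ins, not the actual xen-blkback identifiers.
     */
    #include <linux/jiffies.h>
    #include <linux/types.h>

    static unsigned int max_buffer_pages = 1024;  /* normal pool size limit */
    static unsigned long last_shrink;             /* jiffies at the last shrink check */

    /* Frees unused pool pages until at most 'limit' pages remain. */
    static void shrink_free_pagepool(unsigned int limit);

    static void maybe_shrink_pool(bool requests_finished)
    {
            /*
             * Shrink when the current batch of requests is done, or at
             * most once every 100 milliseconds, down to max_buffer_pages.
             */
            if (requests_finished ||
                time_after(jiffies, last_shrink + msecs_to_jiffies(100))) {
                    shrink_free_pagepool(max_buffer_pages);
                    last_shrink = jiffies;
            }
    }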

Therefore, guests running `blkfront` can cause memory pressure in the
guest running `blkback` by attaching an arbitrarily large number of
block devices and inducing I/O on them.  This patchset avoids such
problematic situations by shrinking the pools aggressively (below the
normal limit) for a while when memory pressure is detected.


Discussions
===========

The shrinking mechanism returns only pages in the pool that are not
currently in use by blkback.  In other words, the pages to be shrunk
are not mapped with foreign pages.  Because this patchset changes only
the shrink limit and uses the shrinking mechanism as is, it does not
introduce security issues such as improper unmapping.

The first patch keeps the aggressive shrinking limit for one
millisecond from the time memory pressure was last detected.  The
duration should be neither too short nor too long.  If it is too long,
the overhead of shrinking the free pages pool can reduce the I/O
performance.  If it is too short, blkback will not free enough pages
to relieve the memory pressure.  I set the default value to
1 millisecond because I believe that 1 millisecond is a short duration
in terms of I/O while it is a long duration in terms of memory
operations.  Also, as the original shrinking mechanism runs every
100 milliseconds, this seems a reasonable choice.  In practice, the
default value worked well for my test (refer to the section below for
the details of the test).  Nevertheless, the proper duration would
depend on the given configuration and workload, so the second patch
allows users to set it via a module parameter interface.
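
The following is a minimal sketch of the idea, not the actual patch:
the duration becomes a module parameter, and the shrinking limit drops
to zero while the window after the last detected memory pressure is
open.  The identifiers (`aggressive_shrink_duration_ms`,
`last_pressure`, `pool_size_limit()`) are illustrative, and how
`last_pressure` is updated, that is, the memory pressure detection
itself, is left to the patches.

    #include <linux/jiffies.h>
    #include <linux/module.h>

    /* Illustrative names only; not the real xen-blkback identifiers. */
    static unsigned int aggressive_shrink_duration_ms = 1;  /* default: 1 ms */
    module_param(aggressive_shrink_duration_ms, uint, 0644);
    MODULE_PARM_DESC(aggressive_shrink_duration_ms,
                     "How long to keep the aggressive shrink limit after "
                     "memory pressure, in milliseconds");

    static unsigned int max_buffer_pages = 1024;  /* normal limit, as above */
    static unsigned long last_pressure;  /* jiffies when pressure was last seen */

    /* Returns the pool size limit the shrink check should currently honor. */
    static unsigned int pool_size_limit(void)
    {
            /*
             * While the aggressive window is open, keep no free pages in
             * the pool; otherwise fall back to the normal limit.
             */
            if (time_before(jiffies, last_pressure +
                            msecs_to_jiffies(aggressive_shrink_duration_ms)))
                    return 0;
            return max_buffer_pages;
    }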


Memory Pressure Test
====================

To show whether this patchset handles the above mentioned memory
pressure situation well, I configured a test environment.  On guest
instances running `blkfront` in a virtualized environment, I attached
an arbitrarily large number of network-backed volume devices and
induced I/O to them.  Meanwhile, I measured the number of pages
swapped in and out on the guest running `blkback`.  The test ran
twice, once with the `blkback` before this patchset and once with the
`blkback` after it.

Roughly speaking, this patchset has reduced those numbers 130x (pswpin)
and 34x (pswpout) as below:

                pswpin  pswpout
    before      76,672  185,799
    after          587    5,402


Performance Overhead Test
=========================

This patchset could incur I/O performance degradation under memory
pressure because the aggressive shrinking requires more page
allocations afterwards.  To show the overhead, I artificially created
an aggressive page pool shrinking situation and measured the I/O
performance of a guest running `blkfront`.

For the artificial shrinking, I set `blkback.max_buffer_pages` via the
`/sys/module/xen_blkback/parameters/max_buffer_pages` file.  I set the
value to `1024` and to `0`.  `1024` is the default value, while
setting it to `0` incurs the worst-case aggressive shrinking stress.

For the I/O performance measurement, I used a simple `dd` command.

Default Performance
-------------------

    [dom0]# echo 1024 >  /sys/module/xen_blkback/parameters/max_buffer_pages
    [instance]$ for i in {1..5}; do dd if=/dev/zero of=file bs=4k count=$((256*512)); sync; done
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 11.7257 s, 45.8 MB/s
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 13.8827 s, 38.7 MB/s
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 13.8781 s, 38.7 MB/s
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 13.8737 s, 38.7 MB/s
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 13.8702 s, 38.7 MB/s

Worst-case Performance
----------------------

    [dom0]# echo 0 >  /sys/module/xen_blkback/parameters/max_buffer_pages
    [instance]$ for i in {1..5}; do dd if=/dev/zero of=file bs=4k count=$((256*512)); sync; done
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 11.7257 s, 45.8 MB/s
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 13.878 s, 38.7 MB/s
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 13.8746 s, 38.7 MB/s
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 13.8786 s, 38.7 MB/s
    131072+0 records in
    131072+0 records out
    536870912 bytes (537 MB) copied, 13.8749 s, 38.7 MB/s

In short, even the worst-case aggressive pool shrinking causes no
visible performance degradation.  I think this is because the I/O
itself is slow.  In other words, the additional page allocation
overhead is hidden by the much longer I/O time.

SeongJae Park (2):
  xen/blkback: Aggressively shrink page pools if a memory pressure is
    detected
  blkback: Add a module parameter for aggressive pool shrinking duration

 drivers/block/xen-blkback/blkback.c | 35 +++++++++++++++++++++++++++--
 1 file changed, 33 insertions(+), 2 deletions(-)

-- 
2.17.1



 

