
Re: [PATCH v20210701 15/40] tools: prepare to allocate saverestore arrays once


  • To: Olaf Hering <olaf@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Mon, 5 Jul 2021 11:44:30 +0100
  • Cc: Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Mon, 05 Jul 2021 10:44:49 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 01/07/2021 10:56, Olaf Hering wrote:
> The hot path 'send_dirty_pages' is supposed to do just one thing: sending.
> The other end, 'handle_page_data', is supposed to do just receiving.
>
> But instead both do other costly work, like memory allocations and data moving.
> Do the allocations once; the array sizes are a compile-time constant.
> Avoid unneeded copying of data by receiving it directly into mapped guest
> memory.
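
(For reference, the shape of the change being described is roughly the
following; a minimal sketch with invented names, not the actual libxc
structures or functions:)

    #include <stdint.h>
    #include <stdlib.h>

    #define MAX_BATCH_SIZE 1024                /* compile-time constant */

    /* Today: the hot path allocates and frees the batch arrays on every
     * iteration, around the actual sending work. */
    static int send_batch_today(void)
    {
        uint64_t *pfns = malloc(MAX_BATCH_SIZE * sizeof(*pfns));

        if ( !pfns )
            return -1;

        /* ... fill pfns[] and send ... */

        free(pfns);
        return 0;
    }

    /* Proposed: the arrays hang off the stream context and are allocated
     * once during setup, so the hot path only fills and sends them. */
    struct stream_ctx
    {
        uint64_t *pfns;                        /* MAX_BATCH_SIZE entries */
    };

    static int send_batch_proposed(struct stream_ctx *ctx)
    {
        /* ... fill ctx->pfns[] and send; no allocations here ... */
        return 0;
    }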

There is a reason why the logic was written that way, and it was a good
reason at the time: it let valgrind spot bugs in the memory handling in
these functions (and it did indeed find many bugs during development).

These days ASAN is perhaps the preferred tool, but both depend on dynamic
allocations to figure out the actual size of the various objects.


I agree that the repeated alloc/free of same-sized memory regions on
each iteration is a waste.  However, if we are going to fix this by
using one-off allocations, then we want to compensate with logic such as
the call to VALGRIND_MAKE_MEM_UNDEFINED() in flush_batch(), and I think
we still need individual allocations to let the tools work properly.
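
To make the compensation concrete, something along these lines is what I
have in mind (a minimal sketch, not the actual flush_batch() code, and
assuming a hypothetical once-allocated batch buffer hanging off the
context):

    #include <stdint.h>
    #include <valgrind/memcheck.h>  /* client requests do nothing when not
                                       running under valgrind */

    #define MAX_BATCH_SIZE 1024

    struct batch_bufs
    {
        uint64_t *pfns;     /* MAX_BATCH_SIZE entries, allocated once */
        uint32_t *types;    /* ditto */
    };

    /* Run at the start of each batch.  The allocations themselves are
     * reused, but re-marking their contents as undefined means memcheck
     * still reports a read of stale data left over from the previous
     * iteration, which is the check the per-iteration free()/malloc()
     * used to give us for free. */
    static void reset_batch(struct batch_bufs *b)
    {
        VALGRIND_MAKE_MEM_UNDEFINED(b->pfns,
                                    MAX_BATCH_SIZE * sizeof(*b->pfns));
        VALGRIND_MAKE_MEM_UNDEFINED(b->types,
                                    MAX_BATCH_SIZE * sizeof(*b->types));
    }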

>
> This patch is just preparation; subsequent changes will populate the arrays.
>
> Once all changes are applied, migration of a busy HVM domU changes as follows:
>
> Without this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_testing):
> 2020-10-29 10:23:10.711+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 55.324905335 sec, 203 MiB/sec: Internal error
> 2020-10-29 10:23:35.115+0000: xc: show_transfer_rate: 16829632 bytes + 2097552 pages in 24.401179720 sec, 335 MiB/sec: Internal error
> 2020-10-29 10:23:59.436+0000: xc: show_transfer_rate: 16829032 bytes + 2097478 pages in 24.319025928 sec, 336 MiB/sec: Internal error
> 2020-10-29 10:24:23.844+0000: xc: show_transfer_rate: 16829024 bytes + 2097477 pages in 24.406992500 sec, 335 MiB/sec: Internal error
> 2020-10-29 10:24:48.292+0000: xc: show_transfer_rate: 16828912 bytes + 2097463 pages in 24.446489027 sec, 335 MiB/sec: Internal error
> 2020-10-29 10:25:01.816+0000: xc: show_transfer_rate: 16836080 bytes + 2098356 pages in 13.447091818 sec, 609 MiB/sec: Internal error
>
> With this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_unstable):
> 2020-10-28 21:26:05.074+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 52.564054368 sec, 213 MiB/sec: Internal error
> 2020-10-28 21:26:23.527+0000: xc: show_transfer_rate: 16830040 bytes + 2097603 pages in 18.450592015 sec, 444 MiB/sec: Internal error
> 2020-10-28 21:26:41.926+0000: xc: show_transfer_rate: 16830944 bytes + 2097717 pages in 18.397862306 sec, 445 MiB/sec: Internal error
> 2020-10-28 21:27:00.339+0000: xc: show_transfer_rate: 16829176 bytes + 2097498 pages in 18.411973339 sec, 445 MiB/sec: Internal error
> 2020-10-28 21:27:18.643+0000: xc: show_transfer_rate: 16828592 bytes + 2097425 pages in 18.303326695 sec, 447 MiB/sec: Internal error
> 2020-10-28 21:27:26.289+0000: xc: show_transfer_rate: 16835952 bytes + 2098342 pages in 7.579846749 sec, 1081 MiB/sec: Internal error

These are good numbers, and clearly show that there is some value here,
but shouldn't they be in the series header?  They're not terribly
relevant to this patch specifically.

Also, while I can believe that the first sample is slower than the later
ones (in particular, during the first round, we've got to deal with the
non-RAM regions too and therefore spend more time making hypercalls),
I'm not sure I believe the final sample.  Given the byte/page count, the
substantially smaller elapsed time looks suspicious.
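
Doing the arithmetic on that final sample (assuming 4 KiB pages): 2098342
pages * 4096 bytes is roughly 8.0 GiB, which in 7.58s matches the reported
~1081 MiB/sec, i.e. about 9.1 Gbit/s before any protocol overhead.  That
would mean the final iteration ran at essentially wirespeed on a 10G link.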

> Note: the performance improvement depends on the network cards used,
> the wire speed and the host:
> - No improvement is expected with a 1G link.
> - The improvement shown above can be seen on a 10G link.
> - Only a slight improvement can be seen on a 100G link.

Are these observations with an otherwise idle dom0?

Even if CPU time in dom0 wasn't the bottleneck with a 1G link, the
reduction in CPU time you observe at higher link speeds will still be
making a difference at 1G, and will probably be visible if you perform
multiple concurrent migrations.

~Andrew