
Re: [RFC PATCH 00/10] Preemption in hypervisor (ARM only)


  • To: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 24 Feb 2021 18:07:25 +0000
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Meng Xu <mengxu@xxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Wed, 24 Feb 2021 18:07:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 23/02/2021 02:34, Volodymyr Babchuk wrote:
> Hello community,
>
> The subject of this cover letter is quite self-explanatory: this
> patch series implements a PoC for preemption in hypervisor mode.
>
> This is a sort of follow-up to the recent discussion about latency
> ([1]).
>
> Motivation
> ==========
>
> It is well known that Xen is not preemptible. In other words, it is
> impossible to switch vCPU contexts while running in hypervisor
> mode. The only place where a scheduling decision can be made, and one
> vCPU replaced with another, is the exit path from hypervisor
> mode. The one exception is idle vCPUs, which never leave hypervisor
> mode for obvious reasons.
>
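For readers unfamiliar with that exit path: on the way back into the
guest, Xen drains pending softirqs, and the scheduler runs as one of
them.  Roughly this shape (a simplified sketch of the ARM path, not
the literal code):

    /*
     * Sketch: last C code on the path back into the guest.  Pending
     * softirqs are drained here; SCHEDULE_SOFTIRQ is one of them, and
     * processing it can context-switch to a different vCPU.
     */
    void leave_hypervisor_to_guest(void)
    {
        while ( softirq_pending(smp_processor_id()) )
            do_softirq();

        /* ... no further preemption point until the next trap ... */
    }
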
> This leads to a number of problems. The following list is not
> comprehensive; it covers only the things that I or my colleagues have
> encountered personally.
>
> Long-running hypercalls. Due to their nature, some hypercalls can
> execute for an arbitrarily long time. Mostly these are calls that
> deal with long lists of similar actions, like processing memory
> pages. To deal with this issue, Xen employs a most horrific technique
> called "hypercall continuation": when the code that handles a
> hypercall decides that it should be preempted, it updates the
> hypercall parameters and moves the guest PC one instruction back.
> This causes the guest to re-execute the hypercall with the altered
> parameters, which allows the hypervisor to continue the hypercall
> later. This approach has obvious problems: the code that executes the
> hypercall is responsible for its own preemption, preemption checks
> are infrequent (because they are costly in themselves), the hypercall
> execution state is stored in a guest-controlled area, and we rely on
> the guest's good will to resume the hypercall. All of this imposes
> restrictions on which hypercalls can be preempted, when they can be
> preempted, and how hypercall handlers have to be written. It also
> requires very careful coding and has already led to at least one
> vulnerability - XSA-318. Some hypercalls cannot be preempted at all,
> like the one mentioned in [1].
>
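For reference, the continuation pattern inside a hypercall handler
looks roughly as follows.  hypercall_preempt_check() and
hypercall_create_continuation() are the real Xen primitives; the
handler, __HYPERVISOR_example_op, NR_ITEMS and process_item() are made
up for illustration:

    /* Hypothetical long-running hypercall processing NR_ITEMS items. */
    static long do_example_op(XEN_GUEST_HANDLE_PARAM(void) arg,
                              unsigned int start)
    {
        unsigned int i;

        for ( i = start; i < NR_ITEMS; i++ )
        {
            process_item(i);             /* one bounded unit of work */

            /*
             * The check is costly, so it is done at most once per
             * iteration, and skipped on the final one.
             */
            if ( i + 1 < NR_ITEMS && hypercall_preempt_check() )
                /*
                 * Patch the guest's registers so that, on return to
                 * guest context, the guest PC points back at the
                 * hypercall instruction: the guest re-issues the call
                 * with 'start' updated to resume where we left off.
                 */
                return hypercall_create_continuation(
                    __HYPERVISOR_example_op, "hi", arg, i + 1);
        }

        return 0;
    }

Every property listed above falls out of this shape: the handler must
place its own checks, and the resume point ('start') round-trips
through guest-visible register state.
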
> Absence of hypervisor threads/vCPUs. The hypervisor owns only the
> idle vCPUs, which are supposed to run when the system is idle. If the
> hypervisor needs to execute a task of its own that must run right
> now, it has no choice but to execute it on the current vCPU. But the
> scheduler does not know that the hypervisor is doing hypervisor work,
> and accounts the time spent to a domain. This can lead to domain
> starvation.
>
> Also, the absence of hypervisor threads leads to the absence of
> high-level synchronization primitives like mutexes, condition
> variables, completions, etc. This causes two problems: we need to use
> spinlocks everywhere, and we have trouble porting device drivers from
> the Linux kernel.
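
The mutex point is worth a concrete illustration: a blocking primitive
needs a schedulable entity to put to sleep, which is exactly what the
hypervisor lacks.  Everything in the sketch below is hypothetical;
none of these primitives exist in Xen today:

    /*
     * Hypothetical: what mutex_lock() would need from the hypervisor.
     * A pCPU that hits a contended lock has nothing else to run in
     * hypervisor context, so without hv_schedule() and friends the
     * only option is to spin.
     */
    void hv_mutex_lock(struct hv_mutex *m)
    {
        while ( !try_acquire(m) )
        {
            enqueue_waiter(m, current_hv_thread()); /* park ourselves */
            hv_schedule();          /* run something else until woken */
        }
    }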

You cannot reenter a guest, even to deliver interrupts, if pre-empted at
an arbitrary point in a hypercall.  State needs unwinding suitably.

Xen's non-preemptibility is deliberate: it is designed specifically to
force you not to implement long-running hypercalls which would
interfere with timely interrupt handling in the general case.

Hypervisor/virt properties are different to both a kernel-only RTOS and
regular userspace.  This was why I gave you some specific extra
scenarios to do latency testing with, so you could make a fair
comparison of "extra overhead caused by Xen" separate from "overhead
due to fundamental design constraints of using virt".


Preemption like this will make some benchmarks look better, but it also
introduces the ability to create fundamental problems, like preventing
any interrupt delivery into a VM for seconds of wallclock time while
each vCPU happens to be in a long-running hypercall.

If you want timely interrupt handling, you either need to partition your
workloads by the long-running-ness of their hypercalls, or not have
long-running hypercalls.

I remain unconvinced that preemption is a sensible fix for the problem
you're trying to solve.

~Andrew