
Re: [Xen-devel] Design session report: Live-Updating Xen


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Wed, 17 Jul 2019 13:02:36 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Leonard Foerster <foersleo@xxxxxxxxxx>
  • Delivery-date: Wed, 17 Jul 2019 13:02:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 17.07.2019 13:26, Andrew Cooper wrote:
> On 17/07/2019 08:09, Jan Beulich wrote:
>> On 17.07.2019 01:51, Andrew Cooper wrote:
>>> On 15/07/2019 19:57, Foerster, Leonard wrote:
>>>>    * dom0less: bootstrap domains without the involvement of dom0
>>>>            -> this might come in handy to at least set up and continue dom0 on target xen
>>>>            -> If we have this, it might also enable us to de-serialize the state for other guest-domains in xen and not have to wait for dom0 to do this
>>> Reconstruction of dom0 is something which Xen will definitely need to
>>> do.  With the memory still in place, it's just a fairly small amount of
>>> register state which needs restoring.
>>>
>>> That said, reconstruction of the typerefs will be an issue.  Walking
>>> over a fully populated L4 tree can (in theory) take minutes, and it's
>>> not safe to just start executing without reconstruction.
>>>
>>> Depending on how bad it is in practice, one option might be to do a
>>> demand validate of %rip and %rsp, along with a hybrid shadow mode which
>>> turns faults into typerefs, which would allow the gross cost of
>>> revalidation to be amortised while the vcpus were executing.  We would
>>> definitely want some kind of logic to aggressively typeref outstanding
>>> pagetables so the shadow mode could be turned off.
>> Neither walking the page table trees nor an on-demand re-creation can
>> possibly work, as pointed out during (partly informal) discussion: At
>> the very least the allocated and pinned states of pages can only be
>> transferred.
> 
> Pinned state exists in the current migrate stream.  Allocated does not -
> it is an internal detail of how Xen handles the memory.
> 
> But yes - this observation means that we can't simply walk the guest
> pagetables.
> 
>> Hence we seem to have come to agreement that struct
>> page_info instances have to be transformed (in place if possible, i.e.
>> when the sizes match, otherwise by copying).
> 
> -10 to this idea, if it can possibly be avoided.  In this case, it
> definitely can be avoided.
> 
> We do not want to be grovelling around in the old Xen's datastructures,
> because that adds a binary A=>B translation which is
> per-old-version-of-xen, meaning that you need a custom build of each
> target Xen which depends on the currently-running Xen, or have to
> maintain a matrix of old versions which will be dependent on the local
> changes, and therefore not suitable for upstream.

Now the question is what alternative you would suggest. Since you say
"the pinned state lives in the migration stream", I assume you mean
that Dom0 state should be handed from old to new Xen via such a
stream (minus raw data page contents)?

>>>>            -> We might have to go and re-inject certain interrupts
>>> What hardware are you targeting here?  IvyBridge and later has a posted
>>> interrupt descriptor which can accumulate pending interrupts (at least
>>> manually), and newer versions (Broadwell?) can accumulate interrupts
>>> directly from hardware.
>> For HVM/PVH perhaps that's good enough. What about PV though?
> 
> What about PV?
> 
> The in-guest evtchn data structure will accumulate events just like a
> posted interrupt descriptor.  Real interrupts will queue in the LAPIC
> during the transition period.

Yes, that'll work as long as interrupts remain active from Xen's POV.
But if there's concern about a blackout period for HVM/PVH, then
surely there would be one for PV as well.

>>>> A key cornerstone for Live-update is guest transparent live migration
>>>>    -> This means we are using a well defined ABI for saving/restoring domain state
>>>>            -> We rely only on domain state and not on internal Xen state
>>> Absolutely.  One issue I discussed with David a while ago is that even
>>> across an upgrade of Xen, the format of the EPT/NPT pagetables might
>>> change, at least in terms of the layout of software bits.  (Especially
>>> for EPT where we slowly lose software bits to new hardware features we
>>> wish to use.)
>> Right, and therefore a similar transformation like for struct page_info
>> may be unavoidable here too.
> 
> None of that lives in the current migrate stream.  Again - it is
> internal details, so is not something which is appropriate to be
> inspected by the target Xen.
> 
>> Re-using large data structures (or arrays thereof) may also turn out
>> useful in terms of latency until the new Xen actually becomes ready to
>> resume.
> 
> When it comes to optimising the latency, there is a fair amount we might
> be able to do ahead of the critical region, but I still think this would
> be better done in terms of a "clean start" in the new Xen to reduce
> binary dependencies.

Latency actually is only one aspect (albeit the larger the host, the more
relevant it is). Sufficient memory to hold both old and new copies of the
data structures, plus the migration stream, is another. This would become
especially relevant if even DomU-s were to remain in memory, rather than
being saved/restored.

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
