
Re: [Xen-devel] Design session report: Live-Updating Xen


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 17 Jul 2019 19:40:50 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Leonard Foerster <foersleo@xxxxxxxxxx>
  • Delivery-date: Wed, 17 Jul 2019 18:40:59 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 17/07/2019 14:02, Jan Beulich wrote:
> On 17.07.2019 13:26, Andrew Cooper wrote:
>> On 17/07/2019 08:09, Jan Beulich wrote:
>>> On 17.07.2019 01:51, Andrew Cooper wrote:
>>>> On 15/07/2019 19:57, Foerster, Leonard wrote:
>>>>>   * dom0less: bootstrap domains without the involvement of dom0
>>>>>           -> this might come in handy to at least set up and continue
>>>>>              dom0 on the target xen
>>>>>           -> If we have this, it might also enable us to de-serialize
>>>>>              the state for other guest domains in xen and not have to
>>>>>              wait for dom0 to do this
>>>> Reconstruction of dom0 is something which Xen will definitely need to
>>>> do.  With the memory still in place, it's just a fairly small amount of
>>>> register state which needs restoring.
>>>>
>>>> That said, reconstruction of the typerefs will be an issue.  Walking
>>>> over a fully populated L4 tree can (in theory) take minutes, and it's not
>>>> safe to just start executing without reconstruction.
>>>>
>>>> Depending on how bad it is in practice, one option might be to do a
>>>> demand validate of %rip and %rsp, along with a hybrid shadow mode which
>>>> turns faults into typerefs, which would allow the gross cost of
>>>> revalidation to be amortised while the vcpus were executing.  We would
>>>> definitely want some kind of logic to aggressively typeref outstanding
>>>> pagetables so the shadow mode could be turned off.
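(As a very rough sketch of the fault-driven revalidation idea - every helper
name below is invented, and the real logic would have to pick the correct
PGT_* type for the level being validated:)

static int lu_shadow_fault(struct vcpu *v, unsigned long gla)
{
    mfn_t mfn = lu_walk_to_mfn(v, gla);        /* hypothetical walker */
    struct page_info *pg = mfn_to_page(mfn);

    if ( !lu_page_revalidated(pg) )
    {
        /* Re-take the typeref that was dropped across the live update,
         * validating the pagetable on first touch. */
        if ( !get_page_type(pg, PGT_l1_page_table) )
            return -EINVAL;
        lu_mark_revalidated(pg);
    }

    /* Once the background pass has re-taken all outstanding typerefs,
     * the hybrid shadow mode can be switched back off. */
    if ( lu_all_revalidated(v->domain) )
        lu_disable_shadow(v->domain);

    return 0;
}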
>>> Neither walking the page table trees nor an on-demand re-creation can
>>> possibly work, as pointed out during (partly informal) discussion: At
>>> the very least the allocated and pinned states of pages can only be
>>> transferred.
>> Pinned state exists in the current migrate stream.  Allocated does not -
>> it is an internal detail of how Xen handles the memory.
>>
>> But yes - this observation means that we can't simply walk the guest
>> pagetables.
>>
>>> Hence we seem to have come to agreement that struct
>>> page_info instances have to be transformed (in place if possible, i.e.
>>> when the sizes match, otherwise by copying).
>> -10 to this idea, if it can possibly be avoided.  In this case, it
>> definitely can be avoided.
>>
>> We do not want to be grovelling around in the old Xen's datastructures,
>> because that adds a binary A=>B translation which is
>> per-old-version-of-xen, meaning that you need a custom build of each
>> target Xen which depends on the currently-running Xen, or you have to
>> maintain a matrix of old versions which will depend on the local
>> changes, and is therefore not suitable for upstream.
> Now the question is what alternative you would suggest. By you
> saying "the pinned state lives in the migration stream", I assume
> you mean to imply that Dom0 state should be handed from old to
> new Xen via such a stream (minus raw data page contents)?

Yes, and this is explicitly identified in the bullet point saying "We do
only rely on domain state and no internal xen state".

In practice, it is going to be far more efficient to have Xen
serialise/deserialise the domain register state etc, than to bounce it
via hypercalls.  By the time you're doing that in Xen, adding dom0 as
well is trivial.
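
To be concrete about what I mean (sketch only - the record layout and all
the lu_* names are made up, and this is not the existing migration stream
format):

/* Per-vcpu record written by the old Xen and consumed by the new one. */
struct lu_vcpu_record {
    uint32_t domid;
    uint32_t vcpu_id;
    struct cpu_user_regs regs;     /* GPRs, rip, rflags, ... */
    /* ... control registers, MSRs, FPU/XSAVE state, timers, ... */
};

static int lu_save_domain(struct domain *d, struct lu_stream *s)
{
    struct vcpu *v;

    for_each_vcpu ( d, v )
    {
        struct lu_vcpu_record rec = {
            .domid   = d->domain_id,
            .vcpu_id = v->vcpu_id,
            .regs    = v->arch.user_regs,   /* illustrative field path */
        };

        if ( lu_write_record(s, LU_REC_VCPU, &rec, sizeof(rec)) )
            return -ENOMEM;
    }

    return 0;
}

Doing this inside Xen means dom0's state goes through exactly the same
path as every other domain's.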

>
>>>>>           -> We might have to go and re-inject certain interrupts
>>>> What hardware are you targeting here?  IvyBridge and later has a posted
>>>> interrupt descriptor which can accumulate pending interrupts (at least
>>>> manually), and newer versions (Broadwell?) can accumulate interrupts
>>>> directly from hardware.
>>> For HVM/PVH perhaps that's good enough. What about PV though?
>> What about PV?
>>
>> The in-guest evtchn data structure will accumulate events just like a
>> posted interrupt descriptor.  Real interrupts will queue in the LAPIC
>> during the transition period.
> Yes, that'll work as long as interrupts remain active from Xen's POV.
> But if there's concern about a blackout period for HVM/PVH, then
> surely there would also be such for PV.

The only fix for that is to reduce the length of the blackout period. 
We can't magically inject interrupts halfway through the xen-to-xen
transition, because we can't run vcpus at that point in time.
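
For event channels specifically, the saving grace is that the pending bits
live in guest-visible memory which isn't going anywhere, so the new Xen only
needs to re-assert the upcall once it starts running vcpus again.  Roughly
(sketch; vcpu_info() and vcpu_mark_events_pending() are the existing helpers,
the surrounding loop is illustrative):

static void lu_replay_pending_events(struct domain *d)
{
    struct vcpu *v;

    for_each_vcpu ( d, v )
        if ( vcpu_info(v, evtchn_upcall_pending) )
            vcpu_mark_events_pending(v);   /* re-kick the upcall */
}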

>
>>>>> A key cornerstone for Live-update is guest transparent live migration
>>>>>   -> This means we are using a well-defined ABI for saving/restoring
>>>>>      domain state
>>>>>           -> We do only rely on domain state and no internal xen state
>>>> Absolutely.  One issue I discussed with David a while ago is that even
>>>> across an upgrade of Xen, the format of the EPT/NPT pagetables might
>>>> change, at least in terms of the layout of software bits.  (Especially
>>>> for EPT where we slowly lose software bits to new hardware features we
>>>> wish to use.)
>>> Right, and therefore a similar transformation like for struct page_info
>>> may be unavoidable here too.
>> None of that lives in the current migrate stream.  Again - it is
>> internal details, so is not something which is appropriate to be
>> inspected by the target Xen.
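(To illustrate why the tables can't simply be shared - the bit positions
below are entirely made up: a software-available EPT bit in the old Xen may
be claimed by a hardware feature in the new one, so the old tables can't be
interpreted with the new definitions:)

/* Old Xen: bit 60 is ignored by hardware and used for bookkeeping. */
#define OLD_EPT_SW_RECALC    (1UL << 60)

/* New Xen: bit 60 is now consumed by a hardware feature we want to use,
 * so the software flag has had to move.  Reinterpreting the old tables
 * in place would silently change what that bit means. */
#define NEW_EPT_HW_FEATURE   (1UL << 60)
#define NEW_EPT_SW_RECALC    (1UL << 58)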
>>
>>> Re-using large data structures (or arrays thereof) may also turn out
>>> useful in terms of latency until the new Xen actually becomes ready to
>>> resume.
>> When it comes to optimising the latency, there is a fair amount we might
>> be able to do ahead of the critical region, but I still think this would
>> be better done in terms of a "clean start" in the new Xen to reduce
>> binary dependencies.
> Latency actually is only one aspect (albeit the larger the host, the more
> relevant it is). Sufficient memory to have both old and new copies of the
> data structures in place, plus the migration stream, is another. This
> would especially become relevant when even DomU-s were to remain in
> memory, rather than getting saved/restored.

But we're still talking about something which is on a multi-MB scale,
rather than multi-GB scale.

Xen itself is tiny.  Sure there are overheads from the heap management
and pagetables etc, but the overwhelming majority of used memory is
guest RAM which is staying in place.
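
(Back of the envelope, with made-up but plausible numbers: even at a few KiB
of serialised architectural state per vcpu, a host with a couple of thousand
vcpus produces a stream on the order of 10 MB, against the hundreds of GB of
guest RAM which never moves.)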

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

