
Re: [Xen-devel] [PATCH 02/18] xen/arm: Implement PSCI system suspend call (virtual interface)


  • To: Julien Grall <julien.grall@xxxxxxx>, Mirela Simonovic <mirela.simonovic@xxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 15 Nov 2018 10:50:15 +0000
  • Cc: Wei Liu <wei.liu2@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Xen Devel <xen-devel@xxxxxxxxxxxxx>, Davorin Mista <dm@xxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Saeed Nowshadi <saeed.nowshadi@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andre Przywara <andre.przywara@xxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
  • Delivery-date: Thu, 15 Nov 2018 10:50:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 15/11/2018 10:36, Julien Grall wrote:
> Hi Andrew,
>
> On 11/15/18 10:26 AM, Andrew Cooper wrote:
>> On 15/11/2018 10:13, Julien Grall wrote:
>>> (+ Andre)
>>>
>>> On 11/15/18 12:47 AM, Andrew Cooper wrote:
>>>> On 14/11/2018 12:49, Julien Grall wrote:
>>>>> Hi Mirela,
>>>>>
>>>>> On 14/11/2018 12:08, Mirela Simonovic wrote:
>>>>>>
>>>>>>
>>>>>> On 11/13/2018 09:32 AM, Andrew Cooper wrote:
>>>>>>> On 12/11/2018 19:56, Julien Grall wrote:
>>>>>>>> Hi Andrew,
>>>>>>>>
>>>>>>>> On 11/12/18 4:41 PM, Andrew Cooper wrote:
>>>>>>>>> On 12/11/18 16:35, Mirela Simonovic wrote:
>>>>>>>>>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>>>>>>>>>>> index e594b48d81..7f8105465c 100644
>>>>>>>>>>>> --- a/xen/arch/arm/domain.c
>>>>>>>>>>>> +++ b/xen/arch/arm/domain.c
>>>>>>>>>>>> @@ -97,6 +97,11 @@ static void ctxt_switch_from(struct vcpu *p)
>>>>>>>>>>>>           if ( is_idle_vcpu(p) )
>>>>>>>>>>>>               return;
>>>>>>>>>>>>
>>>>>>>>>>>> +    /* VCPU's context should not be saved if its domain is suspended */
>>>>>>>>>>>> +    if ( p->domain->is_shut_down &&
>>>>>>>>>>>> +        (p->domain->shutdown_code == SHUTDOWN_suspend) )
>>>>>>>>>>>> +        return;
>>>>>>>>>>> SHUTDOWN_suspend is used in Xen for another purpose (see
>>>>>>>>>>> SCHEDOP_shutdown). The other user of that code relies on all
>>>>>>>>>>> the state to be saved on suspend.
>>>>>>>>>>>
>>>>>>>>>> We just need a flag to mark a domain as suspended, and I do
>>>>>>>>>> believe SHUTDOWN_suspend is not used anywhere else.
>>>>>>>>>> Let's come back to this.
>>>>>>>>> SHUTDOWN_suspend is used for migration.  Grep for it through the
>>>>>>>>> Xen tree and you'll find several pieces of documentation,
>>>>>>>>> including the description of what this shutdown code means.
>>>>>>>>>
>>>>>>>>> What you are introducing here is not a shutdown - it is a suspend
>>>>>>>>> with the intent to resume executing later.  As such, it shouldn't
>>>>>>>>> use Xen's shutdown infrastructure, which exists mainly to
>>>>>>>>> communicate with the toolstack.
>>>>>>>> Would domain pause/unpause be a better solution?
>>>>>>> Actually yes - that sounds like a very neat solution.
>>>>>>
>>>>>> I believe domain pause will not work here - a domain cannot pause
>>>>>> itself, i.e. the current domain cannot be paused. This functionality
>>>>>> seems to assume that a domain is pausing another domain. Or I
>>>>>> missed the point.
>>>>>
>>>>> Yes, domain pause/unpause will not work. However, you can introduce a
>>>>> boolean to tell you whether the domain was suspended.
>>>>>
>>>>> I actually quite like how suspend works for x86 HVM. This is based on
>>>>> pause/unpause. Have a look at hvm_s3_{suspend/resume}.
>>>>
>>>> That only exists because the ACPI controller is/was implemented in
>>>> QEMU.  I wouldn't recommend copying it.
>>>
>>> May I ask why? I don't think the properties are very different from
>>> Arm here.
>>
>> If you observe, that can only be actioned by a hypercall from qemu.  It
>> can't be actioned by an ACPI controller emulated by Xen.
>
> How does the ACPI controller emulated by Xen deal with suspend/resume?
> Do you have any pointers?

It doesn't, and that is one of the outstanding issues for trying to make
it work sensibly.  In the end it wasn't merged, and is still on
someone’s TODO list.

>
>>
>>>
>>>>
>>>> Having spent some more time thinking about this problem, what
>>>> properties does the PSCI call have?
>>>>
>>>> I gather from other parts of this thread that there may be a partial
>>>> reset of state?  Beyond that, what else?
>>>>
>>>> I ask, because conceptually, domU suspend to RAM is literally just
>>>> "de-schedule until we want to wake up", and in this case, it appears
>>>> to be "wake up on any of the still-active interrupts".  We've already
>>>> got a scheduler API for that, and it's called vcpu_block().  This
>>>> already exists with "wait until a new event occurs" semantics.
>>>>
>>>> Is there anything else I've missed?
>>>
>>> All vCPUs but the vCPU calling SYSTEM_SUSPEND should be off. Also,
>>> only events on that vCPU can trigger a resume. All the other events
>>> should not be taken into account.
>>>
>>> My worry with vcpu_block() is that we don't prevent the other vCPUs
>>> from running. Technically they should be off, but I would like some
>>> safety to avoid any potential corner case (i.e. another way to turn
>>> a vCPU on).
>>
>> Why not have the SYSTEM_SUSPEND check that all other vCPUs are DOWN
>> first, and fail the call if not?
>
> The code already checks for that. My concern is that there might be
> (today or in the future) other bits of Xen that can potentially turn
> the vCPU on (e.g. the toolstack).

I wouldn't be concerned - there is nothing you can do about it.

Because of things like domain construction / migration / emulation /
etc, the toolstack(/control domain) has all the low level hooks into Xen
to do everything.  Your concern here about the toolstack onlining other
vcpus is just as problematic as the toolstack issuing an unpause
hypercall, or mapping part of the guest and splatting /dev/random
everywhere.

Any state you can arrange in Xen can be undone/altered by the toolstack.
You've got to accept that the toolstack can do this, and trust that it
won't, because it is a trusted piece of the system and has a vested
interest in the guest not crashing.

Beyond that, you'll want to use the most appropriate interfaces within
Xen to implement this, and it very much sounds like vcpu_block() has the
semantics that you want.
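
To make that concrete, here is a minimal sketch of what a
vcpu_block()-based SYSTEM_SUSPEND handler could look like.  The function
name, error values and placement are illustrative only - this is not the
patch under review:

    static int32_t vpsci_system_suspend(void)
    {
        struct vcpu *v;
        struct domain *d = current->domain;

        /* Refuse the call unless every other vCPU is already offline. */
        for_each_vcpu ( d, v )
        {
            if ( v != current && !test_bit(_VPF_down, &v->pause_flags) )
                return PSCI_DENIED;
        }

        /*
         * De-schedule the calling vCPU until a new event arrives.  The
         * wake-up condition is exactly vcpu_block()'s "wait until a new
         * event occurs" semantics, so no new scheduler machinery is
         * needed.
         */
        vcpu_block();

        return PSCI_SUCCESS;
    }

Whether PSCI_DENIED is the right value to hand back while other vCPUs
are still online is a question for the PSCI spec rather than for Xen.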

>
>>
>> If instead of waiting for any event, you need to wait for a specific
>> event, there is also vcpu_poll() which is a related scheduler API.
> I guess we need to agree on what kind of event can resume the guest
> from suspend. I am not convinced that all events should be
> equal here.
>
> By vcpu_poll, do you mean do_poll?

That is the hypercall, but there is a variation of it for internal use.
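
For reference, the guest-visible argument to SCHEDOP_poll (in
xen/include/public/sched.h) is roughly:

    struct sched_poll {
        XEN_GUEST_HANDLE(evtchn_port_t) ports; /* ports to wait on       */
        unsigned int nr_ports;                 /* number of ports        */
        uint64_t timeout;                      /* optional timeout; 0 = none */
    };

i.e. the caller names the specific event channel ports it is prepared to
be woken by, rather than waking on any pending event as vcpu_block() does.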

~Andrew
