
Re: [Xen-devel] Save paused cpu ctx



On 9/19/18 12:39 PM, Jan Beulich wrote:
>>>> On 19.09.18 at 11:19, <aisaila@xxxxxxxxxxxxxxx> wrote:
>> On Wed, 2018-09-19 at 03:01 -0600, Jan Beulich wrote:
>>>>>> On 19.09.18 at 10:21, <aisaila@xxxxxxxxxxxxxxx> wrote:
>>>>
>>>> I want to restart the discussion on dropping the
>>>> "if ( v->pause_flags & VPF_down )" check from hvm_save_cpu_ctxt()
>>>> and be able to save context for a vcpu in the down state. The
>>>> content of the ctx could be filled in as described in the Intel SDM,
>>>> "9.1.1 Processor State After Reset"
>>>> (https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf,
>>>> page 2996).
>>>>
>>>> Is this enough to be upstreamed?
>>>
>>> Along the lines of what Roger has said - without knowing _why_ you
>>> want to do that, it's hard to tell. It's pretty clear I think that we
>>> don't
>>> want to outright drop the condition, the question is just whether to
>>> somewhat relax it for your purposes.
>>>
>>> I'm afraid I also don't agree with your assertion that a down vCPU is
>>> in "after reset" state: To me, such a vCPU could legitimately be
>>> considered to be in that state, in the state it was in when it went
>>> down, or in basically any other (perhaps random) state.
>>
>> Processor State After Reset is just the name of the table in the
>> manual. Having a default state to return on a query was the starting
>> point of the discussion. You are right, we can return whatever state
>> the cpu was in before it went down. The code works fine like that;
>> furthermore, it gives us a smaller patch plus a legitimate ctx at the
>> end of the query.
> 
> You didn't understand my response then: There is no "the state" for
> a vCPU which is down. Therefore we shouldn't arbitrarily pick one of
> the almost infinite number of options.
> 
> Also in your reply to Roger you didn't respond at all to the size
> growth your proposed change would have for the migration stream,
> as pointed out by him.
> 
> In any event - for now I continue to think that the code should
> remain as is, and the caller should know whether (and _how_) to
> deal with down vCPU-s.

The scenario is that we are trying to query the state of a VCPU (please
note: just query). That means that we're only interested in getting some
coherent VCPU state via the XEN_DOMCTL_gethvmcontext_partial domctl.
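
For illustration, such a query looks roughly as follows from the
toolstack side via libxc (just a sketch; the domain ID handling and
error reporting are elided, and the fields printed at the end are only
examples):

#include <stdio.h>
#include <string.h>
#include <inttypes.h>
#include <xenctrl.h>
#include <xen/hvm/save.h>   /* HVM_SAVE_CODE(), struct hvm_hw_cpu */

/* Sketch: query the CPU context record of a single vCPU of an HVM guest. */
static int query_vcpu_ctxt(xc_interface *xch, uint32_t domid, uint16_t vcpu)
{
    struct hvm_hw_cpu ctxt;

    memset(&ctxt, 0, sizeof(ctxt));

    /* Issues XEN_DOMCTL_gethvmcontext_partial under the hood. */
    if ( xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(CPU),
                                          vcpu, &ctxt, sizeof(ctxt)) != 0 )
        return -1;   /* today this fails with ENOENT if the vCPU is down */

    printf("vCPU %u: RIP=%#" PRIx64 " CR3=%#" PRIx64 "\n",
           vcpu, ctxt.rip, ctxt.cr3);
    return 0;
}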

We don't care if said state ends up being saved for the migration stream
or not, so in that respect the answer to Roger's question is: no size
increase or difference whatsoever.

All we want is to be able to query the state of any VCPU in the valid
range of VCPUs assigned to the domain, online or not. We believe that
being able to query them is reasonable, and the SDM states that they do
have a state (whatever it happens to be: the init state, the
after-reset state, etc.).
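
For context, the check being discussed sits in hvm_save_cpu_ctxt() (in
xen/arch/x86/hvm/hvm.c) and has roughly the following shape (a
simplified sketch, not the literal upstream code):

/* Simplified sketch of the save handler for the CPU context record. */
static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
    struct vcpu *v;
    struct hvm_hw_cpu ctxt;

    for_each_vcpu ( d, v )
    {
        /*
         * A vCPU that is down is skipped entirely, so no CPU record is
         * emitted for it and a partial-context query for that vCPU ends
         * up returning -ENOENT to the caller.
         */
        if ( v->pause_flags & VPF_down )
            continue;

        memset(&ctxt, 0, sizeof(ctxt));
        /* ... fill ctxt from v's architectural state ... */

        if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
            return 1;
    }

    return 0;
}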

For example, please look at this XenServer-only patch:

https://github.com/xenserver/xen.pg/blob/XS-7.1.x/master/x86-domctl-Don-t-pause-the-whole-domain-if-only-gett.patch

It just calls hvm_save_one_cpu_ctxt(v, &ctx); directly, then copies the
result to the guest, and that's it. It neither involves hvm_save_one(),
nor does it affect the migration stream in any way. Also, whether by
design or just by accident, it seems to work with introspection just
fine (which is why Alexandru was asking whether it's enough to just let
hvm_save_one_cpu_ctxt() do its thing to obtain correct VCPU state).
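
The shape of that approach is roughly the following (a sketch
reconstructed from the patch's description, not the literal patch; the
exact signature of hvm_save_one_cpu_ctxt() and the guest-handle
plumbing are assumptions):

/*
 * Sketch: serve XEN_DOMCTL_gethvmcontext_partial for a single vCPU by
 * grabbing its context directly, bypassing hvm_save_one() and the
 * migration-stream machinery entirely.
 */
static int gethvmcontext_one_vcpu(struct vcpu *v,
                                  XEN_GUEST_HANDLE_64(uint8) buffer)
{
    struct hvm_hw_cpu ctx;

    vcpu_pause(v);                   /* pause just this vCPU, not the domain */
    hvm_save_one_cpu_ctxt(v, &ctx);  /* fill ctx from v's architectural state */
    vcpu_unpause(v);

    return copy_to_guest(buffer, (uint8_t *)&ctx, sizeof(ctx)) ? -EFAULT : 0;
}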

Now, about the "the agent should do something else if the VCPU is down"
objection: that's not possible with the current Xen code, because
-ENOENT is returned both when the VCPU is down _and_ when the VCPU
index is out of bounds (e.g. if I query the state of VCPU 10 on a
2-VCPU guest).
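
To make that concrete, about the only thing the agent can do today is
check the index against the domain's maximum vCPU ID beforehand, e.g.
with xc_domain_getinfo() (a sketch; error handling kept minimal):

#include <errno.h>
#include <xenctrl.h>

/*
 * Sketch: tell "vCPU index out of range" apart from "vCPU exists but is
 * down", since XEN_DOMCTL_gethvmcontext_partial reports -ENOENT for both.
 * Returns 1 if the index is in range, 0 if not, -ESRCH on lookup failure.
 */
static int vcpu_index_valid(xc_interface *xch, uint32_t domid, uint16_t vcpu)
{
    xc_dominfo_t info;

    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 || info.domid != domid )
        return -ESRCH;   /* domain not found */

    /* In range, but the partial-context query can still fail with ENOENT. */
    return vcpu <= info.max_vcpu_id ? 1 : 0;
}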


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

