
Re: [Xen-devel] [PATCH 5/6] xen/tasklet: Return -ERESTART from continue_hypercall_on_cpu()



On 09.12.2019 18:49, Andrew Cooper wrote:
> On 09/12/2019 16:52, Jan Beulich wrote:
>> On 05.12.2019 23:30, Andrew Cooper wrote:
>>> Some hypercall tasklets want to create a continuation, rather than fail the
>>> hypercall with a hard error.  By the time the tasklet is executing, it is too
>>> late to create the continuation, and even continue_hypercall_on_cpu() doesn't
>>> have enough state to do it correctly.
>> I think it would be quite nice if you made clear what piece of state
>> it is actually missing. To be honest, I don't recall anymore.
> 
> How to correctly mutate the registers and/or memory (which is specific
> to the hypercall subop in some cases).

Well, in-memory arguments can be accessed as long as the mapping is
the right one (which it typically wouldn't be inside a tasklet). Do
existing continue_hypercall_on_cpu() users need this? Looking over
patch 4 again, I didn't think so. (Which isn't to say that removing
the latent issue is not a good thing.)

In-register values can be changed as long as the respective exit
path will suitably pick up the value, which I thought was always
the case.

Hence I'm afraid your single reply sentence didn't really clarify
matters. I'm sorry if this is just because of me being dense.

>>> There is one RFC point.  The statement in the header file of "If this function
>>> returns 0 then the function is guaranteed to run at some point in the future."
>>> was never true.  In the case of a CPU miss, the hypercall would be blindly
>>> failed with -EINVAL.
>> "Was never true" sounds like "completely broken". Afaict it was true
>> in all cases except the purely hypothetical one of the tasklet ending
>> up executing on the wrong CPU.
> 
> There is nothing hypothetical about it.  It really will go wrong when a
> CPU gets offlined.

Accepted, but it's still not as if it were "completely broken". I would
even suppose the case wasn't considered when CPU offlining support was
introduced, rather than when continue_hypercall_on_cpu() came into
existence (which presumably is when the comment was written).

Anyway - yes, I agree this is a fair solution to the issue at hand.

>>> The current behaviour with this patch is to not cancel the continuation, which
>>> I think is less bad, but still not great.  Thoughts?
>> Well, that's a guest live lock then aiui.
> 
> It simply continues again.  It will livelock only if the hypercall picks
> a bad cpu all the time.

Oh, I see I was misled by continue_hypercall_tasklet_handler() not
updating info->cpu, not paying attention to its actually freeing info.
Plus a crucial aspect looks to be that there are no "chained" uses of
continue_hypercall_on_cpu() anymore (the microcode loading one being
gone now) - afaict any such wouldn't guarantee forward progress with
this new model (without recording somewhere which CPUs had been dealt
with already).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
