Re: [Xen-devel] [PATCH 5/6] xen/tasklet: Return -ERESTART from continue_hypercall_on_cpu()
On 10/12/2019 08:55, Jan Beulich wrote:
> On 09.12.2019 18:49, Andrew Cooper wrote:
>> On 09/12/2019 16:52, Jan Beulich wrote:
>>> On 05.12.2019 23:30, Andrew Cooper wrote:
>>>> Some hypercalls tasklets want to create a continuation, rather than
>>>> fail the hypercall with a hard error.  By the time the tasklet is
>>>> executing, it is too late to create the continuation, and even
>>>> continue_hypercall_on_cpu() doesn't have enough state to do it
>>>> correctly.
>>> I think it would be quite nice if you made clear what piece of state
>>> it is actually missing.  To be honest, I don't recall anymore.
>> How to correctly mutate the registers and/or memory (which is specific
>> to the hypercall subop in some cases).
> Well, in-memory arguments can be accessed as long as the mapping is
> the right one (which it typically wouldn't be inside a tasklet).  Do
> existing continue_hypercall_on_cpu() users need this?  Looking over
> patch 4 again, I didn't think so.  (Which isn't to say that removing
> the latent issue is not a good thing.)
>
> In-register values can be changed as long as the respective exit
> path will suitably pick up the value, which I thought was always
> the case.
>
> Hence I'm afraid your single reply sentence didn't really clarify
> matters.  I'm sorry if this is just because of me being dense.

How, physically, would you arrange for continue_hypercall_on_cpu() to
make the requisite state adjustments?

Yes - registers and memory can be accessed, but only the hypercall
(sub?)op handler knows how to mutate them appropriately.  You'd have to
copy the mutation logic into continue_hypercall_on_cpu(), and pass in
op/subops and a union of all pointers, *and* whatever intermediate
state the subop handler needs.

Or you can return -ERESTART and let the caller DTRT with the state it
has in context, as it would in other cases requiring a continuation.

>>>> There is one RFC point.
>>>> The statement in the header file of "If this function returns 0
>>>> then the function is guaranteed to run at some point in the
>>>> future." was never true.  In the case of a CPU miss, the hypercall
>>>> would be blindly failed with -EINVAL.
>>> "Was never true" sounds like "completely broken".  Afaict it was
>>> true in all cases except the purely hypothetical one of the tasklet
>>> ending up executing on the wrong CPU.
>> There is nothing hypothetical about it.  It really will go wrong when
>> a CPU gets offlined.
> Accepted, but it's still not like "completely broken".

I didn't mean it like that.  I mean "it has never had the property it
claimed", which is distinct from "the claim used to be true, but was
then accidentally regressed".

> I would even suppose the case wasn't considered when CPU offlining
> support was introduced, not when continue_hypercall_on_cpu() came
> into existence (which presumably is when the comment was written).
>
> Anyway - yes, I agree this is a fair solution to the issue at hand.
>
>>>> The current behaviour with this patch is to not cancel the
>>>> continuation, which I think is less bad, but still not great.
>>>> Thoughts?
>>> Well, that's a guest live lock then aiui.
>> It simply continues again.  It will livelock only if the hypercall
>> picks a bad cpu all the time.
> Oh, I see I was mislead by continue_hypercall_tasklet_handler() not
> updating info->cpu, not paying attention to it actually freeing info.
> Plus a crucial aspect looks to be that there are no "chained" uses of
> continue_hypercall_on_cpu() anymore (the microcode loading one being
> gone now) - afaict any such wouldn't guarantee forward progress with
> this new model (without recording somewhere which CPUs had been dealt
> with already).

I'd forgotten that we had that, but I can't say I'm sad to see the back
of it.  I recall at the time saying that it wasn't a clever move.

For now, I suggest that we ignore that case.
If and when a real use case appears, we can consider making
adjustments.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
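[Editorial note: the replay behaviour under discussion - continue_hypercall_on_cpu() returning -ERESTART so that the generic hypercall path creates a continuation, and the re-executed hypercall simply picks again if the target CPU went away - can be sketched as a self-contained C model.  This is not Xen source: every name below (tasklet_fn, do_example_op, hypercall_dispatch, the online[] array) is invented for illustration, and the errno values are stand-ins.]

```c
#include <stdio.h>

#define ERESTART 85   /* illustrative value, not Xen's definition */
#define EINVAL   22

static int online[4] = { 1, 1, 0, 1 };  /* pretend CPU 2 was offlined */
static int work_done;
static int chosen_cpu = 2;              /* first choice is the dead CPU */

/* Stands in for the tasklet body eventually running on 'cpu'. */
static void tasklet_fn(int cpu)
{
    printf("work ran on cpu %d\n", cpu);
    work_done = 1;
}

/* Model of the new contract: schedule work on 'cpu' and return
 * -ERESTART so the caller creates a continuation rather than
 * reporting success prematurely.  A vanished CPU also yields
 * -ERESTART (retry), not a hard -EINVAL failure. */
static int continue_hypercall_on_cpu(int cpu, void (*fn)(int))
{
    if (cpu < 0 || cpu >= 4)
        return -EINVAL;        /* hard error: bogus CPU number */
    if (!online[cpu])
        return -ERESTART;      /* CPU miss: replay, don't fail */
    fn(cpu);                   /* model the tasklet running later */
    return -ERESTART;          /* caller must still continue */
}

/* Model hypercall handler: on replay it either finds the work
 * complete or picks a (possibly different) CPU and tries again. */
static long do_example_op(void)
{
    int rc;

    if (work_done)
        return 0;              /* replayed call finds work finished */

    rc = continue_hypercall_on_cpu(chosen_cpu, tasklet_fn);
    if (rc == -ERESTART)
        chosen_cpu = 1;        /* pick a live CPU for the replay */

    return rc;
}

/* Models the generic continuation machinery: -ERESTART means
 * "re-execute the hypercall"; anything else is the final result. */
static long hypercall_dispatch(void)
{
    long rc;
    int attempts = 0;

    do {
        rc = do_example_op();
        attempts++;
    } while (rc == -ERESTART && attempts < 10);

    return rc;
}
```

Under this model the first attempt misses (CPU 2 is offline), the replay lands the work on CPU 1, and a final replay observes completion and returns 0 - matching the "it simply continues again" behaviour described above, including the livelock possibility if every attempt were to pick a bad CPU.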