Re: [Xen-devel] [PATCH v15 01/11] multicall: add no preemption ability between two calls
>>> On 09.09.14 at 14:44, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 09/09/14 12:51, Jan Beulich wrote:
>>>>> On 09.09.14 at 12:51, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 09/09/14 11:39, Jan Beulich wrote:
>>>>>>> On 09.09.14 at 08:43, <chao.p.peng@xxxxxxxxxxxxxxx> wrote:
>>>>> On Fri, Sep 05, 2014 at 11:46:20AM +0100, Andrew Cooper wrote:
>>>>>> On 05/09/14 09:37, Chao Peng wrote:
>>>>>>> Add a flag to indicate whether the execution can be preempted between
>>>>>>> two calls. If not specified, execution stays preemptable.
>>>>>>>
>>>>>>> Signed-off-by: Chao Peng <chao.p.peng@xxxxxxxxxxxxxxx>
>>>>>>> ---
>>>>>>>  xen/common/multicall.c   |    5 ++++-
>>>>>>>  xen/include/public/xen.h |    4 ++++
>>>>>>>  2 files changed, 8 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/xen/common/multicall.c b/xen/common/multicall.c
>>>>>>> index fa9d910..83b96eb 100644
>>>>>>> --- a/xen/common/multicall.c
>>>>>>> +++ b/xen/common/multicall.c
>>>>>>> @@ -40,6 +40,7 @@ do_multicall(
>>>>>>>      struct mc_state *mcs = &current->mc_state;
>>>>>>>      uint32_t     i;
>>>>>>>      int          rc = 0;
>>>>>>> +    bool_t       preemptable = 0;
>>>>>>>
>>>>>>>      if ( unlikely(__test_and_set_bit(_MCSF_in_multicall, &mcs->flags)) )
>>>>>>>      {
>>>>>>> @@ -52,7 +53,7 @@ do_multicall(
>>>>>>>
>>>>>>>      for ( i = 0; !rc && i < nr_calls; i++ )
>>>>>>>      {
>>>>>>> -        if ( i && hypercall_preempt_check() )
>>>>>>> +        if ( preemptable && hypercall_preempt_check() )
>>>>>>>              goto preempted;
>>>>>>>
>>>>>>>          if ( unlikely(__copy_from_guest(&mcs->call, call_list, 1)) )
>>>>>>> @@ -61,6 +62,8 @@ do_multicall(
>>>>>>>              break;
>>>>>>>          }
>>>>>>>
>>>>>>> +        preemptable = !(mcs->call.flags & MC_NO_PREEMPT);
>>>>>>> +
>>>>>> Please consider what would happen if a malicious guest set NO_PREEMPT on
>>>>>> every multicall entry.
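
To make that concern concrete, here is a minimal guest-side sketch. It
assumes the flags field and MC_NO_PREEMPT constant this patch introduces,
plus the usual HYPERVISOR_multicall guest wrapper; NR_ENTRIES and
hog_the_pcpu() are made up for illustration. With every entry flagged,
preemptable stays 0, the hypercall_preempt_check() gate above never fires,
and the guest keeps the pCPU for the entire batch:

    /*
     * Guest-side sketch of the attack: a large batch of cheap hypercalls,
     * each flagged MC_NO_PREEMPT so that do_multicall() never polls for
     * preemption between them.  The batch size is guest-chosen and nothing
     * in the interface bounds it.
     */
    #define NR_ENTRIES 1024               /* arbitrary; could be far larger */

    static multicall_entry_t batch[NR_ENTRIES];

    static void hog_the_pcpu(void)
    {
        unsigned int i;

        for ( i = 0; i < NR_ENTRIES; i++ )
        {
            batch[i].op      = __HYPERVISOR_xen_version;  /* any cheap op */
            batch[i].args[0] = XENVER_version;
            batch[i].flags   = MC_NO_PREEMPT; /* suppress the preempt check */
        }

        HYPERVISOR_multicall(batch, NR_ENTRIES);
    }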
>>>>> OK, I see. My direct purpose here is to support batch operations for
>>>>> XENPF_resource_op, added in the next patch. Recalling what Jan suggested
>>>>> in the v14 comments, we have 3 possible ways to support an
>>>>> XENPF_resource_op batch:
>>>>> 1) Add a field in xenpf_resource_op to indicate the iteration;
>>>>> 2) Fiddle with the multicall mechanism, as this patch does;
>>>>> 3) Add a brand new hypercall.
>>>>>
>>>>> So perhaps I will give up on option 2) unless I can see some improvement
>>>>> here. Option 3) seems too aggressive, so I'd go with option 1), though I
>>>>> don't quite like it either (mainly because it exposes the iteration to
>>>>> the user).
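
For illustration, a hedged sketch of one possible shape for option 1) -
the field names below are guesses for discussion, not the interface from
this series.  The point is that the batch, and with it the continuation
state, becomes explicit in the ABI:

    /*
     * Hypothetical shape of option 1): the platform op itself carries the
     * batch, so the hypervisor can preempt and continue between entries
     * without touching the multicall machinery.  All names illustrative.
     */
    struct xenpf_resource_entry {
        uint32_t cmd;        /* IN: e.g. read or write a resource */
        uint32_t rsvd;       /* padding; must be zero */
        uint64_t idx;        /* IN: resource index, e.g. an MSR address */
        uint64_t val;        /* IN/OUT: value to write / value read back */
    };
    typedef struct xenpf_resource_entry xenpf_resource_entry_t;
    DEFINE_XEN_GUEST_HANDLE(xenpf_resource_entry_t);

    struct xenpf_resource_op {
        uint32_t nr_entries; /* IN: number of entries in the batch */
        uint32_t cpu;        /* IN: CPU on which to perform the accesses */
        XEN_GUEST_HANDLE(xenpf_resource_entry_t) entries;
    };

With such a shape the hypervisor can preempt between entries on its own,
e.g. by advancing the entries handle and shrinking nr_entries in the
continuation, at the cost of exposing the batching to the caller.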
>>>> Then I suppose you didn't really understand Andrew's comment: I
>>>> don't think he was suggesting to drop the approach, but instead
>>>> to implement it properly (read: securely).
>>> That is certainly one part of it.
>>>
>>> However, there is the other open question (dropped from this context) of
>>> how to deal with a multicall which has NO_PREEMPT set but which itself
>>> preempts, and I don't have a good answer for this.
>> The pretty natural answer to this is - the specific handler knows
>> best what to do.
>
> Given our past history of retrofitting preemption into existing
> hypercalls, the multicaller has no idea whether the ops they have
> selected will preempt or not, and no way to guarantee that the
> behaviour will stay the same in future.
>
> The multicall dispatches to the regular hypercall handlers, which
> (can't?)
They can - current->mc_state.flags has _MCSF_in_multicall
set.
> and certainly shouldn't distinguish between a regular hypercall
> and multicall.
I agree with this. Yet it's a bug in the caller to request no
preemption at this layer for a constituent hypercall that can itself
preempt. But that's only a problem for the caller, not for the
hypervisor.
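
As a minimal sketch of the distinction being drawn here (the handler name
is made up; the flag and field come from the code quoted above): a handler
can detect that it is running as a multicall constituent, even though, per
the above, it normally shouldn't change behaviour based on it.

    /*
     * Sketch: a hypercall handler checking whether it was dispatched via
     * do_multicall(), which sets _MCSF_in_multicall in
     * current->mc_state.flags for the duration of the batch.
     */
    static long example_op(unsigned int cmd)
    {
        struct mc_state *mcs = &current->mc_state;

        if ( test_bit(_MCSF_in_multicall, &mcs->flags) )
        {
            /*
             * Running inside a multicall: a continuation created here is
             * recorded in mcs->call (flagged MCSF_call_preempted) rather
             * than directly in the guest's hypercall registers.
             */
        }

        /* ... normal processing, identical in both cases ... */
        return 0;
    }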
> As I have been looking through this code, I have noticed that the NDEBUG
> parameter corruption will break half of our existing preemption logic,
> which does use some of the parameters to hold preemption information.
Certainly not - call_list is being copied over a second time a few
lines after that NDEBUG section.
Jan
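
For readers without the source to hand, the following is a condensed
paraphrase (not verbatim) of the loop tail in xen/common/multicall.c of
this era, reusing the locals from the hunks quoted above; it shows the
second copy being referred to:

    do_multicall_call(&mcs->call);

    #ifndef NDEBUG
        {
            /*
             * Deliberately corrupt the guest's view of the entry: callers
             * may rely only on the result field.
             */
            struct multicall_entry corrupt;
            memset(&corrupt, 0xAA, sizeof(corrupt));
            (void)__copy_to_guest(call_list, &corrupt, 1);
        }
    #endif

        if ( unlikely(__copy_field_to_guest(call_list, &mcs->call, result)) )
            rc = -EFAULT;
        else if ( mcs->flags & MCSF_call_preempted )
        {
            /*
             * The second copy: the continuation state in mcs->call
             * overwrites the 0xAA pattern, so the preempted sub-call is
             * restarted with valid arguments.
             */
            if ( likely(!__copy_to_guest(call_list, &mcs->call, 1)) )
                goto preempted;
            rc = -EFAULT;
        }
        else
            guest_handle_add_offset(call_list, 1);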
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel