
Re: [Xen-devel] [PATCH v9 5/5] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.



>>> On 23.03.17 at 04:23, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:

> 
> On 3/22/2017 10:39 PM, Jan Beulich wrote:
>>>>> On 21.03.17 at 03:52, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> --- a/xen/arch/x86/hvm/dm.c
>>> +++ b/xen/arch/x86/hvm/dm.c
>>> @@ -385,16 +385,51 @@ static int dm_op(domid_t domid,
>>>   
>>>       case XEN_DMOP_map_mem_type_to_ioreq_server:
>>>       {
>>> -        const struct xen_dm_op_map_mem_type_to_ioreq_server *data =
>>> +        struct xen_dm_op_map_mem_type_to_ioreq_server *data =
>>>               &op.u.map_mem_type_to_ioreq_server;
>>> +        unsigned long first_gfn = data->opaque;
>>> +        unsigned long last_gfn;
>>> +
>>> +        const_op = false;
>>>   
>>>           rc = -EOPNOTSUPP;
>>>           /* Only support for HAP enabled hvm. */
>>>           if ( !hap_enabled(d) )
>>>               break;
>>>   
>>> -        rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
>>> -                                              data->type, data->flags);
>>> +        if ( first_gfn == 0 )
>>> +            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
>>> +                                                  data->type, data->flags);
>>> +        /*
>>> +         * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
>>> +         * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
>>> +         */
>>> +        if ( (first_gfn > 0) || (data->flags == 0 && rc == 0) )
>> Instead of putting the rc check on the right side, please do
>>
>>          if ( rc == 0 && (first_gfn > 0 || data->flags == 0) )
>>
>> That'll require setting rc to zero in an else to the previous if(),
>> but that's needed anyway afaics in order to not return
>> -EOPNOTSUPP once no further continuation is necessary.
>>
>> I further wonder why the if() here needs to look at first_gfn at
>> all - data->flags is supposed to remain at zero for continuations
>> (unless we have a misbehaving caller, in which case it'll harm
>> the guest only afaict). It seems to me, however, that this may
>> have been discussed once already, a long time ago. I'm sorry
>> for not remembering the outcome, if so.
> 
> We have not discussed this. Our previous discussion was about the if
> condition before calling hvm_map_mem_type_to_ioreq_server(). :-)
> 
> Maybe the above code should be changed to:
> @@ -400,11 +400,14 @@ static int dm_op(domid_t domid,
>           if ( first_gfn == 0 )
>               rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
>                                                     data->type, data->flags);
> +       else
> +           rc = 0;
> +
>           /*
>            * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
>            * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
>            */
> -        if ( (first_gfn > 0) || (data->flags == 0 && rc == 0) )
> +        if ( data->flags == 0 && rc == 0 )
>           {
>               struct p2m_domain *p2m = p2m_get_hostp2m(d);

Yes, that's what I was trying to hint at.
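
Putting both adjustments together, the hunk would then read roughly as
below (a sketch assembled from the fragments quoted above, not the final
patch; the body of the reset loop is elided just as it is in the quotes):

    case XEN_DMOP_map_mem_type_to_ioreq_server:
    {
        struct xen_dm_op_map_mem_type_to_ioreq_server *data =
            &op.u.map_mem_type_to_ioreq_server;
        unsigned long first_gfn = data->opaque;

        const_op = false;

        rc = -EOPNOTSUPP;
        /* Only support for HAP enabled hvm. */
        if ( !hap_enabled(d) )
            break;

        if ( first_gfn == 0 )
            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
                                                  data->type, data->flags);
        else
            rc = 0; /* Continuations must not fail with -EOPNOTSUPP. */

        /*
         * Iterate p2m table when an ioreq server unmaps from
         * p2m_ioreq_server, and reset the remaining p2m_ioreq_server
         * entries back to p2m_ram_rw.
         */
        if ( data->flags == 0 && rc == 0 )
        {
            struct p2m_domain *p2m = p2m_get_hostp2m(d);
            /* ... reset entries, saving progress in data->opaque ... */
        }
        break;
    }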

>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -1038,6 +1038,35 @@ void p2m_change_type_range(struct domain *d,
>>>       p2m_unlock(p2m);
>>>   }
>>>   
>>> +/* Synchronously modify the p2m type for a range of gfns from ot to nt. */
>>> +void p2m_finish_type_change(struct domain *d,
>>> +                            unsigned long first_gfn, unsigned long last_gfn,
>> I think we'd prefer new functions to properly use gfn_t.
> Sorry? I do not get it.
> Paul suggested we replace last_gfn with max_nr, which sounds reasonable
> to me. Do you mean something else?

Indeed - even with Paul's suggestion, first_gfn would remain as a
parameter, and it should be of type gfn_t.
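
Concretely, the declaration would then take the typesafe wrapper, along
the lines of the hypothetical sketch below (max_nr follows Paul's
suggestion; the trailing type parameters are assumed from the "from ot
to nt" comment above the quoted declaration, and gfn_x()/_gfn() convert
between gfn_t and raw unsigned long values):

    /* Hypothetical sketch only - not necessarily the final signature. */
    void p2m_finish_type_change(struct domain *d,
                                gfn_t first_gfn, unsigned long max_nr,
                                p2m_type_t ot, p2m_type_t nt);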

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

