
Re: [Xen-devel] [PATCH RFC] tools/libxc, xen/x86: Added xc_set_mem_access_sparse()



>>> On 29.08.16 at 18:42, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> On 08/29/16 19:20, Jan Beulich wrote:
>>>>> On 29.08.16 at 18:02, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> On 08/29/16 18:42, Jan Beulich wrote:
>>>>>>> On 29.08.16 at 17:29, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>>>> On 08/26/2016 11:14 AM, Jan Beulich wrote:
>>>>>>>>> On 26.08.16 at 09:40, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>>>>>> On 08/26/16 10:18, Jan Beulich wrote:
>>>>>>>>>>> On 26.08.16 at 08:11, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>>>>>>>> @@ -76,6 +76,17 @@ int mem_access_memop(unsigned long cmd,
>>>>>>>>>          }
>>>>>>>>>          break;
>>>>>>>>>  
>>>>>>>>> +    case XENMEM_access_op_set_access_sparse:
>>>>>>>>> +    {
>>>>>>>>> +        xen_pfn_t *arr = xmalloc_bytes(sizeof(xen_pfn_t) * mao.nr);
>>>>>>>>> +
>>>>>>>>> +        // copy_from_guest(arr, mao.pfn_list, mao.nr);
>>>>>>>>
>>>>>>>> What is this (wrongly C++ style) comment about? I think this really
>>>>>>>> wasn't meant to be a comment, so RFC or not - how do things work
>>>>>>>> with this commented out? And where is the error checking for the
>>>>>>>> allocation (which btw should be xmalloc_array(), but the need for
>>>>>>>> an allocation here is questionable - the called function would do
>>>>>>>> better to retrieve the GFNs one by one).
>>>>>>>
>>>>>>> They don't work - that comment was left in by mistake (the line
>>>>>>> should not have been commented out). I first wrote the patch on Xen
>>>>>>> 4.6, where there was no CHECK_mem_access_op, so when I first saw the
>>>>>>> compile errors I commented the line out while trying to figure out
>>>>>>> what went wrong, and then accidentally left it that way.
>>>>>>>
>>>>>>> Indeed, there should also be a check for allocation failure.
>>>>>>>
>>>>>>> Do you mean that I would do better to just copy_from_guest(&gfn,
>>>>>>> mao.pfn_list + index, 1) in a for loop that sets mem_access 
>>>>>>> restrictions?
>>>>>>
>>>>>> Yes, albeit it is copy_from_guest_offset(&gfn, mao.pfn_list, index, 1).
>>>>>
>>>>> Avoiding translation would then, to the best of my understanding (and
>>>>> as tested with the latest version of the patch I'm working on), require
>>>>> replacing copy_from_guest() with copy_from_user(), which does not have a
>>>>> copy_from_user_offset() counterpart.
>>>>
>>>> I don't follow - where did you see copy_from_user() used in such a
>>>> case? If you go through the existing examples, you'll find that with
>>>> some #define-s (re-vectoring to copy_from_compat_offset()) this
>>>> can easily be taken care of.
>>>
>>> I was looking at xc_mem_paging_memop(), where the buffer parameter is
>>> passed via mpo.buffer, a uint64_aligned_t - which is what I thought you
>>> meant. On the hypervisor side, it is copied via copy_from_user().
>>>
>>> In the interest of preventing further misunderstanding, could you please
>>> point out a specific example you have in mind?
>> 
>> Actually, having looked again more closely, I'm not sure you need
>> to use the compat version of the copying routine; you may need a
>> thin handler in compat/memory.c. Before I search further
>> for a suitable example, would you mind pointing out what it is that
>> does not work with copy_from_guest_offset()?
> 
> With this change:
> 
> @@ -452,6 +453,11 @@ struct xen_mem_access_op {
>       * ~0ull is used to set and get the default access for pages
>       */
>      uint64_aligned_t pfn;
> +    /*
> +     * List of pfns to set access for
> +     * Used only with XENMEM_access_op_set_access_multi
> +     */
> +    uint64_aligned_t pfn_list;
>  };

Well, this can't possibly be right. You absolutely need to retain
the handle here that you had in the patch; to avoid the need for
compat translation you will want to make this a 64-bit handle,
though.

Jan
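
For illustration, a minimal sketch of the 64-bit handle suggested above, as
a pfn_list field in struct xen_mem_access_op. Using uint64 as the element
type (so the layout is identical for 32-bit and 64-bit guests) and assuming
XEN_GUEST_HANDLE_64() is usable in this header are my assumptions, not
anything settled in this thread:

    struct xen_mem_access_op {
        /* ... existing fields (op, access, domid, nr, pfn) ... */
        /*
         * List of pfns to set access for
         * Used only with XENMEM_access_op_set_access_sparse
         */
        XEN_GUEST_HANDLE_64(uint64) pfn_list;
    };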


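And, purely as a sketch of the "retrieve the GFNs one by one" approach
discussed earlier in the thread (no intermediate xmalloc needed), with a
hypothetical helper set_one_mem_access() standing in for whatever
p2m/mem_access call the real patch would make:

    case XENMEM_access_op_set_access_sparse:
    {
        unsigned long i;

        for ( i = 0; i < mao.nr; i++ )
        {
            uint64_t gfn;

            /* Fetch one GFN at a time from the guest-provided list. */
            if ( copy_from_guest_offset(&gfn, mao.pfn_list, i, 1) )
            {
                rc = -EFAULT;
                break;
            }

            /* Hypothetical helper, not an existing Xen function. */
            rc = set_one_mem_access(d, gfn, mao.access);
            if ( rc )
                break;
        }
        break;
    }

(A real implementation would presumably also need hypercall continuation /
preemption handling for large mao.nr, as the existing set-access op in
mem_access_memop() does.)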