
Re: [Xen-devel] [PATCH v4 07/10] xen/arm: optee: add support for arbitrary shared memory



Julien Grall writes:

>>> Some of the patches are using your EPAM e-mail address. Others are
>>> using your gmail address. Could you confirm this is expected?
>>
>> No, I'll make sure that all patches are authored by
>> <volodymyr_babchuk@xxxxxxxx>
>
> And your signed-off-by as well :).
Sure :)

>
> [...]
>
>>> I can't promise Xen will always be 4K only. So it would be better to
>>> make the number agnostic, or at least to write clearly on top of the
>>> definition that 4KB is assumed (maybe with a BUILD_BUG_ON(PAGE_SIZE
>>> != 4096) if not already in place).
>>
>> I can replace the define with something like (67108864 / PAGE_SIZE),
>> with an appropriate comment, of course.
>
> Well, none of your code is ready for a PAGE_SIZE other than 4KB. So
> the BUILD_BUG_ON(PAGE_SIZE != 4096) is probably more suitable.

Yes, makes sense.
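
So something like this, just to make the assumption explicit (a rough
sketch, not the final hunk; the limit value and the exact place for the
check are illustrative):

/* Per-guest limit on shared buffer pages: 16384 pages == 64 MiB,
 * assuming 4 KiB pages (enforced by the BUILD_BUG_ON below). */
#define MAX_TOTAL_SMH_BUF_PG    16384

static int optee_domain_init(struct domain *d)   /* or wherever the init hook ends up */
{
    /* The rest of the mediator code assumes 4 KiB pages throughout. */
    BUILD_BUG_ON(PAGE_SIZE != 4096);

    /* ... rest of the init path unchanged ... */
    return 0;
}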

> [...]
>
>>>>    +static struct optee_shm_buf *allocate_optee_shm_buf(struct optee_domain *ctx,
>>>> +                                                    uint64_t cookie,
>>>> +                                                    unsigned int pages_cnt,
>>>> +                                                    struct page_info *pg_list,
>>>> +                                                    unsigned int pg_list_order)
>>>> +{
>>>> +    struct optee_shm_buf *optee_shm_buf, *optee_shm_buf_tmp;
>>>> +    int old, new;
>>>> +    int err_code;
>>>> +
>>>> +    do
>>>> +    {
>>>> +        old = atomic_read(&ctx->optee_shm_buf_pages);
>>>> +        new = old + pages_cnt;
>>>> +        if ( new >= MAX_TOTAL_SMH_BUF_PG )
>>>
>>> Again, the limitation is in number of pages and quite high. What would
>>> prevent a guest from registering shared memory page by page? If nothing,
>>> then I think you can end up with interesting issues in Xen because of
>>> the growing list and memory used.
>>
>> OP-TEE will limit this on its side. In most cases Xen has a much bigger
>> heap than OP-TEE :-)
>
> The main problem is not the heap. The problem is the size of the list to 
> browse.
>>
>> Do you want me to limit this both in memory size and in number of buffers?
>
> I am not necessarily asking to put a limitation. What I ask is
> documenting what can happen. So we can take proper action in the
> future (such as when deciding whether to security support it).
>

Ah, okay, got it.
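
Just to put it in writing here as well: the limit check is the usual
lockless "reserve or fail" pattern, roughly like below (the helper name
and the -ENOMEM error path are illustrative; the field and constant
names are taken from the hunk above):

static int reserve_optee_shm_buf_pages(struct optee_domain *ctx,
                                       unsigned int pages_cnt)
{
    int old, new;

    do
    {
        old = atomic_read(&ctx->optee_shm_buf_pages);
        new = old + pages_cnt;
        if ( new >= MAX_TOTAL_SMH_BUF_PG )
            return -ENOMEM;
    /* Retry if another vCPU changed the counter in the meantime. */
    } while ( atomic_cmpxchg(&ctx->optee_shm_buf_pages, old, new) != old );

    return 0;
}

The free path would then atomic_sub() the same count, so a guest can only
grow the list up to MAX_TOTAL_SMH_BUF_PG pages at any point in time.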

> [...]
>
>>
>> [...]
>>>>    static int optee_relinquish_resources(struct domain *d)
>>>>    {
>>>>        struct optee_std_call *call, *call_tmp;
>>>>        struct shm_rpc *shm_rpc, *shm_rpc_tmp;
>>>> +    struct optee_shm_buf *optee_shm_buf, *optee_shm_buf_tmp;
>>>>        struct optee_domain *ctx = d->arch.tee;
>>>>          if ( !ctx )
>>>> @@ -381,6 +552,13 @@ static int optee_relinquish_resources(struct domain *d)
>>>>        list_for_each_entry_safe( shm_rpc, shm_rpc_tmp, &ctx->shm_rpc_list, list )
>>>>            free_shm_rpc(ctx, shm_rpc->cookie);
>>>>    +    if ( hypercall_preempt_check() )
>>>> +        return -ERESTART;
>>>
>>> Same question as the other hypercall_preempt_check().
>>
>> Excuse me, but what question? I looked through all your emails and can't
>> find one.
>
> It looks like I have by mistake trimmed my question on the other patch :/.
>
> Anyway, you should explain how you decide the placement of each
> hypercall_preempt_check(). For instance, if the lists are quite big,
> then you may want to add a preempt check in the loop.
I see your point there. A check in the loop would require additional
logic. Are there any best practices? E.g. how often should I check for
preemption? Every N iterations of the loop, where N is...

> The key point here is to document the choice, even if you just write that
> it is "random". This will help in the future if we need to revise the
> preemption choice.
I'd prefer to do a proper fix if it does not require too much
work. Otherwise I'll just document the current state.
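
E.g. would something like this be acceptable (sketch only; the interval
of 64 is an arbitrary placeholder, and the list/helper names are not
from the patch)?

    unsigned int count = 0;

    list_for_each_entry_safe( optee_shm_buf, optee_shm_buf_tmp,
                              &ctx->optee_shm_buf_list, list )
    {
        free_optee_shm_buf(ctx, optee_shm_buf->cookie);

        /*
         * Check for preemption every 64 freed buffers instead of only
         * once per list; the caller restarts the hypercall on -ERESTART.
         */
        if ( !(++count & 63) && hypercall_preempt_check() )
            return -ERESTART;
    }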



[...]

>>>> +        page = get_page_from_gfn(current->domain,
>>>> +                  paddr_to_pfn(pages_data_guest->pages_list[entries_on_page]),
>>>> +                  &p2m, P2M_ALLOC);
>>>
>>> The indentation is wrong here. But the problem is due to the long
>>> name. For instance, you have the word "page" 3 times on the same
>>> line. Is that necessary?
>> The code is quite complex. I want every variable to be as descriptive as
>> possible. In some cases that leads to issues like this :-)
>
> There are ways to simplify this code.
>
>  1) The OP-TEE code contains the following pattern a few times.
>
>      page = get_page_from_gfn(....)
>      if ( !page || p2mt != p2m_ram_rw )
>        ...
>
>    You can create a helper to encapsulate that. I think you have one case
>    where the p2mt check is different. So potentially you want to pass the
>    type to check as a parameter.
Yes, this is a good idea (rough sketch below).

>  2) You can probably move __map_domain_page(guest_page/xen_pages) within
> the loop and just deal with guest_page/xen_pages outside.
>
> With that and the renaming, the code will suddenly become a bit less
> complex.

Thank you for the recommendations. I'll try to do it this way.
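
For 1), I am thinking about something like this (the helper name is made
up, just to illustrate the idea):

static struct page_info *get_domain_ram_page(struct domain *d,
                                             unsigned long gfn,
                                             p2m_type_t expected)
{
    p2m_type_t p2mt;
    struct page_info *page = get_page_from_gfn(d, gfn, &p2mt, P2M_ALLOC);

    if ( !page )
        return NULL;

    /* Drop the reference taken by get_page_from_gfn() on a type mismatch. */
    if ( p2mt != expected )
    {
        put_page(page);
        return NULL;
    }

    return page;
}

Then the call sites shrink to something like

    page = get_domain_ram_page(current->domain,
                               paddr_to_pfn(pg_list[idx]), p2m_ram_rw);

(with pg_list/idx standing for whatever the shortened variable names end
up being).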

-- 
Best regards,
Volodymyr Babchuk
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

