
Re: [Xen-devel] [PATCH v02 1/7] arm: introduce remoteprocessor iommu module



On 07/22/2014 04:20 PM, Andrii Tseglytskyi wrote:
> Hi Julien,

Hi Andrii,

>>
>>> +    }                                                   \
>>> +    __res;                                              \
>>> +})
>>> +
>>> +static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
>>> +{
>>> +    if ( (addr >= mmu->mem_start) && (addr < (mmu->mem_start +
>>> mmu->mem_size)) )
>>> +        return 1;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static inline struct mmu_info *mmu_lookup(u32 addr)
>>> +{
>>> +    u32 i;
>>> +
>>> +    /* enumerate all registered MMU's and check is address in range */
>>> +    for ( i = 0; i < ARRAY_SIZE(mmu_list); i++ )
>>> +    {
>>> +        if ( mmu_check_mem_range(mmu_list[i], addr) )
>>> +            return mmu_list[i];
>>> +    }
>>> +
>>> +    return NULL;
>>> +}
>>> +
>>> +static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
>>> +{
>>> +    return mmu_for_each(mmu_check_mem_range, addr);
>>> +}
>>
>>
>> This solution allows any guest to access the MMU and therefore program it.
>> If you plan to use it for passthrough, you have to find a way to say that a
>> specific domain is able to use the MMU, maybe a hypercall. Otherwise this
>> is a security issue.
>>
>> IIRC I have already raised this concern on V1 :). It would be nice if you
>> resolved it ASAP, because I suspect it will need some rework in the way you
>> handle the MMU.
>>
> 
> Agree. I need to think about how to solve this. I don't think that a
> hypercall is what is needed here.

IMHO, only the toolstack knows whether a given domain will use the remote
processor or not. Adding a new DOMCTL looks like the best solution.
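
To make it more concrete, something along these lines is what I have in
mind (untested sketch; the "rproc_iommu_allowed" flag and the grant helper
are made-up names, the toolstack would set the flag through the new DOMCTL):

static int rproc_iommu_grant_access(struct domain *d, bool_t allow)
{
    /* "rproc_iommu_allowed" is a made-up field: it would live in
     * struct arch_domain and only be set from the new DOMCTL path. */
    d->arch.rproc_iommu_allowed = allow;
    return 0;
}

static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
{
    /* Refuse to emulate accesses from domains which were not granted
     * access to the remoteproc MMU by the toolstack. */
    if ( !v->domain->arch.rproc_iommu_allowed )
        return 0;

    return mmu_for_each(mmu_check_mem_range, addr);
}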

>> ioremap_* should only be used to map device memory. For guest memory you
>> have to use the copy_from_guest helper.
>>
> 
> OK. Just a small question - if I use copy_from_guest(), can I copy
> from a physical pointer? Here I have an address which points to exact
> physical memory.

copy_from_guest only copes with guest virtual addresses. You have to use
p2m_lookup and map_domain_page directly.

There is an ongoing discussion about it. See:
http://lists.xen.org/archives/html/xen-devel/2014-07/msg02096.html
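
For instance, reading a word of the guest's page table could look roughly
like this (untested sketch; the helper name is mine, and it assumes the
p2m_lookup(d, addr, NULL) prototype of the current tree):

static u32 mmu_read_guest_u32(struct domain *d, paddr_t gpa)
{
    paddr_t maddr;
    void *page;
    u32 val;

    /* Translate the guest physical (IPA) address to a machine address. */
    maddr = p2m_lookup(d, gpa, NULL);
    if ( maddr == INVALID_PADDR )
        return 0;

    /* Temporarily map the machine frame and read the word. */
    page = map_domain_page(maddr >> PAGE_SHIFT);
    val = *(u32 *)((char *)page + (gpa & ~PAGE_MASK));
    unmap_domain_page(page);

    return val;
}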

>>
>>> +    /* dom0 should not access remoteproc MMU */
>>> +    if ( 0 == current->domain->domain_id )
>>> +        return 1;
>>
>>
>> Why this restriction?
>>
> 
> We agreed that the remoteproc will be handled by a domU, which doesn't have
> a 1:1 memory mapping.
> If the remoteproc belongs to dom0, translation is not needed - its MMU will
> be configured with machine addresses.

In this case, the I/O ops are not registered for dom0, so we should never
reach this case.
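
In other words, rather than special-casing domain_id 0 inside the handler,
I would only register the MMIO ops for the domU owning the remote processor
and turn the check into an assertion, e.g. (illustrative only, the real
handler body is elsewhere in your patch):

static int mmu_mmio_read(struct vcpu *v, mmio_info_t *info)
{
    struct mmu_info *mmu;

    /* The handlers are never registered for dom0, so it can't get here. */
    ASSERT( !is_hardware_domain(v->domain) );

    mmu = mmu_lookup(info->gpa);
    if ( !mmu )
        return 0;

    /* ... the actual register emulation would go here ... */
    return 1;
}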

>>
>> So, the iohandler is usually called on the current VCPU, so there is no need
>> to worry about it. Furthermore, I would pass the vcpu/domain as an argument
>> to the next function.
>>
> 
> Can be. In the first revision of these patches the domain was passed as a
> parameter. But that led to one more additional parameter in all
> functions below this call. I found that I can reduce the arg list if I use
> the current->domain pointer.

Ok.

>>> +static int mmu_init_all(void)
>>> +{
>>> +    int res;
>>> +
>>> +    res = mmu_for_each(mmu_init, 0);
>>> +    if ( res )
>>> +    {
>>> +        printk("%s error during init %d\n", __func__, res);
>>> +        return res;
>>> +    }
>>
>>
>> Hmmm... do_initcalls doesn't check the return value. How does your code
>> behave when one of the MMUs has not been initialized?
>>
>> I think do_initcalls & co should check the return value, but as it's common
>> code I don't know whether x86 respects the convention of returning 0 on
>> success. Ian, Stefano, any thoughts?
>>
> 
> I would like to make this specific to ARM only if possible.

It looks like Ian and Stefano didn't answer this question. If we can't
check the return value of the init call in do_initcalls, I would replace
the error return with a panic.
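
Something like the following (sketch only) would at least avoid silently
running with a partially initialised MMU list:

static int mmu_init_all(void)
{
    int res;

    res = mmu_for_each(mmu_init, 0);
    if ( res )
        /* do_initcalls ignores our return value, so fail loudly here. */
        panic("%s: error during remoteproc MMU init (%d)", __func__, res);

    return 0;
}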

Regards,

-- 
Julien Grall
