
Re: [Xen-devel] Unshared IOMMU issues



Hi, all.

So, as I understand it, we have two possible approaches for populating
the IOMMU page table:
1.  When the first device is being assigned: retrieve all mappings
from the stage-2 table.
2.  When the domain is being created.

I would prefer the second variant.
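For illustration, a very rough sketch of what the second variant could
look like (arch_update_stage2() is a hypothetical stand-in for the
existing ARM p2m_set_entry() logic; iommu_map_page()/iommu_unmap_page()
follow the current common IOMMU interface; superpages omitted for
brevity):

    /* Mirror every stage-2 change into the unshared IOMMU page table
     * from the moment the domain is created, so no bulk replay is
     * ever needed. */
    int p2m_update_and_mirror(struct domain *d, gfn_t gfn, mfn_t mfn,
                              unsigned int order, bool present)
    {
        /* Update stage-2 as we do today. */
        int rc = arch_update_stage2(d, gfn, mfn, order, present);

        if ( rc )
            return rc;

        /* Mirror the change into the IOMMU page table. */
        return present
               ? iommu_map_page(d, gfn_x(gfn), mfn_x(mfn),
                                IOMMUF_readable | IOMMUF_writable)
               : iommu_unmap_page(d, gfn_x(gfn));
    }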

Retrieving all mappings from the P2M might take *some* time. This time
will depend on how many mappings the stage-2 table has
and how these mappings have to be applied to the IOMMU table.
Theoretically, the "unshared IOMMU" might support 4K pages only and
might require a cache invalidation after installing each entry.
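A sketch of why the first variant worries me, replaying the whole P2M
through such a 4K-only IOMMU (for_each_p2m_mapping() and
iommu_flush_entry() are hypothetical names):

    /* Replay every present stage-2 mapping into the IOMMU page table
     * at first device assignment. */
    static int replay_p2m_to_iommu(struct domain *d)
    {
        gfn_t gfn;
        mfn_t mfn;
        unsigned int order;

        for_each_p2m_mapping ( d, gfn, mfn, order )
        {
            unsigned long i;

            /* A 4K-only IOMMU cannot reuse P2M superpages, so each
             * one fans out into 2^order individual entries... */
            for ( i = 0; i < (1UL << order); i++ )
            {
                int rc = iommu_map_page(d, gfn_x(gfn) + i,
                                        mfn_x(mfn) + i,
                                        IOMMUF_readable | IOMMUF_writable);

                if ( rc )
                    return rc;

                /* ... and some IOMMUs want cache maintenance after
                 * every single entry. */
                iommu_flush_entry(d, gfn_x(gfn) + i);
            }
        }

        return 0;
    }

So the cost is proportional to the number of 4K pages in the P2M, not
to the number of P2M entries.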

Thank you.

On Fri, Feb 17, 2017 at 5:25 PM, Julien Grall <julien.grall@xxxxxxx> wrote:
> Hi Jan,
>
> On 17/02/17 07:43, Jan Beulich wrote:
>>
>> Well, in the end it's your call, but I don't think this is an acceptable
>> model in the general case. Quite often - see the Viridian support in
>> x86 Xen for a very good example - distros (XenServer in this case)
>> enable functionality even if a guest (Linux in the case here) would
>> never really want to make use of it. Also you need to keep in mind
>> that for an admin it is better to not have to take care of all
>> eventualities before first starting a (perhaps long running) guest.
>> Granted we have a number of other limitations of that same kind,
>> but if such can be avoided, I'd always prefer to do so.
>
>
> To be fair, on the server side, the SBSA [1] mandates the IOMMU to be
> compatible with the ARM SMMU spec. This allows us to share page tables
> with the SMMU by default. Today the driver does not support unsharing,
> and I don't yet know of any use case requiring them to be unshared.
>
> On the embedded side, I would be surprised if they used PCI hotplug. So
> populating the IOMMU page table at domain creation is not a big concern.
>
> As this would be an interface between Xen and the toolstack, we could
> revisit it later if we have platforms where page tables are not shared
> and hotplug is being used.
>
>>
>>>>>>> 2. The d->page_list seems to contain only domain RAM (not 100% sure).
>>>>>>> Where can I get the other regions (MMIOs, etc.)?
>>>>>>
>>>>>>
>>>>>> These necessarily are being tracked for the domain, so you need to
>>>>>> take them from wherever they're stored on ARM.
>>>>>
>>>>>
>>>>> Is there any reason why you don't seem to have such code on x86? AFAICT
>>>>> only RAM is currently mapped.
>>>>
>>>>
>>>> Well, no-one cared so far, I would guess. Even runtime mappings of
>>>> MMIO space were made to work properly only very recently (by Roger).
>>>>
>>>>> Regarding ARM, we know whether a domain is allowed to access a certain
>>>>> range of MMIO but, similarly to the above, we don't have the MFN ->
>>>>> GFN conversion for them. However, in this case we would not be able
>>>>> to use an M2P, as the same MFN may be mapped in multiple domains.
>>>>
>>>>
>>>> Mapped by multiple domains? If one DomU and Dom0, I can see
>>>> this as possible, but not a requirement. If multiple DomU-s, I have
>>>> to raise the question of security.
>>>
>>>
>>> The GICv2 interrupt controller supports virtualization and allows the
>>> guest to manage interrupts as if it were running on bare metal. There
>>> is a per-CPU interface that is mapped into every domain. Obviously,
>>> the state is saved/restored on vCPU context switch.
>>
>>
>> Now that looks like a very special case, which the code doing the
>> mapping could (and should) be aware of. Quite likely this area
>> even gets mapped at a predetermined GFN (range) for guests
>> (in which case no lookup is necessary at all)?
>
>
> Yes we can in this case.
>
> Cheers,
>
> [1]
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0029/index.html
>
> --
> Julien Grall
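
P.S. Regarding "where can I get other regions": on ARM the MMIO ranges
a domain may access are tracked in the d->iomem_caps rangeset, so they
can at least be enumerated with the existing rangeset_report_ranges()
helper. Note the rangeset holds MFNs, so this does not by itself solve
the MFN -> GFN problem discussed above (iommu_map_mmio_range() below is
a hypothetical callback):

    static int map_mmio_cb(unsigned long s, unsigned long e, void *ctxt)
    {
        struct domain *d = ctxt;

        /* s..e are MFNs; only for a 1:1 mapped domain (e.g. dom0)
         * can we assume GFN == MFN here. */
        return iommu_map_mmio_range(d, s, e);
    }

    rc = rangeset_report_ranges(d->iomem_caps, 0, ~0UL, map_mmio_cb, d);

And, as Jan suggests, the GICv2 CPU interface is indeed mapped at a
known guest address (GUEST_GICC_BASE for guests), so that special case
needs no lookup at all.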



-- 
Regards,

Oleksandr Tyshchenko
