
Re: [Xen-devel] Unshared IOMMU issues



Hi Jan,

On 16/02/17 16:34, Jan Beulich wrote:
On 16.02.17 at 17:11, <julien.grall@xxxxxxx> wrote:
On 16/02/17 15:52, Jan Beulich wrote:
On 16.02.17 at 16:02, <olekstysh@xxxxxxxxx> wrote:
On Thu, Feb 16, 2017 at 11:36 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
On 15.02.17 at 18:43, <olekstysh@xxxxxxxxx> wrote:
1.
I need:
Allow the P2M core on ARM to update the IOMMU mapping from the first "p2m_set_entry".
I do:
I explicitly set the need_iommu flag for *every* guest domain during
iommu_domain_init() on ARM in case the page table is not shared.
At that moment I have no knowledge of whether any device will be
assigned to this domain or not. I just want to receive all mapping
updates from the P2M code. The P2M will update the IOMMU mapping only
when need_iommu is set and the page table is not shared.
I have doubts:
Is it correct to just force the need_iommu flag?
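
(Purely for illustration, a minimal sketch of the flag-forcing idea
described above; the structure and helper names are invented stand-ins,
not the real Xen interfaces.)

#include <stdbool.h>

/* Stand-in structure; the real struct domain / arch fields differ. */
struct demo_domain {
    bool iommu_pt_shared;   /* IOMMU reuses the stage-2 (P2M) page tables */
    bool need_iommu;        /* when set, P2M updates are mirrored to the IOMMU */
};

/* Called from a (hypothetical) per-arch IOMMU domain init. */
static void demo_iommu_domain_init(struct demo_domain *d)
{
    /* Force mirroring for every domain when tables are not shared,
     * so the first p2m_set_entry() already updates the IOMMU. */
    if ( !d->iommu_pt_shared )
        d->need_iommu = true;
}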

No, I don't think so. This is a waste of a measurable amount of
resources when page tables aren't shared.

Or maybe another flag should be introduced?

Not sure what you think of here. Where's the problem with building
IOMMU page tables at the time the first device gets assigned, just
like x86 does?
OK, I have already had a look at arch_iommu_populate_page_table() for x86.
I don't know at the moment how this solution can help me.
There are at least two points that prevent me from doing a similar thing.
1. To create an IOMMU mapping I need both the mfn and the gfn (+ flags).
I am only able to get the mfn. How can I find the corresponding gfn?

As the x86 one shows, via mfn_to_gmfn(). If ARM doesn't have
this, perhaps it needs to gain it?

Looking at the x86 implementation, mfn_to_gmfn uses a table for that,
indexed by the MFN. This requires virtual address space, which is
already scarce on ARM32, and also uses physical memory.
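
(Roughly, and only as an illustration with invented names, such a
table amounts to the following; the cost is one slot per machine frame.)

#include <stdint.h>
#include <stdlib.h>

/* Illustrative M2P-style table: one GFN slot per machine frame,
 * indexed by MFN.  This is what costs VA space and RAM. */
static uint32_t *demo_m2p;

static int demo_m2p_init(unsigned long max_mfn)
{
    demo_m2p = calloc(max_mfn, sizeof(*demo_m2p));
    return demo_m2p ? 0 : -1;
}

static uint32_t demo_mfn_to_gfn(unsigned long mfn)
{
    return demo_m2p[mfn];   /* O(1) lookup, but the table must span all MFNs */
}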

I am not convinced this is the right thing to do on ARM, as the only
user so far will be the IOMMU code.

Another solution would be to go through the stage-2 page table and
replicate all the mappings.

That's certainly an option, if you want to save the memory (and
VA space on ARM32). It only makes the x86 model of establishing
the mappings slightly more compute intensive.

I made a quick calculation: ARM32 supports up to 40-bit PA and IPA (i.e. guest address), which means 28 bits of MFN/GFN. The GFN would have to be stored in 32 bits for alignment, so we would need 2^28 * 4 bytes = 1 GiB of virtual address space and potentially physical memory.
We don't have 1 GiB of VA space free on 32-bit right now.

ARM64 currently supports up to 48-bit PA and 48-bit IPA, which means 36 bits of MFN/GFN. The GFN would have to be stored in 64 bits for alignment, so we would need 2^36 * 8 bytes = 512 GiB of virtual address space and potentially physical memory. While virtual address space is not a problem, the memory is a problem for embedded platforms. We want Xen to be as lean as possible.
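
(As a quick sanity check of those numbers, assuming 4 KiB pages; this
little program is only here to show the arithmetic.)

#include <stdio.h>

int main(void)
{
    /* 40-bit PA, 4 KiB pages -> 28-bit MFN; one 32-bit GFN slot per MFN. */
    unsigned long long arm32_bytes = (1ULL << (40 - 12)) * 4;
    /* 48-bit PA, 4 KiB pages -> 36-bit MFN; one 64-bit GFN slot per MFN. */
    unsigned long long arm64_bytes = (1ULL << (48 - 12)) * 8;

    printf("ARM32 table: %llu GiB\n", arm32_bytes >> 30);  /* 1   */
    printf("ARM64 table: %llu GiB\n", arm64_bytes >> 30);  /* 512 */
    return 0;
}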

I thought a bit more about the advantage of creating the IOMMU page tables later on.

For devices assigned at domain creation, we know that devices will be assigned, so we could let Xen populate the IOMMU page tables while allocating the memory for the domain.

For hotplugged devices, this would only happen for PCI, as integrated devices cannot be hotplugged. As we move towards emulating a root complex in Xen rather than the PV approach, you would need the root complex to be instantiated when the domain is created (unless we want to hotplug it too?). IMHO, if you assign a root complex it is likely that you will want to assign a PCI device afterwards. So allocating the page tables at that time sounds sensible.

This would avoid walking the stage-2 page tables at runtime.
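
(For reference, the runtime walk that would be avoided looks roughly
like the sketch below; the helpers are invented stand-ins, not the
real Xen P2M/IOMMU interfaces.)

#include <stdbool.h>

typedef unsigned long demo_gfn_t;
typedef unsigned long demo_mfn_t;

/* Invented stand-ins for a stage-2 lookup and an IOMMU map operation. */
bool demo_p2m_lookup(demo_gfn_t gfn, demo_mfn_t *mfn, unsigned int *flags);
int  demo_iommu_map(demo_gfn_t gfn, demo_mfn_t mfn, unsigned int flags);

/* Replicate every existing stage-2 mapping into the (separate) IOMMU
 * page tables, e.g. when the first device gets assigned. */
static int demo_populate_iommu_from_p2m(demo_gfn_t max_gfn)
{
    demo_gfn_t gfn;

    for ( gfn = 0; gfn <= max_gfn; gfn++ )
    {
        demo_mfn_t mfn;
        unsigned int flags;
        int rc;

        if ( !demo_p2m_lookup(gfn, &mfn, &flags) )
            continue;               /* hole in the guest physical address space */

        rc = demo_iommu_map(gfn, mfn, flags);
        if ( rc )
            return rc;              /* real code would also unwind prior mappings */
    }

    return 0;
}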

Any opinions?

2. d->page_list seems to only contain domain RAM (not 100% sure).
Where can I get the other regions (MMIOs, etc.)?

These necessarily are being tracked for the domain, so you need to
take them from wherever they're stored on ARM.

Is there any reason why you don't seem to have such code on x86? AFAICT
only RAM is currently mapped.

Well, no-one cared so far, I would guess. Even runtime mappings of
MMIO space were made to work properly only very recently (by Roger).

Regarding ARM, we know whether a domain is allowed to access a certain
range of MMIO, but, similarly to the above, we don't have the MFN ->
GFN conversion for them. However, in this case we would not be able to
use an M2P, as the same MFN may be mapped in multiple domains.

Mapped by multiple domains? If one DomU and Dom0, I can see
this as possible, but not a requirement. If multiple DomU-s I have
to raise the question of security.

The interrupt controller GICv2 supports virtualization and allows the guest to manage interrupts as if it were running on bare metal. There is a per-CPU interface that is mapped into every domain. Obviously, the state is saved/restored during vCPU context switches.

Cheers,

--
Julien Grall
