
Re: [XEN RFC PATCH v5 3/5] xen/public: Introduce PV-IOMMU hypercall interface



Hello Jason,

On 30/01/2025 at 21:17, Jason Andryuk wrote:
> Hi Teddy,
>
> Thanks for working on this.  I'm curious about your plans for this:
>
> On 2025-01-21 11:13, Teddy Astie wrote:
>> +/**
>> + * IOMMU_alloc_nested
>> + * Create a nested IOMMU context (needs IOMMUCAP_nested).
>> + *
>> + * This context uses a platform-specific page table from domain
>> + * address space specified in pgtable_gfn and uses it for nested
>> + * translations.
>> + *
>> + * Explicit flushes need to be submitted with IOMMU_flush_nested on
>> + * modification of the nested page table to ensure coherency between
>> + * the IOTLB and the nested page table.
>> + *
>> + * This context can be destroyed using IOMMU_free_context.
>> + * This context cannot be modified using map_pages, unmap_pages.
>> + */
>> +struct pv_iommu_alloc_nested {
>> +    /* OUT: allocated IOMMU context number */
>> +    uint16_t ctx_no;
>> +
>> +    /* IN: guest frame number of the nested page table */
>> +    uint64_aligned_t pgtable_gfn;
>> +
>> +    /* IN: nested mode flags */
>> +    uint64_aligned_t nested_flags;
>> +};
>> +typedef struct pv_iommu_alloc_nested pv_iommu_alloc_nested_t;
>> +DEFINE_XEN_GUEST_HANDLE(pv_iommu_alloc_nested_t);
>
> Is this command intended to be used for GVA -> GPA translation?  Would
> you need some way to associate it with another IOMMU context for GPA ->
> HPA translation?
>

It's intended to be used for accelerating IOMMU emulation for the guest.
In this case, the GPA->HPA translation is the domain's p2m page table
(or something similar), so that the translations made through this
nested context are meaningful from the guest's point of view.

The idea is to use the "remote_op" sub-command to let the device model
(e.g. QEMU) alter the IOMMU behavior for the affected domain (e.g. by
reattaching devices, creating new IOMMU contexts, ...).

I think it can also be used for the virtio-iommu page table.
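
To make the intended flow more concrete, here is a rough sketch of how a
device model could fill the structure quoted above to create a nested
context. Only the struct fields come from the proposed interface; the
hypercall wrapper and sub-command constant are placeholders, not the
actual plumbing of this series.

#include <stdint.h>

struct pv_iommu_alloc_nested {
    uint16_t ctx_no;       /* OUT: allocated IOMMU context number */
    uint64_t pgtable_gfn;  /* IN: GFN of the guest's nested page table root */
    uint64_t nested_flags; /* IN: nested mode flags */
};

/* Placeholder for whatever mechanism actually issues PV-IOMMU sub-commands. */
int issue_pv_iommu_op(unsigned int subcmd, void *arg);
#define PV_IOMMU_ALLOC_NESTED_PLACEHOLDER 0 /* hypothetical sub-command id */

static int alloc_nested_ctx(uint64_t pgtable_gfn, uint64_t flags,
                            uint16_t *out_ctx)
{
    struct pv_iommu_alloc_nested op = {
        .pgtable_gfn  = pgtable_gfn,  /* guest IOMMU page-table root */
        .nested_flags = flags,
    };
    int rc = issue_pv_iommu_op(PV_IOMMU_ALLOC_NESTED_PLACEHOLDER, &op);

    if ( !rc )
        *out_ctx = op.ctx_no;         /* Xen fills in the new context id */
    return rc;
}

On modification of the guest-owned page table, the device model would
then issue IOMMU_flush_nested, as the header comment above requires.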

> Maybe more broadly, what are your goals for enabling PV-IOMMU?  The
> examples on your blog post cover a domain restricting device access to
> specific portions of the GPA space.  Are you also interested in GVA ->
> GPA?  Does VFIO require GVA -> GPA?
>

The current way of enabling and using PV-IOMMU is through the dedicated
Linux IOMMU driver [1], which implements Linux's IOMMU subsystem on top
of this proposed interface.
In practice, in the PV case, this replaces xen-swiotlb with dma-iommu
and does all DMA through the paravirtualized IOMMU (e.g. by creating
DMA domains and moving devices into them).
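
For reference, the driver-facing side stays the regular Linux DMA API;
what changes is what backs it. A minimal sketch, assuming a device
already attached to a DMA domain by the PV-IOMMU driver ("my_dev" and
"buf" are placeholders):

#include <linux/dma-mapping.h>
#include <linux/errno.h>

/*
 * Ordinary DMA API usage, unchanged by this series.  With the PV-IOMMU
 * driver bound, dma-iommu services the call by allocating an IOVA and
 * mapping it through the paravirtualized IOMMU, instead of bouncing the
 * buffer through xen-swiotlb.
 */
static int my_map_buffer(struct device *my_dev, void *buf, size_t len,
                         dma_addr_t *out)
{
    dma_addr_t handle = dma_map_single(my_dev, buf, len, DMA_TO_DEVICE);

    if (dma_mapping_error(my_dev, handle))
        return -ENOMEM;

    *out = handle; /* device-visible IOVA inside the driver's DMA domain */
    return 0;
}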

Regarding GVA->GPA, this is what this interface provides, and
restricting device access to memory is one way of using it. It is a
requirement for VFIO (as it is for the Linux IOMMU subsystem in
general), and I managed to make SPDK and DPDK work in Dom0 using VFIO
and these patches [2].
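
For context, the VFIO side that SPDK/DPDK rely on is the standard type1
mapping ioctl below; with these patches the resulting mapping ends up
in a PV-IOMMU context. This is generic VFIO userspace code (assuming
"container_fd" is an already set-up VFIO container, error handling
trimmed), not something specific to the series:

#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <stdint.h>
#include <string.h>

/* Map a process-virtual buffer (GVA) to a device IOVA through VFIO type1. */
static int vfio_map_dma(int container_fd, void *vaddr, uint64_t iova,
                        uint64_t size)
{
    struct vfio_iommu_type1_dma_map map;

    memset(&map, 0, sizeof(map));
    map.argsz = sizeof(map);
    map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    map.vaddr = (uintptr_t)vaddr; /* process virtual address, pinned by VFIO */
    map.iova  = iova;             /* address the device will use for DMA */
    map.size  = size;

    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}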

[1] Originally
https://lists.xen.org/archives/html/xen-devel/2024-06/msg01145.html,
but that patch is quite old and probably doesn't work anymore with this
new Xen patch series.
I have an updated patch in my xen-pviommu branch:
https://gitlab.com/xen-project/people/tsnake41/linux/-/commit/125d67a09fa9f66a32f9175641cfccca22dbbdb6

[2] You also need to set "vfio_iommu_type1.allow_unsafe_interrupts=1" to
make VFIO work for now.

> And, sorry to bike shed, but ctx_no reads like "Context No" to me.  I
> think ctxid/ctx_id might be clearer.  Others probably have their own
> opinions :)
>

ctxid/ctx_id would make more sense (we already have names like domid).

> Thanks,
> Jason

Thanks
Teddy


Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech




 

