Re: Proposal for virtual IOMMU binding b/w vIOMMU and passthrough devices
On 26/10/2022 14:17, Rahul Singh wrote:
> Hi All,

Hi Rahul,

Below, you said that each IOMMU may have a different ID space. So
shouldn't we expose one vIOMMU per pIOMMU? If not, how do you expect
the user to specify the mapping?

> At Arm, we started to implement a POC to support two levels of page
> tables/nested translation in SMMUv3. To support nested translation
> for a guest OS, Xen needs to expose a virtual IOMMU. If we
> passthrough a device that is behind an IOMMU to a guest with virtual
> IOMMU enabled, there is a need to add an IOMMU binding for the device
> in the passthrough node as per [1]. This email is to get an agreement
> on how to add the IOMMU binding for the guest OS.
>
> Before I explain how to add the IOMMU binding, let me give a brief
> overview of how we will add support for virtual IOMMU on Arm. In
> order to implement virtual IOMMU, Xen needs SMMUv3 nested translation
> support. SMMUv3 hardware supports two stages of translation, and each
> stage can be independently enabled. An incoming address is logically
> translated from VA to IPA in stage 1; the IPA is then input to
> stage 2, which translates the IPA to the output PA.
>
> Stage 1 is intended to be used by a software entity (e.g. a guest OS)
> to provide isolation or translation to buffers within the entity, for
> example DMA isolation within an OS. Stage 2 is intended to be
> available in systems supporting the Virtualization Extensions and is
> intended to virtualize device DMA to guest VM address spaces. When
> both stage 1 and stage 2 are enabled, the translation configuration
> is called nesting.
>
> Stage 1 translation support is required to provide isolation between
> different devices within the guest OS. Xen already supports stage 2
> translation, but there is no support for stage 1 translation for
> guests. We will add support for guests to configure stage 1
> translation via the virtual IOMMU. Xen will emulate the SMMU hardware
> and expose the virtual SMMU to the guest. The guest can use the
> native SMMU driver to configure stage 1 translation. When the guest
> configures the SMMU for stage 1, Xen will trap the accesses and
> configure the hardware accordingly.
>
> Now back to the question of how we can add the IOMMU binding between
> the virtual IOMMU and the master devices so that guests can configure
> the IOMMU correctly. The solution that I am suggesting is as below:
>
> For dom0, while handling the DT node (handle_node()) Xen will replace
> the phandle in the "iommus" property with the virtual IOMMU node
> phandle.

Does this mean only one IOMMU will be supported in the guest?
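To make sure I understand the dom0 rewriting: the transformation would
be along the lines below, following the generic binding in [1]. This is
only a sketch; the node names, addresses and stream ID are made up for
illustration.

    /* Host DT: the master device references the physical SMMU. */
    psmmu: iommu@4f000000 {
            compatible = "arm,smmu-v3";
            reg = <0x0 0x4f000000 0x0 0x40000>;
            #iommu-cells = <1>;
    };

    mmc@60000000 {
            reg = <0x0 0x60000000 0x0 0x1000>;
            iommus = <&psmmu 0x10>;    /* stream ID 0x10 on that SMMU */
    };

    /* Dom0 DT generated by Xen: same "iommus" property, but the
     * phandle now points to the vIOMMU node emulated by Xen. */
    mmc@60000000 {
            reg = <0x0 0x60000000 0x0 0x1000>;
            iommus = <&vsmmu 0x10>;
    };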
In xl.cfg, we already pass the device-tree node path of the device to
passthrough, so Xen should already have all the information about the
IOMMU and the master ID. It therefore doesn't seem necessary for device
tree. For ACPI, I would have expected the information to be found in
the IOREQ. So can you add more context on why this is necessary for
everyone?
> iommu_devid_map = [ "PMASTER_ID[@VMASTER_ID],IOMMU_BASE_ADDRESS", "PMASTER_ID[@VMASTER_ID],IOMMU_BASE_ADDRESS" ]
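If I am reading the proposed syntax right, a concrete instance would
look like the below, where a master with physical stream ID 0x10 behind
the SMMU at 0x4f000000 is exposed to the guest with virtual stream
ID 0x8 (all values hypothetical):

    iommu_devid_map = [ "0x10@0x8,0x4f000000" ]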
Below you give an example for a platform device. How would that fit in
the context of PCI passthrough?
> Example: Let's say the user wants to assign the below physical device
> in DT to the guest.
  
> iommu@4f000000 { ... };
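The node body is elided above; for orientation, a physical SMMUv3 node
would look roughly like the below. The properties and values are
illustrative, with the phandle 0xfdeb assumed to match the one
discussed further down.

    iommu@4f000000 {
            compatible = "arm,smmu-v3";
            reg = <0x0 0x4f000000 0x0 0x40000>;
            #iommu-cells = <1>;
            phandle = <0xfdeb>;
    };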
So I guess this node will be written by Xen. How will you handle the
case where extra properties need to be added (e.g. dma-coherent)?
> test@10000000 { ... };
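Again the body is elided; presumably the device node carried the
"iommus" binding along these lines (a sketch: the compatible string and
sizes are made up, and the phandle follows the 0xfdeb mentioned below):

    test@10000000 {
            compatible = "xen,test-device";    /* hypothetical */
            reg = <0x0 0x10000000 0x0 0x1000>;
            iommus = <0xfdeb 0x10>;    /* SMMU phandle + stream ID */
    };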
I am a bit confused. Here you use 0xfdeb for the phandle, but below you
use 0xfdea. Does this mean 'xl' will rewrite the phandle?
Cheers,

--
Julien Grall
 
 