
Re: [Xen-devel] [RFC] [Draft Design] ACPI/IORT Support in Xen.



Hi,

On 23/10/17 14:57, Andre Przywara wrote:
On 12/10/17 22:03, Manish Jaggi wrote:
It is proposed that the idrange of PCIRC and ITS group be constant for
domUs.

"constant" is a bit confusing here. Maybe "arbitrary", "from scratch" or
"independent from the actual h/w"?

I don't think we should tie to anything here. The IORT for a DomU will get some input; it could be the same as the host's or something generated (not necessarily constant). Those are implementation details and might be up to the user.


In case of PCI passthrough, using a domctl the toolstack can communicate
the physical RID -> virtual RID and physical deviceID -> virtual deviceID mappings to Xen.

It is assumed that domU PCI config accesses would be trapped in Xen. The
RID at which an assigned device is enumerated would be the one provided by the
domctl, domctl_set_deviceid_mapping.

TODO: device assign domctl i/f.
Note: This should cover the virtual deviceID support pointed out by Andre
[4].

Well, there's more to it. First thing: while I tried to allow virtual
ITS deviceIDs to be different from physical ones, at the moment they
are fixed to a 1:1 mapping in the code.

So the first step would be to go over the ITS code and identify where
"devid" refers to a virtual deviceID and where to a physical one
(probably renaming them accordingly). Then we would need a function to
translate between the two. At the moment this would be a dummy function
(just returning the input value). Later we would plug in the actual table lookup.

We might not need this domctl if assign_device hypercall is extended to
provide this information.

Do we actually need a new interface or even extend the existing one?
If I got Julien correctly, the existing interface is just fine?

In the first place, I am not sure I understand why a domctl is mentioned in this document. I can understand why you want to describe the information used for the DomU IORT. But it does not matter how this ties into the rest of the passthrough work.

[...]


6. IORT Generation
-------------------
There would be common code to generate the IORT table from iort_table_struct.

That sounds useful, but we would need to be careful with sharing code
between Xen and the tool stack. Has this actually been done before?

Yes, see libelf for instance. But I think there is a terminology problem here.

Skimming the rest of the e-mail I see: "populate a basic IORT in a buffer passed by toolstack (using a domctl : domctl_prepare_dom_iort)". By sharing code, I meant creating a library that would be compiled in both the hypervisor and the toolstack.

But as I said before, this is not the purpose now. The purpose is finally getting support of IORT in the hypervisor with the generation of the IORT for Dom0 fully separated from the parsing.

a. For Dom0
     the structure (iort_table_struct) would be modified to remove SMMU nodes
     and update id_mappings:
     PCIRC idmap -> output reference to ITS group
     (RID -> DeviceID).

     TODO: Describe the algorithm in the update_id_mapping function, used
     to map RID -> DeviceID in my earlier patch [3].

If the above approach works, this would become a simple list iteration,
creating PCI rc nodes with the appropriate pointer to the ITS nodes.

b. For DomU
     - iort_table_struct would have a minimal 2 nodes (1 PCIRC and 1 ITS
group)
     - populate a basic IORT in a buffer passed by the toolstack (using a
domctl: domctl_prepare_dom_iort)

I think we should reduce this to iterating the same data structure as
for Dom0. Each passed-through PCI device would possibly create one
struct instance, and later on we do the same iteration as we do for
Dom0. If that proves to be simple enough, we might even live with the
code duplication between Xen and the toolstack.

I think you summarize quite well what I have been saying in the previous thread. Thank you :).

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
