
Re: [Xen-devel] [RFC] scf: SCF device tree and configuration documentation



Julien,


What I would like to understand is what information the hypervisor has to know for sharing a co-processor. So far I have:
    - MMIOs
    - Interrupts

Anything else?
IOMMU bindings.
This knowledge is enough to get the physical coprocessor shared.
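
For illustration, all three pieces could be taken from a regular device tree node describing the physical coprocessor; the compatible string, addresses and stream ID below are made up, not from a real SoC:

    gpu: coproc@fd000000 {
        compatible = "vendor,gpu";            /* hypothetical device */
        reg = <0x0 0xfd000000 0x0 0x10000>;   /* MMIO range the hypervisor traps/maps */
        interrupts = <0 119 4>;               /* SPI to be routed to the owning domain */
        iommus = <&smmu 0x10>;                /* IOMMU binding: SMMU master/stream ID */
    };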

In order to spawn a virtual coprocessor (vcoproc) for some domain, you have to provide additional configuration information:
    - which physical coprocessor (pcoproc) this vcoproc should represent to the domain (a SoC could have several physical coprocessors shared through the framework)
    - IRQ(s), which could be omitted (or used for verification only) provided that no IRQ remapping is implemented in Xen
    - the correspondence of IOMEM ranges between this vcoproc instance and the physical coprocessor
A sketch of such a description is given below.
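
Here is a rough sketch of how such a vcoproc description could look in a domain's partial device tree. The "xen,vcoproc" compatible and the "coproc" phandle property are invented for illustration, not an agreed binding:

    vgpu: vcoproc@e0000000 {
        compatible = "xen,vcoproc";           /* hypothetical binding */
        coproc = <&gpu>;                      /* which pcoproc this vcoproc represents */
        reg = <0x0 0xe0000000 0x0 0x10000>;   /* guest MMIO range backed by the pcoproc */
        interrupts = <0 119 4>;               /* optional: kept for verification only */
    };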

The last point in the configuration is the most complex.
Let me explain a use case we are facing now:
    - a GPU has two different firmwares, implementing OpenGL and OpenCL
    - we need both GL and CL working in the same domain simultaneously (actually concurrently, but the concurrency should be transparent to the domain, the GPU drivers and the firmwares)
In the current case we are lucky: the GPU has a single MMIO range.
We can implement such a system using SCF: spawn two vcoprocs for the domain. Each vcoproc will have its own MMIO range within the domain. In the hypervisor those MMIO ranges would be served by the same handler, but each must be associated with its own vcoproc context (see the sketch below).
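
Roughly, reusing the hypothetical binding from above, the domain's partial device tree would carry two nodes pointing at the same pcoproc, each with its own guest MMIO range:

    vgpu_gl: vcoproc@e0000000 {
        compatible = "xen,vcoproc";
        coproc = <&gpu>;                      /* same physical GPU for both nodes */
        reg = <0x0 0xe0000000 0x0 0x10000>;   /* GL firmware instance */
    };

    vgpu_cl: vcoproc@e0010000 {
        compatible = "xen,vcoproc";
        coproc = <&gpu>;
        reg = <0x0 0xe0010000 0x0 0x10000>;   /* CL firmware instance: same MMIO
                                                 handler, different vcoproc context */
    };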

In case a coprocessor has several MMIO ranges, things get more complicated.

In the device tree configuration concept I explicitly link a vcoproc to its pcoproc and keep the correspondence of MMIO ranges by names, as in the example below.
I'm not sure how to keep this correspondence in a simpler way.
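
For example (a sketch only; reg-names is a standard device tree property, the values are made up), both sides name their ranges and matching is done by name:

    /* pcoproc node: two physical ranges */
    reg = <0x0 0xfd000000 0x0 0x1000>, <0x0 0xfd100000 0x0 0x1000>;
    reg-names = "ctrl", "dma";

    /* vcoproc node: guest ranges matched to the physical ones by name */
    reg = <0x0 0xe0000000 0x0 0x1000>, <0x0 0xe0100000 0x0 0x1000>;
    reg-names = "ctrl", "dma";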

--

Andrii Anisov


