
Re: [Xen-devel] [Notes for xen summit 2018 design session] Graphic virtualization



On 08/02/2018 04:26 PM, Artem Mygaiev wrote:
Hello Julien

Hi Artem,

Thank you for the feedback!

On 02.08.18 12:56, Julien Grall wrote:
Hi,

Sorry for the late posting. The notes were taken by Stefano Stabellini. Thank you.

The notes include some requests for clarification to EPAM regarding PowerVR.

The existing graphics solutions on Xen today are:
    - PV DRM:
         * Supports multiple displays per VM
         * Based on grant tables (see the sharing sketch below)
         * An improvement over Xen FB, which is based on foreign mappings

The frontend driver will be part of the Linux kernel starting with 4.18:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/xen?h=v4.18-rc7
That's good news. Do you know the state of the backend?
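
For reference, the grant-table based sharing mentioned above boils down to granting the backend domain access to each framebuffer page and passing the grant references to the backend via xenstore. A minimal sketch, assuming the Linux grant-table API (the helper is made up for illustration; it is not the actual xen-drm-front code):

#include <linux/mm.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/*
 * Illustrative helper: grant the backend domain read-only access to one
 * framebuffer page.  The returned grant reference would then be written
 * to xenstore for the backend to map.
 */
static int fb_grant_page(domid_t backend, struct page *page,
                         grant_ref_t *gref)
{
    int ref = gnttab_grant_foreign_access(backend, xen_page_to_gfn(page),
                                          1 /* read-only */);

    if (ref < 0)
        return ref; /* no free grant entries */

    *gref = ref;
    return 0;
}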



    - Intel GVT: https://01.org/igvt-g
         * Based on IOREQ server infrastructure
         * Performance is 70% of directly assigned hardware

    - NVIDIA:
         * Much more virtualizable
         * Provides mappable chunks of PCI BARs
         * A userspace component emulates the PCI config space (sketched below)
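
To illustrate the last point: emulating the PCI config space in userspace mostly means serving reads from, and filtering writes to, a 256-byte shadow of the config header that is initialised with the IDs and BARs the guest should see. A toy sketch (all names are hypothetical; this is not NVIDIA's component):

#include <stdint.h>
#include <string.h>

/* 256-byte shadow of the virtual GPU's PCI config header. */
struct vgpu_pci {
    uint8_t cfg[256];
};

/* Reads are served straight from the shadow. */
static uint32_t vgpu_cfg_read(struct vgpu_pci *dev, unsigned off,
                              unsigned len)
{
    uint32_t val = 0;

    if (len > 4 || off + len > sizeof(dev->cfg))
        return ~0u; /* like a master abort: all ones */

    memcpy(&val, &dev->cfg[off], len); /* little-endian host assumed */
    return val;
}

/* Writes are filtered: only fields the model understands are updated. */
static void vgpu_cfg_write(struct vgpu_pci *dev, unsigned off,
                           unsigned len, uint32_t val)
{
    /* e.g. allow the 16-bit command register at offset 0x04 */
    if (off == 0x04 && len == 2)
        memcpy(&dev->cfg[off], &val, len);
}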

Current efforts for graphics virtualization on Arm:
    - Samsung: They have a PV OpenGL solution. This seems to be fast.

This is interesting. Do you know if there is any open benchmark data?

Stefano introduced you to the Samsung speaker. Hopefully we will get more details on the benchmark.

Unfortunately, PV OpenGL is not available upstream at the moment. It was not clear whether or when the backend and frontend would be upstreamed.

However, the work looks quite similar to virgil (https://virgil3d.github.io/). It is a graphics virtualization solution that uses virtio for the transport. I think it would be possible to re-use it by just replacing the transport layer (see the sketch below).

Another solution is to implement virtio on Xen (see the discussion on the last community call).
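
To make the transport-swap idea concrete: if the virgl command stream only needs "send a buffer, receive a buffer", the Xen-specific part could hide behind a small ops structure, with one implementation backed by a shared ring and event channel instead of a virtqueue. All names below are purely illustrative, not an existing API:

#include <stddef.h>

/* Minimal transport abstraction a virgl-like frontend could target. */
struct gfx_transport_ops {
    int (*send)(void *priv, const void *buf, size_t len);
    int (*recv)(void *priv, void *buf, size_t len);
};

struct gfx_transport {
    const struct gfx_transport_ops *ops;
    void *priv; /* e.g. shared ring + event channel state on Xen */
};

/* The rendering code only ever calls this, so swapping virtio for
 * grant-tables/event channels is confined to the ops implementation. */
static inline int gfx_submit(struct gfx_transport *t, const void *cmds,
                             size_t len)
{
    return t->ops->send(t->priv, cmds, len);
}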

    - EPAM:
         * PV OpenGL was dismissed because of performance concerns
         * PV DRM for sharing display
         * PowerVR native virtualization (see below)

PowerVR virtualization:

Recent PowerVR hardware provides some virtualization support. The
solution is implemented in the firmware. A kernel module is used to talk
to the firmware via shared memory. The toolstack only has to set up a
memory context for each VM.

            ** Recent PowerVR HW has some virtualization support
            ** Kernel module

It was not clear whether an extra pair of frontend/backend was required along with the PowerVR driver.

@Action: EPAM, could you clarify?


No, there are no extra FE/BE drivers for GPU sharing in case of PowerVR.

Potential solutions for upstream:
    - PV OpenGL
    - vGPU solution outside of the hypervisor (see below)

vGPU solution outside of the hypervisor:

A unikernel (or Dom0) based environment could be provided to run
proprietary software.

One more option we were discussing is "de-privileged" or "native" applications in Xen:
https://lists.xenproject.org/archives/html/xen-devel/2017-04/msg01002.html
We are looking into unikernels, too.

The proprietary software would use the IOREQ server infrastructure to
emulate the guest memory regions used by the GPU and make the scheduling
decisions.


We also had an RFC for co-processor (including GPU) management some time ago:
https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg01966.html
If I remember the series correctly, the code may need to trap guest accesses to the GPU and manage the GPU itself. There is a fair chance that GPU vendors will not want that code under the GPL. So this would have to live outside of Xen.

This is where the IOREQ infrastructure comes into play. It allows forwarding MMIO accesses to an external entity, and this entity could be proprietary.
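
As a sketch of what such an external entity would do at startup, assuming libxendevicemodel (the function and the idea of a single MMIO range are illustrative):

#include <stdint.h>
#include <xendevicemodel.h>

/* Create an IOREQ server and ask Xen to forward guest accesses in the
 * GPU's MMIO range to it.  A real emulator would keep the handle open
 * afterwards and service the forwarded requests. */
static int vgpu_attach(domid_t domid, uint64_t mmio_start, uint64_t mmio_end)
{
    xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
    ioservid_t id;
    int rc;

    if (!dmod)
        return -1;

    rc = xendevicemodel_create_ioreq_server(dmod, domid,
                                            HVM_IOREQSRV_BUFIOREQ_OFF, &id);
    if (rc)
        goto out;

    rc = xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, id,
                                                     1 /* MMIO */,
                                                     mmio_start, mmio_end);
    if (rc)
        goto out;

    rc = xendevicemodel_set_ioreq_server_state(dmod, domid, id,
                                               1 /* enabled */);
out:
    xendevicemodel_close(dmod);
    return rc;
}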

Cheers,

--
Julien Grall


 

