Re: [Xen-devel] [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> > I forgot to include this brief information about this patch series.
> >
> > This patch series contains the implementation of a new device driver,
> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
> > different OSes running on the same virtual OS platform powered by
> > a hypervisor.
> >
> > Detailed information about this driver is described in a high-level doc
> > added by the second patch of the series.
> >
> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
> >
> > I am attaching the 'Overview' section here as a summary.
> >
> > ------------------------------------------------------------------------------
> > Section 1. Overview
> > ------------------------------------------------------------------------------
> >
> > The Hyper_DMABUF driver is a Linux device driver running on multiple
> > Virtual Machines (VMs), which expands DMA-BUF sharing capability to the
> > VM environment, where multiple different OS instances need to share the
> > same physical data without data copies across VMs.
> >
> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF driver on
> > the exporting VM (the so-called “exporter”) imports a local DMA_BUF from
> > the original producer of the buffer, then re-exports it to the importing
> > VM (the so-called “importer”) with a unique ID, hyper_dmabuf_id, for the
> > buffer.
> >
> > Another instance of the Hyper_DMABUF driver on the importer registers
> > the hyper_dmabuf_id, together with reference information for the shared
> > physical pages associated with the DMA_BUF, in its database when the
> > export happens.
> >
> > The actual mapping of the DMA_BUF on the importer’s side is done by
> > the Hyper_DMABUF driver when user space issues the IOCTL command to
> > access the shared DMA_BUF. The Hyper_DMABUF driver works as both an
> > importing and exporting driver as-is; that is, no special configuration
> > is required. Consequently, only a single module per VM is needed to
> > enable cross-VM DMA_BUF exchange.
>
> So I know that most dma-buf implementations (especially lots of importers
> in drivers/gpu) break this, but fundamentally only the original exporter
> is allowed to know about the underlying pages. There's various scenarios
> where a dma-buf isn't backed by anything like a struct page.
>
> So your first step of noodling the underlying struct page out from the
> dma-buf is kinda breaking the abstraction, and I think it's not a good
> idea to have that. Especially not for sharing across VMs.
>
> I think a better design would be if hyper-dmabuf would be the dma-buf
> exporter in both of the VMs, and you'd import it everywhere you want to in
> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
> in control of the pages, and a lot of the troubling forwarding you
> currently need to do disappears.

It could be another way to implement dma-buf sharing; however, it would
break the flexibility and transparency that this driver has now. With the
suggested method, two different types of dma-buf would exist in the general
usage model: one is the local dma-buf, a traditional dma-buf that can be
shared only within the same OS instance, and the other is a cross-VM
sharable dma-buf created by the hyper_dmabuf driver.
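To make that concrete: with the current design, the decision to share can
be made after a buffer already exists, for any dma-buf, without changing
the original exporter. Roughly, the userspace flow described in the
Overview above looks like the sketch below (the ioctl names, numbers, and
argument structs here are illustrative assumptions only, not the actual
interface defined by the patch series):

/* Illustrative sketch: these ioctl numbers and argument structs are
 * assumptions for the example, not the ABI from the patch series. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct hyper_dmabuf_export_args {
	int      dmabuf_fd;        /* local dma-buf from the original producer */
	uint64_t hyper_dmabuf_id;  /* filled in by the driver */
};

struct hyper_dmabuf_import_args {
	uint64_t hyper_dmabuf_id;  /* id received from the exporting VM */
	int      dmabuf_fd;        /* local dma-buf fd created by the driver */
};

#define HYPER_DMABUF_IOC_EXPORT _IOWR('H', 0, struct hyper_dmabuf_export_args)
#define HYPER_DMABUF_IOC_IMPORT _IOWR('H', 1, struct hyper_dmabuf_import_args)

/* Exporting VM: any existing dma-buf fd can be shared after the fact. */
static uint64_t share_buffer(int hyper_fd, int local_dmabuf_fd)
{
	struct hyper_dmabuf_export_args args = { .dmabuf_fd = local_dmabuf_fd };

	if (ioctl(hyper_fd, HYPER_DMABUF_IOC_EXPORT, &args) < 0)
		return 0;                /* 0 used as "invalid id" in this sketch */
	return args.hyper_dmabuf_id;     /* sent to the importing VM out of band */
}

/* Importing VM: the id is turned back into an ordinary dma-buf fd,
 * usable with any local importer (GPU, V4L2, etc.). */
static int map_shared_buffer(int hyper_fd, uint64_t id)
{
	struct hyper_dmabuf_import_args args = { .hyper_dmabuf_id = id };

	if (ioctl(hyper_fd, HYPER_DMABUF_IOC_IMPORT, &args) < 0)
		return -1;
	return args.dmabuf_fd;
}

Here hyper_fd stands for an open file descriptor to the hyper_dmabuf
device node; neither side has to treat the buffer specially before the
point of sharing, which is the transparency referred to above.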
The problem with the suggested approach is that an application would need
to know in advance whether the contents will be shared across VMs before
deciding which type of dma-buf to create. Otherwise, the application would
have to use hyper_dmabuf as the exporter for all contents that could
possibly be shared in the future, and I think this would require a
significant amount of application changes and would also add an
unnecessary dependency on the hyper_dmabuf driver.

>
> 2nd thing: This seems very much related to what's happening around gvt and
> allowing at least the host (in a kvm based VM environment) to be able to
> access some of the dma-buf (or well, framebuffers in general) that the
> client is using. Adding some mailing lists for that.

I think you are talking about exposing a framebuffer to another domain via
GTT memory sharing. And yes, one of the primary use cases for hyper_dmabuf
is to share a framebuffer or other graphics objects across VMs, but it is
designed to do so in a more general way, using the existing dma-buf
framework. Also, we wanted to make this feature available for virtually
any sharable content that can currently be shared locally via dma-buf.

> -Daniel
>
> >
> > ------------------------------------------------------------------------------
> >
> > There is a git repository at github.com where this series of patches are all
> > integrated in Linux kernel tree based on the commit:
> >
> > commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> > Author: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> > Date:   Sun Dec 3 11:01:47 2017 -0500
> >
> >     Linux 4.15-rc2
> >
> > https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
> >
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@xxxxxxxxxxxxxxxxxxxxx
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
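For reference, the importer pattern the review comment refers to, where
only the original exporter knows how the buffer is backed and an importing
driver only ever consumes a DMA-mapped scatter-gather table, looks roughly
like this with the standard in-kernel dma-buf interface (a minimal sketch,
error handling abbreviated):

/* Minimal sketch of a conventional kernel-side dma-buf importer.
 * Only standard dma-buf API calls are used; the importer never looks
 * at struct pages, it only gets a DMA-mapped sg_table back. */
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static struct sg_table *import_for_device(struct device *dev, int fd,
					  struct dma_buf_attachment **out_attach)
{
	struct dma_buf *buf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	buf = dma_buf_get(fd);                 /* take a reference on the dma-buf */
	if (IS_ERR(buf))
		return ERR_CAST(buf);

	attach = dma_buf_attach(buf, dev);     /* tell the exporter which device maps it */
	if (IS_ERR(attach)) {
		dma_buf_put(buf);
		return ERR_CAST(attach);
	}

	/* The exporter decides how its backing storage becomes visible here;
	 * it may not be struct-page backed at all (carveout, VRAM, ...). */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(buf, attach);
		dma_buf_put(buf);
		return sgt;
	}

	*out_attach = attach;
	return sgt;
}

Teardown is the mirror image: dma_buf_unmap_attachment(), dma_buf_detach(),
then dma_buf_put().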