
Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file




> -----Original Message-----
> From: Stefano Stabellini [mailto:sstabellini@xxxxxxxxxx]
> Sent: Friday, June 23, 2017 4:09 PM
> To: Jarvis Roach <Jarvis.Roach@xxxxxxxxxxxxxxx>
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Julien Grall
> <julien.grall@xxxxxxx>; Zhongze Liu <blackskygg@xxxxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx; Wei Liu <wei.liu2@xxxxxxxxxx>; Ian Jackson
> <ian.jackson@xxxxxxxxxxxxx>; edgari@xxxxxxxxxx; Edgar E. Iglesias
> <edgar.iglesias@xxxxxxxxxx>
> Subject: RE: [RFC v2]Proposal to allow setting up shared memory areas
> between VMs from xl config file
> 
> On Fri, 23 Jun 2017, Jarvis Roach wrote:
> > > -----Original Message-----
> > > From: Stefano Stabellini [mailto:sstabellini@xxxxxxxxxx]
> > > Sent: Friday, June 23, 2017 2:21 PM
> > > To: Julien Grall <julien.grall@xxxxxxx>
> > > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Zhongze Liu
> > > <blackskygg@xxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; Wei Liu
> > > <wei.liu2@xxxxxxxxxx>; Ian Jackson <ian.jackson@xxxxxxxxxxxxx>;
> > > Jarvis Roach <Jarvis.Roach@xxxxxxxxxxxxxxx>; edgari@xxxxxxxxxx;
> > > Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
> > > Subject: Re: [RFC v2]Proposal to allow setting up shared memory
> > > areas between VMs from xl config file
> > >
> > > On Fri, 23 Jun 2017, Julien Grall wrote:
> > > > Hi,
> > > >
> > > > On 22/06/17 22:05, Stefano Stabellini wrote:
> > > > > > When we encounter an id IDx during "xl create":
> > > > > >
> > > > > >   + If it’s not under /local/shared_mem:
> > > > > >     + If the corresponding entry has a "master" tag, create the
> > > > > >       corresponding entries for IDx in xenstore
> > > > > >     + If there isn't a "master" tag, report an error.
> > > > > >
> > > > > >   + If it’s found under /local/shared_mem:
> > > > > >     + If the corresponding entry has a "master" tag, report an error
> > > > > >     + If there isn't a "master" tag, map the pages to the newly
> > > > > >       created domain, and add the current domain and the necessary
> > > > > >       information under /local/shared_mem/IDx/slaves.
> > > > >
> > > > > Aside from using "gfn" instead of gmfn everywhere, I think it
> > > > > looks pretty good.
> > > > >
> > > > > I would leave out permissions and cacheability attributes from
> > > > > this version of the work. I would just add a note saying that
> > > > > memory will be mapped as RW regular cacheable RAM. Other
> > > > > permissions and cacheability will be possible, but they are not
> implemented yet.
> > > >
> > > > Well, I think we should design the interface correctly from the
> > > > beginning to facilitate future extension.
> > >
> > > Which interface are you speaking about?
> > >
> > > I don't think we should attempt to write how the hypercall interface
> > > might look in the future to support setting permissions and
> > > cacheability attributes.
> > >
> > >
> > > > Also, you need to clarify what you mean by "regular cacheable RAM".
> > > > Are they write-through, write-back...? But, on ARM, this would
> > > > only be the caching attribute in stage-2 page table. The final
> > > > caching, memory type, shareability would be a combination of stage-2
> and stage-1 attributes.
> > >
> > > The very same that is used today for the ram of virtual machines, do
> > > we need to say any more than that? (For ARM, p2m_ram_rw and
> > > MATTR_MEM, LPAE_SH_INNER. For stage1, we should refer to
> > > xen/include/public/arch-arm.h.)
> >
> > I have customers who need some buffers LPAE_SH_OUTER and others who
> > need NORMAL non-cacheable or inner-cacheable buffers, so my suggestion
> > is to provide a way to support the full combination of configurations.
> >
> > While the stage 1/stage 2 combination results allow guests (via the
> > stage 1 translation regime) to force the two combinations I
> > specifically mentioned, in the first case the customers want
> > LPAE_SH_OUTER for cache coherency with a DMA-capable I/O device. In
> > that case, Xen needs to set the shareability attribute to OUTER in the
> > stage 2 table since that's what is used for the SMMU. In the second
> > case, NORMAL non-cacheable or inner-cacheable, the customers are in a
> > position where they can't trust the guests to disable their cache or
> > set it for inner-cacheable, so it would be good to have a way for Xen
> > or a privileged/trusted domain to do so.
> 
> Let me premise that I would be happy to see the whole set of configurations
> implemented in the long run, we might just not get there on day 1. We could
> spec out what the VM config option should look like, but leave the
> cacheability and shareability parameters unimplemented for now (also to
> address Julien's comment on defining future-proof interfaces).
> 
> I understand the need for cache-coherent buffers for dma to/from devices,
> but I think that problem should be solved with the iomem config option. This
> project was meant to setup shared memory regions for VM-to-VM
> communications. It doesn't look like that is the kind of requirement that this
> framework is meant to meet, unless I am missing something?

As the intent is for direct VM-to-VM communication, I concede the point.
However, there is interest in I/O -> common buffer that both VMs can access
using a distributed access algorithm, in which case you have indirect
VM-to-VM communication occurring, though no doubt I'm stretching the meaning
and intent of the project.
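For readers following the thread, the existing iomem option Stefano refers to is specified in the xl guest config roughly as below; the addresses here are made up purely for illustration:

```
# Map 2 pages of machine memory starting at machine frame 0x47000 into
# the guest at guest frame 0x80000. Values are hexadecimal page frame
# numbers in the form "IOMEM_START,NUM_PAGES[@GFN]". (Addresses are
# hypothetical.)
iomem = [ "47000,2@80000" ]
```

With iomem the memory comes from a device or reserved region rather than from Xen's allocator, which is why it suits DMA-coherent device buffers but not general VM-to-VM shared memory.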

> Normal non-cacheable buffers are more interesting: do you actually see
> guests running on non-cacheable memory? If not, could you make an
> example of a use-case for two VMs sharing a non-cacheable page?

There are a couple of different use cases for guests running without outer
cache specifically, or without any cache generally. For safety applications,
partitioning VMs to their own CPU cores without sharing L2 cache (for all but
one VM) would allow you to eliminate cross-VM jitter caused by cache
contention, while still gaining some advantage by using the L1 cache (and all
of the advantage of using L2 cache for one of them). For security
applications, there's a similar desire not to share a common resource like
cache between VMs for fear that a rogue actor could extract information from
it. In both situations having a shared page would be useful for inter-VM
communication. Both use cases presume that the base memory allocated to the
guest as part of its VM environment is also set up as non-cacheable (or
inner-cacheable), which is why it would be useful to have an interface to
control those attributes better.

The best use case I can think of for normal, non-cacheable buffers for VMs
with otherwise cacheable "main" memory would again be a security application
where the cacheable main memory handles encrypted information, but where the
decrypted data is put into a non-cached shared buffer for another VM to
consume. Again, the concern is that if the buffer were cacheable then a rogue
agent in the system could use some side-channel exploit to gain information
about the data.
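To make the discussion concrete, a sketch of how the proposed xl config option might look for the use cases above. The option name, keys, and especially the cacheability attribute are illustrative only; the exact syntax is what this RFC is still deciding:

```
# Master domain config (illustrative; syntax under discussion in this RFC).
# Declares a shared region with identifier ID1 at this guest's address:
static_shm = [ "id=ID1, begin=0x100000, size=0x1000, role=master" ]

# Slave domain config: maps the same pages, possibly at a different
# guest address:
static_shm = [ "id=ID1, begin=0x500000, size=0x1000, role=slave" ]

# A future extension could expose the attributes debated in this thread,
# e.g. (hypothetical key and value):
# static_shm = [ "id=ID1, begin=0x100000, size=0x1000, role=master, cache=none" ]
```

Per the default being discussed, absent any attribute keys the region would be mapped as RW regular cacheable RAM.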



 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

