
Re: [Xen-devel] Proposal to allow setting up shared memory areas between VMs from xl config file



On Mon, 15 May 2017, Wei Liu wrote:
> On Sat, May 13, 2017 at 10:28:27AM +0800, Zhongze Liu wrote:
> > 2017-05-13 1:51 GMT+08:00 Wei Liu <wei.liu2@xxxxxxxxxx>:
> > > Hi Zhongze
> > >
> > > This is a nice write-up. Some comments below. Feel free to disagree with
> > > what I say below; this is more a discussion than picking on your design
> > > or plan.
> > >
> > 
> > Hi, Wei Liu,
> > 
> > Thanks for taking the time to read through my proposal.
> > 
> > >
> > > On Sat, May 13, 2017 at 01:01:39AM +0800, Zhongze Liu wrote:
> > >> Hi, Xen developers,
> > >>
> > >> I'm Zhongze Liu, a GSoC student this year. Glad to meet you in the
> > >> Xen Project. As an initial step in implementing my GSoC proposal, which
> > >> is still a draft, I'm posting it here, and I hope to hear your
> > >> suggestions.
> > >>
> > >> ====================================================
> > >> 1. Motivation and Description
> > >> ====================================================
> > >> Virtual machines use grant table hypercalls to set up shared pages for
> > >> inter-VM communication. These hypercalls are used by all PV protocols
> > >> today. However, very simple guests, such as baremetal applications,
> > >> might not have the infrastructure to handle the grant table. This
> > >> project is about setting up several shared memory areas for inter-VM
> > >> communication directly from the VM config file, so that the guest
> > >> kernel doesn't need grant table support to be able to communicate with
> > >> other guests.
> > >>
> > >> ====================================================
> > >> 2. Implementation Plan:
> > >> ====================================================
> > >>
> > >> ======================================
> > >> 2.1 Introduce a new VM config option in xl:
> > >> ======================================
> > >> The shared areas should be shareable among several VMs:
> > >> every shared physical memory area is assigned to a set of VMs.
> > >> Therefore, a “token” or “identifier” is needed to uniquely
> > >> identify each backing memory area.
> > >>
> > >>
> > >> I would suggest using an unsigned integer to serve as the identifier.
> > >> For example:
> > >>
> > >> In xl config file of vm1:
> > >>
> > >>     static_shared_mem = [“addr_range1 = ID1”, “addr_range2 = ID2”]
> > >>
> > >> In xl config file of vm2:
> > >>
> > >>     static_shared_mem = [“addr_range3 = ID1”]
> > >>
> > >> In xl config file of vm3:
> > >>
> > >>     static_shared_mem = [“addr_range4 = ID2”]
> > >
> > > I can envisage you will need some more attributes: what about
> > > attributes like RW / RO / WO (or even X)?
> > >
> > > Also, I assume the granularity of the mapping is a page, but as far as I
> > > can tell there are two page granularities on ARM; you do need to consider
> > > both, and what should happen if you mix and match them. What about
> > > mapping several pages where different VMs use overlapping ranges?
> > >
> > > Can you give some concrete examples? What does addr_rangeX look like in
> > > practice?
> > >
> > >
> > 
> > Yes, those attributes are necessary and should be explicitly specified
> > in the config file. I'll add them in the next version of this proposal.
> > And taking the granularity into consideration, what do you think about
> > changing the entries into something like:
> > 'start=0xcafebabe, end=0xdeedbeef, granularity=4K, prot=RWX'.
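
For concreteness, a full pair of entries could end up looking something
like this (the exact key names and syntax are still up for discussion,
so treat this as a hypothetical sketch, not the final format):

In the xl config file of vm1:

    static_shared_mem = [ "id=ID1, start=0x40000000, end=0x40001000, granularity=4K, prot=RW" ]

In the xl config file of vm2:

    static_shared_mem = [ "id=ID1, start=0x80000000, end=0x80001000, granularity=4K, prot=RO" ]

Here both VMs attach to the same backing area ID1, at different guest
physical addresses and with different access rights.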
> 
> I realised I may have gone too far after reading your reply.
> 
> What is the end purpose of this project? If you only want to insert an
> mfn into the guest address space and don't care how the guest is going to
> map it, you can omit the prot= part. If you want stricter control, you
> will need those attributes -- and that would also have implications for
> the hypervisor code you need.
> 
> I suggest you write the manual for the new mechanism you propose first.
> That way you describe the feature in a sysadmin-friendly way. Describe
> the syntax, the effect of the new mechanism, and how people are supposed
> to use it and under what circumstances.
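
As a starting point, such a manual entry (in the style of xl.cfg(5))
might be shaped roughly like this -- purely a hypothetical skeleton,
with the actual names and keys still to be settled:

    static_shared_mem=[ "SSM_SPEC", "SSM_SPEC", ... ]

        Specifies static shared memory areas for the guest. Each
        SSM_SPEC is a comma-separated list of KEY=VALUE settings,
        e.g. id=, start=, end=, granularity= and prot=. Areas with
        the same id across different config files refer to the same
        backing memory.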

The memory sharing mechanism should enable guests to communicate with
each other using a shared ring. That implies that the memory needs to be
read-write, but I can imagine there are use cases for it to be read-only
too. I think it is a good idea to specify it.

However, I do not think we should ask Zhongze to write a protocol
specification for how these guests should communicate. That is out of
scope.
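
(Just to illustrate why read-write matters -- this is not a protocol
specification, and every name below is made up -- a trivial
single-producer ring over such a shared area could look like this in C:)

    /* toy_ring.h: hypothetical layout of a shared-memory ring.
     * Both guests map the same physical page; the producer
     * advances prod, the consumer advances cons. */
    #include <stdint.h>

    #define RING_SIZE 4080   /* one 4K page minus the 16-byte header */

    struct toy_ring {
        volatile uint32_t prod;    /* written by the producer only */
        volatile uint32_t cons;    /* written by the consumer only */
        uint64_t reserved;         /* pad the header to 16 bytes */
        uint8_t data[RING_SIZE];   /* the message bytes */
    };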


> > >> In the example above, a memory area A1 will be shared between
> > >> vm1 and vm2 -- vm1 can access this area using addr_range1
> > >> and vm2 using addr_range3. Likewise, a memory area A2 will be
> > >> shared between vm1 and vm3 -- vm1 can access A2 using addr_range2
> > >> and vm3 using addr_range4.
> > >>
> > >> The shared memory area denoted by an identifier IDx will be
> > >> allocated when it first appears, and the memory pages will be taken
> > >> from the first VM whose static_shared_mem list contains IDx. Taking
> > >> the above config files as an example: if we instantiate vm1, vm2 and
> > >> vm3, one after another, the memory areas denoted by ID1 and ID2 will
> > >> both be allocated from vm1.
> > >
> > > Hmm... I can see some potential hazards. Currently, multiple xl processes
> > > are serialized by a lock, and your assumption is that creation is done
> > > in order, but suppose that some time later they can run in parallel.
> > > When several "xl create" invocations race with each other, what will
> > > happen?
> > >
> > > This can be solved by serializing in libxl or the hypervisor, I think.
> > > It is up to you to choose where to do it.
> > >
> > > Also, please consider what happens when you destroy the owner domain
> > > before the rest. Proper reference counting should be done in the
> > > hypervisor.
> > >
> > 
> > Yes, the access to xenstore and other shared data should be serialized
> > using some kind of lock.
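
FWIW, xl already serializes domain creation with an advisory file lock;
the same pattern could serialize the shared-memory setup. A minimal
sketch of that pattern (the lock path is made up, and this is not the
actual xl code):

    /* Serialize concurrent "xl create"-style operations with an
     * advisory fcntl lock on a well-known file. */
    #include <fcntl.h>
    #include <unistd.h>

    static int acquire_ssm_lock(void)
    {
        struct flock fl = {
            .l_type = F_WRLCK,      /* exclusive write lock */
            .l_whence = SEEK_SET,
            .l_start = 0,
            .l_len = 0,             /* lock the whole file */
        };
        int fd = open("/var/lock/xl-static-shared-mem",
                      O_CREAT | O_RDWR, 0644);
        if (fd < 0)
            return -1;
        if (fcntl(fd, F_SETLKW, &fl) < 0) {  /* block until acquired */
            close(fd);
            return -1;
        }
        return fd;  /* closing fd (or exiting) releases the lock */
    }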
> > 
> > >
> > >>
> > >> ======================================
> > >> 2.2 Store the mem-sharing information in xenstore
> > >> ======================================
> > >> This information should include the length and owner of the area, as
> > >> well as where the backing memory areas are mapped in every VM that is
> > >> using them. This information should be known to the xl command and all
> > >> domains, so we utilize xenstore to keep it. The current plan is to
> > >> place the information under /local/shared_mem/ID. Again taking the
> > >> above config files as an example:
> > >>
> > >> If we instantiate vm1, vm2 and vm3, one after another, the output of
> > >> “xenstore ls -f” will evolve as follows.
> > >>
> > >> After VM1 is instantiated, it will contain something like this:
> > >>
> > >>     /local/shared_mem/ID1/owner = dom_id_of_vm1
> > >>
> > >>     /local/shared_mem/ID1/size = sizeof_addr_range1
> > >>
> > >>     /local/shared_mem/ID1/mappings/dom_id_of_vm1 = addr_range1
> > >>
> > >>
> > >>     /local/shared_mem/ID2/owner = dom_id_of_vm1
> > >>
> > >>     /local/shared_mem/ID2/size = sizeof_addr_range2
> > >>
> > >>     /local/shared_mem/ID2/mappings/dom_id_of_vm1 = addr_range2
> > >>
> > >>
> > >> After VM2 is instantiated, the following new lines will appear:
> > >>
> > >>     /local/shared_mem/ID1/mappings/dom_id_of_vm2 = addr_range3
> > >>
> > >>
> > >> After VM3 is instantiated, the following new lines will appear:
> > >>
> > >>     /local/shared_mem/ID2/mappings/dom_id_of_vm2 = addr_range4
> > >>
> > >> When we encounter an identifier IDx during "xl create", we proceed as
> > >> follows (a concrete sketch follows this list):
> > >>
> > >>   + If it’s not under /local/shared_mem, create the corresponding
> > >>     entries (owner, size, and mappings) in xenstore, and allocate the
> > >>     memory from the newly created domain.
> > >>
> > >>   + If it’s found under /local/shared_mem, map the pages into the newly
> > >>     created domain, and add the current domain to
> > >>     /local/shared_mem/IDx/mappings.
> > >>
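
To make that flow concrete, here is roughly what the xenstore
bookkeeping amounts to in terms of the xenstore command-line clients
(paths as proposed above; the locking discussed earlier and the actual
mapping hypercalls are elided, and $DOMID, $ID, $SIZE and $ADDR_RANGE
are placeholders):

    # during "xl create" of domain $DOMID, for each entry with id $ID
    if ! xenstore-exists /local/shared_mem/$ID; then
        # first user: this domain becomes the owner and backs the area
        xenstore-write /local/shared_mem/$ID/owner "$DOMID"
        xenstore-write /local/shared_mem/$ID/size  "$SIZE"
    fi
    # in both cases, record where this domain maps the area
    xenstore-write /local/shared_mem/$ID/mappings/$DOMID "$ADDR_RANGE"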
> > >
> > > Again, please think about destruction as well.
> > >
> > > At this point I think modelling after POSIX shared memory makes more
> > > sense. That is, there isn't one "owner" for the memory. You get hold of
> > > the shared memory via a key (ID in your case?).
> > >
> > 
> > Actually, I've thought about the same question and have discussed it with
> > Julien and Stefano. And this is what they told me:
> > 
> > Stefano wrote:
> > "I think that in your scenario Xen (the hypervisor) wouldn't allow the
> > first domain to be completely destroyed because it knows that its
> > memory is still in use by something else in the system. The domain
> > remains in a zombie state until the memory is not used anymore. We need
> > to double-check this, but I don't think it will be a problem."
> > 
> 
> This has security implications -- a rogue guest can prevent the
> destruction of the owner.

We are going to use the same underlying hypervisor infrastructure, so
the end result should be no different from sharing memory via the grant
table from a security perspective. If not, then we need to fix Xen.


> > and Julien wrote:
> > "That's correct. A domain will not be destroyed until all the memory
> > associated with it has been freed.
> > A page is considered free when all references to it have been removed.
> > This means that if the domain that allocated the page dies, it will not
> > be fully destroyed until the page is no longer used by another domain.
> > This is assuming that every domain using the page takes a reference
> > (similar to foreign mapping). Actually, I think we might be able to
> > re-use the same mechanism as foreign mapping (the mapspace
> > XENMAPSPACE_gmfn_foreign).
> > Note that Xen on ARM (and x86?) does not take a reference when mapping
> > a page into a stage-2 page table (i.e. the page tables holding the
> > translation between guest physical addresses and host physical
> > addresses)."
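
To illustrate the reference-counting idea Julien describes (a hedged
sketch only: get_page()/put_page() do exist in Xen's mm layer, but the
ssm_* helpers and the exact call sites here are hypothetical):

    /* Xen-internal context: struct page_info, struct domain and
     * get_page()/put_page() come from Xen's mm/sched headers. */

    /* When a borrowing domain maps a shared page, take a reference
     * against the owning domain, as foreign mappings do. */
    static int ssm_map_page(struct domain *borrower,
                            struct page_info *page,
                            struct domain *owner)
    {
        if ( !get_page(page, owner) )  /* keeps the page (and hence   */
            return -EINVAL;            /* the owner) alive while mapped */
        /* ... insert the page into the borrower's stage-2 tables ... */
        return 0;
    }

    /* On unmap (or borrower teardown), drop the reference; once the
     * last reference is gone, the owner can finish being destroyed. */
    static void ssm_unmap_page(struct page_info *page)
    {
        /* ... remove the stage-2 mapping ... */
        put_page(page);
    }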
> > 
> > I've also thought about modeling this after the POSIX way of sharing
> > memory. If we do so, the owner of the shared pages would be Dom0, and
> > we would have to do the reference counting ourselves and free pages
> > when they're no longer needed. I'm not sure which method is better.
> > What do you think?
> > 
> 
> Assigning the page to Dom0 doesn't sound right to me either.
> 
> But the first step should really be defining the scope of the project.
> Technical details will follow naturally.

I thought that Zhongze wrote it well in "Motivation and Description".
What would you like to know in addition to that? 

 

