
Re: [Xen-devel] [PATCH 2 of 4 RFC] xl/remus: Network buffering cmdline switch, setup/teardown



On Tue, 2013-07-30 at 11:25 -0400, Shriram Rajagopalan wrote:
> >
> >> > Also, I'm not sure how this is supposed to work when driver domains are
> >> > in use either,
> >>
> >> Remus won't work with driver domains unless there are agents in each
> >> of the driver domains, co-ordinated by the memory checkpoint code in
> >> dom0.
> >
> > That doesn't seem to be too hard to arrange. We are already thinking of a
> > "xenbackendd" daemon which runs hotplug scripts in both dom0 and domU in
> > order to reduce our dependency on udev (in dom0 the daemon may actually
> > be libxl directly).
> >
> >> Irrespective of the hotplug scripts, Remus needs to control the IFB
> >> network interface attached to the guest's vifs/tap devices. Given that
> >> all these interfaces will be inside a driver domain, which does not
> >> have a fast (non-xenstore) communication channel to dom0, there is no
> >> way the memory checkpointing code can co-ordinate with the driver
> >> domain in order to buffer/release its network packets after each
> >> checkpoint.
> >
> > It doesn't seem implausible that the toolstack and that daemon could
> > also negotiate a memory-checkpoint evtchn or something like that for a
> > given domain.
> >
> >> One alternative would be to have a network agent running inside each
> >> of these driver domains, assuming that the driver domains would have
> >> network access (practical?). The memory checkpoint code would have to
> >> control the IFB devices via the network agents.
> >>
> >> All this is in the long run. The immediate goal is to get network/disk
> >> buffering to work with xl.
> >
> > We still need to consider the future when designing the libxl API since
> > that is stable and we avoid changing it once committed (there are
> > mechanisms to evolve the interface, see libxl.h, but they are best
> > avoided).
> >
> 
> Assuming that we push all of the Remus setup into libxl, to ease the
> pain for other toolstacks, then the external API to activate/deactivate
> Remus will not change.
> (Also assuming that libxl.h is the libxl API you are talking about.)
> 
> The API is already there: libxl_domain_remus_start with the remus_info
> structure. So, even if we add driver domain support later, external
> toolstacks would not be impacted. I will certainly add stub code to
> handle driver domains. However, adding any piece of mechanism in there
> would be premature unless we know how xenbackendd is going to work and
> the nature of the communication channels.
> 
> >> If you are suggesting that we invoke a hotplug script when the
> >> "xl remus" command is issued, I don't mind doing so either. The code
> >> in libxl (to control the plug qdisc) is not going to go away in
> >> either case.
> >
> > Would doing the setup in a shell script be easier/cleaner etc?
> >
> 
> Neither. The current C code (xl_netbuf) can do everything that a shell
> script does. However, if we were to invoke an external "hotplug"-style
> script, it would give us more flexibility. For example, people could
> tweak the script to add other traffic-shaping rules to the VM's
> traffic. Or, with Open vSwitch/SDN, other fun stuff could be done.
> 
> > I'm still not sure I understand why finding an ifb and binding it to a
> > vif cannot be done by the vif hotplug script and the ifb communicated
> > back to the toolstack via xenstore.
> >
> 
> To make sure we are on the same page:
>  I am under the assumption that the vif hotplug script is invoked
> during domain boot.
>  I am also assuming that you are talking about the existing vif hotplug
> scripts (vif-setup, vif-bridge, etc).

This is what I meant, but...
> 
>  So, if we lock onto an IFB and bind it to the vif (i.e. redirect all
> egress traffic via the IFB), then what we are basically doing is
> acquiring an IFB device for every vif in every VM that may be run under
> Remus sometime in the future.
>  This basically means we would be setting up an IFB device for every
> domain right from boot, and not when Remus is started for the domain.

... this.

For some reason I thought that Remus domains were started as such and
had that property inherently, which is obviously not the case, and I
knew that really.

> In case you mean something like a "vif-remus" hotplug script that is
> invoked when one calls the libxl_domain_remus_start API,

Or a new "action" for the existing vif-foo, but yes. Depends on whether
routed remus is different to bridged remus I suppose.
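To make the shape of such an action concrete, here is a rough sketch of
what it might do. The script itself, the device names and the dry-run
wrapper are illustrative assumptions on my part, not existing hooks;
with DRY_RUN left at its default the commands are only printed rather
than executed:

```shell
#!/bin/sh
# Hypothetical "remus" hotplug action for a vif (sketch only):
# route the vif's egress traffic through an IFB and install a plug
# qdisc on the IFB so packets can be buffered until the corresponding
# memory checkpoint completes.

vif=${1:-vif1.0}   # guest interface name (assumed)
ifb=${2:-ifb0}     # IFB device to borrow (assumed)

run() {
    # Dry-run by default: print the command instead of executing it.
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

run ip link set "$ifb" up
# Redirect everything arriving on the vif to the IFB.
run tc qdisc add dev "$vif" ingress
run tc filter add dev "$vif" parent ffff: protocol all u32 \
    match u32 0 0 action mirred egress redirect dev "$ifb"
# The plug qdisc holds packets; the checkpointing code releases
# them after each checkpoint.
run tc qdisc add dev "$ifb" root plug
```

The equivalent teardown would just delete the qdiscs and return the IFB
to the pool.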

>  then, you are right: all the setup can be done in this hotplug
> script, and the chosen IFBs can be communicated via xenstore back to
> libxl.

This might be worth it for the flexibility etc you mention above. Anyone
got any other thoughts?
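The xenstore hand-off could be equally small. A hypothetical sketch,
where the xenstore path, the "remus-ifb" key and the allocation logic
are all assumptions rather than an existing protocol, and the dry-run
wrapper only prints the command:

```shell
#!/bin/sh
# Sketch: a hotplug script picks an IFB and reports it to the
# toolstack via xenstore so libxl knows which device to control.

domid=${1:-1}
devid=${2:-0}

run() {
    # Dry-run by default: print the command instead of executing it.
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

# Naive allocation: a real script would probe "tc qdisc show dev ifbN"
# to find a free device; here we just take the first one.
pick_ifb() {
    echo "ifb0"
}

ifb=$(pick_ifb)
# Path and key are illustrative, not part of any existing interface.
run xenstore-write \
    "/local/domain/0/backend/vif/$domid/$devid/remus-ifb" "$ifb"
```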

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

