
Re: [Xen-devel] [PATCH 2/3] xen-netback: switch to per-cpu scratch space



On Tue, May 28, 2013 at 02:36:55PM +0100, David Vrabel wrote:
> On 28/05/13 14:18, Konrad Rzeszutek Wilk wrote:
> > On Mon, May 27, 2013 at 12:29:42PM +0100, Wei Liu wrote:
> >> There are at most nr_online_cpus netback threads running. We can make use
> >> of per-cpu scratch space to reduce the size of buffer space when we move
> >> to 1:1 model.
> >>
> >> In the unlikely event when per-cpu scratch space is not available,
> >> processing routines will refuse to run on that CPU.
> [...]
> >> --- a/drivers/net/xen-netback/netback.c
> >> +++ b/drivers/net/xen-netback/netback.c
> [...]
> >> +                  printk(KERN_ALERT
> >> +                         "xen-netback: "
> >> +                         "CPU %d scratch space is not available,"
> >> +                         " not doing any TX work for netback/%d\n",
> >> +                         smp_processor_id(),
> >> +                         (int)(netbk - xen_netbk));
> > 
> > So ... are you going to retry it? Drop it? Can you include in the message 
> > the
> > the mechanism by which you are going to recover?
> > 
> [...]
> >> +                         "xen-netback: "
> >> +                         "CPU %d scratch space is not available,"
> >> +                         " not doing any RX work for netback/%d\n",
> >> +                         smp_processor_id(),
> >> +                         (int)(netbk - xen_netbk));
> > 
> > And can you explain what the recovery mechanism is?
> 
> There isn't any recovery mechanism at the moment. If the scratch space
> was not allocated then any netback thread may end up being unable to do
> any work indefinitely (if the scheduler repeatedly schedules them on the
> VCPU with no scratch space).
> 
> This is an appalling failure mode.
> 

This looks appalling at first glance, but I doubt anyone would pick this
patch without also picking the later one.

With the later patch, vifs can be scheduled on different CPUs, so a vif
always gets a chance to do work.

This patch is proposed before that one to reduce meaningless code
movement.

> I also don't think there is a sensible way to recover.  We do not want
> hotplugging of a VCPU to break or degrade the behaviour of existing VIFs.
> 
> The meta data is 12 * 512 = 6144 bytes and the grant table ops are 24 *
> 512 = 12288 bytes.  This works out to 6 pages total.  I think we can
> spare 6 pages per VIF and just have per-thread scratch space.
> 

Sure, we can always worry about shrinking space usage later. :-)

I don't really mind using extra space. I only want a new working
baseline.

> You may also want to consider a smaller batch size instead of allowing
> for 2x ring size.  How often do you need this many entries?

Not often, but we ought to prepare for the worst case, right?


Wei.

> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
