xen-devel

[Xen-devel] Re: [PATCH] Netchannel2 optimizations [2/4]

> This patch uses the new packet message flag created in the previous
> patch to request an event only every N fragments. N needs to be less
> than the maximum number of fragments that we can send or we may get
> stuck.  The default number of fragments in this patch is 192 while
> the maximum number of fragments that we can send is 256.
>
> There is a small issue with this code. If we have a single UDP
> stream and the maximum TX socket buffer size limited by the kernel
> in the sender guest is not sufficient to consume N fragments (192
> for now), the communication may stop until some other stream sends
> packets in either the TX or RX direction. This should not be an
> issue with TCP since we will always have ACKs being received, which
> will cause events to be generated. We will need to fix this sometime
> soon, but it is an unlikely enough scenario in practice that we may
> let the code get into the netchannel2 tree for now, especially
> because the code is still experimental. But Steven has the final
> word on that.
I've applied the patch, along with the others in the series.  As you
say, this isn't really good enough for a final solution, as it stands,
but it'll do for now.
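
For reference, the every-N-fragments event moderation described in the
quoted paragraph amounts to something like the sketch below.  The
structure, function, and flag names (tx_state, tx_msg_flags,
NC2_PACKET_FLAG_need_event) are illustrative assumptions, not the
actual netchannel2 code:

/* Hypothetical sketch of the event-moderation policy: request an
 * event only every Nth fragment, with N kept safely below the
 * maximum number of in-flight fragments. */

#define MAX_TX_FRAGMENTS	256
#define EVENT_EVERY_N_FRAGS	192	/* must be < MAX_TX_FRAGMENTS */

#define NC2_PACKET_FLAG_need_event	0x1	/* assumed flag name/value */

struct tx_state {
	unsigned int frags_since_event;	/* fragments sent since last event request */
};

static unsigned int tx_msg_flags(struct tx_state *ts, unsigned int nr_frags)
{
	unsigned int flags = 0;

	ts->frags_since_event += nr_frags;
	if (ts->frags_since_event >= EVENT_EVERY_N_FRAGS) {
		flags |= NC2_PACKET_FLAG_need_event;
		ts->frags_since_event = 0;
	}
	return flags;
}

Keeping the threshold below the 256-fragment limit means an event
request is always outstanding before the ring can completely fill,
which is what keeps the normal case from getting stuck.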

> A possible fix to this issue is to set the event request flag when
> we send a packet and the sender socket buffer is full.  I just did
> not have the time to look into the linux socket buffer code to
> figure out how to do that, but it should not be difficult once we
> understand the code.
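(For concreteness, the check being proposed would presumably be
something along the lines of the sketch below.  Whether skb->sk is
available and meaningful at the relevant point in netchannel2 is an
assumption on my part; sock_writeable() is just the generic Linux
helper that compares sk_wmem_alloc against half of sk_sndbuf.)

#include <linux/skbuff.h>
#include <net/sock.h>

/* Hypothetical illustration of the proposed fix: if the packet's
 * originating socket has (more than half) filled its send buffer,
 * request an event immediately rather than waiting for the
 * every-N-fragments threshold, so the sender is woken as soon as
 * the fragments are consumed. */
static bool tx_needs_immediate_event(const struct sk_buff *skb)
{
	const struct sock *sk = skb->sk;

	/* No local socket (e.g. forwarded traffic): fall back to the
	 * normal every-N-fragments policy. */
	if (!sk)
		return false;

	/* Send buffer is filling up, so the sender may be blocked
	 * waiting for this very packet to be freed. */
	return !sock_writeable(sk);
}
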
I'm not convinced by this fix.  It'll certainly solve the particular
case of a UDP blast, but I'd be worried that there might be some other
buffering somewhere, in e.g. the queueing discipline or somewhere in
iptables.  Fixing any particular instance probably wouldn't be very
tricky, but it'd be hard to be confident you'd got all of them, and it
just sounds like a bit of a rat hole of complicated and
hard-to-reproduce bugs.

Since this is likely to be a rare case, I'd almost be happy just using
e.g. a 1Hz ticker to catch things when they look like they've gone
south.  Performance will suck, but this should be a very rare
workload, so that's not too much of a problem.
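
Something along these lines, roughly (the nc2_* names are
placeholders, and this uses the stock kernel timer API rather than
anything netchannel2-specific):

#include <linux/timer.h>
#include <linux/jiffies.h>

/* Rough sketch of a 1Hz fallback ticker: once a second, if we still
 * appear to be stalled, kick the remote end so a pathological flow
 * (e.g. a single UDP blast) cannot get stuck indefinitely. */
struct nc2_tx_watchdog {
	struct timer_list timer;
	/* ... pointer back to the channel state would live here ... */
};

static void nc2_tx_watchdog_fn(struct timer_list *t)
{
	struct nc2_tx_watchdog *wd = from_timer(wd, t, timer);

	/* Placeholder: re-check the ring and force an event/notification
	 * to the peer if things look like they've gone south. */

	mod_timer(&wd->timer, jiffies + HZ);	/* re-arm: roughly 1Hz */
}

static void nc2_tx_watchdog_start(struct nc2_tx_watchdog *wd)
{
	timer_setup(&wd->timer, nc2_tx_watchdog_fn, 0);
	mod_timer(&wd->timer, jiffies + HZ);
}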

Does that sound plausible?

Steven.
