xen-users

Re: [Xen-users] vif interfaces drop packets

Torsten Vielhak wrote:

The problem is that the data structure in which the ring buffer is
organized has to fit into one memory page (4096 bytes). So we are limited
to a buffer size of 340 entries, which means 340 packets (~30% bigger).
The results are almost the same; it gets a little better: loss rates
<0.1%. But I think it's really not a bug but a feature. You have to
ensure that the virtual machines are not locked up with too-old packets
in a long queue during high network load (this would lead to an
unreachable virtual server which cannot answer recent packets). So this
is really a (very, very basic) traffic shaping routine; it would be
better, e.g., to delay TCP packets if the buffer gets crowded, since TCP
stacks would react to the delay and adjust their sending speed; but this
is not possible because the host machine does not "see" TCP traffic but
only bridges the frames.

Actually, TCP does react to ACKs arriving later, to dropped packets,
and, in general, to congestion on the network. Sending will slow down.
Even the sending of ACKs gets slowed when the user process isn't
scheduled often enough to drain the receive buffer fast enough. So
there are mechanisms in TCP to react to the environment, whether
network related or caused by virtualization. The degree to which this
is handled might, of course, not be good enough for your performance
needs.
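
To make that reaction concrete, here is a toy sketch of the
additive-increase/multiplicative-decrease behaviour described above. It
is illustrative C only, not kernel or Xen code, and the fixed 8-packet
capacity is an arbitrary stand-in for a vif ring that has filled up:

/*
 * Toy illustration of the additive-increase/multiplicative-decrease
 * reaction described above.  This is NOT the real TCP stack and NOT
 * Xen code; the 8-packet path capacity is an arbitrary assumption
 * standing in for a ring buffer that has filled up.
 */
#include <stdio.h>

int main(void)
{
    double cwnd = 1.0;            /* sender's congestion window (packets) */
    const double capacity = 8.0;  /* assumed path/ring capacity (packets) */
    int round;

    for (round = 1; round <= 20; round++) {
        /* Pretend packets are dropped whenever the sender offers more
         * than the path can hold - roughly what a full ring does.     */
        int loss = cwnd > capacity;

        if (loss)
            cwnd /= 2.0;          /* multiplicative decrease on loss      */
        else
            cwnd += 1.0;          /* additive increase per round trip     */

        printf("round %2d: cwnd = %4.1f%s\n",
               round, cwnd, loss ? "  (loss, backed off)" : "");
    }
    return 0;
}

Run it and you can watch the window climb until drops start and then
repeatedly halve, which is the slowdown described above; the real stack
reacts to RTT measurements and ACK clocking rather than a fixed
capacity, of course.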

Working at the frame level leads to another problem: if your guests are
in a network with high broadcast load, the buffers get filled with these
broadcasts too. So after all, IMHO it's a general design problem of
virtual machines, which have to emulate the interrupts of real hardware.

All of the above said, there is some thinking that has to go
into improving the current architecture to handle high loads
better.
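
For anyone who wants to check the quoted 340-entry figure, here is a
small back-of-the-envelope sketch. The 12-byte entry size is an
assumption implied by 4096/340 (it actually gives about 341), not a
number read out of the Xen interface headers, so treat it purely as
illustration:

/*
 * Back-of-the-envelope check of the 340-entry figure quoted above.
 * PAGE_SIZE is the usual x86 page size; ENTRY_SIZE is an ASSUMPTION
 * derived from 4096/340 (~12 bytes), not a value taken from the Xen
 * interface headers.
 */
#include <stdio.h>

#define PAGE_SIZE  4096U   /* one shared memory page, in bytes          */
#define ENTRY_SIZE   12U   /* assumed size of one ring entry, in bytes  */

int main(void)
{
    unsigned int entries = PAGE_SIZE / ENTRY_SIZE;

    printf("page size    : %u bytes\n", PAGE_SIZE);
    printf("entry size   : %u bytes (assumed)\n", ENTRY_SIZE);
    printf("ring capacity: about %u entries, i.e. at most %u\n"
           "               outstanding packets before drops begin\n",
           entries, entries);
    return 0;
}

Whatever the exact entry layout in a given Xen release, the capacity is
bounded by the one-page limit, which is why the ring fills and drops
packets under sustained load instead of queueing them indefinitely.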

thanks,
Nivedita


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
