[Xen-devel] more profiling

To: "Andy Grover" <andy.grover@xxxxxxxxxx>
Subject: [Xen-devel] more profiling
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Fri, 29 Feb 2008 22:00:09 +1100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 29 Feb 2008 03:00:41 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Ach6wj+maafveOiCSzitY54bCV1UyQ==
Thread-topic: more profiling
Andy,

I put some profiling around calls to GrantAccess and EndAccess, and got
the following results:

XenNet     TxBufferGC        Count =     108351, Avg Time =     227989
XenNet     TxBufferFree      Count =          0, Avg Time =          0
XenNet     RxBufferAlloc     Count =     108353, Avg Time =      17349
XenNet     RxBufferFree      Count =          0, Avg Time =          0
XenNet     ReturnPacket      Count =      65231, Avg Time =       1106
XenNet     RxBufferCheck     Count =     108353, Avg Time =     124069
XenNet     Linearize         Count =     129024, Avg Time =      29333
XenNet     SendPackets       Count =     129024, Avg Time =      67107
XenNet     SendQueuedPackets Count =     237369, Avg Time =      73055
XenNet     GrantAccess       Count =     194325, Avg Time =      25878
XenNet     EndAccess         Count =     194261, Avg Time =      27181
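
For reference, the sort of wrapper that produces counters like the
above looks roughly like this. This is a sketch only: the use of
KeQueryPerformanceCounter as the timer, and the names GntTbl_GrantAccess,
PROFILE_ENTRY and Profiled_GrantAccess, are my assumptions, not the
actual XenNet source.

#include <ntddk.h>

/* One accumulator per instrumented function. */
typedef struct {
    LONG Count;
    LONGLONG TotalTicks;    /* KeQueryPerformanceCounter units */
} PROFILE_ENTRY;

static PROFILE_ENTRY ProfGrantAccess;

/* Time one call and fold it into the running totals.
   GntTbl_GrantAccess stands in for the real grant-table call;
   grant_ref_t comes from the Xen headers. */
static grant_ref_t
Profiled_GrantAccess(PVOID Context, PFN_NUMBER Pfn)
{
    LARGE_INTEGER Start, End;
    grant_ref_t Ref;

    Start = KeQueryPerformanceCounter(NULL);
    Ref = GntTbl_GrantAccess(Context, Pfn);
    End = KeQueryPerformanceCounter(NULL);

    InterlockedIncrement(&ProfGrantAccess.Count);
    /* unsynchronized add: racy, but fine for rough numbers */
    ProfGrantAccess.TotalTicks += End.QuadPart - Start.QuadPart;
    return Ref;
}

"Avg Time" in the table is then TotalTicks / Count.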

The time spent in GrantAccess and EndAccess is, I think, quite
significant in the scheme of things, especially as TxBufferGC and
RxBufferCheck (the two largest times) each make multiple calls to
GrantAccess and EndAccess per invocation.

What I'd like to do is implement a compromise between my previous buffer
management approach (which used lots of memory, but did no
allocate/grant per packet) and your approach (which uses minimal memory,
but does an allocate/grant per packet). We would maintain a pool of
packets and buffers, and grow and shrink the pool dynamically, as
follows (a rough sketch in code follows the list):
. Create a freelist of packets and buffers.
. When we need a new packet or buffer and there are none on the
freelist, allocate one and grant the buffer.
. When we are done with them, put them back on the freelist.
. Track the minimum size of each freelist over time. If a freelist has
stayed larger than some threshold (32?) for some period (5 seconds?),
free half of the items on it.
. Maybe keep a freelist per processor too, to avoid the need for
spinlocks when we are running at DISPATCH_LEVEL.
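
Something like this is what I have in mind. Again just a sketch:
AllocAndGrantBuffer and the other names here are made up, not the real
driver code.

#include <ntddk.h>

/* grant_ref_t comes from the Xen headers. Each entry is granted
   once, at allocation time, and then reused many times. */
typedef struct _BUFFER_ENTRY {
    struct _BUFFER_ENTRY *Next;
    grant_ref_t GrantRef;
    PVOID Buffer;
} BUFFER_ENTRY, *PBUFFER_ENTRY;

static PBUFFER_ENTRY FreeList;
static KSPIN_LOCK FreeListLock;  /* KeInitializeSpinLock'd at startup */
static LONG FreeListDepth;
static LONG FreeListMinDepth;    /* lowest depth seen this interval */

/* Fast path: pop a ready-granted buffer off the freelist. Slow
   path: allocate and grant a new one (made-up helper). */
static PBUFFER_ENTRY
GetBuffer(void)
{
    KIRQL OldIrql;
    PBUFFER_ENTRY Entry;

    KeAcquireSpinLock(&FreeListLock, &OldIrql);
    Entry = FreeList;
    if (Entry != NULL) {
        FreeList = Entry->Next;
        FreeListDepth--;
        if (FreeListDepth < FreeListMinDepth)
            FreeListMinDepth = FreeListDepth;
    }
    KeReleaseSpinLock(&FreeListLock, OldIrql);

    if (Entry == NULL)
        Entry = AllocAndGrantBuffer();
    return Entry;
}

/* Done with a buffer: keep it granted and push it for reuse. A
   periodic timer would check FreeListMinDepth and, if it stayed
   above the threshold (32?) for the whole interval (5 seconds?),
   pop and ungrant/free half the list, then reset FreeListMinDepth. */
static VOID
PutBuffer(PBUFFER_ENTRY Entry)
{
    KIRQL OldIrql;

    KeAcquireSpinLock(&FreeListLock, &OldIrql);
    Entry->Next = FreeList;
    FreeList = Entry;
    FreeListDepth++;
    KeReleaseSpinLock(&FreeListLock, OldIrql);
}

With a per-processor freelist the spinlock could be dropped entirely on
the DISPATCH_LEVEL paths (or at least replaced with the cheaper
KeAcquireSpinLockAtDpcLevel variant).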

I think that gives us a pretty good compromise between memory usage and
calls to allocate/grant/ungrant/free.

I was going to look at getting rid of the Linearize step, but if we
don't Linearize then we have to GrantAccess the kernel-supplied buffers
instead, and I think a (max) 1500-byte memcpy is going to be cheaper
than a call to GrantAccess...
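
To make that tradeoff concrete, the Linearize side looks roughly like
this against the NDIS 5 API: walk the packet's buffer chain and copy it
into one contiguous, already-granted buffer, so no per-packet
GrantAccess is needed. LinearizePacket is a made-up name, and Dest is
assumed to come from a pre-granted pool buffer as above.

#include <ndis.h>

static UINT
LinearizePacket(PNDIS_PACKET Packet, PUCHAR Dest)
{
    PNDIS_BUFFER Buffer;
    PVOID Va;
    UINT Len;
    UINT Total = 0;

    /* Walk the chain of NDIS buffers making up the packet. */
    NdisQueryPacket(Packet, NULL, NULL, &Buffer, NULL);
    while (Buffer != NULL) {
        NdisQueryBufferSafe(Buffer, &Va, &Len, NormalPagePriority);
        if (Va != NULL) {
            /* this is the (max) 1500-byte memcpy */
            NdisMoveMemory(Dest + Total, Va, Len);
            Total += Len;
        }
        NdisGetNextBuffer(Buffer, &Buffer);
    }
    return Total;
}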

Thoughts?

James


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
