
RE: [Xen-devel] more profiling



James

Thanks for the reply.
It seems that your changes do not include netback, i.e. all the
changes are limited to netfront. Correct?
In that case, your changes avoid the cost of issuing and
revoking the grant (i.e. adding and removing the grant from
the grant table). I assume netback is still doing hypercalls
for grant operations on every I/O operation (i.e. grant map
for TX and grant copy operation for RX).
In netchannel2 we plan to avoid the grant operations in netback
as well.

In my experiments I also see overheads on issuing and revoking
grants due to the use of atomic operations, but these are
much less expensive than copying an entire packet as you
do on the TX path, so I am surprised by your results.
Can you give more details about your configuration and how you
are comparing the cost of copying versus issuing grants on TX?

Thanks

Renato

> -----Original Message-----
> From: James Harper [mailto:james.harper@xxxxxxxxxxxxxxxx]
> Sent: Friday, February 29, 2008 2:26 PM
> To: Santos, Jose Renato G; Andy Grover
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] more profiling
>
> > James,
> >
> > Could you please provide me some context and details of this work.
> > This seems related to the work we are doing in netchannel2 to reuse
> > grants, but I don't think I understand what it is that you
> > are trying to do and how it is related.
> >
>
> The solution I ended up implementing was to keep a list of
> pre-allocated pre-granted pages. Any time we need a new page
> (either for putting on the rx list, or for copying a tx
> packet to) it comes from the list. If there are no pages on
> the list, a new page is allocated and granted. When we are
> finished with the page, it goes back on the free list.
>
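A minimal sketch of the free list you describe might look like the
following; the names, the 4 KiB page size, and the grant stubs are
illustrative only, not the actual xennet code (a real driver would issue a
grant via the grant-table API instead of the stand-in shown here):

```c
#include <stdlib.h>
#include <assert.h>

/* Hypothetical sketch of a free list of pre-allocated, pre-granted pages.
 * alloc_and_grant() stands in for real page allocation plus a grant-table
 * hypercall; reuse from the list avoids repeating that hypercall. */

struct granted_page {
    void *page;                 /* backing memory for the page */
    int grant_ref;              /* grant reference handed to the backend */
    struct granted_page *next;  /* free-list link */
};

static struct granted_page *free_list = NULL;
static int next_grant_ref = 0;  /* stand-in for real grant allocation */

/* Stand-in for allocating a page and granting it to the backend. */
static struct granted_page *alloc_and_grant(void)
{
    struct granted_page *gp = malloc(sizeof(*gp));
    gp->page = malloc(4096);
    gp->grant_ref = next_grant_ref++;  /* real code issues a grant here */
    gp->next = NULL;
    return gp;
}

/* Take a page from the free list, or allocate and grant a fresh one. */
struct granted_page *get_page_from_freelist(void)
{
    if (free_list) {
        struct granted_page *gp = free_list;
        free_list = gp->next;
        return gp;  /* grant is still valid: no hypercall needed */
    }
    return alloc_and_grant();
}

/* Return a page to the free list without revoking its grant. */
void put_page_on_freelist(struct granted_page *gp)
{
    gp->next = free_list;
    free_list = gp;
}
```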
> I'll also be writing some sort of garbage collector which
> runs periodically (maybe every x seconds, or every x calls to
> 'put_page_on_freelist'). If during that interval the number
> of free pages has been constantly above some threshold (32?),
> then we will ungrant and free half the pages on the list.
> This will keep memory usage reasonable while keeping performance good.
>
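The trimming policy you outline could be sketched as below; the threshold
value, counters, and function names are assumptions for illustration, not
code from the driver. The caller is expected to run gc_freelist()
periodically and then ungrant and free the number of pages it returns:

```c
#include <limits.h>

/* Hypothetical sketch of the periodic free-list garbage collector: if the
 * free-page count never dropped below the threshold during the interval,
 * half the list is trimmed. */

#define GC_THRESHOLD 32

static int free_count = 0;           /* current pages on the free list */
static int interval_min = INT_MAX;   /* lowest count seen this interval */

/* Call whenever a page is taken from or returned to the free list. */
void track_free_count(int new_count)
{
    free_count = new_count;
    if (new_count < interval_min)
        interval_min = new_count;
}

/* Run every few seconds (or every N puts). Returns how many pages the
 * caller should ungrant and free. */
int gc_freelist(void)
{
    int to_free = 0;
    if (interval_min > GC_THRESHOLD)
        to_free = free_count / 2;  /* trim half the list */
    interval_min = free_count;     /* restart tracking for next interval */
    return to_free;
}
```

Tracking the interval minimum (rather than the instantaneous count) keeps
a transient burst of puts from triggering a trim that the very next RX
refill would undo.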
> In the tx path, the windows xennet driver currently takes the
> sg list of buffers per packet and copies them to a single
> page buffer. At first I thought there would be some
> performance to be gained by just passing the backend the list
> of pages, but it looks like the memory copy operation is much
> less expensive than the grant operation.
>
> James
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

