
Re: [Xen-devel] [RFC] netif: staging grants for requests



On 01/06/2017 09:33 AM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Joao Martins [mailto:joao.m.martins@xxxxxxxxxx]
>> Sent: 14 December 2016 18:11
>> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
>> Cc: David Vrabel <david.vrabel@xxxxxxxxxx>; Andrew Cooper
>> <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Paul Durrant
>> <Paul.Durrant@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>
>> Subject: [RFC] netif: staging grants for requests
>>
>> Hey,
>>
>> Back in the Xen hackathon '16 networking session a couple of ideas were
>> brought up. One of them was about exploring permanently mapped grants
>> between xen-netback/xen-netfront.
>>
>> I started experimenting and came up with a rough design document (in
>> pandoc) on what is being proposed. This is meant as a seed for discussion
>> and a request for input on whether this is a good direction. Of course, I
>> am willing to try alternatives we come up with beyond the contents of the
>> spec, or any other suggested changes ;)
>>
>> Any comments or feedback is welcome!
>>
> 
> Hi,
Hey!

> 
> Sorry for the delay... I've been OOTO for three weeks.
Thanks for the comments!

> I like the general approach of pre-granting buffers for RX so that the backend
> can simply memcpy and tell the frontend which buffer a packet appears in
Cool,

> but IIUC you are proposing use of a single pre-granted area for TX also,
> which would presumably require the frontend to always copy on the TX side?
> I wonder if we might go for a slightly different scheme...
I see.

> 
> The assumption is that the working set of TX buffers in the guest OS is fairly
> small (which is probably true for a small number of heavily used sockets and
> an OS that uses a slab allocator)...
Hmm, [speaking about Linux] maybe for the skb allocation cache, but perhaps not
for the remaining packet pages of, say, a scatter-gather list? We would need to
validate that this working set really does stay small, as it seems like a
strong assumption across the variety of possible workloads. Plus, wouldn't we
leak information from these pages if they were later reused elsewhere in the
guest stack rather than by the device?

> The guest TX code maintains a hash table of buffer addresses to grant refs.
> When a packet is sent the code looks to see if it has already granted the
> buffer and re-uses the existing ref if so; otherwise it grants the buffer
> and adds the new ref into the table.

> The backend also maintains a hash of grant refs to addresses and, whenever it
> sees a new ref, it grant maps it and adds the address into the table.
> Otherwise it does a hash lookup and thus has a buffer address it can
> immediately memcpy from.
> 
> If the frontend wants the backend to release a grant ref (e.g. because it's
> starting to run out of grant table) then a control message can be used to ask
> for it back, at which point the backend removes the ref from its cache and
> unmaps it.
Wouldn't this be somewhat similar to the persistent grants in xen block drivers?
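To make the scheme concrete, here is a minimal user-space sketch of the
frontend-side address-to-gref cache described above. All names and the stub
grant allocator are hypothetical; a real netfront would grant pages via the
grant-table API (e.g. gnttab_grant_foreign_access()) and likely use a proper
hash table rather than this fixed open-addressed array:

```c
#include <stdint.h>

/* Hypothetical sketch of the frontend's buffer-address -> grant-ref
 * cache.  grant_new_ref() is a stub standing in for the real grant
 * operation (gnttab_grant_foreign_access() in Linux netfront). */

#define CACHE_SLOTS 256           /* power of two for cheap masking */

struct gref_entry {
    uintptr_t addr;               /* guest buffer (page) address */
    int       gref;               /* grant ref covering that page */
    int       in_use;
};

static struct gref_entry cache[CACHE_SLOTS];
static int next_ref = 1;          /* stub ref allocator for the sketch */

static int grant_new_ref(uintptr_t addr)
{
    (void)addr;                   /* real code would grant the page here */
    return next_ref++;
}

static unsigned slot_of(uintptr_t addr)
{
    /* Hash the page frame; multiplicative hash, masked to table size. */
    return (unsigned)((addr >> 12) * 2654435761u) & (CACHE_SLOTS - 1);
}

/* Look up addr: re-use the existing ref on a hit, otherwise grant the
 * buffer and insert the new ref (linear probing on collision). */
int gref_for_buffer(uintptr_t addr)
{
    unsigned s = slot_of(addr);
    for (unsigned i = 0; i < CACHE_SLOTS; i++, s = (s + 1) & (CACHE_SLOTS - 1)) {
        if (cache[s].in_use && cache[s].addr == addr)
            return cache[s].gref;         /* hit: re-use existing ref */
        if (!cache[s].in_use) {
            cache[s].addr = addr;
            cache[s].gref = grant_new_ref(addr);
            cache[s].in_use = 1;
            return cache[s].gref;         /* miss: newly granted ref */
        }
    }
    return -1;                            /* cache full: caller must evict */
}
```

The point of the structure is that a hot buffer (e.g. from a slab cache) pays
the grant cost only once; subsequent sends hit the cache and re-use the ref.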

> Using this scheme we allow a guest OS to still use either a zero-copy approach
> if it wishes to do so, or a static pre-grant... or something between 
> (e.g. pre-grant for headers, zero copy for bulk data).
> 
> Does that sound reasonable?
Not sure yet, but it looks nice if we can indeed achieve the zero-copy part. I
have two concerns, though. First, the backend could be forced to constantly
evict refs because its cache is always full, with the frontend unable to reuse
those pages (depending on its allocator behaviour, if the working-set
assumption above doesn't hold), which would nullify the backend's effort in
maintaining its table of mapped grefs. Second, those pages (assumed to be
reused) might leak guest data to the backend while they are not in use by
netfront.
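To illustrate the eviction concern above, here is a hypothetical backend-side
gref-to-address table (deliberately tiny so that it fills up; all names are
invented, and map_gref()/unmap_gref() are stubs standing in for the real
grant-map/unmap hypercalls). When the table is full, the backend would have to
ask the frontend, via the proposed control message, to stop using a ref before
it can map a new one:

```c
#include <stdint.h>
#include <stddef.h>

#define BE_SLOTS 4                /* deliberately tiny to show eviction */

struct be_entry { int gref; uintptr_t vaddr; int in_use; };
static struct be_entry be[BE_SLOTS];

/* Stubs for the real grant-map/unmap operations. */
static uintptr_t map_gref(int gref)  { return 0x100000u + (uintptr_t)gref * 0x1000u; }
static void      unmap_gref(int gref){ (void)gref; }

/* Return the mapped address for gref, mapping it on first sight.
 * Returns 0 when the table is full: the point at which the eviction
 * problem bites and a control-message round trip becomes necessary. */
uintptr_t be_addr_for_gref(int gref)
{
    int free_slot = -1;
    for (int i = 0; i < BE_SLOTS; i++) {
        if (be[i].in_use && be[i].gref == gref)
            return be[i].vaddr;           /* hit: memcpy from here */
        if (!be[i].in_use && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return 0;                         /* table full: eviction needed */
    be[free_slot] = (struct be_entry){ gref, map_gref(gref), 1 };
    return be[free_slot].vaddr;
}

/* Handle the frontend's "release this gref" control message:
 * drop the entry and unmap the grant. */
void be_release_gref(int gref)
{
    for (int i = 0; i < BE_SLOTS; i++)
        if (be[i].in_use && be[i].gref == gref) {
            unmap_gref(gref);
            be[i].in_use = 0;
            return;
        }
}
```

If the guest's buffer working set exceeds the table size, every new packet hits
the full-table path and the cache degenerates into constant map/unmap churn,
which is exactly the scenario in the first concern.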

Joao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

