
[Xen-devel] Re: SKB paged fragment lifecycle on receive

On Mon, 2011-06-27 at 11:21 +0100, Michael S. Tsirkin wrote:
> On Mon, Jun 27, 2011 at 10:41:35AM +0100, Ian Campbell wrote:
> > On Sun, 2011-06-26 at 11:25 +0100, Michael S. Tsirkin wrote:
> > > On Fri, Jun 24, 2011 at 04:43:22PM +0100, Ian Campbell wrote:
> > > > In this mode guest data pages ("foreign pages") were mapped into the
> > > > backend domain (using Xen grant-table functionality) and placed into the
> > > > skb's paged frag list (skb_shinfo(skb)->frags, I hope I am using the
> > > > right term). Once the page is finished with, netback unmaps it in
> > > > order to return it to the guest (we really want to avoid returning
> > > > such pages to the general allocation pool!).
> > > 
> > > Are the pages writeable by the source guest while netback processes
> > > them?  If yes, firewalling becomes unreliable as the packet can be
> > > modified after it's checked, right?
> > 
> > We only map the paged frags; the linear area is always copied (enough
> > to cover maximally sized TCP/IP headers, including options) for this
> > reason.
> Hmm. That'll cover the most common scenarios
> (such as port filtering) but not deep inspection.
> Not sure how important that is.
> > > Also, for guest to guest communication, do you wait for
> > > the destination to stop looking at the packet in order
> > > to return it to the source? If yes, can source guest
> > > networking be disrupted by a slow destination?
> > 
> > There is a timeout which ultimately does a copy into dom0 memory and
> > frees up the domain grant for return to the sending guest.
> Interesting. How long's the timeout?

1 second IIRC.

> > I suppose one difference with this is that it deals with data from
> > "dom0" userspace buffers rather than (what looks like) kernel memory,
> > although I don't know if that matters yet. Also it hangs off of struct
> > sock which netback doesn't have. Anyway I'll check it out.
> I think the most important detail is the copy on clone approach.
> We can make it controlled by an skb flag if necessary.
> > > > but IIRC honouring it universally turned into a
> > > > very twisty maze with a number of nasty corner cases etc.
> > > 
> > > Any examples? Are they covered by the patchset above?
> > 
> > It was quite a while ago so I don't remember many of the specifics.
> > Jeremy might remember better but for example any broadcast traffic
> > hitting a bridge (a very interesting case for Xen), seems like a likely
> > case? pcap was another one which I do remember, but that's obviously
> > less critical.
> Last I looked I thought these clone the skb, so if a copy happens on
> clone things will work correctly?

Things should be correct, but won't necessarily perform well.

In particular, if the clones (which become copies with this flag) are
frequent enough, there is no advantage to mapping instead of just
copying up front; in fact, it probably hurts overall.

Taking a quick look at the callers of skb_clone, I also see skb_segment
in there. Since Xen tries to pass around large skbs (using LRO/GSO over
the PV interface) in order to amortise costs, it is quite common for
packets to undergo GSO as they hit the physical device. I'm not sure how
often these hit the specific code path which causes a clone, though.

> > I presume with the TX zero-copy support the "copying due to attempted
> > clone" rate is low?
> Yes. My understanding is that this version targets a non-bridged setup
> (guest connected to a macvlan on a physical dev) as the first step.


> > > > FWIW I proposed a session on the subject for LPC this year.
> > > We also plan to discuss this on kvm forum 2011
> > > (colocated with linuxcon 2011).
> > > http://www.linux-kvm.org/page/KVM_Forum_2011
> > 
> > I had already considered coming to LinuxCon for other reasons but
> > unfortunately I have family commitments around then :-(

> And I'm not coming to LPC this year :(

That's a shame.

