
Re: [Xen-devel] RFC v1: Xen block protocol overhaul - problem statement (with pictures!)



On Wed, 2013-01-23 at 15:03 +0000, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 23, 2013 at 09:24:37AM +0000, Ian Campbell wrote:
> > On Tue, 2013-01-22 at 19:25 +0000, Konrad Rzeszutek Wilk wrote:
> > > On Mon, Jan 21, 2013 at 12:37:18PM +0000, Ian Campbell wrote:
> > > > On Fri, 2013-01-18 at 18:20 +0000, Konrad Rzeszutek Wilk wrote:
> > > > > 
> > > > > > > E). The network stack has shown that going into a polling mode
> > > > > > > does improve performance. The current mechanism of kicking the
> > > > > > > guest and/or block backend is not always clear. [TODO: Konrad
> > > > > > > to explain it in detail]
> > > > > 
> > > > > Oh, I never did explain this - but I think the patches that Daniel
> > > > > came up with actually fix a part of it. They make the
> > > > > kick-the-other-guest only happen when the backend has processed all
> > > > > of the requests and cannot find anything else to do. Previously it
> > > > > was more of 'done one request, let's kick the frontend'.
> > > > 
> > > > blkback uses RING_PUSH_RESPONSES_AND_CHECK_NOTIFY so doesn't it get
> > > > some amount of evtchn mitigation for free?
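
For reference, the notify decision that macro makes looks roughly like
this (paraphrased from xen/include/public/io/ring.h, with the barrier
names simplified). An event is raised only if the frontend's rsp_event
index falls inside the window of responses just pushed, which is where
the "for free" mitigation comes from:

    #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {    \
        RING_IDX __old = (_r)->sring->rsp_prod;                       \
        RING_IDX __new = (_r)->rsp_prod_pvt;                          \
        wmb(); /* frontend must see responses before new producer */  \
        (_r)->sring->rsp_prod = __new;                                \
        mb();  /* frontend must see producer before we read event */  \
        /* Notify only if rsp_event lies in (__old, __new], i.e.      \
         * the frontend asked to be woken for one of the responses    \
         * we just pushed. */                                         \
        (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <     \
                     (RING_IDX)(__new - __old));                      \
    } while (0)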
> > > 
> > > So there are two paths here: a) the kick the frontend gives the
> > > backend, and b) the kick the backend gives the frontend.
> > > 
> > > The a) case is fairly straightforward. We process all of the requests
> > > on the ring, and every time we have finished with a request we re-read
> > > the producer. So if the frontend keeps us busy we will keep on
> > > processing.
> > > 
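In code, that a) loop amounts to something like the sketch below (a
sketch only, not verbatim blkback; dispatch_request() is a hypothetical
stand-in for the real request handler):

    /* Keep consuming as long as the frontend keeps producing. */
    static void do_block_io_op(struct blkif_back_ring *ring)
    {
        RING_IDX rc = ring->req_cons;
        RING_IDX rp = ring->sring->req_prod;
        rmb(); /* read the requests only after reading the producer */

        while (rc != rp) {
            struct blkif_request req = *RING_GET_REQUEST(ring, rc);
            ring->req_cons = ++rc;
            dispatch_request(&req);      /* hypothetical handler */
            rp = ring->sring->req_prod;  /* re-read: the frontend may
                                          * have queued more meanwhile */
            rmb();
        }
    }
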
> > > The b) case is the one that is trigger-happy. Every time a request is
> > > completed (so say 44kB of data has finally been read/written) we kick
> > > the frontend. In the networking world there are mechanisms to
> > > configure the hardware so that it kicks the OS (the frontend in our
> > > case) only when it has processed 8, 16, or 64 packets (or some other
> > > value). Depending on the latency this can be bad or good. If the
> > > backend is using a very slow disk we would probably want the frontend
> > > to be kicked every time a response has been completed.
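
To make the NIC analogy concrete, a backend-side version of that
mitigation could look something like this. It is hypothetical, not the
current blkback code: KICK_THRESHOLD, the struct fields and
complete_request() are invented for illustration; only the ring macros
and notify_remote_via_irq() are real interfaces:

    #define KICK_THRESHOLD 16            /* responses per kick, tunable */

    struct backend {
        struct blkif_back_ring ring;     /* shared ring state */
        unsigned int rsp_since_kick;     /* responses since last notify */
        unsigned int irq;                /* bound event-channel irq */
    };

    static void complete_request(struct backend *be,
                                 struct blkif_response *rsp,
                                 bool more_work)  /* more requests pending? */
    {
        int notify;

        /* Put the response on the shared ring; advance our private
         * producer only. */
        *RING_GET_RESPONSE(&be->ring, be->ring.rsp_prod_pvt) = *rsp;
        be->ring.rsp_prod_pvt++;
        be->rsp_since_kick++;

        /* Push (and possibly kick) only after a full batch, or when
         * about to go idle -- never sleep on unpushed responses. */
        if (be->rsp_since_kick >= KICK_THRESHOLD || !more_work) {
            RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&be->ring, notify);
            be->rsp_since_kick = 0;
            if (notify)
                notify_remote_via_irq(be->irq);
        }
    }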
> > 
> > Perhaps all that is needed is to have the frontend set rsp_event to
> > min(rsp_cons + <BATCH_SIZE>, rsp_prod (+/- 1?) ) in blkfront's
> > RING_FINAL_CHECK_FOR_RESPONSES to implement batching, like the comment
> > in ring.h says:
> >  *  These macros will set the req_event/rsp_event field to trigger a
> >  *  notification on the very next message that is enqueued. If you want to
> >  *  create batches of work (i.e., only receive a notification after several
> >  *  messages have been enqueued) then you will need to create a customised
> >  *  version of the FINAL_CHECK macro in your own code, which sets the event
> >  *  field appropriately.
> > 
> > IOW I think we already have the mechanisms in the protocol to implement
> > this sort of thing.
> 
> Yes. It is a question of how the frontend and backend negotiate this.
> As in, should there be a negotiation of this value, say
> 'feature-intr-batch'?

Either end can independently implement this using the existing
mechanisms, for their "direction", can't they? No need to negotiate.
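
As a minimal sketch of such a "customised version of the FINAL_CHECK
macro" (hypothetical, not existing blkfront code; BATCH_SIZE is an
invented tunable), following the min(rsp_cons + BATCH_SIZE, rsp_prod + 1)
idea above:

    #define BATCH_SIZE 8

    #define RING_FINAL_CHECK_FOR_RESPONSES_BATCHED(_r, _work_to_do) do { \
        RING_IDX __batch = (_r)->rsp_cons + BATCH_SIZE;                  \
        RING_IDX __next  = (_r)->sring->rsp_prod + 1;                    \
        (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);               \
        if (_work_to_do) break;                                          \
        /* min(rsp_cons + BATCH_SIZE, rsp_prod + 1), computed as         \
         * distances from rsp_cons so index wraparound behaves the       \
         * way it does elsewhere in ring.h.  The rsp_prod + 1 clamp      \
         * lets a lone in-flight response still raise an event rather    \
         * than stall until a full batch accumulates. */                 \
        (_r)->sring->rsp_event =                                         \
            ((RING_IDX)(__batch - (_r)->rsp_cons) <                      \
             (RING_IDX)(__next - (_r)->rsp_cons)) ? __batch : __next;    \
        mb(); /* re-check the producer /after/ publishing rsp_event */   \
        (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);               \
    } while (0)

How much batching that buys depends on how many responses are typically
in flight when the frontend goes idle; picking BATCH_SIZE is exactly the
latency trade-off Konrad describes for slow disks.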

Ian.

