
RE: [Xen-devel] RE: Rather slow time of Pin in Windows with GPL PVdriver


  • To: James Harper <james.harper@xxxxxxxxxxxxxxxx>, MaoXiaoyun <tinnycloud@xxxxxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Thu, 10 Mar 2011 11:05:56 +0000
  • Accept-language: en-US
  • Cc: xen devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 10 Mar 2011 03:07:22 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acve3kAiJZJduml+Qr66YN40iA+eswAAJ3MgAAjparAAALlkAAACRBTwAAAX2yAAAOZOkA==
  • Thread-topic: [Xen-devel] RE: Rather slow time of Pin in Windows with GPL PVdriver

It's kind of pointless because you always have to go to vCPU0's shared 
info for the event info, so you're just going to keep bouncing it between 
caches all the time. The same holds true of the data you access in your 
DPC if the DPC is constantly moving around. Better, IMO, to keep locality 
by default and explicitly distribute DPCs that access distinct data.

  Paul
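
[As a rough illustration of why the event info is tied to vCPU0's cache
lines: with the 2-level event channel ABI, the pending selector sits in
vCPU0's vcpu_info and the pending/mask words sit in shared_info, so
whichever CPU ends up scanning them pulls the same memory. The structures
below are a simplified stand-in for the real xen/include/public/xen.h
layout, not the GPL PV driver's code; they only show the access pattern.]

#include <stdint.h>

#define EVTCHN_WORDS 64

/* Simplified stand-in for the Xen shared_info/vcpu_info layout
 * (2-level event channel ABI). */
struct vcpu_info {
    uint8_t  evtchn_upcall_pending;
    uint8_t  evtchn_upcall_mask;
    uint64_t evtchn_pending_sel;   /* which evtchn_pending words to scan */
};

struct shared_info {
    struct vcpu_info vcpu_info[32];
    uint64_t evtchn_pending[EVTCHN_WORDS];
    uint64_t evtchn_mask[EVTCHN_WORDS];
};

/* Scan pending event channels.  When events are only ever delivered to
 * vCPU0, every CPU that runs this touches vcpu_info[0] plus the shared
 * pending/mask words, so those cache lines bounce between CPUs no matter
 * where the servicing DPC runs. */
static void scan_pending_events(struct shared_info *shinfo,
                                void (*handle)(unsigned int port))
{
    struct vcpu_info *v0 = &shinfo->vcpu_info[0];
    unsigned int word, bit;
    /* The real code swaps the selector to zero atomically; a plain
     * read-then-clear is shown here to keep the sketch portable. */
    uint64_t sel = v0->evtchn_pending_sel;

    v0->evtchn_pending_sel = 0;

    for (word = 0; word < EVTCHN_WORDS; word++) {
        uint64_t pending;

        if (!(sel & (1ULL << word)))
            continue;
        pending = shinfo->evtchn_pending[word] & ~shinfo->evtchn_mask[word];
        for (bit = 0; bit < 64; bit++) {
            if (pending & (1ULL << bit))
                handle(word * 64 + bit);
        }
    }
}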

> -----Original Message-----
> From: James Harper [mailto:james.harper@xxxxxxxxxxxxxxxx]
> Sent: 10 March 2011 10:41
> To: Paul Durrant; MaoXiaoyun
> Cc: xen devel
> Subject: RE: [Xen-devel] RE: Rather slow time of Pin in Windows with
> GPL PVdriver
> 
> >
> > Yeah, you're right. We have a patch in XenServer to just use the
> > lowest numbered vCPU but in unstable it still pointlessly round
> > robins. Thus, if you bind DPCs and don't set their importance up
> > you will end up with them not being immediately scheduled quite a
> > lot of the time.
> >
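
[A minimal sketch of the "bind DPCs and set their importance up" point
quoted above, using the stock KeSetTargetProcessorDpc/KeSetImportanceDpc
calls. The per-queue context and routine names are made up for the
example and are not the GPL PV driver's actual code.]

#include <ntddk.h>

/* Hypothetical per-queue context; names are illustrative only. */
typedef struct _QUEUE_CONTEXT {
    KDPC Dpc;
    /* ... per-queue ring state, lock, buffers ... */
} QUEUE_CONTEXT, *PQUEUE_CONTEXT;

static VOID
QueueDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    PQUEUE_CONTEXT Queue = (PQUEUE_CONTEXT)Context;

    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    /* Service only this queue's data so the DPC keeps cache locality. */
    UNREFERENCED_PARAMETER(Queue);
}

/* Bind one queue's DPC to a chosen CPU and raise its importance.  Without
 * HighImportance, a DPC targeted at another processor is not necessarily
 * dispatched immediately, which is the scheduling latency being discussed. */
static VOID
QueueInitializeDpc(PQUEUE_CONTEXT Queue, CCHAR Cpu)
{
    KeInitializeDpc(&Queue->Dpc, QueueDpcRoutine, Queue);
    KeSetTargetProcessorDpc(&Queue->Dpc, Cpu);
    KeSetImportanceDpc(&Queue->Dpc, HighImportance);
}

/* From the interrupt/event callback: KeInsertQueueDpc(&Queue->Dpc, NULL, NULL); */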
> 
> You say "pointlessly round robins"... why is the behaviour
> considered pointless? (assuming you don't use bound DPCs)
> 
> I'm looking at my networking code, and if I could schedule DPCs on
> processors on a round-robin basis (e.g. because the IRQs are
> delivered on a round-robin basis), one CPU could grab the rx ring
> lock, pull the data off the ring into local buffers, release the
> lock, then process the local buffers (build packets, submit to
> NDIS, etc.). While the first CPU is processing packets, another CPU
> can then start servicing the ring too.
> 
> If Xen is changed to always send the IRQ to CPU zero then I'd have
> to start round-robining DPCs myself if I wanted to do it that
> way...
> 
> Currently I'm suffering a bit from the small ring sizes not being
> able to hold enough buffers to keep packets flowing quickly in all
> situations.
> 
> James
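
[The scheme James describes - drain the ring under the lock into private
buffers, then process them outside the lock so another CPU can service
the ring in parallel - might look roughly like the sketch below. The ring
layout and names are simplified placeholders, not the real netif ring
macros or the GPL PV driver's code.]

#include <ntddk.h>

#define RX_RING_SIZE  256
#define RX_BATCH_MAX   64

/* Simplified stand-ins for the rx ring and its response entries. */
typedef struct _RX_RESPONSE {
    USHORT Id;
    USHORT Offset;
    SHORT  Status;          /* length, or a negative error code */
} RX_RESPONSE;

typedef struct _RX_RING {
    KSPIN_LOCK  Lock;
    ULONG       RspCons;    /* next response this end will consume */
    ULONG       RspProd;    /* last response produced by the backend */
    RX_RESPONSE Ring[RX_RING_SIZE];
} RX_RING, *PRX_RING;

/* Runs in a DPC, i.e. already at DISPATCH_LEVEL.  Hold the ring lock only
 * while copying responses into a local batch, then drop it so a DPC on
 * another CPU can start draining the ring while this CPU builds packets
 * and indicates them to NDIS. */
static VOID
RxDrainAndProcess(PRX_RING Rx)
{
    RX_RESPONSE Batch[RX_BATCH_MAX];
    ULONG Count = 0;
    ULONG i;

    KeAcquireSpinLockAtDpcLevel(&Rx->Lock);
    while (Rx->RspCons != Rx->RspProd && Count < RX_BATCH_MAX) {
        Batch[Count++] = Rx->Ring[Rx->RspCons % RX_RING_SIZE];
        Rx->RspCons++;
    }
    KeReleaseSpinLockFromDpcLevel(&Rx->Lock);

    for (i = 0; i < Count; i++) {
        /* ... build a packet from Batch[i] and hand it to NDIS ... */
    }
}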

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

