
RE: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding



 
> > I'll take a look at pft. Does it use futexes, or is it just 
> contending 
> > for spinlocks in the kernel?
> 
> It contends for spinlocks in the kernel.

Sounds like this will be a good benchmark. Does it generate a
performance figure as it runs (e.g. iterations per second or the like)?
  
> > Thanks, I did look at the graphs at the time. As I recall, the 
> > notification mechanism was beginning to look somewhat 
> expensive under 
> > high context switch loads induced by IO. We'll have to see what the 
> > cost
> 
> Yes.  One of the tweaks we are looking to do is change the IO 
> operation from kernel space (responding to an icmp packet 
> happens within the
> kernel) to something that is more IO realistic which would 
> involve more time per operation, like sending a message over 
> tcp (echo server or something like that).

Running a parallel UDP ping-pong test might be good. 
 
> > BTW: it would be really great if you could work up a patch 
> to enable 
> > xm/xend to add/remove VCPUs from a domain.
> 
> OK.  I have an older patch that I'll bring up-to-date.  

Great, thanks.

> Here 
> is a list of things that I think we should do with add/remove.
> 
> 1. Fix cpu_down() to tell Xen to remove the vcpu from its 
> list of runnable domains.  Currently a "down" vcpu only 
> yields its timeslice back.
> 
> 2. Fix cpu_up() to have Xen make the target vcpu runnable again.
> 
> 3. Add cpu_remove() which removes the cpu from Linux, and 
> removes the vcpu in Xen.
> 
> 4. Add cpu_add() which boots another vcpu in Xen and then 
> brings up another cpu in Linux.
> 
> I expect cpu_up/cpu_down to be more light-weight than 
> cpu_add/cpu_remove.
> 
> Does that sound reasonable?  Do we want all four, or can we 
> live with just 1 and 2?

It's been a while since I looked at Xen's boot_vcpu code (which could do
with a bit of refactoring between common and arch anyhow), but I don't
recall there being anything in there that looked particularly expensive.
Having said that, it's only holding down a couple of KB of memory, so
maybe we just need up/down/add.
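Not code for the tree, but the four-operation lifecycle proposed above can be
sketched as a toy state machine. Everything here (the state names, the idea of a
flat vcpu array, the return conventions) is my own illustrative assumption, not
the actual Xen or Linux interface:

```c
#include <assert.h>

/* Toy model of the proposed vcpu lifecycle. VCPU_DOWN models a vcpu
 * that still exists in Xen but is off the runnable list; VCPU_ABSENT
 * models one that has been torn down entirely. */
enum vcpu_state {
    VCPU_ABSENT,    /* no vcpu allocated in Xen */
    VCPU_DOWN,      /* vcpu exists but is not schedulable */
    VCPU_RUNNABLE   /* vcpu exists and is on the runnable list */
};

#define MAX_VCPUS 8
static enum vcpu_state vcpu[MAX_VCPUS];  /* all VCPU_ABSENT initially */

/* 1. cpu_down(): keep the vcpu, but take it off Xen's runnable list. */
static int cpu_down(int v)
{
    if (vcpu[v] != VCPU_RUNNABLE)
        return -1;
    vcpu[v] = VCPU_DOWN;
    return 0;
}

/* 2. cpu_up(): make an existing, downed vcpu runnable again. */
static int cpu_up(int v)
{
    if (vcpu[v] != VCPU_DOWN)
        return -1;
    vcpu[v] = VCPU_RUNNABLE;
    return 0;
}

/* 3. cpu_remove(): remove the cpu from Linux and the vcpu from Xen. */
static int cpu_remove(int v)
{
    if (vcpu[v] == VCPU_ABSENT)
        return -1;
    vcpu[v] = VCPU_ABSENT;
    return 0;
}

/* 4. cpu_add(): boot a fresh vcpu in Xen, then bring the cpu up in Linux. */
static int cpu_add(int v)
{
    if (vcpu[v] != VCPU_ABSENT)
        return -1;
    vcpu[v] = VCPU_RUNNABLE;
    return 0;
}
```

The point of the sketch is the cost asymmetry: up/down only flip scheduler
state on an existing vcpu, while add/remove would additionally have to
allocate or free the vcpu structure, which is why the former pair should be
the light-weight path.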

Thanks,
Ian 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

