
Re: [Xen-devel] Xen 4.2 Release Plan / TODO



> From: Ian Jackson [mailto:Ian.Jackson@xxxxxxxxxxxxx]
> Subject: RE: [Xen-devel] Xen 4.2 Release Plan / TODO
> 
> Dan Magenheimer writes ("RE: [Xen-devel] Xen 4.2 Release Plan / TODO"):
> > After reading libxl.h, I'm not absolutely positive I understand
> > all the conditions that would cause you to label a function as
> > "slow" but I believe all the libxl_tmem_* functions are "fast".
> 
> There are a few operations that make a function necessarily slow in
> the libxl API sense.  These are: xenstore watches; spawning
> subprocesses; anything with a timeout.
> 
> More broadly, any function which is sufficiently slow that a caller
> might reasonably want to initiate it, and then carry on doing
> something else while the function completes, is slow.  So this
> includes any operation which a toolstack might want to parallelise.

Got it.  Thanks.  This is a bit clearer than the comment in libxl.h.
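
If I follow, the distinction also shows up directly in the prototypes:
a slow function takes an ao_how argument so the caller can initiate it
and carry on.  Roughly (a sketch from memory; the authoritative
prototypes are in libxl.h and may differ):

    /* "Fast": completes synchronously before returning; no ao_how. */
    int libxl_tmem_freeze(libxl_ctx *ctx, uint32_t domid);

    /* "Slow": takes a libxl_asyncop_how so the caller can initiate
     * the operation and continue; completion is reported through the
     * libxl event machinery (a NULL ao_how means run synchronously). */
    int libxl_domain_destroy(libxl_ctx *ctx, uint32_t domid,
                             const libxl_asyncop_how *ao_how);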

> > All of them are strictly "call the hypervisor, wait for it to
> > return", and none of the hypercalls (which are actually variations
> > of the one tmem hypercall) require a callback to dom0 or to the
> > calling guest... they all complete entirely inside the hypervisor.
> 
> Right, that sounds good.  I guess you also mean that this will always
> be the case.

Yes AFAICT.
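
Each libxl_tmem_* entry point is a thin synchronous wrapper around
that one hypercall, dispatched through libxc.  The pattern looks
roughly like this (simplified sketch; the real code is in libxl.c, and
xc_tmem_control's exact prototype may differ):

    int libxl_tmem_freeze(libxl_ctx *ctx, uint32_t domid)
    {
        /* One hypercall in, one return out: no xenstore watches,
         * subprocesses, or timeouts anywhere on this path. */
        int rc = xc_tmem_control(ctx->xch, -1 /* all pools */,
                                 TMEMC_FREEZE, domid, 0, 0, 0, NULL);
        if (rc < 0) {
            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
                             "Can not freeze tmem pools");
            return ERROR_FAIL;
        }
        return rc;
    }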

> > libxl_tmem_destroy may take a long time, as it has to walk
> > through and free some potentially very large data structures,
> > but it is only used at domain destruction.
> 
> How long a time are we talking about ?  Would it be a scalability or
> performance problem if an entire host's management toolstack had to
> block, and no other management operations could be performed on any
> domain for any reason, while the tmem destroy takes place ?

See previous reply to IanC... this is moot since (I think)
tmem_destroy will go away.

> > libxl_tmem_list does allocate some memory in userland that the
> > hypercall fills synchronously (with ASCII-formatted statistics and
> > counters maintained entirely by the tmem code in the hypervisor).
> 
> Memory allocation in userland is fine.  I guess we're not talking
> about megabytes here.

A reasonable bound would be on the order of 1K per tmem-enabled guest.
The current code in pyxc_tmem_control enforces a 32K buffer limit.
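
For reference, the list path looks roughly like this (a sketch; the
real code is libxl_tmem_list in libxl.c and pyxc_tmem_control in the
python bindings):

    char *libxl_tmem_list(libxl_ctx *ctx, uint32_t domid, int use_long)
    {
        char _buf[32768];   /* same 32K bound as the python binding */

        /* The hypervisor fills the buffer synchronously with the
         * ASCII-formatted statistics before the hypercall returns. */
        int rc = xc_tmem_control(ctx->xch, -1, TMEMC_LIST, domid,
                                 sizeof(_buf), use_long, 0, _buf);
        if (rc < 0)
            return NULL;

        return strdup(_buf);   /* caller must free */
    }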

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

