
[Xen-devel] Re: Fine-grained proxy resource charging



Hi John,

Below, a few words on HPLabs related project.
Regards, Lucy

> 
> 
> -------- Original Message --------
> Subject:      Fine-grained proxy resource charging
> Date:         Mon, 22 Aug 2005 01:25:19 -0400
> From:         John L Griffin <jlg@xxxxxxxxxx>
> To:   xen-devel@xxxxxxxxxxxxxxxxxxx
> 
>       
> 
> 
> 
> I am looking into how to charge a domain (say, domain "A") for the 
> resources consumed by other service domains (say, B) on behalf of A.  For 
> example, charging A for the CPU cycles consumed by the network I/O domain 
> (B) as it processes packets produced or consumed by A.
> 
> The HP folks recently demonstrated a useful first step (see the Usenix 2005 
> paper and the xen-devel post "Yet another Xen performance monitoring tool" 
> on 2005-08-18): count the number of page swaps between A and B (as well as 
> C and B, D and B, etc.) and use that to approximate how much of B's CPU 
> usage should be assigned to A (and C, D, etc.)
> 
> I'm pursuing more cycle-accurate methods, in anticipation of non-dom0 
> service domains that will do variable amounts of proxy processing, 
> especially where the resources consumed (CPU, memory, I/O, larger 
> primitives) are not correlated with the amount of interdomain traffic 
> between A and B.  For example, a lightweight version of Resource 
> Containers (Banga, OSDI 1999) or similar concepts.
> 


We are continuing our earlier work presented at USENIX'2005 toward more
accurate methods for accounting the CPU resources used by the driver
domain (B, in your example) for I/O processing on behalf of the other
domains (A, C, D) that use the shared driver hosted by domain B.  We do
not attempt to extend this work to the other resources on your list
(memory or other larger primitives), though I/O bandwidth is a natural
extension for resource accounting and can be addressed as well.


We believe that the number of memory page exchanges (between A and B,
C and B, etc.) is a relatively accurate "hint" for splitting the CPU
overhead in B among A, C, etc. There is room for improvement, though:
we are trying to consider the I/O path involved in these operations
and to quantify the CPU overhead contributed by the different
components along that path. Overall, such an accounting approach may
work when domain B hosts a driver of a particular kind. The problem
becomes much harder and more complex when different drivers are hosted
by the same driver domain.
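
To make the proportional split concrete, below is a minimal sketch.
It assumes we already have the driver domain's measured CPU time for
an interval and a per-guest count of page exchanges (e.g. from a
monitoring tool); the function name and data layout are illustrative
only, not taken from any released code.

    # Hypothetical sketch: split the driver domain's CPU time for one
    # measurement interval across guest domains in proportion to the
    # number of memory page exchanges each guest performed with it.

    def charge_driver_cpu(driver_cpu_seconds, page_exchanges):
        """Return {guest: cpu_seconds} charged to each guest.

        driver_cpu_seconds -- CPU time consumed by the driver domain (B)
                              during the interval.
        page_exchanges     -- dict mapping guest (A, C, D, ...) to its
                              number of page exchanges with B.
        """
        total = sum(page_exchanges.values())
        if total == 0:
            # No I/O activity: nothing to charge back to the guests.
            return {guest: 0.0 for guest in page_exchanges}
        return {
            guest: driver_cpu_seconds * count / total
            for guest, count in page_exchanges.items()
        }

    # Example: B used 0.42s of CPU; A exchanged 3000 pages, C 1000.
    charges = charge_driver_cpu(0.42, {"A": 3000, "C": 1000})
    # -> A is charged ~0.315s, C ~0.105s

A per-component breakdown along the I/O path would refine the weights
used here, and a multi-driver domain would need one such split per
driver.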

We are also interested in the problem of allocating the "right" amount
of resources to driver domains. We can see that the performance of
I/O-intensive applications can differ significantly depending on the
amount of resources allocated to the driver domains.

The monitoring tool that we released recently (see the xen-devel post
"Yet another Xen performance monitoring tool" on 2005-08-18) provides
a nice set of metrics to support this kind of study. We are putting
together a small tutorial on how to use it for performance profiling.


> The eventual goal would be for B itself to calculate and specify to
> Xen the amount of processing it does on behalf of A and other
> domains. Looking ahead, a possible next step is for Xen to expose to
> B whether or not A has already exceeded its periodic resource
> allocation, so any schedulers inside B can make smarter decisions:
> for example, not processing packets for A when A has temporarily
> exhausted its allocation.  There aren't many technical details yet;
> my objective here is to synchronize with anyone who's also been
> thinking about this particular problem. 


Yes, we are also looking at how this overhead can be taken into
account during CPU scheduling to make smarter resource allocation
decisions. The trade-off here seems to be in how one enforces such
decisions: either via a new scheduling policy (which requires changes
to Xen) or by changing the next period's resource allocation from
outside Xen based on the previous period's usage (one can use the
xm bvt ... or xm sedf facilities to change the allocation). The choice
may depend on the targeted granularity of resource allocation
decisions.
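
As an illustration of the second option (adjusting the next period's
allocation from outside Xen), here is a rough sketch of the feedback
logic. The per-guest budgets and the charge-back policy are
assumptions for illustration; the resulting shares would then be
applied through the xm bvt/sedf facility mentioned above, whose exact
arguments depend on the scheduler in use.

    # Rough sketch of an out-of-Xen feedback loop: once per accounting
    # period, add the proxy charges computed for the driver domain to
    # each guest's direct CPU usage and shrink the next period's
    # allocation of any guest that exceeded its (assumed) budget.

    PERIOD_BUDGET = {"A": 0.25, "C": 0.25}  # assumed CPU share per period

    def next_period_shares(direct_usage, proxy_charges):
        """Return {guest: share} to request for the next period.

        direct_usage, proxy_charges -- {guest: CPU fraction used this period}.
        """
        shares = {}
        for guest, budget in PERIOD_BUDGET.items():
            used = direct_usage.get(guest, 0.0) + proxy_charges.get(guest, 0.0)
            overrun = max(used - budget, 0.0)
            # Pay back the overrun by lowering the next allocation,
            # never dropping below zero.
            shares[guest] = max(budget - overrun, 0.0)
        return shares

    # Example: A used 0.20 directly plus 0.10 of proxy work in B,
    # overrunning its 0.25 budget by 0.05, so its next share is 0.20.
    print(next_period_shares({"A": 0.20, "C": 0.10}, {"A": 0.10, "C": 0.02}))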

The other interesting question here is how to provide some kind of
performance isolation: for example, limiting the impact of excessive
traffic to one domain (say, A), and the related overhead in the driver
domain (B), on the performance of the other domains.
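
One simple way to picture such isolation inside the driver domain is a
per-guest token bucket that caps how much proxy work B performs for
any single guest, so a flood of traffic from A cannot starve C and D.
This is only an illustrative sketch: the names, rates, and the point
where such a check would hook into the real driver are all assumptions.

    # Illustrative sketch of per-guest throttling inside the driver
    # domain: a token bucket per guest limits how many I/O requests B
    # will process for that guest right now, so excessive traffic from
    # one guest does not consume B's CPU at the expense of the others.

    import time

    class GuestThrottle:
        def __init__(self, requests_per_second, burst):
            self.rate = requests_per_second
            self.burst = burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self):
            """Return True if one more request for this guest may be processed."""
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False  # defer or drop this guest's request for now

    # One throttle per guest domain served by B (rates are illustrative).
    throttles = {"A": GuestThrottle(5000, 500), "C": GuestThrottle(5000, 500)}

    def should_process(guest):
        return throttles[guest].allow()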

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
