
Re: [Xen-devel] xc_domain_getfullinfo() gone


  • To: Andrew Theurer <habanero@xxxxxxxxxx>
  • From: Kip Macy <kip.macy@xxxxxxxxx>
  • Date: Fri, 13 May 2005 07:57:11 -0700
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 13 May 2005 14:57:45 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Never mind; I was thinking of get_vcpu_context, and per-vcpu time is
already available there.

                          -Kip

On 5/13/05, Andrew Theurer <habanero@xxxxxxxxxx> wrote:
> I noticed this was gone from libxc.  Would there be any objection to
> adding xc_domain_get_vcpu_info?  I am interested in querying the
> cpu_time for each vcpu for a utility that does something like:
> 
> vm-stat
> 
> cpu[util]   domN-vcpuM[util] ... domY-vcpuZ[util]
> ---------   --------------------------------------
> cpu0[075.4] dom0-vcpu0[000.3] dom1-vcpu1[075.1]
> cpu1[083.7] dom1-vcpu2[083.7]
> cpu2[069.2] dom1-vcpu3[069.2]
> cpu3[075.9] dom1-vcpu0[075.9]
>                                                     < time interval>
> cpu0[100.0] dom0-vcpu0[000.5] dom1-vcpu1[099.5]
> cpu1[099.8] dom1-vcpu2[099.8]
> cpu2[099.8] dom1-vcpu3[099.8]
> cpu3[099.8] dom1-vcpu0[099.8]
> 
> cpu0[100.0] dom0-vcpu0[000.3] dom1-vcpu1[099.7]
> cpu1[099.7] dom1-vcpu2[099.7]
> cpu2[099.7] dom1-vcpu3[099.7]
> cpu3[099.7] dom1-vcpu0[099.7]
> 
> cpu0[100.0] dom0-vcpu0[000.6] dom1-vcpu1[099.4]
> cpu1[099.7] dom1-vcpu2[099.7]
> cpu2[099.7] dom1-vcpu3[099.7]
> cpu3[101.4] dom1-vcpu0[101.4]
> 
> And while we're on this subject, I wanted to track exec_domain context
> switches per physical cpu, storing the count as ctx_switches in the
> schedule_data struct.  I believe context switches would be a good stat
> to have, for example to expose problems like heavy domU network
> traffic on a single-cpu system.  Any objections or suggestions?
> 
> Thanks,
> 
> -Andrew
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>


