
Re: [Xen-devel] [PATCH v1 00/13] x86/PMU: Xen PMU PV support



On 10/09/13 16:47, Boris Ostrovsky wrote:
> On 09/10/2013 11:34 AM, Jan Beulich wrote:
>> On 10.09.13 at 17:20, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> This version has the following limitations:
>>> * For accurate profiling of dom0/Xen, dom0 VCPUs should be pinned.
>>> * Hypervisor code is only profiled on processors that have running dom0
>>>   VCPUs on them.
>> With that I assume this is an RFC rather than a full-fledged submission?
>
> I was thinking that this would be something like a stage 1 implementation
> (and probably should have mentioned this in the cover letter).
>
> For this stage I wanted to confine all changes on the Linux side to the xen
> subtrees. Properly addressing the above limitations would likely require
> changes in non-xen sources (changes in the perf file format, remote MSR
> access, etc.).

I think having the vpmu stuff for PV guests is a great idea, and from a quick skim through I don't have any problems with the general approach. (Obviously some more detailed review will be needed.)

However, I'm not a fan of this method of collecting perf stuff for Xen and other VMs together in the cpu buffers for dom0. I think it's ugly, fragile, and non-scalable, and I would prefer to see if we could implement the same feature (allowing perf to analyze Xen and other vcpus) some other way. And I would rather not use it as a "stage 1", for fear that it would become entrenched.

I think at the hackathon we discussed the idea of having "fake" cpus -- each of which would correspond to either a pcpu with Xen, or a vcpu of another domain. How problematic is that approach? For phase 1 can we just do vpmu for PV guests (and add hooks to allow domains to profile themselves), and look into how to profile Xen and other VMs as a stage 2?

 -George
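[Editor's note: the dom0 vCPU pinning that Boris's cover letter lists as a prerequisite for accurate dom0/Xen profiling is typically arranged with the standard Xen boot options `dom0_max_vcpus` and `dom0_vcpus_pin`, or at runtime with `xl vcpu-pin`. A hypothetical sketch — the vCPU count of 4 and the grub file path are illustrative only:]

```shell
# Hypothetical snippet for /etc/default/grub on a Debian-style system:
# cap dom0 at 4 vCPUs and pin each one to the matching pCPU at boot.
# dom0_max_vcpus= and dom0_vcpus_pin are standard Xen command-line options.
GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=4 dom0_vcpus_pin"

# Alternatively, pin at runtime with xl, one vCPU per pCPU:
# for i in 0 1 2 3; do xl vcpu-pin Domain-0 $i $i; done
```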

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
