
Re: [PATCH] xen/arm: Introduce pmu_access parameter



On Wed, 1 Sep 2021, Julien Grall wrote:
> On 01/09/2021 14:10, Bertrand Marquis wrote:
> > > On 1 Sep 2021, at 13:55, Julien Grall <julien@xxxxxxx> wrote:
> > > 
> > > Hi,
> > > 
> > > On 01/09/2021 13:43, Michal Orzel wrote:
> > > > Introduce a new Xen command line parameter called "pmu_access".
> > > > The default value is "trap": Xen traps PMU accesses.
> > > > When pmu_access is set to "native", Xen does not trap
> > > > PMU accesses, allowing all guests to access PMU registers.
> > > > However, guests cannot make use of PMU overflow interrupts, as
> > > > the PMU uses a PPI, which Xen cannot route to guests.
> > > > This option is only intended for development and testing purposes.
> > > > Do not use this in a production system.
> > > I am afraid your option is not safe even on a development system, as a
> > > vCPU may move between pCPUs.
> > > 
> > > However, even if we restricted the use to pinned vCPUs *and* dedicated
> > > pCPUs, I am not convinced that exposing a half-baked PMU (the overflow
> > > interrupt would not work) to the guest is the right solution. This likely
> > > means the guest OS would need to be modified, and therefore the usage of
> > > this option is fairly limited.
> > > 
> > > So I think the first steps are:
> > >   1) Make the PPI work. There were some attempts at this in the past on
> > > xen-devel. You could have a look.
> > >   2) Provide PMU bindings.
> > > 
> > > With that in place, we can discuss how to expose the PMU even if it is
> > > unsafe in some conditions.
> > 
> > With those limitations, using the PMU to monitor system performance or to
> > cover some specific use cases is still really useful.
> > We use it to benchmark Xen or some applications, to compare their
> > behaviour to a native system, or to
> > analyse the performance of Xen itself (hypercalls, context switches, etc.).

I also had to write a patch almost exactly like this one for customers
a few months back.
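
For reference, the general shape of such an option is roughly the
sketch below. This is purely illustrative and from memory, not the
actual patch: the custom_param() wiring is how Xen boot options are
normally added, but the variable names and the note about the
HDCR_TPM/HDCR_TPMCR trap bits are my assumptions about how it would
be plumbed through.

/*
 * Illustrative sketch only -- not the actual patch.
 * Parse a "pmu_access" Xen command line option with the two values
 * described above: "trap" (default) and "native".
 */
#include <xen/errno.h>
#include <xen/init.h>
#include <xen/param.h>
#include <xen/string.h>

static bool pmu_access_native;

static int __init parse_pmu_access(const char *s)
{
    if ( !strcmp(s, "trap") )
        pmu_access_native = false;
    else if ( !strcmp(s, "native") )
        pmu_access_native = true;
    else
        return -EINVAL;

    return 0;
}
custom_param("pmu_access", parse_pmu_access);

/*
 * Then, wherever the per-pCPU traps are configured, the PMU trap bits
 * (HDCR_TPM / HDCR_TPMCR, i.e. MDCR_EL2.TPM / MDCR_EL2.TPMCR on
 * arm64) would only be set in the default "trap" case, conceptually:
 *
 *     if ( !pmu_access_native )
 *         hdcr |= HDCR_TPM | HDCR_TPMCR;
 */

Booting with pmu_access=native on the Xen command line then leaves the
PMU registers untrapped for every guest, which is the behaviour the
patch describes.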


> I understand this is useful for some setups and I am not trying to say we
> should not have a way to expose the PMU (even unsafely) in upstream. However,
> I think the option as it stands is too wide (this should be a per-domain
> knob) and we should properly expose the PMU (interrupts, bindings...).

I have never used the PMU directly myself, only provided a patch
similar to this one. But as far as I could tell, the users were fully
satisfied with it, and it had no interrupt support either. Could it be
that interrupts are not actually needed to read the perf counters,
which is probably what users care about?
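
For what it's worth, plain polling of the counters is enough for that
kind of measurement. Something along these lines works from a guest
kernel at EL1 once the accesses are untrapped (my own rough sketch,
not from any real patch; EL0 userspace would additionally need the
guest kernel to set PMUSERENR_EL0):

#include <stdint.h>

/* PMCR_EL0.E (bit 0) enables the counters; PMCR_EL0.C (bit 2) resets
   the cycle counter. */
static inline void pmu_enable_cycle_counter(void)
{
    uint64_t pmcr;

    asm volatile("mrs %0, pmcr_el0" : "=r" (pmcr));
    pmcr |= (1ULL << 0) | (1ULL << 2);
    asm volatile("msr pmcr_el0, %0" : : "r" (pmcr));

    /* PMCNTENSET_EL0.C (bit 31) enables the cycle counter itself. */
    asm volatile("msr pmcntenset_el0, %0" : : "r" (1ULL << 31));
    asm volatile("isb" : : : "memory");
}

static inline uint64_t pmu_read_cycles(void)
{
    uint64_t cycles;

    asm volatile("isb\n\tmrs %0, pmccntr_el0" : "=r" (cycles));
    return cycles;
}

/* Bracket the code under test and take the difference. */
uint64_t pmu_measure(void (*fn)(void))
{
    uint64_t start, end;

    pmu_enable_cycle_counter();
    start = pmu_read_cycles();
    fn();
    end = pmu_read_cycles();

    return end - start;
}

Since PMCCNTR_EL0 is 64 bits wide, wrap-around is a non-issue for
benchmarking, which may be why nobody missed the overflow interrupt.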

Regarding "this should be a per-domain knob": in reality, if you are
doing PMU experiments, you don't care whether one domain or all domains
have access. You are working in a controlled environment, trying to
figure out whether your setup meets the timing requirements. There are
no security or safety concerns (those are different experiments), and
there are no interference problems in the sense of multiple domains
trying to access the PMU at the same time -- you control the domains,
so you decide which one is accessing it.

So I am in favor of a patch like this one because it actually satisfies
our requirements. Even if we had per-domain support and interrupt
support, I don't think they would end up being used.
