
Re: [Xen-devel] RT Xen on ARM - R-Car series



On Fri, Feb 15, 2019 at 12:30 PM Andrii Anisov <andrii.anisov@xxxxxxxxx> wrote:
>
> Hello Julien,

Hi,

> On 14.02.19 19:29, Julien Grall wrote:
> > I am not sure why you are speaking about the current implementation
> > when my point was about the new implementation.
> >
> > I guess your point stands even if we decide to use a guest physical
> > address.
> For sure. The type of guest address used by the hypervisor to reach the
> runstate area and having that area mapped are quite orthogonal questions.
> But, IMHO, they are tightly coupled and might be solved together.

Not really... This is an implementation detail that does not matter
on the OS side.
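
To illustrate the point, here is a minimal sketch of the guest-side
registration, assuming the existing VCPUOP_register_runstate_memory_area
interface (the header and hypercall wrapper names vary per guest OS, so
they are assumptions here). Whatever Xen does internally with the
address is invisible to the guest:

    /* Hypothetical guest code: register a per-vCPU runstate area with Xen.
     * Today the interface takes a guest *virtual* address; moving to a
     * guest physical address would only change what goes into 'addr'. */
    #include <xen/interface/vcpu.h>   /* Linux-style header, assumed */

    static struct vcpu_runstate_info runstate_info;

    static int register_runstate(unsigned int vcpu)
    {
        struct vcpu_register_runstate_memory_area area = { };

        area.addr.v = &runstate_info; /* guest virtual address today */

        /* Whether Xen keeps the area mapped or translates the address on
         * every update is an internal detail the guest never sees. */
        return HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
                                  vcpu, &area);
    }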

>
> > Although, I am still unconvinced of the benefits to keep it
> > mapped.
> My point was reducing context switch time, but those controversial numbers 
> left me confused.

You missed my point. I am not saying performance is unimportant, but
you have to take the drawbacks into account.
I am not entirely happy to keep the runstate area always mapped if it
uses more memory in Xen (vmap space is quite limited)
and does not bring a significant improvement in context switch time.

>
> >> I've measured the raw `update_runstate_area()` execution time. With 
> >> runstate mapped - its execution time is less than my timer tick (120ns), 
> >> with runstate not mapped - I've seen
> >> its execution time as 4 to 8 ticks (480-960ns).
> >
> > Please provide the code you use to measure it. How often do you call it?
> The code to see that is as simple as following:
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 0a2e997..d673d00 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -314,8 +314,16 @@ static void schedule_tail(struct vcpu *prev)
>       context_saved(prev);
>
>       if ( prev != current )
> +    {
> +        s_time_t t = 0;
> +        if (current->domain->domain_id == 1)
> +            t = NOW();

If you want accurate numbers, then the NOW() macro alone is not
sufficient. Reads of the counter (CNTPCT_EL0) can occur speculatively
and out of order relative to other instructions executed on the same
PE. So the PE can potentially read the counter before
update_runstate_area(...) has completed.

You would want to add an isb() at least before NOW() and potentially
after (unless you have a register dependency).
I have a patch adding an isb() in the NOW() macro; I will send it later on.

Note that I have no idea whether the isb()s matter on the processors
you use. But it would be best to have them to make sure the numbers
are accurate.
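
Something along these lines (just a sketch on top of your diff; isb()
is the barrier macro Xen already provides on Arm):

    s_time_t t = 0;

    if ( current->domain->domain_id == 1 )
    {
        isb();    /* stop the counter read from being speculated earlier */
        t = NOW();
    }

    update_runstate_area(current);

    if ( current->domain->domain_id == 1 )
    {
        isb();    /* make sure update_runstate_area() has executed before
                     the second counter read */
        printk("cp = %"PRI_stime"\n", NOW() - t);
    }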

>           update_runstate_area(current);
>
> +        if (current->domain->domain_id == 1)
> +            printk("cp = %"PRI_stime"\n", NOW()-t);

That's only one number. Did you average over multiple context
switches (say 1000)?
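
For instance, something like this (just a sketch; the static counters
are only for illustration) would print an average every 1000 samples
instead of one number per switch:

    static s_time_t total;
    static unsigned int samples;

    total += NOW() - t;
    if ( ++samples == 1000 )
    {
        printk("avg cp = %"PRI_stime"\n", total / samples);
        total = 0;
        samples = 0;
    }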

> +    }
> +
>       /* Ensure that the vcpu has an up-to-date time base. */
>       update_vcpu_system_time(current);
>   }
>
> >> But using TBM, I found that making the runstate mapped with Roger's
> >> patch increases the IRQ latency from ~7000ns to ~7900ns. That is the
> >> opposite of my expectations and of the raw decrease in
> >> `update_runstate_area()` execution time.
> >
> > Raw benchmarks should be taken with a grain of salt. Even more so if you
> > only benchmark a single function, as the context switch may introduce
> > latency/cache eviction.
> >
> > Although, I would have expected the numbers to be the same. What is
> > your configuration here? Do you have other guests running? How many
> > context switches do you have?
>
> The configuration is the same as here [1].

Well, your e-mail contains multiple configurations. From my
understanding, TBM will run exclusively on one CPU, so you will be
looking at context switches between the idle vCPU and TBM.
Do you trap wfi/wfe?

Also, you didn't answer my last question regarding the number of
context switches. How long do you leave the test running?

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

