
Re: [Xen-devel] Notes on stubdoms and latency on ARM



On Fri, 2017-07-07 at 18:02 +0300, Volodymyr Babchuk wrote:
> Hello Dario,
> 
Hi!

> On 20 June 2017 at 13:11, Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> wrote:
> > On Mon, 2017-06-19 at 11:36 -0700, Volodymyr Babchuk wrote:
> > > 
> > > Thanks. Actually, we discussed this topic internally today. The
> > > main concern today is not SMCs and OP-TEE (I will be happy to do
> > > this right in Xen), but vcoprocs and GPU virtualization. Because
> > > of legal issues, we can't put this in Xen. And because of the
> > > nature of the vcpu framework, we will need multiple calls to the
> > > vgpu driver per vcpu context switch.
> > > I'm going to create a worst case scenario, where multiple vcpus
> > > are active and there are no free pcpus, to see how the credit or
> > > credit2 scheduler will call my stubdom.
> > > 
> > 
> > Well, that would be interesting and useful, thanks for offering to
> > do that.
> 
> Yeah, so I did that. 
>
Ok, great! Thanks for doing and reporting about this. :-D

> And I have got some puzzling results. I don't know why,
> but when I have 4 (or fewer) active vcpus on 4 pcpus, my test takes
> about 1 second to execute.
> But if there are 5 (or more) active vcpus on 4 pcpus, it takes from
> 80 to 110 seconds.
> 
I see. I've got just a handful of minutes right now, so I'll only
quickly look at the results and ask a couple of questions. I'll think
about this more in the coming days...

> The details follow, but first let me remind you of my setup.
> I'm testing on an ARM64 machine with 4 Cortex-A57 cores. I wrote a
> special test driver for Linux that calls the SMC instruction 100,000
> times.
> Also, I hacked MiniOS to act as a monitor for DomU. This means that
> Xen traps the SMC invocation and asks MiniOS to handle it.
>
Ok.

> So, every SMC is handled in this way:
> 
> DomU->XEN->MiniOS->XEN->DomU.
> 
Right. Nice work again.
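
Just to make sure I'm picturing the workload correctly: I imagine the
test driver is, more or less, something like the minimal (untested)
sketch below. The /proc file name matches your output, but the SMC
function ID and all the other details are placeholders of mine, not
necessarily what you actually used:

/*
 * Hypothetical smc_bench driver: expose /proc/smc_bench and issue a
 * fixed number of SMC calls when it is read. SMC_BENCH_FN_ID is an
 * arbitrary fast-call ID, purely a placeholder.
 */
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/arm-smccc.h>

#define SMC_BENCH_CALLS 100000
#define SMC_BENCH_FN_ID 0x82000000 /* placeholder SMC function ID */

static int smc_bench_show(struct seq_file *m, void *v)
{
	struct arm_smccc_res res;
	unsigned long i;

	seq_printf(m, "Will call SMC %d time(s)\n", SMC_BENCH_CALLS);

	/* Each iteration traps to EL3 (or, here, to Xen's monitor). */
	for (i = 0; i < SMC_BENCH_CALLS; i++)
		arm_smccc_smc(SMC_BENCH_FN_ID, 0, 0, 0, 0, 0, 0, 0, &res);

	seq_puts(m, "Done!\n");
	return 0;
}

static int smc_bench_open(struct inode *inode, struct file *file)
{
	return single_open(file, smc_bench_show, NULL);
}

static const struct file_operations smc_bench_fops = {
	.owner   = THIS_MODULE,
	.open    = smc_bench_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,
};

static int __init smc_bench_init(void)
{
	return proc_create("smc_bench", 0444, NULL, &smc_bench_fops) ?
		0 : -ENOMEM;
}

static void __exit smc_bench_exit(void)
{
	remove_proc_entry("smc_bench", NULL);
}

module_init(smc_bench_init);
module_exit(smc_bench_exit);
MODULE_LICENSE("GPL");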

> Now, let's get back to results.
> 
> ** Case 1:
> - Dom0 has 4 vcpus and is idle
> - DomU has 4 vcpus and is idle
> - MiniOS has 1 vcpu and is not idle, because its scheduler does
> not call WFI.
> I run the test in DomU:
> 
> root@salvator-x-h3-xt:~# time -p cat /proc/smc_bench
> Will call SMC 100000 time(s)
>
So, given what you said above, this means that the vCPU running this
will block (when calling SMC) and resume (when the SMC is handled)
quite frequently, right?

Also, are you sure (e.g., because of how the Linux driver is done) that
this always happens on one vCPU?

> Done!
> real 1.10
> user 0.00
> sys 1.10

> ** Case 2:
> - Dom0 has 4 vcpus. They are all executing an endless loop with a sh
> one-liner:
> # while : ; do : ; done &
> - DomU has 4 vcpus and is idle
> - MiniOS has 1 vcpu and is not idle, because its scheduler does not
> call WFI.
>
Ah, I see. This is not ideal IMO. It's fine for this POC, of course,
but I guess you've got plans to change this (if we decide to go the
stubdom route)?
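
FWIW, I'd expect the eventual fix to be making the mini-os idle path
block on Xen (or at least execute WFI) instead of spinning. Something
along these lines, although I haven't looked at the actual ARM mini-os
code, so the structure and the helpers here are purely hypothetical:

/* Purely hypothetical sketch of a blocking idle loop for mini-os:
 * when there is nothing runnable, ask Xen to block this vCPU until
 * an event channel notification arrives, instead of busy looping. */
static void idle_loop(void)
{
    for ( ;; )
    {
        if ( !run_queue_has_work() )   /* hypothetical helper */
            HYPERVISOR_sched_op(SCHEDOP_block, NULL); /* or plain wfi */
        else
            schedule();                /* hypothetical helper */
    }
}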

> - In total there are 6 vcpus active
> 
> I run the test in DomU:
> real 113.08
> user 0.00
> sys 113.04
> 
Ok, so there's contention for pCPUs. Dom0's vCPUs are CPU hogs, while,
if my assumption above is correct, the "SMC vCPU" of the DomU is I/O
bound, in the sense that it blocks on an operation --which turns out to
be the SMC call to MiniOS-- then resumes and blocks again almost
immediately.

Since you are using Credit, can you try to disable context switch rate
limiting? Something like:

# xl sched-credit -s -r 0

should work.

This looks to me like one of those typical scenarios where rate
limiting is counterproductive. In fact, every time that your SMC vCPU
is woken up, despite being boosted, it finds all the pCPUs busy, and it
can't preempt any of the vCPUs that are running there, until the rate
limit expires.

That means it has to wait an interval of time that varies between 0 and
1ms. This happens 100000 times, and 1ms*100000 is 100 seconds... which
is roughly how long the test takes in the overcommitted case.
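
BTW, to double check what the current value is, something like:

# xl sched-credit -s

should print the scheduler parameters for the pool. If I remember the
output format correctly, it is more or less this (ratelimit is in
microseconds, so 1000us is the 1ms I mentioned above):

Cpupool Pool-0: tslice=30ms ratelimit=1000us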

> * Case 7
> - Dom0 has 4 vcpus and is idle.
> - DomU has 4 vcpus. Two of them are executing an endless loop with a
> sh one-liner:
> # while : ; do : ; done &
> - MiniOS has 1 vcpu and is not idle, because its scheduler does not
> call WFI.
> - *MiniOS is running in a separate cpu pool with 1 pcpu*:
> Name               CPUs   Sched     Active   Domain count
> Pool-0               3    credit       y          2
> minios               1    credit       y          1
> 
> I run the test in DomU:
> real 1.11
> user 0.00
> sys 1.10
> 
> * Case 8
> - Dom0 has 4 vcpus and is idle.
> - DomU has 4 vcpus. Three of them are executing an endless loop with
> a sh one-liner:
> # while : ; do : ; done &
> - MiniOS has 1 vcpu and is not idle, because its scheduler does not
> call WFI.
> - MiniOS is running in a separate cpu pool with 1 pcpu:
> 
> I run the test in DomU:
> real 100.12
> user 0.00
> sys 100.11
> 
> 
> As you can see, I tried to move MiniOS to a separate cpu pool. But it
> didn't help a lot.
> 
Yes, but it again makes sense. In fact, now there are 3 CPUs in Pool-0,
and they are all kept busy by the 3 DomU vCPUs running endless loops.
So, when the DomU's SMC vCPU wakes up, it again has to wait for the
rate limit to expire on one of them.
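
BTW, just for the record (and in case it's useful to others trying to
reproduce this), I assume the pool was put together with something
along these lines; the pCPU number and the domain name are of course
only illustrative:

# xl cpupool-create name=\"minios\" sched=\"credit\"
# xl cpupool-cpu-remove Pool-0 3
# xl cpupool-cpu-add minios 3
# xl cpupool-migrate minios-stubdom minios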

> I expected that it would be 20% to 50% slower when there are more
> vCPUs than pCPUs. But it is 100 times slower and I can't explain
> this.
> Probably something is very broken in my Xen. But I used 4.9 with
> some hacks to make MiniOS work. I didn't touch the scheduler at all.
> 
If you can, try with rate limiting off and let me know. :-D

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

