
Re: [Xen-devel] [ARM] Native application design and discussion (I hope)





On 05/10/2017 10:56 AM, George Dunlap wrote:
On 09/05/17 19:29, Stefano Stabellini wrote:
On Tue, 9 May 2017, Dario Faggioli wrote:
And it should not be hard to give such code access to the context of
the vCPU that was previously running (in x86, given we implement what
we call lazy context switch, it's most likely still loaded in the
pCPU!).

I agree with Stefano, switching to the idle vCPU is a pretty bad
idea.

The idle vCPU is a fake vCPU on ARM to stick with the common code
(we never leave the hypervisor). In the case of the EL0 app, we want
to change exception level to run the code with lower privilege.

Also IMHO, it should only be used when there is nothing to run, and
not re-purposed for running EL0 apps.

It's already purposed for running when there is nothing to do _or_ when
there are tasklets.

I do see your point about privilege level, though. And I agree with
George that it looks very similar to when, in the x86 world, we tried
to put the infra together for switching to Ring3 to run some pieces of
Xen code.

Right, and just to add to it, context switching to the idle vcpu has a
cost, but it doesn't give us any security benefits whatsoever. If Xen is
going to spend time on context switching, it is better to do it in a
way that introduces a security boundary.

"Context switching" to the idle vcpu doesn't actually save or change any
registers, nor does it flush the TLB.  It's more or less just accounting
for the scheduler.  So it has a cost (going through the scheduler) but
not a very large one.

It depends on the architecture. On ARM we don't yet support lazy context switch, so effectively the cost to "context switch" to the idle vCPU will be quite high.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

