[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH xen v2] xen: arm: fully implement multicall interface.



On Tue, 2014-04-08 at 08:13 +0100, Jan Beulich wrote:
> >>> On 07.04.14 at 17:18, <Ian.Campbell@xxxxxxxxxx> wrote:
> > On Mon, 2014-04-07 at 16:13 +0100, Jan Beulich wrote:
> >> >> On x86 we actually decided quite a long while ago to try to avoid
> >> >> domain_crash_synchronous() whenever possible.
> > 
> > I meant to ask why this was?
> > 
> > I'm supposing that the for(;;) do_softirqs is not considered very
> > nice...
> 
> That plus it misguides you into not writing proper error path code.

Which is exactly what it let me get away with :-p

> >> You'd want to return some error indication, avoid anything else to be
> >> done that might confusion on a dead domain (read: abort the entire
> >> multicall),
> > 
> > Hrm, that will involve frobbing around with the common do_multicall code
> > since it currently doesn't consider the possibility of do_multicall_call
> > failing in a fatal way.
> 
> But then again - is there anything wrong with actually carrying
> out the multicall (with truncated arguments), resulting in the
> domain dying only slightly later?

My concern was that this truncation happens naturally when running on a
32-bit hypervisor (since the actual hypercall implementations take
32-bit arguments internally). Meaning that the issue would be hidden
until you move that kernel to a 64-bit hypervisor (with 64-bit hypercall
arguments internally) at which point it mysteriously starts failing
because some previously unnoticed garbage shows up in the top half of
the argument.

On 32-on-64 x86 you avoid this because the multicall_entry_t contains
32-bit arguments and you have a compat layer which extends to 64-bit
when calling the core hypercall implementation.

On ARM we want our structs to be the same on 32- and 64-bit which means
we effectively have some padding -- and I wanted to avoid guests relying
on the contents of that padding being ignored or otherwise setting up an
ABI trap for the future.

> >>  and on the hypercall exit path the vCPU would be taken off
> >> the scheduler, i.e. you run your normal call tree to completion and
> >> you're guaranteed that the vCPU in question won't make it back into
> >> guest context.
> > 
> > What about other vcpus? I suppose they get nobbled as and when they
> > happen to enter the hypervisor?
> 
> Sure - via domain_shutdown() they all get vcpu_pause_nosync()'ed,
> i.e. they're being forced into the hypervisor if not already there.

Not sure how I missed that, thanks!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.