
Re: [Xen-devel] [PATCH 23/25] argo: signal x86 HVM and ARM via VIRQ



Hi Christopher,

On 04/12/2018 09:03, Christopher Clark wrote:
On Sun, Dec 2, 2018 at 11:55 AM Julien Grall <Julien.Grall@xxxxxxx> wrote:

Hi,

On 01/12/2018 01:33, Christopher Clark wrote:
* x86 PV domains are notified via event channel.

PV guests are known to have the event channel software present in the guest
kernel, so it is fine to depend on and use it.

* x86 HVM domains and all ARM domains are notified via VIRQ.

The intent is to remove the requirement for event channel software to be
installed within these guests in order to use Argo. VIRQ signalling is also
the method that has been in use for the longest period with this hypercall
in both XenClient and OpenXT.

I am a bit confused. vIRQs are based on event channels, so how do you
remove the requirement for event channels?

Are VIRQs delivered via events in all cases? I was under the
impression that was not necessarily so for HVM guests, but I haven't
checked and could well be incorrect.

It depends on what you mean by vIRQs. We seem to use the term for two cases in the hypervisor.

In the context of send_guest_global_virq(), the interrupt is para-virtualized, as it is delivered via an event channel.

On Arm, most virtual interrupts go through the virtual interrupt controller. They can be raised using vgic_inject_irq(), so event channels are not required. I think this is fairly similar for PVH/HVM, but I will let the x86 folks confirm.
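
(For illustration only, a rough sketch of the two delivery paths contrasted above, using the helpers named in this thread; exact signatures vary between Xen versions, and VIRQ_ARGO / GUEST_ARGO_SPI are placeholder names, not anything defined by this series:)

     /* Sketch, not the series' code.  Para-virtualized path: the VIRQ
      * is just a name the guest binds to an event channel, so the
      * notification arrives as an ordinary event upcall. */
     send_guest_global_virq(d, VIRQ_ARGO);

     /* Sketch of the Arm vGIC path: the notification is a virtual
      * interrupt injected through the virtual interrupt controller,
      * so no event channel support is needed in the guest. */
     vgic_inject_irq(d, d->vcpu[0], GUEST_ARGO_SPI, true);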


A bit of context might help with how this multiple-method logic (as
submitted) was arrived at:

1) Both XenClient's original version of v4v and the one used in OpenXT
deliver notifications to guests via VIRQ.
This logic has been working fine for our use cases, so there
hasn't really been a push to switch away from it.

From my understanding, a VIRQ is just a convenience alias for the guest to receive the associated event. The guest only needs to say "I want to bind VIRQ foo". In the other case, you would need to allocate the event channel in the hypervisor and then pass that information to the guest somehow.
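
(As a guest-side illustration: binding a VIRQ is just the standard EVTCHNOP_bind_virq operation. The VIRQ number is whatever this series would define, shown here under the placeholder name VIRQ_ARGO, and setup_handler() is purely illustrative:)

     /* Sketch: bind the (placeholder) Argo VIRQ to a local event
      * channel port, using the standard interface from
      * xen/include/public/event_channel.h. */
     struct evtchn_bind_virq bind = {
         .virq = VIRQ_ARGO,   /* placeholder name */
         .vcpu = 0,
     };

     if ( HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq, &bind) == 0 )
         /* bind.port is the local port the kernel then hooks up to
          * its normal event handling. */
         setup_handler(bind.port);   /* illustrative only */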


2) The last version of v4v that was submitted to xen-devel for
iteration with the Xen community was intended to use event channels
instead, in response to a request from Jan at the time. Given that
expressed preference, I've added that, plumbing it in via the
IPI event method exposed in patch #01 and then used in patch #05 of
the submitted series.

3) Bromium's uxen uses different logic for delivering events to
non-PV guests: an edge-triggered ISA IRQ, along these lines:

     #define ARGO_SIGNAL_ISA_IRQ 8

     /* Assert then immediately deassert to produce an edge: the guest
      * sees a pulse on the ISA IRQ line and has no need to EOI it. */
     hvm_isa_irq_assert(d, ARGO_SIGNAL_ISA_IRQ, NULL);
     hvm_isa_irq_deassert(d, ARGO_SIGNAL_ISA_IRQ);

I'm told that this avoids the need to EOI in the guest, reducing the
VMEXIT load, and using an ISA IRQ avoids some logic in Windows that
requires that a device be detected. I briefly looked into adding this
to Argo, but Linux wasn't immediately happy and I haven't had time to
look into it further given the proximity of the 4.12 release, with
other work still to complete.

Anyway: since method 3 isn't ready to submit, and if VIRQs don't have
an advantage over using event channels directly with respect to
needing in-guest support to function, then I can drop this patch (#23)
and simplify the get_config op (#25), which will leave all
notifications being delivered as events.

Alternatively, if this is about which delivery method is right for
ARM, with some valid reason to retain VIRQ for x86 HVM, then I'm
happy to switch ARM over to delivery via the event method rather
than VIRQ if that makes more sense.

For Arm, 3) looks like the right approach if you want to avoid the dependency on the event channel driver.

Cheers,

--
Julien Grall


 

