
Re: [Xen-devel] [PATCH v2] evtchn: make support for different ABIs tunable



Hi Jan,

On 08.08.19 17:12, Jan Beulich wrote:
On 08.08.2019 16:31, Elnikety, Eslam wrote:

First of all - can you please try to get your reply quoting
improved, such that readers can actually tell your replies
from context? (I didn't check whether perhaps your mail was
a HTML one, and my plain text reading of it discarded the
markings. If so - please don't send HTML mail.)

Oopsy. It was HTML. I will be more diligent going forward :)


On 8. Aug 2019, at 15:27, Jan Beulich <JBeulich@xxxxxxxx> wrote:
On 07.08.2019 19:42, Eslam Elnikety wrote:
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1170,6 +1170,11 @@ long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)

     case EVTCHNOP_init_control: {
         struct evtchn_init_control init_control;
+
+        /* Fail init_control for domains that must use 2l ABI */
+        if ( current->domain->options & XEN_DOMCTL_CDF_disable_fifo )
+            return -EPERM;
+
         if ( copy_from_guest(&init_control, arg, 1) != 0 )
             return -EFAULT;
         rc = evtchn_fifo_init_control(&init_control);

I think the check would better go into evtchn_fifo_init_control(),
at least as long as the setting really is FIFO-centric. Also the

Not sure. It is the FIFO ABI that defines EVTCHNOP_init_control
(not defined in 2L). Short-circuiting the hypercall at this place
seems more appropriate.

I'd expect any 3rd variant to also have a need for an init-control
operation, and hence at that point this would become a hook like
many of the other ops. And at that point the check would need to
move anyway. Hence it's better to put it in its long term
designated place right away.

Moreover, doing copy_from_guest and calling into
evtchn_fifo_init_control only to return error is not optimal.

True, yet from my pov the more logical alternative code structure
is still preferable.

Fair point. On second thought, the additional cycles are not a problem (given that init_control is not on a critical, must-be-performant path). I am now also in favor of moving the check into evtchn_fifo_init_control().
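
For reference, a rough sketch of what that move might look like (assuming the
XEN_DOMCTL_CDF_disable_fifo flag from this patch; this is not the actual v3
change, just an illustration against xen/common/event_fifo.c):

/* xen/common/event_fifo.c -- sketch only */
int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
{
    struct domain *d = current->domain;

    /* Domains restricted to the 2-level ABI may not switch to FIFO. */
    if ( d->options & XEN_DOMCTL_CDF_disable_fifo )
        return -EPERM;

    /* ... existing FIFO init_control handling continues unchanged ... */
}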


Irrespective of these remarks, and along the lines of comments on
the v1 thread, introducing wider control that would also allow
disabling 2-level (for HVM guests) would seem better to me. This
would then hopefully be coded up in a way that would make extending
it straightforward, once a 3rd mechanism appears.

Hmmm... we cannot force guests to call init_control (in order to flip from 2L to FIFO). 
Quoting from [1] 4.4 “If this call (EVTCHNOP_init_control) fails on the boot VCPU, the 
guest should continue to use the 2-level event channel ABI for all VCPUs.” Support for 
2L ABI does not sound like something that can be optional. Once a 3rd mechanism appears, I 
imagine adding a corresponding domctl flag to control it.
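
For illustration, such a flag would sit alongside the existing
XEN_DOMCTL_CDF_* bits in xen/include/public/domctl.h (the bit position below
is only a placeholder, not part of the patch as posted):

/* Disable the FIFO event channel ABI for this domain (2-level only). */
#define _XEN_DOMCTL_CDF_disable_fifo  6
#define XEN_DOMCTL_CDF_disable_fifo   (1U<<_XEN_DOMCTL_CDF_disable_fifo)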

For HVM, event channels as a whole should be optional; we shouldn't
default a possible control for this to off though. For PV, the
2-level interface indeed has to be considered mandatory. Hence
today there are these (theoretically) possible combinations

                PV              HVM
nothing         invalid         valid
2-level only    valid           valid

The patch at hand here is concerned only with the above line. (In fact, v3 will rename the subject to: "evtchn: introduce a knob to on/off FIFO ABI per guest".)

I take it that the concern here is that whatever changes the patch proposes should play nicely with potential future changes introducing such a generic framework. Any concrete suggestions?
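
Purely as an illustration of what such a generic framework could look like
(hypothetical names, not something proposed in this patch), one option is a
bitmap of permitted ABIs passed at domain creation, which a 3rd mechanism
could later extend with another bit:

/* Illustrative sketch only. */
#define XEN_EVTCHN_ABI_2L    (1U << 0)
#define XEN_EVTCHN_ABI_FIFO  (1U << 1)
/* A future 3rd ABI would simply claim the next bit. */

struct xen_domctl_createdomain {
    /* ... existing fields ... */
    /* OR of XEN_EVTCHN_ABI_*; for PV guests, 2L must remain set. */
    uint8_t evtchn_abis;
};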

Thanks,
Eslam

FIFO only       ???             valid
both            valid           valid




Jan

