Re: [Xen-devel] [PATCH RFC v1 55/74] xen/pvshim: forward evtchn ops between L0 Xen and L2 DomU
>>> On 08.01.18 at 17:22, <roger.pau@xxxxxxxxxx> wrote:
> On Mon, Jan 08, 2018 at 09:05:40AM -0700, Jan Beulich wrote:
>> >>> On 04.01.18 at 14:06, <wei.liu2@xxxxxxxxxx> wrote:
>> > +        unsigned long evtchn = xchg(&XEN_shared_info->evtchn_pending[l1], 0);
>> > +
>> > +        __clear_bit(l1, &pending);
>> > +        evtchn &= ~XEN_shared_info->evtchn_mask[l1];
>> > +        while ( evtchn )
>> > +        {
>> > +            unsigned int port = ffsl(evtchn) - 1;
>> > +
>> > +            __clear_bit(port, &evtchn);
>> > +            port += l1 * BITS_PER_LONG;
>>
>> What about a 32-bit client? If that's not intended to be supported,
>> building of such a guest should be prevented (in dom0_build.c).
>
> 32-bit client? You mean building a shim that runs in 32-bit mode? If so
> I haven't really thought of it, but in any case BITS_PER_LONG would be
> OK also in that case?

No, by "client" I mean the (sole) guest of the shim, in the 32-bit case
of which you'd need to use BITS_PER_EVTCHN_WORD() here. But since 32-bit
PV guests are not a problem wrt SP3, I can see why we wouldn't want/need
to support that case. Yet if so, I'd prefer if we did that uniformly, by
e.g. also avoiding the compat complications in the new grant table
wrapper.

>> > +                                                                  \
>> > +        if ( copy_from_guest(&op, arg, 1) != 0 )                  \
>> > +            return -EFAULT;                                       \
>> > +                                                                  \
>> > +        rc = xen_hypercall_event_channel_op(EVTCHNOP_##cmd, &op); \
>> > +        if ( rc )                                                 \
>> > +            break;                                                \
>> > +                                                                  \
>> > +        spin_lock(&d->event_lock);                                \
>>
>> Would the lock better be acquired already before the hypercall
>> above?
>
> I'm not sure I see your point here, certainly L0 already must have
> its own locking. AFAICT the shim just needs to lock event_lock when
> fiddling with event channel data of the guest.

The point isn't to guard L0 in any case, but to deal with racing
requests coming from the client.
Having looked again, the two operations the macro is being used for are
probably fine with the locking left as is, but as you can see from the
other reply sent a few minutes ago (to the original patch) there are
races to be considered in general.

>> > +    case EVTCHNOP_unmask: {
>> > +        struct evtchn_unmask unmask;
>> > +
>> > +        if ( copy_from_guest(&unmask, arg, 1) != 0 )
>> > +            return -EFAULT;
>> > +
>> > +        /* Unmask is handled in L1 */
>> > +        rc = evtchn_unmask(unmask.port);
>> > +
>> > +        break;
>> > +    }
>>
>> Is this really sufficient, without handing anything through to L0?
>> Perhaps it's fine as long as there's no pass-through support here.
>
> For the unmask operation? I think so, if there was a pending event the
> shim will already take care of injecting it to the guest.

Well, as the Linux code (evtchn_2l_unmask()) tells us, certain unmasks
have to go through the hypervisor. I would assume that in the case of
the shim this means that L2 requests need to also be handed through to
L0 whenever they're not being handled entirely locally to L1.

>> > @@ -1030,6 +1055,11 @@ long do_event_channel_op(int cmd,
>> >                                  XEN_GUEST_HANDLE_PARAM(void) arg)
>> >  {
>> >      long rc;
>> >
>> > +#ifdef CONFIG_X86
>> > +    if ( pv_shim )
>> > +        return pv_shim_event_channel_op(cmd, arg);
>> > +#endif
>>
>> Patch it right into the hypercall table instead?
>
> That would only work if the shim is a compile time option, but not a
> run time one, the hypercall table is ro.

Well, yes and no: see nmi_shootdown_cpus() for a precedent of how to do
that without removing the r/o attribute. Not having the hook sit here
would (I assume) allow avoiding compilation of the entire
do_event_channel_op() down the road in the shim-only case. The compiler
may be able to partially do this (omitting the rest of the function),
but my experience is that deferring to the compiler in this regard
often means leaving some traces around.
But anyway - this of course is something which can also be sorted out
later, if it's deemed too complicated for doing right away.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel