Re: [Xen-devel] Device model operation hypercall (DMOP, re qemu depriv)
Jan Beulich writes ("Re: Device model operation hypercall (DMOP, re qemu depriv)"):
> On 26.08.16 at 13:38, <ian.jackson@xxxxxxxxxxxxx> wrote:
> > Another example would be a DMOP that takes (or returns) an event
> > channel number in the calling domain. This would be a problem because
> > there would be nothing to stop qemu from messing about with evtchns
> > which dom0 is using for other purposes (or conversely, there would be
> > no way for the dom0 evtchn driver to know about the returned evtchn
> > number and allow qemu to receive it).
>
> Doesn't that follow the more general "mixing up own and target
> domains" pattern, which is relatively easy to audit for?
Yes, as I understand what you mean by that pattern, indeed.
> > Another might be a DMOP that implicitly grants the target domain some
> > of the calling domain's scheduling priority. (I realise this is quite
> > implausible from a scheduling API POV, but it gives an idea.)
> >
> > Another example is that of course VCPU pool management and VCPU-PCPU
> > pinning must not be available via DMOP.
> >
> > (I write `qemu' here for brevity and clarity, but really I mean any
> > DMOP caller which is supposed to be privileged for the target domain
> > but not generally privileged.)
>
> These all look rather contrived, especially keeping in mind that
> what we mean to exclude right now are accidental violations of
> the intended isolation. I.e. I think for all of those one would need
> to go to some lengths to actually achieve the "goal", but they are
> rather unlikely to be the result of a bug.
Right.
So I think this confirms your conclusion that this "audit" (ie, checking for problems in these kinds of categories) won't be very difficult?
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel