
Re: [Xen-devel] [PATCH 02/11] hvmctl: convert HVMOP_set_pci_intx_level



>>> On 20.06.16 at 16:48, <ian.jackson@xxxxxxxxxxxxx> wrote:
> Daniel De Graaf writes ("Re: [PATCH 02/11] hvmctl: convert 
> HVMOP_set_pci_intx_level"):
>> On 06/20/2016 08:53 AM, Jan Beulich wrote:
>> > Note that this adds validation of the "domain" interface structure
>> > field, which previously got ignored.
>> >
>> > Note further that this retains the hvmop interface definitions as those
>> > had (wrongly) been exposed to non-tool stack consumers (albeit the
>> > operation wouldn't have succeeded when requested by a domain for
>> > itself).
>> >
>> > Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> > ---
>> > TBD: xen/xsm/flask/policy/access_vectors says "also needs hvmctl", but
>> >      I don't see how this has been done so far. With the change here,
>> >      doing two checks in flask_hvm_control() (the generic one always
>> >      and a specific one if needed) would of course be simple, but it's
>> >      unclear how subsequently added sub-ops should then be dealt with
>> >      (which don't have a similar remark).
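
For illustration only, not part of the quoted patch: a minimal sketch of
the kind of interface-structure validation the changelog note above refers
to, assuming a set_pci_intx_level-style structure with domain (PCI segment),
bus, device, intx and level fields. The layout, field widths, bounds and
error code below are assumptions made for this sketch, not the actual patch
contents.

  #include <errno.h>
  #include <stdint.h>

  /* Assumed shape of the interface structure, for illustration only. */
  struct ex_pci_intx_level {
      uint16_t domain;  /* PCI segment; only segment 0 assumed supported */
      uint8_t  bus;     /* full 8-bit range, nothing extra to validate */
      uint8_t  device;  /* 0..31 */
      uint8_t  intx;    /* 0..3, i.e. INTA..INTD */
      uint8_t  level;   /* 0 = deassert, 1 = assert */
  };

  static int ex_check_pci_intx_level(const struct ex_pci_intx_level *op)
  {
      /* The previously ignored "domain" field: accept only segment 0. */
      if ( op->domain )
          return -EINVAL;
      if ( op->device > 31 || op->intx > 3 || op->level > 1 )
          return -EINVAL;
      return 0;
  }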
>> 
>> I am not sure why that remark is there: it seems like it refers to an
>> overall check in the HVM operation hypercall, which does not exist.
>> 
>> There is no reason to have an operation protected by two different
>> access checks, so I think that both the previous and patched code
>> are correct and the "also needs hvmctl" comment should be removed.
>> With that, Acked-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
> 
> This is a slight digression, but is it intended that all of these
> hvmctls are safe to expose to a deprivileged device model process in
> dom0, or to a device model stub domain?

Yes, afaict (they've been exposed the same way before).

Jan
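
As a side note on the XSM discussion above, here is a minimal sketch
(assumptions only, not the committed Flask code) of the "two checks in
flask_hvm_control()" shape Jan mentions: the generic hvmctl permission
checked for every sub-op, plus a sub-op specific permission where the
policy defines one. current_has_perm(), the HVM__* permission names and
the XEN_HVMCTL_set_pci_intx_level constant are taken from context or
invented for this illustration, and the snippet relies on Xen-internal
types rather than being standalone.

  /* Illustrative sketch; not the code under review. */
  static int flask_hvm_control(struct domain *d, unsigned int op)
  {
      int rc;

      /* Generic check: every hvmctl sub-op requires the hvmctl permission. */
      rc = current_has_perm(d, SECCLASS_HVM, HVM__HVMCTL);
      if ( rc )
          return rc;

      /* Additional per-sub-op check, only where the policy names one. */
      switch ( op )
      {
      case XEN_HVMCTL_set_pci_intx_level:
          return current_has_perm(d, SECCLASS_HVM, HVM__PCILEVEL);

      default:
          return 0;
      }
  }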

