
Re: [Xen-devel] [PATCH 04 of 10 v2] xen: allow for explicitly specifying node-affinity



On 12/21/2012 10:17 AM, George Dunlap wrote:
> On 19/12/12 19:07, Dario Faggioli wrote:
>> Make it possible to pass the node-affinity of a domain to the hypervisor
>> from the upper layers, instead of always being computed automatically.
>>
>> Note that this also required generalizing the Flask hooks for setting
>> and getting the affinity, so that they now deal with both vcpu and
>> node affinity.
>>
>> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
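A minimal, self-contained sketch of the "generalized hook" idea described above; it is not the patch itself.  The enum, function names and the check_perm() stand-in are illustrative only: the real hooks live in xen/xsm/flask/hooks.c, and while the getvcpuaffinity -> getaffinity rename is what is discussed below, the set-side name is assumed here.

/* sketch.c - illustrative only, not the actual patch.  Models the idea
 * that one pair of checks (the renamed domain:getaffinity /
 * domain:setaffinity permissions) serves both the vcpu- and
 * node-affinity operations, instead of vcpu-specific permissions. */
#include <stdio.h>

enum affinity_op {
    AFF_GETVCPUAFFINITY,
    AFF_SETVCPUAFFINITY,
    AFF_GETNODEAFFINITY,
    AFF_SETNODEAFFINITY,
};

/* Stand-in for the FLASK permission check done by the real hooks. */
static int check_perm(const char *perm)
{
    printf("flask check: domain:%s\n", perm);
    return 0;                       /* 0 == allowed */
}

/* One generalized hook covering both kinds of affinity. */
static int xsm_affinity(enum affinity_op op)
{
    switch (op) {
    case AFF_GETVCPUAFFINITY:
    case AFF_GETNODEAFFINITY:
        return check_perm("getaffinity");
    case AFF_SETVCPUAFFINITY:
    case AFF_SETNODEAFFINITY:
        return check_perm("setaffinity");
    }
    return -1;
}

int main(void)
{
    /* Node affinity set from the toolstack goes through the same
     * check as vcpu affinity. */
    xsm_affinity(AFF_SETNODEAFFINITY);
    xsm_affinity(AFF_SETVCPUAFFINITY);
    return 0;
}
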
> 
> I can't comment on the XSM stuff -- is any part of the "getvcpuaffinity" 
> stuff for XSM a public interface that needs to be backwards-compatible?  
> I.e., is s/vcpu//; OK from an interface point of view?
> 
> WRT everything else:
> Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

It is an interface used only by the XSM policy itself, which is already
going to see non-backwards-compatible changes in 4.3 due to the IS_PRIV
rework and the addition of new hooks.  FLASK in Xen has not supported
loading a policy whose access vectors do not exactly match the hypervisor's,
because the hypervisor policy is still maintained in the same source tree
as the hypervisor.  So I would treat this like the compatibility between
libxc/libxl and the hypervisor, rather than aim for the level of
compatibility that Linux provides for SELinux policies.

A quick grep of xen-unstable finds one instance of getvcpuaffinity in xen.te
that needs to be changed to getaffinity; with that:
Acked-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
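
For reference, the xen.te rule in question is roughly of the following
shape; the type names and the elided permissions are illustrative, and
only the getvcpuaffinity -> getaffinity rename itself is what the grep
above refers to:

-allow dom0_t domU_t:domain { ... getvcpuaffinity ... };
+allow dom0_t domU_t:domain { ... getaffinity ... };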

>> ---
>> Changes from v1:
>>   * added the missing dummy hook for nodeaffinity;
>>   * let the permission renaming affect flask policies too.
>>
