
Re: [Xen-devel] [Draft C] Boot ABI for HVM guests without a device-model



Hello,

On 04/09/15 at 18:12, Ian Campbell wrote:
> On Fri, 2015-09-04 at 17:47 +0200, Roger Pau Monné wrote:
>> VCPUOP_initialise was never available to HVM guests, so I don't think
>> changing the argument is a problem. However, I understand that, for the
>> sake of clarity, overloading a hypercall this way is not best
>> practice. What about naming it VCPUOP_hvm_initialise?
> 
> If the new interface could support both PV (vcpu_guest_context) and the new
> thing (i.e. with a type field and a union, perhaps), or if the new interface
> can work for PV in some other way, then it's not unheard of to rename the
> existing number with _compat and take over the name with a new number.
> 
> It just needs some compat __XEN_INTERFACE_VERSION__ stuff in the headers,
> like with e.g. __HYPERVISOR_sched_op vs __HYPERVISOR_sched_op_compat.
> 
> (I've not looked at this interface and I don't really remember what the old
> one looks like, so maybe this is an insane idea in this case)

So AFAICS we have 3 options:

1. Overload VCPUOP_initialise, as done in the current series (v6): for
PV guests the hypercall parameter is of type vcpu_guest_context, while
for HVM guests it is of type vcpu_hvm_context.

2. Create a new hypercall (VCPUOP_hvm_initialise), available only to HVM
guests, that takes only vcpu_hvm_context as a parameter.

3. Deprecate the current VCPUOP_initialise and introduce a new
VCPUOP_initialise that takes the following parameter:

union vcpu_context {
        struct vcpu_guest_context pv_ctx;
        struct vcpu_hvm_context hvm_ctx;
};
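
If we follow Ian's suggestion of a type field plus a union, the
parameter for option 3 could look roughly like this. This is only a
sketch: the wrapper struct, the type field, and the discriminator values
are invented for illustration, and the two context structures are
stubbed out rather than copied from the real headers:

```c
#include <stdint.h>

/* Stubs standing in for the real Xen context structures. */
struct vcpu_guest_context { uint64_t flags; /* ... */ };
struct vcpu_hvm_context   { uint32_t mode;  /* ... */ };

/* Hypothetical discriminator values. */
#define VCPU_CONTEXT_PV   0
#define VCPU_CONTEXT_HVM  1

/* Hypothetical wrapper: the type field selects which union member
 * the hypervisor should interpret. */
struct vcpu_initialise_context {
    uint32_t type;                          /* VCPU_CONTEXT_PV or _HVM */
    union {
        struct vcpu_guest_context pv_ctx;   /* valid when type == PV  */
        struct vcpu_hvm_context   hvm_ctx;  /* valid when type == HVM */
    } u;
};
```

A bare union (as in the declaration above) would also work, since the
guest type is already known to the hypervisor; the explicit type field
just makes the caller's intent checkable.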

TBH, I don't have a preference between 2 and 3, but I would like to
reach a consensus before I start implementing either of them.
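
For reference, the sched_op precedent Ian mentions could translate to
option 3 roughly as follows. This is only a sketch: the interface
version cutoff and the new opcode number are invented, and only the old
number (0) matches the real VCPUOP_initialise:

```c
/* Pretend we are an old consumer if no version was requested;
 * the value here is made up for illustration. */
#ifndef __XEN_INTERFACE_VERSION__
#define __XEN_INTERFACE_VERSION__ 0x00040600
#endif

#define VCPUOP_initialise_compat   0   /* old number, old ABI */
#define VCPUOP_initialise         15   /* hypothetical new number, new ABI */

/* Consumers built against an older interface version transparently get
 * the old hypercall number back under the unsuffixed name, mirroring
 * the __HYPERVISOR_sched_op / _compat handling in xen.h. */
#if __XEN_INTERFACE_VERSION__ < 0x00040700
#undef VCPUOP_initialise
#define VCPUOP_initialise VCPUOP_initialise_compat
#endif
```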

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

