
Re: [Xen-devel] [PATCH v2 4/5] libxl: change xs path for pv qemu



On 08/06/15 11:25, Wei Liu wrote:
> On Thu, Jun 04, 2015 at 12:28:18PM +0100, Stefano Stabellini wrote:
>> If QEMU is run just to provide PV backends, change the xenstore path to
>> /local/domain/0/device-model/$DOMID/pv.
>>
>> Add a parameter to libxl__device_model_xs_path to distinguish the device
>> model from the pv backends provider.
>>
>> Store the device model binary path under
>> /local/domain/$DOMID/device-model on xenstore, so that we can fetch it
>> later and retrieve the list of supported options from
>> /local/domain/0/libxl/$device_model_binary, introduced in the previous
>> patch.
>>
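
(For reference, the split between the two subtrees described above boils
down to something like the standalone sketch below. The helper name, the
boolean flag and the example domids are mine for illustration only; this
is not the actual libxl__device_model_xs_path() signature.)

/* Illustrative sketch only -- the helper name and the extra boolean are
 * assumptions for the sake of the example, not the real
 * libxl__device_model_xs_path() interface. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void dm_xs_path(char *buf, size_t len, uint32_t dm_domid,
                       uint32_t domid, int pv_backends_only)
{
    /* A PV-backend-only QEMU goes under .../device-model/$DOMID/pv,
     * a full device model stays under .../device-model/$DOMID. */
    snprintf(buf, len, "/local/domain/%" PRIu32 "/device-model/%" PRIu32 "%s",
             dm_domid, domid, pv_backends_only ? "/pv" : "");
}

int main(void)
{
    char path[128];

    dm_xs_path(path, sizeof(path), 0, 7, 1);
    printf("%s\n", path);   /* /local/domain/0/device-model/7/pv */

    dm_xs_path(path, sizeof(path), 0, 7, 0);
    printf("%s\n", path);   /* /local/domain/0/device-model/7 */

    return 0;
}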
> TBH this protocol works, but it is not very extensible.
>
> I envisaged that we would need to assign $emulator_id to different
> device models when I fixed stubdom, but I never got to it since there
> was no need for multiple emulators.
>
> That is, as an example:
>
> /local/domain/$backend_domid/device-model/$domid/$emulator_id/xxx
>
> That way we can:
>
> 1. Have something like multidev in libxl to wait for several device
>    models to be ready without writing tedious code for every single one.
> 2. Fit into libxl migration stream, which naturally uses $emulator_id to
>    distinguish different emulators. 
>
> The downside of this is we need to add an extra option to QEMU to accept
> the emulator id assigned by toolstack.
>
> Just my two cents.
>
> Wei.

From the XenServer point of view, we already use multiple device models
in certain circumstances, and libxl support for this is one of the many
tasks on the xenopsd/libxl integration todo list.

If a change is going to be made, let's go all the way and get multiple
emulators working properly.

FWIW, Wei's design looks to be sufficient.
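
To make the quoted layout concrete, something along the following lines
is what I have in mind; the "state" leaf and the id numbering are
illustrative assumptions rather than an agreed protocol.

/* Illustrative sketch of the per-emulator layout quoted above; the
 * "state" leaf node and the id assignment are assumptions. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void emu_xs_path(char *buf, size_t len, uint32_t backend_domid,
                        uint32_t domid, unsigned emulator_id,
                        const char *node)
{
    snprintf(buf, len,
             "/local/domain/%" PRIu32 "/device-model/%" PRIu32 "/%u/%s",
             backend_domid, domid, emulator_id, node);
}

int main(void)
{
    char path[128];
    unsigned id;

    /* e.g. emulator 0 is a full device model and emulator 1 a
     * PV-backend-only QEMU; the toolstack would wait on each id's
     * readiness node in turn. */
    for (id = 0; id < 2; id++) {
        emu_xs_path(path, sizeof(path), 0, 7, id, "state");
        printf("%s\n", path);
    }

    return 0;
}

One subtree per toolstack-assigned id means waiting for several device
models is just a loop over the ids, which fits the multidev-style
handling Wei mentions.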

~Andrew



 

