
Re: libxl API changes for 4.2 (Was: Re: [Xen-devel] [PATCH] libxl: do not expose libxenctrl/libxenstore headers via libxl.h)



On Thu, 2011-04-14 at 18:41 +0100, Jim Fehlig wrote:
> Ian Campbell wrote:

> > The specification of specific hvmloader and qemu-dm binaries is also
> > likely to be deprecated soon, the user will just need to ask for old or
> > new qemu and libxl will figure the rest out (it will still be possible
> > to override if desired)
> >   
> 
> Yep, ability to specify device model binary should be retained.  I don't
> know of any users specifying a qemu-dm wrapper via device_model, but
> wouldn't be surprised if they exist.

Although we won't be removing this functionality, it would be
interesting to know what people actually use the wrapper mechanism for.
Perhaps those are things we could integrate as more first-class
capabilities?

We could also switch to running a wrapper by default
(e.g. /etc/xen/scripts/device-model), shipping a version which just
exec's the real binary but allowing folks to modify as they desire...

> > I've also been wondering what can/should be done about the split between
> > libxl_domain_create_info, libxl_domain_build_info and
> > libxl_device_model_info now that they are all bundled together in
> > libxl_domain_config and not exposed directly in the API (since the
> > related functions became internal, that was before 4.1). It seems like
> > there ought to be scope for collapsing those datastructures somewhat but
> > I'm not sure how yet.
> >   
> 
> I wonder if libxl_device_model_info even needs to be exposed?  Generally
> speaking, the domain config consists of
> 
> metadata (name, uuid, description, etc.)
> basic resources (cpu, memory, maxmemory, etc.)
> OS booting info (order, loader, kernel/ramdisk)
> clock/timekeeping (clock offset, timer type, timer tick policy, etc.)
> lifecycle controls (what to do on reboot, shutdown, crash, etc.)
> devices (block, net, framebuffer, PCI, input, serial,
> parallel, etc.)
> 
> All of the device model info can be inferred from this configuration.

I agree, I think there is a lot about the split between
libxl_{create,build,device_model}_info and libxl_domain_build_state
which can be rationalised now that the distinction is largely internal
to the libxl_domain_{create,restore} functions instead of exposed via
the previous libxl_domain_{create,build} API.
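
Just to illustrate the sort of shape Jim's grouping above suggests, a
collapsed configuration could look vaguely like the sketch below. Every
name and field here is made up purely for illustration, it is not a
proposal for the final layout:

#include <libxl.h>

/* Purely illustrative sketch of a collapsed domain configuration,
 * following the grouping quoted above; all names are hypothetical. */
typedef struct {
    /* metadata */
    char *name;
    libxl_uuid uuid;

    /* basic resources */
    int max_vcpus;
    uint64_t target_memkb;
    uint64_t max_memkb;

    /* OS booting info */
    char *kernel;
    char *ramdisk;
    char *cmdline;
    char *boot_order;          /* HVM only */

    /* clock/timekeeping */
    int localtime;
    int timer_mode;

    /* lifecycle controls */
    int on_poweroff;
    int on_reboot;
    int on_crash;

    /* devices */
    int num_disks;
    libxl_device_disk *disks;
    int num_nics;
    libxl_device_nic *nics;
    /* ...vfbs, vkbs, pcidevs, consoles, etc... */
} libxl_domain_config_sketch;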

> BTW, thanks for the heads up on these changes.

No problems. Sorry it's such a long laundry list.

Another thing which came to mind is that each device type we have
exposes a different set of basic operations:

disks have add, del and a list function which returns a list of
libxl_device_disk, plus a libxl_device_disk_getinfo which takes a
libxl_device_disk and returns a libxl_diskinfo.

nics have add, del and a list function which returns a list of
libxl_nicinfo directly with no way to get a list of libxl_device_nic.

vkb and vfb have add, clean_shutdown and hard_shutdown (and *_shutdown
all return ERROR_NI).

PCI has add, remove and shutdown (==remove all devices) plus a couple of
list functions with a different interface style to the disk/nic ones.

Console just has add. In principle the user could perhaps add
additional secondary consoles (perhaps even hotplug them), but in
practice at the moment the usage is all internal to libxl_domain_create.

The disk and nic del functions both take a wait flag which really
translates into !force (wait==0 means nuke the device without
cooperation from the guest, wait==1 means do a graceful/cooperative
remove and wait for it to complete; there is no way to do an async
graceful removal), while the pci remove function has a force parameter.

Obviously I would like to transition this to a consistent set of
interfaces across all devices. Additionally the operations which can
interact with the guest likely need to have asynchronous versions (or
possibly only asynchronous versions, with a sync wrapper). I'm thinking
along the lines of the following for the base set of operations:

add: pretty much as today, but add the option to do it asynchronously.

remove: replaces del and {clean,hard}_shutdown; has forced and
non-forced variants. The non-forced variant can be async.

list: takes a domid and returns a list of libxl_device_foo.

info: takes a libxl_device_foo and fills in a libxl_fooinfo (only exists
for devices where further info is available)
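
Concretely, in header terms the disk case might end up looking
something like the sketch below. The exact signatures, the callback
type and the async mechanism are all made up here purely to illustrate
the shape, not what I'd actually commit to:

#include <stdint.h>
#include <libxl.h>

/* Hypothetical completion callback, standing in for whatever
 * async/event mechanism we end up with. */
typedef void (*libxl_device_async_cb)(void *userdata, int rc);

/* add: much as today, plus an optional async completion callback
 * (NULL == synchronous). */
int libxl_device_disk_add(libxl_ctx *ctx, uint32_t domid,
                          libxl_device_disk *disk,
                          libxl_device_async_cb cb, void *userdata);

/* remove: replaces del and {clean,hard}_shutdown.  force==1 nukes the
 * device without guest cooperation; force==0 is the graceful variant
 * and may complete asynchronously via the callback. */
int libxl_device_disk_remove(libxl_ctx *ctx, uint32_t domid,
                             libxl_device_disk *disk, int force,
                             libxl_device_async_cb cb, void *userdata);

/* list: domid in, allocated array of libxl_device_disk out. */
libxl_device_disk *libxl_device_disk_list(libxl_ctx *ctx, uint32_t domid,
                                          int *num);

/* info: only for device types which have further information. */
int libxl_device_disk_getinfo(libxl_ctx *ctx, uint32_t domid,
                              libxl_device_disk *disk,
                              libxl_diskinfo *diskinfo);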

The existing libxl_device_pci_shutdown (remove all) functionality could
be internal, or we could specify such an API for all devices. Not sure
how useful that really is to libxl users; currently it is only used
internally during domain destruction.

Another thing I'm considering is restricting the IDL's "integer" type to
a signed 31-bit type. At least one language we want to bind (ocaml) has
this restriction on its native int type and I suspect it won't be the
only one. The explicit {u}int{16,32,64} types will remain for cases
where we need the full ranges.
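
(For reference, ocaml's native int is only 31 bits wide on 32-bit
platforms because one bit goes to the runtime's tag, so the range the
IDL "integer" type would guarantee is roughly the one below; the macro
and helper names are made up, this is just to pin down the numbers.)

#include <stdint.h>

/* Hypothetical bounds for the IDL "integer" type if we restrict it to a
 * signed 31-bit range, matching ocaml's native int on 32-bit hosts. */
#define IDL_INTEGER_MIN (-(1 << 30))       /* -1073741824 */
#define IDL_INTEGER_MAX ((1 << 30) - 1)    /*  1073741823 */

/* A binding could then reject out-of-range values when marshalling. */
static inline int idl_integer_in_range(int64_t v)
{
    return v >= (int64_t)IDL_INTEGER_MIN && v <= (int64_t)IDL_INTEGER_MAX;
}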

Ian.

> 
> Regards,
> Jim
> 
> > The topologyinfo datastructure should be a list of tuples, not a tuple
> > of lists.
> >
> > The API seems to expose a bunch of console related datastructures but
> > not much in the way of functions to do anything with them. One of those
> > must be wrong.
> >
> > I think IanJ wants to fixup the event API as well, it's a bit barking at
> > the moment.
> >
> > IanJ is also going to be looking at the handling of storage backends, I
> > expect that is mostly going to be internal to the library but it might
> > have an impact on the API too.
> >
> >   
> >>   Seems best for clients
> >> to target new releases (4.1, 4.2, ...) and expect branch releases
> >> (4.1.1, 4.1.2, ...) to have a stable API?
> >>     
> >
> > That seems like a reasonable expectation to me.
> >
> > Ian.
> >
> >
> >   


