
Re: [Xen-devel] [PATCH,RFC]: Introduce libxl_domain_create()



On Tue, 2010-12-21 at 11:02 +0000, Ian Campbell wrote:
> On Tue, 2010-12-21 at 08:44 +0000, Gianni Tedesco wrote:
> > On Fri, 2010-12-17 at 09:06 +0000, Ian Campbell wrote:
> > > On Tue, 2010-12-14 at 19:00 +0000, Gianni Tedesco wrote:
> > > We probably need IDL support for enumerations and other constants.
> > 
> > Might be a good idea. We also rely on a few xc constants. In the case of
> > the python binding I had been adding them manually. If we did this via
> > IDL it'd be an idea to generate type-safety macros for that stuff too.
> 
> What sort of macros?

Well, just to have them as typed structs, with macros to get/set the
integer values. Beyond that I don't see much point in putting them in
the IDL at all.
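
Something along these lines, say (a rough sketch; all names here are
invented purely to illustrate):

  /* Hypothetical IDL output: one distinct struct per constant
   * family, so the compiler rejects mixing, e.g., a shutdown
   * reason with a domain type; plus get/set macros for the raw
   * integer. */
  typedef struct { int v; } libxl_domain_type;

  #define LIBXL_DOMAIN_TYPE_INIT(x)   { .v = (x) }
  #define libxl_domain_type_val(t)    ((t).v)
  #define libxl_domain_type_set(t, x) ((void)((t).v = (x)))

  /* usage:
   *   libxl_domain_type dt = LIBXL_DOMAIN_TYPE_INIT(1);
   *   int raw = libxl_domain_type_val(dt);
   */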

> > > > +typedef struct {
> > [...]
> > > > +} libxl_domain_config;
> > > 
> > > Should be in IDL so it gets a destructor? Could require adding an Array
> > > construct to handle the foo + num_foo style stuff.
> > 
> > I've thought about that and rejected it because C arrays don't map to
> > anything useful in language bindings. It makes sense to me to keep this
> > as a builtin and use functions to fill these domain creation related
> > structures in for us.
> 
> OK
> 
> > But then what you get is like two versions of:
> >  - libxl_device_add_(nic|block|etc)
> > One for a live domain and one for domain creation.
> > 
> > I have been toying with the idea of using polymorphism (is that what
> > it's called?), so that such a function would multiplex to different
> > implementations depending on whether this is a live domain or a
> > description of a domain for creation. It might need a bit of thinking
> > through as to how it would be used.
> > 
> > Not sure what the others think of that?
> 
> Proper polymorphism is a bit tricky in C, since you can't have multiple
> variations of the same function with different parameters and you simply
> end up with multiple different functions -- i.e. back where you started.
> 
> The need for a version of libxl_device_add_FOO for the create case is
> simply to support automatically extending the array while filling in the
> structure etc? I don't see a useful way to have a function which works
> like this for both live and in-creation domains.

Maybe I'm using the wrong OOP term? But you would have a libxl_domain
struct whose operations are dispatched via virtual methods, either to
update a config struct or a live domain; the input parameters would
always be the same either way.
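
Something like this rough sketch, i.e. a vtable (names invented here,
just to show the shape of it):

  /* One table of operations; one implementation edits an in-memory
   * libxl_domain_config, the other drives a live domain. Callers
   * always go through the same entry point. */
  struct libxl_domain_ops {
      int (*add_nic)(void *state, libxl_device_nic *nic);
      int (*add_disk)(void *state, libxl_device_disk *disk);
  };

  typedef struct {
      const struct libxl_domain_ops *ops;
      void *state; /* libxl_domain_config * or a live-domain handle */
  } libxl_domain;

  static inline int libxl_domain_add_nic(libxl_domain *d,
                                         libxl_device_nic *nic)
  {
      return d->ops->add_nic(d->state, nic);
  }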

> > > > +static pid_t autoconnect_console(libxl_ctx *ctx, int domid)
> > > > +{
> > > [...]
> > > > +}
> > > 
> > > I think console connect should be under toolstack control (i.e. stay in
> > > xl). exec'ing the xenconsole client is only one way of connecting the
> > > console, e.g. xapi might want to launch vncterm instead.
> > > 
> > > I think libxl_domain_create should take a callback function pointer to
> > > connect the console. It's possible that libxl might also provide a
> > > convenience function which launches xenconsole which is suitable for use
> > > as this callback but ultimately it should be the toolstack's decision
> > > what to do here.
> > 
> > I think you're right. I had just been trying to avoid having a mega
> > function with 12 arguments, 6 of them callbacks.
> 
> A structure containing the callback functions is the answer there.

Hmm, my displeasure is not so much with how to pass the arguments as
with the fact that APIs based around a lot of callbacks tend to be
difficult to understand and hard to maintain. They purport to free you
from knowing what order things are done in, but then you end up relying
on that order anyway...
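
For the record, I take the struct-of-callbacks suggestion to mean
something like this (the names are my invention):

  /* Collect the callbacks in one struct instead of growing the
   * argument list; NULL would mean "skip this step". */
  typedef struct {
      /* invoked once the console ring exists; the toolstack decides
       * whether to exec xenconsole, launch vncterm, etc. */
      int (*connect_console)(libxl_ctx *ctx, uint32_t domid, void *user);
      void *user; /* opaque cookie handed back to each callback */
  } libxl_create_callbacks;

  int libxl_domain_create(libxl_ctx *ctx, libxl_domain_config *cfg,
                          const libxl_create_callbacks *cbs,
                          uint32_t *domid_out);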

I have already factored out the other nastiness of the API I proposed
(the flags), but autoconnect seems to be the final sticking point.

> > I had the idea that the original domain_create() was very fragile and I
> > didn't want to move things around. But on the other hand it seems to me
> > that there's no reason to start the console at two different points
> > depending on pv or hvm and perhaps I should just try to move the code
> > around.
> 
> I'm pretty certain Stefano did this deliberately when he introduced
> console support for HVM, though I don't remember the specific
> constraint he found on HVM. It seems to arise from 22100:fde833c66948,
> but the commit message doesn't say why, just "it needs to be this way".
> 
> > Domains start paused anyway so why can't we just:
> > 
> >  libxl_domain_create();
> >  do_console_stuff();
> >  libxl_domain_unpause();
> 
> Not quite because for a PV domain we need to do the console before the
> bootloader runs (so the user can interact with pygrub) and the
> bootloader provides the input to libxl_domain_create().
> 
> So for PV it ends up as
>         libxl_domain_make() // returns a domid
>         do_console_stuff() // launches console client
>         libxl_run_bootloader() // potentially interact with console, return kernel etc
>         libxl_domain_create() // build domain using kernel
>         libxl_domain_unpause() // go go go
> 
> My guess is that there is some reason you can't create the console for
> an HVM guest before libxl_domain_create but I don't specifically know
> why, perhaps qemu needs to be running?
> 
> In theory at least, the do_console_stuff should cause a client to
> start and wait for the server end to appear, rather than insist on
> connecting immediately. It already behaves that way for PV guests, and
> I don't see any fundamental reason HVM couldn't be the same.

Yes, the points above make sense. I think I would rather go down the
road of libxl callers setting up their console stuff before creating the
domain, with libxl providing a wait_for_console_stuff() API. Or, at
worst, give them back control over the bootloader and make domain
creation a two- or three-phase API.
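
From the caller's side that phased flow might look something like this
(a sketch only; the names and signatures are invented for illustration):

  static int create_with_console(libxl_ctx *ctx, libxl_domain_config *cfg)
  {
      uint32_t domid;
      int rc;

      rc = libxl_domain_make(ctx, cfg, &domid);   /* bare, paused domain */
      if (rc) return rc;

      spawn_console_client(domid);                /* toolstack's choice:
                                                     xenconsole, vncterm... */
      libxl_wait_for_console(ctx, domid);         /* proposed: block until
                                                     the console ring is up */

      rc = libxl_run_bootloader(ctx, cfg, domid); /* PV: may interact with
                                                     the console (pygrub) */
      if (rc) return rc;

      rc = libxl_domain_build(ctx, cfg, domid);   /* build with the kernel
                                                     the bootloader chose */
      if (rc) return rc;

      return libxl_domain_unpause(ctx, domid);    /* go go go */
  }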

Gianni




 

