
Re: [Xen-devel] [PATCH v4 --for 4.6 COLOPre 24/25] tools/libxl: move remus state into a seperate structure



On Thu, 2015-07-16 at 12:10 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH v4 --for 4.6 COLOPre 24/25] 
> tools/libxl: move remus state into a seperate structure"):
> > On Wed, 2015-07-15 at 21:50 +0800, Yang Hongyang wrote:
> > > Yes, the checkpoint device is an abstract layer used by both Remus and
> > > COLO. The abstract layer is not aware of Remus or COLO; in the Remus or
> > > COLO code we can use the container of cds to retrieve the Remus/COLO
> > > state.
> > 
> > This is because the cds callbacks receive a
> > libxl__checkpoint_devices_state * but are specific to either Remus or
> > Colo?
> > 
> > I think the usual way to solve that would be for the callback to take a
> > void *data "closure" field, which is registered along with the callbacks
> > and passed to all callbacks, or in this case perhaps you can get away
> > with just including it in the cds itself.
> > 
> > Ian, what do you think?
> 
> This is rather an unusual situation.  Normally there are two patterns:
> 
> 
> 1.
> 
> Things like:
> 
>   static void device_disk_add(libxl__egc *egc, uint32_t domid,
>                            libxl_device_disk *disk,
>                            libxl__ao_device *aodev,
>                            char *get_vdev(libxl__gc *, void *,
>                                           xs_transaction_t),
>                            void *get_vdev_user)
> 
> and similar patterns in much code in libxl and elsewhere.  This is the
> normal case.  (It is of course essential to use this when there are
> multiple call sites, so the void* data pointer might vary.)
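
For reference, here is a minimal standalone sketch of pattern 1 with
made-up names (not the real libxl types): the callback is registered
together with an opaque void* which is later handed back to it verbatim.

  /* Sketch only: hypothetical names, not libxl's. */
  #include <stdio.h>

  typedef void op_done_cb(void *user, int rc);

  struct generic_op {
      op_done_cb *callback;
      void *callback_user;              /* the "closure" pointer */
  };

  static void generic_op_start(struct generic_op *op,
                               op_done_cb *cb, void *user)
  {
      op->callback = cb;
      op->callback_user = user;
      /* ... do the actual work; on completion: */
      op->callback(op->callback_user, 0);
  }

  struct caller_state { const char *name; };

  static void caller_done(void *user, int rc)
  {
      struct caller_state *cs = user;   /* recover the caller's state */
      printf("%s finished, rc=%d\n", cs->name, rc);
  }

  int main(void)
  {
      struct caller_state cs = { "example" };
      struct generic_op op;
      generic_op_start(&op, caller_done, &cs);
      return 0;
  }

Because the void* travels with the callback, the same generic code can
serve call sites whose context types differ.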
> 
> 
> 2.
> 
> Things like (NB not very like real code):
> 
>   struct libxl_some_operation_state {
>     libxl_inner_generic_operation_state igos;
> 
>   void someop_innerop_make_happen(libxl_some_operation_state *sos) {
>     sos->igos.callback = someop_innerop_done;
> 
>   void someop_innerop_done(libxl_inner_generic_operation_state *igos) {
>     sos = CONTAINER_OF(igos);
> 
> Here the callback someop_innerop_done can be sure that CONTAINER_OF is
> correct because the callback is set up only in one place where it is
> obvious that the igos is part of a sos.
> 
> IMO this is an exception to the usual rule that you have to accompany
> the function pointer with a void*.  The exception is justified because
> it is very easy to see that the code is correct.  And, if any mistake
> is made, the setup is unconditional, so it will _always_ get the wrong
> container and go wrong (which will hopefully be spotted in testing).
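
And a standalone sketch of pattern 2, again with made-up names and a
local CONTAINER_OF macro rather than libxl's: the outer state embeds the
generic inner state, and the single completion callback recovers it.

  /* Sketch only: hypothetical names, not libxl's. */
  #include <stddef.h>
  #include <stdio.h>

  #define CONTAINER_OF(ptr, type, member) \
      ((type *)((char *)(ptr) - offsetof(type, member)))

  struct inner_op {
      void (*callback)(struct inner_op *iop, int rc);
  };

  struct outer_op {
      struct inner_op iop;              /* embedded generic state */
      const char *label;
  };

  static void inner_op_run(struct inner_op *iop)
  {
      /* ... do the generic work, then: */
      iop->callback(iop, 0);
  }

  static void outer_inner_done(struct inner_op *iop, int rc)
  {
      /* Safe only because the callback is set in exactly one place,
       * where iop is visibly a member of an outer_op. */
      struct outer_op *oop = CONTAINER_OF(iop, struct outer_op, iop);
      printf("%s done, rc=%d\n", oop->label, rc);
  }

  static void outer_op_start(struct outer_op *oop)
  {
      oop->iop.callback = outer_inner_done;
      inner_op_run(&oop->iop);
  }

  int main(void)
  {
      struct outer_op oop = { .label = "example" };
      outer_op_start(&oop);
      return 0;
  }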
> 
> 
> 
> In this particular situation the plumbing that relates a particular
> callback to the remus or colo state is rather more complicated.  I
> don't think it will be as obvious that the appropriate CONTAINER_OF is
> being used, let alone obvious that it's always the same.
> 
> OTOH for any particular callback the context pointer is supposed to be
> a particular CONTAINER_OF.  It would be nice to write this in the
> code.
> 
> I think in this case the best answer would be:
> 
>   struct libxl__checkpoint_devices_state {
>       void (*postsuspend)(libxl__egc *egc, libxl__remus_device *dev);

ITYM s/libxl__remus_device/libxl__checkpoint_devices_state/ here?

If so then this is pretty much what I meant by "in this case perhaps you
can get away with just including it in the cds itself", but more clearly
explained.

>       void *callbacks_context;
> 
> and in the callback
> 
>   void libxl__remus_devices_postsuspend(libxl__egc *egc,
>                                 libxl__checkpoint_devices_state *cds)
>   {
>       libxl__remus_state *rs = cds->callbacks_context;
>       assert(cds == &rs->cds);
> 
> or some such.
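
As a standalone sketch of that arrangement, with made-up names standing
in for the libxl types: the generic state carries a callbacks_context
pointer, and the Remus-specific callback both uses it and asserts the
expected containment.

  /* Sketch only: hypothetical names, not libxl's. */
  #include <assert.h>
  #include <stdio.h>

  struct checkpoint_devices_state {             /* stands in for cds */
      void (*postsuspend)(struct checkpoint_devices_state *cds);
      void *callbacks_context;
  };

  struct remus_state {                          /* stands in for rs */
      struct checkpoint_devices_state cds;      /* embedded generic state */
      int checkpoint_ms;
  };

  static void remus_devices_postsuspend(struct checkpoint_devices_state *cds)
  {
      struct remus_state *rs = cds->callbacks_context;

      /* Record (and check) that the context really is the container. */
      assert(cds == &rs->cds);

      printf("remus postsuspend, interval %d ms\n", rs->checkpoint_ms);
  }

  static void remus_setup(struct remus_state *rs)
  {
      rs->cds.postsuspend = remus_devices_postsuspend;
      rs->cds.callbacks_context = rs;
  }

  int main(void)
  {
      struct remus_state rs = { .checkpoint_ms = 200 };
      remus_setup(&rs);
      rs.cds.postsuspend(&rs.cds);    /* the generic layer would call this */
      return 0;
  }

The assert costs nothing and documents the intended CONTAINER_OF
relationship at the point of use.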
> 
> 
> Thanks,
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

