
Re: [Xen-devel] [PATCH v5 01/10] libxl: change libxl__ao_device_remove to libxl__ao_device



Ian Jackson wrote:

I've fixed all the other comments, so I'm not going to reply to every one of them :)

+void libxl__initiate_device_remove(libxl__egc *egc,
+                                   libxl__ao_device *aodev)
...
-    libxl__ev_devstate_init(&aorm->ds);
+    libxl__ev_devstate_init(&aodev->ds);

-    rc = libxl__ev_devstate_wait(gc,&aorm->ds, device_remove_callback,
+    rc = libxl__ev_devstate_wait(gc,&aodev->ds, device_backend_callback,
                                   state_path, XenbusStateClosed,
                                   LIBXL_DESTROY_TIMEOUT * 1000);

libxl__ev_devstate_init is not needed before _wait.  (Admittedly
that's there in the code before you touched it.)  In general foo_init
is not needed before foo_do_the_thing_and_call_me_back.
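
For illustration, that means the hunk above can drop the init and keep only
the _wait call.  This is just a minimal sketch reusing the identifiers quoted
above, not the actual patch:

    rc = libxl__ev_devstate_wait(gc, &aodev->ds, device_backend_callback,
                                 state_path, XenbusStateClosed,
                                 LIBXL_DESTROY_TIMEOUT * 1000);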

Given the use of `backend' in function names to refer to what is
manipulated with aodev->ds, perhaps aodev->ds should be called
aodev->backend_ds ?

Done, I've changed it to backend_ds

I was briefly confused about whether `device_backend_cleanup' was a
general cleanup function for a libxl__ao_device (which it isn't).  And
there is of course a frontend state too, which we don't explicitly
deal with here.

How do the frontend and backend directories get removed from
xenstore ?  (I have a feeling I have asked this before but I don't
remember the answer.  Perhaps it could be a comment in the code in
some appropriate place?)

We talked about this, and I've now introduced a function that is always called before the user callback and that removes both the frontend and backend xenstore entries.

So the flow of execution is a little more "linear": we have common entry and exit points, and the user callback is only called from the exit function (see the sketch below).
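
To make that concrete, the exit helper looks roughly like this.  This is only
a sketch: the helper name device_remove_cleanup and the aodev->ao, aodev->dev
and aodev->callback fields are illustrative assumptions, not necessarily what
the patch uses.

    static void device_remove_cleanup(libxl__egc *egc, libxl__ao_device *aodev)
    {
        STATE_AO_GC(aodev->ao);
        libxl_ctx *ctx = libxl__gc_owner(gc);

        /* Remove both halves of the device from xenstore before
         * telling the caller we are done. */
        xs_rm(ctx->xsh, XBT_NULL, libxl__device_frontend_path(gc, aodev->dev));
        xs_rm(ctx->xsh, XBT_NULL, libxl__device_backend_path(gc, aodev->dev));

        /* Common exit point: the user callback is only invoked here. */
        aodev->callback(egc, aodev);
    }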

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

