Re: [Xen-devel] [PATCH v3 03/19] libxc: allocate memory with vNUMA information for PV guest
Wei Liu writes ("[PATCH v3 03/19] libxc: allocate memory with vNUMA information for PV guest"):
...
> diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
> index 07d7224..c459e77 100644
> --- a/tools/libxc/include/xc_dom.h
> +++ b/tools/libxc/include/xc_dom.h
> @@ -167,6 +167,11 @@ struct xc_dom_image {
...
> +    /* vNUMA information */
> +    unsigned int *vnode_to_pnode; /* vnode to pnode mapping array */
> +    uint64_t *vnode_size;         /* vnode size array */

You don't specify the units.  You should probably name the variable
_bytes or _pages or something.

Looking at the allocation algorithm below, it seems to be in
megabytes.  But the domain size is specified in pages.  So AFAICT if
you try to create a domain whose size is not a whole number of
megabytes, it is bound to fail!  Perhaps the vnode memory size should
be in pages too.

> +    unsigned int nr_vnodes;       /* number of elements of above arrays */

Is there some reason to prefer this arrangement, with multiple
parallel arrays, to one with a single array of structs?  (A rough
sketch of the alternative I have in mind is below.)

> +    /* Setup dummy vNUMA information if it's not provided. Note
> +     * that this is a valid state if libxl doesn't provide any
> +     * vNUMA information.
> +     *
> +     * In this case we setup some dummy value for the convenience
> +     * of the allocation code. Note that from the user's PoV the
> +     * guest still has no vNUMA configuration.
> +     */

This arrangement for defaulting makes it difficult to supply only
partial information - for example, to supply the number of vnodes but
allow the system to make up the details.

I have a similar complaint about the corresponding libxl code.  I
think you should decide where you want the defaulting to be, and do
it in a more flexible way in that one place.  Probably, libxl.  (A
sketch of the kind of thing I mean is also below.)
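To illustrate the array-of-structs suggestion (and the units point),
something like the following.  This is purely illustrative - the
struct and field names are made up here, not taken from the patch or
from existing libxc code:

    /* Hypothetical layout - one entry per virtual node. */
    struct xc_dom_vnode_info {
        unsigned int pnode;    /* physical node backing this vnode */
        xen_pfn_t nr_pages;    /* vnode size in pages, not MB;
                                * same units/type as total_pages */
    };

    /* In struct xc_dom_image, instead of the parallel arrays: */
    struct xc_dom_vnode_info *vnuma_nodes; /* array of nr_vnodes entries */
    unsigned int nr_vnodes;

That keeps each node's data together and makes the units explicit in
the field name, so the allocator can compare directly against
total_pages without any conversion.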
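And for the defaulting, the kind of thing I have in mind - again a
made-up, standalone sketch rather than actual libxl code, assuming
nr_vnodes and nr_pnodes are nonzero and the output arrays are
caller-allocated:

    #include <stdint.h>

    /* Hypothetical helper: given only a node count, split the domain's
     * memory evenly (in pages) and assign vnodes to pnodes round-robin.
     * A caller could then override any subset of the result. */
    static void fill_vnuma_defaults(unsigned int nr_vnodes,
                                    uint64_t total_pages,
                                    unsigned int nr_pnodes,
                                    uint64_t vnode_pages[],
                                    unsigned int vnode_to_pnode[])
    {
        unsigned int i;

        for ( i = 0; i < nr_vnodes; i++ )
        {
            /* Even split; fold the remainder into the last vnode so
             * the per-node sizes still sum to total_pages. */
            vnode_pages[i] = total_pages / nr_vnodes;
            if ( i == nr_vnodes - 1 )
                vnode_pages[i] += total_pages % nr_vnodes;

            /* Simple round-robin placement across physical nodes. */
            vnode_to_pnode[i] = i % nr_pnodes;
        }
    }

Doing that once, in one place (probably libxl), would let callers
supply as much or as little of the vNUMA layout as they want, with
the rest filled in consistently.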
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel