Re: [Xen-devel] [PATCH v2 1/7] xen: vNUMA support for guests.
On Wed, 2013-11-13 at 22:26 -0500, Elena Ufimtseva wrote:
> Defines interface, structures and hypercalls for guests that wish
> to retrieve vNUMA topology from Xen.
>
Well, it's not only "for guests that wish to retrieve" the vNUMA
topology, but also for toolstacks that wish to configure it.
> Two subop hypercalls are introduced by this patch:
> XEN_DOMCTL_setvnumainfo to define vNUMA domain topology per domain
> and XENMEM_get_vnuma_info to retrieve that topology by the guest.
>
> Signed-off-by: Elena Ufimtseva <ufimtseva@xxxxxxxxx>
> ---
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> @@ -539,6 +539,7 @@ int domain_kill(struct domain *d)
>          tmem_destroy(d->tmem);
>          domain_set_outstanding_pages(d, 0);
>          d->tmem = NULL;
> +        domain_vnuma_destroy(&d->vnuma);
>          /* fallthrough */
>      case DOMDYING_dying:
>          rc = domain_relinquish_resources(d);
> @@ -1297,6 +1298,15 @@ int continue_hypercall_on_cpu(
>      return 0;
>  }
>
> +void domain_vnuma_destroy(struct domain_vnuma_info *v)
> +{
I think just vnuma_destroy() is fine as a name, consistent with
tmem_destroy(), evtchn_destroy(), etc.
> +    v->nr_vnodes = 0;
> +    xfree(v->vmemrange);
> +    xfree(v->vcpu_to_vnode);
> +    xfree(v->vdistance);
> +    xfree(v->vnode_numamap);
> +}
> +
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> @@ -871,6 +872,87 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>      }
>      break;
>
> +    case XEN_DOMCTL_setvnumainfo:
> +    {
> +        unsigned int i = 0, dist_size;
> +        uint nr_vnodes;
> +        ret = -EFAULT;
> +
> +        /* Already set? */
> +        if ( d->vnuma.nr_vnodes > 0 )
> +            return 0;
> +
> +        nr_vnodes = op->u.vnuma.nr_vnodes;
> +
> +        if ( nr_vnodes == 0 )
> +            return ret;
> +        if ( nr_vnodes * nr_vnodes > UINT_MAX )
> +            return ret;
> +
Mmm... I think these three 'return's ought all to be 'break's, or you'll
never get to execute the common exit path from do_domctl().
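IOW, something like this (untested, just to illustrate the control flow
I mean):

        /* Already set? */
        if ( d->vnuma.nr_vnodes > 0 )
        {
            ret = 0;
            break;
        }

        nr_vnodes = op->u.vnuma.nr_vnodes;

        ret = -EFAULT;
        if ( nr_vnodes == 0 )
            break;
        if ( nr_vnodes * nr_vnodes > UINT_MAX )
            break;

This way 'break' lands on the common exit path below the big switch{},
with ret already holding the error code.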
> +        /*
> +         * If null, vnode_numamap will set default to
> +         * point to allocation mechanism to dont use
> +         * per physical node allocation or this is for
> +         * cases when there is no physical NUMA.
> +         */
> +        if ( guest_handle_is_null(op->u.vnuma.vdistance) ||
> +             guest_handle_is_null(op->u.vnuma.vmemrange) ||
> +             guest_handle_is_null(op->u.vnuma.vcpu_to_vnode) )
> +            goto err_dom;
> +
Sorry, I'm not sure I fully understand the comment: you're saying that
it is ok for vnode_numamap to be NULL, right?
> +        dist_size = nr_vnodes * nr_vnodes;
> +
> +        d->vnuma.vdistance = xmalloc_array(unsigned int, dist_size);
> +        d->vnuma.vmemrange = xmalloc_array(vmemrange_t, nr_vnodes);
> +        d->vnuma.vcpu_to_vnode = xmalloc_array(unsigned int, d->max_vcpus);
> +        d->vnuma.vnode_numamap = xmalloc_array(unsigned int, nr_vnodes);
> +
> +        if ( d->vnuma.vdistance == NULL ||
> +             d->vnuma.vmemrange == NULL ||
> +             d->vnuma.vcpu_to_vnode == NULL ||
> +             d->vnuma.vnode_numamap == NULL )
> +        {
> +            ret = -ENOMEM;
> +            goto err_dom;
> +        }
>
Well, in general, things like just 'err' or 'out' are fine as labels.
However, in this case, since we're inside quite a big switch{}, a bit
more context would be helpful. What about killing the '_dom' part
(which does not really say much) and using a more meaningful prefix?
Also, it's not like the code you're jumping to is executed only on
error, so even the 'err_' part looks incorrect.
Personally, I'd go for something like 'setvnumainfo_out' (see
XEN_DOMCTL_getdomaininfo for reference).
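I.e., a sketch of just the relabelling:

        ret = 0;

    setvnumainfo_out:
        if ( ret != 0 )
        {
            d->vnuma.nr_vnodes = 0;
            xfree(d->vnuma.vdistance);
            xfree(d->vnuma.vmemrange);
            xfree(d->vnuma.vcpu_to_vnode);
            xfree(d->vnuma.vnode_numamap);
        }

with all the 'goto err_dom;' above becoming 'goto setvnumainfo_out;'.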
> +        if ( unlikely(copy_from_guest(d->vnuma.vdistance,
> +                                      op->u.vnuma.vdistance,
> +                                      dist_size)) )
> +            goto err_dom;
> +        if ( unlikely(copy_from_guest(d->vnuma.vmemrange,
> +                                      op->u.vnuma.vmemrange,
> +                                      nr_vnodes)) )
> +            goto err_dom;
> +        if ( unlikely(copy_from_guest(d->vnuma.vcpu_to_vnode,
> +                                      op->u.vnuma.vcpu_to_vnode,
> +                                      d->max_vcpus)) )
> +            goto err_dom;
> +        if ( !guest_handle_is_null(op->u.vnuma.vnode_numamap) )
> +        {
> +            if ( unlikely(copy_from_guest(d->vnuma.vnode_numamap,
> +                                          op->u.vnuma.vnode_numamap,
> +                                          nr_vnodes)) )
> +                goto err_dom;
> +        }
> +        else
> +            for ( i = 0; i < nr_vnodes; i++ )
> +                d->vnuma.vnode_numamap[i] = NUMA_NO_NODE;
> +
> +        /* Everything is good, lets set the number of vnodes */
> +        d->vnuma.nr_vnodes = nr_vnodes;
Put a blank line here.
> +        ret = 0;
> +err_dom:
> +        if ( ret != 0 )
> +        {
> +            d->vnuma.nr_vnodes = 0;
> +            xfree(d->vnuma.vdistance);
> +            xfree(d->vnuma.vmemrange);
> +            xfree(d->vnuma.vcpu_to_vnode);
> +            xfree(d->vnuma.vnode_numamap);
> +        }
> +    }
> +    break;
> +
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> @@ -733,6 +734,41 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>
>          break;
>
> +    case XENMEM_get_vnuma_info:
> +    {
> +        vnuma_topology_info_t mtopology;
> +        struct domain *d;
> +
> +        rc = -EFAULT;
> +        if ( copy_from_guest(&mtopology, arg, 1) )
> +            return -EFAULT;
> +        if ( (d = rcu_lock_domain_by_any_id(mtopology.domid)) == NULL )
> +            return -ESRCH;
> +
> +        if ( (d->vnuma.nr_vnodes == 0) || (d->vnuma.nr_vnodes > d->max_vcpus) )
> +            return EOPNOTSUPP;
>
I think you need to rcu_unlock_xxx() here.
Also, -EOPNOTSUPP (note the '-') ?
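E.g. (untested sketch, reusing your 'vnumaout' label so the RCU
reference is always dropped):

        if ( (d->vnuma.nr_vnodes == 0) ||
             (d->vnuma.nr_vnodes > d->max_vcpus) )
        {
            rc = -EOPNOTSUPP;
            goto vnumaout;  /* rcu_unlock_domain(d) happens there */
        }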
> +
> +        if ( __copy_to_guest(mtopology.vmemrange,
> +                             d->vnuma.vmemrange,
> +                             d->vnuma.nr_vnodes) != 0 )
> +            goto vnumaout;
> +        if ( __copy_to_guest(mtopology.vdistance,
> +                             d->vnuma.vdistance,
> +                             d->vnuma.nr_vnodes * d->vnuma.nr_vnodes) != 0 )
> +            goto vnumaout;
> +        if ( __copy_to_guest(mtopology.vcpu_to_vnode,
> +                             d->vnuma.vcpu_to_vnode,
> +                             d->max_vcpus) != 0 )
> +            goto vnumaout;
> +
> +        if ( __copy_to_guest(mtopology.nr_vnodes, &d->vnuma.nr_vnodes, 1) != 0 )
> +            goto vnumaout;
> +        rc = 0;
> +vnumaout:
Maybe 'vnumainfo_out' ?
> +        rcu_unlock_domain(d);
> +        break;
> +    }
> +
>      default:
>          rc = arch_memory_op(op, arg);
>          break;
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> +/*
> + * XEN_DOMCTL_setvnumainfo: sets the vNUMA topology
> + * parameters a guest may request.
> + */
> +struct xen_domctl_vnuma {
> +    uint32_t nr_vnodes;
> +    uint32_t __pad;
> +    XEN_GUEST_HANDLE_64(uint) vdistance;
> +    XEN_GUEST_HANDLE_64(uint) vcpu_to_vnode;
> +    /* domain memory mapping map to physical NUMA nodes */
> +    XEN_GUEST_HANDLE_64(uint) vnode_numamap;
> +    /*
> +     * memory rages that vNUMA node can represent
                 ^ranges
> +     * If more than one, its a linked list.
> +     */
> +    XEN_GUEST_HANDLE_64(vmemrange_t) vmemrange;
> +};
> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
> index a057069..bc61bab 100644
> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -89,4 +89,14 @@ extern unsigned int xen_processor_pmbits;
>
> extern bool_t opt_dom0_vcpus_pin;
>
> +struct domain_vnuma_info {
> +    uint nr_vnodes;
> +    uint *vdistance;
> +    uint *vcpu_to_vnode;
> +    uint *vnode_numamap;
> +    struct vmemrange *vmemrange;
> +};
> +
I think you can kill the 'domain_' prefix. It's pretty clear this is a
per-domain thing, from the fact that it lives inside struct domain.
> +void domain_vnuma_destroy(struct domain_vnuma_info *v);
> +
Why do you need to declare this function here? Isn't this used only in
domain.c ?
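If it really is only called from domain_kill(), it can just be static
in domain.c, e.g. (sketch, together with dropping the 'domain_'
prefixes suggested above):

    /* xen/common/domain.c -- no declaration needed in domain.h */
    static void vnuma_destroy(struct vnuma_info *v)
    {
        v->nr_vnodes = 0;
        xfree(v->vmemrange);
        xfree(v->vcpu_to_vnode);
        xfree(v->vdistance);
        xfree(v->vnode_numamap);
    }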
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)