[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [XEN][vNUMA][PATCH 3/9] public interface


  • To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • From: Dulloor <dulloor@xxxxxxxxx>
  • Date: Mon, 5 Jul 2010 22:57:55 -0700
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 05 Jul 2010 22:58:50 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On Mon, Jul 5, 2010 at 3:23 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
>> +#ifndef __XEN_PUBLIC_DOM_NUMA_X86_H__
>> +#define __XEN_PUBLIC_DOM_NUMA_X86_H__
>> +
>> +/* struct xc_cpumask : static structure */
>> +#define XEN_CPUMASK_BITS_PER_BYTE 8
>> +#define XEN_CPUMASK_BITS_TO_BYTES(bits) \
>> +    (((bits)+XEN_CPUMASK_BITS_PER_BYTE-1)/XEN_CPUMASK_BITS_PER_BYTE)
>>
>> +#define XEN_MAX_VCPUS 128
>> +#define XEN_CPUMASK_DECLARE_BITMAP(name,bits) \
>> +    uint8_t name[XEN_CPUMASK_BITS_TO_BYTES(bits)]
>> +struct xen_cpumask{ XEN_CPUMASK_DECLARE_BITMAP(bits, XEN_MAX_VCPUS); };
>> +#define XEN_CPUMASK_BITMAP(maskp) ((maskp)->bits)
>
> What are xc_cpumask (a libxc concept) related definitions doing in a
> hypervisor public header? These aren't even used in this header file. Below
> I suggest a vcpu_to_vnode[] array, which probably gets rid of the need for
> this bitmask stuff anyway.

That was a stale comment referring to xc_cpumask .. sorry!
I did consider a vcpu_to_vnode array, but we use the bitmask in hvm_info
anyway (with vcpu_online), so I thought I could at least fold them into a
single structure. I can change that if you insist.
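For what it's worth, the vcpu_to_vnode[] alternative Keir suggests could look roughly like the sketch below. All names and sizes here are illustrative only, not part of any proposed interface:

```c
#include <assert.h>
#include <stdint.h>

#define NR_VCPUS  8
#define NR_VNODES 2

/* Hypothetical alternative to a per-vnode cpumask: one byte per
 * vcpu giving the vnode that vcpu belongs to. */
struct numa_info_sketch {
    uint8_t nr_vcpus;
    uint8_t nr_vnodes;
    uint8_t vcpu_to_vnode[NR_VCPUS];
};

/* Recover the per-vnode vcpu count from the flat array. */
static unsigned vcpus_on_vnode(const struct numa_info_sketch *ni, uint8_t vnode)
{
    unsigned i, count = 0;
    for (i = 0; i < ni->nr_vcpus; i++)
        if (ni->vcpu_to_vnode[i] == vnode)
            count++;
    return count;
}
```

The per-vnode mask is still derivable from the array, so nothing is lost by dropping the bitmap.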

>
>> +#define XEN_MAX_VNODES 4
>
> A small number to be statically defined. Better to make your structure
> extensible I think, perhaps including pointers out to vnode-indexed arrays?
This structure is passed in the hvm_info page. Should I use offset/len pairs
for these dynamically sized, vnode-indexed arrays?
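If offsets are used, the layout might be sketched as below: the fixed header carries byte offsets to trailing vnode-indexed arrays, so no XEN_MAX_VNODES limit is needed. This is purely illustrative (field names and widths are assumptions), assuming the arrays trail the fixed header within the hvm_info page:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical extensible layout: offsets are bytes from the start
 * of the structure to the trailing, dynamically sized arrays. */
struct numa_info_hdr {
    uint8_t  version;
    uint8_t  nr_vnodes;
    uint16_t vnode_info_off;      /* -> nr_vnodes entries */
    uint16_t vnode_distance_off;  /* -> nr_vnodes*nr_vnodes entries */
};

static const uint8_t *vnode_distance(const void *base)
{
    const struct numa_info_hdr *h = base;
    return (const uint8_t *)base + h->vnode_distance_off;
}
```

A consumer only needs the header to locate each array, and new arrays can be appended later without breaking older readers.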

>
>> +/* vnodes are 1GB-aligned */
>> +#define XEN_MIN_VNODE_SHIFT (30)
>> +
>> +struct xen_vnode_info {
>> +    uint8_t vnode_id;
>> +    uint8_t mnode_id;
>
> How do vnodes and mnodes differ? Why should a guest care about or need to
> know about both, whatever they are?
vnode_id is the node id in the guest, and mnode_id refers to the real
(machine) node it maps to. Actually, I don't need vnode_id; I will take it out.

>
>> +    uint32_t nr_pages;
>
> Not an address range? Is that implicitly worked out somehow? Should be
> commented, but even better just a <start,end> range explicitly given?

The node address ranges are assumed to be contiguous and increasing. I will
change that to explicit <start,end> ranges.
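The <start,end> form might look something like this sketch (pfn-based half-open range; the names are illustrative, not the final interface):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-vnode entry with an explicit pfn range instead of
 * an implicit, contiguous nr_pages accounting. */
struct vnode_range_sketch {
    uint8_t  mnode_id;     /* machine node backing this vnode */
    uint64_t start_pfn;    /* inclusive */
    uint64_t end_pfn;      /* exclusive */
};

static uint64_t vnode_nr_pages(const struct vnode_range_sketch *v)
{
    return v->end_pfn - v->start_pfn;
}
```

An explicit range also leaves room for holes between vnodes, which the implicit contiguous scheme cannot express.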

>
>> +    struct xen_cpumask vcpu_mask; /* vnode_to_vcpumask */
>> +};
>
> Why not have a single integer array vcpu_to_vnode[] in the main
> xen_domain_numa_info structure?

No specific reason, except that all the vnode-related info is
folded into a single structure. I will change that if you insist.

>
>> +#define XEN_DOM_NUMA_INTERFACE_VERSION  0x01
>> +
>> +#define XEN_DOM_NUMA_CONFINE    0x01
>> +#define XEN_DOM_NUMA_SPLIT      0x02
>> +#define XEN_DOM_NUMA_STRIPE     0x03
>> +#define XEN_DOM_NUMA_DONTCARE   0x04
>
> What should the guest do with these? You're rather light on comments in this
> critical interface-defining header file.
I will add comments. The intent is to share this information with the hypervisor
and with PV guests (for ballooning).
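The comments might end up along these lines. This is my paraphrase of the intended placement semantics as I understand them from the patch series, not any final wording:

```c
/* Hypothetical commented version of the placement-policy values. */
#define XEN_DOM_NUMA_CONFINE    0x01 /* confine memory/vcpus to one node */
#define XEN_DOM_NUMA_SPLIT      0x02 /* partition memory/vcpus across vnodes */
#define XEN_DOM_NUMA_STRIPE     0x03 /* interleave memory across nodes */
#define XEN_DOM_NUMA_DONTCARE   0x04 /* no placement preference */
```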

>
>> +struct xen_domain_numa_info {
>> +    uint8_t version;
>> +    uint8_t type;
>> +
>> +    uint8_t nr_vcpus;
>> +    uint8_t nr_vnodes;
>> +
>> +    /* XXX: hvm_info_table uses 32-bit for high_mem_pgend,
>> +     * so we should be fine 32-bits too*/
>> +    uint32_t nr_pages;
>
> If this is going to be visible outside HVMloader (e.g., in PV guests) then
> just make it a uint64_aligned_t and be done with it.

Will do that.
>
>> +    /* Only (nr_vnodes) entries are filled */
>> +    struct xen_vnode_info vnode_info[XEN_MAX_VNODES];
>> +    /* Only (nr_vnodes*nr_vnodes) entries are filled */
>> +    uint8_t vnode_distance[XEN_MAX_VNODES*XEN_MAX_VNODES];
>
> As suggested above, make these pointers out to dynamic-sized arrays. No need
> for XEN_MAX_VNODES at all.

In general, I realise I should add more comments.
>
>  -- Keir
>
>> +};
>> +
>> +#endif
>
> On 05/07/2010 09:52, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>
>> oops .. sorry, here it is.
>>
>> -dulloor
>>
>> On Mon, Jul 5, 2010 at 12:39 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
>> wrote:
>>> This patch is incomplete.
>>>
>>>
>>> On 03/07/2010 00:54, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>>>
>>>> Implement the structure that will be shared with hvmloader (with HVMs)
>>>> and directly with the VMs (with PV).
>>>>
>>>> -dulloor
>>>>
>>>> Signed-off-by : Dulloor <dulloor@xxxxxxxxx>
>>>
>>>
>>>
>
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel



