Re: [Xen-devel] [XEN][vNUMA][PATCH 3/9] public interface

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [XEN][vNUMA][PATCH 3/9] public interface
From: Dulloor <dulloor@xxxxxxxxx>
Date: Tue, 6 Jul 2010 10:52:30 -0700
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 06 Jul 2010 10:53:29 -0700
In-reply-to: <C858E6AE.197BE%keir.fraser@xxxxxxxxxxxxx>
References: <AANLkTinMEypyPK_R6eBgx_7ar95SUKQWSHczEhINDuzQ@xxxxxxxxxxxxxx> <C858E6AE.197BE%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

Thanks for the comments. I will make changes over the weekend and post
v2 patches.

thanks
dulloor

On Tue, Jul 6, 2010 at 5:57 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> On 06/07/2010 06:57, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>
>>> What are xc_cpumask (a libxc concept) related definitions doing in a
>>> hypervisor public header? These aren't even used in this header file. Below
>>> I suggest a vcpu_to_vnode[] array, which probably gets rid of the need for
>>> this bitmask stuff anyway.
>>
>> Stale comment with xc_cpumask .. sorry!
>> I did think of the vcpu_to_vnode array, but then we use the bitmask in
>> hvm_info anyway (with vcpu_online). I thought I could at least fold them
>> into a single structure. I could change that if you insist.
>
> I think overall vnode_to_vcpu[] is a better way round, unless the per-node
> vcpu maps are really particularly handy for some reason.
>
>>> A small number to be statically defined. Better to make your structure
>>> extensible I think, perhaps including pointers out to vnode-indexed arrays?
>> This structure is passed in the hvm_info page. Should I use offset/len for
>> these dynamically-sized, vnode-indexed arrays?
>
> The 'hvm_info page' is a slightly restrictive concept really. Actually the
> hvm_info data gets plopped down at a fixed location below 1MB in the guest's
> memory map, and you can just extend from there even across a page boundary.
> I would simply include pointers out to the dynamically-sized arrays; and
> their sizes should be implicit given nr_vnodes.
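
A minimal sketch of the kind of extensible layout being suggested here, with
the variable-length arrays located by offsets from the fixed header; the field
names below are illustrative assumptions, not the actual patch:

    struct xen_domain_numa_info {
        uint8_t  version;
        uint8_t  type;          /* XEN_DOM_NUMA_* placement type */
        uint8_t  nr_vcpus;
        uint8_t  nr_vnodes;
        /* Byte offsets, from the start of this structure, of the
         * dynamically-sized arrays that follow it in guest memory:
         *   struct xen_vnode_info vnode_info[nr_vnodes];
         *   uint8_t vnode_distance[nr_vnodes * nr_vnodes];
         *   uint8_t vcpu_to_vnode[nr_vcpus];
         * Their lengths are implicit given nr_vnodes and nr_vcpus. */
        uint32_t vnode_info_off;
        uint32_t vnode_distance_off;
        uint32_t vcpu_to_vnode_off;
    };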
>
>>> How do vnodes and mnodes differ? Why should a guest care about or need to
>>> know about both, whatever they are?
>> vnode_id is the node-id in the guest and mnode_id refers to the real node
>> it maps to. Actually I don't need vnode_id. Will take that out.
>
> Yes, that's a completely pointless and unnecessary distinction.
>
>>>
>>>> +    uint32_t nr_pages;
>>>
>>> Not an address range? Is that implicitly worked out somehow? Should be
>>> commented, but even better just a <start,end> range explicitly given?
>>
>> The node address ranges are assumed contiguous and increasing. I will
>> change that to <start,end> ranges.
>
> Thanks.
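
For illustration, the per-vnode entry with an explicit range might then look
something like the sketch below; the field names and the use of page-frame
units are assumptions, not the actual patch:

    struct xen_vnode_info {
        uint8_t mnode_id;            /* physical node backing this vnode */
        uint64_aligned_t start_pfn;  /* first page frame of the vnode's range */
        uint64_aligned_t end_pfn;    /* exclusive end of the range */
    };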
>
>>>
>>>> +    struct xen_cpumask vcpu_mask; /* vnode_to_vcpumask */
>>>> +};
>>>
>>> Why not have a single integer array vcpu_to_vnode[] in the main
>>> xen_domain_numa_info structure?
>>
>> No specific reason, except that all the vnode-related info is
>> folded into a single structure. I will change that if you insist.
>
> Personally I think it would be neater to change it. A whole bunch of
> cpumask machinery disappears.
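
For illustration only: with a flat vcpu_to_vnode[] array the per-node vcpu
sets can be rebuilt on demand, so no xen_cpumask needs to be carried in the
interface. A hypothetical helper (nothing like this is in the patch itself):

    /* Count the vcpus assigned to a given vnode from the flat map. */
    static unsigned int vcpus_on_vnode(const uint8_t *vcpu_to_vnode,
                                       unsigned int nr_vcpus,
                                       unsigned int vnode)
    {
        unsigned int i, count = 0;
        for ( i = 0; i < nr_vcpus; i++ )
            if ( vcpu_to_vnode[i] == vnode )
                count++;
        return count;
    }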
>
>  -- Keir
>
>>>
>>>> +#define XEN_DOM_NUMA_INTERFACE_VERSION  0x01
>>>> +
>>>> +#define XEN_DOM_NUMA_CONFINE    0x01
>>>> +#define XEN_DOM_NUMA_SPLIT      0x02
>>>> +#define XEN_DOM_NUMA_STRIPE     0x03
>>>> +#define XEN_DOM_NUMA_DONTCARE   0x04
>>>
>>> What should the guest do with these? You're rather light on comments in this
>>> critical interface-defining header file.
>> I will add comments. The intent is to share this information with the
>> hypervisor and PV guests (for ballooning).
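
As a sketch of the sort of comments being asked for, the placement types could
be annotated along the following lines; the one-line summaries are guesses
from the names and would need to be confirmed in the v2 patches:

    #define XEN_DOM_NUMA_CONFINE    0x01  /* keep the whole domain on one node */
    #define XEN_DOM_NUMA_SPLIT      0x02  /* split memory/vcpus across vnodes */
    #define XEN_DOM_NUMA_STRIPE     0x03  /* stripe domain memory across nodes */
    #define XEN_DOM_NUMA_DONTCARE   0x04  /* no placement preference */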
>>
>>>
>>>> +struct xen_domain_numa_info {
>>>> +    uint8_t version;
>>>> +    uint8_t type;
>>>> +
>>>> +    uint8_t nr_vcpus;
>>>> +    uint8_t nr_vnodes;
>>>> +
>>>> +    /* XXX: hvm_info_table uses 32-bit for high_mem_pgend,
>>>> +     * so we should be fine 32-bits too*/
>>>> +    uint32_t nr_pages;
>>>
>>> If this is going to be visible outside HVMloader (e.g., in PV guests) then
>>> just make it a uint64_aligned_t and be done with it.
>>
>> Will do that.
>>>
>>>> +    /* Only (nr_vnodes) entries are filled */
>>>> +    struct xen_vnode_info vnode_info[XEN_MAX_VNODES];
>>>> +    /* Only (nr_vnodes*nr_vnodes) entries are filled */
>>>> +    uint8_t vnode_distance[XEN_MAX_VNODES*XEN_MAX_VNODES];
>>>
>>> As suggested above, make these pointers out to dynamic-sized arrays. No need
>>> for XEN_MAX_VNODES at all.
>>
>> In general, I realise I should add more comments.
>>>
>>>  -- Keir
>>>
>>>> +};
>>>> +
>>>> +#endif
>>>
>>> On 05/07/2010 09:52, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>>>
>>>> oops .. sorry, here it is.
>>>>
>>>> -dulloor
>>>>
>>>> On Mon, Jul 5, 2010 at 12:39 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
>>>> wrote:
>>>>> This patch is incomplete.
>>>>>
>>>>>
>>>>> On 03/07/2010 00:54, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>>>>>
>>>>>> Implement the structure that will be shared with hvmloader (with HVMs)
>>>>>> and directly with the VMs (with PV).
>>>>>>
>>>>>> -dulloor
>>>>>>
>>>>>> Signed-off-by: Dulloor <dulloor@xxxxxxxxx>
>>>>>
>>>>>
>>>>>
>>>
>>>
>>>
>
>
>
