To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: Tue, 27 Nov 2007 09:00:44 +0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C3718C3B.10C5C%Keir.Fraser@xxxxxxxxxxxx>
References: <474BE6C2.76E4.0078.0@xxxxxxxxxx> <C3718C3B.10C5C%Keir.Fraser@xxxxxxxxxxxx>
>>> Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> 27.11.07 09:56 >>>
>On 27/11/07 08:43, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>> I think page allocation in this path isn't nice, at least not without a
>> success guarantee (not least because netback doesn't check return values).
>> I would therefore rather see a solution that places the burden of ensuring
>> accessibility on the producer (netback) of the page, and fails the transfer
>> if the destination domain can't access the page (whether to be nice and try
>> an allocate-and-copy operation here is a secondary thing).
>> 
>> Netback would then need to determine the address size of netfront's domain
>> (just like blkback and blktap do, except that HVM domains should also be
>> treated as not requiring address restriction), and have two pools of pages
>> for use in transfers - one unrestricted and one limited to 37 address bits
>> (the two could be folded for resource efficiency if the machine has less
>> than 128G). Besides that, netback would also start checking return values
>> of the multicall pieces.
>
>I don't get how your netback approach works. The pages we transfer do not
>originate from netback, so it has little control over them. And, even if it
>did, when we allocate pages for network receive we do not know which
>domain's packet will end up in each buffer.

Oh, right, I mixed up old_mfn and new_mfn in netbk_gop_frag(). Nevertheless,
netback could take care of this by doing the copying there, as at that point it
already knows the destination domain.
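
To illustrate, a rough sketch of what such a fallback might look like on the
netback side (names like mfn_reachable_by() and the exact flow here are
assumptions for illustration only, not the real netback code):

#include <linux/gfp.h>
#include <linux/mm.h>

/* Sketch only: mfn_reachable_by() is a hypothetical helper asking
 * whether the destination domain can address the page's MFN. */
static struct page *netbk_copy_if_unreachable(domid_t domid,
                                              struct page *page)
{
    struct page *copy;

    /* HVM domains need no restriction; PV domains may be limited,
     * e.g. to 37 address bits (128G). */
    if (mfn_reachable_by(domid, page_to_mfn(page)))
        return page;

    /* Allocate a page the destination can reach and copy into it;
     * GFP_DMA32 stands in here for "below the domain's limit". */
    copy = alloc_page(GFP_ATOMIC | GFP_DMA32);
    if (copy == NULL)
        return NULL;  /* caller must fail the transfer */

    copy_page(page_address(copy), page_address(page));
    __free_page(page);
    return copy;
}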

>Personally I think doing it in Xen is perfectly good enough for supporting
>this very out-of-date network receive mechanism.

I'm not just concerned about netback here. The interface exists, and other
users might show up (or exist already). Whether it would be acceptable for
them to do allocation and copying is unknown. You'd therefore either need a
way to prevent future users of the transfer mechanism, or set proper
requirements on its use. I think that placing extra requirements on the user
of the interface is better than introducing extra (and possibly hard to
reproduce, recognize, or debug) failure modes.
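
Concretely, the requirement would boil down to a check like the following in
the hypervisor's transfer path, failing cleanly instead of allocating behind
the producer's back (again just a sketch: is_hvm_domain() exists, but
domain_max_addressable_mfn() is a hypothetical placeholder for the per-domain
address-width lookup):

#include <xen/sched.h>
#include <xen/errno.h>

/* Sketch: refuse the transfer if the destination cannot address
 * the page, rather than allocating-and-copying inside Xen. */
static int check_transfer_mfn(struct domain *d, unsigned long mfn)
{
    /* HVM guests go through the P2M; no address restriction needed. */
    if ( is_hvm_domain(d) )
        return 0;

    /* PV guests may be limited, e.g. to 37 address bits (128G). */
    if ( mfn > domain_max_addressable_mfn(d) )
        return -ERANGE;  /* producer must supply a reachable page */

    return 0;
}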

Jan

