
Re: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)


  • To: Jan Beulich <jbeulich@xxxxxxxxxx>
  • From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
  • Date: Tue, 27 Nov 2007 09:21:26 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 27 Nov 2007 01:15:58 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acgw1uJtIQgAbpzKEdyhwwAWy6hiGQ==
  • Thread-topic: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)

On 27/11/07 09:00, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

>> I don't get how your netback approach works. The pages we transfer do not
>> originate from netback, so it has little control over them. And, even if it
>> did, when we allocate pages for network receive we do not know which
>> domain's packet will end up in each buffer.
> 
> Oh, right, I mixed up old_mfn and new_mfn in netbk_gop_frag(). Nevertheless,
> netback could take care of this by doing the copying there, as at that point it
> already knows the destination domain.

You may not know the constraints on that domain's max_mfn, though. We could add
an interface to Xen to interrogate that, but generally it's not something we
want to expose outside of Xen and the guest itself.

>> Personally I think doing it in Xen is perfectly good enough for supporting
>> this very out-of-date network receive mechanism.
> 
> I'm not just concerned about netback here. The interface exists, and other
> users might show up and/or exist already. Whether it would be acceptable
> for them to do allocation and copying is unknown. You'd therefore either
> need a way to prevent future users of the transfer mechanism, or set proper
> requirements on its use. I think that placing extra requirements on the user
> of the interface is better than introducing extra (possibly hard to reproduce/
> recognize/debug) possibilities of failure.

The interface is obsolete.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

