[Xen-devel] blkback/2.6.38: Use 'vzalloc' for page arrays ...

To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: [Xen-devel] blkback/2.6.38: Use 'vzalloc' for page arrays ...
From: Daniel Stodden <daniel.stodden@xxxxxxxxxx>
Date: Fri, 11 Mar 2011 18:57:19 -0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 11 Mar 2011 18:58:42 -0800
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
commit ef19ebb7c4fe3e647cbc8d5bd6601a27bc6ab408
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Tue Mar 1 16:26:10 2011 -0500
    Previously we would allocate the array for pages using 'kmalloc',
    which we can just as easily do with 'vzalloc'.

Sure? Vmalloc allocates in whole numbers of pages. The overhead from
that switch is not huge (except for that xen_blkbk allocation), but the
default vector sizes don't justify the vm-area construction work either.

Mind if I push back with a couple of kmallocs?
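
For concreteness, a rough sketch of the tradeoff (sizes illustrative;
the real vectors are sized from blkif_reqs *
BLKIF_MAX_SEGMENTS_PER_REQUEST, and MMAP_PAGES here is my own stand-in):

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

#define MMAP_PAGES 704	/* hypothetical: 64 requests x 11 segments */

static struct page **alloc_page_vector(void)
{
	/* ~5.5 KB of pointers on 64-bit: comfortably a slab object.
	 * kzalloc returns zeroed, physically contiguous memory without
	 * creating a vm area or writing kernel page tables. */
	return kzalloc(MMAP_PAGES * sizeof(struct page *), GFP_KERNEL);

	/* The vzalloc variant rounds the size up to whole pages and
	 * adds vm-area allocation plus PTE setup on top:
	 *
	 *	return vzalloc(MMAP_PAGES * sizeof(struct page *));
	 *
	 * which only starts to pay off once the vector outgrows what
	 * the slab allocator can hand back contiguously. */
}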

    The pre-allocation of pages
    was done a bit differently in the past - it used to be that
    the balloon driver would export "alloc_empty_pages_and_pagevec",
    which in a single function created an array, allocated
    the pages, ballooned the pages out (so the memory behind those
    pages would be non-present), and provided us with those pages.

-       for (i = 0; i < mmap_pages; i++)
+       for (i = 0; i < mmap_pages; i++) {
                blkbk->pending_grant_handles[i] = BLKBACK_INVALID_HANDLE;
-
+               blkbk->pending_pages[i] = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);

This is broken if CONFIG_HIGHMEM is actually set, because the current
code won't bother mapping that page:

(XEN) mm.c:3795:d0 Could not find L1 PTE for address 8fb51000
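
If the allocation stays, the straightforward repair is to keep these
pages in lowmem, where a permanent kernel mapping (and hence an L1 PTE
for Xen to rewrite) always exists. A minimal sketch of that line, not
the posted patch:

		/* Lowmem only: the page keeps a permanent kernel
		 * mapping, so the grant-mapping path has a PTE to
		 * point at the foreign frame. */
		blkbk->pending_pages[i] = alloc_page(GFP_KERNEL);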

Second, the memory overhead of not ballooning that frame out anymore is
admittedly not gigantic, but it doesn't look so sweet either.

Why are we doing this? We still need a page struct for starting I/O on
that foreign frame. The new M2P path won't touch those matters, unless
I've been missing some important pieces. It only covers our way back
from PTEs to the MFN.

    This was OK as those pages were shared with the other guest and
    the only thing we needed was to "swizzle" the MFN of those pages
    to point to the other guest's MFN. We can still "swizzle" the MFNs
    using the M2P (and P2M) override API calls, but for the sake of
    simplicity we are dropping the balloon API calls. We can return
    to those later on.
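
The "swizzle" above boils down to roughly this, as I read the
2.6.38-era override API (set_phys_to_machine/m2p_add_override; the
exact signatures are an assumption on my part):

#include <linux/mm.h>
#include <xen/page.h>

/* Point a local struct page at a foreign MFN so block I/O can be
 * started on it; the override is undone again on completion. */
static int swizzle_page(struct page *page, unsigned long foreign_mfn)
{
	/* Record the foreign MFN in our P2M so pfn_to_mfn() resolves: */
	set_phys_to_machine(page_to_pfn(page), FOREIGN_FRAME(foreign_mfn));

	/* Register the reverse M2P override so mfn_to_pfn() finds this
	 * page again later: */
	return m2p_add_override(foreign_mfn, page);
}

Either way a struct page has to come from somewhere, which is the point
above.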

So it's just transient, for balloon.c maintenance? If so, the old
get_empty_pages_and_pagevec always carried an and_pagevec too many. :o)
It should just take a caller-side vector, so coming up with a new call
would actually be a nice opportunity imho.
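
Something like this, shape-wise (names invented here, purely
illustrative):

/* Caller owns the vector; the balloon driver only populates it with
 * ballooned-out (non-present) pages and reclaims them on free. */
int balloon_alloc_empty_pages(struct page **pages, int nr_pages);
void balloon_free_empty_pages(struct page **pages, int nr_pages);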

Cheers,
Daniel


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
