WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

[Xen-devel] Re: userspace block backend / gntdev problems

To: Gerd Hoffmann <kraxel@xxxxxxxxxx>
Subject: [Xen-devel] Re: userspace block backend / gntdev problems
From: Derek Murray <Derek.Murray@xxxxxxxxxxxx>
Date: Fri, 4 Jan 2008 14:50:31 +0000
Cc: Xen Development Mailing List <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 04 Jan 2008 06:51:18 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <477E3925.7070404@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <477E3925.7070404@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi Gerd,

On 4 Jan 2008, at 13:48, Gerd Hoffmann wrote:
First problem is the fixed limit of 128 slots. The frontend submits up
to 32 requests, with up to 11 grants each. Together with the shared ring
this sums up to 353 grants per block device. When blkbackd is running in
aio mode, many requests are in flight at the same time, and thus many
grants are mapped at the same time, so the 128 limit is easily reached.
I don't even need to stress the disk with bonnie or something; just
booting the virtual machine is enough. Any chance of replacing the
fixed-size array with a list to remove that hard-coded limit? Or at
least raise the limit to -- say -- 1024 grants?

The 128-grant limit is fairly arbitrary, and I wanted to see how people were using gntdev before changing this. The reason for using a fixed-size array is that it gives us O(1)-time mapping and unmapping of single grants, which I anticipated would be the most frequently used case. I'll prepare a patch that enables the configuration of this limit when the device is opened.
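For illustration, such a per-instance limit could be set through libxc right after the gntdev instance is opened. The sketch below is only an assumption of what that might look like; xc_gnttab_set_max_grants() is a placeholder name for whatever call the eventual patch actually provides:

#include <stdio.h>
#include <xenctrl.h>

/* Sketch only: xc_gnttab_set_max_grants() is a hypothetical call
 * standing in for whatever interface the proposed patch exposes. */
int open_gntdev_with_limit(uint32_t max_grants)
{
    int xcg = xc_gnttab_open();        /* open a fresh gntdev instance */
    if (xcg < 0) {
        perror("xc_gnttab_open");
        return -1;
    }

    /* Ask the kernel to size this instance's mapping array; 353 would
     * cover 32 requests * 11 grants plus the shared ring. */
    if (xc_gnttab_set_max_grants(xcg, max_grants) < 0) {
        perror("xc_gnttab_set_max_grants");
        xc_gnttab_close(xcg);
        return -1;
    }

    return xcg;
}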

Second problem is that batched grant mappings (using
xc_gnttab_map_grant_refs) don't work reliably. Symptoms I see are random
failures with ENOMEM for no obvious reason (the 128-grant limit is *far*
away).

If it's failing with ENOMEM, a possible reason is that the address space for mapping grants within gntdev (the array I mentioned above) is becoming fragmented. Are you combining the mapping of single grants and batches within the same gntdev instance? A possible workaround would be to use separate gntdev instances: one for mapping the single grants, and one for mapping the batches. That way, the fragmentation should not occur, provided the batches are all of the same size.
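As a rough sketch of that workaround (assuming the xc_gnttab calls in libxc of this era, with error handling trimmed), one instance could be reserved for single-grant mappings and a second one for fixed-size batches:

#include <stdint.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Sketch of the suggested workaround: keep single-grant mappings and
 * fixed-size batch mappings on separate gntdev instances, so neither
 * instance's slot array gets fragmented by the other. */
struct gnt_handles {
    int singles;   /* used only for xc_gnttab_map_grant_ref()  */
    int batches;   /* used only for xc_gnttab_map_grant_refs() */
};

static void *map_single(struct gnt_handles *h, uint32_t domid, uint32_t ref)
{
    return xc_gnttab_map_grant_ref(h->singles, domid, ref,
                                   PROT_READ | PROT_WRITE);
}

static void *map_batch(struct gnt_handles *h, uint32_t count,
                       uint32_t *domids, uint32_t *refs)
{
    /* Keep every batch on this handle the same size, so a freed batch
     * always leaves a hole the next batch can reuse exactly. */
    return xc_gnttab_map_grant_refs(h->batches, count, domids, refs,
                                    PROT_READ | PROT_WRITE);
}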

I also see host kernel crashes (kernel 2.6.21-2952.fc8xen).

When does this happen? Could you post the kernel OOPS?

When using xc_gnttab_map_grant_ref only (no batching) and limiting the
number of requests in flight to 8 (so we stay below the 128-grant limit),
everything works nicely though.

That's good to know, thanks!
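For reference, a minimal sketch of the unbatched pattern Gerd describes as working is below; the request layout (11 grants per request, a cap of 8 in-flight requests) follows the numbers quoted earlier in the thread, and the helper name is purely illustrative:

#include <sys/mman.h>
#include <xenctrl.h>

#define MAX_INFLIGHT   8    /* cap on simultaneously mapped requests */
#define GRANTS_PER_REQ 11   /* worst case per blkif request          */

/* Map each grant of one request individually with
 * xc_gnttab_map_grant_ref(). With at most MAX_INFLIGHT requests mapped,
 * 8 * 11 + 1 = 89 slots are in use, safely under the 128-slot limit. */
static int map_request(int xcg, uint32_t domid,
                       uint32_t refs[GRANTS_PER_REQ],
                       void *pages[GRANTS_PER_REQ])
{
    int i;

    for (i = 0; i < GRANTS_PER_REQ; i++) {
        pages[i] = xc_gnttab_map_grant_ref(xcg, domid, refs[i],
                                           PROT_READ | PROT_WRITE);
        if (pages[i] == NULL) {
            /* Undo the mappings made so far before reporting failure. */
            while (--i >= 0)
                xc_gnttab_munmap(xcg, pages[i], 1);
            return -1;
        }
    }
    return 0;
}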

Regards,

Derek Murray.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel