[Xen-devel] Re: [RFC PATCH 33/33] Add Xen virtual block device driver.

To: Chris Wright <chrisw@xxxxxxxxxxxx>
Subject: [Xen-devel] Re: [RFC PATCH 33/33] Add Xen virtual block device driver.
From: Arjan van de Ven <arjan@xxxxxxxxxxxxx>
Date: Tue, 18 Jul 2006 12:34:06 +0200
Cc: Andrew Morton <akpm@xxxxxxxx>, Zachary Amsden <zach@xxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Ian Pratt <ian.pratt@xxxxxxxxxxxxx>, Rusty Russell <rusty@xxxxxxxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, Andi Kleen <ak@xxxxxxx>, virtualization@xxxxxxxxxxxxxx, Christian Limpach <Christian.Limpach@xxxxxxxxxxxx>
Delivery-date: Thu, 20 Jul 2006 05:15:05 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20060718091958.657332000@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Intel International BV
References: <20060718091807.467468000@xxxxxxxxxxxx> <20060718091958.657332000@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, 2006-07-18 at 00:00 -0700, Chris Wright wrote:
> plain text document attachment (blkfront)
> The block device frontend driver allows the kernel to access block
> devices exported by a virtual machine containing a physical
> block device driver.

Hi,

as a first general comment, I think some of the memory allocation
GFP_ flags are incorrect: I would expect several places to use
GFP_NOIO rather than GFP_KERNEL, to avoid recursing back into the
block layer and deadlocking.
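
For instance (illustrative code, not from the patch), anything
allocated on a path the block layer may be waiting on wants to be:

	/* GFP_NOIO: this can run on the I/O path, so the allocator must
	 * not recurse into the block layer trying to reclaim memory */
	req = kmalloc(sizeof(*req), GFP_NOIO);
	if (req == NULL)
		return -ENOMEM;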

> +static void blkif_recover(struct blkfront_info *info)
> +{
> +     int i;
> +     struct blkif_request *req;
> +     struct blk_shadow *copy;
> +     int j;
> +
> +     /* Stage 1: Make a safe copy of the shadow state. */
> +     copy = kmalloc(sizeof(info->shadow), GFP_KERNEL | __GFP_NOFAIL);

like here..

> +     memcpy(copy, info->shadow, sizeof(info->shadow));

and __GFP_NOFAIL is usually horrid; is it here because error recovery
was an afterthought, or because failure is genuinely impossible to
handle? In addition, __GFP_NOFAIL in a block device driver is an
interesting way to add OOM deadlocks: under memory pressure the
allocator may need to write dirty pages out through this very driver
before it can satisfy the allocation, so a request that refuses to
fail can wait forever. Have the VM guys looked into this yet?
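
If the recovery path really cannot tolerate failure, the usual answer
is to preallocate the buffer (or use a mempool) at a point where
failure is still recoverable. Otherwise the straightforward version,
sketched here assuming blkif_recover() is changed to return an error,
keeps the failure visible:

	copy = kmalloc(sizeof(info->shadow), GFP_NOIO);
	if (copy == NULL)
		return -ENOMEM;	/* caller aborts the resume */
	memcpy(copy, info->shadow, sizeof(info->shadow));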

> +#if 1
> +#define IPRINTK(fmt, args...) \
> +    printk(KERN_INFO "xen_blk: " fmt, ##args)
> +#else
> +#define IPRINTK(fmt, args...) ((void)0)
> +#endif

hmm, isn't this a duplication of the pr_debug() and dev_dbg()
infrastructure? Please don't reinvent it.
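
Roughly (the messages here are made up for illustration):

	/* always-on informational message */
	printk(KERN_INFO "xen_blk: backend connected\n");

	/* compiled out entirely unless DEBUG is defined for this file */
	pr_debug("xen_blk: ring full, deferring request\n");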

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
