This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Solution for problems with HyperSCSI and vbds ?

To: sven.kretzschmar@xxxxxx
Subject: Re: [Xen-devel] Solution for problems with HyperSCSI and vbds ?
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>
Date: Wed, 15 Oct 2003 18:48:15 +0100
Cc: Keir.Fraser@xxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxxx, Ian.Pratt@xxxxxxxxxxxx
Delivery-date: Wed, 15 Oct 2003 18:49:38 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: Your message of "Wed, 15 Oct 2003 19:13:49 +0200." <200310151913490750.0041ADE7@xxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
> >[Ian:]The main thing would be turning the VFR into more of an L2 switch
> >than a router, with each domain having its own MAC[*]. We could then
> >add a rule to grant a domain TX permission for a particular 802
> >protocol number. HyperSCSI presumably has some high-level
> >server-based authentication and privilege verification? If so, it
> >should be pretty straightforward. 
> This is much better, though more complicated too ;-)
> However, I wouldn't do this based on protocols or routing HyperSCSI
> ether packets or the need to use HyperSCSI kernel modules in 
> domains > 0 (Perhaps too complicated and only a special solution for this
> problem).

I still like my proposal ;-)

It's pretty straightforward to implement, is relatively clean,
and will have good performance.

However, if you're exporting a single disk from the HyperSCSI
server, it's not much help.

> The virtual block device driver maps this to /dev/sda and forwards
> the request to Xen (perhaps it also tags this request as a request
> to a "special device" before forwarding the request to Xen).
> Xen realizes that there is no physical device connected to /dev/sda
> (or registered with Xen ? Maybe it can then also recognize that
> the request was marked as targeting a "special device").
> Because of that condition, it forwards this block device request
> to DOM0 now in which a "request handler" kernel module will listen for 
> block device requests which may be forwarded to DOM0 from 
> Xen to be handled in DOM0 (It will need to register a callback 
> function with Xen in order to do so).

I think your best solution is not to use Xen vbds at all.  If
you don't like NFS, how about having domains >0 use "enhanced
network block devices" which talk to a simple server running in
domain0? The storage for the nbd server can be files, partitions
or logical volumes on /dev/sda.

This should require writing no code, and will give pretty good
performance. It gives good control over storage allocations etc.
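A possible setup along these lines, as a config sketch only: paths, the port number, and the device node are examples, and the exact invocation differs between the classic nbd userland (`nbd-server <port> <file>`) and the separate "enhanced NBD" (enbd) project, so check the tools you actually have installed.

```shell
# dom0: export a backing file (could equally be a partition or LV
# on /dev/sda) over nbd.  Path and port are illustrative.
dd if=/dev/zero of=/var/xen/dom1-root.img bs=1M count=1024
nbd-server 2000 /var/xen/dom1-root.img

# domain >0: attach the export as a local block device and mount it.
# The device node (/dev/nbd0, /dev/nb0, /dev/nda) depends on the
# nbd/enbd version in the guest kernel.
nbd-client dom0-host 2000 /dev/nbd0
mount /dev/nbd0 /mnt
```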


[It appears to work as a rootfs, but I haven't verified]


Xen-devel mailing list