xen-devel

Re: [Xen-devel] Solution for problems with HyperSCSI and vbds ?

To: sven.kretzschmar@xxxxxx
Subject: Re: [Xen-devel] Solution for problems with HyperSCSI and vbds ?
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>
Date: Wed, 15 Oct 2003 23:07:52 +0100
Cc: Ian.Pratt@xxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxxx, Ian.Pratt@xxxxxxxxxxxx
Delivery-date: Wed, 15 Oct 2003 23:09:11 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: Your message of "Wed, 15 Oct 2003 23:19:02 +0200." <200310152319020683.0075BD3C@xxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
> Thinking about just 3 DOM0 HyperSCSI clients connecting
> to the HyperSCSI-Server directly feels somehow more comfortable.
> (e.g. much easier administration, fewer points of failure.)
> The 3 DOM0s in this example can then export the HyperSCSI
> device(s) via whatever means to the domains > 0.

Of course, the proper solution is to put HyperSCSI into Xen
itself, so that guest OSes can use Xen's block device interface
to talk directly to the remote disk.

However, I wouldn't want to contemplate putting a big gob of code
like HyperSCSI into Xen until we have implemented the planned
support for ring-1 loadable modules. That would give us a
shared-memory block device interface between guest OSes and the
HyperSCSI driver (itself running in ring 1). The HyperSCSI driver
would then talk to the network interface, again using
shared memory.
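
Roughly, the request path I have in mind looks something like the
sketch below. All of the names are invented for illustration --
this is just the shape of a shared-memory ring with free-running
producer/consumer indices, not the real interface:

/*
 * Hypothetical sketch: a shared-memory block request ring between
 * a guest OS and a ring-1 HyperSCSI driver module. Every name is
 * made up for illustration.
 */
#include <stdint.h>

#define RING_SIZE 64                 /* must be a power of two */

struct blk_request {
    uint64_t sector;                 /* start sector on the remote disk */
    uint16_t nr_sectors;             /* number of sectors to transfer */
    uint16_t operation;              /* 0 = read, 1 = write */
    uint32_t buffer_mfn;             /* machine frame of the data buffer */
    uint32_t id;                     /* echoed back in the response */
};

struct blk_response {
    uint32_t id;                     /* matches the request id */
    int32_t  status;                 /* 0 on success */
};

/*
 * One page shared between guest and driver. The indices only ever
 * advance; they are reduced modulo RING_SIZE on access.
 */
struct blk_ring {
    volatile uint32_t req_prod, req_cons;
    volatile uint32_t rsp_prod, rsp_cons;
    struct blk_request  req[RING_SIZE];
    struct blk_response rsp[RING_SIZE];
};

/*
 * Guest side: queue a request without trapping into Xen for each
 * one; a single event notification can cover a whole batch.
 */
static int guest_queue_request(struct blk_ring *ring,
                               const struct blk_request *req)
{
    if (ring->req_prod - ring->req_cons == RING_SIZE)
        return -1;                   /* ring full: natural back pressure */
    ring->req[ring->req_prod % RING_SIZE] = *req;
    __sync_synchronize();            /* publish the request first */
    ring->req_prod++;
    return 0;
}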
 
> Thanks a lot for pointing me to this solution!
> I will look into it over the next few days (especially performance ;-).

I'm looking forward to hearing how you get on.

> Apropos:
> Did you ever benchmark the average or maximum
> throughput of your VFR implementation in Xen?

The throughput between domains and the real network interface is
_very_ good, easily able to saturate a 1Gb/s NIC, probably good
for rather more.

However, I'm afraid to say that we recently discovered that our
inter-domain performance is pretty abysmal -- worse than our
performance over the real network, which is simultaneously
amusing and sad.

The problem is that we currently don't get the asynchronous
`pipelining' when doing inter-domain networking that gives good
performance when going to an external interface: since the
communication is currently synchronous, we don't get the back
pressure that would allow a queue to build up, as happens with a
real NIC. The net result is that we end up bouncing in and out of
Xen several times for each packet.

I volunteered to fix this, but I'm afraid I haven't had time as
yet. I'm confident we should end up with really good inter-domain
networking performance, using pipelining and page flipping.
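
To make that concrete, the win from pipelining is batching:
instead of a trip through Xen for every packet, the sender
enqueues onto a shared ring and only kicks the receiver when it
may be idle. The sketch below is invented for illustration
(notify_via_xen() stands in for an event notification; none of
these names are real):

#include <stdint.h>
#include <string.h>

#define NET_RING_SIZE 256
#define MAX_PKT       1514

struct pkt_slot {
    uint16_t len;
    uint8_t  data[MAX_PKT];
};

struct net_ring {
    volatile uint32_t prod, cons;    /* free-running indices */
    struct pkt_slot slot[NET_RING_SIZE];
};

extern void notify_via_xen(void);    /* placeholder for the event kick */

/*
 * Returns 0 on success, -1 if the ring is full. A full ring is the
 * back pressure we are missing today: the sender must wait, so a
 * queue builds up just as it would behind a real NIC.
 */
static int send_packet(struct net_ring *r, const void *buf, uint16_t len)
{
    uint32_t was_empty;
    struct pkt_slot *s;

    if (len > MAX_PKT || r->prod - r->cons == NET_RING_SIZE)
        return -1;

    was_empty = (r->prod == r->cons);
    s = &r->slot[r->prod % NET_RING_SIZE];
    s->len = len;
    memcpy(s->data, buf, len);
    __sync_synchronize();            /* publish the packet first */
    r->prod++;

    /* Only enter Xen when the receiver may have gone idle; packets
     * queued behind this one are consumed with no further trips. */
    if (was_empty)
        notify_via_xen();
    return 0;
}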

> Also, did you benchmark how much performance is
> degraded by using vbds/vds for disk access
> compared with using the block device directly (tested in DOM0)?

Performance of vbds and raw partitions should be identical. Disks
are slow -- you have to really work at it to cock the performance
up ;-)
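
If you want to check that claim yourself, something like the
rough sketch below, run once against the raw partition and once
against the corresponding vbd, should do. The device path is just
an example, and beware buffer-cache effects between runs:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

#define CHUNK (1u << 20)             /* 1 MB per read() */
#define TOTAL (256u * (1u << 20))    /* read 256 MB in total */

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/sda1"; /* example */
    char *buf = malloc(CHUNK);
    struct timeval t0, t1;
    size_t done = 0;
    ssize_t n;
    int fd = open(dev, O_RDONLY);

    if (fd < 0 || buf == NULL) {
        perror("setup");
        return 1;
    }

    gettimeofday(&t0, NULL);
    while (done < TOTAL && (n = read(fd, buf, CHUNK)) > 0)
        done += (size_t)n;
    gettimeofday(&t1, NULL);

    {
        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%lu bytes in %.2f s = %.1f MB/s\n",
               (unsigned long)done, secs, done / secs / 1e6);
    }

    close(fd);
    free(buf);
    return 0;
}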

> Could mounting /dev/sda via enbd be more performant, or
> at least nearly as performant as using vds and vbds,
> given the additional overhead of vd/vbd use...?

Performance using enbd should be pretty good once we've sorted
out inter-domain networking.


Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel