To: "James Harper" <JamesH@xxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] single kernel for xen0 & xenU
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>
Date: Mon, 18 Oct 2004 14:12:44 +0100
Cc: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>, "Xen Devel Mailing List" <xen-devel@xxxxxxxxxxxxxxxxxxxxx>, Ian.Pratt@xxxxxxxxxxxx
In-reply-to: Your message of "Mon, 18 Oct 2004 21:22:20 +1000." <AEC6C66638C05B468B556EA548C1A77D3BE251@trantor>
> > There's no downside to using a xen0 kernel in other domains,
> > apart from a bit of extra bloat and a slightly longer boot time.
> 
> That being true then, is there any particular reason why we have
> separate kernels?

There was a time when a xen0 kernel wouldn't work if it was
running in a non-privileged domain. It now correctly handles
privilege violations and continues.

Some people still like a lean-and-mean stripped down kernel...
 
> Do you have any opinion on how best to organise it? Currently I have 1
> iscsi target (running linux) and 2 xen physical hosts. The target
> currently exports lvm logical volumes which the xenU domains see as
> physical disks with a partition table etc. This works well within the
> domains but accessing them for maintenance outside is a right pain.

LVM seems to work well for carving up the disk space, but I've
just switched over to gnbd for exporting it to my client
machines.
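
For anyone who hasn't tried that combination, the moving parts
look roughly like this (vg0, vol1 and the host1-vol1 export name
are just made-up examples, not anything from my actual setup):

    # on the storage server: carve a logical volume out of the
    # volume group and export it over gnbd
    lvcreate -L 4G -n vol1 vg0
    gnbd_serv                               # start the gnbd server daemon
    gnbd_export -d /dev/vg0/vol1 -e host1-vol1

    # on a client: import every export the server offers; the
    # device then appears as /dev/gnbd/host1-vol1
    gnbd_import -i host1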

Actually, all of my clients are also servers, and I run both
gnbd clients and servers on each to allow transparent access to
LVM partitions across the cluster.

I've been meaning to knock up a xend block device script that
auto-imports the devices, optimising the case where the device is
local. I guess I'll have the syntax as gnbd:hostname/device,
but there'll need to be some convention for creating gnbd export
names, such as hostname-device.
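
Something along these lines, say. To be clear, this is only a
sketch of what such a script might look like: the gnbd: syntax
and the hostname-device convention are the proposal above, not
anything xend actually knows about today.

    #!/bin/sh
    # Hypothetical xend block script for a "gnbd:" device type.
    # Takes one argument of the assumed form gnbd:hostname/vg/lv
    # and prints the local device node to hand to the domain.

    spec=${1#gnbd:}                    # e.g. host1/vg0/vol1
    host=${spec%%/*}                   # host1
    dev=${spec#*/}                     # vg0/vol1

    if [ "$host" = "$(hostname)" ]; then
        # the optimised local case: use the LVM device directly
        echo "/dev/$dev"
    else
        # remote case: import the server's exports and pick the
        # one named by the hostname-device convention, with '/'
        # mapped to '-' to make a legal export name (host1-vg0-vol1)
        gnbd_import -i "$host"
        echo "/dev/gnbd/$host-$(echo "$dev" | tr / -)"
    fi

The tr trick is just one way of flattening the device path into a
legal export name; any consistent convention would do.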
 
> How do you import the partitions into Dom0 such that they can
> be exported into DomU? Do you run into problems if multiple
> physical machines see the same iscsi disks?  What if multiple
> physical machines see the same volume group?

Having multiple machines connect to the same iscsi or gnbd target
seems to work fine. Obviously, you should make sure that the
target is only mounted from one place at a time (unless you're
using a cluster file system like ocfs2).
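
As a concrete example of keeping to that rule (the device and
mount point names are again made up): if you need to poke at a
volume from dom0 for maintenance, shut down the domain using it
first, then something like:

    # import the target's exports and mount the volume for
    # maintenance -- only safe while no domain has it open
    gnbd_import -i host1
    mount /dev/gnbd/host1-vol1 /mnt/maint
    # ... do the maintenance ...
    umount /mnt/maint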

Ian


