WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-devel

RE: [Xen-devel] single kernel for xen0 & xenU

To: James Harper <JamesH@xxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] single kernel for xen0 & xenU
From: "M.A. Williamson" <maw48@xxxxxxxxx>
Date: 18 Oct 2004 14:12:20 +0100
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>, Xen Devel Mailing List <xen-devel@xxxxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 19 Oct 2004 12:13:56 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: <AEC6C66638C05B468B556EA548C1A77D3BE251@trantor>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D3BE251@trantor>
Reply-to: maw48@xxxxxxxxxx
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
> That being true then, is there any particular reason why we have
> separate kernels?

Simply because (with default settings) the XenU kernel is 30% smaller.

>> There haven't been any API changes for some time.

> That's good to know.

After the 2.0 release, we'd seek to keep APIs / ABIs stable and just fix bugs or add minor features that don't tread on anything pre-existing.

> Do you have any opinion on how best to organise it? Currently I have 1
> iscsi target (running linux) and 2 xen physical hosts. The target
> currently exports lvm logical volumes which the xenU domains see as
> physical disks with a partition table etc. This works well within the
> domains but accessing them for maintenance outside is a right pain.

In principle, dom0 should be able to export VBDs to itself, so that you could see the partitions inside. I don't know whether this works at the moment, but it seems doable... Has anyone tried it?
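
In the meantime, one workaround for getting at the partitions from dom0 for maintenance is device-mapper partition mapping. This is an untested sketch: it assumes you have kpartx installed in dom0, and the VG/LV names below are made up:

    # map the partitions inside the LV to /dev/mapper nodes
    kpartx -av /dev/vg0/guest1
    # mount one of them (the exact mapping names vary between versions)
    mount /dev/mapper/vg0-guest1p1 /mnt
    # ... do whatever maintenance needs doing ...
    umount /mnt
    # tear the mappings down again before the domain next uses the disk
    kpartx -d /dev/vg0/guest1

Obviously don't leave anything mounted in dom0 while a domain is using the same disk.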

> How do you import the partitions into Dom0 such that they can be
> exported into DomU? Do you run into problems if multiple physical
> machines see the same iscsi disks? What if multiple physical machines
> see the same volume group?

I'd import the iSCSI disks in dom0 and then export each one as if it were a physical device, i.e. if a disk has shown up in dom0 as /dev/foobar, put 'phy:/dev/foobar,/dev/target_dev,w' in the domain config file.
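
In full, that's a disk line like this in the config file (the device names are just placeholders, following the ones above):

    disk = [ 'phy:/dev/foobar,/dev/target_dev,w' ]

The first field is the device as dom0 sees it, the second is the name the domain will see it under, and 'w' makes it writeable.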

This could be automated with a shell script (as is done for file-backed and NBD disks) if you feel like saving time; then you could just have 'iscsi:host:whatever,/dev/target_dev,w'... There are examples under /etc/xen/scripts/, but we can help you out.
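
Very roughly, such a script might look like this. It's an untested sketch -- the script name and the attach command are placeholders, since the right commands depend on which iSCSI initiator you run in dom0:

    #!/bin/sh
    # hypothetical /etc/xen/scripts/block-iscsi -- sketch only
    # called with the target host and the volume name; attaches the disk
    # in dom0 and reports the device node that appeared
    HOST=$1
    VOLUME=$2

    # log in to the target with whatever initiator you have; with
    # open-iscsi, for instance, this would be roughly:
    #   iscsiadm -m node -T "$VOLUME" -p "$HOST" --login
    attach_iscsi_disk "$HOST" "$VOLUME"   # placeholder for the real command

    # work out which /dev/sd? the kernel assigned and hand it back,
    # so it can then be exported with phy: as above
    echo "/dev/sdX"

I'm not sure off the top of my head exactly what interface the tools expect from these scripts, so have a look at the existing examples first.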

HTH,
Mark

> Thanks
>
> James




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel
