
[Xen-devel] Re: Further problems with HyperSCSI and vbds...



> I think the problem here is that HyperSCSI attaches /dev/sda 
> without really knowing anything about Xen ;-)
> Xen also knows nothing about this "faked" physical SCSI 
> device on /dev/sda; only xenolinux does, because of the loaded 
> HyperSCSI kernel module driver.

Yes, you've hit the nail on the head. Although you construct VBDs out
of carved-up hd* and sd* partitions, those partitions have to be on
devices that Xen knows about. So, when you try to access the VBD, Xen
maps the request to a non-existent local SCSI disc :-)
 
> So, perhaps the virtual block driver in xenolinux tries to access the 
> faked physical /dev/sda device via Xen, but as Xen does not know about 
> it, this somehow does not really work. (Btw: shouldn't this result in 
> some printk() error messages in the xenolinux virtual block driver?)

I'll add the debugging back into the xenolinux driver. In any case, a
bit more noise from our development tree would be no bad thing!

> The virtual block driver in xenolinux should instead recognize that 
> this is not a physical device registered with Xen and should try to
> forward these disk requests and ioctls directly to the /dev/sda(X) device,
> instead of sending them to Xen.
> Of course, this should only be allowed for devices (or device drivers)
> loaded in domain0?

Why do you want to construct VBDs if only domain 0 is going to access
them? However, if that's all you want to do then yes --- modifications
to xl_scsi.c will suffice.

> I know that this might violate the design principle of Xen to be the
> only component which has direct access to the hardware.
> However, the /dev/sd* devices from HyperSCSI are not really local
> hardware, it's only a "faked" physical disk.

DOM0 is allowed unrestricted access to hardware already. Otherwise X
wouldn't work :-)
 
> I would be interested in some thoughts about that from the Xen project
> team and list readers, because I consider HyperSCSI to be an important
> feature for xenolinux domains.
> It would allow you to store the whole filesystems of many domains,
> spread across several physical machines running xen/xenolinux, on one
> big fileserver.
> As HyperSCSI is a very fast and efficient protocol/implementation, this
> would be a lot quicker and remarkably more efficient than using NFS for
> the same task.

There are a few options to allow HyperSCSI access from all domains:

 1. NFS-mount HyperSCSI partitions via domain 0 (this will work
already; a rough sketch follows this list).

 2. NFS-mount VBDs which map onto chunks of HyperSCSI disk, via domain
0 (this might work if you hack DOM0's xl_scsi.c a bit so that DOM0
VBDs can map onto HyperSCSI block devices).

 3. Add proper support for HyperSCSI to Xen. You'd need some scheme
for validating transmits which use the HyperSCSI transport, and
demultiplexing received frames to the appropriate domain. I don't know
anything about the protocol, so I don't know how easy this would be
(e.g. how much state Xen would need to keep lying around).
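
To illustrate option 1, here's a rough sketch of the idea (just an
example --- the device name, export path and hostname are made up, not
taken from this thread):

  # In domain 0: mount the HyperSCSI disk and export it over NFS
  mount /dev/sda1 /export/hyperscsi
  echo "/export/hyperscsi *(rw,sync,no_root_squash)" >> /etc/exports
  exportfs -ra

  # In an unprivileged domain: mount the export over the virtual network
  mount -t nfs dom0:/export/hyperscsi /mnt

The unprivileged domains then just see an ordinary NFS filesystem; only
domain 0 needs the HyperSCSI module loaded.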

 -- Keir




 

