
Re: [Xen-devel] Trying to get HyperSCSI and Xen to work... ;-)



> After some puzzling I was able to compile, load and use the 
> HyperSCSI client(!) module in Xen's domain 0.
> It reported that it found the eth0 interface, but that there are no
> HyperSCSI servers on the local network.
> The same try with booting a "normal/standard" kernel on the same machine
> worked out OK, so it's a problem with Xen's VFR, I think.

Absolutely. Out of the box Xen currently only groks IP and ARP
packets. I've appended a patch to (hopefully) fix this for the first
VIF of domain 0. 

> How is this done correctly ????  (Of course, the above code did NOT work ;-)

Your fix to network.c is unnecessary -- by default unrouteable packets
from DOM0 are sent to the physical interface, and incoming packets are
sent to DOM0. However, a fix *is* required to net/dev.c.

I've appended an alternative patch to net/dev.c. It compiles, but I
haven't actually tested it out ;-)

> 1.) I am not sure about the exact difference between VIF_PHYS and
> VIF_PHYSICAL_INTERFACE ?

The latter is used in the network rule lists. The former is returned
by get_target_vif() to tell the network data path to route a packet to
the physical interface.
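
Very roughly, the split looks like this -- the helper names below are
made up purely for illustration and are not the real net/dev.c code:

    /* Rules: VIF_PHYSICAL_INTERFACE names the physical NIC as a rule
     * target.  Data path: get_target_vif() hands back VIF_PHYS to say
     * "put this packet on the physical interface". */
    target = get_target_vif(/* packet, source vif, ... */);
    if ( target == VIF_PHYS )
        queue_on_physical_nic(pkt);         /* illustrative only */
    else
        deliver_to_guest_vif(pkt, target);  /* illustrative only */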

> 2.) What exactly is the meaning and use of VIF_SPECIAL ?

It's a revolting hack to get the currently broken packet-forwarding
code to work properly. I very much want to get rid of it as soon as
possible :-)

> 3.) I am not really understanding the use of VIF_DOMAIN_SHIFT and 
>      VIF_DOMAIN_MASK  (some way to identify the domain # only from
>      the id by shifting ?)  ?

They form a packed representation for addressing VIFs. VIFs are
indexed per domain (e.g. DOM2, idx 3); the packed representation of
that VIF, as passed to get_vif_by_id(), is (2<<VIF_DOMAIN_SHIFT)|3.

There ought to be neat access macros to hide the packed
representation. They haven't been written yet. :-)
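
Something along these lines would do -- an untested sketch, and it
assumes VIF_DOMAIN_MASK selects the domain bits of the packed id
(check the header that defines the constants before relying on that):

    /* Hypothetical accessors for the packed VIF id. */
    #define PACK_VIF_ID(dom, idx)  (((dom) << VIF_DOMAIN_SHIFT) | (idx))
    #define VIF_ID_DOMAIN(id)      (((id) & VIF_DOMAIN_MASK) >> VIF_DOMAIN_SHIFT)
    #define VIF_ID_INDEX(id)       ((id) & ~VIF_DOMAIN_MASK)

    /* e.g. PACK_VIF_ID(2, 3) == (2<<VIF_DOMAIN_SHIFT)|3, which is what
     * you'd pass to get_vif_by_id() for DOM2's VIF 3. */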

A lot of the network control code (e.g., routing of packets) isn't
great. We want to rework it at some point in the near future -- adding
more flexible packet forwarding, filtering and rewriting. The code has
evolved into its current form rather than being cleanly designed :-(

 Regards,
 Keir

--- 1.62/xen/net/dev.c  Tue Sep 30 12:47:02 2003
+++ edited/xen/net/dev.c        Thu Oct  9 00:14:19 2003
@@ -1800,20 +1800,28 @@
     return 0;
 }
 
-inline int init_tx_header(u8 *data, unsigned int len, struct net_device *dev)
+inline int init_tx_header(net_vif_t *vif, u8 *data, 
+                          unsigned int len, struct net_device *dev)
 {
+    int proto = ntohs(*(unsigned short *)(data + 12));
+
     memcpy(data + ETH_ALEN, dev->dev_addr, ETH_ALEN);
         
-    switch ( ntohs(*(unsigned short *)(data + 12)) )
+    switch ( proto )
     {
     case ETH_P_ARP:
-        if ( len < 42 ) break;
+        if ( len < 42 ) { proto = 0; break; } /* too short: reject as before */
         memcpy(data + 22, dev->dev_addr, ETH_ALEN);
-        return ETH_P_ARP;
+        break;
     case ETH_P_IP:
-        return ETH_P_IP;
+        break;
+    default:
+        /* Unsupported protocols are only allowed to/from VIF0/0. */
+        if ( (vif->domain->domain != 0) || (vif->idx != 0) )
+            proto = 0;
+        break;
     }
-    return 0;
+    return proto;
 }
 
 
@@ -1884,7 +1892,7 @@
         g_data = map_domain_mem(tx.addr);
 
         protocol = __constant_htons(
-            init_tx_header(g_data, tx.size, the_dev));
+            init_tx_header(vif, g_data, tx.size, the_dev));
         if ( protocol == 0 )
         {
             __make_tx_response(vif, tx.id, RING_STATUS_BAD_PAGE);

