
Re: [Xen-devel] modifying drivers


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: Ritu kaur <ritu.kaur.us@xxxxxxxxx>
  • Date: Fri, 19 Feb 2010 14:30:02 -0800
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 19 Feb 2010 14:30:59 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi Ian,

Thanks for the clarification. In our team meeting we decided to drop the netback changes for exclusive access and instead use the xe command line or XenCenter to do it (we are using Citrix XenServer). I had a couple of follow-up questions related to Xen.

1. Is it correct that the netfront driver (or any *front driver) has to be explicitly integrated into or compiled into the guest OS? The reason I ask is:

a. The documents I have read mention that a guest OS can run without any modification; however, if the above is true, we have to make sure the guest OSes we use are compiled with the relevant *front drivers.

b. We had made some changes to netback and netfront (as mentioned in the previous email). When compiling the kernel for dom0 it includes both netfront and netback, and we assumed that via some mechanism this netfront driver would be integrated/installed into guest domains when they are installed.

2. Is all front/back driver communication via xenbus only?

3. Supporting ioctl calls. Our driver has ioctl support to read/write hardware registers, and one solution was to use the PCI passthrough mechanism; however, that binds the NIC to a specific domU, and we do not want that. We would like multiple users to be able to access the hardware registers (mainly stats and similar state) from guest domains simultaneously. For this, we decided to go with a shared memory/event channel mechanism similar to the front and back drivers, as sketched below. Can you please provide some input on this?
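To make the idea concrete, here is a rough frontend-side sketch of the plumbing we have in mind, assuming the pvops Linux Xen grant-table and event-channel APIs; the "statsfront" name, the shared-page layout, and the xenstore keys are invented for illustration, and error unwinding is elided:

    #include <linux/mm.h>
    #include <linux/interrupt.h>
    #include <linux/types.h>
    #include <xen/xenbus.h>
    #include <xen/grant_table.h>
    #include <xen/events.h>
    #include <asm/xen/page.h>

    /* Hypothetical layout of the single page shared with the backend. */
    struct stats_shared {
        u32 req_reg;   /* register offset requested by the guest */
        u32 resp_val;  /* value filled in by the backend in dom0  */
    };

    static struct stats_shared *shared;
    static grant_ref_t gref;
    static int evtchn, irq;

    static irqreturn_t statsfront_interrupt(int irq, void *dev_id)
    {
        /* The backend kicked us: resp_val now holds the register value. */
        printk(KERN_INFO "statsfront: reg 0x%x = 0x%x\n",
               shared->req_reg, shared->resp_val);
        return IRQ_HANDLED;
    }

    static int statsfront_connect(struct xenbus_device *dev)
    {
        int err;

        shared = (void *)get_zeroed_page(GFP_KERNEL);
        if (!shared)
            return -ENOMEM;

        /* Grant the backend domain read/write access to the page. */
        gref = gnttab_grant_foreign_access(dev->otherend_id,
                                           virt_to_mfn(shared), 0);

        /* Event channel used for request/response kicks in both directions. */
        err = xenbus_alloc_evtchn(dev, &evtchn);
        if (err)
            return err;
        irq = bind_evtchn_to_irqhandler(evtchn, statsfront_interrupt,
                                        0, "statsfront", NULL);

        /* Publish the grant ref and event channel so the backend can map
         * the page and bind the other end of the channel. */
        xenbus_printf(XBT_NIL, dev->nodename, "page-ref", "%u", gref);
        xenbus_printf(XBT_NIL, dev->nodename, "event-channel", "%u", evtchn);
        return 0;
    }

The backend would map the page via the GNTTABOP_map_grant_ref hypercall, service req_reg against the real hardware registers, and notify back over the same channel; since each guest gets its own page and event channel, several domUs can read stats concurrently without any of them owning the PCI device.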

Thanks


On Fri, Feb 19, 2010 at 9:24 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
On Fri, 2010-02-19 at 17:12 +0000, Ritu kaur wrote:
>
>
> On Fri, Feb 19, 2010 at 1:07 AM, Ian Campbell
> <Ian.Campbell@xxxxxxxxxx> wrote:

>         It's not overhead, it is the *right* way to implement control operations of this sort. Your QA scripts are ideally placed to do this.
>
> Can you elaborate on this? If I understand correctly, you are saying that QA scripts written by us can be used to allow or restrict access, i.e. run these scripts from dom0 and allow or restrict access to a specific domU? I was not aware this is possible without modifying the toolstack.

You can use "xm network-attach" and "xm network-detach" to add and
remove guest VIFs, ensuring that only the guest you wish to test has a VIF.
You can call these commands from scripts etc. You can also modify (or
generate) your guest configuration files as necessary to ensure guests
are started with the VIFs you require. Nothing here should require
toolstack or kernel modifications.
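For example, from a QA script (domain name, bridge, and device ID here are illustrative):

    # give only the guest under test a VIF, then remove it afterwards
    xm network-attach testvm bridge=xenbr0
    xm network-detach testvm 0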

>
>         Are you sure you shouldn't be looking at PCI passthrough support or something of that nature?
>
>
> We are looking into this option as well. However, from the following wiki it seems we have to compile the guest OS with pcifront driver support.

Most PV guests have this support enabled out of the box.
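A quick way to confirm, assuming the guest distro ships its kernel config under /boot:

    # in the guest: is pcifront built in (=y) or a module (=m)?
    grep XEN_PCIDEV_FRONTEND /boot/config-$(uname -r)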

> http://wiki.xensource.com/xenwiki/Assign_hardware_to_DomU_with_PCIBack_as_module?highlight=%28pci%29
>
> We are looking at different ways to accomplish the task, and clearly we would like to test out all options before making a decision.
>
> Modifying netback is one of the options (not the final one), and since the changes we are doing are nothing netback-specific, modifying and testing it out doesn't hurt either. I would appreciate it if you or someone on the list could provide some input on debugging the issue I mentioned in my first email.

I think you need to take a step back and become familiar with how a Xen
system currently works and is normally configured and managed before you
dive in and start modifying kernel drivers and toolstacks. You are in
danger of going completely off into the weeds at the moment.

Ian.

>
> Thanks
>
>
>         Ian.
>
>
>         > Hence I would like some input w.r.t. debugging the netback tx/rx code.
>         >
>         > Thanks
>         >
>         >
>         >
>         >         Ian.
>         >
>         >
>         >         > For this,
>         >         >
>         >         > 1. Keep track of devices created via the netback_probe function, which is called for every device.
>         >         > 2. Use the domid field in the netif_st data structure.
>         >         > 3. Added a new function, netif_check_domid, placed alongside netif_schedulable; it checks whether netif->domid is the lowest one (via the list created in step 1).
>         >         > 4. netif_schedulable is called from:
>         >         > a. tx_queue_callback
>         >         > b. netif_be_start_xmit
>         >         > c. net_rx_action
>         >         > d. add_to_net_schedule_tail
>         >         > e. netif_be_int
>         >         >
>         >         > This works fine for the first vm that comes up. However, subsequent vm bringup has issues which reboot dom0 itself.
>         >         >
>         >         > 5. I removed my added check from netif_be_start_xmit only; this allows multiple vm's to be up, with only the first vm able to access netback. However, this breaks the second behaviour I would like to have, i.e. when the first vm is suspended, the next vm in the list should get access. I added kernel printks in the above functions and none of them are called after the first vm is suspended and a subsequent vm tries to access.
>         >         >
>         >         > Wanted input from experts on this and how to proceed with debugging.
>         >         >
>         >         > Thanks
>         >         >
>         >
>         >
>         >
>         >
>
>
>
>
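For reference, the domid check described in steps 1-4 of the quoted mail might look roughly like the sketch below; the global list, its lock, and netif_check_domid are the proposed modification (reconstructed here for illustration), not upstream netback code:

    /* Assumes netback_probe() adds each netif_t to netif_list (via a
     * list_head member added to netif_t) and the disconnect path removes
     * it, both under netif_list_lock. */
    static LIST_HEAD(netif_list);
    static DEFINE_SPINLOCK(netif_list_lock);

    /* Return 1 iff this netif belongs to the lowest active domid, i.e.
     * the domain currently granted exclusive access to the NIC. */
    static int netif_check_domid(netif_t *netif)
    {
        netif_t *cur;
        int is_lowest = 1;
        unsigned long flags;

        spin_lock_irqsave(&netif_list_lock, flags);
        list_for_each_entry(cur, &netif_list, list)
            if (cur->domid < netif->domid)
                is_lowest = 0;  /* a lower-numbered domain is still active */
        spin_unlock_irqrestore(&netif_list_lock, flags);

        return is_lowest;
    }

    /* Gating the existing scheduling test on it; the first two checks
     * are as in the classic netback tree. */
    static inline int netif_schedulable(netif_t *netif)
    {
        return netif_running(netif->dev) &&
               netback_carrier_ok(netif) &&
               netif_check_domid(netif);
    }

Note that several of the call sites listed in step 4 (net_rx_action, netif_be_int) run in softirq or interrupt context, which is why the sketch takes the lock with spin_lock_irqsave; taking a sleeping lock there, or walking an unprotected list while entries are being removed, would be consistent with the dom0 crashes described in step 5.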


