
Re: [Xen-devel] Problems with latest unstable 1.3



Okay, I checked in a patch to sort out all these problems. Please try
it out on QEMU again and let me know if probing fails.

Note that only DOM0 will be useful in such a setup right now --- there
is no way for other domains to access devices that are controlled by
DOM0. New inter-domain virtual device drivers to cope with this are in
the pipeline...

 -- Keir

> 
> The PCI- and IRQ-virtualisation is not quite there yet -- but hopefully
> it will be in a couple of days.
> 
> Issues at the moment are:
>  - probing and routing of device interrupt pins -> IRQs is broken.
>  - passing of physical interrupts to guest OSes is untested and thus
>    probably broken in some way or another.
> 
> I'm currently addressing all these problems.
> 
>  -- Keir
> 
> > I have been trying out the latest unstable with the new I/O and have found
> > the following issue. I created a xen.gz with nodev=y set, and tried it out
> > with all my devices in xenolinux.
> > 
> > The "machine" I am running on is QEMU, and it doesn't have emulation for
> > PCI. Therefore xenolinux is doing its IDE probes independently of the PCI
> > IDE code. It is calling the routine probe_irq_on() in irq.c, and it is
> > failing with the following messages:
> > 
> > ide: Assuming 50MHz system bus speed for PIO modes; override with idebus=xx
> > Kernel panic: Failed to obtain physical IRQ 127
> > 
> > probe_irq_on() is used to enable ALL unallocated IRQs; the caller then
> > twiddles the device being probed (in my case the IDE drives for the ide0
> > disk and ide0 cdrom) and records which IRQ actually fired, thus figuring
> > out which IRQ belongs to which device.
> > 
> > The reasons it is failing seem to be the following:
> > 1) The probe enables 127 physical IRQs (NR_PIRQS), but Xen fails to bind
> > any pirq > 63, because sched.h defines pirq_to_evtchn with only 64
> > entries.
> > 2) When I tried changing that constant from 64 to 128, it still failed, on
> > IRQ 12 (which I think was already allocated to another device).
> > 
> > I was able to get much, much further by setting "ide0=0x1f0,0x3f6,14
> > ide1=noprobe ide2=noprobe ide3=noprobe" on the command line. It still
> > failed much later with an MMU update failure; I am tracking that one down
> > further before reporting it.
> > 
> > 
> > Barry Silverman
> > 
> > 
> > 
> > -------------------------------------------------------
> > This SF.Net email is sponsored by: IBM Linux Tutorials
> > Free Linux tutorial presented by Daniel Robbins, President and CEO of
> > Gentoo Technologies. Learn everything from fundamentals to system
> > administration. http://ads.osdn.com/?ad_id=1470&alloc_id=3638&op=click
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxxxx
> > https://lists.sourceforge.net/lists/listinfo/xen-devel
> 
> 
> 





 

