xen-devel

Re: [Fwd: Re: [Xen-devel] Zaptel PCI IRQ problem]

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Fwd: Re: [Xen-devel] Zaptel PCI IRQ problem]
From: François Delawarde <fdelawarde@xxxxxxxxxxxxxxxxx>
Date: Mon, 21 May 2007 17:38:51 +0200
Delivery-date: Mon, 21 May 2007 08:44:13 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <907625E08839C4409CE5768403633E0B018E1D55@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <907625E08839C4409CE5768403633E0B018E1D55@xxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Icedove 1.5.0.10 (X11/20070329)
Thanks for answering,

I'm not sure Dom0 has its own CPU in that case, but the problem happens even when I don't have any DomUs (it is not much worse with DomUs installed). That's why I don't really understand why I have these latency problems with interrupts, as a normal Linux kernel behaves perfectly under load (no IRQ loss).

I don't think scheduling is at fault here. Of course, better scheduling would mean less load and would improve my case, but the main problem seems to be where interrupts are handled.

Do you think there is anything I could try? Oh, and I tried with 3.1 this morning (with the basic 2.6.18 Xen kernel, without any customization), and so far I see the same problems without a single DomU.

François.



Petersson, Mats wrote:
-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of François Delawarde
Sent: 21 May 2007 15:20
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Fwd: Re: [Xen-devel] Zaptel PCI IRQ problem]

Hi,

Sorry to insist, but I would really like to be able to use Xen and my zaptel hardware together in Dom0. I was just wondering if the 3.1 release contains any changes compared to 3.0.4, related to IRQ handling or scheduling, that could work around my problem.

As far as I can see, there's no improvement in 3.1 over 3.0.4 as to how interrupts are handled or how the scheduling works [I'm not really sure how that could practically be improved without losing performance elsewhere - this is a case of "you can make it right for some people some of the time, but not all people all of the time"].

This can possibly be solved by restricting which other domains run on the same CPU as Dom0. There will still be some load on Dom0 because qemu-dm runs there, but unless you're running disk or network benchmarks in your DomU, you should get reasonable performance in Dom0 without much effort.

If you share the Dom0 CPU with DomUs, then you have little chance of getting it to work.

All this assumes I've understood correctly that the latency between the interrupt and the actual code executing in user mode is the key to the problem. Making sure Dom0 runs on its own CPU will ensure there's very little overhead compared to native.
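Something along these lines should do it - an untested sketch using the xm toolstack; 'mydomu' and the CPU numbers are placeholders for a 4-core box:

   # Pin Dom0's vcpu 0 to physical CPU 0:
   xm vcpu-pin 0 0 0

   # Keep the guest's vcpu 0 off CPU 0 (or set it permanently with
   # cpus = "1-3" in the domU config file):
   xm vcpu-pin mydomu 0 1-3

   # Verify the placement:
   xm vcpu-list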

--
Mats
Thanks,
François.

----

I actually first asked on the Asterisk mailing lists, and a few people told me that it was Xen's fault, as it was not yet 'mature' enough to have good IRQ handling under load.

Note that I ran tests over the last few days, as I wasn't sure whether it was Xen or not, and the exact same system works perfectly with a normal Linux kernel (same config file, except for the Xen bits, which are removed). A Dom0 kernel without any VMs running behaves the way I described (badly), and I tried both schedulers (sedf and credit) without success.
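(For reference, the scheduler is selected with the sched= option on the Xen line in GRUB; a sketch of a menu.lst stanza, with kernel paths as placeholders:

   title  Xen / Debian 2.6.18
   root   (hd0,0)
   kernel /boot/xen.gz sched=sedf
   module /boot/vmlinuz-2.6.18-xen root=/dev/sda1 ro
   module /boot/initrd.img-2.6.18-xen

Swap sched=sedf for sched=credit to compare the two.)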

It doesn't appear to be a load problem, as the load is about the same with the non-Xen kernel I tried, but IRQ handling suffers during load periods. I'm talking about a machine that is certainly not overloaded, but that once in a while suffers some iowait for disk access. Under the Xen kernel, if I kill everything I can and only leave Asterisk with at most one simultaneous conversation, it works quite nicely.

I'm using the Debian patches for 2.6.18 (I think they actually come from Fedora), and I just want to know if this issue is known or has been/will be resolved somehow in future versions, if there is any way I can deal with it through some kernel configuration, or if I should wait a few months/years more to be able to use Xen in my specific setting.

Thanks,
François.


Ian Pratt wrote:

I'm currently trying to run an Asterisk server in a Xen kernel under Dom0 (debian kernel 2.6.18 with xen hypervisor 3.0.4). I had read of some possible timing issues with ztdummy (using rtc) under DomU, but I have a zaptel compatible PCI card (TDM400P), and I experience big problems with IRQ misses every time there is a bit of load on the server (for example, when an HVM DomU is running). The card is supposed to report 1000 interrupts per second, but it doesn't, and the consequence is horrible crackling sound in communications. Running the utility zttest to check the stability of those interrupts under a small bit of load, I get:
I believe folk have had success running asterisk in a domU and assigning the PCI device directly to the guest. It's best to set the affinity masks for other guests and dom0 such that the domU with asterisk in it has a dedicated physical CPU core.

We ran asterisk on an older version of Xen without any problems, and nothing has changed that should affect xen's ability to do this. [You could try using the sedf scheduler if you still have problems with 'credit'.]
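For the record, a rough sketch of the assignment Ian describes, assuming the pciback driver in the 2.6.18-xen dom0 kernel; the PCI address 0000:00:0a.0 stands in for wherever the TDM400P actually sits (check lspci):

   # Dom0 kernel command line: hide the card from dom0 at boot
   pciback.hide=(0000:00:0a.0)

   # domU config file: hand the card to the guest and give the
   # guest a dedicated physical core
   pci  = [ '00:0a.0' ]
   cpus = "1"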
    
Ian



--

 _________________________

François Delawarde

Network Engineer

Tel: 918.03.92.51

E-mail: fdelawarde@xxxxxxxxxxxxxxxxx

 _________________________

http://www.wirelessmundi.com/

C/Isaac Newton, 1 - Oficina 26 · Parque Tecnológico de Madrid

28760 TRES CANTOS (Madrid)

Tel./Fax: (+34) 918 03 92 51



All information in this message and its attachments is confidential and may be legally privileged. Only intended recipients are authorized to use it. If you have received this transmission in error, please notify WIRELESS MUNDI immediately and delete this message and its attachments. E-mail transmissions are not guaranteed to be secure or error free and WIRELESS MUNDI does not accept liability for such errors or omissions.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel