
RE: [Xen-devel] [PATCH 2/3][RFC] MSI/MSI-X support for dom0/driver domain


  • To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>, "Tian, Kevin" <kevin.tian@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
  • Date: Mon, 28 May 2007 22:21:38 +0800
  • Delivery-date: Mon, 28 May 2007 07:20:02 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Aceg/mz7PRs/d4jlSeu7V67d+hYswQACriSgAACTnuAAAZF9OQAB6/ZAAAEo6AYAAAay8AAA4oLCAAAAikAAAO1vKwAACgiQAAGmDRAAAaUoMQAACYzg
  • Thread-topic: [Xen-devel] [PATCH 2/3][RFC] MSI/MSI-X support for dom0/driver domain

Sure, you didn't misunderstand my original patch :) 

I just wanted to compare notes with you on my understanding of the interface
between Xen and the guest; it is not related to MSI at all. Sorry for any confusion :$

I will update my patch according to your feedback.

Thanks
Yunhong Jiang

-----Original Message-----
From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx] 
Sent: 28 May 2007 22:16
To: Jiang, Yunhong; Tian, Kevin; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH 2/3][RFC] MSI/MSI-X support for dom0/driver domain

On 28/5/07 15:03, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:

> Another point is: should we export the vector to domain0/driver domains in
> the long run?
> 
> I think the vector is a per-CPU item: when an interrupt happens, the CPU
> jumps to the IDT entry indexed by the vector. But dom0/domU has no idea of
> the IDT at all, so why should we export the vector to domain0/domU? Isn't
> the pirq enough?

Well, dom0 is going to poke the vector into the MSI field of its PCI device
isn't it? So it needs to know the vector. In VT-d/PCI-IOV I assume this will
actually be an abstract value that gets remapped to an appropriate physical
vector. Since Xen is providing the vector number, we can change to providing
a 'virtual vector' easily when the time comes. If it makes you happier,
think of it as a cookie that gets provided by alloc_irq_vector() and then is
poked into the PCI device MSI field and also used in calls to
physdev_msi_format and physdev_map_irq.

> As for irq/pirq: I think the irq is the index into irq_desc (as stated at
> http://www.webservertalk.com/archive242-2006-5-1471415.html and
> http://marc.info/?l=linux-kernel&m=110021870415938&w=2), while the pirq is
> a virtual interrupt number (something like a gsi) injected by Xen and
> corresponding to a physical interrupt source. What is special in
> domain0/domainU is that in a normal kernel the gsi and irq may differ,
> while in domain0/domainU we can arrange things so that the irq and pirq
> are the same.
> 
> With this, for the IOAPIC, domain0 will get a pirq from Xen for a specific
> physical gsi, and then in io_apic_set_pci_routing() the irq, instead of
> the vector, will be used. For MSI(-X), physdev_msi_format() will pass a
> pirq/domain pair, and Xen will return content with the vector information
> in it.

Currently the msi_format command you added to Xen takes a domain/vector
pair. I think this is the correct thing to do, so don't change it.

As for your comments on Linux pirq/irq/gsi management.... Erm, yeah. I don't
really fully understand all that 'clever' intricacy. But I'm not suggesting
you change it! Let me be clear: especially for dom0, rolling with whatever
policy it currently has for irq namespace management is the right thing to
do. My main point is that we don't want to bake that into the Xen interfaces
any more than we have to.

> I am not sure my understanding is right. Also, even if it is, I doubt we
> should do it like this, since it would cause a lot of changes to the
> ioapic-xen.c code (we may need fewer changes after domain0 switches to the
> latest kernel).

I'm not really suggesting you make any drastic changes to what you've done
on the Linux side! If you think otherwise then you've misunderstood my
intentions (and/or I've misunderstood your approach and patches this far).

I just want the interfaces you're adding to Xen simplified and made a bit
more consistent and elegant. This will require some changes to your Linux
patches of course but I would not expect them to be drastic.

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
