[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] use of struct hvm_mirq_dpci_mapping.gmsi vs. HVM_IRQ_DPCI_*_MSI flags


  • To: Jan Beulich <JBeulich@xxxxxxxxxx>
  • From: Haitao Shan <maillists.shan@xxxxxxxxx>
  • Date: Thu, 21 Apr 2011 15:14:40 +0800
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Kay, Allen M" <allen.m.kay@xxxxxxxxx>
  • Delivery-date: Thu, 21 Apr 2011 00:15:18 -0700
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=f7tM17KQH8cOxMgyZzvrq/DF7NcpHn1o0dhrxS3NZn6dtx1j5TTT9YP8Kjozoo5fvE QntbokBE34aYoTOTDY/0HpcHScPSzZItAcdKkVt4crRG2Su12t2zMnxw8kTU5AystISm 3JE7dn96drORHPRmzRJqsI7U5MGsX9SSXEzgQ=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

See comments below.

2011/3/31 Jan Beulich <JBeulich@xxxxxxxxxx>
pt_irq_create_bind_vtd() initializes this substructure only when setting
.flags to HVM_IRQ_DPCI_MACH_MSI|HVM_IRQ_DPCI_GUEST_MSI (the
PT_IRQ_TYPE_MSI case), while the other path will not set
HVM_IRQ_DPCI_GUEST_MSI but may also set HVM_IRQ_DPCI_MACH_MSI.
Yet hvm_dpci_msi_eoi() and hvm_migrate_pirqs() check for
HVM_IRQ_DPCI_MACH_MSI, i.e. may run into an uninitialized
.gmsi.* field. What am I missing here?
I think these fields were introduced by the MSI-to-gINTx patch. MACH_MSI means the host (physical) side is using MSI, while GUEST_MSI means the guest side is, as the name suggests.
I agree that checking MACH_MSI alone is not enough.
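The point about the flag check can be illustrated with a small sketch. The flag values and the helper name below are hypothetical simplifications, not the actual Xen definitions; the idea is only that the .gmsi fields are initialized on the path that sets both bits, so a guard testing MACH_MSI alone can reach uninitialized data:

```c
#include <assert.h>

/* Hypothetical stand-ins for the HVM_IRQ_DPCI_* bits under discussion. */
#define HVM_IRQ_DPCI_MACH_MSI   (1u << 0)
#define HVM_IRQ_DPCI_GUEST_MSI  (1u << 1)

/* .gmsi is only filled in by the PT_IRQ_TYPE_MSI path, which sets both
 * bits; the other path may set MACH_MSI alone.  A guard that requires
 * both bits therefore only admits bindings whose .gmsi was initialized. */
static int gmsi_valid(unsigned int flags)
{
    unsigned int both = HVM_IRQ_DPCI_MACH_MSI | HVM_IRQ_DPCI_GUEST_MSI;
    return (flags & both) == both;
}
```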
 

I'm largely asking because I think struct hvm_mirq_dpci_mapping.dom
and .digl_list could actually overlay .gmsi, as much as struct
hvm_irq_dpci.hvm_timer could actually rather be folded into struct
hvm_mirq_dpci_mapping (and then also overlay .gmsi). The overlay
distinction bit would, based on initialization, be HVM_IRQ_DPCI_GUEST_MSI,
but according to use it wouldn't be clear which of the two
HVM_IRQ_DPCI_*_MSI bits is actually the correct one.
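The overlay idea above can be sketched roughly as follows. The type and field names are simplified stand-ins for the real Xen structures, not the actual definitions; the point is only that the non-MSI members (.dom, .digl_list) and the guest-MSI member (.gmsi) are never live at the same time, so a union could hold both, with one HVM_IRQ_DPCI_*_MSI bit selecting the live member:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the Xen types discussed. */
struct list_head { struct list_head *next, *prev; };

struct gmsi_info {
    unsigned int gvec;    /* guest vector */
    unsigned int gflags;  /* guest delivery flags */
};

struct mirq_dpci_mapping {
    unsigned int flags;   /* HVM_IRQ_DPCI_* bits; GUEST_MSI would act
                           * as the discriminator for the union below. */
    union {
        struct {          /* live when GUEST_MSI is clear */
            void *dom;
            struct list_head digl_list;
        };
        struct gmsi_info gmsi;  /* live when GUEST_MSI is set */
    };
};
```

Since the two arms share storage, the structure shrinks by the size of the smaller arm; the anonymous union/struct members used here are standard C11.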

Having a single structure only would make it a lot easier to
convert struct hvm_mirq_dpci_mapping * in struct hvm_irq_dpci to
a sparse struct hvm_mirq_dpci_mapping ** (populating slots only
as they get used), thus shrinking the currently two d->nr_pirqs
sized array allocations in pt_irq_create_bind_vtd() to a single one
with only pointer size array elements (allowing up to about 512
domain pirqs rather than currently slightly above 80 without
exceeding PAGE_SIZE on allocation).
I also agree, but I think Allen would be better placed to make the final judgement. Thanks!
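The sparse-array proposal above can be sketched as follows. The names (dpci_table, dpci_get) and the use of calloc are illustrative only; Xen would use its own allocators and locking, which this sketch omits. The point is that a table of pointer-sized slots, populated lazily on first bind, keeps the per-domain allocation to one pointer per pirq (about 512 slots in a 4096-byte page with 8-byte pointers):

```c
#include <stdlib.h>

/* Hypothetical simplified mapping entry. */
struct mirq_dpci_mapping { unsigned int flags; };

/* Sparse per-pirq table: one pointer-sized slot per pirq. */
struct dpci_table {
    struct mirq_dpci_mapping **slots;
    unsigned int nr_pirqs;
};

static int dpci_table_init(struct dpci_table *t, unsigned int nr_pirqs)
{
    t->slots = calloc(nr_pirqs, sizeof(*t->slots));
    if (!t->slots)
        return -1;
    t->nr_pirqs = nr_pirqs;
    return 0;
}

/* Return the mapping for @pirq, allocating the slot on first use. */
static struct mirq_dpci_mapping *dpci_get(struct dpci_table *t,
                                          unsigned int pirq)
{
    if (pirq >= t->nr_pirqs)
        return NULL;
    if (!t->slots[pirq])
        t->slots[pirq] = calloc(1, sizeof(*t->slots[pirq]));
    return t->slots[pirq];
}
```

Unused slots cost only one NULL pointer each, so a single page-sized allocation covers far more pirqs than the current two full-entry arrays.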

Also I'm wondering why the PT_IRQ_TYPE_MSI path of
pt_irq_create_bind_vtd() checks that on re-use of an IRQ the
flags are indicating the same kind of interrupt, while the other
path doesn't bother doing so.
The purpose is described in the check-in notes:
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1239806337 -3600
# Node ID 3e64dfebabd7340f5852ad112c858efcebc9cae5
# Parent  b2c43b0fba713912d8ced348b5d628743e52d8be
passthrough: allow pt_bind_irq for msi update
Extend pt_bind_irq to handle the update of msi guest
vector and flag.
Unbind and rebind using separate hypercalls may not be viable
sometime.
For example, the guest may update MSI address/data on fly without
disabling it first (e.g. change delivery/destination), implement these
updates in such a way may result in interrupt loss.
Signed-off-by: Qing He <qing.he@xxxxxxxxx>
 

Thanks, Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

