Re: [Xen-devel] [PATCH v7 for-next 10/12] vpci/msi: add MSI handlers
On Fri, Dec 15, 2017 at 05:07:06AM -0700, Jan Beulich wrote:
> >>> On 18.10.17 at 13:40, <roger.pau@xxxxxxxxxx> wrote:
> > +static void control_write(const struct pci_dev *pdev, unsigned int reg,
> > +                          uint32_t val, void *data)
> > +{
> > +    struct vpci_msi *msi = data;
> > +    unsigned int vectors = min_t(uint8_t,
> > +                                 1u << MASK_EXTR(val, PCI_MSI_FLAGS_QSIZE),
> > +                                 msi->max_vectors);
> > +    bool new_enabled = val & PCI_MSI_FLAGS_ENABLE;
> > +
> > +    /*
> > +     * No change if the enable field and the number of vectors is
> > +     * the same or the device is not enabled, in which case the
> > +     * vectors field can be updated directly.
> > +     */
> > +    if ( new_enabled == msi->enabled &&
> > +         (vectors == msi->vectors || !msi->enabled) )
> > +    {
> > +        msi->vectors = vectors;
> > +        return;
> > +    }
> > +
> > +    if ( new_enabled )
> > +    {
> > +        unsigned int i;
> > +
> > +        /*
> > +         * If the device is already enabled it means the number of
> > +         * enabled messages has changed. Disable and re-enable the
> > +         * device in order to apply the change.
> > +         */
> > +        if ( msi->enabled )
> > +        {
> > +            vpci_msi_arch_disable(msi, pdev);
> > +            msi->enabled = false;
> > +        }
> > +
> > +        if ( vpci_msi_arch_enable(msi, pdev, vectors) )
> > +            return;
> > +
> > +        for ( i = 0; msi->masking && i < vectors; i++ )
> > +            vpci_msi_arch_mask(msi, pdev, i, (msi->mask >> i) & 1);
>
> The ordering looks wrong at the first (and second) glance: It gives
> the impression that you enable the vectors and only then mask
> them. I _assume_ the ordering is the way it is because
> vpci_msi_arch_enable() leaves the vectors masked

I've taken another look at this, and I think what's done here is still
not fully correct.

vpci_msi_arch_enable (which calls allocate_and_map_msi_pirq and
pt_irq_create_bind) will leave the masking bits as they were. There's
no explicit masking done there. It just happens that Xen sets the mask
to ~0 when adding the PCI device (see msi_capability_init), and thus
all vectors are masked by default when the device first enables MSI.

So given the following flow:

 - Guest enables MSI with 8 vectors enabled and unmasked.
 - Guest disables MSI.
 - Guest masks vector 4.
 - Guest re-enables MSI.

There's going to be a window where vector 4 won't be masked in the
code above (between the call to vpci_msi_arch_enable and the call to
vpci_msi_arch_mask). It's quite likely that the QEMU side is also
missing this, but AFAICT it's not something that an OS would usually
do.

I think the proper way to solve this is to reset the mask bits to
masked when the vector is unbound, so that at bind time the state of
the mask is consistent regardless of whether the vector has been
previously bound or not.

The following patch should fix this:

diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 8f16e6c0a5..bab3aa349a 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -645,7 +645,22 @@ int pt_irq_destroy_bind(
         }
         break;
     case PT_IRQ_TYPE_MSI:
+    {
+        unsigned long flags;
+        struct irq_desc *desc = domain_spin_lock_irq_desc(d, machine_gsi,
+                                                          &flags);
+
+        if ( !desc )
+            return -EINVAL;
+        /*
+         * Leave the MSI masked, so that the state when calling
+         * pt_irq_create_bind is consistent across bind/unbinds.
+         */
+        guest_mask_msi_irq(desc, true);
+        spin_unlock_irqrestore(&desc->lock, flags);
         break;
+    }
+
     default:
         return -EOPNOTSUPP;
     }

I think this should be sent as a separate patch of this series, since
it's a fix for pt_irq_destroy_bind.
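For illustration only, here is a small standalone model of the flow
above (hypothetical names, not Xen code) that tracks nothing but the
guest-requested mask bits and the bits currently programmed in
hardware; it compiles and runs as-is:

/*
 * Standalone sketch, not Xen code: every name below is made up purely
 * to illustrate the window described above.  The hardware mask bits
 * survive a disable/enable cycle, so a vector the guest masked while
 * MSI was disabled is briefly live and unmasked after re-enable,
 * until the masking loop catches up.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_msi {
    bool enabled;
    unsigned int guest_mask; /* mask bits the guest has requested */
    unsigned int hw_mask;    /* mask bits currently in hardware */
};

/* Mirrors the behaviour described above: enabling (binding) the
 * vectors does not touch the mask bits. */
static void model_enable(struct model_msi *m) { m->enabled = true; }

/* Current behaviour: disabling (unbinding) does not touch them
 * either.  The proposed fix would instead set hw_mask to all ones
 * here, so the state at the next enable is always "all masked". */
static void model_disable(struct model_msi *m) { m->enabled = false; }

int main(void)
{
    /* Guest has MSI enabled with all vectors unmasked. */
    struct model_msi m = { .enabled = true, .guest_mask = 0, .hw_mask = 0 };

    model_disable(&m);        /* guest disables MSI   */
    m.guest_mask |= 1u << 4;  /* guest masks vector 4 */
    model_enable(&m);         /* guest re-enables MSI */

    /* Window: vector 4 is live although the guest asked for it masked. */
    printf("window: hw=%#x guest=%#x\n", m.hw_mask, m.guest_mask);

    m.hw_mask = m.guest_mask; /* the vpci_msi_arch_mask() loop catches up */
    printf("after:  hw=%#x guest=%#x\n", m.hw_mask, m.guest_mask);

    return 0;
}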
> (albeit that's
> sort of contradicting the msi->masking part of the loop condition),
> and if so this should be explained in a comment. If, however, this
> assumption of mine is wrong, then the order needs changing.

I will add a comment once this is sorted.

Thanks, Roger.