Re: [Xen-devel] [PATCH v2 2/2] xen: events: free irqs in error condition
On Tue, Feb 27, 2018 at 03:55:58PM +0000, Amit Shah wrote:
> In case of errors in irq setup for MSI, free up the allocated irqs.
>
> Fixes: 4892c9b4ada9f9 ("xen: add support for MSI message groups")
> Reported-by: Hooman Mirhadi <mirhadih@xxxxxxxxxx>
> CC: <stable@xxxxxxxxxxxxxxx>
> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> CC: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> CC: Eduardo Valentin <eduval@xxxxxxxxxx>
> CC: Juergen Gross <jgross@xxxxxxxx>
> CC: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> CC: "K. Y. Srinivasan" <kys@xxxxxxxxxxxxx>
> CC: Liu Shuo <shuo.a.liu@xxxxxxxxx>
> CC: Anoob Soman <anoob.soman@xxxxxxxxxx>
> Signed-off-by: Amit Shah <aams@xxxxxxxxxx>
> ---
>  drivers/xen/events/events_base.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index c86d10e..a299586 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -750,11 +750,14 @@ int xen_bind_pirq_msi_to_irq(struct pci_dev *dev, struct msi_desc *msidesc,
>
>  	ret = irq_set_msi_desc(irq, msidesc);
>  	if (ret < 0)
> -		goto error_irq;
> +		goto error_desc;
>  out:
>  	mutex_unlock(&irq_mapping_update_lock);
>  	return irq;
>  error_irq:
> +	while (--nvec >= i)
> +		xen_free_irq(irq + nvec);
> +error_desc:
>  	while (i > 0) {
>  		i--;
>  		__unbind_from_irq(irq + i);
It seems pointless to introduce another label and another loop to fix
something that can be fixed with a single label and a single loop;
this just makes the code more complex for no reason.
IMHO the way to solve this issue is:
while (nvec--) {
	if (nvec >= i)
		xen_free_irq(irq + nvec);
	else
		__unbind_from_irq(irq + nvec);
}
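
[Editor's note: for illustration, here is a minimal, self-contained
sketch of that single-loop unwind. The helpers below (free_res,
unbind_res, bind_res, bind_all) are hypothetical stand-ins, not the
real Xen functions: entries at or above the failure index i were
allocated but never bound, so they only need freeing, while entries
below i were fully bound and need the full unbind.]

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-ins for xen_free_irq()/__unbind_from_irq(). */
static void free_res(int n)   { printf("free   %d\n", n); }
static void unbind_res(int n) { printf("unbind %d\n", n); }

/* Pretend that binding entry 'fail_at' fails. */
static bool bind_res(int n, int fail_at) { return n != fail_at; }

/*
 * Bind nvec preallocated entries; on failure, a single descending
 * loop unwinds: entries >= i were allocated but never bound, so
 * they are only freed, while entries < i get the full unbind.
 */
static int bind_all(int nvec, int fail_at)
{
	int i;

	for (i = 0; i < nvec; i++)
		if (!bind_res(i, fail_at))
			goto error;
	return 0;

error:
	while (nvec--) {
		if (nvec >= i)
			free_res(nvec);      /* allocated, not bound */
		else
			unbind_res(nvec);    /* bound: full teardown */
	}
	return -1;
}

int main(void)
{
	/* Failure at entry 2 of 4: 3 and 2 freed, 1 and 0 unbound. */
	bind_all(4, 2);
	return 0;
}

[Running this prints the teardown in reverse order (free 3, free 2,
unbind 1, unbind 0), mirroring how the suggested loop walks the
partially-bound range back down from nvec - 1 to 0.]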
Roger.