
Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization Extensions



On 12 July 2013 01:25, Eric Trudeau <etrudeau@xxxxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: Ian Campbell [mailto:ian.campbell@xxxxxxxxxx]
>> Sent: Thursday, July 11, 2013 1:18 PM
>> To: Eric Trudeau
>> Cc: xen-devel; Julien Grall; Stefano.Stabellini@xxxxxxxxxx
>> Subject: Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization
>> Extensions
>>
>> On Thu, 2013-07-11 at 16:46 +0000, Eric Trudeau wrote:
>> > > > >> > What functions should I call to implement XEN_DOMCTL_bind_pt_irq
>> > > > >> > on ARM?
>> > > > >>
>> > > > >> There's a function like route_irq_to_guest which we use to route
>> > > > >> IRQs to dom0 during boot. In principle that could also be used to
>> > > > >> reroute an IRQ to a guest, but I'm not sure how it will interact
>> > > > >> with the reassignment, since in your case the IRQ starts off
>> > > > >> bound to dom0. Hopefully it's just a small change to make it work
>> > > > >> for this case.
>> > > > >>
>> >
>> > I have IRQs for devices being passed to ARM DomU guests now.  Thanks
>> > for your help.
>>
>> Excellent! Do you have any useful patches?
>>
>
> I added code in the arm domain.c to release the IRQs when a domain is
> destroyed.  I am providing my changes, but I believe there may be more
> work to have a clean solution.  Specifically, the following items may
> need to be addressed.

Thanks for that patch!

> 1.      The dom.cfg "irqs" section has 32 added to the device-tree IRQ
>            number because I wasn't sure where to add the translation.
>            This can be cleaned up when guests are able to be given DTB
>            files and Xen can parse them.
>
> 2.      I assume level-triggered IRQs since the "irqs" list is only IRQ
>            numbers.  This also could be left until guest DTB files are
>            supported.
>
> 3.      I added clearing of the IRQ_GUEST bit in desc->status in
>            release_irq because I didn't see where it would be unset.
>            This is probably not a big deal since not many situations
>            arise where an IRQ is sometimes host and sometimes guest.

Could you send a separate patch for this fix?

> 4.      I changed vgic.c to use d->nr_irqs instead of assuming guests
>            have no SPIs.  This allowed me to use the extra_domu_irqs
>            boot param to allow guests to have SPIs.

How do you define the number of SPIs for dom0?  Also with extra_dom0_irqs?

By default nr_pirqs = static_nr_irqs + extra_dom{0,U}_irqs.

On ARM, static_nr_irqs equals 1024, so you are allocating many more IRQs
than are requested/supported by the GIC.

In any case, the number of IRQs per guest must not be "hardcoded" via the
command line. This value is different on each board.

For dom0, we can use the same number as the host, and for a guest we can
pass a parameter during domain creation to specify how many SPIs the
guest needs.
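As a standalone sketch of that idea: a guest-supplied SPI count is validated and clamped against what the host GIC actually offers. The function name, the clamping policy, and the "0 means same as host" convention are illustrative assumptions, not Xen's real interface.

```c
/* Hypothetical sketch: pick a per-domain SPI count at domain creation,
 * bounded by the host GIC.  Not the real Xen API. */
static int choose_nr_spis(int requested, int host_gic_lines)
{
    int host_spis = host_gic_lines - 32;  /* IRQs 0-31 are SGIs/PPIs */

    if ( requested < 0 )
        return -1;                        /* invalid request */
    if ( requested == 0 || requested > host_spis )
        return host_spis;                 /* default/clamp: mirror the host */
    return requested;
}
```

With a 96-line GIC, a guest asking for 16 SPIs would get 16, while a domain requesting 0 ("same as host", as suggested for dom0) would get 64.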

> 5.      I added a check for whether an IRQ was already in use, earlier
>            in the flow of gic_route_irq_to_guest(), so that
>            desc->handler is not messed with before __setup_irq does the
>            check and returns an error.  gic_set_irq_properties will
>            also be avoided in the error case.
>
> I rebased to commit 9eabb07 and verified my changes.  I needed the fix
> in gic_irq_shutdown or my release_irq changes caused other IRQs to be
> disabled when a domain was destroyed.

I'm surprised; this issue should have been corrected by commit
751554b. I don't see a fix in this patch; do you have one?

> From 551f174c9653def01890fadd2dd769ab8a9acde7 Mon Sep 17 00:00:00 2001
> From: Eric Trudeau <etrudeau@xxxxxxxxxxxx>
> Date: Thu, 11 Jul 2013 20:03:51 -0400
> Subject: [PATCH] Add support for Guest physdev irqs
>
> ---
>  xen/arch/arm/domain.c  | 16 ++++++++++++++++
>  xen/arch/arm/gic.c     | 15 ++++++++++-----
>  xen/arch/arm/physdev.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++--
>  xen/arch/arm/vgic.c    |  5 +----
>  4 files changed, 73 insertions(+), 11 deletions(-)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 4c434a1..52d3429 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -31,6 +31,8 @@
>  #include <asm/gic.h>
>  #include "vtimer.h"
>  #include "vpl011.h"
> +#include <xen/iocap.h>
> +#include <xen/irq.h>
>
>  DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
>
> @@ -513,8 +515,22 @@ fail:
>      return rc;
>  }
>
> +static int release_domain_irqs(struct domain *d)
> +{
> +    int i;
> +    for (i = 0; i < d->nr_pirqs; i++) {
> +        if (irq_access_permitted(d, i)) {
> +            release_irq(i);
> +        }
> +    }
> +    return 0;
> +}

As you may know, release_irq will spin until the IRQ_INPROGRESS flag is
unset, which is done by the maintenance handler.

Now, suppose the domain has crashed and the IRQ has not yet been EOIed:
you will spin forever.
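The hazard can be shown with a self-contained sketch. The struct, the flag value, and the poll bound are inventions for illustration; the real release_irq loop has no bound, which is exactly the problem.

```c
#define IRQ_INPROGRESS 0x1  /* illustrative flag value, not Xen's */

struct fake_irq_desc {
    unsigned int status;
};

/* Wait for the in-progress flag to clear, as release_irq does.  The real
 * code loops unconditionally; max_polls exists here only to make visible
 * where it would hang if the guest crashed before EOIing the interrupt. */
static int wait_for_quiescence(struct fake_irq_desc *desc, int max_polls)
{
    while ( desc->status & IRQ_INPROGRESS )
    {
        if ( max_polls-- == 0 )
            return -1;  /* crashed guest: the flag never clears */
    }
    return 0;
}
```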

> +
>  void arch_domain_destroy(struct domain *d)
>  {
> +    if (d->irq_caps != NULL)

You don't need this check: during domain creation, Xen ensures that
irq_caps is not NULL.

> +        release_domain_irqs(d);
>      p2m_teardown(d);
>      domain_vgic_free(d);
>      domain_uart0_free(d);
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index cafb681..1f576d1 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -510,7 +510,7 @@ void gic_route_spis(void)
>      }
>  }
>
> -void __init release_irq(unsigned int irq)
> +void release_irq(unsigned int irq)
>  {
>      struct irq_desc *desc;
>      unsigned long flags;
> @@ -522,6 +522,7 @@ void __init release_irq(unsigned int irq)
>      action = desc->action;
>      desc->action  = NULL;
>      desc->status |= IRQ_DISABLED;
> +    desc->status &= ~IRQ_GUEST;
>
>      spin_lock(&gic.lock);
>      desc->handler->shutdown(desc);
> @@ -707,6 +708,12 @@ int gic_route_irq_to_guest(struct domain *d, const 
> struct dt_irq *irq,
>      spin_lock_irqsave(&desc->lock, flags);
>      spin_lock(&gic.lock);
>
> +    if ( desc->action != NULL )
> +    {
> +        retval = -EBUSY;
> +        goto out;
> +    }
> +
>      desc->handler = &gic_guest_irq_type;
>      desc->status |= IRQ_GUEST;
>
> @@ -715,12 +722,10 @@ int gic_route_irq_to_guest(struct domain *d, const 
> struct dt_irq *irq,
>      gic_set_irq_properties(irq->irq, level, 1u << smp_processor_id(), 0xa0);
>
>      retval = __setup_irq(desc, irq->irq, action);
> -    if (retval) {
> -        xfree(action);
> -        goto out;
> -    }
>
>  out:
> +    if (retval)
> +        xfree(action);
>      spin_unlock(&gic.lock);
>      spin_unlock_irqrestore(&desc->lock, flags);
>      return retval;
> diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
> index 61b4a18..8a5f770 100644
> --- a/xen/arch/arm/physdev.c
> +++ b/xen/arch/arm/physdev.c
> @@ -9,12 +9,56 @@
>  #include <xen/lib.h>
>  #include <xen/errno.h>
>  #include <asm/hypercall.h>
> +#include <public/physdev.h>
> +#include <xen/guest_access.h>
> +#include <xen/irq.h>
> +#include <xen/sched.h>
> +#include <asm/gic.h>
>
>
>  int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> -    printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
> -    return -ENOSYS;
> +    int ret;
> +
> +    switch ( cmd )
> +    {
> +    case PHYSDEVOP_map_pirq: {
> +        physdev_map_pirq_t map;
> +        struct dt_irq irq;
> +        struct domain *d;
> +
> +        ret = -EFAULT;
> +        if ( copy_from_guest(&map, arg, 1) != 0 )
> +            break;
> +
> +        d = rcu_lock_domain_by_any_id(map.domid);
> +        if ( d == NULL ) {
> +            ret = -ESRCH;
> +            break;
> +        }

Missing sanity check on the map.pirq value.
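A minimal sketch of such a check, done before routing; gic_number_lines() is stubbed here, and the exact error value and placement are assumptions:

```c
static int stub_gic_number_lines(void) { return 96; }  /* stand-in host GIC */

/* Reject pirq values outside the SPI range: IRQs 0-31 are SGIs/PPIs and
 * anything at or above the GIC line count does not exist on the host. */
static int validate_pirq(int pirq)
{
    if ( pirq < 32 || pirq >= stub_gic_number_lines() )
        return -22;  /* -EINVAL */
    return 0;
}
```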

> +        irq.irq = map.pirq;
> +        irq.type = DT_IRQ_TYPE_LEVEL_MASK;
> +
> +        ret = gic_route_irq_to_guest(d, &irq, "GuestIRQ");

Do you plan to handle non-1:1 IRQ mappings?
How does your IRQ mapping work if the IRQ is already mapped to dom0?

> +        if (!ret)
> +            printk("%s gic_route_irq_to_guest added IRQ %d to dom%d\n",
> +                   __FUNCTION__, irq.irq, d->domain_id);
> +
> +        rcu_unlock_domain(d);
> +
> +        if (!ret && __copy_to_guest(arg, &map, 1) )
> +            ret = -EFAULT;
> +        break;
> +    }
> +
> +    default:
> +        printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, 
> cmd);
> +        ret = -ENOSYS;
> +        break;
> +    }
> +
> +    return ret;
>  }
>
>  /*
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 2e4b11f..0ebcdac 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -82,10 +82,7 @@ int domain_vgic_init(struct domain *d)
>      /* Currently nr_lines in vgic and gic doesn't have the same meanings
>       * Here nr_lines = number of SPIs
>       */
> -    if ( d->domain_id == 0 )
> -        d->arch.vgic.nr_lines = gic_number_lines() - 32;
> -    else
> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> +    d->arch.vgic.nr_lines = d->nr_pirqs - 32;

If you want to stick with the 1:1 mapping, the best solution is to set
nr_lines to gic_number_lines() - 32 for all domains.
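As a standalone sketch of that suggestion (with a stub 96-line host GIC): under a strict 1:1 mapping the vGIC SPI count no longer depends on the domain id.

```c
static int stub_gic_number_lines(void) { return 96; }  /* stand-in host GIC */

/* Under a strict 1:1 mapping, dom0 and guests see the same SPI range as
 * the host, so nr_lines is the same for every domain. */
static int vgic_nr_lines(int domain_id)
{
    (void)domain_id;                        /* unused: same for all domains */
    return stub_gic_number_lines() - 32;    /* first 32 IRQs are SGIs/PPIs */
}
```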

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

