Re: [Xen-devel] [RFC PATCH v1 02/10] xen/arm: register mmio handler at runtime
Hi,
On Thu, Mar 20, 2014 at 3:18 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
> Hello Vijay,
>
> (Adding Andrii who is working on a similar patch).
>
Thank you for pointing me to this thread. Some time ago I posted the
idea for this solution, but my work on it is postponed for now; I hope
to resume it as soon as possible.
The reason I started working on this is proper OMAP IOMMU handling,
which definitely needs runtime IO traps. vgic and vuart as standalone
modules do not need this - they only need boot-time IO memory
configuration. But once OMAP IOMMU support is added (I hope to post it
soon), vgic and vuart will need to be updated as well.
> Thank you for the patch.
>
> On 03/19/2014 02:17 PM, vijay.kilari@xxxxxxxxx wrote:
>> From: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxxxxxxxxxx>
>>
>> mmio handlers are registered at compile time
>> for drivers like vuart and vgic.
>> Make mmio handlers registerable at runtime by
>> creating a linked list of mmio handlers
>>
>> Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxxxxxxxxxx>
>> ---
>> xen/arch/arm/io.c | 32 +++++++++++++++++---------
>> xen/arch/arm/io.h | 16 +++++--------
>> xen/arch/arm/vgic.c | 61 ++++++++++++++++++++++++++------------------------
>> xen/arch/arm/vuart.c | 39 ++++++++++++++++----------------
>> 4 files changed, 79 insertions(+), 69 deletions(-)
>>
>> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
>> index a6db00b..d140b29 100644
>> --- a/xen/arch/arm/io.c
>> +++ b/xen/arch/arm/io.c
>> @@ -17,31 +17,41 @@
>> */
>>
>> #include <xen/config.h>
>> +#include <xen/init.h>
>> +#include <xen/kernel.h>
>> #include <xen/lib.h>
>> +#include <xen/spinlock.h>
>> #include <asm/current.h>
>>
>> #include "io.h"
>>
>> -static const struct mmio_handler *const mmio_handlers[] =
>> -{
>> - &vgic_distr_mmio_handler,
>> - &vuart_mmio_handler,
>> -};
>> -#define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)
>> +LIST_HEAD(handle_head);
This should be declared static: static LIST_HEAD(handle_head);
>> +static DEFINE_SPINLOCK(handler_lock);
>
> As you change the code, I would prefer a per domain list IO handler. So
> we can easily handle GICv2 guest on GICv3 host.
>
> This list would only contain handlers that will be effectively used for
> the domain.
>
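For what it's worth, per-domain registration could look roughly like
this (a sketch only - the vmmio container and field names here are
illustrative, not from this patch):

    struct vmmio {
        spinlock_t lock;
        struct list_head handlers;
    };

    /* e.g. embedded in struct arch_domain as d->arch.vmmio */

    void register_mmio_handler(struct domain *d, struct mmio_handler *handle)
    {
        struct vmmio *vmmio = &d->arch.vmmio;

        spin_lock(&vmmio->lock);
        list_add(&handle->handle_list, &vmmio->handlers);
        spin_unlock(&vmmio->lock);
    }

handle_mmio() would then walk only current->domain's list, so a GICv2
guest on a GICv3 host simply never gets the GICv3 handlers registered.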
>> int handle_mmio(mmio_info_t *info)
>> {
>> struct vcpu *v = current;
>> - int i;
>> + struct list_head *pos;
>> + struct mmio_handler *mmio_handle;
>>
>> - for ( i = 0; i < MMIO_HANDLER_NR; i++ )
>> - if ( mmio_handlers[i]->check_handler(v, info->gpa) )
This walk must also be protected by an irqsave lock.
>> + list_for_each(pos, &handle_head) {
>> + mmio_handle = list_entry(pos, struct mmio_handler, handle_list);
>
> You can use list_for_each_entry here.
>
Right, this should be list_for_each_entry.
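i.e. something like this (untested sketch):

    struct mmio_handler *mmio_handle;

    list_for_each_entry(mmio_handle, &handle_head, handle_list)
    {
        if ( mmio_handle->check_handler(v, info->gpa) )
            return info->dabt.write ?
                   mmio_handle->write_handler(v, info) :
                   mmio_handle->read_handler(v, info);
    }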
>> + if ( mmio_handle->check_handler(v, info->gpa) )
>> return info->dabt.write ?
>> - mmio_handlers[i]->write_handler(v, info) :
>> - mmio_handlers[i]->read_handler(v, info);
>> + mmio_handle->write_handler(v, info) :
>> + mmio_handle->read_handler(v, info);
>> + }
>>
>> return 0;
>> }
>> +
>> +void register_mmio_handler(struct mmio_handler * handle)
>> +{
>> + spin_lock(&handler_lock);
>
> Why do you take the lock here and not in handle_mmio?
>
I planned for these functions to be used at runtime, not only at boot
time, so spinlocking here is definitely needed. From the IOMMU point of
view (which is what I am working on), this code may be called at any
time, on any CPU, simultaneously. The only issue here is that the lock
should be irqsave.
>> + list_add(&handle->handle_list, &handle_head);
>> + spin_unlock(&handler_lock);
>> +}
>> +
>> /*
>> * Local variables:
>> * mode: C
>> diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
>> index 8d252c0..99cd7c3 100644
>> --- a/xen/arch/arm/io.h
>> +++ b/xen/arch/arm/io.h
>> @@ -22,6 +22,7 @@
>> #include <xen/lib.h>
>> #include <asm/processor.h>
>> #include <asm/regs.h>
>> +#include <xen/list.h>
>>
>> typedef struct
>> {
>> @@ -30,20 +31,15 @@ typedef struct
>> paddr_t gpa;
>> } mmio_info_t;
>>
>> -typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info);
>> -typedef int (*mmio_write_t)(struct vcpu *v, mmio_info_t *info);
>> -typedef int (*mmio_check_t)(struct vcpu *v, paddr_t addr);
>> -
>
> Why did you remove the typedef? It was useful for the code comprehension.
>
>> struct mmio_handler {
>> - mmio_check_t check_handler;
>> - mmio_read_t read_handler;
>> - mmio_write_t write_handler;
>> + int (*read_handler)(struct vcpu *v, mmio_info_t *info);
>> + int (*write_handler)(struct vcpu *v, mmio_info_t *info);
>> + int (*check_handler)(struct vcpu *v, paddr_t addr);
>
> If we are going to a per domain list IO, I would remove check_handler
> and replacing by:
>
> paddr_t addr;
> paddr_t size;
>
>> + struct list_head handle_list;
>
> On a previous mail (see
> http://www.gossamer-threads.com/lists/xen/devel/317457#317457) I said
> that a list would be better ... but after thinking we can define a fixed
> array of 16 cells. It would be enough for now.
>
> You can see an example in arch/x86/hvm/intercept.c
>
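Combining the two suggestions above (addr/size matching plus a fixed
array), the result could look something like this - a rough sketch,
with MAX_IO_HANDLER and the io_handler container being illustrative
names:

    #define MAX_IO_HANDLER 16

    struct mmio_handler {
        paddr_t addr;
        paddr_t size;
        mmio_read_t read_handler;
        mmio_write_t write_handler;
    };

    struct io_handler {
        int num_entries;
        spinlock_t lock;
        struct mmio_handler mmio_handlers[MAX_IO_HANDLER];
    };

    /* handle_mmio() would match on the range instead of calling
     * check_handler(): */
    static int mmio_handler_matches(const struct mmio_handler *h, paddr_t gpa)
    {
        return (gpa >= h->addr) && (gpa < h->addr + h->size);
    }

This keeps the mmio_read_t/mmio_write_t typedefs and drops
check_handler entirely.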
>> };
>>
>> -extern const struct mmio_handler vgic_distr_mmio_handler;
>> -extern const struct mmio_handler vuart_mmio_handler;
>> -
>> extern int handle_mmio(mmio_info_t *info);
>> +void register_mmio_handler(struct mmio_handler * handle);
>>
>> #endif
>>
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index 553411d..d2a13fb 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -73,34 +73,6 @@ static struct vgic_irq_rank *vgic_irq_rank(struct vcpu *v, int b, int n)
>> return NULL;
>> }
>>
>> -int domain_vgic_init(struct domain *d)
>> -{
>> - int i;
>> -
>> - d->arch.vgic.ctlr = 0;
>> -
>> - /* Currently nr_lines in vgic and gic doesn't have the same meanings
>> - * Here nr_lines = number of SPIs
>> - */
>> - if ( d->domain_id == 0 )
>> - d->arch.vgic.nr_lines = gic_number_lines() - 32;
>> - else
>> - d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
>> -
>> - d->arch.vgic.shared_irqs =
>> - xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
>> - d->arch.vgic.pending_irqs =
>> - xzalloc_array(struct pending_irq, d->arch.vgic.nr_lines);
>> - for (i=0; i<d->arch.vgic.nr_lines; i++)
>> - {
>> - INIT_LIST_HEAD(&d->arch.vgic.pending_irqs[i].inflight);
>> - INIT_LIST_HEAD(&d->arch.vgic.pending_irqs[i].lr_queue);
>> - }
>> - for (i=0; i<DOMAIN_NR_RANKS(d); i++)
>> - spin_lock_init(&d->arch.vgic.shared_irqs[i].lock);
>> - return 0;
>> -}
>> -
>
> I would predefine vgic_distr_mmio_handler early rather moving the whole
> function. It's easier to understand the modification in this patch.
>
> [..]
>
>> struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
>> {
>> struct pending_irq *n;
>> diff --git a/xen/arch/arm/vuart.c b/xen/arch/arm/vuart.c
>> index b9d3ced..c237d71 100644
>> --- a/xen/arch/arm/vuart.c
>> +++ b/xen/arch/arm/vuart.c
>> @@ -44,24 +44,6 @@
>>
>> #define domain_has_vuart(d) ((d)->arch.vuart.info != NULL)
>>
>> -int domain_vuart_init(struct domain *d)
>> -{
>> - ASSERT( !d->domain_id );
>> -
>> - d->arch.vuart.info = serial_vuart_info(SERHND_DTUART);
>> - if ( !d->arch.vuart.info )
>> - return 0;
>> -
>> - spin_lock_init(&d->arch.vuart.lock);
>> - d->arch.vuart.idx = 0;
>> -
>> - d->arch.vuart.buf = xzalloc_array(char, VUART_BUF_SIZE);
>> - if ( !d->arch.vuart.buf )
>> - return -ENOMEM;
>> -
>> - return 0;
>> -}
>> -
>
> Same remark as domain_vgic_init.
>
> Regards,
>
> --
> Julien Grall
Regards,
Andrii
--
Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel