Re: [Xen-devel] [PATCH RFC] xen: arm: context switch vtimer PPI state.
On 27/01/15 13:34, Ian Campbell wrote:
>>> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
>>> index 31fb81a..9074aca 100644
>>> --- a/xen/arch/arm/gic-v2.c
>>> +++ b/xen/arch/arm/gic-v2.c
>>> @@ -156,6 +156,45 @@ static void gicv2_save_state(struct vcpu *v)
>>>      writel_gich(0, GICH_HCR);
>>>  }
>>>
>>> +static void gicv2_save_and_mask_hwppi(struct vcpu *v, const unsigned virq,
>>> +                                      struct hwppi_state *s)
>>> +{
>>> +    struct pending_irq *p = irq_to_pending(v, virq);
>>> +    const unsigned int offs = virq / 32;
>>> +    const unsigned int mask = (1u << (virq % 32));
>>> +    const unsigned int pendingr = readl_gicd(GICD_ISPENDR + offs*4);
>>> +    const unsigned int activer = readl_gicd(GICD_ISACTIVER + offs*4);
>>> +    const unsigned int enabler = readl_gicd(GICD_ISENABLER + offs*4);
>>> +    const bool is_edge = !!(p->desc->arch.type & DT_IRQ_TYPE_EDGE_BOTH);
>>
>> "For each supported PPI, it is IMPLEMENTATION DEFINED whether software
>> can program the corresponding Int_config field."
>>
>> So I would not rely on arch.type, as it may not be in sync with the
>> register.
>>
>> It will be more problematic with the upcoming patch to let the guest
>> configure ICFGR0.
>
> If anything is reprogramming ICFGR0 and not reflecting that in the
> corresponding desc then it is buggy IMHO. Either arch.type should
> always be valid or it should be removed.
>
> For ease of implementation I think we should probably refuse to allow
> any PPI used via this scheme to be reprogrammed, i.e. Xen should choose
> at start of day (based on the DTB and h/w capability) and just force it
> upon the guest.

So you will have to modify the guest (and even DOM0) device tree to
expose the correct type of the interrupt. For now we always expose it
as level-triggered.

>>> +    if ( s->enabled )
>>> +    {
>>> +        writel_gicd(mask, GICD_ICENABLER + offs*4);
>>> +        set_bit(_IRQ_DISABLED, &p->desc->status);
>>
>> I would prefer if you used gicv2_irq_disable rather than open coding
>> it.
>
> I'd like to hear from Stefano whether this is the correct thing to do
> in general before changing it, i.e. whether context switching the
> status bits is required or wrong, or whether there is some preferable
> way, etc.

You have to context switch the enable bit: the guest may disable/enable
this IRQ at any time.

> More generally, the way pending_irq and the irq descriptor are related
> after this patch needs some careful thought IMHO.
>
>>> @@ -125,11 +137,15 @@ void gic_route_irq_to_xen(struct irq_desc *desc, const cpumask_t *cpu_mask,
>>>
>>>  /* Program the GIC to route an interrupt to a guest
>>>   * - desc.lock must be held
>>> + * - d may be NULL, in which case the interrupt *must* be a PPI and
>>> + *   is routed to the vcpu currently running when that PPI fires. In
>>> + *   this case the code responsible for the related hardware must
>>> + *   save and restore the PPI with
>>> + *   gic_save_and_mask_hwppi/gic_restore_hwppi.
>>> + * - if d is non-NULL then the interrupt *must* be an SPI.
>>>   */
>>
>> d == NULL sounds like a hackish way to specify that this IRQ is routed
>> to whichever guest is currently running. I would prefer if you
>> introduced a separate function and duplicated the relevant bits.
>
> That is route_irq_to_current_vcpu vs route_irq_to_guest, which already
> exist as the API the callers actually use.

And I should have written the same comment for route_irq_to_current.
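To make that concrete, a dedicated entry point could look roughly like
the sketch below. This is illustrative only:
gic_set_ppi_route_to_current() is a placeholder for whatever helper the
real implementation would factor out of route_irq_to_guest, not an
existing function.

static int route_irq_to_current_vcpu(unsigned int irq)
{
    struct irq_desc *desc = irq_to_desc(irq);
    unsigned long flags;
    int ret;

    /* Only PPIs (16-31) can sensibly follow the current vCPU. */
    if ( irq < 16 || irq >= 32 )
        return -EINVAL;

    spin_lock_irqsave(&desc->lock, flags);

    /*
     * The PPI fires on whichever vCPU is running on this pCPU, so the
     * hardware driver must pair this with gic_save_and_mask_hwppi() /
     * gic_restore_hwppi() on every context switch.
     */
    ret = gic_set_ppi_route_to_current(desc); /* placeholder */

    spin_unlock_irqrestore(&desc->lock, flags);

    return ret;
}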
>>> +    ASSERT( d || ( irq >= 16 && irq < 32 ) );
>>> +
>>
>> Please, no ASSERT in this function. It will soon be exposed to the
>> guest (see my device passthrough series).
>
> Then you should add range checking on the inputs at the hypercall
> level, since irrespective of whether the ASSERT is there, that
> condition must be true for things to function correctly.

No, because we also need those checks when routing to DOM0. It's a
safeguard to prevent bad behaviour. More checks are coming soon.

We should not hack route_irq_to_guest for a corner case (i.e. routing a
PPI to the current vCPU). A separate function is better.
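For illustration, the kind of safeguard I mean replaces the ASSERT with
a runtime check that fails gracefully once the function is reachable
from guests. A sketch only, assuming a simplified signature (the
in-tree route_irq_to_guest takes different arguments):

int route_irq_to_guest(struct domain *d, unsigned int irq,
                       const char *devname)
{
    /*
     * This path handles SPIs only; PPIs must go through the dedicated
     * current-vCPU routing function instead.
     */
    if ( irq < 32 || irq >= gic_number_lines() )
        return -EINVAL;

    /* ... existing routing logic ... */
    return 0;
}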
Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel