
Re: Keystone Issue



On Wed, Jun 10, 2020 at 4:39 AM Bertrand Marquis
<Bertrand.Marquis@xxxxxxx> wrote:
>
>
>
> > On 10 Jun 2020, at 09:20, Marc Zyngier <maz@xxxxxxxxxx> wrote:
> >
> > On 2020-06-10 09:06, Bertrand Marquis wrote:
> >> Hi,
> >>> On 9 Jun 2020, at 18:45, Marc Zyngier <maz@xxxxxxxxxx> wrote:
> >>> Hi Julien,
> >>> On 2020-06-09 18:32, Julien Grall wrote:
> >>>> (+ Marc)
> >>>> On 09/06/2020 18:03, Bertrand Marquis wrote:
> >>>>> Hi
> >>>>>> On 9 Jun 2020, at 16:47, Julien Grall <julien@xxxxxxx> wrote:
> >>>>>> On 09/06/2020 16:28, Bertrand Marquis wrote:
> >>>>>>> Hi,
> >>>>>>>> On 9 Jun 2020, at 15:33, CodeWiz2280 <codewiz2280@xxxxxxxxx> wrote:
> >>>>>>>> There does appear to be a secondary (CIC) controller that can forward
> >>>>>>>> events to the GIC-400 and EDMA controllers for the Keystone 2 family.
> >>>>>>>> Admittedly, I'm not sure how it is being used with regard to the
> >>>>>>>> peripherals.  I only see mention of the GIC-400 parent for the devices
> >>>>>>>> in the device tree.  Maybe Bertrand has a better idea on whether any
> >>>>>>>> peripherals go through the CIC first?  I see that gic_interrupt()
> >>>>>>>> fires once in Xen, which calls do_IRQ to push out the virtual
> >>>>>>>> interrupt to the dom0 kernel.  The dom0 kernel then handles the
> >>>>>>>> interrupt and returns, but gic_interrupt() never fires again in Xen.
> >>>>>>> I do not remember any CIC, but the behaviour definitely looks like
> >>>>>>> an interrupt acknowledgement problem.
> >>>>>>> Could you try the following:
> >>>>>>> --- a/xen/arch/arm/gic-v2.c
> >>>>>>> +++ b/xen/arch/arm/gic-v2.c
> >>>>>>> @@ -667,6 +667,9 @@ static void gicv2_guest_irq_end(struct irq_desc *desc)
> >>>>>>>     /* Lower the priority of the IRQ */
> >>>>>>>     gicv2_eoi_irq(desc);
> >>>>>>>     /* Deactivation happens in maintenance interrupt / via GICV */
> >>>>>>> +
> >>>>>>> +    /* Test for Keystone2 */
> >>>>>>> +    gicv2_dir_irq(desc);
> >>>>>>> }
> >>>>>>> I think the problem I had was related to the vgic not properly
> >>>>>>> deactivating the interrupt.
> >>>>>> Are you suggesting the guest EOI is not properly forwarded to the
> >>>>>> hardware when LR.HW is set? If so, this could possibly be worked
> >>>>>> around in Xen by raising a maintenance interrupt every time a guest
> >>>>>> EOIs an interrupt.
> >>>>> Agreed, the maintenance interrupt would definitely be the right solution.
> >>>> I would like to make sure we aren't missing anything in Xen first.
> >>>> From what you said, you have encountered this issue in the past with a
> >>>> different hypervisor, so it doesn't look to be Xen-related.
> >>>> Was there any official statement from TI? If not, can we try to get
> >>>> some input from them first?
> >>>> @Marc, I know you dropped 32-bit support in KVM recently :). Although,
> >>> Yes! Victory is mine! Freedom from the shackles of 32bit, at last! :D
> >>>> I was wondering if you heard about any potential issue with guest EOI
> >>>> not forwarded to the host. This is on TI Keystone (Cortex A-15).
> >>> Not that I know of. A-15 definitely works (TC2, Tegra-K1, Calxeda Midway 
> >>> all run just fine with guest EOI), and GIC-400 is a pretty solid piece of 
> >>> kit (it is just sloooooow...).
> >>> Thinking of it, you would see something like that if the GIC was seeing 
> >>> the writes coming from the guest as secure instead of NS (cue the early 
> >>> firmware on XGene that exposed the wrong side of GIC-400).
> >>> Is there some kind of funky bridge between the CPU and the GIC?
> >> Yes, the behaviour I had was consistent with the GIC seeing the
> >> processor in secure mode rather than non-secure, hence making the VGIC
> >> ack non-functional.
> >
> > Can you please check this with the TI folks? It may be fixable if
> > the bridge is SW configurable.
>
> At that time they did not “offer” that solution, but that does not mean it
> is not possible.
>
> >
> >> So the only way to solve this is actually to do the interrupt
> >> deactivation inside Xen (using a maintenance interrupt).
> >
> > That's a terrible hack, and one that would encourage badly integrated HW.
> > I appreciate the need to "make things work", but I'd be wary of putting
> > this in released SW. Broken HW must die. I have written more than my share
> > of these terrible hacks (see TX1 support), and I deeply regret it, as
> > it has only given Si vendors an excuse not to fix things.
>
> Fully agree and I also had to do some hacks for the TX1 ;-)
>
> >
> >> I remember that I also had to do something specific for the
> >> configuration of edge/level triggering and priorities to get almost
> >> proper behaviour.
> >
> > Well, the moment the GIC observes secure accesses when they should be
> > non-secure, all bets are off and you have to resort to the above hacks.
> > The fun part is that if you have secure SW running on this platform,
> > you can probably DoS it from non-secure. It's good, isn't it?
>
> Definitely is, but if I remember correctly they have two kinds of SoC: one
> that can only be used non-secure, and another which is meant to be used
> with both secure and non-secure.
>
> Bertrand
>
> >
> >> Sadly I have no access to the code anymore, so I would need to guess
> >> again at what that was...
> >
> > I'd say this *is* a good thing.
The problem is that a hack may be my only solution to getting this
working on this platform.  If TI says that they don't support it, then
I'm stuck.  To summarize the problem: we believe that the GIC is seeing
secure accesses from dom0 when they should be non-secure, which makes
the VGIC ack non-functional from dom0.  We would need firmware that
supports both secure and non-secure accesses.
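
If it helps to confirm that theory, here is one way to probe it from
Xen (which owns the Distributor).  This is a diagnostic sketch I'm
proposing, not something from the thread: the offset follows the GICv2
spec, and the mapped GICD base has to come from your DTB.  It relies on
GICD_IGROUPRn being RAZ/WI for Non-secure accesses, so a write that
sticks means the GIC saw the access as secure:

#include <stdbool.h>
#include <stdint.h>

#define GICD_IGROUPR1  0x084   /* group bits for INTIDs 32-63 */

/* 'gicd' is the mapped Distributor base (taken from the device tree). */
static bool gicd_sees_us_as_secure(volatile uint8_t *gicd)
{
    volatile uint32_t *igroupr1 =
        (volatile uint32_t *)(gicd + GICD_IGROUPR1);
    uint32_t old = *igroupr1;

    /* Non-secure accesses to GICD_IGROUPRn are RAZ/WI, so if this
     * write is visible on read-back, the GIC treated it as secure. */
    *igroupr1 = ~old;
    bool secure = (*igroupr1 == (uint32_t)~old);
    *igroupr1 = old;           /* restore the original grouping */
    return secure;
}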

The Xen code gets to "gicv2_guest_irq_end()", where it executes
"gicv2_eoi_irq()", but then we had to add the "gicv2_dir_irq()" call to
deactivate the interrupt manually and get things going again.
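
For reference, this is roughly what the patched function ends up looking
like -- a minimal sketch reconstructed from Bertrand's diff above, with
the surrounding code being Xen's xen/arch/arm/gic-v2.c as of that diff:

static void gicv2_guest_irq_end(struct irq_desc *desc)
{
    /* Lower the priority of the IRQ (priority drop via GICC_EOIR) */
    gicv2_eoi_irq(desc);
    /*
     * Deactivation is normally left to the guest's EOI via GICV, but
     * on this platform that EOI never reaches the GIC, so deactivate
     * the physical interrupt by hand (write to GICC_DIR).
     */
    gicv2_dir_irq(desc);
}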
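And for completeness, the maintenance-interrupt approach Julien and
Bertrand discuss above would go roughly like this: inject the interrupt
as a purely virtual one (LR.HW clear) with the EOI maintenance bit set,
then deactivate the physical interrupt from the maintenance handler.
This is only a sketch of the idea -- the constants follow the GICv2
GICH_LR layout from the architecture spec rather than Xen's actual
macros, and inject_lr_no_hw()/keystone_maint_handler() are hypothetical
names:

#include <stdint.h>

#define GICH_LR_STATE_PENDING  (1U << 28)  /* LR state = pending */
#define GICH_LR_EOI            (1U << 19)  /* maintenance IRQ on guest
                                              EOI (valid when HW == 0) */

/* Write one list register, leaving bit 31 (HW) clear so the guest's
 * EOI raises a maintenance interrupt instead of touching GICC_DIR. */
static void inject_lr_no_hw(volatile uint32_t *gich_lr,
                            uint32_t virq, uint32_t prio)
{
    *gich_lr = GICH_LR_STATE_PENDING | GICH_LR_EOI |
               ((prio & 0xf8) << 20) |   /* priority, bits [27:23] */
               (virq & 0x3ff);           /* virtual INTID, bits [9:0] */
}

/* From the maintenance interrupt, deactivate the hardware IRQ backing
 * the virtual one, since the guest's EOI could not do it.  struct
 * irq_desc and gicv2_dir_irq() are Xen internals here. */
static void keystone_maint_handler(struct irq_desc *desc)
{
    gicv2_dir_irq(desc);   /* write the physical INTID to GICC_DIR */
}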

> >
> >        M.
> > --
> > Jazz is not dead. It just smells funny...
>



 

