
Re: [Xen-devel] [PATCH 1/4] xen/arm: gic: Ensure we have an ISB between ack and do_IRQ()


  • To: Andre Przywara <andre.przywara@xxxxxxx>, Andrii Anisov <andrii.anisov@xxxxxxxxx>
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Mon, 3 Dec 2018 14:53:38 +0100
  • Autocrypt: addr=jgross@xxxxxxxx; prefer-encrypt=mutual
  • Cc: "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Andrii Anisov <Andrii_Anisov@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>, Julien Grall <julien.grall@xxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>
  • Delivery-date: Mon, 03 Dec 2018 13:53:58 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 03/12/2018 14:46, Andre Przywara wrote:
> On 30/11/2018 19:52, Andrii Anisov wrote:
>> Hello Andre,
>>
>> Please see my comments below:
>>
>> On 23.11.18 14:18, Andre Przywara wrote:
>>> Fundamentally there is a semantic difference between edge and level
>>> triggered IRQs: When the guest has handled an *edge* IRQ (EOIed so
>>> the LR's state goes to 0), this is done and dusted, and Xen doesn't
>>> need to care about this anymore until the next IRQ occurs. For level
>>> triggered IRQs, even though the guest has handled it, we need to
>>> resample the (potentially virtual) IRQ line, as it may come up or
>>> down at the *device*'s discretion: the interrupt reason might have
>>> gone away (GPIO condition no longer true), even before we were able
>>> to inject it, or there might be another interrupt reason not yet
>>> handled (incoming serial character while serving a transmit
>>> interrupt). Also typically it's up to the interrupt handler to
>>> confirm handling the interrupt, either explicitly by clearing an
>>> interrupt bit in some status register or implicitly, for instance by
>>> draining a FIFO, say on a serial device. So even though from the
>>> (V)GIC's point of view the interrupt has been processed (EOIed), it
>>> might still be pending.
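For illustration, the edge/level distinction described above can be modelled in a few lines of C. This is a hypothetical sketch with made-up names, not actual Xen (or KVM) vgic code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of edge vs. level IRQ pending state (names are invented). */
struct virq {
    bool level_triggered;
    bool line_asserted;   /* current state of the (virtual) device line */
    bool pending_latch;   /* latched by an edge */
};

/* An edge IRQ only latches the pending state at assertion time. */
static void virq_assert_line(struct virq *irq, bool asserted)
{
    if (!irq->level_triggered && asserted && !irq->line_asserted)
        irq->pending_latch = true;   /* rising edge latches */
    irq->line_asserted = asserted;
}

/* Should the IRQ be (re)injected into the guest? */
static bool virq_is_pending(const struct virq *irq)
{
    if (irq->level_triggered)
        return irq->line_asserted || irq->pending_latch;
    return irq->pending_latch;
}

/* Guest EOIs: the latch is consumed, but a level line may still be up,
 * in which case the IRQ is immediately pending again. */
static void virq_eoi(struct virq *irq)
{
    irq->pending_latch = false;
}
```

The point is in `virq_is_pending()`: for a level IRQ the line itself keeps it pending after the EOI, which is exactly the resampling obligation described above.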
>> So, as I understand it, the intended behavior of a vGIC for level
>> interrupts covers the following cases:
>> 1. if the interrupt line is still active on the HW side, but the
>>    interrupt handler in the VM EOIs the interrupt, the vGIC should
>>    signal it to the vCPU again
> 
> yes
> 
>> 2. if a peripheral deactivates the interrupt line, but the VM has not
>>    activated it yet, this interrupt should be removed from the VM's
>>    pending state
> 
> yes
> 
>> IMO, case 1 is indirectly supported by the old vgic. For HW interrupts
>> it's pretty natural: deactivation by the VM in the VGIC leads to
>> deactivation in the GIC, so the interrupt priority is restored and the
>> GIC will trap the PCPU to reinsert it. The VM sees this as an
>> immediate IRQ trap after the EOI.
> 
> Yes, this is true for hardware interrupts, and this lets the old VGIC
> get away with it.
> Virtual devices with level interrupt semantics are a different story;
> the SBSA UART has some hacks in it to support this properly.
> 
>> Case 2 is not implemented in the old vgic either. It is somewhat
>> supported by the new vgic, though it also relies on a trap to the
>> hypervisor to commit the update to the LRs.
> 
> Yes, and there is not so much we can do about it. But that's not a real
> problem, as you have this problem in bare metal, too.
> 
>> But that is rather a problem of the
>> GIC arch/implementation, which neither signals the CPU nor updates the
>> associated LR on level interrupt deassertion.
>>
>>> My intimate "old Xen VGIC" knowledge has been swapped out of my
>>> brain in the meantime, but IIRC Xen treats every IRQ as if it were
>>> an edge IRQ. This works if the guest's interrupt handler behaves
>>> correctly. Most IRQ handlers have a loop that iterates over all
>>> possible interrupt reasons and processes them, so the line does
>>> indeed go down before they EOI the IRQ.
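The handler pattern described here, looping over all interrupt reasons so the line is already down before the EOI, might look like this (a toy device model with invented names, not real driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical device: each bit of the status register is one interrupt
 * reason; handling a reason clears its bit. All names are made up. */
struct fake_dev {
    uint32_t status;     /* pending interrupt reasons */
    unsigned handled;    /* how many reasons we processed */
};

static uint32_t dev_read_status(struct fake_dev *d) { return d->status; }

static void dev_handle_reason(struct fake_dev *d, unsigned bit)
{
    d->status &= ~(1u << bit);   /* clearing the last bit drops the line */
    d->handled++;
}

/* The usual pattern: loop until no reason is left, so the level line is
 * guaranteed to be low by the time the handler EOIs the IRQ. */
static void irq_handler(struct fake_dev *d)
{
    uint32_t status;

    while ((status = dev_read_status(d)) != 0) {
        for (unsigned bit = 0; bit < 32; bit++)
            if (status & (1u << bit))
                dev_handle_reason(d, bit);
    }
    /* EOI would go here; the line is now de-asserted. */
}
```

A handler that EOIed while `status` was still non-zero would immediately be re-entered on real level-triggered hardware; treating the IRQ as edge-triggered only works because well-behaved handlers drain everything first.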
>>
>> I've spent some time looking through the new vgic implementation and
>> I have a note about it:
>> It's not clear why you are probing level interrupts on the guest->hyp
>> transition. While it targets case 2 described above, it seems more
>> relevant to probe the level interrupts right before the hyp->guest
>> transition, because the vcpu might be descheduled, and while it sits
>> on the scheduler queues the interrupt line level has more chances to
>> be changed by the peripheral itself.
>> Also, I'm pretty scared of the new vgic locking scheme with per-IRQ
>> locks and locking logic, e.g. in the vgic_queue_irq_unlock() function.
> 
> Well, you should be scared of the old VGIC locking scheme instead ;-)
> Apart from the vgic_queue_irq_unlock() function, the rest of the new
> locking scheme is much clearer. And it scales much better, as we have
> per-IRQ locks, which should virtually never be contended, and per-CPU
> locks, which are only very rarely contended, as each is mostly taken
> by its own VCPU during entry and exit. There is no per-domain lock for
> the emulation anymore.
> I see that it's tempting to have an "easy" locking scheme, but that
> typically is either one-lock-to-rule-them-all, which doesn't scale, or
> doesn't cover corner cases.
> You should always prefer correctness over performance, otherwise you
> just fail faster ;-)
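The lock ordering behind a function like vgic_queue_irq_unlock() can be illustrated with a deliberately simplified toy model (assumed names and structure; the real Xen code differs): the per-CPU list lock must be taken before a per-IRQ lock, so a path that holds only the IRQ lock has to drop it and reacquire both in the right order, revalidating the state afterwards.

```c
#include <assert.h>
#include <pthread.h>

/* Toy model of the new-VGIC lock ordering (names are invented): the
 * per-CPU ap_list lock is the outer lock, the per-IRQ lock the inner. */
struct toy_vcpu { pthread_mutex_t ap_list_lock; int queued; };
struct toy_irq  { pthread_mutex_t irq_lock; int on_list; };

/* Called with irq->irq_lock held; returns with it released. */
static void toy_queue_irq_unlock(struct toy_vcpu *v, struct toy_irq *irq)
{
    if (irq->on_list) {                       /* nothing to do */
        pthread_mutex_unlock(&irq->irq_lock);
        return;
    }
    pthread_mutex_unlock(&irq->irq_lock);     /* drop to respect ordering */

    pthread_mutex_lock(&v->ap_list_lock);     /* outer lock first */
    pthread_mutex_lock(&irq->irq_lock);       /* then the inner one */
    if (!irq->on_list) {                      /* state may have changed
                                                 while we held no lock */
        irq->on_list = 1;
        v->queued++;
    }
    pthread_mutex_unlock(&irq->irq_lock);
    pthread_mutex_unlock(&v->ap_list_lock);
}
```

The recheck after reacquiring both locks is what makes the drop-and-retake safe, and it is this kind of subtlety that a one-lock-to-rule-them-all scheme avoids at the cost of scalability.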
> 
>> Also, sorting the
>> list in vgic_flush_lr_state() with vgic_irq_cmp() looks very
>> expensive.
> 
> Yes, but in practice this virtually never happens, as you rarely have
> more than four pending IRQs at the same time.
> I had patches lying around to improve this part, but never got around
> to posting them, especially with only little rationale.
> If you are interested, I can dig them out, though I am not sure how
> relevant this is.
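For reference, the sort in question boils down to something like the following (toy struct and comparator, loosely modelled on vgic_irq_cmp(); the actual field names and ordering rules in Xen differ):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy version of sorting pending IRQs before filling the LRs: active
 * IRQs first, then by priority (lower value = higher priority). */
struct toy_virq { bool active; unsigned char priority; int id; };

static int toy_virq_cmp(const void *a, const void *b)
{
    const struct toy_virq *ia = a, *ib = b;

    if (ia->active != ib->active)
        return ia->active ? -1 : 1;      /* active ones come first */
    return (int)ia->priority - (int)ib->priority;
}

static void toy_sort_pending(struct toy_virq *irqs, size_t n)
{
    qsort(irqs, n, sizeof(*irqs), toy_virq_cmp);
}
```

Since the sort only runs when there are more pending IRQs than LRs, the cost is O(n log n) in a case that, as noted above, virtually never has more than a handful of entries.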
> 
>> But, for sure, the performance of all that stuff should be properly
>> evaluated and measured.
> 
> Indeed. Can you hack something into Xen to get some statistics on those
> cases? I am not sure if Xen has something to trace lock contention, but
> you could easily add some counters to track how many LRs we actually
> use, to see if this is actually a problem in your case.
> I think PERFCOUNTER is your friend.
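A minimal sketch of the kind of counter suggested here, assuming a plain histogram rather than Xen's actual PERFCOUNTER machinery (all names are invented for illustration):

```c
#include <assert.h>

/* Hypothetical per-exit statistic: a histogram of how many LRs each
 * guest entry actually used. In real Xen this would be a PERFCOUNTER
 * incremented on the entry path; this is just the idea. */
#define MAX_LRS 16

static unsigned long lr_usage_histogram[MAX_LRS + 1];

static void account_lrs_used(unsigned used)
{
    if (used > MAX_LRS)
        used = MAX_LRS;    /* clamp defensively */
    lr_usage_histogram[used]++;
}
```

Reading the histogram after a representative workload would directly answer whether the more-pending-IRQs-than-LRs case (and hence the sort) ever actually triggers.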

CONFIG_LOCK_PROFILE and xen-lockprof on tools side?

Not sure it is still working, though. It's about 9 years since I wrote
and used it.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

