
Re: [patch RFC 38/38] irqchip: Add IMS array driver - NOT FOR MERGING



On Sat, Aug 22, 2020 at 01:47:12AM +0200, Thomas Gleixner wrote:
> On Fri, Aug 21 2020 at 17:17, Jason Gunthorpe wrote:
> > On Fri, Aug 21, 2020 at 09:47:43PM +0200, Thomas Gleixner wrote:
> >> So if I understand correctly then the queue memory where the MSI
> >> descriptor sits is in RAM.
> >
> > Yes, IMHO that is the whole point of this 'IMS' stuff. If devices
> > could have enough on-die memory then they could just use really big
> > MSI-X tables. Currently due to on-die memory constraints mlx5 is
> > limited to a few hundred MSI-X vectors.
> 
> Right, that's the limit of a particular device, but nothing prevents you
> from having a larger table on a new device.

Well, physics is a problem.. The SRAM to store the MSI vectors costs
die space, and making the chip die larger is not an option. So the
question is: what do you throw out of the chip to get a 10-20x increase
in MSI SRAM?

This is why using host memory is so appealing. It is
economically/functionally better.

I'm going to guess other HW is in the same situation; virtualization
is really pushing up the number of required IRQs.

> >> How is that supposed to work if interrupt remapping is disabled?
> >
> > The best we can do is issue a command to the device and spin/sleep
> > until completion. The device will serialize everything internally.
> >
> > If the device has died the driver has code to detect and trigger a
> > PCI function reset which will definitely stop the interrupt.
> 
> If that interrupt has gone into storm mode for some reason then this will
> render your machine unusable before you can do that.

Yes, but in general the HW design is to have one-shot interrupts, so it
would have to be well off the rails to storm. The kind of off the
rails where it could also be doing crazy stuff on PCI-E that would be
very harmful.

> > So, the implementation of these functions would be to push any change
> > onto a command queue, trigger the device to DMA the command, spin/sleep
> > until the device returns a response and then continue on. If the
> > device doesn't return a response in a time window then trigger a WQ to
> > do a full device reset.
> 
> I really don't want to do that with the irq descriptor lock held or in
> case of affinity from the interrupt handler as we have to do with PCI
> MSI/MSI-X due to the horrors of the X86 interrupt delivery trainwreck.
> Also you cannot call into command queue code from interrupt disabled and
> interrupt descriptor lock held sections. You can try, but lockdep will
> yell at you immediately.

Yes, I wouldn't want to do this from an IRQ.
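From a sleepable context, though, the flow is about this simple. Just a
sketch - the ims_* structs and helpers below are invented for
illustration, not real code:

#include <linux/completion.h>
#include <linux/workqueue.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

struct ims_cmd {
	struct completion	done;	/* completed by the response handler */
	int			status;	/* status returned by the device */
};

struct ims_dev {
	struct work_struct	reset_work;	/* does the PCI function reset */
};

/* Hypothetical helper: queue the command and ring the device doorbell */
void ims_post_cmd(struct ims_dev *idev, struct ims_cmd *cmd);

#define IMS_CMD_TIMEOUT	(5 * HZ)	/* arbitrary value for the sketch */

static int ims_update_entry(struct ims_dev *idev, struct ims_cmd *cmd)
{
	reinit_completion(&cmd->done);

	/* Device DMAs the command from the queue and answers asynchronously */
	ims_post_cmd(idev, cmd);

	/* Sleep until the device returns a response ... */
	if (wait_for_completion_timeout(&cmd->done, IMS_CMD_TIMEOUT))
		return cmd->status;

	/* ... or no response in the time window: full function reset via WQ */
	schedule_work(&idev->reset_work);
	return -ETIMEDOUT;
}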

> One question is whether the device can see partial updates to that
> memory due to the async 'swap' of context from the device CPU.

It is worse than just partial updates.. The device behaves much more
like you'd imagine a CPU cache: there can be copies of the RAM in the
device for long periods of time, dirty data in the device that will
flush back to CPU RAM and overwrite CPU changes, etc.

Without involving the device there is just no way to create data
consistency, and no way to change the data from the CPU. 

This is the downside of having device data in RAM. It cannot be as
simple as 'just fetch it every time before you use it', as performance
would be horrible.

> irq chips have already a mechanism in place to deal with stuff which
> cannot be handled from within the irq descriptor spinlock held and
> interrupt disabled section.
> 
> The mechanism was invented to deal with interrupt chips which are
> connected to i2c, spi, etc.. The access to an interrupt chip control
> register has to queue stuff on the bus and wait for completion.
> Obviously not what you can do from interrupt disabled, raw spinlock held
> context either.

Ah interesting, sounds like the right parts! I didn't know about this.
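Looking at the existing users, the shape seems to be: ->irq_bus_lock()
takes a mutex from sleepable context, the chip callbacks only record
what changed, and ->irq_bus_sync_unlock() pushes the change to the
device and drops the mutex. Rough sketch, all the ims_* names are made
up:

#include <linux/irq.h>
#include <linux/mutex.h>

struct ims_data {
	struct mutex	buslock;	/* serializes the slow device update */
	bool		masked;		/* state recorded by the callbacks */
	bool		pending;	/* something changed under buslock */
};

/* Hypothetical device command helper, may sleep */
void ims_issue_cmd(struct ims_data *ims);

static void ims_irq_bus_lock(struct irq_data *d)
{
	struct ims_data *ims = irq_data_get_irq_chip_data(d);

	mutex_lock(&ims->buslock);	/* called from sleepable context */
}

static void ims_irq_bus_sync_unlock(struct irq_data *d)
{
	struct ims_data *ims = irq_data_get_irq_chip_data(d);

	/* Now it is safe to sleep and talk to the device */
	if (ims->pending) {
		ims_issue_cmd(ims);
		ims->pending = false;
	}
	mutex_unlock(&ims->buslock);
}

static void ims_irq_mask(struct irq_data *d)
{
	struct ims_data *ims = irq_data_get_irq_chip_data(d);

	/* Only record the change; the device sees it at sync_unlock time */
	ims->masked = true;
	ims->pending = true;
}

static struct irq_chip ims_irq_chip = {
	.name			= "IMS",
	.irq_mask		= ims_irq_mask,
	.irq_bus_lock		= ims_irq_bus_lock,
	.irq_bus_sync_unlock	= ims_irq_bus_sync_unlock,
};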

> Now coming back to affinity setting. I'd love to avoid adding the bus
> lock magic to those interfaces because until now they can be called and
> are called from atomic contexts. And obviously none of the devices which
> use the buslock magic support affinity setting because they all deliver
> a single interrupt to a demultiplex interrupt and that one is usually
> sitting at the CPU level where interrupt steering works.
> 
> If we really can get away with atomically updating the message as
> outlined above and just let it happen at some point in the future then
> most problems are solved, except for the nastyness of CPU hotplug.

Since we can't avoid a device command, I'm thinking more along the lines
of having the affinity update trigger an async WQ to issue the command
from a thread context. Since it doesn't need to be synchronous it can
make it out 'eventually'.

I suppose the core code could provide this as a service? Sort of a
variant of the other lazy things above?
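On the driver side I'm imagining something like this (sketch only;
irq_chip_set_affinity_parent is the usual hierarchical path on x86, the
ims_* names are invented):

#include <linux/irq.h>
#include <linux/workqueue.h>

struct ims_vector {
	struct work_struct	update_work;	/* INIT_WORK()ed at setup */
};

/* Hypothetical device command helper, may sleep */
void ims_issue_sync_cmd(struct ims_vector *vec);

static void ims_affinity_work(struct work_struct *work)
{
	struct ims_vector *vec = container_of(work, struct ims_vector,
					      update_work);

	/* Thread context: fine to sleep while the device syncs the entry */
	ims_issue_sync_cmd(vec);
}

static int ims_irq_set_affinity(struct irq_data *d,
				const struct cpumask *mask, bool force)
{
	struct ims_vector *vec = irq_data_get_irq_chip_data(d);
	int ret;

	/* Parent (vector/remap) domain picks the new target and message */
	ret = irq_chip_set_affinity_parent(d, mask, force);
	if (ret < 0)
		return ret;

	/* Device picks up the new message 'eventually' via the command queue */
	schedule_work(&vec->update_work);
	return ret;
}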

> But that's actually a non issue. Nothing prevents us from having an
> early 'migrate interrupts away from the outgoing CPU hotplug state'
> which runs in thread context and can therefore utilize the buslock
> mechanism. Actually I was thinking about that for other reasons already.

That would certainly work well, seems like it fits with the other
lazy/sleeping stuff above as well.

> >> If interrupt remapping is enabled then both are trivial because then the
> >> irq chip can delegate everything to the parent chip, i.e. the remapping
> >> unit.
> >
> > I did like this notion that IRQ remapping could avoid the overhead of
> > spin/sleep. Most of the use cases we have for this will require the
> > IOMMU anyhow.
> 
> You still need to support !remap scenarios I fear.

For x86 I think we could accept linking this to IOMMU, if really
necessary.

But it would have to work with ARM - is remapping an x86-only thing?
Does ARM put the affinity in the GIC tables rather than in the MSI data?

> Let me summarize what I think would be the sane solution for this:
> 
>   1) Utilize atomic writes for either all 16 bytes or reorder the bytes
>      and update 8 bytes atomically which is sufficient as the wide
>      address is only used with irq remapping and the MSI message in the
>      device is never changed after startup.

Sadly not something the device can manage, due to the data coherence
issues above.

>   2) No requirement for issuing a command for regular migration
>      operations as they have no requirements to be synchronous.
> 
>      Eventually store some state to force a reload on the next regular
>      queue operation.

Would the async version above be OK?

>   3) No requirement for issuing a command for mask and unmask operations.
>      The core code uses and handles lazy masking already. So if the
>      hardware causes the lazyness, so be it.

This lazy masking thing sounds good; I'm totally unfamiliar with it
though.
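If I follow, the mask/unmask callbacks could then be as dumb as
flipping a bit in the host memory entry and never telling the device
(sketch with a made-up entry layout) - modulo the coherence question
above about whether a plain CPU store is even guaranteed to be seen:

#include <linux/irq.h>
#include <linux/bits.h>
#include <linux/types.h>

/* Made-up entry layout for illustration */
struct ims_entry {
	u64	address;
	u32	data;
	u32	ctrl;
};

#define IMS_CTRL_VECTOR_MASK	BIT(0)	/* hypothetical mask bit */

static void ims_irq_mask(struct irq_data *d)
{
	struct ims_entry *ent = irq_data_get_irq_chip_data(d);

	/* Flip the bit in host memory only; no device command, no sync */
	WRITE_ONCE(ent->ctrl, READ_ONCE(ent->ctrl) | IMS_CTRL_VECTOR_MASK);
}

static void ims_irq_unmask(struct irq_data *d)
{
	struct ims_entry *ent = irq_data_get_irq_chip_data(d);

	WRITE_ONCE(ent->ctrl, READ_ONCE(ent->ctrl) & ~IMS_CTRL_VECTOR_MASK);
}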

>   4) Issue commands for startup and teardown as they need to be
>      synchronous

Yep

>   5) Have an early migration state for CPU hotunplug which issues a
>      command from appropriate context. That would even allow to handle
>      queue shutdown for managed interrupts when the last CPU in the
>      managed affinity set goes down. Restart of such a managed interrupt
>      when the first CPU in an affinity set comes online again would only
>      need minor modifications of the existing code to make it work.

Yep

> Thoughts?

This email is super helpful; I definitely don't know all these corners
of the IRQ subsystem, as my past with it has mostly been SoC stuff that
isn't as complicated!

Thanks,
Jason



 

