
Re: [Xen-devel] [PATCH v2 09/27] ARM: GICv3: introduce separate pending_irq structs for LPIs



Hi Jan,

On 28/03/17 08:58, Jan Beulich wrote:
On 27.03.17 at 20:39, <sstabellini@xxxxxxxxxx> wrote:
CC'ing Andrew, Jan and George to get more feedback on the security
impact of this patch.

I'll make a quick summary for you: we need to allocate a 56-byte struct
(called pending_irq) for each potential interrupt injected into guests
(dom0 and domUs). With the new ARM interrupt controller there could be
thousands.
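
For concreteness, a per-interrupt structure of that size might look roughly like this on arm64 (an illustrative sketch, not the exact Xen definition, which lives in the vGIC headers):

    #include <stdint.h>

    /* Minimal stand-ins so the sketch is self-contained. */
    struct irq_desc;
    struct list_head { struct list_head *next, *prev; };

    /* Illustrative per-LPI state; a layout like this comes to 56 bytes
     * on arm64 (8 + 8 + 4 + 1 + 1 + 2 bytes padding + 16 + 16). */
    struct pending_irq {
        unsigned long status;       /* interrupt state bits                 */
        struct irq_desc *desc;      /* physical IRQ backing this one, if any */
        unsigned int irq;           /* virtual interrupt (LPI) number       */
        uint8_t lr;                 /* list register used for injection     */
        uint8_t priority;           /* cached priority                      */
        struct list_head inflight;  /* linked on the vCPU's inflight list   */
        struct list_head lr_queue;  /* queued waiting for a free LR         */
    };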

We could do the allocation upfront, which requires more memory, or we
could do the allocation dynamically at run time when each interrupt is
enabled, and deallocate the struct when an interrupt is disabled.

However, there is a concern that doing xmalloc/xfree in response to an
unprivileged DomU request could become a vector for denial-of-service
attacks. The guest could enable a thousand interrupts, then disable a
thousand interrupts, and so on, monopolizing one physical CPU. It only
takes writing 1 bit in memory for a guest to enable/disable an
interrupt.
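
To make the concern concrete, the dynamic scheme would look something like the sketch below. The handler name and the per-domain array are illustrative, not the actual Xen code; xzalloc()/xfree() are Xen's allocator helpers. For LPIs the enable bit is a single bit in the interrupt's entry in the guest-provided property table, so flipping it costs the guest one memory write while costing the hypervisor an allocation or free:

    /* Called when the hypervisor notices the guest toggled the enable
     * bit of an LPI (a single bit in the LPI property table).
     * Illustrative sketch only; pending_lpis[] is a hypothetical field. */
    static void vgic_lpi_set_enabled(struct domain *d, unsigned int lpi,
                                     bool enabled)
    {
        if ( enabled )
            /* One 56-byte allocation per guest-written bit... */
            d->arch.vgic.pending_lpis[lpi] = xzalloc(struct pending_irq);
        else
        {
            /* ...and one free when the bit is cleared again. */
            xfree(d->arch.vgic.pending_lpis[lpi]);
            d->arch.vgic.pending_lpis[lpi] = NULL;
        }
    }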

Well, I think doing the allocations at device assign time would be
the least problematic approach: The tool stack could account for
the needed memory (ballooning Dom0 if necessary). (We still
have the open work item of introducing accounting of guest-associated,
but guest-inaccessible memory in the hypervisor.)
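
For scale: with a 56-byte pending_irq, even a generous allocation of
8192 LPIs for a domain comes to 8192 * 56 bytes = 448 KiB (the 8192
figure is only illustrative), which is the kind of per-domain overhead
the tool stack could account for at assignment time.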

What I don't really understand the background of is the pCPU
monopolization concern. Is there anything here that's long-running
inside the hypervisor? Otherwise, the scheduler should be taking
care of everything.

Let me give you some background before answering the question. The ITS is an interrupt controller widget which provides a sophisticated way of dealing with MSIs in a scalable manner. A command queue is used to manage the ITS; it is a memory region provided by the guest and can be up to 1MB. Each command is composed of 4 double-words (i.e. 32 bytes), which makes it possible for a guest to queue up to 32768 commands.
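
In other words (sizes from the GICv3 spec; the struct layout below is only illustrative):

    #include <stdint.h>

    #define ITS_CMD_SIZE    32          /* 4 double-words of 8 bytes  */
    #define ITS_QUEUE_MAX   (1U << 20)  /* queue can be up to 1MB     */

    /* One ITS command; bits [7:0] of dw[0] encode the command type. */
    typedef struct {
        uint64_t dw[4];
    } its_cmd;

    /* ITS_QUEUE_MAX / ITS_CMD_SIZE = 32768 commands in a full queue. */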

In order to control the command queue there are 2 registers:
- GITS_CWRITER, which points to the end of the queue and is updated by the software when a new command is written
- GITS_CREADR, which points to the beginning of the queue and is updated by the hardware when a command has been executed
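
Both registers hold byte offsets into the queue, which behaves as a ring buffer, so the number of commands awaiting execution can be computed as in this sketch:

    /* Commands pending between GITS_CREADR and GITS_CWRITER, with
     * wrap-around (offsets are in bytes, commands are 32 bytes). */
    static inline unsigned int its_pending_cmds(uint64_t creadr,
                                                uint64_t cwriter,
                                                uint64_t queue_size)
    {
        if ( cwriter >= creadr )
            return (cwriter - creadr) / 32;
        return (queue_size - creadr + cwriter) / 32;
    }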

The ITS will process commands until the queue is empty (i.e. GITS_CWRITER == GITS_CREADR). Software has 2 ways to wait for the completion of a command:
        - Polling on GITS_CREADR
        - Adding an INT command, which will generate an interrupt when executed

Usually the polling code also includes a timeout.
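
A minimal sketch of the polling variant, assuming its_read_creadr() and udelay() as stand-ins for the real register accessor and delay primitive:

    #include <stdbool.h>
    #include <stdint.h>

    uint64_t its_read_creadr(void);   /* stand-in: read GITS_CREADR   */
    void udelay(unsigned long usecs); /* stand-in: short busy delay   */

    /* Wait until the ITS has consumed commands up to 'target' (a byte
     * offset into the queue), giving up after timeout_us microseconds. */
    static bool its_wait_for_cmd(uint64_t target, unsigned int timeout_us)
    {
        while ( timeout_us-- )
        {
            if ( its_read_creadr() == target )
                return true;          /* command completed */
            udelay(1);
        }
        return false;                 /* timed out */
    }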

A guest could potentially queue up to 32768 commands before updating GITS_CWRITER. In the current approach, the command queue is handled synchronously when the software writes to GITS_CWRITER and the write is trapped by the hypervisor. This means that we will not return from the hypervisor until the ITS emulation has executed all the commands between GITS_CREADR and GITS_CWRITER.

All the commands have to be executed quickly to avoid the guest monopolizing the pCPU by flooding the command queue.
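
The shape of the current synchronous handling, with illustrative names (vits_execute_cmd() and the struct below are not the actual Xen code):

    #include <stdint.h>

    struct vits {
        uint64_t creadr, cwriter;   /* byte offsets into the queue  */
        uint64_t cmd_queue_size;
        uint8_t *cmd_queue;         /* guest-provided command queue */
    };

    static void vits_execute_cmd(struct vits *its, const uint8_t *cmd);

    /* Trap handler for a guest write to GITS_CWRITER: drain every queued
     * command before returning to the guest. With a full 1MB queue this
     * loop can run up to 32768 times, hence the monopolization concern. */
    static void vits_handle_cwriter_write(struct vits *its,
                                          uint64_t new_cwriter)
    {
        its->cwriter = new_cwriter;
        while ( its->creadr != its->cwriter )
        {
            vits_execute_cmd(its, its->cmd_queue + its->creadr);
            its->creadr = (its->creadr + 32) % its->cmd_queue_size;
        }
    }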

We thought about different alternatives, such as tasklets or breaking the command queue down into batches, but none of them fit well.

If you have a better idea, I am happy to consider it.

Cheers,

--
Julien Grall


 

