[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
  • Date: Wed, 2 Dec 2020 16:34:55 +0000
  • Cc: <andrew.cooper3@xxxxxxxxxx>, <roger.pau@xxxxxxxxxx>, <wl@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 02 Dec 2020 16:35:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 02/12/2020 15:21, Jan Beulich wrote:
> On 02.12.2020 15:53, Igor Druzhinin wrote:
>> On 02/12/2020 09:25, Jan Beulich wrote:
>>> Instead I'm wondering whether this wouldn't better be a Kconfig
>>> setting (or even command line controllable). There don't look to be
>>> any restrictions on the precise value chosen (i.e. 2**n-1 like is
>>> the case for old and new values here, for whatever reason), so a
>>> simple permitted range of like 4...64 would seem fine to specify.
>>> Whether the default then would want to be 8 (close to the current
>>> 7) or higher (around the actually observed maximum) is a different
>>> question.
>>
>> I'm in favor of a command line argument here - it would be much less trouble
>> if a higher limit was suddenly necessary in the field. The default IMO
>> should definitely be higher than 8 - I'd stick with number 32 which to me
>> should cover our real world scenarios and apply some headroom for the future.
> 
> Well, I'm concerned of the extra memory overhead. Every IRQ,
> sharable or not, will get the extra slots allocated with the
> current scheme. Perhaps a prereq change then would be to only
> allocate multi-guest arrays for sharable IRQs, effectively
> shrinking the overhead in particular for all MSI ones?

That's one way to improve overall system scalability, but in that area there are
certainly bigger fish to fry elsewhere. With 32 elements in the array we get 200
bytes of overhead per structure; with 16 it's just 72 extra bytes, which even in
the unattainable worst-case scenario of every single vector being taken on a
512-CPU machine would only account for several MB of overhead.

I'd start with dynamic array allocation and set the limit to 16, which should be
enough for now. Then, if that default value needs to be raised, we can consider
further improvements.

Igor
