Re: [PATCH v2 7/7] x86/hap: Increase the number of initial mempool_size to 1024 pages
On Tue, Apr 30, 2024 at 4:47 PM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
> On 28.04.2024 18:52, Petr Beneš wrote:
> > From: Petr Beneš <w1benny@xxxxxxxxx>
> >
> > This change anticipates scenarios where `max_altp2m` is set to its maximum
> > supported value (i.e., 512), ensuring sufficient memory is allocated upfront
> > to accommodate all altp2m tables without initialization failure.
>
> And guests with fewer or even no altp2m-s still need the same bump? You
> know the number of altp2m-s upon domain creation, so why bump by any more
> than what's strictly needed for that?
I have to admit I've considered computing the value which goes to
hap_set_allocation as simply 256 + max_altp2m, but that felt arbitrary -
the 256 value itself seems arbitrary, as I haven't found any reasoning
for it anywhere.
I have also tried to make the initial allocation size configurable via
libxl (possibly reusing shadow_memkb), which seemed to me like the
"correct" solution, but those changes were more complicated than I had
anticipated and I would definitely not have made it by the 4.19 deadline.
The question is, what to do now? Should I change it to 256 + max_altp2m?
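
For concreteness, this is roughly what that variant would look like in
hap_enable() (just a sketch - "nr_altp2m" is a placeholder for wherever
the per-domain altp2m limit ends up being stored, not an existing field):

    /* Sketch only: size the initial pool from the configured altp2m
     * count instead of hardcoding it; 256 stays as the baseline. */
    unsigned int pages = 256 + nr_altp2m;

    paging_lock(d);
    rv = hap_set_allocation(d, pages, NULL);
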
> > The necessity for this increase arises from the current mechanism where
> > altp2m tables are allocated at initialization, requiring one page from
> > the mempool for each altp2m view.
>
> So that's the p2m_alloc_table() out of hap_enable()? If you're permitting
> up to 512 altp2m-s, I think it needs considering to not waste up to 2Mb
> without knowing how many of the altp2m-s are actually going to be used.
Yes, and I ultimately agree.
> How complicated on-demand allocation would be I can't tell though, I have
> to admit.
That's also a fix I've been trying to work on - making the whole altp2m
allocation on-demand. Unfortunately, I didn't make it in time.
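
The direction I was exploring, roughly (pseudocode-level sketch; the
exact fields and the locking needed here are from memory and may not
match what the tree does today):

    /* Sketch: allocate a view's table only when the view is first
     * activated, instead of allocating all of them in hap_enable().
     * Appropriate p2m/paging locking would be needed around this. */
    struct p2m_domain *p2m = d->arch.altp2m_p2m[idx];

    if ( !pagetable_get_pfn(p2m_get_pagetable(p2m)) )
    {
        rv = p2m_alloc_table(p2m);   /* one mempool page per view */
        if ( rv )
            return rv;
    }
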
> > --- a/tools/tests/paging-mempool/test-paging-mempool.c
> > +++ b/tools/tests/paging-mempool/test-paging-mempool.c
> > @@ -35,7 +35,7 @@ static struct xen_domctl_createdomain create = {
> >
> > static uint64_t default_mempool_size_bytes =
> > #if defined(__x86_64__) || defined(__i386__)
> > - 256 << 12; /* Only x86 HAP for now. x86 Shadow needs more work. */
> > + 1024 << 12; /* Only x86 HAP for now. x86 Shadow needs more work. */
>
> I also can't derive from the description why we'd need to go from 256 to
> 1024 here and ...
It's explained in the code a few lines below:
/*
* Check that the domain has the expected default allocation size. This
* will fail if the logic in Xen is altered without an equivalent
* adjustment here.
*/
I have verified that default_mempool_size_bytes must reflect the number
of pages passed in the initial hap_set_allocation() call.
Is that something I should include in the commit message, too?
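
To spell out the arithmetic behind that check:

    256  << 12 == 256  * 4096 == 1 MiB  /* hap_set_allocation(d, 256, NULL)  */
    1024 << 12 == 1024 * 4096 == 4 MiB  /* hap_set_allocation(d, 1024, NULL) */

i.e. default_mempool_size_bytes is just the initial page count shifted
by PAGE_SHIFT, which is why the two values have to move together.
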
> > --- a/xen/arch/x86/mm/hap/hap.c
> > +++ b/xen/arch/x86/mm/hap/hap.c
> > @@ -468,7 +468,7 @@ int hap_enable(struct domain *d, u32 mode)
> > if ( old_pages == 0 )
> > {
> > paging_lock(d);
> > - rv = hap_set_allocation(d, 256, NULL);
> > + rv = hap_set_allocation(d, 1024, NULL);
>
> ... here. You talk of (up to) 512 pages there only.
>
> Also isn't there at least one more place where the tool stack (libxl I
> think) would need changing, where Dom0 ballooning needs are calculated?
> And/or doesn't the pool size have a default calculation in the tool
> stack, too?
I have found the places in libxl where the mempool size is calculated,
but that mempool size is set AFTER the domain is created, via
xc_set_paging_mempool_size. In my opinion it doesn't necessarily require
a change, since the user is expected to set it manually via shadow_memkb.
The only current problem (which this commit is trying to fix) is that
setting shadow_memkb doesn't help when
max_altp2m > (256 - 1 + vcpus + MAX_NESTEDP2M), since the initial
mempool size is hardcoded.
I didn't find any other places that would require reflecting the current
"256" value.
P.