Re: [PATCH v1 01/27] xen/riscv: Implement ARCH_PAGING_MEMPOOL
On 10.03.2026 18:08, Oleksii Kurochko wrote:
> The p2m_freelist is used to allocate pages for the P2M, but to initialize
> this list, domain_p2m_set_allocation() might be called.
> This function is invoked in construct_domU() within the common Dom0less
> code, and providing an implementation of domain_p2m_set_allocation() when
> CONFIG_ARCH_PAGING_MEMPOOL=y is appropriate for RISC-V.
With this wording it is odd to see ...
> --- a/xen/arch/riscv/Kconfig
> +++ b/xen/arch/riscv/Kconfig
> @@ -1,5 +1,6 @@
> config RISCV
> def_bool y
> + select ARCH_PAGING_MEMPOOL
... this. You really want to settle on whether it is selected unconditionally
or not. Also for the code below, where ...
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -10,7 +10,7 @@ obj-y += irq.o
> obj-y += kernel.o
> obj-y += mm.o
> obj-y += p2m.o
> -obj-y += paging.o
> +obj-$(CONFIG_ARCH_PAGING_MEMPOOL) += paging.o
> obj-y += pt.o
> obj-$(CONFIG_RISCV_64) += riscv64/
> obj-y += sbi.o
... this change and any #ifdef-ary further down aren't needed unless the
select became conditional. (Plus with the change above things likely wouldn't
even build if ARCH_PAGING_MEMPOOL could be off under certain conditions.)
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -1568,3 +1568,34 @@ struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
>
> return page;
> }
> +
> +#ifdef CONFIG_ARCH_PAGING_MEMPOOL
> +
> +int arch_set_paging_mempool_size(struct domain *d, uint64_t size)
> +{
> +    unsigned long pages = PFN_DOWN(size);
> +    bool preempted = false;
> +    int rc;
> +
> +    if ( (size & ~PAGE_MASK) ||    /* Non page-sized request? */
> +         pages != PFN_DOWN(size) ) /* 32-bit overflow? */
> +        return -EINVAL;
Can't this be had with just
    if ( ((paddr_t)pages << PAGE_SHIFT) != size )
        return -EINVAL;
(and perhaps utilizing pfn_to_paddr(), even if it's not a PFN we're dealing
with here)?
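For illustration, here is a standalone sketch of that round-trip check. PAGE_SHIFT, paddr_t, and the narrow page-count type are stand-ins, not the real Xen definitions; a 32-bit type models "unsigned long" on a 32-bit build so the truncation case can be exercised:

```c
#include <stdint.h>

/* Stand-ins for the Xen definitions (illustrative only). */
#define PAGE_SHIFT 12
typedef uint64_t paddr_t;
typedef uint32_t xen_ulong_t; /* models unsigned long on a 32-bit build */
#define EINVAL 22

/*
 * The suggested single comparison: shift the (possibly truncated) page
 * count back up and require it to reproduce the original size.  This
 * rejects both non page-aligned sizes and page counts that overflow the
 * narrower type, with no separate PAGE_MASK test.
 */
static int check_size(uint64_t size)
{
    xen_ulong_t pages = (xen_ulong_t)(size >> PAGE_SHIFT);

    if ( ((paddr_t)pages << PAGE_SHIFT) != size )
        return -EINVAL;

    return 0;
}
```

E.g. check_size(0x1000) passes, while check_size(0x1001) (not page-aligned) and check_size(1ULL << 44) (page count truncates to 0 in 32 bits) both fail.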
> +    spin_lock(&d->arch.paging.lock);
> +    rc = p2m_set_allocation(d, pages, &preempted);
> +    spin_unlock(&d->arch.paging.lock);
> +
> +    ASSERT(preempted == (rc == -ERESTART));
This actually suggests that (once again) Arm code perhaps shouldn't have been
copied verbatim: there shouldn't be a need for the "preempted" state to be
reported back in two distinct ways.
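In other words, if p2m_set_allocation() reported preemption solely through its return value, the caller would need no out-parameter and no ASSERT at all. A hypothetical simplified shape (the names, signature, and the preemption condition below are illustrative stand-ins, not Xen's actual code):

```c
#define ERESTART 85

/* Illustrative stand-in: report preemption only via -ERESTART. */
static int p2m_set_allocation(unsigned long pages)
{
    /* Pretend large requests get preempted partway through. */
    return pages > 1024 ? -ERESTART : 0;
}

static int set_pool(unsigned long pages)
{
    int rc = p2m_set_allocation(pages);

    /*
     * rc == -ERESTART already encodes "preempted"; no separate bool
     * and no ASSERT tying the two channels together are needed.
     */
    return rc;
}
```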
> +    return rc;
> +}
> +
> +/* Return the size of the pool, in bytes. */
> +int arch_get_paging_mempool_size(struct domain *d, uint64_t *size)
> +{
> +    *size = (uint64_t)ACCESS_ONCE(d->arch.paging.total_pages) << PAGE_SHIFT;
As per above, maybe use pfn_to_paddr()?
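pfn_to_paddr() is essentially a typed shift, so the getter's body could reduce to a single call. A sketch with stand-in definitions (total_pages here stands in for d->arch.paging.total_pages):

```c
#include <stdint.h>

/* Stand-ins for the Xen definitions (illustrative only). */
#define PAGE_SHIFT 12
typedef uint64_t paddr_t;
#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)

/* Sketch of the getter body: convert the pool's page count to bytes. */
static uint64_t pool_size_bytes(unsigned long total_pages)
{
    return pfn_to_paddr(total_pages);
}
```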
Jan