Re: [PATCH 1/5] common/vmap: Fall back to simple allocator when !HAS_VMAP
On 15.11.2024 13:46, Luca Fancellu wrote:
>
>
>> On 15 Nov 2024, at 11:12, Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>
>> On 15.11.2024 11:50, Luca Fancellu wrote:
>>> --- a/xen/common/vmap.c
>>> +++ b/xen/common/vmap.c
>>> @@ -426,3 +426,10 @@ void *_xvrealloc(void *va, size_t size, unsigned int align)
>>>
>>> return ptr;
>>> }
>>> +
>>> +void iounmap(void __iomem *va)
>>> +{
>>> + unsigned long addr = (unsigned long)(void __force *)va;
>>> +
>>> + vunmap((void *)(addr & PAGE_MASK));
>>> +}
>>
>> Why is this being moved here, and converted from inline to out-of-line?
>> What the description says is insufficient imo, as even if you mean to
>> only support vmap_contig() and ioremap() on MPU systems, you'll still
>> need both vunmap() and iounmap().
>>
>> Plus, if it really needs converting, I don't think it should live at the
>> very end of the file, past _xvmalloc() and friends. Better suitable places
>> may then be next to vunmap() itself, or between vfree() and xvfree().
>
> I’ll try to keep it as it was originally; I gave a brief look into the R82
> branch and it should be fine.
> I’m planning to define vmap_contig(), vunmap(), ioremap(), iounmap() in a
> vmap-mpu.c under arch/arm/mpu.
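For illustration, a minimal sketch of what such a hypothetical vmap-mpu.c could
contain, assuming MPU configurations run Xen with an identity (VA == PA)
mapping so there are no page tables to touch; the file name and that assumption
come from this sketch, not from the series:

/*
 * Hypothetical xen/arch/arm/mpu/vmap-mpu.c -- illustrative only.
 * Assumes RAM and MMIO are reachable at their physical addresses, so
 * "mapping" reduces to address arithmetic and "unmapping" to nothing.
 */
#include <xen/mm.h>
#include <xen/types.h>
#include <xen/vmap.h>

void *vmap_contig(mfn_t mfn, unsigned int nr)
{
    /* Pages are already accessible through the identity/direct map. */
    return maddr_to_virt(mfn_to_maddr(mfn));
}

void vunmap(const void *va)
{
    /* Nothing was mapped, hence nothing to tear down. */
}

void __iomem *ioremap(paddr_t pa, size_t len)
{
    /* Device memory is reachable at its physical address. */
    return (void __iomem *)(uintptr_t)pa;
}

void iounmap(void __iomem *va)
{
    /* Matches ioremap() above: no mapping to undo. */
}

Under that assumption the four functions collapse to trivial bodies, which is
why an MPU build could provide them without any of the vmap-area machinery.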
>
>>
>>> --- a/xen/include/xen/vmap.h
>>> +++ b/xen/include/xen/vmap.h
>>> @@ -5,7 +5,7 @@
>>> * purpose area (VMAP_DEFAULT) and a livepatch-specific area (VMAP_XEN). The
>>> * latter is used when loading livepatches and the former for everything else.
>>> */
>>> -#if !defined(__XEN_VMAP_H__) && defined(VMAP_VIRT_START)
>>> +#if !defined(__XEN_VMAP_H__)
>>> #define __XEN_VMAP_H__
>>
>> With this adjustment, where are the functions defined that you "unhide"
>> the declarations of, in the MPU case? As you say in the description,
>> vmap.c won't be built in that case.
>
> Sure, I’ll wrap what can’t be used in the MPU case with HAS_VMAP; I would like
> to keep out:
>
> void *vmap_contig(mfn_t mfn, unsigned int nr);
>
> void vunmap(const void *va);
>
> void __iomem *ioremap(paddr_t pa, size_t len);
>
> static inline void iounmap(void __iomem *va)
>
> static inline void vm_init(void)
>
> In order to avoid too many #ifdefs, are you OK if I move the declarations so
> that these are close to each other, like below:
Some re-arrangement ought to be fine, especially when the #ifdef is
accompanied by a comment. I can't see how there can be #else though.
Jan
> --- a/xen/include/xen/vmap.h
> +++ b/xen/include/xen/vmap.h
> @@ -11,6 +11,8 @@
> #include <xen/mm-frame.h>
> #include <xen/page-size.h>
>
> +#ifdef CONFIG_HAS_VMAP
> +
> /* Identifiers for the linear ranges tracked by vmap */
> enum vmap_region {
> /*
> @@ -68,25 +70,6 @@ void *__vmap(const mfn_t *mfn, unsigned int granularity, unsigned int nr,
> */
> void *vmap(const mfn_t *mfn, unsigned int nr);
>
> -/*
> - * Maps physically contiguous pages onto the VMAP_DEFAULT vmap region
> - *
> - * @param mfn Base mfn of the physical region
> - * @param nr Number of mfns in the physical region
> - * @return Pointer to the mapped area on success; NULL otherwise.
> - */
> -void *vmap_contig(mfn_t mfn, unsigned int nr);
> -
> -/*
> - * Unmaps a range of virtually contiguous memory from one of the vmap regions
> - *
> - * The system remembers internally how wide the mapping is and unmaps it all.
> - * It also can determine the vmap region type from the `va`.
> - *
> - * @param va Virtual base address of the range to unmap
> - */
> -void vunmap(const void *va);
> -
> /*
> * Allocate `size` octets of possibly non-contiguous physical memory and map
> * them contiguously in the VMAP_DEFAULT vmap region
> @@ -112,6 +95,33 @@ void *vzalloc(size_t size);
> */
> void vfree(void *va);
>
> +/* Return the number of pages in the mapping starting at address 'va' */
> +unsigned int vmap_size(const void *va);
> +
> +/* Pointer to 1 octet past the end of the VMAP_DEFAULT virtual area */
> +void *arch_vmap_virt_end(void);
> +
> +#else /* !CONFIG_HAS_VMAP */
> +
> +/*
> + * Maps physically contiguous pages onto the VMAP_DEFAULT vmap region
> + *
> + * @param mfn Base mfn of the physical region
> + * @param nr Number of mfns in the physical region
> + * @return Pointer to the mapped area on success; NULL otherwise.
> + */
> +void *vmap_contig(mfn_t mfn, unsigned int nr);
> +
> +/*
> + * Unmaps a range of virtually contiguous memory from one of the vmap regions
> + *
> + * The system remembers internally how wide the mapping is and unmaps it all.
> + * It also can determine the vmap region type from the `va`.
> + *
> + * @param va Virtual base address of the range to unmap
> + */
> +void vunmap(const void *va);
> +
> /*
> * Analogous to vmap_contig(), but for IO memory
> *
> @@ -124,9 +134,6 @@ void vfree(void *va);
> */
> void __iomem *ioremap(paddr_t pa, size_t len);
>
> -/* Return the number of pages in the mapping starting at address 'va' */
> -unsigned int vmap_size(const void *va);
> -
> /* Analogous to vunmap(), but for IO memory mapped via ioremap() */
> static inline void iounmap(void __iomem *va)
> {
> @@ -135,9 +142,6 @@ static inline void iounmap(void __iomem *va)
> vunmap((void *)(addr & PAGE_MASK));
> }
>
> -/* Pointer to 1 octet past the end of the VMAP_DEFAULT virtual area */
> -void *arch_vmap_virt_end(void);
> -
> /* Initialises the VMAP_DEFAULT virtual range */
> static inline void vm_init(void)
> {
> @@ -146,4 +150,6 @@ static inline void vm_init(void)
> #endif
> }
>
> +#endif /* CONFIG_HAS_VMAP */
> +
> #endif /* __XEN_VMAP_H__ */
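For illustration, one way to read the remark above about #else: declarations
needed on both MMU and MPU systems stay unconditional, and only what genuinely
depends on a VMAP virtual range is guarded, so no #else branch is required.
A hypothetical sketch of that vmap.h layout (not the actual patch) follows:

/* Usable everywhere; defined by common/vmap.c or by an MPU-side variant. */
void *vmap_contig(mfn_t mfn, unsigned int nr);
void vunmap(const void *va);
void __iomem *ioremap(paddr_t pa, size_t len);
void iounmap(void __iomem *va);

#ifdef CONFIG_HAS_VMAP
/* Only meaningful when a VMAP virtual range exists. */
void *vmap(const mfn_t *mfn, unsigned int nr);
unsigned int vmap_size(const void *va);
void *arch_vmap_virt_end(void);
#endif /* CONFIG_HAS_VMAP */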