Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()
On 7/16/25 1:43 PM, Jan Beulich wrote:
> On 16.07.2025 13:32, Oleksii Kurochko wrote:
>> On 7/2/25 10:35 AM, Jan Beulich wrote:
>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>> --- a/xen/arch/riscv/p2m.c
>>>> +++ b/xen/arch/riscv/p2m.c
>>>> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>>>>      return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>>>>  }
>>>>
>>>> +/*
>>>> + * pte_is_* helpers are checking the valid bit set in the
>>>> + * PTE but we have to check p2m_type instead (look at the comment above
>>>> + * p2me_is_valid())
>>>> + * Provide our own overlay to check the valid bit.
>>>> + */
>>>> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
>>>> +{
>>>> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
>>>> +}
>>> Same question as on the earlier patch - does P2M type apply to
>>> intermediate page tables at all? (Conceptually it shouldn't.)
>> It doesn't matter whether it is an intermediate page table or a leaf
>> PTE pointing to a page - the PTE should be valid. Considering that in
>> the current implementation it's possible to have PTE.v = 0 but
>> P2M.v = 1, it is better to check P2M.v instead of PTE.v.
> I'm confused by this reply. If you want to name 2nd level page table
> entries P2M - fine (but unhelpful). But then for any memory access
> there's only one of the two involved: A PTE (Xen accesses) or a P2M
> (guest accesses). Hence how can there be "PTE.v = 0 but P2M.v = 1"?

I think I understand your confusion; let me try to rephrase. The reason
for having both [...] It could also be the case that the P2M PTE type
isn't [...]

> An intermediate page table entry is something Xen controls entirely.
> Hence it has no (guest induced) type.

... And actually that is the reason why a type needs to be set even for
an intermediate page table entry. I hope it is now a little clearer
what was done and why.
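To make the "PTE.v = 0 but P2M.v = 1" case concrete, here is a minimal
sketch (illustration only: the function name is made up, and it assumes
the port's PTE_VALID flag together with the p2me_is_valid() helper
quoted above):

/* Illustration only, not part of the patch. */
static bool example_pte_v0_p2m_v1(struct p2m_domain *p2m, pte_t pte)
{
    pte.pte &= ~PTE_VALID;    /* PTE.v = 0: the hardware walker faults. */

    /*
     * Still true as long as the type recorded for this entry (looked
     * up via p2m_type_radix_get()) isn't p2m_invalid, i.e. P2M.v = 1.
     */
    return p2me_is_valid(p2m, pte);
}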
>>>> @@ -492,6 +503,70 @@ static pte_t p2m_entry_from_mfn(struct p2m_domain *p2m, mfn_t mfn, p2m_type_t t,
>>>>      return e;
>>>>  }
>>>>
>>>> +/* Generate table entry with correct attributes. */
>>>> +static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
>>>> +{
>>>> +    /*
>>>> +     * Since this function generates a table entry, according to "Encoding
>>>> +     * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
>>>> +     * to point to the next level of the page table.
>>>> +     * Therefore, to ensure that an entry is a page table entry,
>>>> +     * `p2m_access_n2rwx` is passed to `mfn_to_p2m_entry()` as the access
>>>> +     * value, which overrides whatever was passed as `p2m_type_t` and
>>>> +     * guarantees that the entry is a page table entry by setting
>>>> +     * r = w = x = 0.
>>>> +     */
>>>> +    return p2m_entry_from_mfn(p2m, page_to_mfn(page), p2m_ram_rw,
>>>> +                              p2m_access_n2rwx);
>>> Similarly P2M access shouldn't apply to intermediate page tables.
>>> (Moot with that, but (ab)using p2m_access_n2rwx would also look
>>> wrong: You did read what it means, didn't you?)
>> p2m_access_n2rwx was chosen not really because of the description
>> next to its declaration, but because it sets r=w=x=0, which RISC-V
>> expects for a PTE that points to the next-level page table.
>> Generally, I agree that P2M access shouldn't be applied to
>> intermediate page tables. What I can suggest in this case is to use
>> p2m_access_rwx instead of p2m_access_n2rwx,
> No. p2m_access_* shouldn't come into play here at all.

Okay, then it seems I can't simply re-use p2m_pte_from_mfn() in
page_to_p2m_table(): I'd have to either open-code p2m_pte_from_mfn()
or add another argument, is_table, to decide whether p2m_access_t
and/or p2m_type_t should be applied.
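Open-coding would presumably boil down to something like the sketch
below (illustration only; it assumes the port's PTE_VALID and
PTE_PPN_SHIFT definitions, and deliberately involves neither
p2m_type_t nor p2m_access_t):

/* Sketch: generate a table entry without any p2m type/access input. */
static pte_t page_to_p2m_table(struct page_info *page)
{
    /*
     * Per "Encoding of PTE R/W/X fields", leaving r = w = x = 0 marks
     * the entry as a pointer to the next level of the page table, so
     * only the PPN and the valid bit get set here.
     */
    pte_t pte = { .pte = (mfn_x(page_to_mfn(page)) << PTE_PPN_SHIFT) |
                         PTE_VALID };

    return pte;
}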
> Period. Just like P2M types shouldn't. As per above - intermediate
> page tables are Xen internal constructs.

Please look at the explanation above of why a p2m type is needed
despite the fact that logically it isn't really needed.

>> which will ensure that the P2M access type isn't applied when
>> p2m_entry_from_mfn() is called, and then, after calling
>> p2m_entry_from_mfn(), simply set PTE.r/w/x = 0. So this function
>> would look like:
>>
>> /* Generate table entry with correct attributes. */
>> static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
>> {
>>     /*
>>      * p2m_ram_rw is chosen for a table entry as the p2m table should
>>      * be valid from both the P2M's and the hardware's point of view.
>>      *
>>      * p2m_access_rwx is chosen so that no access restrictions are
>>      * applied to a table entry.
>>      */
>>     pte_t pte = p2m_pte_from_mfn(p2m, page_to_mfn(page), _gfn(0),
>>                                  p2m_ram_rw, p2m_access_rwx);
>>
>>     /*
>>      * Since this function generates a table entry, according to
>>      * "Encoding of PTE R/W/X fields," the entry's r, w, and x fields
>>      * must be set to 0 to point to the next level of the page table.
>>      */
>>     pte.pte &= ~PTE_ACCESS_MASK;
>>
>>     return pte;
>> }
>>
>> Does this make sense? Or would it be better to keep the current
>> version of page_to_p2m_table() and just improve the comment
>> explaining why p2m_access_n2rwx is used for a table entry?
> No to both, as per above.

>>>> +static struct page_info *p2m_alloc_page(struct domain *d)
>>>> +{
>>>> +    struct page_info *pg;
>>>> +
>>>> +    /*
>>>> +     * For hardware domain, there should be no limit in the number of
>>>> +     * pages that can be allocated, so that the kernel may take advantage
>>>> +     * of the extended regions. Hence, allocate p2m pages for hardware
>>>> +     * domains from heap.
>>>> +     */
>>>> +    if ( is_hardware_domain(d) )
>>>> +    {
>>>> +        pg = alloc_domheap_page(d, MEMF_no_owner);
>>>> +        if ( pg == NULL )
>>>> +            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
>>>> +    }
>>> The comment looks to have been taken verbatim from Arm. Whatever
>>> "extended regions" are, does the same concept even exist on RISC-V?
>> Initially, I missed that it's used only for Arm. Since it was
>> mentioned in doc/misc/xen-command-line.pandoc, I assumed it applied
>> to all architectures. But now I see that it's Arm-specific:
>>
>>   ### ext_regions (Arm)
>>> Also, special casing Dom0 like this has benefits, but also comes
>>> with a pitfall: If the system's out of memory, allocations will
>>> fail. A pre-populated pool would avoid that (until exhausted, of
>>> course). If special-casing of Dom0 is needed, I wonder whether ...
>>>> +    else
>>>> +    {
>>>> +        spin_lock(&d->arch.paging.lock);
>>>> +        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>>> +        spin_unlock(&d->arch.paging.lock);
>>>> +    }
>>> ... going this path but with a Dom0-only fallback to general
>>> allocation wouldn't be the better route.
>> IIUC, then it should be something like:
>>
>> static struct page_info *p2m_alloc_page(struct domain *d)
>> {
>>     struct page_info *pg;
>>
>>     spin_lock(&d->arch.paging.lock);
>>     pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>     spin_unlock(&d->arch.paging.lock);
>>
>>     if ( !pg && is_hardware_domain(d) )
>>     {
>>         /* Need to allocate more memory from the domheap. */
>>         pg = alloc_domheap_page(d, MEMF_no_owner);
>>         if ( pg == NULL )
>>         {
>>             printk(XENLOG_ERR "Failed to allocate pages.\n");
>>             return pg;
>>         }
>>         ACCESS_ONCE(d->arch.paging.total_pages)++;
>>         page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
>>     }
>>
>>     return pg;
>> }
>>
>> And basically use d->arch.paging.p2m_freelist for both dom0less and
>> dom0 domains, with the only difference being that, in the case of
>> Dom0, d->arch.paging.p2m_freelist could be extended. Do I understand
>> your idea correctly?
> Broadly yes, but not in the details. For example, I don't think such
> a page allocated from the general heap would want appending to
> freelist. Commentary and alike also would want tidying.

Could you please explain why it wouldn't want appending to the
freelist?
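If I follow, the tidied variant you are after would be roughly the
sketch below (again illustration only, reusing the field names from my
proposal above), the sole structural difference being that the
heap-allocated fallback page goes straight to the caller and never
touches the freelist:

static struct page_info *p2m_alloc_page(struct domain *d)
{
    struct page_info *pg;

    /* Common path: take a page from the pre-populated P2M pool. */
    spin_lock(&d->arch.paging.lock);
    pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
    spin_unlock(&d->arch.paging.lock);

    /* Dom0-only fallback to the general heap once the pool is empty. */
    if ( !pg && is_hardware_domain(d) )
    {
        pg = alloc_domheap_page(d, MEMF_no_owner);
        if ( pg )
            ACCESS_ONCE(d->arch.paging.total_pages)++;
    }

    return pg;
}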
> And of course going forward, for split hardware and control domains
> the latter may want similar treatment.

Could you please clarify the difference between hardware and control
domains? I thought they were the same - or is this for the case where
we have dom0 (the control domain) which runs domD (a hardware domain)
and guest domains?

Thanks.

~ Oleksii