
Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS





On 3/20/26 2:19 PM, Jan Beulich wrote:
On 20.03.2026 10:58, Oleksii Kurochko wrote:
On 3/19/26 8:58 AM, Jan Beulich wrote:
On 17.03.2026 13:49, Oleksii Kurochko wrote:
On 2/13/26 2:11 PM, Jan Beulich wrote:
+#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
+#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
+
+#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
+#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
(cut)

If all you want are 2Gb guests, why would such guests be 64-bit? And with
(iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
even a 32-bit hypervisor would suffice?
Btw, shouldn't we look at VPN width?

My understanding is that we should take GUEST_RAM0_BASE as the start GFN and
map it, page by page, to MFNs allocated by alloc_domheap_pages(), repeating
until the whole of GUEST_RAM0_SIZE has been mapped.

In this case, for RV32 the VPN (which is the GFN in the current context) is
32 bits wide, as RV32 supports only Sv32; that gives 2^32 - 1, i.e. almost 4GB.
??? (IOW - I fear I'm confused enough by the question that I don't know how
to respond.)

You mentioned above that:
    "... And with (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide) 
..."

I wanted to clarify why you use PPN here in the context of the GUEST_RAM0_BASE
definition (and maybe I just misinterpreted your original message).
GUEST_RAM0_BASE is the address at which the guest believes RAM starts in its
physical address space, i.e. it is a GPA, which is then translated to an MPA.

  From the MMU's perspective, the GPA looks like:
    VPN[1] | VPN[0] | page_offset   (in Sv32x4 mode)

In Sv32x4, the GPA is 34 bits wide (or 22 bits wide in terms of GFNs), and the
MPA is 34 bits wide as well (or 22 bits wide in terms of the PPN).

You mentioning Sv32x4 may point at part of the problem: For the guest physical
memory layout (and hence size), paging and hence virtual addresses don't matter
at all. What matters is what the guest can put in the page table entries it
writes. Addresses there are represented as PPNs, aren't they? Hence my use of
that acronym.

That is what I arrived at after writing and sending the e-mail. Now you have
confirmed it.


The distinction is not significant in Sv32x4, since the PPN width equals the
VPN width, but in other modes VPN < PPN (in terms of bit width).
So when we want to run a guest in Sv39x4 mode and give it the full Sv39x4
address space, setting GUEST_RAM0_SIZE to the maximum possible value for
Sv39x4, shouldn't we look at the VPN width rather than the PPN width?

No, why? The guest can arrange to map more than 2^39 bytes. Not all at the same
time, sure, but by suitable switching page tables (or merely entries) around.

Good point. The right limit is therefore the PPN width, which reflects the
actual physical addressing capability.

Thanks a lot.

~ Oleksii




 

