
Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS


  • To: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 20 Mar 2026 14:19:24 +0100
  • Cc: Romain Caritey <Romain.Caritey@xxxxxxxxxxxxx>, Alistair Francis <alistair.francis@xxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 20 Mar 2026 13:19:45 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 20.03.2026 10:58, Oleksii Kurochko wrote:
> On 3/19/26 8:58 AM, Jan Beulich wrote:
>> On 17.03.2026 13:49, Oleksii Kurochko wrote:
>>> On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>>>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>>>>>> +
>>>>>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>>>>>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
>>> (cut)
>>>
>>>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>>>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>>>> even a 32-bit hypervisor would suffice?
>>> Btw, shouldn't we look at the VPN width?
>>>
>>> My understanding is that we should take GUEST_RAM0_BASE as the start GFN
>>> and map it, page by page, to MFNs (allocated by alloc_domheap_pages()),
>>> repeating this until GUEST_RAM0_SIZE has been mapped.
>>>
>>> In this case, for RV32 the VPN (which is the GFN in the current context)
>>> is 32 bits wide, as RV32 supports only Sv32, i.e. up to 2^32 - 1, which is
>>> almost 4GiB.
>> ??? (IOW - I fear I'm confused enough by the question that I don't know how
>> to respond.)
> 
> You mentioned above that:
>    "... And with (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits 
> wide) ..."
> 
> I wanted to clarify why you used PPN here in the context of the
> GUEST_RAM0_BASE definition (maybe I simply misinterpreted your original
> message). GUEST_RAM0_BASE is the address at which the guest believes RAM
> starts in its physical address space, i.e. it is a GPA, which is then
> translated to an MPA.
> 
>  From the MMU's perspective, the GPA looks like:
>    VPN[1] | VPN[0] | page_offset   (in Sv32x4 mode)
> 
> In Sv32x4, the GPA is 34 bits wide (or 22 bits wide in terms of GFNs), and
> the MPA is also 34 bits wide (or 22 bits wide in terms of PPNs).
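[As an aside, the widths being discussed can be checked with a little spec arithmetic; this is illustrative Python based on the RISC-V privileged spec's figures, not Xen code:]

```python
# Illustrative arithmetic only (RISC-V privileged spec figures, not Xen code).
PAGE_SHIFT = 12  # 4 KiB pages

# Sv32: 32-bit VAs; PTEs carry a 22-bit PPN, so the PA space (and, with the
# hypervisor extension, the Sv32x4 GPA space) is 22 + 12 = 34 bits wide.
sv32_ppn_bits = 22
sv32_pa_bits = sv32_ppn_bits + PAGE_SHIFT   # 34
sv32x4_gpa_bits = 32 + 2                    # VA width + 2 = 34

print(f"Sv32 PA space:    2^{sv32_pa_bits} = {1 << sv32_pa_bits:#x} bytes")
print(f"Sv32x4 GPA space: 2^{sv32x4_gpa_bits} = {1 << sv32x4_gpa_bits:#x} bytes")
```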

Your mentioning Sv32x4 may point at part of the problem: for the guest-physical
memory layout (and hence size), paging, and thus virtual addresses, don't matter
at all. What matters is what the guest can put in the page-table entries it
writes. Addresses there are represented as PPNs, aren't they? Hence my use of
that acronym.
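[To make that concrete: a guest-written Sv32 PTE can only name pages through its 22-bit PPN field, regardless of how wide virtual addresses are. An illustrative Python sketch, not Xen code:]

```python
# Sv32 PTE layout per the RISC-V privileged spec: bits 9:0 are flag bits,
# bits 31:10 hold the 22-bit PPN (PPN[1] is 12 bits, PPN[0] is 10 bits).
SV32_PPN_MASK = (1 << 22) - 1

def sv32_pte_ppn(pte: int) -> int:
    """Extract the physical page number a PTE points at."""
    return (pte >> 10) & SV32_PPN_MASK

# The topmost page of the 34-bit PA space is still expressible:
pte = (SV32_PPN_MASK << 10) | 0x1          # PPN = 0x3fffff, V bit set
assert sv32_pte_ppn(pte) == 0x3FFFFF
print(hex(sv32_pte_ppn(pte) << 12))        # PA of that page: 0x3fffff000
```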

> The distinction is not significant in Sv32x4, since the PPN width equals the
> VPN width, but in other modes VPN < PPN (in terms of bit width). So when we
> want to run a guest in Sv39x4 mode and give it the full Sv39x4 address space,
> i.e. set GUEST_RAM0_SIZE to the maximum possible value for Sv39x4, shouldn't
> we look at the VPN width rather than the PPN width?

No, why? The guest can arrange to map more than 2^39 bytes. Not all at the same
time, sure, but by suitably switching page tables (or merely entries) around.
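[For completeness, the two bounds contrasted in the quoted text can be derived from the spec widths; illustrative arithmetic only:]

```python
PAGE_SHIFT = 12  # 4 KiB pages

# Sv39x4: guest-physical addresses are VA width + 2 = 41 bits wide.
sv39x4_gpa_bits = 39 + 2
# Sv39 PTEs carry a 44-bit PPN, so machine-physical addresses are 56 bits.
sv39_pa_bits = 44 + PAGE_SHIFT

print(hex((1 << sv39x4_gpa_bits) - 1))   # 0x1ffffffffff      (2^41 - 1)
print(hex((1 << sv39_pa_bits) - 1))      # 0xffffffffffffff   (2^56 - 1)
```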

Jan

> In other words, GUEST_RAM0_SIZE should be (2^41 - 1) rather than (2^56 - 1)
> for Sv39x4.
> 
> ~ Oleksii
> 
