Re: [PATCH v3 02/13] xen: harmonize return types of hypercall handlers
On 16.12.21 22:15, Stefano Stabellini wrote:
> On Thu, 16 Dec 2021, Stefano Stabellini wrote:
>> On Thu, 16 Dec 2021, Juergen Gross wrote:
>>> On 16.12.21 03:10, Stefano Stabellini wrote:
>>>> On Wed, 15 Dec 2021, Juergen Gross wrote:
>>>>> On 14.12.21 18:36, Julien Grall wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 08/12/2021 15:55, Juergen Gross wrote:
>>>>>>> Today most hypercall handlers have a return type of long, while
>>>>>>> the compat ones return an int. There are a few exceptions from
>>>>>>> that rule, however.
>>>>>>
>>>>>> So on Arm64, I don't think you can make use of the full 64-bit
>>>>>> because a 32-bit domain would not be able to see the top 32-bit.
>>>>>> In fact, this could potentially cause us some trouble (see [1])
>>>>>> in Xen. So it feels like the hypercalls should always return a
>>>>>> 32-bit signed value on Arm.
>>>>>
>>>>> This would break hypercalls like XENMEM_maximum_ram_page which
>>>>> are able to return larger values, right?
>>>>>
>>>>>> The other advantage is it would be clear that the top 32-bit are
>>>>>> not usable.
>>>>>>
>>>>>> Stefano, what do you think?
>>>>>
>>>>> Wouldn't it make more sense to check the return value to be a
>>>>> sign-extended 32-bit value for 32-bit guests in
>>>>> do_trap_hypercall() instead?
>>>>>
>>>>> The question is what to return if this is not the case. -EDOM?
>>>>
>>>> I can see where Julien is coming from: we have been trying to keep
>>>> the arm32 and arm64 ABIs identical since the beginning of the
>>>> project. So, like Julien, my preference would be to always return
>>>> 32-bit on ARM, both aarch32 and aarch64. It would make things
>>>> simple.
>>>>
>>>> The case of XENMEM_maximum_ram_page is interesting but it is not a
>>>> problem in reality, because the max physical address size is only
>>>> 40-bit for aarch32 guests, so 32 bits are always enough to return
>>>> the highest page in memory for 32-bit guests.
>>>
>>> You are aware that this isn't the guest's max page, but the host's?
>>
>> I can see now that you meant to say that, no matter what the max
>> pseudo-physical address supported by the VM is,
>> XENMEM_maximum_ram_page is supposed to return the max memory page,
>> which could go above the addressability limit of the VM.
> So XENMEM_maximum_ram_page should potentially be able to return
> (1<<44) even when called by an aarch32 VM, with a max IPA of 40-bit.
> I would imagine it could be useful if dom0 is 32-bit but domUs are
> 64-bit on a 64-bit hypervisor (which I think would be a very rare
> configuration on ARM.)
>
> Then it looks like XENMEM_maximum_ram_page needs to be able to return
> a value > 32-bit when called by a 32-bit guest.
>
> The hypercall ABI follows the ARM C calling convention, so a 64-bit
> value should be returned using r0 and r1. But looking at
> xen/arch/arm/traps.c:do_trap_hypercall, it doesn't seem it ever sets
> r1 today. Only r0 is set, so effectively we only support 32-bit
> return values on aarch32 and for aarch32 guests.
>
> In other words, today all hypercalls on ARM return 64-bit to 64-bit
> guests and 32-bit to 32-bit guests. Which, in the case of memory_op,
> is "technically" the correct thing to do because it matches the C
> declaration in xen/include/xen/hypercall.h:
>
>     extern long do_memory_op(
>         unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
>
> So... I guess the conclusion is that on ARM do_memory_op should
> return "long", although that is not actually enough for a correct
> implementation of XENMEM_maximum_ram_page for aarch32 guests?

Hence my suggestion to check the return value of _all_ hypercalls to
be proper sign-extended int values for 32-bit guests. This would fix
all potential issues without silently returning truncated values.

Juergen
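[Editorial note: the check suggested above could be sketched as follows. This is a hypothetical illustration, not the actual Xen code; the helper name sanitise_ret_for_32bit_guest is made up, and fixed-width types stand in for Xen's register types.]

```c
#include <errno.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the check proposed for do_trap_hypercall():
 * before handing a hypercall result back to a 32-bit guest, verify
 * that the 64-bit value is a properly sign-extended 32-bit value; if
 * not, return -EDOM rather than silently truncating. The helper name
 * is invented for illustration.
 */
static int64_t sanitise_ret_for_32bit_guest(int64_t ret)
{
    /* A value fits in 32 bits iff sign-extending its low word is a no-op. */
    if ( ret != (int64_t)(int32_t)ret )
        return -EDOM;

    return ret;
}
```

With such a check, a XENMEM_maximum_ram_page result like 1 << 44 would become -EDOM for an aarch32 guest, while ordinary negative error codes (which are already sign-extended 32-bit values) pass through unchanged.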
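[Editorial note: on the calling-convention point quoted above — the AArch32 procedure call standard returns a 64-bit value in the register pair r0/r1 (r0 = low word, r1 = high word). A minimal sketch of what writing both registers would look like, with a made-up struct standing in for Xen's cpu_user_regs:]

```c
#include <stdint.h>

/* Stand-in for the guest register state; not Xen's real cpu_user_regs. */
struct fake_user_regs {
    uint32_t r0;
    uint32_t r1;
};

/*
 * Sketch of returning a 64-bit hypercall result to an aarch32 guest
 * per the ARM C calling convention: low word in r0, high word in r1.
 * As noted in the thread, do_trap_hypercall() today only writes r0,
 * so the high word of a large result is lost.
 */
static void set_ret_64(struct fake_user_regs *regs, uint64_t ret)
{
    regs->r0 = (uint32_t)ret;          /* low 32 bits */
    regs->r1 = (uint32_t)(ret >> 32);  /* high 32 bits */
}
```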