
Re: [PATCH v3 02/13] xen: harmonize return types of hypercall handlers


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 17 Dec 2021 08:39:41 +0100
  • Cc: Juergen Gross <jgross@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Christopher Clark <christopher.w.clark@xxxxxxxxx>
  • Delivery-date: Fri, 17 Dec 2021 07:40:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.12.2021 22:15, Stefano Stabellini wrote:
> On Thu, 16 Dec 2021, Stefano Stabellini wrote:
>> On Thu, 16 Dec 2021, Juergen Gross wrote:
>>> On 16.12.21 03:10, Stefano Stabellini wrote:
>>>> On Wed, 15 Dec 2021, Juergen Gross wrote:
>>>>> On 14.12.21 18:36, Julien Grall wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 08/12/2021 15:55, Juergen Gross wrote:
>>>>>>> Today most hypercall handlers have a return type of long, while the
>>>>>>> compat ones return an int. There are a few exceptions to that rule,
>>>>>>> however.
>>>>>>
>>>>>> So on Arm64, I don't think you can make use of the full 64 bits, because
>>>>>> a 32-bit domain would not be able to see the top 32 bits.
>>>>>>
>>>>>> In fact, this could potentially cause us some trouble (see [1]) in Xen.
>>>>>> So it feels like the hypercalls should always return a 32-bit signed
>>>>>> value on Arm.
>>>>>
>>>>> This would break hypercalls like XENMEM_maximum_ram_page which are able
>>>>> to return larger values, right?
>>>>>
>>>>>> The other advantage is that it would be clear that the top 32 bits are
>>>>>> not usable. Stefano, what do you think?
>>>>>
>>>>> Wouldn't it make more sense to check that the return value is a
>>>>> sign-extended 32-bit value for 32-bit guests in do_trap_hypercall()
>>>>> instead?
>>>>>
>>>>> The question is what to return if this is not the case. -EDOM?
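
A minimal sketch of the check Juergen suggests, assuming Xen's existing
is_32bit_domain() predicate and register_t type; the helper name is
hypothetical, and -EDOM follows the suggestion above rather than any
committed code:

    /* Hypothetical helper, called from do_trap_hypercall() after the
     * handler has run: 'ret' is the handler's raw return value, 'd' the
     * calling domain. */
    static register_t sanitise_hypercall_ret(struct domain *d, register_t ret)
    {
        /* For a 32-bit guest only a sign-extended 32-bit value is valid;
         * converting (int32_t)ret back to register_t reproduces exactly
         * such values. */
        if ( is_32bit_domain(d) && ret != (register_t)(int32_t)ret )
            return -EDOM;

        return ret;
    }

On an arm32 build register_t is already 32 bits wide, so the comparison
never triggers there; the check would only matter for aarch32 guests
running on an arm64 hypervisor.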
>>>>
>>>>
>>>> I can see where Julien is coming from: we have been trying to keep the
>>>> arm32 and arm64 ABIs identical since the beginning of the project. So,
>>>> like Julien, my preference would be to always return 32-bit values on
>>>> ARM, on both aarch32 and aarch64. It would keep things simple.
>>>>
>>>> The case of XENMEM_maximum_ram_page is interesting, but it is not a
>>>> problem in reality because the max physical address size is only 40 bits
>>>> for aarch32 guests, so 32 bits are always enough to return the highest
>>>> page in memory for 32-bit guests.
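
For reference: with 4 KiB pages, a 40-bit physical address space contains
at most 2^(40-12) = 2^28 page frames, so a frame number within that space
fits comfortably in 32 bits.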
>>>
>>> You are aware that this isn't the guest's max page, but the host's?
> 
> I can see now that you meant to say that, no matter what the max
> pseudo-physical address supported by the VM is, XENMEM_maximum_ram_page is
> supposed to return the max memory page, which could go above the
> addressability limit of the VM.
> 
> So XENMEM_maximum_ram_page should potentially be able to return (1<<44)
> even when called by an aarch32 VM with a max IPA of 40 bits.
> 
> I would imagine it could be useful if dom0 is 32-bit but domUs are
> 64-bit on a 64-bit hypervisor (which I think would be a very rare
> configuration on ARM).
> 
> Then it looks like XENMEM_maximum_ram_page needs to be able to return a
> value wider than 32 bits when called by a 32-bit guest.
> 
> The hypercall ABI follows the ARM C calling convention, so a 64-bit
> value should be returned using r0 and r1. But looking at
> xen/arch/arm/traps.c:do_trap_hypercall, it doesn't seem to ever set r1
> today. Only r0 is set, so effectively we only support 32-bit return
> values on aarch32 and for aarch32 guests.
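
For illustration only, a sketch of how the result write-back could split a
64-bit value across r0/r1 for an aarch32 guest, following the AAPCS. The
helper and the register field names on cpu_user_regs are assumptions made
for the sketch; today's code writes a single result register:

    /* Hypothetical write-back helper: hand a 64-bit result to the guest.
     * For an aarch32 caller the AAPCS puts the low half in r0 and the
     * high half in r1; a 64-bit caller gets the full value in x0.
     * Register field names are illustrative. */
    static void write_hypercall_result(struct cpu_user_regs *regs,
                                       struct domain *d, uint64_t ret)
    {
        if ( is_32bit_domain(d) )
        {
            regs->r0 = (uint32_t)ret;          /* low 32 bits */
            regs->r1 = (uint32_t)(ret >> 32);  /* high 32 bits */
        }
        else
            regs->x0 = ret;                    /* arm64 build only */
    }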
> 
> In other words, today all hypercalls on ARM return 64-bit to 64-bit
> guests and 32-bit to 32-bit guests. Which in the case of memory_op is
> "technically" the correct thing to do because it matches the C
> declaration in xen/include/xen/hypercall.h:
> 
> extern long
> do_memory_op(
>     unsigned long cmd,
>     XEN_GUEST_HANDLE_PARAM(void) arg);
> 
> So... I guess the conclusion is that on ARM do_memory_op should return
> "long", although it is not actually enough for a correct implementation
> of XENMEM_maximum_ram_page for aarch32 guests?

For 32-bit guests the value needs to be saturated, and allocations need to
be restricted to the "visible" part of the physical address space, just
like we do on x86. Except that for Arm the question is how far knowledge
of the largest physical address in the system is actually relevant to
guests in the first place (i.e. like for HVM on x86): the host physical
address space isn't transparent to guests other than PV ones, or at least
it had better be restricted to PV.
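
As a rough sketch of the saturation Jan describes, applied to the
XENMEM_maximum_ram_page path in do_memory_op() (this assumes Xen's
existing max_page and is_32bit_domain(); the clamping itself is only what
is being discussed here, not existing Arm code):

    case XENMEM_maximum_ram_page:
        rc = max_page;    /* highest RAM frame known to the hypervisor */

        /* Hypothetical: a 32-bit caller cannot represent a larger frame
         * number, so saturate rather than truncate. */
        if ( is_32bit_domain(current->domain) && rc > UINT_MAX )
            rc = UINT_MAX;
        break;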

Jan