
Re: [PATCH] stubdom: foreignmemory: Fix build after 0dbb4be739c5


  • To: Julien Grall <julien@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 14 Jul 2021 08:11:22 +0200
  • Cc: Julien Grall <jgrall@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Costin Lupu <costin.lupu@xxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Wed, 14 Jul 2021 06:11:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 13.07.2021 18:33, Julien Grall wrote:
> Hi,
> 
> On 13/07/2021 17:27, Jan Beulich wrote:
>> On 13.07.2021 18:15, Julien Grall wrote:
>>> On 13/07/2021 16:52, Jan Beulich wrote:
>>>> On 13.07.2021 16:33, Julien Grall wrote:
>>>>> On 13/07/2021 15:23, Jan Beulich wrote:
>>>>>> On 13.07.2021 16:19, Julien Grall wrote:
>>>>>>> On 13/07/2021 15:14, Jan Beulich wrote:
>>>>>>>>> And I don't think it should be named XC_PAGE_*, but rather XEN_PAGE_*.
>>>>>>>>
>>>>>>>> Even that doesn't seem right to me, at least in principle. There
>>>>>>>> shouldn't be a build time setting when it may vary at runtime.
>>>>>>>> IOW on Arm I think a runtime query to the hypervisor would be
>>>>>>>> needed instead.
>>>>>>>
>>>>>>> Yes, we want to be able to use the same userspace/OS without
>>>>>>> rebuilding it for a specific hypervisor page size.
>>>>>>>
>>>>>>>> And thinking even more generally, perhaps there could also be
>>>>>>>> mixed (base) page sizes in use at run time, so it may need to be
>>>>>>>> a bit mask which gets returned.
>>>>>>>
>>>>>>> I am not sure I understand this. Are you saying the hypervisor may
>>>>>>> use different page sizes at the same time?
>>>>>>
>>>>>> I think so, yes. And I further think the hypervisor could even allow its
>>>>>> guests to do so.
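
To make the runtime query idea above concrete, here is a minimal sketch
in C of the kind of interface being discussed. Note that
xen_query_page_sizes() and the PAGE_SIZE_* flags are hypothetical names
chosen purely for illustration; no such hypercall exists today.

#include <stdint.h>

/* Hypothetical flags: one bit per supported base page granularity. */
#define PAGE_SIZE_4K   (1U << 0)
#define PAGE_SIZE_16K  (1U << 1)
#define PAGE_SIZE_64K  (1U << 2)

/* Stand-in for the runtime query; a real implementation would be a
 * hypercall. Here we pretend the hypervisor reports 16k and 64k. */
static uint32_t xen_query_page_sizes(void)
{
    return PAGE_SIZE_16K | PAGE_SIZE_64K;
}

/* Pick the smallest granularity reported at run time, instead of
 * hard-coding an XC_PAGE_ or XEN_PAGE_ constant at build time. */
static unsigned long xen_min_page_size(void)
{
    uint32_t mask = xen_query_page_sizes();

    if ( mask & PAGE_SIZE_4K )
        return 4096;
    if ( mask & PAGE_SIZE_16K )
        return 16384;
    return 65536;
}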
>>>>>
>>>>> This is already the case on Arm. We need to differentiate between the
>>>>> page size used by the guest and the one used by Xen for the stage-2 page
>>>>> table (what you call EPT on x86).
>>>>>
>>>>> In this case, we are talking about the page size used by the hypervisor
>>>>> to configure the stage-2 page table.
>>>>>
>>>>>> There would be a distinction between the granularity at which RAM
>>>>>> gets allocated and the granularity at which page mappings (RAM or
>>>>>> other) can be established, which yields an environment that I'd say
>>>>>> has no clear "system page size".
>>>>>
>>>>> I don't quite understand why you would allocate memory and establish
>>>>> mappings with different page sizes in the hypervisor. Can you give an
>>>>> example?
>>>>
>>>> Pages may get allocated in 16k chunks, but there may be ways to map
>>>> 4k MMIO regions, 4k grants, etc. Due to the 16k allocation granularity
>>>> you'd e.g. still balloon pages in and out at 16k granularity.
>>> Right, 16KB is a multiple of 4KB, so a guest could say "Please allocate
>>> a contiguous chunk of 4 4KB pages".
>>>
>>>   From my understanding, you are suggesting we tell the guest that we
>>> "support 4KB, 16KB, 64KB...". However, it should be sufficient to say
>>> "we support 4KB and all its multiples".
>>
>> No - in this case it could legitimately expect to be able to balloon
>> out a single 4k page. Yet that's not possible with 16k allocation
>> granularity.
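
To illustrate the point: if RAM is allocated (and hence ballooned) at
16k granularity while 4k mappings remain possible, a request to balloon
out a single 4k page cannot be honoured. A minimal sketch, assuming
made-up ALLOC_GRAN/MAP_GRAN constants and a hypothetical
balloon_request_ok() helper:

#include <stdbool.h>
#include <stdio.h>

#define MAP_GRAN    (4UL << 10)   /* mappings (MMIO, grants) can be 4k */
#define ALLOC_GRAN  (16UL << 10)  /* RAM is allocated/ballooned at 16k */

/* A balloon request has to cover whole allocation units, even though
 * finer-grained mappings exist. */
static bool balloon_request_ok(unsigned long start, unsigned long size)
{
    return !(start % ALLOC_GRAN) && !(size % ALLOC_GRAN);
}

int main(void)
{
    /* A single 4k page is smaller than the allocation unit: rejected. */
    printf("4k:  %s\n", balloon_request_ok(0, MAP_GRAN) ? "ok" : "rejected");
    /* A whole 16k allocation unit: fine. */
    printf("16k: %s\n", balloon_request_ok(0, ALLOC_GRAN) ? "ok" : "rejected");
    return 0;
}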
> 
> I am confused... why would you want to impose such a restriction? IOW,
> what are you trying to protect against?

Protect? It may simply be that the most efficient page size is 16k.
Hence accounting of pages may be done at 16k granularity. IOW there
would then be one struct page_info per 16k page. How would you propose
a guest alloc/free 4k pages in such a configuration?

Jan
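
As a footnote to the question above, a minimal sketch of what per-16k
accounting implies. Only the struct page_info name follows Xen; the
field names, frame_table size, and page_info_for() helper are
illustrative. Since all four 4k sub-pages of a 16k frame share one
accounting entry, no single one of them can be handed back to the
hypervisor on its own.

#include <stdint.h>

#define ACCT_SHIFT  14                      /* one entry per 16k frame */

struct page_info {                          /* per-16k-frame accounting */
    uint32_t count_info;
    uint32_t owner;
};

static struct page_info frame_table[1024];  /* covers 16 MiB in this toy */

/* All four 4k sub-pages of a 16k frame resolve to the same entry, so
 * reference counting cannot tell them apart; a lone 4k page therefore
 * cannot be allocated or freed independently. */
static struct page_info *page_info_for(uintptr_t maddr)
{
    return &frame_table[maddr >> ACCT_SHIFT];
}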