
Re: [PATCH V4] xen/gnttab: Store frame GFN in struct page_info on Arm


  • To: Julien Grall <julien@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 4 Jan 2022 09:31:51 +0100
  • Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Bob Eshleman <bobbyeshleman@xxxxxxxxx>, Alistair Francis <alistair.francis@xxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Oleksandr <olekstysh@xxxxxxxxx>
  • Delivery-date: Tue, 04 Jan 2022 08:32:00 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 22.12.2021 13:33, Julien Grall wrote:
> On 22/12/2021 13:05, Jan Beulich wrote:
>> On 22.12.2021 11:01, Julien Grall wrote:
>>> On 14/12/2021 17:45, Jan Beulich wrote:
>>>> On 14.12.2021 17:26, Oleksandr wrote:
>>>>> On 14.12.21 15:37, Jan Beulich wrote:
>>>>>> On 03.12.2021 21:33, Oleksandr Tyshchenko wrote:
>>>>>>> @@ -2177,14 +2181,22 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>>>>>>>     
>>>>>>>     void free_xenheap_pages(void *v, unsigned int order)
>>>>>>>     {
>>>>>>> +    struct page_info *pg;
>>>>>>> +    unsigned int i;
>>>>>>> +
>>>>>>>         ASSERT(!in_irq());
>>>>>>>     
>>>>>>>         if ( v == NULL )
>>>>>>>             return;
>>>>>>>     
>>>>>>> +    pg = virt_to_page(v);
>>>>>>> +
>>>>>>>         memguard_guard_range(v, 1 << (order + PAGE_SHIFT));
>>>>>> ... this really wants to (logically) move into the new arch hooks.
>>>>>> That'll effectively mean to simply drop the Arm stubs afaict (and I
>>>>>> notice there's some dead code there on x86, which I guess I'll make
>>>>>> a patch to clean up). But first of all this suggests that you want
>>>>>> to call the hooks with base page and order, putting the loops there.
>>>>>
>>>>> I see your point and agree ... However, I see the on-list patches that
>>>>> remove the common memguard_* invocations and the x86 bits.
>>>>> So I assume this request is no longer applicable, or do I still need to
>>>>> pass an order to the new arch hooks? Please clarify.
>>>>
>>>> Well, that patch (really just the Arm one) effectively takes care of
>>>> part of what I did say above. Irrespective I continue to think that
>>>> the hook should take a (page,order) tuple instead of getting invoked
>>>> once for every order-0 page. And the hook invocations should be placed
>>>> such that they could fulfill the (being removed) memguard function
>>>> (iirc that was already the case, at least mostly).
>>>
>>> IIUC your suggestion, with your approach, alloc_xenheap_pages() would
>>> look like:
>>>
>>>        for ( i = 0; i < (1u << order); i++ )
>>>            pg[i].count_info |= PGC_xen_heap;
>>>
>>>        arch_alloc_xenheap_pages(pg, 1U << order);
>>
>> Like Oleksandr said, the 2nd argument would be just "order".
>>
>>> The Arm implementation for arch_alloc_xenheap_pages() would also contain
>>> a loop.
>>>
>>> This could turn out to be quite expensive with large allocations (a 1GB
>>> allocation would require 16MB of cache) because the cache may not have
>>> enough space to contain all the page_info structures for that range. So
>>> you would have to pull the page_info into the cache twice.
>>
>> Hmm, that's a fair point. I assume you realize that a similar issue of
>> higher overhead would occur when using your approach, and when some
>> memguard-like thing were to reappear: such mapping operations typically
>> are more efficient when done on a larger range.
> 
> Yes, I was aware of that when I wrote my message. However, they are not 
> necessary at the moment. So I think we can defer the discussion.
> 
>>  Since that's only a
>> hypothetical use at this point, I'm willing to accept your preference.
>> I'd like us to consider one more aspect though: All you need on Arm is
>> the setting of the exact same bits to the exact same pattern for every
>> struct page_info involved. Can't we simply have an arch hook returning
>> that pattern, for generic code to then OR it in alongside PGC_xen_heap?
> 
> arch_alloc_xenheap_pages() will modify inuse.type_info, so we can't OR
> the value in alongside PGC_xen_heap.

Oh, sure - I didn't mean it to be a single OR, but two next to each other
inside the one loop that's already there.
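[A minimal sketch of the shape being discussed, with an illustrative hook name and simplified stand-in types; none of this is the actual Xen code:]

```c
#include <assert.h>

/* Simplified stand-ins for Xen's page bookkeeping. */
#define PGC_xen_heap (1UL << 0)

struct page_info {
    unsigned long count_info;
    union { unsigned long type_info; } inuse;
};

/* Hypothetical arch hook: returns the arch-specific pattern to store per
 * page (on Arm this would carry the GFN-tracking bits). */
static unsigned long arch_xenheap_type_info(void)
{
    return 0xdeadUL; /* placeholder pattern */
}

/* The two stores sit next to each other inside the one loop that already
 * exists in generic code, instead of generic code calling into the arch
 * once per order-0 page. */
static void mark_xenheap(struct page_info *pg, unsigned int order)
{
    unsigned long pattern = arch_xenheap_type_info();

    for ( unsigned long i = 0; i < (1UL << order); i++ )
    {
        pg[i].count_info |= PGC_xen_heap;
        pg[i].inuse.type_info = pattern;
    }
}
```

This way each struct page_info is only brought into the cache once per allocation, which is the concern raised about doing a second loop inside an arch hook.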

Jan
