Re: [PATCH V4] xen/gnttab: Store frame GFN in struct page_info on Arm
On 22.12.2021 13:33, Julien Grall wrote:
> On 22/12/2021 13:05, Jan Beulich wrote:
>> On 22.12.2021 11:01, Julien Grall wrote:
>>> On 14/12/2021 17:45, Jan Beulich wrote:
>>>> On 14.12.2021 17:26, Oleksandr wrote:
>>>>> On 14.12.21 15:37, Jan Beulich wrote:
>>>>>> On 03.12.2021 21:33, Oleksandr Tyshchenko wrote:
>>>>>>> @@ -2177,14 +2181,22 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>>>>>>>
>>>>>>>  void free_xenheap_pages(void *v, unsigned int order)
>>>>>>>  {
>>>>>>> +    struct page_info *pg;
>>>>>>> +    unsigned int i;
>>>>>>> +
>>>>>>>      ASSERT(!in_irq());
>>>>>>>
>>>>>>>      if ( v == NULL )
>>>>>>>          return;
>>>>>>>
>>>>>>> +    pg = virt_to_page(v);
>>>>>>> +
>>>>>>>      memguard_guard_range(v, 1 << (order + PAGE_SHIFT));
>>>>>> ... this really wants to (logically) move into the new arch hooks.
>>>>>> That'll effectively mean to simply drop the Arm stubs afaict (and I
>>>>>> notice there's some dead code there on x86, which I guess I'll make
>>>>>> a patch to clean up). But first of all this suggests that you want
>>>>>> to call the hooks with base page and order, putting the loops there.
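
For reference, that placement would make the free path quoted above look roughly like the sketch below. arch_free_xenheap_pages() is only an assumed name for the free-side counterpart of the arch_alloc_xenheap_pages() hook discussed further down, and the body is merely meant to show the (page, order) arguments and the spot the hook would occupy:

void free_xenheap_pages(void *v, unsigned int order)
{
    struct page_info *pg;

    ASSERT(!in_irq());

    if ( v == NULL )
        return;

    pg = virt_to_page(v);

    /*
     * One call per allocation, taking the base page and the order, placed
     * where memguard_guard_range() sits today so that the hook could also
     * take over a memguard-like mapping operation on the whole range.
     */
    arch_free_xenheap_pages(pg, order);

    free_heap_pages(pg, order, false);
}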
>>>>>
>>>>> I see your point and agree ... However, I see the on-list patches that
>>>>> remove the common memguard_* invocations and the x86 bits.
>>>>> So I assume this request is no longer applicable, or do I still need to
>>>>> pass an order to the new arch hooks? Please clarify.
>>>>
>>>> Well, that patch (really just the Arm one) effectively takes care of
>>>> part of what I said above. Irrespective of that, I continue to think
>>>> that the hook should take a (page, order) tuple instead of being invoked
>>>> once for every order-0 page. And the hook invocations should be placed
>>>> such that they could fulfill the role of the memguard functions being
>>>> removed (iirc that was already the case, at least mostly).
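
In interface terms, the contrast is between a per-allocation hook and a per-page one; the names below are purely illustrative, modelled on the arch_alloc_xenheap_pages() used further down:

/* One invocation per allocation: base page plus order, loop inside the hook. */
void arch_alloc_xenheap_pages(struct page_info *pg, unsigned int order);

/* versus one invocation per order-0 page, with the loop kept in common code. */
void arch_alloc_xenheap_page(struct page_info *pg);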
>>>
>>> IIUC your suggestion, with your approach, alloc_xenheap_pages() would
>>> look like:
>>>
>>> for ( i = 0; i < (1u << order); i++ )
>>>     pg[i].count_info |= PGC_xen_heap;
>>>
>>> arch_alloc_xenheap_pages(pg, 1U << order);
>>
>> Like Oleksandr said, the 2nd argument would be just "order".
>>
>>> The Arm implementation of arch_alloc_xenheap_pages() would also contain
>>> a loop.
>>>
>>> This could turn out to be quite expensive with large allocations (a 1GB
>>> allocation would require 16MB of cache, i.e. 256K struct page_info at
>>> 64 bytes each) because the cache may not have enough space to contain
>>> all the page_info of that range. So you would have to pull each
>>> page_info into the cache twice.
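
Spelled out, the concern is about two back-to-back walks over the same struct page_info range, along the lines of the sketch below; the body of the Arm hook is purely a stand-in for whatever per-page update of type_info is needed:

/* common/page_alloc.c: first walk over the page_info array. */
for ( i = 0; i < (1u << order); i++ )
    pg[i].count_info |= PGC_xen_heap;

arch_alloc_xenheap_pages(pg, order);

/* arch/arm: second walk over the very same array. */
void arch_alloc_xenheap_pages(struct page_info *pg, unsigned int order)
{
    unsigned int i;

    /*
     * For a 1GB (order-18) allocation this loop revisits ~16MB of
     * struct page_info, which will not have stayed resident in the
     * cache since the first loop, so each page_info is pulled in
     * from memory a second time.
     */
    for ( i = 0; i < (1u << order); i++ )
        pg[i].u.inuse.type_info = 0; /* stand-in for the real per-page update */
}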
>>
>> Hmm, that's a fair point. I assume you realize that a similar issue of
>> higher overhead would occur when using your approach, and when some
>> memguard-like thing were to reappear: Such mapping operations typically
>> are more efficient when done on a larger range.
>
> Yes, I was aware of that when I wrote my message. However, they are not
> necessary at the moment. So I think we can defer the discussion.
>
>> Since that's only a
>> hypothetical use at this point, I'm willing to accept your preference.
>> I'd like us to consider one more aspect though: All you need on Arm is
>> the setting of the exact same bits to the exact same pattern for every
>> struct page_info involved. Can't we simply have an arch hook returning
>> that pattern, for generic code to then OR it in alongside PGC_xen_heap?
>
> arch_alloc_xenheap_pages() will modify inuse.type_info, so we can't OR
> that value in alongside PGC_xen_heap.
Oh, sure - I didn't mean it to be a single OR, but two next to each other
inside the one loop that's already there.
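
I.e. something along these lines (sketch only; arch_xenheap_type_info() is a made-up name for whatever helper would return the Arm-specific pattern):

for ( i = 0; i < (1u << order); i++ )
{
    pg[i].count_info |= PGC_xen_heap;
    pg[i].u.inuse.type_info |= arch_xenheap_type_info();
}
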
Jan