Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
On 18.05.2021 10:57, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@xxxxxxxx>
>> Sent: Tuesday, May 18, 2021 3:35 PM
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -2447,6 +2447,9 @@ int assign_pages(
>>> {
>>> ASSERT(page_get_owner(&pg[i]) == NULL);
>>> page_set_owner(&pg[i], d);
>>> + /* use page_set_reserved_owner to set its reserved domain owner. */
>>> + if ( (pg[i].count_info & PGC_reserved) )
>>> + page_set_reserved_owner(&pg[i], d);
>>
>> Now this is puzzling: What's the point of setting two owner fields to
>> the same value? I also don't recall you having introduced
>> page_set_reserved_owner() for x86, so how is this going to build there?
>>
>
> Thanks for pointing out that it will fail on x86.
> As for the duplicated value, sure, I shall change the field to a domid_t
> to record the reserved owner; the domid alone is enough to differentiate.
> And even when the domain gets rebooted, the struct domain may be
> destroyed, but the domid will stay the same.
Will it? Are you intending to put in place restrictions that make it
impossible for the ID to get re-used by another domain?
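Penny's proposal above (recording only a domid_t for the reserved owner, rather than a second struct domain pointer) could be sketched roughly as follows. This is an illustrative sketch, not the actual patch: the field and helper names here are assumptions.

```c
/*
 * Illustrative sketch only (not Xen code): track a reserved page's
 * owner by domid_t instead of a second struct domain pointer, as
 * proposed in the thread. Field and helper names are assumptions.
 */
#include <stdint.h>

typedef uint16_t domid_t;
#define DOMID_INVALID ((domid_t)0x7FF4U)
#define PGC_reserved  (1UL << 0)

struct page_info {
    unsigned long count_info;
    domid_t reserved_domid;  /* survives struct domain destruction on reboot */
};

/* Record the reserved owner only for pages carrying PGC_reserved. */
static void page_set_reserved_domid(struct page_info *pg, domid_t d)
{
    if ( pg->count_info & PGC_reserved )
        pg->reserved_domid = d;
}
```

The design point being debated is that a domid is a stable identity across a reboot-and-recreate cycle only if domid re-use by other domains is ruled out, which is exactly Jan's question above.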
> The major use cases for domains on static allocation assume that the
> whole system is static: no runtime domain creation.
Right, but that's not currently enforced afaics. If you would
enforce it, it may simplify a number of things.
>>> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
>>> return pg;
>>> }
>>>
>>> +/*
>>> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory,
>>> + * then assign them to one specific domain #d.
>>> + * It is the equivalent of alloc_domheap_pages for static memory.
>>> + */
>>> +struct page_info *alloc_domstatic_pages(
>>> + struct domain *d, unsigned long nr_pfns, paddr_t start,
>>> + unsigned int memflags)
>>> +{
>>> + struct page_info *pg = NULL;
>>> + unsigned long dma_size;
>>> +
>>> + ASSERT(!in_irq());
>>> +
>>> + if ( memflags & MEMF_no_owner )
>>> + memflags |= MEMF_no_refcount;
>>> +
>>> + if ( !dma_bitsize )
>>> + memflags &= ~MEMF_no_dma;
>>> + else
>>> + {
>>> + dma_size = 1ul << bits_to_zone(dma_bitsize);
>>> + /* Starting address shall meet the DMA limitation. */
>>> + if ( dma_size && start < dma_size )
>>> + return NULL;
>>
>> It is the entire range (i.e. in particular the last byte) which needs
>> to meet such a restriction. I'm not convinced though that DMA width
>> restrictions and static allocation are sensible to coexist.
>>
>
> FWIW, if the starting address meets the limitation, then the last byte,
> being greater than the starting address, meets it too.
I'm afraid I don't know what you're meaning to tell me here.
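One way to read this exchange: Penny's argument holds for the start-address comparison alone, but a full-range check would still need to guard the end-address computation against wrap-around. A rough sketch of such a check follows; the function name, types, and constants are assumptions for illustration, not Xen code.

```c
/*
 * Illustrative sketch (not the patch): check that the whole requested
 * range, not just its first byte, lies at or above the DMA boundary,
 * and that computing the end address does not wrap around.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;
#define PAGE_SHIFT 12

static bool range_clears_dma(paddr_t start, unsigned long nr_pfns,
                             paddr_t dma_size)
{
    paddr_t size = (paddr_t)nr_pfns << PAGE_SHIFT;

    if ( start + size < start )  /* end address wrapped around */
        return false;

    /* If start clears the boundary and there is no wrap, so does
     * every later byte of the range. */
    return start >= dma_size;
}
```

Absent overflow, `start >= dma_size` does imply the last byte clears the boundary too, which appears to be the point Penny is making.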
Jan