
Re: [Xen-devel] [PATCH v2 01/15] xen/arm: register mmio handler at runtime



On Tue, Apr 8, 2014 at 4:21 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
> On 04/08/2014 11:34 AM, Vijay Kilari wrote:
>>>>>>
>>>>>>>
>>>>>>> [..]
>>>>>>>
>>>>>>>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>>>>>>>> index 50b9b54..23dac85 100644
>>>>>>>> --- a/xen/include/asm-arm/domain.h
>>>>>>>> +++ b/xen/include/asm-arm/domain.h
>>>>>>>> @@ -116,6 +116,7 @@ struct arch_domain
>>>>>>>>      struct hvm_domain hvm_domain;
>>>>>>>>      xen_pfn_t *grant_table_gpfn;
>>>>>>>>
>>>>>>>> +    struct io_handler *io_handlers;
>>>>>>>
>>>>>>> Why do you need a pointer here? I think you can directly use
>>>>>>> struct io_handler iohandlers.
>>>>>>>
>>>>>>   Embedding it directly would increase the size of the arch_domain
>>>>>> struct, so I allocated the memory at runtime.
>>>>>
>>>>> Do you hit the page size limit?
>>>> Yes, I hit the page size exhaustion issue, which I reported earlier.
>>>
>>> Did you still have the issue after you moved the GICv stuff out?
>> I am not hitting this issue now, but I feel we are close to the limit,
>> so it is safer to allocate at runtime.
>
> Hmmmm... we are not on the edge with the current Xen:
>    - arm32: sizeof(struct domain) = 1024
>    - arm64: sizeof(struct domain) = 1408
>
> Unless you have added lots of fields to the structure, there is no
> reason to use a pointer here.
>
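For what it's worth, the build already guards against overflowing a page:
if I remember correctly, alloc_domain_struct() looks roughly like the
sketch below, so growing struct domain past PAGE_SIZE would fail the
build rather than break silently.

    /* xen/arch/arm/domain.c, roughly (from memory): */
    struct domain *alloc_domain_struct(void)
    {
        struct domain *d;

        /* Catch struct domain growing past one page at build time. */
        BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE);

        d = alloc_xenheap_pages(0, 0);
        if ( d != NULL )
            clear_page(d);

        return d;
    }
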
OK, I can remove this pointer. If I make the allocation static, as below:

struct io_handler iohandlers;

then every file that includes domain.h also needs io.h, which currently
lives under arch/arm, for the io_handler definition. So I plan to move
io.h into the include directory under a different name; would pio.h or
device_io.h be acceptable?
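
To make that concrete, the embedded layout I have in mind is roughly the
sketch below; the field names, the MAX_IO_HANDLER value and the mmio.h
file name are just placeholders, not the final patch:

    /* relocated header, e.g. xen/include/asm-arm/mmio.h (name TBD) */
    #define MAX_IO_HANDLER  16              /* placeholder upper bound */

    struct mmio_handler {
        paddr_t addr;
        paddr_t size;
        /* read/write callbacks as in the current io.h, elided here */
    };

    struct io_handler {
        spinlock_t lock;
        int num_entries;
        struct mmio_handler mmio_handlers[MAX_IO_HANDLER];
    };

    /* xen/include/asm-arm/domain.h */
    struct arch_domain
    {
        /* ... existing fields ... */
        struct hvm_domain hvm_domain;
        xen_pfn_t *grant_table_gpfn;

        struct io_handler io_handlers;  /* embedded, no runtime allocation */
        /* ... */
    };

The cost is the fixed MAX_IO_HANDLER array accounted for at compile time,
which is exactly what makes the sizeof(struct domain) question above
relevant.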

> Regards,
>
> --
> Julien Grall
