
Re: [Xen-devel] [PATCH v4 1/4] xen/arm: vgic-v2: Report the correct GICC size to the guest



On 23/10/15 14:39, Ian Campbell wrote:
> On Fri, 2015-10-23 at 14:34 +0100, Julien Grall wrote:
>> Hi Ian,
>>
>> On 23/10/15 14:28, Ian Campbell wrote:
>>> On Thu, 2015-10-08 at 19:23 +0100, Julien Grall wrote:
>>>> The GICv2 DT node is usually used by the guest to know the
>>>> address/size
>>>> of the regions (GICD, GICC...) to map into their virtual memory.
>>>>
>>>> While the GICv2 spec requires the size of the GICC to be 8KB, we
>>>> correctly do an 8KB stage-2 mapping but errornously report 256 in the
>>>> device tree (based on GUEST_GICC_SIZE).
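>>>>
>>>> For illustration, the guest's GICv2 node ends up looking roughly like
>>>> this (a sketch: the base addresses and compatible string are
>>>> assumptions, the point is the size cell of the second reg entry):
>>>>
>>>>     interrupt-controller@3001000 {
>>>>         compatible = "arm,cortex-a15-gic";
>>>>         #interrupt-cells = <3>;
>>>>         interrupt-controller;
>>>>         reg = <0x0 0x3001000 0x0 0x1000>,  /* GICD: 4KB */
>>>>               <0x0 0x3002000 0x0 0x2000>;  /* GICC: 8KB, not 0x100 */
>>>>     };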
>>>
>>> "erroneously"
>>>
>>>>
>>>> I bet we didn't see any issue so far because all the registers except
>>>> GICC_DIR live in the first 256 bytes of the GICC region and all the
>>>> guest
>>>
>>> "guests"
>>>
>>>> I have seen so far are driving the GIC with GICC_CTLR.EOImode = 0.
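>>>>
>>>> To see why GICC_DIR matters: with GICC_CTLR.EOImode = 1 the EOI is
>>>> split in two, and the deactivate write lands at offset 0x1000, well
>>>> past the first 256 bytes. A bare-metal sketch (readl/writel/handle
>>>> are hypothetical helpers, offsets per the GICv2 spec):
>>>>
>>>>     #define GICC_IAR   0x00c   /* acknowledge interrupt */
>>>>     #define GICC_EOIR  0x010   /* priority drop */
>>>>     #define GICC_DIR   0x1000  /* deactivate, only with EOImode = 1 */
>>>>
>>>>     uint32_t irq = readl(gicc_base + GICC_IAR);  /* ack */
>>>>     writel(irq, gicc_base + GICC_EOIR);          /* drop priority */
>>>>     handle(irq);                                 /* service it */
>>>>     writel(irq, gicc_base + GICC_DIR);           /* deactivate */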
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
>>>
>>> Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
>>>
>>> (typos fixable on commit).
>>
>> Thank you!
>>
>>>> ---
>>>>     This patch is a good candidate to backport for Xen 4.6 - 4.4.
>>>>     Without it a guest relying on the DT can't use GICC_DIR.
>>>
>>> Noted, but just to check: This patch (and none of the other fixes in
>>> this series) is all that is required for a guest to be able to use
>>> GICC_DIR, right?
>>
>> Correct. BTW, I forgot to mention that this patch may not apply cleanly
>> on Xen 4.4 as we rearranged the guest memory address space in Xen 4.5.
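>>
>> For context, the fix itself is essentially a one-constant change in the
>> guest memory layout (a sketch from memory; exact names and values may
>> differ between releases):
>>
>>     /* xen/include/public/arch-arm.h (sketch) */
>>     -#define GUEST_GICC_SIZE   0x00000100ULL  /* 256 bytes: too small */
>>     +#define GUEST_GICC_SIZE   0x00002000ULL  /* 8KB, per the GICv2 spec */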
> 
> I'd be inclined not to bother with it for 4.4 at this juncture then.

I'm fine with that.

Regards,

-- 
Julien Grall
