
Re: [Xen-devel] [PATCH v2 08/15] xen/arm: vgic-v3: Emulate correctly the re-distributor



Hi Ian,

On 02/02/15 15:59, Ian Campbell wrote:
> On Thu, 2015-01-29 at 18:25 +0000, Julien Grall wrote:
>> There is a one-to-one mapping between each re-distributors and processors.
>> Each re-distributors can be accessed by any processor at any time. For
>> instance during the initialization of the GIC, the drivers will browse
>> the re-distributor to find the one associated to the current processor
>> (via GICR_TYPER). So each re-distributor has its own MMIO region.
>>
>> The current implementation of the vGICv3 emulation assumes that the
>> re-distributor region is banked. Therefore, the processor won't be able
>> to access other re-distributor. While this is working fine for Linux, a
>> processor will only access GICR_TYPER to find the associated re-distributor,
>> we should have a correct implementation for the other operating system.
> 
> You could state something stronger here, which is that it is wrong and
> should be fixed as a matter of principle. The fact that we happen to get
> away with it for some versions of Linux is an aside (although worth
> mentioning)

I will rework the paragraph.

>>
>> All emulated registers of the re-distributors take a vCPU in parameter
>> and necessary lock. Therefore concurrent access is already properly handled.
>>
>> The missing bit is retrieving the right vCPU following the region accessed.
>> Retrieving the right vCPU could be slow, so it has been divided in 2 paths:
>>     - fast path: The current vCPU is accessing its own re-distributor
>>     - slow path: The current vCPU is accessing an other re-distributor
> 
> "another".
> 
>>
>> As the processor needs to initialize itself, the former case is very
>> common. To handle the access quickly, the base address of the
>> re-distributor is computed and stored per-vCPU during the vCPU 
>> initialization.
>>
>> The latter is less common and more complicate to handle. The re-distributors
>> can be spread accross multiple regions in the memory.
> 
> "across"
> 
>> +    /*
>> +     * Find the region where the re-distributor lives. For this purpose,
>> +     * we look one region ahead as only MMIO range for redistributors
>> +     * traps here.
>> +     * Note: We assume that the region are ordered.
> 
> You could also check base+size vs gpa to avoid this limitation.

IMHO, this limitation is harmless. It would only matter for DOM0, and
only if the device tree doesn't list the regions in order.

AFAICT, we have a similar limitation for the memory regions, although I
could sort them when we build DOM0.
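
For illustration, a lookup using the base+size check you suggest could
look roughly like this (structure and field names below are only
assumptions for the sketch, not the actual patch code):

/*
 * Sketch only: find the re-distributor region containing 'gpa' with an
 * explicit bounds check, so the regions would not need to be sorted.
 */
static const struct vgic_rdist_region *
vgic_v3_find_rdist_region(struct domain *d, paddr_t gpa)
{
    unsigned int i;

    for ( i = 0; i < d->arch.vgic.nr_regions; i++ )
    {
        const struct vgic_rdist_region *r = &d->arch.vgic.rdist_regions[i];

        /* Check base + size instead of relying on region ordering */
        if ( gpa >= r->base && gpa < (r->base + r->size) )
            return r;
    }

    return NULL;
}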

>> +
>> +    /*
>> +     * Note: We are assuming that d->vcpu[vcpu_id] is never NULL. If
>> +     * it's the case, the guest will receive a data abort and won't be
>> +     * able to boot.
> 
> Is cpu hotplug a factor here? Do we support guests booting with offline
> cpus yet?

The "problem" is not because of CPU hotpluging. XEN_DOMCTL_max_vcpus
doesn't allow the change of the number of vCPUs. AFAICT, this won't be
supported (see comments within the code).

But, the domctl may not set a vCPU if it's fail to initialize it. So in
theory it would be possible to have d->vcpu[vcpu_id] == NULL. But in
practice, the toolstack won't continue if one of the VCPU has not been
allocated.

I felt it was good to mention it for documentation purpose.
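
For reference, a defensive check along the lines of that comment could
look roughly like this (purely illustrative, assuming a failed lookup
makes the access fault as the comment describes):

/*
 * Illustrative only: bail out if the vCPU was never allocated, so the
 * access is treated as a fault and the guest receives a data abort.
 */
struct vcpu *v = d->vcpu[vcpu_id];

if ( unlikely(v == NULL) )
{
    gdprintk(XENLOG_ERR, "No vCPU %u for re-distributor access\n", vcpu_id);
    return 0;
}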

> 
>> +    /*
>> +     * Safety check mostly for DOM0. It's possible to have more vCPU
>> +     * than the number of physical CPU. Maybe we should deny this case?
> 
> In general it's allowed, if a bit silly. Mainly for e.g. people working
> on PV spinlock code who want to force contention... Unlikely for dom0 I
> suppose.

We don't control the DOM0 memory layout, so in practice DOM0 can't have
more vCPUs than the hardware provides re-distributors for, because
every re-distributor has its own MMIO region.

It's different for guests, as we control their memory layout.
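
In other words the safety check is just a bound on the computed index,
roughly (sketch only):

/*
 * Sketch: for DOM0 the layout comes from the hardware, so the regions
 * may describe more re-distributors than the domain has vCPUs.
 */
if ( vcpu_id >= d->max_vcpus )
    return 0;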

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

