Re: [Xen-devel] [RFC PATCH 1/8] x86/domctl: introduce a pair of hypercall to set and get cpu topology
On Tue, Jan 09, 2018 at 11:47:54PM +0000, Andrew Cooper wrote:
>On 08/01/18 04:01, Chao Gao wrote:
>> Define interface, structures and hypercalls for toolstack to build
>> cpu topology and for guest that will retrieve it [1].
>> Two subop hypercalls introduced by this patch:
>> XEN_DOMCTL_set_cpu_topology to define cpu topology information per domain
>> and XENMEM_get_cpu_topology to retrieve cpu topology information.
>>
>> [1]: during guest creation, those information helps hvmloader to build ACPI.
>>
>> Signed-off-by: Chao Gao <chao.gao@xxxxxxxxx>
>
>I'm sorry, but this going in the wrong direction.  Details like this
>should be contained and communicated exclusively in the CPUID policy.
>
>Before the spectre/meltdown fire started, I had a prototype series
>introducing a toolstack interface for getting and setting a full CPUID
>policy at once, rather than piecewise.  I will be continuing with this

Is the new interface able to set the CPUID policy for each vCPU rather
than, as it currently does, for each domain? Otherwise I don't see how
to set the APIC_ID for each vCPU except by introducing a new interface.
(A sketch of one conventional APIC ID packing follows this message.)

>work once the dust settles.
>
>In particular, we should not have multiple ways of conveying the same
>information, or duplication of the same data inside the hypervisor.
>
>If you rearrange your series to put the struct cpuid_policy changes
>first, then patch 2 will become far more simple.  HVMLoader should
>derive its topology information from the CPUID instruction, just as is
>expected on native hardware.

Good point. It seems that in HVMLoader the BSP should boot the APs in a
broadcast fashion, collect the topology information via CPUID, and then
build the MADT/SRAT. (See the second sketch below for how the topology
can be recovered from CPUID leaf 0xB.)

Thanks
Chao
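The per-vCPU APIC_ID question above is essentially about where the topology
bits live in the APIC ID. Below is a purely illustrative sketch, not code
from the RFC patch or from Xen: the function and parameter names are
hypothetical, and it simply shows one conventional way a per-vCPU APIC ID
could be packed from a domain-wide description of threads per core and
cores per socket, assuming power-of-two field widths as on real hardware.

```c
#include <stdint.h>

/* Number of bits needed to encode 'n' distinct values (hypothetical helper). */
static unsigned int id_bits(unsigned int n)
{
    unsigned int bits = 0;

    while ((1u << bits) < n)
        bits++;
    return bits;
}

/*
 * Illustrative only: pack thread/core/socket indices derived from a linear
 * vCPU id into an APIC ID, with the thread index in the lowest bits.
 */
static uint32_t vcpu_to_apic_id(uint32_t vcpu_id,
                                uint32_t threads_per_core,
                                uint32_t cores_per_socket)
{
    uint32_t thread_bits = id_bits(threads_per_core);
    uint32_t core_bits   = id_bits(cores_per_socket);

    uint32_t thread = vcpu_id % threads_per_core;
    uint32_t core   = (vcpu_id / threads_per_core) % cores_per_socket;
    uint32_t socket = vcpu_id / (threads_per_core * cores_per_socket);

    return (socket << (core_bits + thread_bits)) |
           (core << thread_bits) |
           thread;
}
```

With such a packing, a per-domain topology description is enough to derive
every vCPU's APIC ID, which is one way the per-vCPU question could be
sidestepped; whether the actual series does this is not shown here.

For the CPUID-based approach Andrew suggests and Chao sketches above, the
relevant mechanism is the extended topology leaf (CPUID leaf 0xB): it
reports the x2APIC ID in EDX and, per sub-leaf, the shift width in
EAX[4:0] and the level type in ECX[15:8] (1 = SMT, 2 = Core). The
following is a minimal userspace sketch (not HVMLoader code) of recovering
thread/core/package IDs that way, using GCC's <cpuid.h> helpers.

```c
#include <stdint.h>
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    uint32_t eax, ebx, ecx, edx;
    uint32_t smt_shift = 0, core_shift = 0, x2apic_id = 0;

    if (__get_cpuid_max(0, NULL) < 0xb) {
        fprintf(stderr, "CPUID leaf 0xB not supported\n");
        return 1;
    }

    /* Walk the sub-leaves of leaf 0xB until an invalid level is returned. */
    for (uint32_t level = 0; ; level++) {
        __cpuid_count(0xb, level, eax, ebx, ecx, edx);

        uint32_t type = (ecx >> 8) & 0xff;
        if (type == 0)          /* invalid level: enumeration finished */
            break;

        if (type == 1)          /* SMT level: bits to shift out the thread id */
            smt_shift = eax & 0x1f;
        else if (type == 2)     /* Core level: bits to shift out thread+core */
            core_shift = eax & 0x1f;

        x2apic_id = edx;        /* same value on every valid sub-leaf */
    }

    if (core_shift < smt_shift) /* no Core level reported; degenerate case */
        core_shift = smt_shift;

    printf("x2APIC ID: %u\n", x2apic_id);
    printf("  thread  = %u\n", x2apic_id & ((1u << smt_shift) - 1));
    printf("  core    = %u\n", (x2apic_id >> smt_shift) &
                               ((1u << (core_shift - smt_shift)) - 1));
    printf("  package = %u\n", x2apic_id >> core_shift);
    return 0;
}
```

In an HVMLoader-style flow, each AP would run this kind of enumeration
after being woken by the BSP and report its APIC ID back, after which the
BSP can emit MADT/SRAT entries; the sketch above only shows the CPUID
decoding step.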