
Re: [Xen-devel] XSAVE save/restore shortcomings



>>> On 05.09.13 at 12:53, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> On 30/08/2013 12:11, Jan Beulich wrote:
>> I'd like to make clear that the change presented is going to handle
>> only the most trivial cases (where any new xsave state addition
>> adds to the end of the save area). This is an effect of a more
>> fundamental flaw in the original design (which the patch doesn't try
>> to revise, as it's not clear to me yet what the best approach here is):
>> While the XSAVE hardware specification allows for each piece to be
>> individually savable/restorable, both PV and HVM save/restore
>> assume a single monolithic blob. That is already going to be a
>> problem: AVX-512 as well as MPX conflict with LWP. And obviously
>> it can't be excluded that we'll see CPUs supporting AVX-512 but not
>> MPX, as well as guests using the former but not the latter; neither
>> case can be dealt with under the current design.
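
To illustrate the per-component capability the hardware offers, here is a
minimal, hypothetical user space sketch (assuming GCC inline assembly on
x86 with XSAVE enabled by the OS; the mask and buffer size below are
examples only, not what Xen does):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Save only the state components named in 'mask' (the CPU further ANDs
 * this with XCR0) into a 64-byte aligned area; XSAVE takes the
 * requested-feature bitmap in EDX:EAX. */
static void xsave_components(void *area, uint64_t mask)
{
    __asm__ volatile("xsave (%0)"
                     : /* no outputs */
                     : "r" (area),
                       "a" ((uint32_t)mask),
                       "d" ((uint32_t)(mask >> 32))
                     : "memory");
}

int main(void)
{
    /* 4096 bytes is plenty for the x87/SSE/AVX example mask below;
     * real code would size the area from CPUID leaf 0Dh instead. */
    void *area = aligned_alloc(64, 4096);

    if (!area)
        return 1;
    memset(area, 0, 4096);       /* start with a clean XSAVE header */
    xsave_components(area, 0x7); /* x87 + SSE + AVX only */
    free(area);
    return 0;
}
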
> 
> This should not be a problem; the manual says "The layout of the
> XSAVE/XRSTOR save area is fixed and may contain non-contiguous
> individual save areas.  The XSAVE/XRSTOR save area is not compacted if
> some features are not saved or are not supported by the processor and/or
> by system software".  Note "by the processor": the way I read this, size
> may vary (which is why CPUID.0Dh exists, basically), but offsets are
> guaranteed to be constant.

Then why would there be a way to retrieve these offsets via CPUID?
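
For reference, a minimal sketch of what I mean (user space, assuming GCC's
<cpuid.h>; sub-leaves 2 and upwards of leaf 0Dh report each component's
size and offset in the standard, non-compacted layout):

#include <stdio.h>
#include <stdint.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Sub-leaf 0: EDX:EAX = supported xstate feature mask, EBX = size
     * needed for the currently enabled features, ECX = maximum size. */
    if (!__get_cpuid_count(0x0d, 0, &eax, &ebx, &ecx, &edx))
        return 1;

    uint64_t xfeatures = ((uint64_t)edx << 32) | eax;
    printf("xfeature mask %#llx, max area size %u bytes\n",
           (unsigned long long)xfeatures, ecx);

    /* Sub-leaves 2..62: EAX = size, EBX = offset of that component in
     * the non-compacted XSAVE area. */
    for (unsigned int i = 2; i < 63; i++) {
        if (!(xfeatures & (1ull << i)))
            continue;
        __get_cpuid_count(0x0d, i, &eax, &ebx, &ecx, &edx);
        printf("component %2u: size %u, offset %u\n", i, eax, ebx);
    }
    return 0;
}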

> Thus the only problem is LWP.  Given AMD's current non-involvement in
> x86, it may be simpler to avoid the problem completely by not
> implementing virtual LWP...

For which it is too late: both Xen 4.2 and 4.3 already have LWP support.

Jan

