
Re: [Xen-devel] [PATCH 6/6] x86emul: simplify FPU handling asm() constraints



>>> On 07.12.16 at 16:16, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 06/12/16 14:14, Jan Beulich wrote:
>> The memory clobber is rather harsh here.
> 
> Does removing it actually make a difference?  I can't spot anything
> which could reasonably be reordered around the asm() blocks.

It's not so much the re-ordering as the potential dropping of
something that was cached in a register. Indeed there was no
change to the generated code at all with current gcc, but the
compiler gets better over time, and we shouldn't restrict it
unduly.
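
To make the difference concrete, here is a minimal sketch (not the
actual emulator asm(); the "fwait" instruction, the helper, and the
two-field struct are placeholders standing in for the real code):

    #include <stdint.h>

    struct fpu_insn_ctxt {
        uint8_t insn_bytes;
        uint8_t exn_raised;
    };

    static void demo(struct fpu_insn_ctxt *fic)
    {
        /* With a blanket clobber the compiler has to assume any memory
         * may have changed, so values it had cached in registers must
         * be dropped and re-loaded afterwards. */
        asm volatile ( "fwait" : : "m" (fic->insn_bytes) : "memory" );

        /* Declaring the whole object as an in/out operand instead only
         * tells the compiler that *fic may change; unrelated values can
         * stay cached in registers. */
        asm volatile ( "fwait" : "+m" (*fic) );
    }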

>> However, fic.exn_raised may be
>> modified as a side effect, so we need to let the compiler know that all
>> of "fic" may be changed (preventing it from moving around accesses to
>> the exn_raised field).
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -784,7 +784,7 @@ do {
>>  })
>>  
>>  struct fpu_insn_ctxt {
>> -    uint8_t insn_bytes;
>> +    uint8_t insn_bytes; /* Must be first! */
> 
> And one single byte.  The compiler would previously have caught an
> accidental breaking of this requirement.

There was no such requirement before. Of course we could add an
immediate operand to all the asm()s, specifying the offsetof(). Then
again, we already have a hidden dependency on the field's size anyway.
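
Purely for illustration, such an asm() might look like this
(hypothetical; the helper name and the byte store are made up, and the
"movb" still hard-codes the size dependency mentioned above):

    #include <stddef.h>
    #include <stdint.h>

    struct fpu_insn_ctxt {
        uint8_t insn_bytes;   /* currently assumed to be at offset 0 */
        uint8_t exn_raised;
    };

    static inline void set_insn_bytes(struct fpu_insn_ctxt *fic, uint8_t n)
    {
        /* The offsetof() immediate makes the field's position explicit
         * in the template, instead of silently assuming offset 0. */
        asm volatile ( "movb %1, %c3(%2)"
                       : "+m" (*fic)
                       : "q" (n), "r" (fic),
                         "i" (offsetof(struct fpu_insn_ctxt, insn_bytes)) );
    }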

> As an alternative, how about using ACCESS_ONCE() to read exn_raised?  It
> would allow you to drop the memory clobber and also not generalise the
> fic.insn_bytes memory parameter to fic.

There's no ACCESS_ONCE() in the x86 code, let alone in what the
emulator code can use. Nor would what you suggest allow the memory
clobber to be dropped legitimately.
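
For reference, what is conventionally meant is the Linux-style volatile
cast below; it forces a single access to the named object, but says
nothing about any other memory the asm() may write:

    /* Linux-style definition, shown for reference only - not present
     * in this code base. */
    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))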

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

