
Re: [Xen-devel] [PATCH RFC] x86/emulate: implement hvmemul_cmpxchg() with an actual CMPXCHG



On 03/28/2017 01:03 PM, Jan Beulich wrote:
>>>> On 28.03.17 at 11:14, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> I'm not sure that the RETRY model is what the guest OS expects. AFAIK, a
>> failed CMPXCHG should happen just once, with the proper registers and ZF
>> set. The guest surely expects neither that the instruction resume until
>> it succeeds, nor that some hidden loop goes on for an indeterminate
>> amount of time until a CMPXCHG succeeds.
> 
> The guest doesn't observe the CMPXCHG failing - RETRY leads to
> the instruction being restarted instead of completed.

Indeed, but it works differently with hvm_emulate_one_vm_event(), where
RETRY currently causes the instruction to be re-executed by the guest
(actually re-executed, not just re-emulated).
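
For reference, the architectural behaviour under discussion, as a plain C
sketch of the SDM semantics of CMPXCHG (illustration only, not actual Xen
code; 'accumulator' stands for AL/AX/EAX/RAX depending on operand size):

    static bool cmpxchg_semantics(unsigned long *dst, unsigned long src,
                                  unsigned long *accumulator)
    {
        bool zf;

        if ( *dst == *accumulator )
        {
            *dst = src;             /* success: store the new value */
            zf = true;
        }
        else
        {
            *accumulator = *dst;    /* failure: visible to the guest */
            zf = false;
        }

        return zf;                  /* ZF reflects the comparison result */
    }

A failed compare is architecturally visible, exactly once (ZF clear,
memory value loaded into the accumulator), which is what the guest
expects to see instead of a hidden retry loop.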

> There is a severe downside to MMIO accesses with this model
> though: unlike for RAM, we shouldn't be reading (and even
> less so writing) the memory location more than once.
> 
>> The picture is further complicated by the two-part handling of
>> instructions in x86_emulate(). For example, for CMPXCHG we have:
>> [...]
>> I now see that this is why using a spinlock only around the writeback
>> part did not solve the issue: both the compare part and the writeback
>> part need to be part of the same atomic operation - the lock needs to
>> be acquired before the cmp / ZF part.
> 
> Not exactly: RMW insns require the lock to be acquired ahead of
> the memory read, and dropped after the memory write.

Indeed, that's what I meant. I should have been more precise.
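
In code terms, something like the following is what this amounts to (a
rough sketch only - smp_lock()/smp_unlock() and read_guest()/write_guest()
are placeholder names for illustration, not existing interfaces):

    /*
     * Sketch: emulating a LOCKed CMPXCHG atomically w.r.t. other vCPUs.
     * The lock spans the whole read-modify-write sequence.
     */
    static bool emulate_locked_cmpxchg(paddr_t gpa, unsigned long expected,
                                       unsigned long newval,
                                       unsigned long *accumulator,
                                       unsigned int bytes)
    {
        unsigned long old;
        bool zf;

        smp_lock(gpa);                      /* taken ahead of the memory read */

        old = read_guest(gpa, bytes);       /* compare part sees a stable value */
        if ( old == expected )
        {
            write_guest(gpa, newval, bytes); /* writeback part */
            zf = true;
        }
        else
        {
            *accumulator = old;             /* failed compare updates rAX */
            zf = false;
        }

        smp_unlock(gpa);                    /* dropped after the memory write */

        return zf;
    }

This way the compare and the writeback can't be separated by a store from
another vCPU, which is exactly what went wrong with the lock placed around
the writeback alone.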

>> Opinions and suggestions are appreciated. If I'm not mistaken, it looks
>> like the smp_lock design is the better solution to the problem.
> 
> Andrew has told me that the re-work of how we do memory accesses
> in the emulator is pretty high on his priority list now. Whether we'd
> want to introduce an interim solution therefore depends on how long it
> may take to reach the only ultimately correct solution (no longer
> acting on intermediate copies of the data coming from guest memory,
> which is only correct for plain reads and plain writes).

Great! I understand; I'll switch to the smp_lock() patch as soon as the
schedule allows, and we'll see how that fits into the timeframe.


Thanks,
Razvan




