
Re: [Xen-devel] [PATCH v3 21/25] x86emul: add read-modify-write hook



>>> On 05.02.18 at 15:21, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 05/02/18 08:22, Jan Beulich wrote:
>>>>> On 02.02.18 at 17:13, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 07/12/17 14:16, Jan Beulich wrote:
>>>> In order to correctly emulate read-modify-write insns, especially
>>>> LOCKed ones, we should not issue reads and writes separately. Use a
>>>> new hook to combine both, and no longer uniformly read the memory
>>>> destination up front. Instead, DstMem opcodes without Mov now need
>>>> to do the read in their respective case blocks.
>>>>
>>>> Also strip bogus _ prefixes from macro parameters, where doing so
>>>> only affects lines which are being changed anyway.
>>>>
>>>> In the test harness, besides some re-ordering to facilitate running
>>>> a few tests twice (once without and a second time with the .rmw hook
>>>> in place), tighten a few EFLAGS checks and add a test for NOT with a
>>>> memory operand (in particular to verify EFLAGS don't get altered
>>>> there).
>>>>
>>>> For now, use of the hook is optional for callers; eventually we may
>>>> want to consider making it mandatory.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>> ---
>>>> v3: New.
>>>> ---
>>>> TBD: Do we want to also support non-lockable RMW insns in the new hook
>>>>      and helper (SHL & friends, SHLD, SHRD)?
>>> What would this achieve?  I suppose it would avoid a double pagewalk.
>> Well - it's mostly the implication of this double walk that I'm
>> concerned about: the first walk is for a read, which might succeed
>> while the second (write) walk fails. We'd then have done a read
>> which should never have occurred. But anyway, this would be
>> follow-up work only, nothing to be added to the patch here.
> 
> In some copious-free-time future, with the emulation changes discussed
> at summit, we'd want to issue a single writeable translation request.
> 
> On that basis then, we should extend the use of this hook to all RMW
> instructions, irrespective of locking.

Added to my todo list. Before putting together the patch here I had
actually considered introducing map/unmap hooks, but concluded that
this would be quite a bit more intrusive.
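
To make the trade-off concrete: the point of the combined hook is that
the caller sees the whole read-modify-write as one operation on the
destination, rather than a ->read() followed by a ->write(). Below is a
minimal sketch of the two shapes - illustrative only, with all names
invented here rather than taken from x86_emulate.h:

    /* Hedged sketch - not the actual Xen interface. */
    #include <stdint.h>

    enum rmw_op { RMW_ADD, RMW_AND, RMW_NOT /* ... */ };

    struct emul_ops {
        /*
         * Split hooks: two separate translations of the destination,
         * and a LOCKed insn can't be carried out atomically this way.
         */
        int (*read)(unsigned long addr, void *val, unsigned int bytes);
        int (*write)(unsigned long addr, const void *val,
                     unsigned int bytes);

        /*
         * Combined hook: a single writeable translation, with the
         * operation applied atomically by the callee.
         */
        int (*rmw)(unsigned long addr, unsigned int bytes,
                   enum rmw_op op, unsigned long operand,
                   uint32_t *eflags);
    };

The map/unmap alternative mentioned above would instead hand the
emulator a writeable mapping to operate on directly, which is why it
would touch far more of the code.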

>>>> -    case 0x38 ... 0x3d: cmp: /* cmp */
>>>> +    case 0x38: case 0x39: cmp: /* cmp reg,mem */
>>>> +        if ( ops->rmw && dst.type == OP_MEM &&
>>>> +             (rc = read_ulong(dst.mem.seg, dst.mem.off, &dst.val,
>>>> +                              dst.bytes, ctxt, ops)) != X86EMUL_OKAY )
>>> Why does rmw matter here? cmp doesn't write to its operands.
>> The read of the "destination" operand is skipped when a ->rmw hook
>> is present (see the change to the generic destination operand
>> processing ahead of the main switch()). It therefore needs to be
>> carried out here now (and elsewhere, when what is nominally the
>> destination operand really is a second source operand).
> 
> Oh, right.  I think this needs to be clearer.  Having two different
> behaviours depending on whether the caller provides an rmw hook is very
> subtle.

Well, if you have suggestions as to how to make this clearer,
I'm all ears.
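
Perhaps a condensed illustration helps. This is a self-contained,
deliberately simplified sketch with invented helper names - the real
code differs in detail, but the fork on ops->rmw is the same:

    #include <stdbool.h>
    #include <stdio.h>

    struct ops {
        int (*rmw)(void); /* non-NULL => caller does read-modify-write */
    };

    static void read_dest(const char *where)
    {
        printf("destination read issued by %s\n", where);
    }

    /* Generic destination setup, ahead of the main switch(). */
    static void setup_dst(const struct ops *ops, bool dst_is_mem)
    {
        if ( dst_is_mem && !ops->rmw )
            read_dest("generic prologue"); /* old, uniform behaviour */
        /* With an rmw hook, the read is deferred to the case blocks. */
    }

    /*
     * CMP case block: "dst" is really a second source, so when the
     * generic read was skipped it has to be carried out here.
     */
    static void case_cmp(const struct ops *ops, bool dst_is_mem)
    {
        if ( ops->rmw && dst_is_mem )
            read_dest("cmp case block");
        /* ... compute flags; no write-back for cmp ... */
    }

    static int dummy_rmw(void) { return 0; }

    int main(void)
    {
        struct ops without = { .rmw = NULL };
        struct ops with = { .rmw = dummy_rmw };

        setup_dst(&without, true); case_cmp(&without, true);
        setup_dst(&with, true);    case_cmp(&with, true);
        return 0;
    }

Either way exactly one read happens; what changes is where it gets
issued from.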

> The particularly confusing thing here is that cmp isn't an rmw
> instruction.  I presume the oddity comes about because cmp encodes the
> memory operand in dst, even though the operand doesn't get written to?

Sort of (formally, source and destination aren't spelled out at the
ModR/M byte level). It's really our operand encoding scheme which
puts us in this situation: we can't encode more than one source
operand. And for the insn flavors with ModR/M and an immediate, the
table entries are even shared between CMP and the 7 other opcodes,
all of which write to the register or memory operand.
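
To illustrate the sharing (simplified - the real decode tables carry
far more state than an operation name): for the ModR/M+immediate forms
0x80..0x83, one table entry covers all eight /reg encodings, and only
/7 (CMP) leaves the r/m operand unwritten.

    #include <stdio.h>

    /* Group-1 operations, selected by the ModR/M reg field. */
    static const char *const grp1[8] = {
        "add", "or", "adc", "sbb", "and", "sub", "xor", "cmp"
    };

    int main(void)
    {
        for ( unsigned int reg = 0; reg < 8; reg++ )
            printf("/%u %-3s: %s\n", reg, grp1[reg],
                   reg == 7 ? "reads r/m, writes EFLAGS only"
                            : "reads and writes r/m");
        return 0;
    }

So whether the "destination" gets written is a property of the
individual operation, not of the shared table entry - hence the
special-casing for CMP.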

Jan



 

