
Ping²: [PATCH] x86emul: de-duplicate scatters to the same linear address



On 17.02.2021 09:32, Jan Beulich wrote:
> On 05.02.2021 12:28, Jan Beulich wrote:
>> On 05.02.2021 11:41, Andrew Cooper wrote:
>>> On 10/11/2020 13:26, Jan Beulich wrote:
>>>> The SDM specifically allows earlier writes to fully overlapping
>>>> ranges to be dropped. If a guest issued such overlapping writes,
>>>> hvmemul_phys_mmio_access() would crash it when varying data was
>>>> written to the same address. Detect overlaps early, as doing so in
>>>> hvmemul_{linear,phys}_mmio_access() would be quite a bit more
>>>> difficult.
>>>
>>> Are you saying that there is currently a bug if a guest does encode such
>>> an instruction, and we emulate it?
>>
>> That is my take on it, yes.
>>
>>>> Note that due to cache slot use being linear address based, there's no
>>>> similar issue with multiple writes to the same physical address (mapped
>>>> through different linear addresses).
>>>>
>>>> Since this requires an adjustment to the EVEX Disp8 scaling test,
>>>> correct a comment there at the same time.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>> ---
>>>> TBD: The SDM isn't entirely unambiguous about the faulting behavior in
>>>>      this case: If a fault would need delivering on the earlier slot
>>>>      despite the write getting squashed, we'd have to call ops->write()
>>>>      with size set to zero for the earlier write(s). However,
>>>>      hvm/emulate.c's handling of zero-byte accesses extends only to the
>>>>      virtual-to-linear address conversions (and raising of involved
>>>>      faults), so in order to also observe #PF, changes to that
>>>>      logic would then be needed as well. Can we live with a
>>>>      possible misbehavior here?
>>>
>>> Do you have a chapter/section reference?
>>
>> The instruction pages. They say in particular
>>
>> "If two or more destination indices completely overlap, the “earlier”
>>  write(s) may be skipped."
>>
>> and
>>
>> "Faults are delivered in a right-to-left manner. That is, if a fault
>>  is triggered by an element and delivered ..."
>>
>> To me this may or may not mean the skipping of indices includes the
>> skipping of faults (which a later element then would raise anyway).
> 
> Does the above address your concerns / questions? If not, what else
> do I need to provide?

I have to admit that I find it quite disappointing that this bug fix
has missed 4.15. Committing without acks feels even less right here
than elsewhere, but again I'm intending to commit this - if need be
without any acks - once the tree is fully open again. As a bug fix
it'll want backporting as well.
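
For reference, the de-duplication idea is roughly the following. This
is only a minimal, self-contained sketch under made-up names
(scatter_elem, later_elem_overlaps, do_write), not the actual x86emul
code; it leaves out masking details and fault delivery, and the comment
in the skip path merely records the open question from the TBD above.

/*
 * Standalone sketch of de-duplicating scatter writes to the same
 * linear address: before performing element i's write, scan the
 * remaining (later) elements and skip the write if a later one fully
 * overlaps the same linear range.  All names are illustrative only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct scatter_elem {
    uint64_t linear;   /* linear destination address of this element */
    uint32_t size;     /* element size in bytes (4 or 8 for scatters) */
    uint64_t data;     /* value to be stored */
    bool     masked;   /* true if the element is masked off (no write) */
};

/* Would a later, unmasked element fully overlap element i's range? */
static bool later_elem_overlaps(const struct scatter_elem *elems,
                                unsigned int nr, unsigned int i)
{
    unsigned int j;

    for ( j = i + 1; j < nr; ++j )
        if ( !elems[j].masked &&
             elems[j].linear == elems[i].linear &&
             elems[j].size == elems[i].size )
            return true;

    return false;
}

/* Stand-in for the real write hook (ops->write()): just log. */
static void do_write(const struct scatter_elem *e)
{
    printf("write %u bytes @ %#llx <- %#llx\n",
           e->size, (unsigned long long)e->linear,
           (unsigned long long)e->data);
}

int main(void)
{
    /* Elements 0 and 2 fully overlap: element 0's write may be dropped. */
    struct scatter_elem elems[] = {
        { .linear = 0x1000, .size = 4, .data = 0x11 },
        { .linear = 0x2000, .size = 4, .data = 0x22 },
        { .linear = 0x1000, .size = 4, .data = 0x33 },
    };
    unsigned int i, nr = sizeof(elems) / sizeof(elems[0]);

    for ( i = 0; i < nr; ++i )
    {
        if ( elems[i].masked )
            continue;
        if ( later_elem_overlaps(elems, nr, i) )
        {
            /*
             * Permitted by the SDM: the "earlier" write may be skipped.
             * Open question (see TBD above): whether a zero-byte probe
             * of the translation would still be needed here so a #PF
             * gets observed in element order.
             */
            printf("skip  %u bytes @ %#llx (later element overlaps)\n",
                   elems[i].size, (unsigned long long)elems[i].linear);
            continue;
        }
        do_write(&elems[i]);
    }

    return 0;
}

Compiled standalone this simply logs which writes would be carried out
and which would be dropped; in the patch the same decision is of course
taken on the decoded register/index state instead.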

Jan



 

