[Xen-devel] [PATCH] x86emul: handle address wrapping for VMASKMOVP{S,D}
I failed to recognize the need to mirror the changes done by 7869e2bafe
("x86emul/fuzz: add rudimentary limit checking") into the earlier-written
but later-committed 2fe43d333f ("x86emul: support remaining AVX
insns"): behavior here is the same as for multi-part reads or writes.
Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
There's another issue here, but I'll first have to think about possible
(preferably non-intrusive) solutions: an access crossing a page
boundary and having
- a set mask bit corresponding to an element fully living in the first
page,
- one or more clear mask bits corresponding to the initial elements on
the second page,
- another, higher mask bit being set
would result in a wrong CR2 value being reported if the access to the
second page faults (it would point to the start of the page instead of
the element actually being accessed). Neither splitting the access here
into multiple ones nor uniformly passing a byte or word enable mask
into ->write() seems very desirable.
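
To make the scenario concrete, here is a hypothetical stand-alone C
walk-through for a 256-bit VMASKMOVPS (eight 4-byte elements); the base
address and mask value are made up purely for illustration:

#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
    unsigned long base = 0x10ff8; /* elements 0-1 land in the first page */
    unsigned int mask = 0x91;     /* mask bits 0, 4 and 7 set */

    for ( unsigned int i = 0; i < 8; i++ )
    {
        if ( !(mask & (1u << i)) )
            continue;
        unsigned long ea = base + i * 4;
        printf("element %u at %#lx (page %#lx)\n",
               i, ea, ea & ~(PAGE_SIZE - 1));
    }

    /*
     * A single ->write() covering the whole region would, on a fault in
     * the second page, report CR2 = 0x11000 (the page start), while the
     * first enabled element there actually lives at 0x11008 (element 4).
     */
    return 0;
}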
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7887,7 +7887,7 @@ x86_emulate(
     switch ( d & SrcMask )
     {
     case SrcMem:
-        rc = ops->read(ea.mem.seg, ea.mem.off + first_byte,
+        rc = ops->read(ea.mem.seg, truncate_ea(ea.mem.off + first_byte),
                        (void *)mmvalp + first_byte, op_bytes,
                        ctxt);
         if ( rc != X86EMUL_OKAY )
@@ -7970,7 +7970,7 @@ x86_emulate(
     else
     {
         fail_if(!ops->write);
-        rc = ops->write(dst.mem.seg, dst.mem.off + first_byte,
+        rc = ops->write(dst.mem.seg, truncate_ea(dst.mem.off + first_byte),
                         !state->simd_size ? &dst.val
                                           : (void *)mmvalp + first_byte,
                         dst.bytes, ctxt);