Re: [Xen-devel] [PATCH V2 1/3] xen/vm_access: Support for memory-content hiding
>>> On 26.06.15 at 10:22, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> On 06/25/2015 06:57 PM, Jan Beulich wrote:
>>>>> On 15.06.15 at 11:03, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> --- a/xen/arch/x86/hvm/emulate.c
>>> +++ b/xen/arch/x86/hvm/emulate.c
>>> @@ -578,6 +578,28 @@ static int hvmemul_read(
>>> container_of(ctxt, struct hvm_emulate_ctxt, ctxt));
>>> }
>>>
>>> +static int hvmemul_read_set_context(
>>> + enum x86_segment seg,
>>> + unsigned long offset,
>>> + void *p_data,
>>> + unsigned int bytes,
>>> + struct x86_emulate_ctxt *ctxt)
>>> +{
>>> + struct vcpu *curr = current;
>>> + unsigned int len;
>>> +
>>> + if ( !curr->arch.vm_event.emul_read_data )
>>> + return X86EMUL_UNHANDLEABLE;
>>> +
>>> + len = min_t(unsigned int,
>>> + bytes, curr->arch.vm_event.emul_read_data->size);
>>> +
>>> + if ( len )
>>> + memcpy(p_data, curr->arch.vm_event.emul_read_data->data, len);
>>
>> And the rest of the destination simply remains unmodified /
>> uninitialized?
>
> It does, yes. Should I memset() it to 0 or something else before the
> memcpy()?
That really depends on your intentions. It just seems (together
with the other cases) inconsistently handled at present. I see
you having three options:
- return actual memory contents for parts of the writes outside
the overridden range,
- return zero (or another preset value) for such parts,
- fail such operations.
What you shouldn't do is leave data potentially uninitialized (or
else you'd need to make sure that this doesn't result in leaking
hypervisor stack contents anywhere).
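As a rough, untested sketch of the second option (zero-fill), reusing the
names from the quoted hunk, the read handler could pad the remainder of the
destination before returning:

    static int hvmemul_read_set_context(
        enum x86_segment seg,
        unsigned long offset,
        void *p_data,
        unsigned int bytes,
        struct x86_emulate_ctxt *ctxt)
    {
        struct vcpu *curr = current;
        unsigned int len;

        if ( !curr->arch.vm_event.emul_read_data )
            return X86EMUL_UNHANDLEABLE;

        len = min_t(unsigned int,
                    bytes, curr->arch.vm_event.emul_read_data->size);

        if ( len )
            memcpy(p_data, curr->arch.vm_event.emul_read_data->data, len);

        /* Zero-fill whatever lies beyond the overridden range, so no
         * uninitialized data gets handed back to the emulator. */
        if ( len < bytes )
            memset((char *)p_data + len, 0, bytes - len);

        return X86EMUL_OKAY;
    }

Whichever of the three models gets picked, the same padding (or failure)
logic would then have to be applied to every *_set_context path.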
>>> @@ -891,14 +934,37 @@ static int hvmemul_rep_outs(
>>> !!(ctxt->regs->eflags & X86_EFLAGS_DF), NULL);
>>> }
>>>
>>> -static int hvmemul_rep_movs(
>>> +static int hvmemul_rep_outs_set_context(
>>> + enum x86_segment src_seg,
>>> + unsigned long src_offset,
>>> + uint16_t dst_port,
>>> + unsigned int bytes_per_rep,
>>> + unsigned long *reps,
>>> + struct x86_emulate_ctxt *ctxt)
>>> +{
>>> + struct vcpu *curr = current;
>>> + unsigned int safe_bytes;
>>> +
>>> + if ( !curr->arch.vm_event.emul_read_data )
>>> + return X86EMUL_UNHANDLEABLE;
>>> +
>>> + safe_bytes = min_t(unsigned int, bytes_per_rep,
>>> + curr->arch.vm_event.emul_read_data->size);
>>> +
>>> + return hvmemul_do_pio(dst_port, reps, safe_bytes, 0, IOREQ_WRITE,
>>> + !!(ctxt->regs->eflags & X86_EFLAGS_DF),
>>> + curr->arch.vm_event.emul_read_data->data);
>>
>> This isn't consistent with e.g. rep_movs below - you shouldn't
>> reduce the width of the operation.
>
> I agree, but to be honest I wasn't sure how best to go about this;
> it seems a bit more involved than the rest of the cases. I could perhaps
> allocate a bytes_per_rep-sized buffer, memset() it to zero and then copy
> safe_bytes from curr->arch.vm_event.emul_read_data->data to the
> beginning of it?
See above - you first need to define the model you want to follow,
and then you need to handle this uniformly everywhere.
>> Also - did I overlook where *reps gets forced to 1? If it's being
>> done elsewhere, perhaps worth an ASSERT()?
>
> *reps got forced to 1 in hvmemul_rep_stos_set_context() in V1, I took
> that out as per your comments, please see:
>
> http://lists.xen.org/archives/html/xen-devel/2015-05/msg01054.html
That's fine, but then you can't simply ignore it in the safe_bytes
calculation.
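To make that concrete: an untested sketch of the outs handler that keeps the
full width and takes *reps into account (assuming the zero-fill model above,
and using xzalloc_bytes() for a scratch buffer) might look like:

    static int hvmemul_rep_outs_set_context(
        enum x86_segment src_seg,
        unsigned long src_offset,
        uint16_t dst_port,
        unsigned int bytes_per_rep,
        unsigned long *reps,
        struct x86_emulate_ctxt *ctxt)
    {
        struct vcpu *curr = current;
        unsigned int safe_bytes;
        char *buf;
        int rc;

        if ( !curr->arch.vm_event.emul_read_data )
            return X86EMUL_UNHANDLEABLE;

        /* Zero-initialized buffer covering the whole repeated operation. */
        buf = xzalloc_bytes(bytes_per_rep * *reps);
        if ( !buf )
            return X86EMUL_UNHANDLEABLE;

        /* Only the overridden prefix carries the supplied data; the rest
         * stays zero, rather than shrinking the operation width. */
        safe_bytes = min_t(unsigned long, bytes_per_rep * *reps,
                           curr->arch.vm_event.emul_read_data->size);
        memcpy(buf, curr->arch.vm_event.emul_read_data->data, safe_bytes);

        rc = hvmemul_do_pio(dst_port, reps, bytes_per_rep, 0, IOREQ_WRITE,
                            !!(ctxt->regs->eflags & X86_EFLAGS_DF), buf);

        xfree(buf);

        return rc;
    }

Whether hvmemul_do_pio() can actually consume a buffer spanning all
repetitions this way would need checking; the point is only that the width
of each write stays bytes_per_rep and that *reps enters the calculation.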
>>> @@ -981,7 +1047,19 @@ static int hvmemul_rep_movs(
>>> */
>>> rc = hvm_copy_from_guest_phys(buf, sgpa, bytes);
>>> if ( rc == HVMCOPY_okay )
>>> + {
>>> + struct vcpu *curr = current;
>>> +
>>> + if ( unlikely(set_context) && curr->arch.vm_event.emul_read_data )
>>> + {
>>> + unsigned long safe_bytes = min_t(unsigned long, bytes,
>>> + curr->arch.vm_event.emul_read_data->size);
>>
>> The variable doesn't need to be unsigned long.
>
> bytes is unsigned long in the function above:
>
> unsigned long saddr, daddr, bytes;
>
> I think that's the only place where that is the case, but I assumed
> that whoever wrote the code knew better, so I followed suit. I'm happy to
> change both places though.
No, that's not the point. The point is that the result of
min(<unsigned int>, <unsigned long>) always fits in
unsigned int.
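I.e. (illustrative only):

    /* bytes may be unsigned long here, but its minimum with the
     * (unsigned int) ->size can never exceed UINT_MAX. */
    unsigned int safe_bytes = min_t(unsigned long, bytes,
                                    curr->arch.vm_event.emul_read_data->size);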
>>> @@ -1000,13 +1078,40 @@ static int hvmemul_rep_movs(
>>> return X86EMUL_OKAY;
>>> }
>>>
>>> -static int hvmemul_rep_stos(
>>> +static int hvmemul_rep_movs(
>>> + enum x86_segment src_seg,
>>> + unsigned long src_offset,
>>> + enum x86_segment dst_seg,
>>> + unsigned long dst_offset,
>>> + unsigned int bytes_per_rep,
>>> + unsigned long *reps,
>>> + struct x86_emulate_ctxt *ctxt)
>>> +{
>>> + return _hvmemul_rep_movs(src_seg, src_offset, dst_seg, dst_offset,
>>> + bytes_per_rep, reps, ctxt, 0);
>>> +}
>>> +
>>> +static int hvmemul_rep_movs_set_context(
>>> + enum x86_segment src_seg,
>>> + unsigned long src_offset,
>>> + enum x86_segment dst_seg,
>>> + unsigned long dst_offset,
>>> + unsigned int bytes_per_rep,
>>> + unsigned long *reps,
>>> + struct x86_emulate_ctxt *ctxt)
>>> +{
>>> + return _hvmemul_rep_movs(src_seg, src_offset, dst_seg, dst_offset,
>>> + bytes_per_rep, reps, ctxt, 1);
>>> +}
>>
>> Perhaps putting a flag hvmemul_ctxt would be better?
>
> Sorry, I don't understand the comment. Can you please clarify?
All these wrappers irritate me - a flag in hvmemul_ctxt would - afaict -
eliminate the need for all of them.
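As an untested illustration of the idea (field name is just a placeholder):
hvm_emulate_ctxt would carry the flag, set once by the caller, and the single
rep_movs implementation would test it instead of having per-callback wrappers:

    /* In struct hvm_emulate_ctxt: */
    uint8_t set_context;    /* placeholder: use vm_event-supplied data */

    /* Set once where the emulation kind is known, instead of switching
     * to a separate ops table full of *_set_context wrappers. */
    hvmemul_ctxt->set_context = 1;

    /* Inside the one hvmemul_rep_movs() implementation: */
    struct hvm_emulate_ctxt *hvmemul_ctxt =
        container_of(ctxt, struct hvm_emulate_ctxt, ctxt);

    if ( unlikely(hvmemul_ctxt->set_context) &&
         current->arch.vm_event.emul_read_data )
    {
        unsigned int safe_bytes = min_t(unsigned long, bytes,
                         current->arch.vm_event.emul_read_data->size);

        memcpy(buf, current->arch.vm_event.emul_read_data->data, safe_bytes);
    }

That would drop hvmemul_rep_movs_set_context(), hvmemul_rep_outs_set_context()
and the other wrappers entirely.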
> In addition to all the comments, Tamas has kindly pointed out that the
> correct name is mem_access, not vm_access (in the patch title); I'll be
> fixing that too in V3. Sorry for the typo.
Sure - I simply try to not repeat comments others have made.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel