
Re: [Xen-devel] [PATCH RFC] x86/HVM: suppress I/O completion for port output



>>> On 29.03.18 at 10:54, <Paul.Durrant@xxxxxxxxxx> wrote:
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -282,7 +282,7 @@ static int hvmemul_do_io(
>>              rc = hvm_send_ioreq(s, &p, 0);
>>              if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
>>                  vio->io_req.state = STATE_IOREQ_NONE;
>> -            else if ( data_is_addr )
>> +            else if ( data_is_addr || (!is_mmio && dir == IOREQ_WRITE) )
> 
> I'm not entirely sure, but it seems like this test might actually be 
> !hvm_vcpu_io_need_completion()...
> 
>>                  rc = X86EMUL_OKAY;
>>          }
>>          break;
>> --- a/xen/include/asm-x86/hvm/vcpu.h
>> +++ b/xen/include/asm-x86/hvm/vcpu.h
>> @@ -91,10 +91,12 @@ struct hvm_vcpu_io {
>>      const struct g2m_ioport *g2m_ioport;
>>  };
>> 
>> -static inline bool_t hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
>> +static inline bool hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
>>  {
>>      return (vio->io_req.state == STATE_IOREQ_READY) &&
>> -           !vio->io_req.data_is_ptr;
>> +           !vio->io_req.data_is_ptr &&
>> +           (vio->io_req.type != IOREQ_TYPE_PIO ||
>> +            vio->io_req.dir != IOREQ_WRITE);
> 
> ... now that you've updated it here.

It could have been before, but it wasn't, so I didn't want to change
that. My assumption is that the function wasn't used there in order
to leverage the local variables (and to avoid the .state comparison
altogether). Technically it could be switched, I agree. I guess I
should at least attach a comment clarifying that this is an
open-coded, slightly optimized variant of the function.
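
E.g. something along these lines (just a sketch, re-using the
hvmemul_do_io() context from the hunk quoted above; the exact
comment wording is of course open to suggestions):

            rc = hvm_send_ioreq(s, &p, 0);
            if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                vio->io_req.state = STATE_IOREQ_NONE;
            /*
             * Open-coded, slightly optimized variant of
             * !hvm_vcpu_io_need_completion(vio): .state is known to
             * be STATE_IOREQ_READY here, and using the function
             * arguments / local variables avoids re-reading the
             * request fields. !is_mmio is equivalent to the request
             * being of type IOREQ_TYPE_PIO, and PIO writes don't
             * require completion.
             */
            else if ( data_is_addr || (!is_mmio && dir == IOREQ_WRITE) )
                rc = X86EMUL_OKAY;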

Jan

