
Re: [Xen-devel] [PATCH v2] x86/VMX: sanitize VM86 TSS handling



>>> On 20.02.17 at 12:01, <tim@xxxxxxx> wrote:
> At 05:03 -0700 on 17 Feb (1487307837), Jan Beulich wrote:
>> The present way of setting this up is flawed: Leaving the I/O bitmap
>> pointer at zero means that the interrupt redirection bitmap lives
>> outside (ahead of) the allocated space of the TSS. Similarly setting a
>> TSS limit of 255 when only 128 bytes get allocated means that 128 extra
>> bytes may be accessed by the CPU during I/O port access processing.
>> 
>> Introduce a new HVM param to set the allocated size of the TSS, and
>> have the hypervisor actually take care of setting, in particular, the
>> I/O bitmap pointer. Both this and the segment limit now take the
>> into account.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> v2: Instead of HVM_PARAM_VM86_TSS_SIZE, introduce
>>     HVM_PARAM_VM86_TSS_SIZED, which requires the old parameter to no
>>     longer be saved in libxc's write_hvm_params(). Only initialize the
>>     TSS once after the param was set. Request only 384 bytes (and
>>     128-byte alignment) for the TSS. Add padding byte to capping value.
>>     Add comment to hvm_copy_to_guest_phys() invocations.
> 
> This still seems like it has too many moving parts -- why not just
> declare the top half of the existing param to be the size, interpret
> size==0 as size==128, and init the contents when the param is written?

I would have done that if the parameters and their hypercall function
were tools-only (and hence we could freely change their behavior).
Also, since with what you propose we wouldn't be able to tell size 128
from size 0, "get" could then return a value different from what was
passed to "set".

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel


