
Re: [Xen-devel] [PATCH] RFC x86/hvm: Don't truncate the hvm hypercall index before range checking it



>>> On 24.10.16 at 12:25, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 24/10/16 11:09, Jan Beulich wrote:
>>>>> On 24.10.16 at 11:55, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 24/10/16 10:33, Jan Beulich wrote:
>>>>>>> On 14.10.16 at 17:51, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> When the compat hypercall ABI was added for HVM guests (i.e.
>>>>> supporting 32bit operating systems making hypercalls against a
>>>>> 64bit Xen), an ABI breakage was introduced for non-compat guests,
>>>>> as the 64bit hypercall index became truncated to 32 bits.
>>>>>
>>>>> This has been the case for a very long time, but is not very
>>>>> obvious from the code, and definitely counterintuitive, seeing as
>>>>> all other 64bit parameters are passed without truncation.
>>>>>
>>>>> However, the only supported method of making hypercalls is to call
>>>>> into the hypercall page, which in practice means that only
>>>>> hypercall indices up to 63 are supported.
>>>>>
>>>>> Therefore, take the opportunity to fix the ABI before it becomes
>>>>> impossible to fix.
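
As a standalone illustration of the truncation being described (this
is illustrative code only, not the patch or the Xen source):

    #include <stdint.h>
    #include <stdio.h>

    /* A 64-bit index read through a 32-bit variable loses its high
     * bits, so e.g. 0x100000002 aliases to index 2 and passes a range
     * check it ought to fail. */
    int main(void)
    {
        uint64_t rax = 0x100000002ULL;    /* index a 64bit guest put in %rax */
        uint32_t eax = (uint32_t)rax;     /* truncating read: high bits lost */

        printf("64-bit index %#llx is seen as %#x\n",
               (unsigned long long)rax, eax);
        return 0;
    }
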
>>>> Considering
>>>>
>>>>     if ( (eax & 0x80000000) && is_viridian_domain(currd) )
>>>>         return viridian_hypercall(regs);
>>> Why is this a problem?
>> Because the use of bit 31 (instead of <guest-address-width> - 1)
>> pretty clearly indicates the expectation of a 32-bit value here.
> 
> It is a bit which is specifically reserved in the Viridian ABI.  It
> isn't related to guest address width.

Well, okay for that part, although it makes clear that we can't ever
have hypercall numbers in the range 0x80000000...0xffffffff.
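
To spell this out, a simplified sketch of the dispatch order (not the
exact hvm.c code; the final bound check is hypothetical):

    /* On a Viridian-enabled domain, indices with bit 31 set are
     * claimed by the Viridian path before Xen's own dispatch runs,
     * so 0x80000000...0xffffffff can never name Xen hypercalls. */
    if ( (eax & 0x80000000) && is_viridian_domain(currd) )
        return viridian_hypercall(regs);

    /* Only then would a normal range check and dispatch happen. */
    if ( eax >= NR_hypercalls )  /* hypothetical bound */
        return -ENOSYS;

Once bit 31 is reserved for routing to Viridian, that whole range is
unavailable to Xen regardless of how the index is read.
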

>>>  Truncation or not has no effect on this
>>> calculation, and the exact value comes from the Xen-provided Viridian
>>> hypercall page (see enable_hypercall_page() in viridian.c); we are at
>>> liberty to change the semantics if we wish.
>> No, we aren't. No-one is required to use the hypercall page. We
>> only provide it as a courtesy.
> 
> Yes, we very much are at liberty to change things.  Viridian would not
> function without using that page (as the hypercalls would be confused
> with Xen hypercalls), and the spec is very clear that the hypercall page
> will be used.
> 
> As for the Xen hypercall page, the ABI is clearly stated as:
> 
>     call hypercall_page + hypercall-number * 32
> 
> in include/public/arch-x86/xen-x86_{32,64}.h, meaning that we are
> perfectly at liberty to alter the layout and inner workings of our
> hypercall page as well.

This, iirc, has not been this way from the beginning; I think the
page got introduced as a courtesy for 64-bit PV guests, where the
hypercall sequence involves multiple instructions (I can't tell
whether for HVM guests it has always been there, to abstract out the
vendor differences in what instruction to use).

In fact even current upstream Linux still has a remnant of it being
different, by way of the (now unused) TRAP_INSTR definition. If the
presence of a hypercall page (as an obvious prerequisite of its use)
were a requirement, we shouldn't boot guests not having one (and we
probably should go as far as refusing calls originating from outside
the page, which would break many if not all SUSE 32-bit PV kernels,
which do a few early calls without going through hypercall_page).
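
For completeness, the documented convention quoted above amounts to
something like this for a 64-bit guest (a minimal sketch; the wrapper
name is made up, and register usage follows the public 64-bit ABI):

    extern char hypercall_page[];  /* the 4K page filled in by Xen */

    /* Minimal sketch of a one-argument hypercall via the page: each
     * index gets a 32-byte stub.  Argument in %rdi, result in %rax;
     * %rcx and %r11 may be clobbered by the underlying syscall. */
    static inline long hypercall1(unsigned int nr, unsigned long arg)
    {
        long ret;
        register unsigned long a1 asm("rdi") = arg;

        asm volatile ( "call *%[stub]"
                       : "=a" (ret), "+r" (a1)
                       : [stub] "r" (hypercall_page + nr * 32)
                       : "rcx", "r11", "memory" );
        return ret;
    }

The stub contents themselves (vmcall vs. vmmcall for HVM, syscall for
64-bit PV) are exactly what the page abstracts away.
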

>>>> I'm not convinced we should change current behavior, especially as
>>>> the change has at least theoretical potential of breaking existing
>>>> guests.
>>> My argument is that no existing guests can be relying on this behaviour.
>> Why not? People tend to rely on anything they find works.
> 
> There is no supported or expected way of getting into this situation to
> start with, and the only entities making hypercalls will be using some
> variation of our public headers.

This is something we can hope for, but can't expect.

Jan
