
Re: [Xen-devel] XSA-60 - how to get back to a sane state



On 02/12/2013 19:43, George Dunlap wrote:
> On 12/02/2013 02:28 PM, Jan Beulich wrote:
>> All,
>>
>> Jinsong's patches have been in for nearly a month now, but they are
>> not in a shape that would make releasing them in 4.4 or backporting
>> them to the older trees desirable, so we need to come to a
>> conclusion on which way to go. Currently it looks like we have
>> three options, but of course I'll be happy to see other (better!)
>> ones proposed.
>>
>> 1) Stay with what we have.
>>
>> 2) Revert 86d60e85 ("VMX: flush cache when vmentry back to UC
>> guest") in its entirety plus, perhaps, the change 62652c00 ("VMX:
>> fix cr0.cd handling") did to vmx_ctxt_switch_to().
>>
>> 3) Apply the attached patch that Andrew and I have been putting
>> together, with the caveat that it's still incomplete (see below).
>>
>> The latter two are based on the observation that the amount of
>> cache flushing we do with what is in the master tree right now is
>> more than what we did prior to that patch series but still
>> insufficient. Hence the revert would get us back to the earlier
>> state (and obviously eliminate the performance problems that
>> were observed when doing too eager flushing), whereas
>> applying the extra 5th patch would get us closer to a proper
>> solution.
>
> What's missing is a description of the pros and cons of 1 and 2.  Do
> you have any links to threads describing the problem?
>
>  -George
>

1) keeps the basic XSA-60 fixes plus some wbinvd()s, which are a
significant performance issue and yet still insufficient to completely
fix the problem at hand.  As a result, 1) is the worst possible option
to stay with as far as Xen is concerned (irrespective of the upcoming
4.4 release).

2) reverts us to the basic XSA-60 fix with none of the wbinvd()s,
which addresses the security issue and is no worse than before from
the point of view of correctness for uncacheable HVM domains.

3) as-is is still insufficient to fix the problem described in 1), and
would currently introduce a further performance regression.

FWIW, my vote is for option 2), which eases the current performance
regression and buys us time to come up with a proper solution to the
pre-existing problem of Xen's and Qemu's mappings of a UC domain's
memory.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

