
Re: [Xen-devel] Ping: [PATCH] x86: fix "xpti=" and "pv-l1tf=" yet again



>>> On 25.09.18 at 18:43, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 18/09/18 13:41, Jan Beulich wrote:
>> Ping?
> 
> You're very unlikely to get any reply when I'm

I certainly did realize that I wouldn't get an immediate answer, but I
also didn't want to defer the pings any longer.

>>>>> On 11.09.18 at 10:20, <JBeulich@xxxxxxxx> wrote:
>>>>>> On 29.08.18 at 14:36, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> On 21/08/18 11:44, Jan Beulich wrote:
>>>>> While commit 2a3b34ec47 ("x86/spec-ctrl: Yet more fixes for xpti=
>>>>> parsing") indeed fixed "xpti=dom0", it broke "xpti=no-dom0", in that
>>>>> this then became equivalent to "xpti=no".
>>>> That was accidental, but the end result is consistent with other options.
>>>>
>>>> As with spec-ctrl, if someone wants to start exercising fine-grained
>>>> control, they should specify everything.  There is a reason why the
>>>>
>>>> **WARNING: Any use of this option may interfere with heuristics.  Use
>>>> with extreme care.**
>>>>
>>>> disclaimer exists.
>>> I've looked again: Such a disclaimer exists neither for xpti= nor for
>>> pv-l1tf=, and it shouldn't, as use of these options does not in fact
>>> interfere with any (other) heuristics (they're separate options for
>>> reasons beyond the syntax issues that would result if they were folded
>>> into spec-ctrl=).
>>>
>>> If the sole remaining change request was to split the variables into
>>> separate booleans, I can do that (although, as said, I'm not
>>> convinced this is helpful).
> 
> How can you possibly justify this comment?  Even in your proposed patch,
> you note that the current scheme is overloaded and won't work for an
> extra boolean.

Hmm, I don't see myself noting anything like this. The only remark that
I think you might mean is the post-commit-message one about perhaps
folding OPT_XPTI_* and OPT_PV_L1TF_*.
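
For reference, here is a minimal sketch of the overloaded scheme under
discussion. The flag names are modelled on OPT_XPTI_*, but the parser
itself (the tok_eq() helper and the exact semantics of mixing the
per-domain tokens) is illustrative only, not the actual hypervisor code:

#include <errno.h>
#include <stdbool.h>
#include <string.h>

#define OPT_XPTI_DOM0 0x01
#define OPT_XPTI_DOMU 0x02

static int opt_xpti = -1; /* -1: leave the decision to the built-in default */

/* Exact match of the token [s, e) against a literal. */
static bool tok_eq(const char *s, const char *e, const char *lit)
{
    size_t len = strlen(lit);

    return (size_t)(e - s) == len && !memcmp(s, lit, len);
}

/* Parse a comma-separated value such as "no-dom0,domu". */
static int parse_xpti(const char *s)
{
    int rc = 0;

    if ( *s == '\0' ) /* a bare "xpti" enables it for all guests */
    {
        opt_xpti = OPT_XPTI_DOM0 | OPT_XPTI_DOMU;
        return 0;
    }

    do {
        const char *ss = strchr(s, ',');

        if ( !ss )
            ss = s + strlen(s);

        if ( tok_eq(s, ss, "no") || tok_eq(s, ss, "false") )
            opt_xpti = 0;
        else if ( tok_eq(s, ss, "yes") || tok_eq(s, ss, "true") )
            opt_xpti = OPT_XPTI_DOM0 | OPT_XPTI_DOMU;
        else if ( tok_eq(s, ss, "dom0") )
            opt_xpti = (opt_xpti < 0 ? 0 : opt_xpti) | OPT_XPTI_DOM0;
        else if ( tok_eq(s, ss, "no-dom0") )
            /*
             * The regression at issue: this token must clear only the
             * dom0 bit, rather than degenerating into a plain "no".
             */
            opt_xpti = (opt_xpti < 0 ? OPT_XPTI_DOMU : opt_xpti)
                       & ~OPT_XPTI_DOM0;
        else if ( tok_eq(s, ss, "domu") )
            opt_xpti = (opt_xpti < 0 ? 0 : opt_xpti) | OPT_XPTI_DOMU;
        else if ( tok_eq(s, ss, "no-domu") )
            opt_xpti = (opt_xpti < 0 ? OPT_XPTI_DOM0 : opt_xpti)
                       & ~OPT_XPTI_DOMU;
        else
            rc = -EINVAL;

        s = *ss ? ss + 1 : ss;
    } while ( *s );

    return rc;
}

With exact token matching as above, "xpti=no-dom0" clears only the dom0
bit instead of degenerating into "xpti=no", which is the behavior the
fix is after.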

> Further to that, the code is almost impossible to parse.

Subjective again, and this patch isn't really making things any worse.
If you felt this was the wrong model, why did you mirror the XPTI
parsing into your L1TF patches?

>>>  But there were other open points, and
>>> I'd prefer to either commit v1 with the one copy-and-paste bug
>>> fixed, or send a v2 which has a chance of being accepted (i.e.
>>> with all open points addressed verbally or by code changes).
> 
> I'm still not happy about the addition of default.  The ordering of
> options causing different results is the dealbreaker (as a side effect
> of your spec_ctrl=0 extension),

Ordering of options being meaningful is nothing new; it predates any of
this year's security work. That is, it also affects other options,
perhaps just not as noticeably as in this case. See the difference
between "cpuidle no-cpuidle" and "no-cpuidle cpuidle", or between
"apic=bigsmp apic=default" and "apic=default apic=bigsmp": their
behavior is all well defined, afaict.

In fact, options like "cpuidle" lack a way to switch back to default
mode, in my opinion. Admittedly, having a way back to the default
matters more for the more complex cases; when it's just a yes/no thing,
you may as well say "yes" or "no" right on the command line.
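
To illustrate last-writer-wins ordering, and what a "default" token buys
you, here is a small standalone sketch (the "feature" option is made up
for the example; this is not any parser actually in the tree):

#include <stdio.h>
#include <string.h>

/* Tri-state: -1 = use the built-in default, 0 = off, 1 = on. */
static int opt_feature = -1;

/*
 * Every occurrence on the command line simply overwrites the previous
 * one, so the last token wins -- which is why ordering is meaningful,
 * yet still well defined.
 */
static void parse_token(const char *tok)
{
    if ( !strcmp(tok, "feature") )
        opt_feature = 1;
    else if ( !strcmp(tok, "no-feature") )
        opt_feature = 0;
    else if ( !strcmp(tok, "feature=default") )
        opt_feature = -1; /* the escape hatch a plain boolean lacks */
}

int main(void)
{
    /* "feature no-feature" ends up off ... */
    parse_token("feature");
    parse_token("no-feature");
    printf("feature no-feature -> %d\n", opt_feature); /* 0 */

    /* ... while "no-feature feature" ends up on. */
    parse_token("no-feature");
    parse_token("feature");
    printf("no-feature feature -> %d\n", opt_feature); /* 1 */

    /* "feature=default" hands the decision back to the heuristics. */
    parse_token("feature=default");
    printf("feature=default    -> %d\n", opt_feature); /* -1 */

    return 0;
}

Plain booleans stop at 0/1; the tri-state is what makes "switch back to
default mode" expressible at all.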

> but I'm also not inclined to take bodges
> to work around broken boot software configurations.

I'm afraid I don't understand this part.

Bottom line: I'm afraid I still don't see a way forward here, i.e. what
I _need_ to change to make the patch acceptable. I'm open to making
changes in how things are done (but I'd like to be told clearly what
they are), but I'm not really open to leaving (parts of) the current
broken behavior broken. In any event, I'll break out the hopefully
uncontroversial part of silencing the false error messages.

Jan


