
Re: [PATCH v1.1 2/2] x86/hpet: Don't enable legacy replacement mode unconditionally



On 26.03.2021 14:43, Marek Marczykowski-Górecki wrote:
> On Fri, Mar 26, 2021 at 02:30:09PM +0100, Jan Beulich wrote:
>> On 26.03.2021 13:29, Ian Jackson wrote:
>>> I wrote:
>>>> I'm sorry, but I think it is too late for 4.15 to do this.  I prefer
>>>> Jan's patch, which I have already release-acked.
>>>>
>>>> Can someone qualified please provide a maintainer review for this,
>>>> ideally today?
>>>
>>> I asked Andrew on IRC:
>>>
>>> 12:08 <Diziet> andyhhp__: Are you prepared to maintainer-ack Jan's
>>>                more-minimal hpet workaround approach ?
>>> 12:16 <andyhhp__> Diziet: honestly, no.  I don't consider that
>>>                   acceptable behaviour, and it is a fairly big "f you"
>>>                   (this was literally feedback I got in private) to
>>>                   the downstreams who've spent years trying to get us
>>>                   to fix this bug, and have now backported the first
>>>                   version.
>>> 12:16 <andyhhp__> I'm looking into the feedback on my series
>>> 12:17 <andyhhp__> one way or another, the moment we enter the fallback
>>>                   path for interrupt routing, something is very broken
>>>                   on the system
>>> 12:19 <andyhhp__> so the tradeoff is an unspecified bug on one ancient
>>>                   laptop which can't be tested now, vs 5 years of Atom
>>>                   CPUs, 2 years of laptop CPUs, and the forthcoming
>>>                   Server line of Intel CPUs
>>> 12:19 <andyhhp__> or whatever other compromise we can work on
>>>
>>> I'm sorry that this bug is going to continue without a proper fix.
>>
>> Actually I had another thought here in the morning, but then didn't
>> write it down: While Andrew's approach would indeed (hopefully)
>> improve the user experience, it'll reduce the incentive to actually
>> fix the issue. Normally I might not be that concerned, but seeing
>> how long it took to even arrive at a workaround, I'm afraid now I am
>> concerned.
> 
> I assume by "the issue" you meant "Xen using legacy stuff that stops
> being supported by the hardware", right? Yes, it is an issue. But for
> most users of Xen, having it broken will more likely result in "let's
> switch to something that works" (perhaps not after the first such
> case, but this is definitely not the first one) instead of "let's
> spend some hours debugging this very low level issue".

As is sadly the case in so many areas nowadays, this suggests to me
that you value short-term benefits over things working correctly long
term. Yes, it is important to be attractive to users. But this had
better not come at the price of keeping workarounds in place for
overly long periods of time, possibly even forever. That is likely to
bite us (perhaps by way of biting some of our users) down the road.

To be honest, I find it very strange that the original report over a
month ago was never followed up with the necessary technical detail.
Andrew did tell me that, outside of the report on the mailing list, he
explicitly asked for such detail. (I can't rule out that he has since
been given the info, but really all of this ought to be on xen-devel.)

> And to be honest, this is a more and more appealing option, despite
> all the deficiencies of KVM...

Well, feel free to throw more engineering resources into Xen's
(upstream) maintenance. The much larger community of engineers around
KVM is perhaps the main reason here.

Jan