
Re: [Xen-devel] [PATCH 00/18] x86: more bool_t to bool cleanup



>>> On 03.07.17 at 14:54, <wei.liu2@xxxxxxxxxx> wrote:
> On Mon, Jul 03, 2017 at 02:10:03AM -0600, Jan Beulich wrote:
>> >>> On 30.06.17 at 19:01, <wei.liu2@xxxxxxxxxx> wrote:
>> > Seeing that bool_t keeps creeping back in new patches I think the only
>> > solution is to get rid of bool_t once and for all, as soon as possible.
>> 
>> I don't fully agree, and considering the flood of patches you're
>> submitting in this area I think I finally need to voice my opinion
>> here (I had really meant to do this only in Budapest, at a BoF
>> I mean to propose): I appreciate you having and taking the
>> time to do this cleanup. Nevertheless I'm not overly happy with
>> it. For one, it requires time (even if not very much) to review.
>> And as you likely know, patch volume and review bandwidth
>> don't really fit together very well. (I had managed to deal with
>> all my old, non-RFC reviews during the last week, only to find
>> I'm at almost 50 again this morning, and I haven't
>> finished working through all the xen-devel traffic that
>> came in over the weekend. This is simply frustrating.)
> 
> I can see two aspects in your email. I want to step back and deconstruct
> it a bit.
> 
> 1. The cost / benefit of doing cleanup at once vs gradually (this
>    series only)
> 
> You seem to think doing tree-wide cleanup in one go is a bad thing
> because it will make backporting harder (from below).
> 
> I disagree. Changes are just changes. You and Andrew have made a lot of
> changes in recent releases. They will also make backporting harder, but
> you clearly think they are worth the effort.

Yes and no. For example, take the integer type cleanup I did: I've
intentionally left untouched the u<nn> and s<nn> types because
there are still too many of them to reasonably replace them all.
The ones I did remove were either unused altogether, or had so
few instances left that a final cleanup was warranted.
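
(Purely for illustration, here is a minimal sketch of the kind of
replacement being discussed in this thread - the struct and function
below are made up, not taken from the Xen tree: bool_t gives way to
plain C99 bool, and a u<nn> alias to its uint<nn>_t equivalent.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical structure and function, not actual Xen code. */
struct vcpu_flags {
    bool     is_running;   /* was: bool_t is_running; */
    uint32_t pause_count;  /* was: u32    pause_count; */
};

static bool vcpu_is_idle(const struct vcpu_flags *f)
{
    /* was: return (bool_t)(!f->is_running && !f->pause_count); */
    return !f->is_running && !f->pause_count;
}

int main(void)
{
    struct vcpu_flags f = { .is_running = false, .pause_count = 0 };

    printf("idle: %d\n", vcpu_is_idle(&f));
    return 0;
}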

> It now all comes down to how one calculates cost / benefit. I prefer to
> do things all at once so that we don't confuse future contributors and
> save the need to point out the same issues over and over again. I value
> consistency a lot.
> 
> The Linux kernel has done this sort of tree-wide cleanup, and the Xen
> toolstack has done it too (either by me or by external contributors). I
> think we could do the same for the hypervisor.
> 
> Obviously if you don't agree with this approach and my value
> proposition, there is no point in me pursuing this further. I will let
> you and Andrew figure out what is best suited. I just need a clear
> answer from you two.

Well, first of all it's not just Andrew and me. If everyone else
thought differently, then even if the two of us agreed we would
probably do better to accept the other position. And I don't want
you to give up altogether; I had tried to make clear that I value
the time and work you've invested here. I certainly appreciate
the result; it is just that your separation into 1 and ...

> 2. The limited bandwidth of reviewers

... 2 can't really be a complete one - it is also the process of
getting there that matters, and hence I would view such broad
cleanup differently depending on the other patch volume.

>> It would perhaps be okay if no comments were needed at all,
> 
> This is just impossible: 1. I can't read your or Andrew's mind; 2. I
> have my own opinions in certain areas - I will try to convince you or be
> convinced, but that will require discussion.

Understood, but ...

>> but I think in all of the series you sent to this effect there were
>> further corrections necessary (leaving aside merely desirable
>> ones). Bulk cleanup work like this, especially, should introduce as
>> little overhead to others as possible. Hence the comments here
>> also apply to the PV code splitting work you've apparently
>> invested quite a bit of time into.
>> 
> 
> I do try to be as careful as possible with the code -- I don't think I
> ever broke the hypervisor too badly, if at all, in my recent work.  Now
> I've mostly figured out what you and Andrew like patch-wise. If you
> think of anything that can be done better, do let me know.

... while I certainly didn't mean to accuse you of anything, let alone
breaking the hypervisor, there were a few things which neither
would have resulted in breakage nor would have required mind
reading. The best example is probably when you touched definitions but
left the declarations alone.
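
(Again purely as an illustration, with made-up names rather than real
Xen code: the kind of mismatch meant here is a definition converted to
bool while its declaration, normally living in a header, still says
bool_t. A consistent pair looks like this:)

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical example, not actual Xen code.  When the definition in a
 * .c file is switched from bool_t to bool, the matching declaration
 * (normally in a header) has to follow, or the two drift apart.
 */

/* Declaration -- the part that risks being left as: bool_t domain_is_paused(int); */
bool domain_is_paused(int domid);

/* Definition -- the part that had been converted. */
bool domain_is_paused(int domid)
{
    return domid == 0;   /* placeholder logic for the sketch */
}

int main(void)
{
    printf("%d\n", domain_is_paused(0));
    return 0;
}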

> And frankly I didn't mean / want to do the cleanup in the first place --
> I wanted to do another thing: PV in PVH. But the code as-is is just not
> in the right shape to work with. As I went along, it gradually grew into
> a useful project in its own right. To be clear, this is not to blame
> anyone involved in the past or now. The constraints then were different
> from the ones we have now.  I've foolishly signed myself up to this big
> project because I think it is worth it. :-)

Ah, I didn't realize that was the background even here; I did assume
that to be the background for the PV split work.

Jan
