
Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record numbers that have been consumed...

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Sent: 19 December 2019 11:30
> To: Durrant, Paul <pdurrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Wei Liu <wl@xxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Roger Pau Monné
> <roger.pau@xxxxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH] x86/save: reserve HVM save record numbers
> that have been consumed...
> On 19/12/2019 11:06, Durrant, Paul wrote:
> >> It is not fair or reasonable to include extra headroom in a "oh dear we
> >> screwed up - will the community be kind enough to help us work around
> >> our ABI problems" scenario.
> >>
> > I would have thought all the pain you went through to keep XenServer
> > going with its non-upstreamed hypercall numbers would have made you a
> > little more sympathetic to dealing with past mistakes.
> I could object to the principle of the patch, if you'd prefer :)
> If you recall for the legacy window PV driver ABI breakages, I didn't
> actually reserve any numbers upstream in the end.  All compatibility was
> handled locally.

And I remember how nasty the hacks were ;-)

Given we don't yet have a clash that requires such nastiness, I just want to
avoid one arising before we manage to dispense with the downstream-only legacy
records.
> >> For now, I'd just leave it as a comment, and strictly only covering the
> >> ones you have used.  There is no need to actually bump the structure
> >> sizes in xen for now - simply that the ones you have currently used
> >> don't get allocated for something different in the future.
> >>
> > Ok, we can defer actually bumping HVM_SAVE_CODE_MAX, but it's almost
> > certain to happen eventually.
> That's fine.

Ok. I'll send a v2 with just the comment (and assume Wei's R-b still stands).
