
Re: [Xen-devel] [v7][PATCH 06/16] hvmloader/pci: skip reserved ranges



On Wed, Jul 15, 2015 at 09:32:34AM +0100, Jan Beulich wrote:
> >>> On 15.07.15 at 02:55, <tiejun.chen@xxxxxxxxx> wrote:
> >>> I agree we'd better overhaul this, since we have already found something
> >>> unreasonable here. But one or two weeks is really not enough to fix this
> >>> with a bitmap framework, and although a bitmap would make MMIO allocation
> >>> better, it is more complicated than we need if we just want to allocate
> >>> PCI MMIO.
> >>>
> >>> So could we do this next? I feel that if you could spend a little more
> >>> time helping me refine the current solution, that would be more realistic
> >>> for this case :) We could then look into the bitmap approach in detail,
> >>> or work out a better solution, given a sufficient time slot.
> >>
> >> Looking at how long it took to get here (wasn't this series originally
> >> even meant to go into 4.5?) and how much time I already spent
> > 
> > Certainly appreciate your time.
> > 
> > I didn't mean it's a waste of time at this point. I just want to express
> > that it's hard to implement that solution within one or two weeks in order
> > to get into 4.6 as an exception.
> > 
> > Note that I know this feature has not been accepted as an exception for
> > 4.6 yet, so I'm making an assumption.
> 
> After all, this is a bug fix (and would have been allowed into 4.5 had
> it been ready in time), so it doesn't necessarily need a freeze
> exception (though of course the bar rises the later it gets). Rather
> than rushing in something that's cumbersome to maintain, I'd much
> prefer this to be done properly.
> 

This series is twofold. I consider the tools-side RDM change (not
limited to RMRR) a new feature: it introduces a new feature in order to
fix a bug. From my point of view it would still be subject to a freeze
exception.
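For illustration only, a bitmap-based MMIO allocator of the kind
discussed above could look roughly like the sketch below. This is a
minimal sketch under assumed parameters (a fixed MMIO hole,
page-granular tracking, power-of-two BAR sizes); all names here
(MMIO_HOLE_START, reserve_range, alloc_mmio, ...) are hypothetical
and not taken from hvmloader.

/*
 * Sketch: track the MMIO hole as a bitmap of 4k pages; reserved
 * ranges (e.g. RMRRs) are marked used up front, so BAR allocation
 * skips them by construction. Hypothetical names throughout.
 */
#include <stdint.h>

#define MMIO_HOLE_START  0xf0000000UL
#define MMIO_HOLE_END    0xfc000000UL
#define PAGE_SHIFT       12
#define NR_PAGES         ((MMIO_HOLE_END - MMIO_HOLE_START) >> PAGE_SHIFT)
#define BITS_PER_LONG    (8 * sizeof(unsigned long))

static unsigned long bitmap[NR_PAGES / BITS_PER_LONG];

static void set_bit_range(unsigned long start, unsigned long nr)
{
    for (unsigned long i = start; i < start + nr; i++)
        bitmap[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
}

static int test_bit(unsigned long i)
{
    return (bitmap[i / BITS_PER_LONG] >> (i % BITS_PER_LONG)) & 1;
}

/* Mark a reserved range (e.g. an RMRR) as unusable for BARs. */
void reserve_range(uint64_t base, uint64_t size)
{
    if (base < MMIO_HOLE_START || base + size > MMIO_HOLE_END)
        return; /* outside the hole; nothing to track */
    set_bit_range((base - MMIO_HOLE_START) >> PAGE_SHIFT,
                  size >> PAGE_SHIFT);
}

/*
 * Allocate 'size' bytes (a power of two >= 4k) for a BAR. Stepping
 * by the block size keeps natural alignment, since the hole base is
 * itself aligned. Returns 0 if no suitable run is free.
 */
uint64_t alloc_mmio(uint64_t size)
{
    unsigned long nr = size >> PAGE_SHIFT;

    for (unsigned long start = 0; start + nr <= NR_PAGES; start += nr) {
        unsigned long i;
        for (i = 0; i < nr; i++)
            if (test_bit(start + i))
                break;
        if (i == nr) {
            set_bit_range(start, nr);
            return MMIO_HOLE_START + ((uint64_t)start << PAGE_SHIFT);
        }
    }
    return 0;
}

With this shape, hvmloader-style code would call reserve_range() once
per RDM region before assigning BARs with alloc_mmio(), instead of
checking for conflicts during each allocation.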

Wei.

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
