
Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for ioreq server



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 07 July 2015 13:53
> To: Paul Durrant
> Cc: Andrew Cooper; George Dunlap; Kevin Tian; zhiyuan.lv@xxxxxxxxx; Zhang
> Yu; xen-devel@xxxxxxxxxxxxx; Keir (Xen.org)
> Subject: RE: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for
> ioreq server
> 
> >>> On 07.07.15 at 11:23, <Paul.Durrant@xxxxxxxxxx> wrote:
> > I wonder, would it be sufficient - at this stage - to add a new
> > mapping sub-op to the HVM op to distinguish mapping of gfns vs.
> > MMIO ranges. That way we could use the same implementation
> > underneath for now (using the rb_rangeset, which I think stands
> > on its own merits for MMIO ranges anyway)
> 
> Which would be (taking into account the good description of the
> differences between RAM and MMIO pages given by George
> yesterday [I think])? I continue to be unconvinced that we need
> this new rangeset type (all the more because its name seems
> wrong: as George said, we're unlikely to deal with ranges
> here).
> 

I don't see that implementing rangesets on top of an rb tree is a problem. 
IMO it's a useful optimization in its own right: it takes a lookup that's 
currently O(n) and makes it O(log n), using the rb tree implementation that's 
already there. In fact, perhaps we should just make the current rangeset 
implementation use rb trees underneath; then there's no need for the extra API.
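
As a rough sketch of the O(log n) lookup this buys (illustrative only, not 
the actual rb_rangeset code from the patch; it assumes Xen's xen/rbtree.h 
helpers and non-overlapping ranges, which rangesets already guarantee):

#include <xen/rbtree.h>

struct rb_range {
    struct rb_node node;
    unsigned long s, e;              /* inclusive range [s, e] */
};

/* Containment test: walk a tree keyed on range start instead of
 * doing the O(n) list walk of the current rangeset implementation. */
static struct rb_range *rb_range_find(struct rb_root *root,
                                      unsigned long addr)
{
    struct rb_node *n = root->rb_node;

    while ( n )
    {
        struct rb_range *r = rb_entry(n, struct rb_range, node);

        if ( addr < r->s )
            n = n->rb_left;
        else if ( addr > r->e )
            n = n->rb_right;
        else
            return r;                /* s <= addr <= e */
    }

    return NULL;
}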

  Paul

> Jan
> 
> > but allow them to diverge later... perhaps using a new P2T
> > (page-to-type) table, which I believe may become necessary as
> > Intel reclaims bits for h/w use and thus squeezes our existing
> > number of supported page types.
> >
> >   Paul
> 
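
(For illustration, the sub-op split suggested above could look something 
like the sketch below. The type names and map_range() are hypothetical, 
invented here to show the idea; rangeset_add_range() is the existing call 
from xen/common/rangeset.c, and the per-type rangeset array on the ioreq 
server is assumed.)

enum io_range_type {
    IO_RANGE_MMIO,                   /* emulated MMIO ranges */
    IO_RANGE_GFN,                    /* guest RAM pages for the server */
};

static int map_range(struct hvm_ioreq_server *s,
                     enum io_range_type type,
                     unsigned long start, unsigned long end)
{
    /* Both types share the same rangeset implementation for now, but
     * carrying the type in the interface lets them diverge later
     * (e.g. towards a P2T page-to-type table for gfns) without a
     * further interface change. */
    return rangeset_add_range(s->range[type], start, end);
}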


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

