
Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for ioreq server



> -----Original Message-----
> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> George Dunlap
> Sent: 06 July 2015 13:50
> To: Paul Durrant
> Cc: Yu Zhang; xen-devel@xxxxxxxxxxxxx; Keir (Xen.org); Jan Beulich; Andrew
> Cooper; Kevin Tian; zhiyuan.lv@xxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for
> ioreq server
> 
> On Mon, Jul 6, 2015 at 1:38 PM, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> wrote:
> >> -----Original Message-----
> >> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> >> George Dunlap
> >> Sent: 06 July 2015 13:36
> >> To: Yu Zhang
> >> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org); Jan Beulich; Andrew Cooper;
> >> Paul Durrant; Kevin Tian; zhiyuan.lv@xxxxxxxxx
> >> Subject: Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES
> for
> >> ioreq server
> >>
> >> On Mon, Jul 6, 2015 at 7:25 AM, Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
> >> wrote:
> >> > MAX_NR_IO_RANGES is used by ioreq server as the maximum
> >> > number of discrete ranges to be tracked. This patch changes
> >> > its value to 8k, so that more ranges can be tracked on next
> >> > generation of Intel platforms in XenGT. Future patches can
> >> > extend the limit to be toolstack tunable, and MAX_NR_IO_RANGES
> >> > can serve as a default limit.
> >> >
> >> > Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
> >>
> >> I said this at the Hackathon, and I'll say it here:  I think this is
> >> the wrong approach.
> >>
> >> The problem here is not that you don't have enough memory ranges.  The
> >> problem is that you are not tracking memory ranges, but individual
> >> pages.
> >>
> >> You need to make a new interface that allows you to tag individual
> >> gfns as p2m_mmio_write_dm, and then allow one ioreq server to get
> >> notifications for all such writes.
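> >>
> >> Very roughly, something along these lines -- the op name and fields
> >> below are purely illustrative, not an existing hypercall:
> >>
> >>     /* Hypothetical op: tag a range of gfns as p2m_mmio_write_dm so
> >>      * that guest writes to them are forwarded to the ioreq server
> >>      * registered for write_dm notifications. */
> >>     struct xen_hvm_map_write_dm_gfns {
> >>         domid_t  domid;      /* target domain */
> >>         uint64_t first_gfn;  /* first gfn to tag */
> >>         uint64_t nr;         /* number of gfns to tag */
> >>     };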
> >>
> >
> > I think that is conflating things. It's quite conceivable that more than one
> ioreq server will handle write_dm pages. If we had enough types to have
> two page types per server then I'd agree with you, but we don't.
> 
> What's conflating things is using an interface designed for *device
> memory ranges* to instead *track writes to gfns*.

What's the difference? Are you asserting that all device memory ranges have 
read side effects and therefore write_dm is not a reasonable optimization to 
use? I would not want to make that assertion.

  Paul

>  Fundamentally the
> reason you have this explosion of "device memory ranges" is that what
> you're tracking isn't device memory, and it isn't a range.  If your
> umbrella isn't very good at hammering in nails, the solution is to go
> get a hammer, not to add steel reinforcement to your umbrella.
> 
> My suggestion is, short-term, to simply allow the first ioreq server
> that registers for write_dm notifications to get them, and return an
> error if a second one tries to register.
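> 
> A rough sketch of the check (the field name is made up, purely to
> illustrate the first-registrant-wins rule):
> 
>     /* Hypothetical registration path: only one ioreq server per
>      * domain may claim write_dm notifications. */
>     if ( d->arch.hvm_domain.write_dm_server != NULL )
>         return -EEXIST;
>     d->arch.hvm_domain.write_dm_server = s;
>     return 0;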
> 
> If it becomes important for a single domain to have two ioreq servers
> that need this functionality, then we can come up with an internal Xen
> structure, *designed for gfns*, to track this.  My suspicion is that
> it will never be needed.
> 
>  -George