
Re: [Xen-devel] blkback global resources



On Tue, 2012-03-27 at 11:22 +0100, Jan Beulich wrote:
> >>> On 27.03.12 at 11:41, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> > On Mon, 2012-03-26 at 17:20 +0100, Ian Campbell wrote:
> >> On Mon, 2012-03-26 at 16:56 +0100, Jan Beulich wrote:
> >> > All the resources allocated based on xen_blkif_reqs are global in
> >> > blkback. While (without having measured anything) I think that this
> >> > is bad from a QoS perspective (not least implied by a warning
> >> > issued by Citrix's multi-page-ring patches:
> >> > 
> >> >  if (blkif_reqs < BLK_RING_SIZE(order))
> >> >          printk(KERN_WARNING "WARNING: "
> >> >                 "I/O request space (%d reqs) < ring order %ld, "
> >> >                 "consider increasing %s.reqs to >= %ld.",
> >> >                 blkif_reqs, order, KBUILD_MODNAME,
> >> >                 roundup_pow_of_two(BLK_RING_SIZE(order)));
> >> > 
> >> > indicating that this _is_ a bottleneck), I'm otoh hesitant to convert
> >> > this to per-instance allocations, as the amount of memory taken
> >> > away from Dom0 for this may be not insignificant when there are
> >> > many devices.
> >> > 
> > 
> > What's your main concern regarding QoS? Lock contention? Starvation?
> > Or something else?
> 
> However you want to put it. Prior to the multi-page ring patches, we
> have 64 pending requests (global) and 32 ring entries. Obviously,
> bumping the ring size just to order 1 will already bring the number of
> possible in-flight entries per device on par with those in-flight across
> all devices. So _if_ someone really determined that a multi-page ring
> helps performance, I wonder whether that was with manually
> adjusted global pending request values (not said anywhere) or with
> just a single frontend (not very close to real-world scenarios).
> 

Just to be precise, bumping the ring order to 1 already gives more than
64 ring entries.
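
To make the arithmetic concrete, here is a back-of-the-envelope sketch
(userspace C, not taken from the patches; the 64-byte ring header and
112-byte entry size are the usual x86 blkif values and are assumptions
here) of how many entries an order-N ring holds before any power-of-two
rounding:

 #include <stdio.h>

 #define PAGE_SIZE   4096UL
 #define RING_HDR      64UL  /* offsetof(struct blkif_sring, ring), assumed */
 #define ENTRY_SIZE   112UL  /* sizeof(union blkif_sring_entry), assumed */

 /* Raw entry count for an order-N ring; the in-tree ring macros
  * additionally round this down to a power of two. */
 static unsigned long ring_entries(unsigned int order)
 {
         return ((PAGE_SIZE << order) - RING_HDR) / ENTRY_SIZE;
 }

 int main(void)
 {
         unsigned int order;

         for (order = 0; order <= 2; order++)
                 printf("order %u: %lu raw entries\n",
                        order, ring_entries(order));
         return 0;
 }

This prints 36, 72 and 145 for orders 0 through 2, so an order-1 ring
is already past the 64 global pending requests.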

> In any case, two guests with heavy I/O clearly have the potential to
> hinder each other, even if they are backed by different physical
> devices.
> 

Right. One solution I can think of is to have each blk thread hold a
small number of private entries (threshold to be determined), while
blkback maintains a shared pool for any allocation that goes beyond
that threshold. But this just makes things more and more complex --
let's not over-engineer before we identify the real bottleneck.



Wei.

> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
