
[Xen-devel] Re: RFC: I/O bandwidth controller



Hi Fernando,

> > > >   - Implement a block layer resource controller. dm-ioband is a working,
> > > > feature-rich solution, but its dependency on the dm infrastructure is
> > > > likely to meet opposition (the dm layer does not handle barriers
> > > > properly, and the maximum size of I/O requests can be limited in some
> > > > cases). In that case, we could either try to build a standalone
> > > > resource controller based on dm-ioband (which would probably hook into
> > > > generic_make_request) or try to come up with something new.
> > > 
> > > I have doubts about the maximum I/O request size problem. You cannot
> > > avoid it as long as you stack device-mapper modules in that way, even
> > > if the controller is implemented as a stand-alone controller. There is
> > > no limitation if you use dm-ioband without any other device-mapper
> > > modules.
> > 
> > The following is the part of the source code that the limitation comes from.
> > 
> > dm-table.c: dm_set_device_limits()
> >         /*
> >          * Check if merge fn is supported.
> >          * If not we'll force DM to use PAGE_SIZE or
> >          * smaller I/O, just to be safe.
> >          */
> > 
> >         if (q->merge_bvec_fn && !ti->type->merge)
> >                 rs->max_sectors =
> >                         min_not_zero(rs->max_sectors,
> >                                      (unsigned int) (PAGE_SIZE >> 9));
> > 
> > As far as I can tell, in 2.6.27-rc1-mm1 only some software RAID
> > drivers and the pktcdvd driver define merge_bvec_fn().
> 
> Yup, exactly. This means we may see a drop in performance in some RAID
> configurations.
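
Just to recap why dm has to be so conservative there: q->merge_bvec_fn
is consulted while a bio is being built, and without a merge hook of its
own dm cannot forward that query to the underlying device, so it has to
assume the worst and cap I/O at PAGE_SIZE. A simplified sketch of the
check (not verbatim kernel code) looks roughly like this:

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Simplified sketch of the check bio_add_page() performs before
 * growing a bio: the driver's merge_bvec_fn is asked how many bytes
 * it will accept at the current position, and the page is only added
 * if the answer covers the requested length.
 */
static int can_add_page(struct request_queue *q, struct bio *bio,
			struct bio_vec *bvec, unsigned int len)
{
	struct bvec_merge_data bvm = {
		.bi_bdev   = bio->bi_bdev,
		.bi_sector = bio->bi_sector,
		.bi_size   = bio->bi_size,
		.bi_rw     = bio->bi_rw,
	};

	if (!q->merge_bvec_fn)
		return 1;

	return q->merge_bvec_fn(q, &bvm, bvec) >= (int)len;
}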

The current device-mapper code introduces a bvec merge function for
device-mapper devices. IMHO, the limitation goes away once we implement
this hook in dm-ioband. Am I right, Alasdair?
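
Something along the lines of dm-linear's new merge function should do;
here is a rough sketch of what I have in mind (struct ioband_device and
its fields are just illustrative names, not the actual dm-ioband code):

#include <linux/device-mapper.h>
#include <linux/blkdev.h>

/* Illustrative stand-in for dm-ioband's per-target private data. */
struct ioband_device {
	struct dm_dev	*dev;	/* underlying device */
	sector_t	start;	/* start sector on the underlying device */
};

/*
 * Rough sketch of a .merge hook for dm-ioband, modelled on the merge
 * function recently added to dm-linear: remap the sector and forward
 * the query to the underlying device's merge_bvec_fn.
 */
static int ioband_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
			struct bio_vec *biovec, int max_size)
{
	struct ioband_device *io = ti->private;
	struct request_queue *q = bdev_get_queue(io->dev->bdev);

	if (!q->merge_bvec_fn)
		return max_size;

	/* Redirect the query to the underlying device ... */
	bvm->bi_bdev = io->dev->bdev;
	/* ... remapping the sector the same way the map function would. */
	bvm->bi_sector = io->start + (bvm->bi_sector - ti->begin);

	/* Let the underlying driver decide how much it will accept. */
	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
}

With .merge set in the target_type, the "if (q->merge_bvec_fn &&
!ti->type->merge)" branch quoted above is skipped and max_sectors is no
longer forced down to PAGE_SIZE.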

Thanks,
Ryo Tsuruta

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

