
Re: [Xen-devel] [PATCH v4 6/8] libxl: introduce libxl__alloc_vdev



Ian Campbell writes ("Re: [Xen-devel] [PATCH v4 6/8] libxl: introduce 
libxl__alloc_vdev"):
> On Tue, 2012-04-24 at 15:58 +0100, Ian Jackson wrote:
> > Stefano Stabellini writes ("[Xen-devel] [PATCH v4 6/8] libxl: introduce 
> > libxl__alloc_vdev"):
> > > +    } while (vdev[strlen(vdev) - 1] <= 'z');
> > 
> > There is a scaling limit here of not starting more than 26 domains
> > simultaneously.  Is that acceptable ?  I'm tempted to suggest not.
> 
> While I agree we also need to start being pragmatic about ever getting
> 4.2 out the door...
> 
> If it's a trivial job to support aa, ab etc then lets do it, if not then
> lets leave it for 4.3?

The problem arises because the code is trying to avoid too much
knowledge of the devid scheme; in particular, it's trying to avoid
introducing the inverse function to libxl__device_disk_dev_number,
even though it /does/ depend intimately on the details of
libxl__device_disk_dev_number.

It is arguably a bug that libxl__device_disk_dev_number combines the
parsing of vdev strings with the composition of disk/partition numbers
into devids.  But we can avoid needing to decompose devids by passing
libxl__device_disk_dev_number a vdev format which is easier to create
than one with a base-26-encoded disk number.
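
(For comparison: composing a traditional xvd-style name from a disk
index needs the sort of bijective base-26 encoding sketched below.
This is purely illustrative, not something I'm proposing we add.)

#include <stdio.h>

/* Encode a disk index as the "a".."z", "aa", "ab", ... suffix used in
 * xvd-style names; this is the fiddly bit that the "d%d" form avoids. */
static void disk_suffix(unsigned disk, char *buf, size_t len)
{
    char tmp[8];
    unsigned i = 0, j = 0;

    for (;;) {
        tmp[i++] = 'a' + disk % 26;
        if (disk < 26 || i == sizeof(tmp))
            break;
        disk = disk / 26 - 1;
    }
    while (i > 0 && j + 1 < len)
        buf[j++] = tmp[--i];      /* letters come out last first */
    buf[j] = '\0';
}

int main(void)
{
    unsigned examples[] = { 0, 25, 26, 27 };
    char buf[8];

    for (unsigned k = 0; k < sizeof(examples)/sizeof(examples[0]); k++) {
        disk_suffix(examples[k], buf, sizeof(buf));
        printf("disk %u -> xvd%s\n", examples[k], buf);
        /* prints xvda, xvdz, xvdaa, xvdab */
    }
    return 0;
}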

What I would do is (see the sketch after this list):
  * call libxl__device_disk_dev_number once on blkdev_start with
    non-null arguments for pdisk and ppartition; check that
    the partition is 0.
  * loop incrementing the disk value
  * on each iteration,
    - generate a vdev = GCSPRINTF("d%d", disk);
    - use libxl__device_disk_dev_number on that vdev to get the devid
    - check whether this devid is available
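
Roughly, something like the following.  This is an untested sketch,
not the actual patch: vdev_is_free() is a stand-in for however we end
up checking that a devid is not already in use (presumably by looking
at the existing backend entries in xenstore), and I've spelled the
vdev as "d%dp0" rather than plain "d%d" in case the parser insists on
an explicit partition number (p0 meaning the whole disk).

#include "libxl_internal.h"

/* Placeholder for the real availability check (xenstore lookup or
 * similar); for the purposes of this sketch it says everything is
 * free. */
static int vdev_is_free(libxl__gc *gc, int devid)
{
    (void)gc; (void)devid;
    return 1;
}

static char *alloc_vdev_sketch(libxl__gc *gc, const char *blkdev_start)
{
    int disk, partition, devid;

    /* Parse blkdev_start exactly once, insisting that it names a
     * whole disk rather than a partition. */
    devid = libxl__device_disk_dev_number(blkdev_start, &disk, &partition);
    if (devid < 0 || partition != 0)
        return NULL;                 /* invalid blkdev_start */

    /* Walk upwards through the disk numbers.  "d%dp0" is trivial to
     * generate, unlike a base-26-encoded "xvd..." name, and
     * libxl__device_disk_dev_number converts it back into a devid
     * for us. */
    for (;; disk++) {
        char *vdev = GCSPRINTF("d%dp0", disk);

        devid = libxl__device_disk_dev_number(vdev, NULL, NULL);
        if (devid < 0)
            return NULL;             /* parse failed; give up */

        if (vdev_is_free(gc, devid))
            return vdev;
    }
}

The returned vdev string is gc-allocated, so the caller can treat it
like any other GCSPRINTF result and doesn't need to free it.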

Ian.
