
Re: [Xen-devel] [PATCH] blktap2 control function (version 2)


  • To: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
  • From: eXeC001er <execooler@xxxxxxxxx>
  • Date: Mon, 28 Jun 2010 20:10:35 +0400
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 28 Jun 2010 09:11:56 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>



2010/6/28 Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
eXeC001er writes ("[Xen-devel] [PATCH] blktap2 control function (version 2)"):
> Patch description:

Thanks for your contribution.

> -    if not os.access(disk, os.R_OK):
> -        msg = "Disk isn't accessible"
> -        log.error(msg)
> -        raise VmError(msg)
> +    attempt = 0
> +    while True:
> +        if not os.access(disk, os.R_OK) and attempt > 3:
> +            msg = "Disk isn't accessible"
> +            log.error(msg)
> +            raise VmError(msg)
> +        else:
> +            break
> +        attempt = attempt + 1

Um, can you explain why this is necessary or reasonable?  Why three
tries?  Why do we need to poll for this at all?  Surely if this
helps in any particular situation it means we have a race, which
should be fixed.

Sometimes, when I use pygrub, I get the error "Disk isn't accessible".
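For reference, here is a minimal sketch (hypothetical, not part of the patch) of what a bounded retry with a short delay between attempts could look like; note that the hunk above takes the 'else: break' branch on the first pass, so as written it never actually retries:

    import os
    import time

    def wait_for_readable(disk, attempts=3, delay=1.0):
        # Hypothetical helper: poll until 'disk' becomes readable, giving up
        # after 'attempts' tries with 'delay' seconds between them.  The caller
        # would log the failure and raise VmError, as the existing xend code does.
        for _ in range(attempts):
            if os.access(disk, os.R_OK):
                return True
            time.sleep(delay)
        return False

Whether polling is the right fix at all is exactly the question above: if the image only becomes readable a moment later, that points at a race somewhere else.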


> diff -r 7dcfdd45bc9e tools/python/xen/xend/XendDomainInfo.py
> --- a/tools/python/xen/xend/XendDomainInfo.py Tue Jun 22 07:31:14 2010 +0100
> +++ b/tools/python/xen/xend/XendDomainInfo.py Sun Jun 27 14:12:26 2010 +0400
> @@ -3269,7 +3269,7 @@
>                      log.info("Unmounting %s from %s." %
>                               (fn, BOOTLOADER_LOOPBACK_DEVICE))
>
> -                    dom0.destroyDevice('tap', BOOTLOADER_LOOPBACK_DEVICE)
> +                    dom0.destroyDevice(devtype, BOOTLOADER_LOOPBACK_DEVICE, force = True)

I don't understand this area of the code at all well but this change
seems plausible.

You can see the 'destroyDevice' method of the 'XendDomainInfo' class in tools/python/xen/xend/XendDomainInfo.py (around line 1302, +/- 10 lines); it is related to bug fixes 1 and 3 in my list.
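(For illustration only, a guess at the idea behind passing devtype instead of the hard-coded 'tap': if the loopback device was created as 'tap2' but destroyDevice is asked to tear down a 'tap' device, xend presumably looks for it under the wrong device class and reports it as not connected.  Everything below, apart from the names taken from the hunk, is hypothetical.)

    # Hypothetical sketch: derive the device type from the disk string that
    # pygrub booted from, instead of hard-coding 'tap'.
    if bootdisk.startswith('tap2:'):
        devtype = 'tap2'
    elif bootdisk.startswith('tap:'):
        devtype = 'tap'
    else:
        devtype = 'vbd'
    dom0.destroyDevice(devtype, BOOTLOADER_LOOPBACK_DEVICE, force = True)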


> diff -r 7dcfdd45bc9e tools/python/xen/util/blkif.py
> --- a/tools/python/xen/util/blkif.py  Tue Jun 22 07:31:14 2010 +0100
> +++ b/tools/python/xen/util/blkif.py  Sun Jun 27 14:12:26 2010 +0400
> @@ -87,7 +87,10 @@
>              fn = "/dev/%s" %(fn,)
>
>          if typ in ("tap", "tap2"):
> -            (taptype, fn) = fn.split(":", 1)
> +            if fn.count(":") == 1:
> +                (taptype, fn) = fn.split(":", 1)
> +            else:
> +                (taptype, fn) = fn.split(":", 2)[1:3]

I'm not sure I understand this.  Is it related to this:
> 2. Bug fix for error: "File 'vhd:/path/.../disk.img' doesn't exist." (not
> correct parsing)
?

Yes.

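As a worked example of the quoted hunk (the path is made up, for illustration only):

    fn = "tapdisk:vhd:/srv/xen/disk.img"       # what is left after the "tap2:" prefix is split off
    if fn.count(":") == 1:
        (taptype, fn) = fn.split(":", 1)       # e.g. "vhd:/srv/..." -> ("vhd", "/srv/...")
    else:
        (taptype, fn) = fn.split(":", 2)[1:3]  # drops the leading "tapdisk"
    # taptype == "vhd", fn == "/srv/xen/disk.img"
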
> 3. Bug fix for error: "Error: Device 51952 not connected" (in config file
> for DomU we should be use prefix "tap2:tapdisk:xxx" (tap2:xxx) for devices
> from (aio, ram, qcow, vhd, remus) or "tap:tapdisk:xxx" (tap:xxx) for devices
> from (sync, vmdk, qcow2, ioemu))

I haven't tried tap2 at all myself.  Is it fully supported everywhere
now?

See above.

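For instance, a domU config disk line following the rule quoted in point 3 might look like this (illustrative device names and paths only):

    # blktap2 drivers (aio, ram, qcow, vhd, remus):
    disk = [ 'tap2:tapdisk:vhd:/srv/xen/domu.vhd,xvda,w' ]      # or the short form 'tap2:vhd:...'
    # blktap1 drivers (sync, vmdk, qcow2, ioemu):
    #disk = [ 'tap:tapdisk:qcow2:/srv/xen/domu.qcow2,xvda,w' ]  # or the short form 'tap:qcow2:...'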

Later:
> blktap2_ctrl_func.patch - for xen-4.x-testing only (because in xen-unstable
> I see the previous version of the patch)

Does that mean that xen-unstable needs fixing too?  I'd rather apply
a change to xen-unstable first and test it there.

xen-unstable has my previous patch, but it can lead to a regression (if 'tap:xxx' is used instead of 'tap:tapdisk:xxx').


Ian.

Thanks.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

