WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-ia64-devel

RE: [Xen-ia64-devel] New tree issues (domU restart and domVTi boot issues)

To: "Alex Williamson" <alex.williamson@xxxxxx>
Subject: RE: [Xen-ia64-devel] New tree issues (domU restart and domVTi boot issues)
From: "Zhang, Xing Z" <xing.z.zhang@xxxxxxxxx>
Date: Thu, 14 Jun 2007 21:48:02 +0800
Cc: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 14 Jun 2007 06:46:04 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <1181825329.6221.632.camel@bling>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AceugmWzp/Kkb9TQRZW/Eo3SwiInUwABkqxA
Thread-topic: [Xen-ia64-devel] New tree issues (domU restart and domVTi boot issues)
>   No, I was thinking we might store the nvram in the EFI partition of
>the HVM guest.  pygrub already has some support for finding an
>elilo.conf in the guest EFI partition, parsing it and pulling out the
>kernel and initrd for PV guests.  I'm wondering if a similar method
>could be used to read and write the NVRAM image into the guest's EFI
>partition.
>
>   I'm not sure my read-only image scenario makes any sense, and this
>approach would suffer the same issue with one nvram store per EFI
>partition.  I don't know if that's an unreasonable limitation or not.
>The thing I like about storing the nvram in the guest image is that it's
>self contained.  The guest image could be copied to another system and
>the nvram would be there with it.  I also don't know if libfsimage has
>support to create and later write /etc/xen/nvram file as it's typically
>only used to read out of the boot partition.
>
>   I can see now why you wanted to store the nvram based on disk image
>path, it's a similar idea to storing the nvram in the guest EFI
>partition.  If the nvram is stored in the dom0 fs, using domain name
>seems like the easiest approach, but I still like the idea of having the
>nvram stored in the guest image.
[Zhang, Xing Z] 
        Thanks for your advice. I agree we need to think more about how
to save the nvram data. I originally intended to save it into the GFW
binary, but the GFW is shared by multiple domains, so saving the nvram
there would complicate things. I don't know much about how pygrub works,
but I will keep thinking about how to make this work.
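
[Editor's note: as a rough illustration of the pygrub-style approach mentioned above, here is a minimal sketch of pulling kernel/initrd entries out of an elilo.conf. The key names follow elilo's config syntax, but this is a simplified assumption of what pygrub actually does, which also involves reading the file out of the guest's EFI partition via libfsimage:]

```python
def parse_elilo_conf(text):
    """Pull boot entries (kernel, label, initrd, append) from elilo.conf text.

    Simplified sketch: each image= line starts a new entry; real elilo.conf
    syntax has many more options than are handled here.
    """
    entries = []
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip().strip('"')
        if key == "image":
            current = {"kernel": value}
            entries.append(current)
        elif current is not None and key in ("label", "initrd", "append"):
            current[key] = value
    return entries

sample = """\
default=linux
image=vmlinuz-2.6.18-xen
    label=linux
    initrd=initrd-2.6.18-xen.img
    append="root=/dev/sda2 ro"
"""
```

Reading or writing an nvram image in the guest's EFI partition would need the same plumbing, plus write support in libfsimage, which the quoted text notes is uncertain.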

>[2007-06-13 22:58:55 2862] ERROR (__init__:1072) XendDomainInfo.destroy:
>xc.domain_destroy failed.
>Traceback (most recent call last):
>  File "//usr/lib/python/xen/xend/XendDomainInfo.py", line 1704, in destroyDomain
>    xc.domain_destroy(self.domid)
>Error: (1, 'Internal error', 'Cannot get nvram data from GFW!\n (3 = No such process)')
>
[Zhang, Xing Z] 
I'm afraid you need to use the GFW version Flash.fd.2007.06.05; the
nvram patch requires support from the GFW.

BTW: I can't reproduce the XenU restart issue or the poweroff issue.
Both reboot and poweroff work fine on my box. I didn't use the newest
changeset because it fails to build: the error is a missing definition
of the IA64_PSR_AC and IA64_PSR_BN macros used in xc_ia64_hvm_build.c.
I just applied the nvram patch on top of an older changeset. I will
keep looking into it tomorrow.

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
