
Re: [Xen-devel] Xen, Linux and EFI.

On Wed, Jul 11, 2012 at 05:27:08PM -0400, Shriram Rajagopalan wrote:
> On Wed, Jul 11, 2012 at 4:45 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@xxxxxxxxxx> wrote:
> > Hey,
> >
> > There has been some discussion about EFI and SecureBoot and such.
> >
> > Most of the time I get questions in the form of "How do I get Fedora 17
> > with Xen to do EFI". I am going to concentrate on Fedora, but I think
> > this applies to other distros too.
> >
> > From my reading (I haven't actually tried EFI yet), there are two ways
> > to boot up a system:
> >
> >  - Using grub2.efi. Grub2 does the EFI API calls and then starts the Xen
> >    hypervisor as if there were no EFI. This means no EFI calls from
> >    Linux or Xen are required.
> >
> >
> >  - Using xen.efi. Xen can be built as a PE (Portable Executable) and it can
> >    boot as an EFI image. Naturally you also need to provide a configuration
> >    file and here are the details on it:
> >    http://xenbits.xen.org/docs/unstable/misc/efi.html
> >
> >    And you would also need to configure the EFI nvram to execute xen.efi
> >    instead of grub2.efi.
> >
> >    For the Linux side, the kernel needs to make new EFI variant hypercalls.
> >    Currently the SLES kernel is capable of it. The upstream Linux kernel
> >    cannot do it. There were patches proposed for it:
> >    http://lists.xen.org/archives/html/xen-devel/2012-02/msg02027.html
> >
> >
> Do the Linux-side dom0 kernel changes need to be done irrespective of the
> two options above, or do they apply only when booting with xen.efi?

I think the latter only. Though I am not sure how, in the first case (GRUB2),
the E820 map gets passed to the hypervisor.
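
(For reference - not tested here, and the paths and kernel version below are
made up - the xen.efi route wants a config file next to the binary, per the
efi.html page quoted above, roughly of this shape:

```
[global]
default=fedora

[fedora]
options=console=vga loglvl=all
kernel=vmlinuz-3.4.4 root=/dev/sda2 console=hvc0
ramdisk=initramfs-3.4.4.img
```

and an NVRAM entry pointing at xen.efi can be created with something like
efibootmgr -c -L "Xen" -l '\EFI\xen\xen.efi', adjusting -d/-p for wherever
your EFI system partition lives.)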

> I spent a week trying to get Xen to boot with grub2.efi (ubuntu 12.04).
> I ended up getting "Not enough memory to relocate domain 0".
> So I presume that the dom0 kernel EFI support needs to be done for both
> cases (grub2.efi and xen.efi)?
> Additional info:
>  Hardware: IBM System X server.
>  It appeared that when booting Xen under grub2.efi, Xen was picking up an
> E801 map

Huh. E801 is from the ancient days. I am a bit surprised that grub2.efi would
manufacture such an ancient map.

So just to make sure I am not confused - you ran GRUB2 with the normal
hypervisor and Linux kernel. What did your serial output look like?
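
(If serial logging wasn't on, a grub.cfg entry along these lines - the file
names are made up, the flags are the usual Xen console options - should get
the hypervisor talking on com1:

```
menuentry 'Xen (serial debug)' {
        multiboot /boot/xen.gz com1=115200,8n1 console=com1,vga loglvl=all
        module /boot/vmlinuz-3.2.0 console=hvc0 root=/dev/sda1
        module /boot/initrd.img-3.2.0
}
```

)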

>  instead of the E820 map that was on the system. I forced the Xen code to
>  use the multiboot E820 map instead of the native one (based on a forum
>  post I saw). That didn't help much.

What did the memory map look like when you booted with GRUB2.efi + Linux?
Was it the same or different? I am trying to figure out whether the issue is
that Xen needs extra code to deal with a GRUB2-manufactured E801 map - logic
which the baremetal kernel may already have.

> So I ended up booting with a SLES kernel. Not even sure if opensuse 12.1
> will work.

With what hypervisor? Same one you used when you tried GRUB2 with Xen earlier?
> >    which were mostly ports of how SLES did it (and they should reflect
> >    the proper ownership, which they don't right now).
> >
> >    The EFI maintainer (Matthew) commented
> >    http://lists.xen.org/archives/html/xen-devel/2012-02/msg00815.html
> >    that he would like a better abstraction model for it. Mainly to
> >    push those calls deeper down (i.e. introduce the registration in the
> >    efi_calls). Or perhaps by providing in boot_params.efi_info.efi_systab
> >    a finely crafted structure pointing to Linux functions that would
> >    do the hypercalls.
> >
> > And there you have it. In other words it needs somebody willing to
> > take the patches as a baseline and do some exciting new work.
> > I sadly don't have the time right now to address this :-(
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxx
> > http://lists.xen.org/xen-devel
> >
