
Re: [Xen-devel] [PATCH for 4.5 v6 1/1] Add mmio_hole_size



On Wed, Oct 08, 2014 at 09:41:36AM -0400, Don Slutz wrote:
> On 10/07/14 10:08, Konrad Rzeszutek Wilk wrote:
> >On Fri, Oct 03, 2014 at 09:48:38AM -0400, Don Slutz wrote:
> >>If you add enough PCI devices then all MMIO may not fit below 4G,
> >>which may not be the layout the user wanted. This allows you to
> >>increase the below-4G address space that PCI devices can use, and
> >>therefore in more cases avoid having any MMIO above 4G.
> >>
> >>There are real PCI cards that do not support mmio over 4G, so if you
> >>want to emulate them precisely, you may also need to increase the
> >>space below 4G for them. There are drivers for these cards that also
> >>do not work if they have their mmio space mapped above 4G.
> >>
> >>This allows growing the MMIO hole to the size needed.
> >>
> >>This may help with using PCI passthrough and HVM.
> >Ha! It definitely helps in some cases with GPUs.
> >>Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
> >>---
> >>v6:
> >>     I think we should get rid of xc_hvm_build_with_hole() entirely.
> >>       Done.
> >>     Add text describing restrictions on hole size
> >>       Done.
> >>     "<=", to be consistent with the check in
> >>         libxl__domain_create_info_setdefault ()
> >>       Done
> >>
> >>v5:
> >>   Add LIBXL_HAVE_BUILDINFO_HVM_MMIO_HOLE_SIZE.
> >>
> >>v4:
> >>     s/pci_hole_min_size/mmio_hole_size/
> >>     s/HVM_BELOW_4G_RAM_END/HVM_BELOW_4G_MMIO_START/
> >>     Add ignore to hvmloader message
> >>     Adjust commit message
> >>     Dropped min; stopped mmio_hole_size changing in hvmloader
> >>
> >>
> >>  docs/man/xl.cfg.pod.5             | 16 +++++++++
> >>  tools/firmware/hvmloader/config.h |  1 -
> >>  tools/firmware/hvmloader/pci.c    | 71 +++++++++++++++++++++++++++------------
> >>  tools/libxl/libxl.h               |  5 +++
> >>  tools/libxl/libxl_create.c        | 29 ++++++++++++----
> >>  tools/libxl/libxl_dm.c            | 24 +++++++++++--
> >>  tools/libxl/libxl_dom.c           |  8 +++++
> >>  tools/libxl/libxl_types.idl       |  1 +
> >>  tools/libxl/xl_cmdimpl.c          | 12 +++++++
> >>  9 files changed, 136 insertions(+), 31 deletions(-)
> >>
> >>diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> >>index 8bba21c..0a870d2 100644
> >>--- a/docs/man/xl.cfg.pod.5
> >>+++ b/docs/man/xl.cfg.pod.5
> >>@@ -1111,6 +1111,22 @@ wallclock (i.e., real) time.
> >>  =back
> >>+=head3 Memory layout
> >>+
> >>+=over 4
> >>+
> >>+=item B<mmio_hole_size=BYTES>
> >>+
> >>+Specifies the size the MMIO hole below 4GiB will be.  Only valid for
> >>+device_model_version = "qemu-xen".  Note: this requires QEMU 2.1.0
> >>+or later.
> >This patch depends on 'max-ram-below-4g' being in QEMU upstream.
> >Has that been codified/Acked by the maintainers there?
> >(It would be rather cumbersome if this changed).
> 
> Yes, it is part of QEMU 2.1 which has been released.

I think you might just want to ditch that information as
..
> 
> 
> >Is there a link to the latest patch-set?
> 
> I currently do not have an external git server.
> 
> >
> >Naturally to use this with Xen 4.5 one would have to replace
> >the qemu-xen that we ship with it, with an upstream one.
> 
> Nope.  The QEMU that is part of Xen also has this.  It was
> done in:
> 
> Message-ID: <alpine.DEB.2.02.1407281711550.2293@xxxxxxxxxxxxxxxxxxxxxxx>
> References: <alpine.DEB.2.02.1407241221110.2293@xxxxxxxxxxxxxxxxxxxxxxx>
>     <1406554299-25744-1-git-send-email-dslutz@xxxxxxxxxxx>
> 
> 
> 
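As an aside, for readers following the thread: the property in question is
QEMU's 'max-ram-below-4g' machine option, which is part of QEMU 2.1 as noted
above.  Purely as an illustration - the actual -machine argument is built by
libxl in the patch's libxl_dm.c changes - it looks roughly like this on a
plain QEMU command line, with example values:

    # illustrative only: cap RAM below 4GiB at 2GiB, leaving a 2GiB MMIO hole
    qemu-system-x86_64 -machine pc,max-ram-below-4g=2G ...
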
> >
> >I believe Fedora is a distro that does that (as in
> >it builds using the same QEMU for both KVM and Xen).
> >Are there any others?
> 
> Not sure.
> 
> >
> >I am conflicted about this as:
> >  - Internally (Oracle) we have a use for this and at some
> >    point developed something quite similar, so I am partially
> >    biased toward it.
> >
> >  - This by itself won't work with the version of QEMU-xen that
> >    is going to be shipped in Xen 4.5, which means it is a bit
> >    of dead code - but then there are some patches that we
> >    put in Xen 4.5 that were cleanups to help with further work.
> >    And there are also pieces of code in the hypervisor
> >    which don't have corresponding code in the toolstack.
> 
> This is wrong.  The QEMU-xen that is going to be shipped
> with Xen 4.5 supports this.

.. you say that the version of QEMU-xen that is going to ship with
Xen 4.5 will have this.
> 
> >  - The 'max-ram-below-4g' not being codified makes me a bit
> >    uneasy - as we would have to alter this when your patches
> >    make it into QEMU v2.1 (or later).
> 
> I did not understand this.

You can ignore that sentence. Somehow I was thinking your patches
for QEMU were not in the QEMU code base - as I read your 'QEMU 2.1.0
or later' as 'later' == 'not in tree yet'.

> 
> >On the patch itself:
> >  - It is isolated. It won't be run by the majority of users - hence
> >    the only bugs that could bite them would be ones it introduces in
> >    the common code, and there does not seem to be much of that.
> >
> >  - It needs an Ack or Reviewed-by from the toolstack maintainers
> >    and also from George (who touched the hvmloader last time).
> >
> >
> >>+
> >>+Cannot be smaller than 256MiB. Cannot be larger than 4GiB.
> >>+
> ...
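
For illustration, a guest config fragment that uses the new option might
look like the following (a sketch only; the values are examples and not
taken from the patch):

    # xl guest config fragment (illustrative values)
    builder              = "hvm"
    device_model_version = "qemu-xen"
    # request a 1GiB MMIO hole below 4GiB; must be >= 256MiB and <= 4GiB
    mmio_hole_size       = 1073741824
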
> >>diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> >>index 8b82584..3737148 100644
> >>--- a/tools/libxl/libxl_create.c
> >>+++ b/tools/libxl/libxl_create.c
> >>@@ -23,6 +23,7 @@
> >>  #include <xc_dom.h>
> >>  #include <xenguest.h>
> >>  #include <xen/hvm/hvm_info_table.h>
> >>+#include <xen/hvm/e820.h>
> >>  int libxl__domain_create_info_setdefault(libxl__gc *gc,
> >>                                           libxl_domain_create_info *c_info)
> >>@@ -428,13 +429,26 @@ int libxl__domain_build(libxl__gc *gc,
> >>          vments[4] = "start_time";
> >>          vments[5] = libxl__sprintf(gc, "%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
> >>-        localents = libxl__calloc(gc, 7, sizeof(char *));
> >>-        localents[0] = "platform/acpi";
> >>-        localents[1] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
> >>-        localents[2] = "platform/acpi_s3";
> >>-        localents[3] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
> >>-        localents[4] = "platform/acpi_s4";
> >>-        localents[5] = libxl_defbool_val(info->u.hvm.acpi_s4) ? "1" : "0";
> >>+        localents = libxl__calloc(gc, 9, sizeof(char *));
> >>+        i = 0;
> >>+        localents[i++] = "platform/acpi";
> >>+        localents[i++] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
> >>+        localents[i++] = "platform/acpi_s3";
> >>+        localents[i++] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
> >>+        localents[i++] = "platform/acpi_s4";
> >>+        localents[i++] = libxl_defbool_val(info->u.hvm.acpi_s4) ? "1" : "0";
> >>+        if (info->u.hvm.mmio_hole_size) {
> >>+            uint64_t max_ram_below_4g =
> >>+                (1ULL << 32) - info->u.hvm.mmio_hole_size;
> >>+
> >>+            if (max_ram_below_4g <= HVM_BELOW_4G_MMIO_START) {
> >>+                localents[i++] = "platform/mmio_hole_size";
> >>+                localents[i++] =
> >>+                    libxl__sprintf(gc, "%"PRIu64,
> >>+                                   info->u.hvm.mmio_hole_size);
> >>+            }
> >>+        }
> >>+        assert(i < 9);
> >Why? Please include a comment explaining why.
> 
> Ok.  Does:
> 
> /* Check for heap corruption, localents array size is 9 */
> help?
> 
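To spell out the arithmetic behind the check quoted above: writing the
xenstore key only when max_ram_below_4g <= HVM_BELOW_4G_MMIO_START is the
same as requiring the requested hole to be at least 256MiB.  A small
standalone sketch of that relationship (assuming HVM_BELOW_4G_MMIO_START is
0xF0000000, i.e. 4GiB minus 256MiB, as in xen/hvm/e820.h; the program and
its values are illustrative only):

    /* Illustrative sketch; mirrors the check in the hunk above.
     * Assumes HVM_BELOW_4G_MMIO_START == 0xF0000000 (4GiB - 256MiB). */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SKETCH_HVM_BELOW_4G_MMIO_START 0xF0000000ULL

    int main(void)
    {
        uint64_t mmio_hole_size = 2ULL << 30;  /* e.g. a 2GiB hole */
        uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;

        /* max_ram_below_4g <= 0xF0000000 is equivalent to
         * mmio_hole_size >= 256MiB, i.e. no smaller than the default hole. */
        if (max_ram_below_4g <= SKETCH_HVM_BELOW_4G_MMIO_START)
            printf("hole of %" PRIu64 " bytes used; max-ram-below-4g=%" PRIu64 "\n",
                   mmio_hole_size, max_ram_below_4g);
        else
            printf("hole smaller than 256MiB; key not written\n");
        return 0;
    }
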
> >>          break;
> >>      case LIBXL_DOMAIN_TYPE_PV:
> >>@@ -460,6 +474,7 @@ int libxl__domain_build(libxl__gc *gc,
> >>              vments[i++] = "image/cmdline";
> >>              vments[i++] = (char *) state->pv_cmdline;
> >>          }
> >>+        assert(i < 11);
> >Why? Please include a comment explaining why.
> 
> Ditto:
> 
> /* Check for heap corruption, vments array size is 11 */

I was thinking more of - why are they even needed? It is pretty
evident what the value would be - so you could just ditch them.
Unless Ian or Wei think they should be in there.
