
Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server





On 12/21/2015 10:45 PM, Jan Beulich wrote:
On 15.12.15 at 03:05, <shuai.ruan@xxxxxxxxxxxxxxx> wrote:
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -935,6 +935,9 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s,
          rangeset_destroy(s->range[i]);
  }

+static const char *io_range_name[ NR_IO_RANGE_TYPES ] =

const

OK. Thanks.


+                                {"port", "mmio", "pci", "wp-ed memory"};

As brief as possible, but still understandable - e.g. "wp-mem"?


Got it. Thanks.

@@ -2593,6 +2597,16 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
          type = (p->type == IOREQ_TYPE_PIO) ?
                  HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
          addr = p->addr;
+        if ( type == HVMOP_IO_RANGE_MEMORY )
+        {
+             ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
+                                          &p2mt, P2M_UNSHARE);
+             if ( p2mt == p2m_mmio_write_dm )
+                 type = HVMOP_IO_RANGE_WP_MEM;
+
+             if ( ram_page )
+                 put_page(ram_page);
+        }

You evaluate the page's current type here - what if it subsequently
changes? I don't think it is appropriate to leave the hypervisor at
the mercy of the device model here.


Well, I do not quite understand your concern. :)
Here, get_page_from_gfn() is used to determine whether the address is MMIO
or write-protected RAM. If this p2m type changes, the change would have to
be triggered by the guest and the device model, e.g. when this RAM is no
longer supposed to be used as the graphics translation table, and that
should be fine.
But I also wonder: is there any other routine more appropriate for getting
a p2m type from a gfn?
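For reference, the selection logic in the hunk above can be sketched as the
following self-contained fragment. The types and the p2m lookup are stubbed
for illustration only; in Xen, get_page_from_gfn() returns the page reference
and fills in the p2m type, and the page must be released with put_page():

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for Xen's types. p2m_mmio_write_dm and the
 * HVMOP_IO_RANGE_* values mirror the patch; the lookup is a stub. */
typedef enum { p2m_ram_rw, p2m_mmio_dm, p2m_mmio_write_dm } p2m_type_t;

enum {
    HVMOP_IO_RANGE_PORT,
    HVMOP_IO_RANGE_MEMORY,
    HVMOP_IO_RANGE_PCI,
    HVMOP_IO_RANGE_WP_MEM,
};

#define PAGE_SHIFT 12

/* Stub standing in for the p2m query done via get_page_from_gfn():
 * pretend one particular gfn is write-protected guest RAM. */
static p2m_type_t stub_p2m_type_of(uint64_t gfn)
{
    return gfn == 0x1234 ? p2m_mmio_write_dm : p2m_mmio_dm;
}

/* Range-type selection as in the hunk: default to the MEMORY range set,
 * but route write-protected (write_dm) pages to the WP_MEM range set. */
static int select_range_type(uint64_t addr)
{
    int type = HVMOP_IO_RANGE_MEMORY;

    if ( stub_p2m_type_of(addr >> PAGE_SHIFT) == p2m_mmio_write_dm )
        type = HVMOP_IO_RANGE_WP_MEM;

    return type;
}
```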

--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -48,8 +48,8 @@ struct hvm_ioreq_vcpu {
      bool_t           pending;
  };

-#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
-#define MAX_NR_IO_RANGES  256
+#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP_MEM + 1)
+#define MAX_NR_IO_RANGES  8192

I'm sure I've objected before to this universal bumping of the limit:
Even if I were to withdraw my objection to the higher limit on the
new kind of tracked resource, I would continue to object to all
other resources getting their limits bumped too.


Hah. So how about we keep MAX_NR_IO_RANGES at 256, and introduce a new
value, say MAX_NR_WR_MEM_RANGES, set to 8192 in this patch? :)
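A minimal sketch of that per-type-limit idea, assuming the hypothetical
MAX_NR_WR_MEM_RANGES name proposed above (only the write-protected memory
range set gets the larger budget; all other range sets keep the existing
limit):

```c
#include <assert.h>

/* Existing limit kept for port/mmio/pci range sets. */
#define MAX_NR_IO_RANGES     256
/* Hypothetical larger budget only for write-protected memory ranges. */
#define MAX_NR_WR_MEM_RANGES 8192

enum {
    HVMOP_IO_RANGE_PORT,
    HVMOP_IO_RANGE_MEMORY,
    HVMOP_IO_RANGE_PCI,
    HVMOP_IO_RANGE_WP_MEM,
    NR_IO_RANGE_TYPES
};

/* Pick the range-count limit for a given range-set type, so the bump
 * applies only to the new kind of tracked resource. */
static unsigned int max_ranges_for_type(unsigned int type)
{
    return type == HVMOP_IO_RANGE_WP_MEM ? MAX_NR_WR_MEM_RANGES
                                         : MAX_NR_IO_RANGES;
}
```

The rangeset-creation loop in hvm_ioreq_server_alloc_rangesets() could then
pass max_ranges_for_type(i) instead of the single MAX_NR_IO_RANGES constant.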

Thanks a lot & happy new year!


Yu

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel



