
[Xen-ia64-devel] Re: [Xen-devel] [PATCH] Std VGA Performance



On Wed, 2007-10-24 at 17:36 -0400, Ben Guthro wrote:
> This patch improves the performance of Standard VGA,
> the mode used during Windows boot and by the Linux
> splash screen.

Hi,

   ia64 uses VGA too.  I've been able to regain some functionality with
the patch below, but the VGA modes used by our firmware still have
significant issues (once we boot to Linux userspace, VGA text mode
becomes readable).  It seems we may have lost support for some basic
text VGA modes.  I haven't tried to understand the qemu changes yet,
but are we sacrificing compatibility for performance?  Patch and screen
shot below.  Thanks,

        Alex

PS - for xen-ia64-devel: both the Open Source and Intel GFW have issues
with EFI text mode with this patch (use the EFI shell to see it on Intel GFW).

Signed-off-by: Alex Williamson <alex.williamson@xxxxxx>
---

diff -r 4034317507de xen/arch/ia64/vmx/mmio.c
--- a/xen/arch/ia64/vmx/mmio.c  Mon Oct 29 16:49:02 2007 +0000
+++ b/xen/arch/ia64/vmx/mmio.c  Mon Oct 29 12:29:18 2007 -0600
@@ -56,10 +56,12 @@ static int hvm_buffered_io_intercept(ior
 {
     struct vcpu *v = current;
     spinlock_t  *buffered_io_lock;
-    buffered_iopage_t *buffered_iopage =
+    buffered_iopage_t *pg =
         (buffered_iopage_t *)(v->domain->arch.hvm_domain.buffered_io_va);
-    unsigned long tmp_write_pointer = 0;
     int i;
+    buf_ioreq_t bp;
+    /* Timeoffset sends 64b data, but no address.  Use two consecutive slots. */
+    int qw = 0;
 
     /* ignore READ ioreq_t! */
     if ( p->dir == IOREQ_READ )
@@ -75,11 +77,41 @@ static int hvm_buffered_io_intercept(ior
     if ( i == HVM_BUFFERED_IO_RANGE_NR )
         return 0;
 
+    /* Return 0 for the cases we can't deal with. */
+    if ( p->addr > 0xffffful || p->data_is_ptr || p->df || p->count != 1 )
+        return 0;
+
+    bp.type = p->type;
+    bp.dir = p->dir;
+    switch (p->size) {
+    case 1:
+        bp.size = 0;
+        break;
+    case 2:
+        bp.size = 1;
+        break;
+    case 4:
+        bp.size = 2;
+        break;
+    case 8:
+        bp.size = 3;
+        qw = 1;
+        gdprintk(XENLOG_INFO, "quadword ioreq type:%d data:%"PRIx64"\n",
+                 p->type, p->data);
+        break;
+    default:
+        gdprintk(XENLOG_WARNING, "unexpected ioreq size:%"PRId64"\n", p->size);
+        return 0;
+    }
+
+    bp.data = p->data;
+    bp.addr = qw ? ((p->data >> 16) & 0xfffful) : (p->addr & 0xffffful);
+
     buffered_io_lock = &v->domain->arch.hvm_domain.buffered_io_lock;
     spin_lock(buffered_io_lock);
 
-    if ( buffered_iopage->write_pointer - buffered_iopage->read_pointer ==
-         (unsigned long)IOREQ_BUFFER_SLOT_NUM ) {
+    if ( pg->write_pointer - pg->read_pointer >=
+         (unsigned long)IOREQ_BUFFER_SLOT_NUM - (qw ? 1 : 0) ) {
         /* the queue is full.
          * send the iopacket through the normal path.
          * NOTE: The arithimetic operation could handle the situation for
@@ -89,13 +121,19 @@ static int hvm_buffered_io_intercept(ior
         return 0;
     }
 
-    tmp_write_pointer = buffered_iopage->write_pointer % IOREQ_BUFFER_SLOT_NUM;
-
-    memcpy(&buffered_iopage->ioreq[tmp_write_pointer], p, sizeof(ioreq_t));
+    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
+           &bp, sizeof(bp));
+
+    if (qw) {
+        bp.data = p->data >> 32;
+        bp.addr = (p->data >> 48) & 0xfffful;
+        memcpy(&pg->buf_ioreq[(pg->write_pointer + 1) % IOREQ_BUFFER_SLOT_NUM],
+               &bp, sizeof(bp));
+    }
 
     /*make the ioreq_t visible before write_pointer*/
     wmb();
-    buffered_iopage->write_pointer++;
+    pg->write_pointer += qw ? 2 : 1;
 
     spin_unlock(buffered_io_lock);
 

Attachment: corrupted_gfx.png
Description: PNG image
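
For anyone puzzling over the new encoding, here is a minimal consumer-side
sketch of how the records written by hvm_buffered_io_intercept() above might
be decoded, including the two-slot quadword case.  This is not the qemu-dm
code: the buf_ioreq_t field widths, the ring size, and the
drain_buffered_ioreqs() helper are my assumptions for illustration; only the
bit-packing mirrors the producer code in the patch.

/*
 * Consumer-side sketch only.  Assumed: a 16-bit data field, an addr field
 * wide enough for 20 bits, and IOREQ_BUFFER_SLOT_NUM slots in the ring.
 * The bit layout of the quadword records follows the producer code above.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define IOREQ_BUFFER_SLOT_NUM 80            /* assumed ring size */

typedef struct {
    uint8_t  type;                          /* ioreq type */
    uint8_t  dir:1;                         /* only writes are buffered */
    uint8_t  size:2;                        /* 0=>1, 1=>2, 2=>4, 3=>8 bytes */
    uint16_t data;                          /* low-order data (assumed 16 bits) */
    uint32_t addr;                          /* address, or extra data bits for qw */
} buf_ioreq_t;

typedef struct {
    uint32_t    read_pointer;
    uint32_t    write_pointer;
    buf_ioreq_t buf_ioreq[IOREQ_BUFFER_SLOT_NUM];
} buffered_iopage_t;

/* Drain the ring, recombining the two-slot quadword records. */
static void drain_buffered_ioreqs(buffered_iopage_t *pg)
{
    while (pg->read_pointer != pg->write_pointer) {
        buf_ioreq_t *bp =
            &pg->buf_ioreq[pg->read_pointer % IOREQ_BUFFER_SLOT_NUM];
        unsigned int slots = 1;
        unsigned int size = 1u << bp->size;
        uint64_t addr = bp->addr;
        uint64_t data = bp->data;

        if (bp->size == 3) {
            /* Quadword (e.g. timeoffset): slot 0 holds data[15:0] in .data
             * and data[31:16] in .addr; slot 1 holds data[47:32] in .data
             * and data[63:48] in .addr.  There is no address to recover. */
            buf_ioreq_t *hi =
                &pg->buf_ioreq[(pg->read_pointer + 1) % IOREQ_BUFFER_SLOT_NUM];
            data |= (uint64_t)bp->addr << 16;
            data |= (uint64_t)hi->data << 32;
            data |= (uint64_t)hi->addr << 48;
            addr = 0;
            slots = 2;
        }

        /* A real device model would dispatch on bp->type here
         * (VGA MMIO, port I/O, timeoffset, ...). */
        printf("type %u size %u addr 0x%" PRIx64 " data 0x%" PRIx64 "\n",
               (unsigned int)bp->type, size, addr, data);

        /* A real consumer would want a read barrier here, pairing with the
         * producer's wmb(), before publishing the new read_pointer. */
        pg->read_pointer += slots;          /* consume the slot(s) */
    }
}

The point of the two-slot trick, as the comment in the patch says, is that a
timeoffset record carries 64 bits of data but no meaningful address, so the
otherwise-unused addr fields of two consecutive slots can absorb the bits
that do not fit in the narrow data field.
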

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel

 

