
Re: [Xen-devel] Re: stdvga: slow ioreq



Yes, I guess you're using TightVNC? I always have a much better
experience with RealVNC.

 -- Keir

On 15/11/07 14:58, "Christoph Egger" <Christoph.Egger@xxxxxxx> wrote:

> 
> The HVM guest window resizing problem via VNC is NOT a Xen/Qemu bug.
> The actual problem is that the vncviewer client lacks support for the
> DesktopSize VNC pseudo-encoding. Clients missing this feature are not
> notified by the server when the desktop size changes.
> 
> Looking around a little, I found two VNC clients that support this:
> RealVNC and ggivnc (SVN(!) version, http://www.lysator.liu.se/~peda/ggivnc/).
> 
> Christoph
> 
> 
> On Thursday 08 November 2007 13:49:47 Christoph Egger wrote:
>> My OpenSuSE 10.2 HVM guest is 64bit.
>> 
>> I just found a workaround:
>> Close the vnc client and re-connect. Then the vnc client uses
>> a larger window and I can actually see the cursor line.
>> 
>> This behaviour suggests to me that the VNC server code does not
>> notify the client about graphics mode changes.
>> 
>> Christoph
>> 
>> On Tuesday 06 November 2007 17:26:45 Dave Lively wrote:
>>> Hi Christoph -
>>>   I'm trying to reproduce the behavior you're seeing.  Is your guest
>>> 32- or 64-bit?
>>> 
>>> Dave
>>> 
>>> On 11/5/07, Christoph Egger <Christoph.Egger@xxxxxxx> wrote:
>>>> An obvious bug I am seeing (but is not indicated by this certain
>>>> diagnostic message) is a scrolling bug. It appears when I boot a HVM
>>>> guest that uses a graphic mode (e.g. OpenSuSE 10.2). I am connected via
>>>> VNC to it. I don't see the line where the cursor is. I have to blindly
>>>> guess when the guest expects me to log in. After a (blind) successful
>>>> login, I have to type something like 'ls' several times, until I see
>>>> what I actually typed and the output of what I typed.
>>>> The latest changeset I tried is 16317 and the bug is reproducible.
>>>> The oldest changeset I tried so far is 16281 and the issue was
>>>> reproducible there, too. I hope that helps.
>>>> 
>>>> Christoph
>>>> 
>>>> On Monday 05 November 2007 17:38:24 Robert Phillips wrote:
>>>>> Hi Christoph,
>>>>> 
>>>>> What you are seeing is (I hope) just an annoying diagnostic.  If you
>>>>> reduce your guest log level or eliminate the gdprintk() the problem
>>>>> should go away.
>>>>> 
>>>>> The diagnostic is warning that the ioreq could not be placed in the
>>>>> buffered iopage so it is being sent synchronously to qemu.
>>>>> The ioreq could not be placed in the buffered iopage because it
>>>>> couldn't be condensed into the (new) format.
>>>>> The new format has no room to store 'count', so ioreqs with
>>>>> count != 1 take the slow route.
>>>>> 
>>>>> Our experience is that the few ioreqs handled this way are far
>>>>> outnumbered by the condensable ones, and since far more condensable
>>>>> ioreqs now fit in the buffered iopage than would fit with the old
>>>>> format, the performance improvement is substantial.
>>>>> 
>>>>> If the diagnostic is pointless and annoying, it should be eliminated.
>>>>> 
>>>>> -- Robert Phillips
>>>>> 
>>>>> On 11/5/07, Christoph Egger <Christoph.Egger@xxxxxxx> wrote:
>>>>>> Hello Robert!
>>>>>> 
>>>>>> Since changeset 16285 (xen-staging), I get the following output
>>>>>> when I launch an HVM guest and the VGABios is running:
>>>>>> 
>>>>>> 
>>>>>> --------------------------------------------------------------------
>>>>>> (XEN) intercept.c:172:d1 slow ioreq. type:1 size:1 addr:0xa0000
>>>>>> dir:0 ptr:1 df:0 count:16
>>>>>> (XEN) intercept.c:172:d1 slow ioreq. type:1 size:1 addr:0xa0020
>>>>>> dir:0 ptr:1 df:0 count:16
>>>>>> (XEN) intercept.c:172:d1 slow ioreq. type:1 size:1 addr:0xa0040
>>>>>> dir:0 ptr:1 df:0 count:16
>>>>>> (XEN) intercept.c:172:d1 slow ioreq. type:1 size:1 addr:0xa0060
>>>>>> dir:0 ptr:1 df:0 count:16
>>>>>> (XEN) intercept.c:172:d1 slow ioreq. type:1 size:1 addr:0xa0080
>>>>>> dir:0 ptr:1 df:0 count:16
>>>>>> (XEN) intercept.c:172:d1 slow ioreq. type:1 size:1 addr:0xa00a0
>>>>>> dir:0 ptr:1 df:0 count:16
>>>>>> [...]
>>>>>> (XEN) intercept.c:172:d1 slow ioreq. type:1 size:1 addr:0xa1fe0
>>>>>> dir:0 ptr:1 df:0 count:16
>>>>>> --------------------------------------------------------------------
>>>>>> 
>>>>>> This is not the full output (to keep this mail readable). The
>>>>>> addresses start at 0xa0000 and go up to 0xa1fe0, always increasing
>>>>>> by 0x20. (So you can generate the full output yourself :)
>>>>>> 
>>>>>> The output comes from xen/arch/x86/intercept.c, function
>>>>>> hvm_buffered_io_send().
>>>>>> It is this code snippet:
>>>>>> 
>>>>>>     /* Return 0 for the cases we can't deal with. */
>>>>>>     if ( (p->addr > 0xffffful) || p->data_is_ptr || p->df ||
>>>>>>          (p->count != 1) )
>>>>>>     {
>>>>>>         gdprintk(XENLOG_DEBUG, "slow ioreq. type:%d size:%"PRIu64
>>>>>>                  " addr:0x%"PRIx64" dir:%d ptr:%d df:%d count:%"PRIu64"\n",
>>>>>>                  p->type, p->size, p->addr, !!p->dir,
>>>>>>                  !!p->data_is_ptr, !!p->df, p->count);
>>>>>>         return 0;
>>>>>>     }
>>>>>> 
>>>>>> It looks like the problem was there before changeset 16285 but was
>>>>>> uncovered by the addition of the debug output.
>>>>>> 
>>>>>> Christoph
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
