
Re: [Xen-devel] [PATCH v11 8/9] Add IOREQ_TYPE_VMWARE_PORT



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 05 June 2015 10:36
> To: Don Slutz; Don Slutz
> Cc: Aravind Gopalakrishnan; Suravee Suthikulpanit; Andrew Cooper; Ian
> Campbell; Paul Durrant; George Dunlap; Ian Jackson; Stefano Stabellini; Eddie
> Dong; Jun Nakajima; Kevin Tian; xen-devel@xxxxxxxxxxxxx; Boris Ostrovsky;
> Keir (Xen.org); Tim (Xen.org)
> Subject: Re: [Xen-devel] [PATCH v11 8/9] Add IOREQ_TYPE_VMWARE_PORT
> 
> >>> On 04.06.15 at 13:28, <don.slutz@xxxxxxxxx> wrote:
> > On 06/03/15 13:09, George Dunlap wrote:
> >> On 05/22/2015 04:50 PM, Don Slutz wrote:
> >>> This adds synchronization of the 6 vcpu registers (only 32 bits of
> >>> each) that vmport.c needs between Xen and QEMU.
> >>>
> >>> This avoids a 2nd and 3rd exchange between QEMU and Xen to fetch
> >>> and put these 6 vcpu registers, which are used by the code in
> >>> vmport.c and vmmouse.c.
> >>>
> >>> In the tools, enable usage of QEMU's vmport code.
> >>>
> >>> The most useful VMware port support that QEMU currently has is
> >>> its VMware mouse support.  Xorg includes a VMware mouse driver
> >>> that uses absolute mode.  This makes using a mouse in X11 much
> >>> nicer.
> >>>
> >>> Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
> >>> Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> >> Sorry for coming a bit late to this party.  On a high level I think this
> >> is good, but there doesn't seem to be anything in here in particular
> >> that is vmware-specific.  Would it make more sense to give this a more
> >> generic name, and have it include all of the general-purpose registers?
> >
> > I do not know of a more general case.  The code here is very VMware "in
> > (%dx),%eax" specific.  The x86 architecture does not have an in/out case
> > where registers other than rax get used and/or changed that need to be
> > sent to QEMU.  There is already code to handle ins better than 1 byte at
> > a time.
> >
> > There is also a data size issue.  The register data sent over is smaller
> > than the ioreq data, so the number of vCPUs that are supported stays the
> > same.  Changing the amount of data sent would affect this (e.g. by
> > requiring more than 1 page).
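
For context, the guest-side calling convention being described here looks
roughly like the sketch below. The port and magic constants are the
well-known VMware backdoor values; the helper name and prototype are
invented for illustration only.

/* Sketch of a guest issuing a VMware backdoor call via "in (%dx),%eax".
 * All six 32-bit registers can carry arguments in and results out, which
 * is why rax alone is not enough for this ioreq type.
 */
#include <stdint.h>

#define VMW_PORT   0x5658      /* 'VX': the backdoor I/O port */
#define VMW_MAGIC  0x564d5868  /* 'VMXh': must be in EAX on entry */

static inline void vmw_backdoor_call(uint32_t cmd, uint32_t *eax,
                                     uint32_t *ebx, uint32_t *ecx,
                                     uint32_t *edx, uint32_t *esi,
                                     uint32_t *edi)
{
    uint32_t a = VMW_MAGIC, b = *ebx, c = cmd, d = VMW_PORT;
    uint32_t si = *esi, di = *edi;

    asm volatile ( "in %%dx, %%eax"
                   : "+a" (a), "+b" (b), "+c" (c),
                     "+d" (d), "+S" (si), "+D" (di)
                   :
                   : "memory" );

    *eax = a; *ebx = b; *ecx = c; *edx = d; *esi = si; *edi = di;
}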
> 
> You may or may not have heard that there is an extension to the qemu
> interface in the works anyway, which involves communicating larger data
> items (xmm, ymm, and zmm registers in particular) to qemu. Depending on
> the time frame for this to arrive (Paul?) perhaps it would make sense to
> defer the changes here and then build on top of that instead of
> introducing a custom mechanism?
> 

The idea was to 'bounce' larger accesses via a guest RAM page (another magic
page within the E820 reserved region just below 4G). Theoretically this should
not have required modifications to QEMU, because it already handles multi-rep
I/O to/from guest pages. Alas, this proved not to be true: QEMU translates all
guest addresses (whether they are in the ioreq addr or data field) through its
own memory map, and hence anything within the E820 reserved region is treated
as emulated.
So I'm now working on fixing the current 'chunking' code in hvmemul_read/write
to handle accesses wider than 8 bytes as multiple round-trips to QEMU. Less
efficient, but I believe it will work... and if QEMU is modified, we could try
bouncing again in future.
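
Roughly, the shape of the loop I have in mind is sketched below. The helper
name and its signature are invented stand-ins for the real per-chunk ioreq
path, so treat this as illustrative only:

/* Sketch: split an access wider than 8 bytes into multiple naturally-
 * aligned chunks, each sent to QEMU as a separate ioreq, instead of
 * bouncing the data through a guest RAM page.
 */
static int hvmemul_wide_access(paddr_t gpa, uint8_t *buf,
                               unsigned int size, uint8_t dir)
{
    while ( size != 0 )
    {
        unsigned int chunk = 8;
        int rc;

        /* Largest power-of-two chunk that fits and keeps gpa aligned. */
        while ( chunk > size || (gpa & (chunk - 1)) )
            chunk >>= 1;

        /* hvmemul_do_chunk() is invented: one round-trip to QEMU. */
        rc = hvmemul_do_chunk(gpa, buf, chunk, dir);
        if ( rc != X86EMUL_OKAY )
            return rc;

        gpa += chunk;
        buf += chunk;
        size -= chunk;
    }

    return X86EMUL_OKAY;
}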

  Paul

> Jan

