
RE: [Xen-devel] Re: [Patch] the interface of invalidating qemu mapcache



> >> Allocating the PCI resources with an incrementing region_num
> >> counter is pointless (and in fact confusing), given that the BARs
> >> for the mmio and portio resources are hardcoded as 1 and 0
> >> (respectively) in the pv-on-hvm driver code. Also, the existing
> >> portio resource is (I believe) a placeholder, so you don't need to
> >> create a new resource -- use the first port of resource region 0
> >> instead. The existing read/write handler registrations in
> >> platform_io_map() are pointless; you can remove them and replace
> >> them with a single-byte write handler to blow the mapcache.
> >> There's no need to install handlers for 2- or 4-byte accesses,
> >> nor for read accesses.
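
[For illustration, such a registration might look like the sketch
below; qemu_invalidate_map_cache() is an assumed name for whatever
entry point flushes the mapcache, and the callbacks follow qemu-dm's
old-style ioport and PCI region-map signatures.]

    /* Sketch: single-byte write handler on the first port of PCI
     * resource region 0; a write of any value blows the mapcache.
     * qemu_invalidate_map_cache() is an assumed helper name, not
     * taken from the patch under discussion. */
    static void mapcache_invalidate_write(void *opaque, uint32_t addr,
                                          uint32_t val)
    {
        qemu_invalidate_map_cache();
    }

    static void platform_io_map(PCIDevice *d, int region_num,
                                uint32_t addr, uint32_t size, int type)
    {
        /* Register 1-byte writes only; no 2-/4-byte or read handlers
         * are needed for this trigger. */
        register_ioport_write(addr, 1, 1, mapcache_invalidate_write, d);
    }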
> >
> > Why are we triggering this with an ioport access rather than just
> > adding a new message type between xen and qemu-dm? The latter seems
> > rather cleaner.
> 
> Since qemu-dm is responsible for I/O events, I think it's natural
> (i.e. a kind of chipset feature) to construct and send an I/O event
> to qemu-dm to trigger mapcache invalidation. It does not mean the
> guest needs to use an I/O instruction; rather, our plan is that Xen
> sends a mapcache invalidation message, and it's implemented as an
> I/O event for HVM guests.
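
[For reference, the Xen side of such a message might look roughly
like this sketch; IOREQ_TYPE_INVALIDATE is an assumed request type,
while get_ioreq() and hvm_send_assist_req() are the usual HVM ioreq
helpers that post the request and kick qemu-dm.]

    /* Sketch: post a mapcache-invalidation request to qemu-dm via
     * the shared ioreq page.  IOREQ_TYPE_INVALIDATE is an assumed
     * new request type, not necessarily what the patch defines. */
    void send_invalidate_req(void)
    {
        struct vcpu *v = current;
        ioreq_t *p = get_ioreq(v);

        p->type = IOREQ_TYPE_INVALIDATE;
        p->size = 4;
        p->dir  = IOREQ_WRITE;
        p->data = ~0UL;             /* flush the entire mapcache */

        hvm_send_assist_req(v);     /* notify qemu-dm */
    }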

Ah, OK -- so this is just for testing purposes. Seems sensible.
Thanks,
Ian


