
Re: [Xen-devel] [Qemu-devel] [RFC/BUG] xen-mapcache: buggy invalidate map cache?



On Mon, 10 Apr 2017, Stefano Stabellini wrote:
> On Mon, 10 Apr 2017, hrg wrote:
> > On Sun, Apr 9, 2017 at 11:55 PM, hrg <hrgstephen@xxxxxxxxx> wrote:
> > > On Sun, Apr 9, 2017 at 11:52 PM, hrg <hrgstephen@xxxxxxxxx> wrote:
> > >> Hi,
> > >>
> > >> In xen_map_cache_unlocked(), a mapping of guest memory may end up
> > >> in entry->next rather than in the first-level entry (if a mapping
> > >> of a ROM, rather than of guest memory, comes first). But when the
> > >> VM balloons out memory, xen_invalidate_map_cache() does not
> > >> invalidate the cache entries in the linked list (entry->next), so
> > >> when the VM balloons the memory back in, the gfns are probably
> > >> mapped to different mfns. If the guest then asks a device to DMA
> > >> to these GPAs, qemu may DMA to stale MFNs.
> > >>
> > >> So I think xen_invalidate_map_cache() should also walk and
> > >> invalidate the linked lists.
> > >>
> > >> What’s your opinion? Is this a bug? Is my analysis correct?
> 
> Yes, you are right. We need to go through the list for each element of
> the array in xen_invalidate_map_cache. Can you come up with a patch?
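> 
> Something along these lines might be a starting point (an untested
> sketch against hw/i386/xen/xen-mapcache.c, reusing the field names of
> MapCacheEntry; a real patch should probably also unlink and g_free
> the chained entries instead of only resetting them):
> 
>     for (i = 0; i < mapcache->nr_buckets; i++) {
>         MapCacheEntry *entry = &mapcache->entry[i];
> 
>         /* Walk the whole chain, not just the first-level entry. */
>         for (; entry != NULL; entry = entry->next) {
>             if (entry->vaddr_base == NULL) {
>                 continue;
>             }
>             if (entry->lock > 0) {
>                 /* Locked mappings have to survive the flush. */
>                 continue;
>             }
> 
>             if (munmap(entry->vaddr_base, entry->size) != 0) {
>                 perror("unmap fails");
>                 exit(-1);
>             }
> 
>             entry->paddr_index = 0;
>             entry->vaddr_base = NULL;
>             entry->size = 0;
>             g_free(entry->valid_mapping);
>             entry->valid_mapping = NULL;
>         }
>     }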

I spoke too soon. In the regular case there should be no locked mappings
when xen_invalidate_map_cache is called (see the DPRINTF warning at the
beginning of the function). Without locked mappings, there should never
be more than one element in each list (see xen_map_cache_unlocked:
entry->lock == true is a necessary condition to append a new entry to
the list, otherwise the existing entry is just remapped in place).
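
To make that invariant concrete, here is a standalone toy model of the
lookup (not QEMU code; the names are invented for the example). The
walk only skips over an entry when it is locked, so a bucket that
contains no locked mappings never grows past one element:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Entry {
        unsigned long paddr_index;
        int lock;
        struct Entry *next;
    } Entry;

    /* Modeled on the bucket walk in xen_map_cache_unlocked(). */
    static Entry *lookup(Entry *head, unsigned long index)
    {
        Entry *prev = NULL, *e = head;

        /* Only a locked, non-matching entry is skipped over. */
        while (e && e->lock && e->paddr_index != index) {
            prev = e;
            e = e->next;
        }
        if (!e) {              /* every entry was locked: append */
            e = calloc(1, sizeof(*e));
            e->paddr_index = index;
            prev->next = e;
        } else if (!e->lock) { /* unlocked entry: remap in place */
            e->paddr_index = index;
        }
        return e;
    }

    int main(void)
    {
        Entry head = { .paddr_index = 1, .lock = 0, .next = NULL };

        lookup(&head, 2);      /* head is reused, list stays at one */
        printf("after unlocked lookup: next = %p\n", (void *)head.next);

        head.lock = 1;         /* simulate a locked mapping */
        lookup(&head, 3);      /* now a second node gets appended */
        printf("after locked lookup:   next = %p\n", (void *)head.next);
        return 0;
    }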

Can you confirm that what you are seeing are locked mappings
when xen_invalidate_map_cache is called? To find out, enable the DPRINTF
by turning it into a printf or by defining MAPCACHE_DEBUG.
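
For reference, the debug switch at the top of hw/i386/xen/xen-mapcache.c
looks roughly like this (from memory, so double-check your tree):

    //#define MAPCACHE_DEBUG

    #ifdef MAPCACHE_DEBUG
    #  define DPRINTF(fmt, ...) do { \
            fprintf(stderr, "xen_mapcache: " fmt, ## __VA_ARGS__); \
        } while (0)
    #else
    #  define DPRINTF(fmt, ...) do { } while (0)
    #endif

i.e. either uncomment the MAPCACHE_DEBUG line or replace the empty
DPRINTF body with an unconditional fprintf.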