
Re: [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K



On Tue, Jan 21, 2020 at 10:18:16AM +0100, Jan Beulich wrote:
> On 20.01.2020 18:18, Roger Pau Monné wrote:
> > On Mon, Jan 20, 2020 at 05:10:33PM +0100, Jan Beulich wrote:
> >> On 17.01.2020 12:08, Roger Pau Monne wrote:
> >>> When placing memory BARs with sizes smaller than 4K multiple memory
> >>> BARs can end up mapped to the same guest physical address, and thus
> >>> won't work correctly.
> >>
> >> Thinking about it again, aren't you fixing one possible case while
> >> breaking the opposite one? What you fix is the case where two
> >> distinct BARs (of the same or different devices) map to distinct
> >> MFNs (which would have required a single GFN to map to both of
> >> these MFNs). But don't you, at the same time, break the case of
> >> two BARs (perhaps, but not necessarily, of the same physical
> >> device) both mapping to the same MFN, i.e. requiring two distinct
> >> GFNs to map to one MFN? (At least for the moment I can't see a way
> >> for hvmloader to distinguish the two cases.)
> > 
> > IMO we should force all BARs to be page-isolated by dom0 (since Xen
> > doesn't have the knowledge to do so itself), but I don't see the
> > issue with having different gfns pointing to the same mfn. Is that
> > a limitation of paging?
> 
> It's a limitation of the (global) M2P table.

Oh, so the mappings would be correct in the EPT/NPT, but not in the
M2P.

> 
> > I think you can map a grant multiple times into
> > different gfns, which achieves the same AFAICT.
> 
> One might think this would be possible, but afaict
> guest_physmap_add_{page,entry}() will destroy the prior association
> when/before inserting the new one. I.e. if any operation needing to
> consult the M2P were used subsequently, only the most recently
> recorded GFN would be returned and hence operated on. Whether that's
> a problem in practice (i.e. whether any such M2P lookup might
> sensibly happen) is pretty hard to tell without going through a lot
> of code, I guess.

I'm afraid I don't know either.
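
Just to make sure I understand the mechanism, I think what you
describe amounts to something like the toy model below. This is purely
illustrative, not the real p2m/M2P code; the helper is only modelled
after the behaviour you describe for guest_physmap_add_{page,entry}():

#include <inttypes.h>
#include <stdio.h>

/*
 * Toy model only, not the real p2m/M2P code: a per-domain forward map
 * (standing in for the EPT/NPT-backed p2m) and a global reverse map
 * (standing in for the M2P, which has exactly one slot per MFN).
 */
#define NR 1024

static uint64_t p2m[NR];   /* GFN -> MFN, 0 == unmapped */
static uint64_t m2p[NR];   /* MFN -> GFN, 0 == unknown  */

/* Modelled after the behaviour described above: inserting a new GFN
 * for an MFN destroys the prior association before recording the new
 * one. */
static void toy_physmap_add(uint64_t gfn, uint64_t mfn)
{
    uint64_t ogfn = m2p[mfn];

    if ( ogfn && ogfn != gfn )
        p2m[ogfn] = 0;

    p2m[gfn] = mfn;
    m2p[mfn] = gfn;
}

int main(void)
{
    /* Two sub-page BARs whose mappings end up backed by the same MFN. */
    toy_physmap_add(0x100, 42);
    toy_physmap_add(0x200, 42);

    /* Anything consulting the reverse map only ever finds GFN 0x200. */
    printf("m2p[42] = %#" PRIx64 ", p2m[0x100] = %#" PRIx64 "\n",
           m2p[42], p2m[0x100]);
    return 0;
}

Whether the old p2m entry is actually torn down or merely the M2P slot
overwritten, the M2P-visible result is the same: one GFN per MFN.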

So I'm not sure how to progress with this patch. Are we fine with
those limitations?

As I said, Xen hasn't got enough knowledge to correctly isolate the
BARs, and hence we have to rely on dom0 doing the right thing (DTRT).
We could add checks in Xen to make sure no BARs share a page, but that
would mean a non-trivial amount of work scanning and sizing every
possible BAR on the system.
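
For the record, the sizing part of such a check would look roughly
like the sketch below. This is only a rough, self-contained sketch:
pci_conf_read32()/pci_conf_write32() here are placeholder stubs
simulating one device, not the accessors the real check would use, and
it ignores 64-bit BARs, ROM BARs and SR-IOV:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE                 4096u
#define PCI_BASE_ADDRESS_0        0x10
#define PCI_BASE_ADDRESS_SPACE_IO 0x1
#define PCI_BASE_ADDRESS_MEM_MASK (~0xfu)

/* Stub accessors simulating one device with a single 256-byte 32-bit
 * memory BAR, purely so the sketch is self-contained. */
static uint32_t fake_bar = 0xfeb00000;

static uint32_t pci_conf_read32(unsigned int sbdf, unsigned int reg)
{
    (void)sbdf; (void)reg;
    return fake_bar;
}

static void pci_conf_write32(unsigned int sbdf, unsigned int reg,
                             uint32_t val)
{
    (void)sbdf; (void)reg;
    /* On an all-ones write a device reports its size mask back. */
    fake_bar = (val == ~0u) ? ~(256u - 1) : val;
}

/* Size one 32-bit memory BAR the usual way (save, write all ones, read
 * back, restore) and report whether it covers less than a full page,
 * i.e. whether it could end up sharing a page with another BAR. */
static bool bar_smaller_than_page(unsigned int sbdf, unsigned int bar)
{
    unsigned int reg = PCI_BASE_ADDRESS_0 + bar * 4;
    uint32_t orig = pci_conf_read32(sbdf, reg);
    uint32_t mask, size;

    if ( orig & PCI_BASE_ADDRESS_SPACE_IO )
        return false;                   /* only memory BARs matter here */

    pci_conf_write32(sbdf, reg, ~0u);
    mask = pci_conf_read32(sbdf, reg) & PCI_BASE_ADDRESS_MEM_MASK;
    pci_conf_write32(sbdf, reg, orig);  /* restore the original value */

    size = ~mask + 1;                   /* power-of-two BAR size */

    return size && size < PAGE_SIZE;
}

int main(void)
{
    printf("BAR0 smaller than a page: %d\n",
           bar_smaller_than_page(0, 0));
    return 0;
}

And that's only per-BAR sizing; deciding whether two such BARs could
end up in the same page additionally means walking every device.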

IMO this patch is an improvement over the current state, and we can
always do further improvements afterwards.
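
For completeness, the rounding the patch relies on boils down to
something like the below; this is a simplified standalone sketch, not
the literal hvmloader change or its variable names:

#include <inttypes.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

/* Round a memory BAR size up to a whole number of pages, so that the
 * allocator can never place two memory BARs inside the same guest
 * page. */
static uint64_t round_up_bar_size(uint64_t bar_sz)
{
    return (bar_sz + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

int main(void)
{
    /* e.g. a 256-byte memory BAR gets placed as if it were 4KiB wide */
    printf("%#" PRIx64 "\n", round_up_bar_size(0x100));
    return 0;
}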

Thanks, Roger.
