
Re: [Xen-devel] [PATCH for-next RFC 4/8] x86: factor out xen variants for hypervisor setup code



On Fri, Sep 27, 2019 at 01:41:59PM +0200, Roger Pau Monné wrote:
> > > 
> > > I wonder, do you also need to map hypervisor data into the guest
> > > physmap when running on HyperV?
> > > 
> > 
> > Yes. There are a lot of comparable concepts in Hyper-V. For example,
> > there is a page called the VP assist page, which is more or less the
> > same as Xen's vcpuinfo. Its format, content and interfaces may differ,
> > but conceptually it is the same thing.
> > 
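
(To make the comparison concrete, here is a rough sketch of the two
registration paths side by side. Treat it as illustrative only: the
Hyper-V MSR number and layout are from my reading of the TLFS, and the
page variables and helper wrappers are placeholders, not code from
this series.)

#include <xen/lib.h>          /* panic() */
#include <asm/msr.h>          /* wrmsrl() */
#include <public/vcpu.h>      /* VCPUOP_register_vcpu_info */

/* Placeholder backing storage, supplied from the guest's own RAM. */
static struct vcpu_info vcpuinfo_page[NR_CPUS];
static uint8_t vp_assist_page[PAGE_SIZE] __aligned(PAGE_SIZE);

/*
 * Xen: the guest picks a location in its own RAM and asks Xen to
 * place vcpu_info there via a hypercall.
 */
static void register_vcpu_info(unsigned int cpu)
{
    struct vcpu_register_vcpu_info info = {
        .mfn    = virt_to_mfn(&vcpuinfo_page[cpu]),
        .offset = (unsigned long)&vcpuinfo_page[cpu] & ~PAGE_MASK,
    };

    if ( xen_hypercall_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info) )
        panic("failed to register vcpu_info for CPU%u\n", cpu);
}

/*
 * Hyper-V: the same concept, but registration is an MSR write.  Note
 * the guest again supplies a page from its own RAM (PFN in the upper
 * bits, enable flag in bit 0), so no scratch hole in the physmap is
 * needed for this particular page.
 */
#define HV_X64_MSR_VP_ASSIST_PAGE 0x40000073
#define HV_VP_ASSIST_PAGE_ENABLE  (1ULL << 0)

static void register_vp_assist(void)
{
    wrmsrl(HV_X64_MSR_VP_ASSIST_PAGE,
           virt_to_maddr(vp_assist_page) | HV_VP_ASSIST_PAGE_ENABLE);
}
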
> > > Is there a way when running on HyperV to request unused physical
> > > address space ranges? What Xen currently does in init_memmap is quite
> > > crappy.
> > > 
> > 
> > Xen itself still needs to manage the address space, no?
> >
> > I agree the rangeset code in xen.c isn't pretty, but it does the job and
> > is not too intrusive.
> 
> The problem with the current approach is that the nested Xen has no
> way of knowing whether those holes are actually unused, or are MMIO
> regions used by devices for example.
> 
> Hence I wondered if HyperV had a way to signal to guests that a
> physical address range is safe to be used as scratch mapping space. We
> had spoken about introducing something in Xen to be able to report
> safe-to-map ranges in the physmap to guests.

AFAICT the Hyper-V TLFS doesn't describe such functionality.

On the other hand, Hyper-V may not need this infrastructure at all
because it doesn't have grant table frames or a shared info page. The
most likely outcome is that in the next version the memmap stuff will
be left Xen-only until I find a use case for it.
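
For reference, the memmap stuff boils down to roughly the following.
This is a simplified sketch rather than the actual patch (error
handling is trimmed, and the rangeset calls are from my reading of
xen/common/rangeset.c, so check the real code before quoting me on
it), but it shows exactly the weakness Roger points out: anything not
covered by the e820 is merely assumed to be unused.

#include <xen/errno.h>
#include <xen/pfn.h>
#include <xen/rangeset.h>
#include <asm/e820.h>

static struct rangeset *mem;

/* Record every hole in the e820 as (assumed) unused address space. */
static int __init init_memmap(void)
{
    unsigned int i;

    mem = rangeset_new(NULL, "host memory map", 0);
    if ( !mem )
        return -ENOMEM;

    /* Start from the whole pfn space... */
    if ( rangeset_add_range(mem, 0, ULONG_MAX >> PAGE_SHIFT) )
        return -ENOMEM;

    /*
     * ...and punch out everything the e820 describes.  What remains
     * is a hole: possibly free, possibly device MMIO; the nested Xen
     * cannot tell the difference.
     */
    for ( i = 0; i < e820.nr_map; i++ )
        if ( rangeset_remove_range(mem, PFN_DOWN(e820.map[i].addr),
                                   PFN_DOWN(e820.map[i].addr +
                                            e820.map[i].size - 1)) )
            return -ENOMEM;

    return 0;
}

/*
 * Claim a pfn from the holes when we need somewhere to map hypervisor
 * data (shared info, grant table frames, ...).
 */
static unsigned long alloc_unused_pfn(void)
{
    unsigned long pfn;

    if ( rangeset_claim_range(mem, 1, &pfn) )
        panic("out of assumed-unused pfns\n");

    return pfn;
}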

Wei.

> 
> Thanks, Roger.
