
[Xen-devel] Re: [LKML] [PATCH 7/7] xen: Enable grant table and xenbus for PV extension of HVM



On Tue, 2010-03-02 at 13:41 +0000, Konrad Rzeszutek Wilk wrote:
> On Tue, Mar 02, 2010 at 09:21:43AM +0800, Sheng Yang wrote:
> > On Tuesday 02 March 2010 01:38:58 Konrad Rzeszutek Wilk wrote:
> > > > +/* The region reserved by QEmu for Xen platform device */
> > > > +#define GNTTAB_START       0xf2000000ul
> > > > +#define GNTTAB_SIZE        0x20000ul
> > > 
> > > I thought that in the earlier review you said:
> > > 
> > > "> > +#define GNTTAB_START           0xfbfe0000ul
> > > 
> > > > > +#define GNTTAB_SIZE            0x20000ul
> > > >
> > > > Is it possible that there would be a PCI device that would be
> > > > passed in the guest that would conflict with the above mentioned
> > > > E820 region?
> > > 
> > > I would change them to a dedicated PCI MMIO address in the next version.
> > > Thanks.
> > > 
> > > "
> > > ?
> > 
> > And yes, this is the dedicated PCI MMIO address I mentioned. I would
> > update the comments to make this clearer.
> > 
> > I don't think it's a very clean solution, because the right way to do
> > this is to probe the PCI devices, find out which one is the platform
> > PCI device, and then use it. But the grant table initialization
> > currently happens much earlier than any possible probing... I hardcode
> > the position for now, and am hunting for a better
> 
> Would it be possible to move the grant table initialization to a later
> phase, past the PCI loading/initialization?
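
For illustration, a minimal sketch of what such a post-PCI-init probe
could look like. The 5853:0001 vendor/device pair is what QEMU
advertises for the Xen platform device; the helper name
xen_find_gnttab_space and the choice of BAR 1 for the MMIO region are
assumptions here, not part of the posted patch:

/* Sketch: locate the Xen platform PCI device once the PCI subsystem
 * is up, and take the grant-table range from its memory BAR instead
 * of hardcoding GNTTAB_START/GNTTAB_SIZE.
 */
#include <linux/pci.h>

#define XEN_PLATFORM_VENDOR_ID  0x5853
#define XEN_PLATFORM_DEVICE_ID  0x0001

static int __init xen_find_gnttab_space(unsigned long *start,
                                        unsigned long *size)
{
        struct pci_dev *pdev;

        pdev = pci_get_device(XEN_PLATFORM_VENDOR_ID,
                              XEN_PLATFORM_DEVICE_ID, NULL);
        if (!pdev)
                return -ENODEV;

        /* Assumption: BAR 1 is the device's MMIO region. */
        *start = pci_resource_start(pdev, 1);
        *size  = pci_resource_len(pdev, 1);
        pci_dev_put(pdev);

        return *start ? 0 : -ENODEV;
}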

Or provide the address via an MSR, a hypervisor-specific CPUID leaf, an
I/O port, or the early_pci infrastructure in the kernel. I don't think
we are short of options ;-)
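
For the early_pci option, a sketch along these lines might work.
read_pci_config() and the PCI_* constants are real kernel interfaces
(asm/pci-direct.h and linux/pci_regs.h); the scan-bus-0 heuristic and
the helper name are assumptions:

/* Sketch: find the Xen platform device (5853:0001) before the PCI
 * subsystem is initialised, using the early_pci config accessors,
 * and read its memory BAR directly from config space.
 */
#include <asm/pci-direct.h>
#include <linux/pci_regs.h>

static unsigned long __init xen_early_gnttab_base(void)
{
        int slot;

        for (slot = 0; slot < 32; slot++) {
                /* Vendor ID in the low 16 bits, device ID in the high. */
                u32 id = read_pci_config(0, slot, 0, PCI_VENDOR_ID);

                if (id != (0x5853 | (0x0001 << 16)))
                        continue;

                /* BAR 1 lives at config offset 0x14; mask off the
                 * low flag bits to get the MMIO base address. */
                return read_pci_config(0, slot, 0, PCI_BASE_ADDRESS_1)
                        & PCI_BASE_ADDRESS_MEM_MASK;
        }
        return 0;
}

Scanning only bus 0 is assumed to be sufficient because QEMU's default
machine places the platform device there; a real implementation would
also want to handle 64-bit BARs.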

Ian.

> 
> > idea.
> > 
> > Comments?
> > 
> > -- 
> > regards
> > Yang, Sheng
> >  



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

