
Re: [Xen-devel] [BUG 1747] Guest couldn't find bootable device with memory more than 3600M



>>> On 12.06.13 at 15:23, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> On 12/06/2013 06:05, George Dunlap wrote:
>> On 12/06/13 08:25, Jan Beulich wrote:
>>>>>> On 11.06.13 at 19:26, Stefano Stabellini
>>>>>> <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>>>> I went through the code that maps the PCI MMIO regions in hvmloader
>>>> (tools/firmware/hvmloader/pci.c:pci_setup) and it looks like it already
>>>> maps the PCI region to high memory if the PCI bar is 64-bit and the MMIO
>>>> region is larger than 512MB.
>>>>
>>>> Maybe we could just relax this condition and map the device memory to
>>>> high memory no matter the size of the MMIO region if the PCI bar is
>>>> 64-bit?
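
For illustration, a minimal sketch of the placement rule being discussed and of
the proposed relaxation. This is not the actual code from
tools/firmware/hvmloader/pci.c; the helper and the names are made up for clarity:

    /* Illustrative sketch only -- not hvmloader's real pci_setup() logic. */
    #include <stdbool.h>
    #include <stdint.h>

    #define HIGH_MMIO_MIN_SIZE (512ULL << 20)      /* the 512MB threshold above */

    static bool place_bar_above_4g(bool bar_is_64bit, uint64_t bar_size,
                                   bool relax_size_check)
    {
        if (!bar_is_64bit)
            return false;                     /* 32-bit BARs must stay below 4G */
        if (relax_size_check)
            return true;                      /* proposed rule: any 64-bit BAR */
        return bar_size > HIGH_MMIO_MIN_SIZE; /* current rule: only large BARs */
    }

    int main(void)
    {
        /* A 256MB 64-bit BAR stays low under the current rule,
         * but would move above 4G under the relaxed one. */
        return place_bar_above_4g(true, 256ULL << 20, true) ? 0 : 1;
    }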
>>> I can only recommend not to: For one, guests not using PAE or
>>> PSE-36 can't map such space at all (and older OSes may not
>>> properly deal with 64-bit BARs at all). And then one would generally
>>> expect this allocation to be done top down (to minimize risk of
>>> running into RAM), and doing so is going to present further risks of
>>> incompatibilities with guest OSes (Linux for example learned only in
>>> 2.6.36 that PFNs in ioremap() can exceed 32 bits, but even in
>>> 3.10-rc5 ioremap_pte_range(), while using "u64 pfn", passes the
>>> PFN to pfn_pte(), the respective parameter of which is
>>> "unsigned long").
>>>
>>> I think this ought to be done in an iterative process - if all MMIO
>>> regions together don't fit below 4G, the biggest one should be
>>> moved up beyond 4G first, followed by the next to biggest one
>>> etc.
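
A rough sketch of that iterative policy (illustrative only; the region list,
the qsort-based ordering and the low-hole size are made up, not hvmloader data
structures):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct region { uint64_t size; int above_4g; };

    /* Sort regions by size, largest first. */
    static int cmp_desc(const void *a, const void *b)
    {
        const struct region *ra = a, *rb = b;
        return (ra->size < rb->size) - (ra->size > rb->size);
    }

    /* Move the biggest regions above 4G until the rest fit below 4G. */
    static void place_regions(struct region *r, size_t n, uint64_t low_hole)
    {
        uint64_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += r[i].size;

        qsort(r, n, sizeof(*r), cmp_desc);
        for (size_t i = 0; i < n && total > low_hole; i++) {
            r[i].above_4g = 1;
            total -= r[i].size;
        }
    }

    int main(void)
    {
        struct region r[] = { { 256ULL << 20, 0 }, { 1ULL << 30, 0 },
                              { 64ULL << 20, 0 } };
        place_regions(r, 3, 768ULL << 20);   /* pretend the low hole is 768MB */
        for (int i = 0; i < 3; i++)
            printf("region %d (%" PRIu64 "MB): %s\n", i, r[i].size >> 20,
                   r[i].above_4g ? "above 4G" : "below 4G");
        return 0;
    }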
>> 
>> First of all, the proposal to move the PCI BAR up to the 64-bit range is
>> a temporary work-around.  It should only be done if a device doesn't fit
>> in the current MMIO range.
>> 
>> We have four options here:
>> 1. Don't do anything
>> 2. Have hvmloader move PCI devices up to the 64-bit MMIO hole if they
>> don't fit
>> 3. Convince qemu to allow MMIO regions to mask memory (or what it thinks
>> is memory).
>> 4. Add a mechanism to tell qemu that memory is being relocated.
>> 
>> Number 4 is definitely the right answer long-term, but we just don't
>> have time to do that before the 4.3 release.  We're not sure yet if #3
>> is possible; even if it is, it may have unpredictable knock-on effects.
> 
> #3 should be possible or even the default (would need to check), but #4
> is probably a bit harder to do.  Perhaps you can use a magic I/O port
> for the xen platform PV driver, but if you can simply use two PCI
> windows it would be much simpler, because that's what TCG and
> KVM already do.  The code is all there for you to lift from SeaBIOS.

What is the connection here to the platform PV driver?

> Only Windows XP and older had problems with that because they didn't
> like something in the ASL; but the 64-bit window is placed at the end of
> RAM, so in principle any PAE-enabled OS can use it.

At the end of _RAM_???

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

