
Re: [Xen-devel] [RFC] xen/pvh: detect PVH after kexec



On 03/21/2017 08:13 AM, Roger Pau Monne wrote:
> On Tue, Mar 21, 2017 at 12:53:07PM +0100, Vitaly Kuznetsov wrote:
>> Roger Pau Monne <roger.pau@xxxxxxxxxx> writes:
>>
>>> On Tue, Mar 21, 2017 at 10:21:52AM +0100, Vitaly Kuznetsov wrote:
>>>> Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> writes:
>>>>
>>>>> On 03/20/2017 02:20 PM, Vitaly Kuznetsov wrote:
>>>>>> After kexec, PVH guests boot like normal HVM guests, and we're not
>>>>>> entering xen_prepare_pvh().
>>>>> Is it not? Aren't we going via xen_hvm_shutdown() and then
>>>>> SHUTDOWN_soft_reset which would restart at the same entry point as
>>>>> regular boot?
>>>> No, we're not doing regular boot: from outside of the guest we don't
>>>> really know where the new kernel is placed (as guest does it on its
>>>> own). We do soft reset to clean things up and then guest jumps to the
>>>> new kernel starting point by itself.
>>>>
>>>> We could (in theory; I didn't try) make it jump to the PVH entry point,
>>>> but we'd have to at least prepare the right boot params for
>>>> init_pvh_bootparams, which looks like an additional complication.
>>>> The PVHVM-style startup suits us well, but we still need to be
>>>> PVH-aware.
>>> We are going to have the same issue when booting PVH with OVMF: Linux
>>> will be started at the native UEFI entry point, and we will need some
>>> way to detect that we are running in PVH mode.
>>>
>>> What issues do you see when using the HVM boot path for kexec?
>> The immediate issue I ran into was the balloon driver over-allocating
>> via XENMEM_populate_physmap:


I couldn't even get that far. Is there anything needed besides the two
libxl patches that you posted yesterday?

>>
>> (XEN) Dom15 callback via changed to Direct Vector 0xf3
>> (XEN) d15v0 Over-allocation for domain 15: 262401 > 262400
>> (XEN) memory.c:225:d15v0 Could not allocate order=0 extent: id=15 memflags=0 (175 of 512)
>> (XEN) d15v0 Over-allocation for domain 15: 262401 > 262400
>> (XEN) memory.c:225:d15v0 Could not allocate order=0 extent: id=15 memflags=0 (0 of 512)
>> (XEN) d15v0 Over-allocation for domain 15: 262401 > 262400
>> ...
>>
>> I didn't investigate why it happens, setting xen_pvh=1 helped. Not sure
>> if it's related, but I see the following code in __gnttab_init():
>>
>>      /* Delay grant-table initialization in the PV on HVM case */
>>      if (xen_hvm_domain() && !xen_pvh_domain())
>>              return 0;
>>
>> and gnttab_init() is later called in platform_pci_probe().
> But I guess this never happens in the PVH case because there's no Xen platform
> PCI device?
>
> Making the initialization of the grant tables conditional on the presence
> of the Xen platform PCI device seems wrong. The only thing needed for
> grant tables is a physical memory region. This can either be picked from
> unused physical memory (over 4GB to avoid collisions), or by freeing some
> RAM region.

That's because Linux HVM guests use the platform PCI device's MMIO region
for grant tables (see platform_pci_probe()).

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
