
Re: [Xen-devel] PCI Device Subtree Change from Traditional to Upstream



On 01/04/2018 06:52 AM, Anthony PERARD wrote:
> On Wed, Jan 03, 2018 at 05:10:54PM -0600, Kevin Stange wrote:
>> On 01/03/2018 11:57 AM, Anthony PERARD wrote:
>>> On Wed, Dec 20, 2017 at 11:40:03AM -0600, Kevin Stange wrote:
>>>> Hi,
>>>>
>>>> I've been working on transitioning a number of Windows guests under HVM
>>>> from using QEMU traditional to QEMU upstream as is recommended in the
>>>> documentation.  When I move these guests, the PCI subtree for Xen
>>>> devices changes and Windows creates a totally new copy of each device.
>>>> Windows tracks down the storage without issue, but it treats the new
>>>> instance of the NIC driver as a new device and clears the network
>>>> configuration even though the MAC address is unchanged.  Manually
>>>> booting the guest back on the traditional device model reactivates the
>>>> original PCI subtree and the old network configuration with it.
>>>>
>>>> The only thing that I have been able to find that's substantially
>>>> different comparing the device trees is that the device instance ID
>>>> values differ on the parent Xen PCI device:
>>>>
>>>> PCI\VEN_5853&DEV_0001&SUBSYS_00015853&REV_01\3&267A616A&3&18
>>>>
>>>> PCI\VEN_5853&DEV_0001&SUBSYS_00015853&REV_01\3&267A616A&3&10
>>>>
>>>> Besides actually setting the guest to boot using QEMU traditional, is
>>>> there a way to convince Windows to treat these devices as the same?  A
>>>> patch-based solution would be acceptable to me if there is one, but I
>>>> don't understand the code well enough to create my own solution.
>>>
>>> Hi Kevin,
>>>
>>> I've got a patch to QEMU that seems to do the trick:
>>>
>>> From: Anthony PERARD <anthony.perard@xxxxxxxxxx>
>>> Subject: [PATCH] xen-platform: Hardcode PCI slot to 3
>>>
>>> Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
>>> ---
>>>  hw/i386/pc_piix.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>> index 5e47528993..93e3a9a916 100644
>>> --- a/hw/i386/pc_piix.c
>>> +++ b/hw/i386/pc_piix.c
>>> @@ -405,7 +405,7 @@ static void pc_xen_hvm_init(MachineState *machine)
>>>  
>>>      bus = pci_find_primary_bus();
>>>      if (bus != NULL) {
>>> -        pci_create_simple(bus, -1, "xen-platform");
>>> +        pci_create_simple(bus, PCI_DEVFN(3, 0), "xen-platform");
>>>      }
>>>  }
>>>  #endif
>>>
>>>
>>> The same thing could be done by libxl, by providing specific command
>>> line options to qemu. (I think that could even be done via a different
>>> config file for the guest.)
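
As an illustration of that suggestion (an untested sketch, assuming the
xl config option device_model_args_hvm and the
"-device xen-platform,addr=" syntax apply to this QEMU/libxl version):

    device_model_version = "qemu-xen"
    device_model_args_hvm = [ "-device", "xen-platform,addr=0x3" ]

Note that the xenfv machine type already creates a xen-platform device
on its own, so an extra "-device" argument like this may conflict with
the built-in one.
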
>>
>> This patch doesn't seem to work for me.  It seems like the device model
>> process is exiting immediately, but I haven't been able to find any
>> information as to what is going wrong.  I tested with Xen 4.6.6 and the
>> QEMU packaged with that release.  Should I try it on a different version
>> of Xen and QEMU?
> 
> What this patch does is ask QEMU to insert the PCI card
> "xen-platform" into the 3rd PCI slot. My guess is that it failed
> because there is already a PCI device there.
> 
> You could check QEMU's logs; they are in
> /var/log/xen/qemu-dm-${guest_name}.log
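
On the point above about slot 3 possibly being occupied already: one
way to check, assuming a guest running on upstream QEMU and that this
xl subcommand is available in your Xen version, is to query QEMU's
monitor through xl:

    xl qemu-monitor-command <guest_name> "info pci"

which lists each emulated PCI device with its bus, slot and function.
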

The log file in question only says:

qemu: terminating on signal 1 from pid 8865

> Let's try something else: instead of patching QEMU, we can patch
> libxl; that might work better. Can you try this patch? (I've only
> test-compiled it.) I've written the patch for Xen 4.6, since that's
> the version you are using.
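
(The libxl patch itself is not reproduced here. As a rough,
hypothetical illustration of the approach only, not the actual patch:
the idea is for libxl to pass the slot explicitly when it builds the
device-model command line in tools/libxl/libxl_dm.c, along the lines
of

    flexarray_append(dm_args, "-device");
    flexarray_append(dm_args, "xen-platform,addr=0x3");

with the caveat that this may clash with the xen-platform device the
xenfv machine type creates by itself.)
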

This isn't doing the trick either; the behavior is the same, and the
log file shows the same message in both cases.

-- 
Kevin Stange
Chief Technology Officer
Steadfast | Managed Infrastructure, Datacenter and Cloud Services
800 S Wells, Suite 190 | Chicago, IL 60607
312.602.2689 X203 | Fax: 312.602.2688
kevin@xxxxxxxxxxxxx | www.steadfast.net

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
