WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
Re: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough (non-IOMMU)

To: Guy Zana <guy@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough (non-IOMMU)
From: John Byrne <john.l.byrne@xxxxxx>
Date: Fri, 08 Jun 2007 19:25:38 -0700
Cc: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, "Kay, Allen M" <allen.m.kay@xxxxxxxxx>
Delivery-date: Fri, 08 Jun 2007 19:23:45 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <9392A06CB0FDC847B3A530B3DC174E7B02B7F84D@xxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <9392A06CB0FDC847B3A530B3DC174E7B02B7F84D@xxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.12 (X11/20060911)
Guy,

Things are working at least somewhat, now. Answers/comments below.

Guy Zana wrote:
> Hi John,
> 
> Thanks for testing out our patches!
> My comments below.
> 
>> -----Original Message-----
>> From: John Byrne [mailto:john.l.byrne@xxxxxx] 
>> Sent: Friday, June 08, 2007 5:53 AM
>> To: Guy Zana
>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Subject: Re: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough 
>> (non-IOMMU)
>>
>>
>> Guy,
>>
>> I tried your patches with a bnx2 NIC on SLES10 and they didn't work.
>>
>> The first reason was that you mask off the capabilities bit 
>> in the PCI status. If I got rid of this, I could at least get 
>> the NIC to configure, but it didn't work and the dropped 
>> packets looked to be random garbage, so I don't think it was 
>> talking to the device properly. (But I understand almost 
>> nothing about PCI device configuration, so I don't know what 
>> to look for.)
>>
> 
> The released patches are considered "developmental"; there is still
> some work to be done (not too much, though :)) before they are usable
> for everyone. Are you sure you mapped the right IRQ? Please post the
> qemu-dm log file / "xm dmesg" output. The capabilities bits are masked
> off so that we don't need to handle MSIs yet, nor the power-management
> (ACPI) related stuff, which could be quite a pain when doing
> pass-through for integrated devices.

I'd missed the line in your patch 0 e-mail about pass-through.c. Once
I'd fixed that, and with your hint about MSI interrupts, I passed the
disable_msi option to the bnx2 driver and things worked, at least for a
while. I could get an ssh connection going through the interface, but
then the machine locked up. My 32-bit machine doesn't have much memory,
so things are sluggish and it is hard to tell a lock-up from thrashing.
I will reinstall one of my 64-bit machines, which has more memory, as
32-bit and try it there.
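[For anyone reproducing this: forcing bnx2 back to legacy INTx looks
like the following. The disable_msi module parameter is the one John
refers to; the persistent-config path shown is an assumption based on
SLES10-era modprobe conventions.]

```shell
# Reload bnx2 with MSI disabled so the NIC falls back to legacy INTx
modprobe -r bnx2
modprobe bnx2 disable_msi=1

# Make it persistent across reboots (SLES10-style modprobe config)
echo "options bnx2 disable_msi=1" >> /etc/modprobe.conf.local
```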

> Another thing:
> Does this NIC have an expansion ROM?

Not according to lspci.

> 
>> I haven't noticed the merge tree springing into existence 
>> on xenbits, so is there any progress on making this into a 
>> real feature? It sounds like most of the work needs to be 
>> done between you and Intel, but I could certainly help with testing.
>>
> 
> That would be great!

Just let me know what you need tested and I'll see what I can do.

> 
> I think that both patch sets (ours and Intel's) need some more work before we 
> can start merging.
> Neocleus has already merged some parts of the Intel patches (mmio & pio
> handling). We are also aiming for 64-bit (x86) support in the next release.

64-bit would be nice, as that is what I usually run.

>> One thing I am interested in is, with the 1:1 mapping, could 
>> we disable the VT page-fault handling? I've found that the 
>> page-fault overhead for VT is horrible and would probably 
>> affect fork-exec benchmarks significantly.
> 
> Cool idea! Our CTO thought about it as well :)
> It's kind of hard not to use the VT page-fault handler at all; there are
> some issues with memory protection (security), and with memory remapping
> that we would want to do in the future (in order to support BIOS &
> expansion ROM duplication). I agree that you could make it faster,
> though! It may require some drastic changes in the hypervisor.

Without an IOMMU you forfeit memory protection anyway, so I am willing
to hand-wave security for the moment. For VT, it looks like I might be
able to hack something to set the VMCS to disable page-fault exits once
the domain is running. Setting CR3 will still generate a fault, but all
you need to do then is set the real CR3, as far as I can tell. It may
not really work out, but I'm going to try.
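[A sketch of what "disable page-fault exits" means in VT-x terms:
clearing the #PF bit (vector 14) in the VMCS exception bitmap, so guest
page faults are delivered directly instead of causing VM exits. The
helper below only computes the new bitmap value; the function name is
hypothetical, but the bit number is from the VT-x architecture.]

```c
#include <assert.h>
#include <stdint.h>

#define TRAP_page_fault  14  /* #PF exception vector */

/* Hypothetical: once the 1:1-mapped domain is up, clear the #PF bit in
 * the exception bitmap so guest page faults no longer exit to the
 * hypervisor, leaving every other intercepted exception unchanged. */
static uint32_t disable_pf_exiting(uint32_t exception_bitmap)
{
    return exception_bitmap & ~(1u << TRAP_page_fault);
}
```

[Note that on real VT-x hardware, #PF exiting is additionally qualified
by the PAGE_FAULT_ERROR_CODE_MASK/MATCH VMCS fields, which would need
to be set consistently as well, and CR3 writes still cause exits unless
the CR3-load exiting control is also cleared; hence John's remark about
still having to forward the real CR3.]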

Thanks,

John


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel