[Xen-devel] Re: [PATCH RFC 2/3] Virtio draft III: example net driver
Rusty Russell wrote:
> 1. 1500 byte packets will always be 1 sg long (ie. you should BUG_ON()
>    if that ever happens).  The issue is MTU > PAGE_SIZE, where I was
>    planning to allocate discontig pages.  I can't quite see what your
>    maximum supported MTU is, though?

The current maximum supported MTU for ibmveth is 64k. Additionally, the
firmware interface for ibmveth allows multiple rx "pools" of different
buffer sizes to be allocated and handed to the hypervisor; depending on the
incoming packet size, the smallest buffer that fits is then chosen (there's
a rough sketch of what I mean below, after my replies). Just something to
keep in mind as you look at adding large frame support. I'm not sure what
requirements other users would have...

> 2. The header problem is difficult (I can't resist pointing out that if
>    you had sg receive capability, it would be trivial 8).  The two
>    possibilities are to have get_buf take a "unsigned long *off" as well,
>    or to have ibm_veth do a memmove.
>
> memmove sounds horrible at first glance, but since the hypervisor has
> just copied the packet I'm not sure we'll notice in practice.
> Benchmarking should tell...

memmove does not sound optimal. I would much prefer get_buf returning an
offset, but once I have something up and running I can certainly run some
benchmarks comparing the two (I've sketched both options below as well).

> BTW, after Avi's comments I have a new virtio draft, but still debugging
> my implementation.  It doesn't affect this discussion, but would involve
> churn for an actual implementation if you've gotten that far...

I've started working on some code, but only to get a better handle on the
API and to see what issues ibmveth might run into. I was expecting some
code churn at this stage anyway...
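To make the rx pool point concrete, here is roughly what the buffer
selection looks like. The pool sizes, counts, and names below are made up
for illustration; they are not the actual ibmveth code or configuration:

struct rx_pool {
        unsigned int buf_size;  /* size of each buffer in this pool */
        unsigned int num_bufs;  /* buffers handed to the hypervisor */
};

/* Illustrative sizes only -- not the real ibmveth pool layout. */
static struct rx_pool rx_pools[] = {
        { .buf_size =   512, .num_bufs = 256 },
        { .buf_size =  2048, .num_bufs = 128 },
        { .buf_size = 65536, .num_bufs =  16 },  /* covers the 64k MTU */
};

/* Pick the smallest pool whose buffers can hold the incoming frame. */
static struct rx_pool *pick_rx_pool(unsigned int frame_len)
{
        unsigned int i;

        for (i = 0; i < sizeof(rx_pools) / sizeof(rx_pools[0]); i++)
                if (frame_len <= rx_pools[i].buf_size)
                        return &rx_pools[i];

        return NULL;    /* frame larger than the biggest configured buffer */
}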
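And for the header problem, the two options as I understand them. HDR_LEN
and the helper names are placeholders, and the comments describe the two
get_buf variants under discussion rather than the actual draft API:

#include <string.h>

#define HDR_LEN 14      /* assumed size of the header in front of the data */

/*
 * Option 1: get_buf also returns an offset ("unsigned long *off"), so the
 * driver just points at the payload and never touches the data.
 */
static unsigned char *rx_payload_with_offset(unsigned char *buf,
                                             unsigned long off)
{
        return buf + off;       /* no data movement at all */
}

/*
 * Option 2: get_buf returns only the buffer, so the driver memmoves the
 * payload down over the header before handing the packet up the stack.
 */
static unsigned char *rx_payload_with_memmove(unsigned char *buf,
                                              unsigned long pkt_len)
{
        memmove(buf, buf + HDR_LEN, pkt_len - HDR_LEN);
        return buf;
}

The second option costs an extra pass over the packet, which is exactly
what the benchmarks would need to show is (or is not) in the noise after
the hypervisor's copy.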
-Brian

--
Brian King
Linux on Power Virtualization
IBM Linux Technology Center

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel