
[Xen-devel] RE: [PATCH] Virtual machine queue NIC support in control panel


  • To: "Santos, Jose Renato G" <joserenato.santos@xxxxxx>
  • From: "Zhao, Yu" <yu.zhao@xxxxxxxxx>
  • Date: Sun, 3 Feb 2008 15:06:48 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Sat, 02 Feb 2008 23:12:30 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Achj6a1mWdtC/MX0R9qBYCYBhdWPIQASrclgAH68QNA=
  • Thread-topic: [PATCH] Virtual machine queue NIC support in control panel

Renato,

Thanks for your comments.

"vmq-attach" and "vmq-detach" are intended to associate a device queue
with a vif while that vif is running. These two "xm" options take a
physical NIC name and a vif reference, and invoke the low-level utility
to do the real work. If the physical NIC doesn't have any available
queue, the low-level utility is supposed to return an error, so "xm
vmq-attach" will report the failure.
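
To make the intended flow concrete, here is a minimal sketch of how the
xend side of "xm vmq-attach" could call out to the low-level utility;
the utility name ("vmq-ctl") and the helper are placeholders of my own,
not the actual interface in the patch:

    import subprocess

    # Hypothetical helper, for illustration only: ask the vendor utility
    # to bind a device queue on physical NIC 'pnic' to virtual interface
    # 'vif'.
    def vmq_attach(pnic, vif):
        cmd = ['vmq-ctl', 'attach', pnic, vif]
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode != 0:
            # No free queue on the NIC (or any other failure): propagate
            # the error so "xm vmq-attach" can report it to the user.
            raise RuntimeError('vmq-attach %s %s failed: %s'
                               % (pnic, vif, err.strip()))
        return out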

Using "accel" plug-in framework to do this is a decent solution.
However, "accel" plug-in lacks dynamic association function, which means
user cannot set up or change a accelerator for a VIF when this VIF is
running (as I mentioned in another email to Kieran Mansley). If we can
improve "accel" plug-in to support this and some other features that may
be required by other acceleration technologies, "vmq" and other coming
acceleration options can converge.
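
A rough sketch of what that convergence could look like follows; this
is only an assumption about the direction, not the accel plug-in's
current interface, and "accel=vmq" / "pdev=" are illustrative names of
my own:

    # Hypothetical domain-config fragment: the vif names both the
    # accelerator and the physical device (Renato's "pdev" suggestion).
    vif = [ 'mac=00:16:3e:xx:xx:xx, bridge=xenbr0, accel=vmq, pdev=eth0' ]

A dynamic association call in xend could then bind or rebind the
accelerator for a running VIF through the same path that "xm
vmq-attach" would otherwise use.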

If you have any other comments or suggestions, please feel free to let
me know. I'm trying to revise this patch to use "accel" and will send
it out later.

Regards,
Yu

>-----Original Message-----
>From: Santos, Jose Renato G [mailto:joserenato.santos@xxxxxx]
>Sent: Friday, February 01, 2008 2:42 AM
>To: Zhao, Yu; Keir.Fraser@xxxxxxxxxxxx
>Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: RE: [PATCH] Virtual machine queue NIC support in control panel
>
>Yu,
>Thanks for the patch
>I don't know the python tools well enough to provide detailed comments
>on the patch.
>I just have one high level comment. Using the term "vmq" in the domain
>configuration file or in commands like "vmq-attach" may give the user
>the wrong impression that a device queue will be dedicated to the vif.
>This may or may not be true depending on how many queues are available
>and how many other vifs are using them. It seems that we should allow
>the control tools to bind a vif to a NIC and let netback decide which
>vifs will use dedicated queues and which will share a common queue.
>Thus it seems that using a name like "pdev" is more appropriate than
>"vmq". Using a "pdev" parameter to associate a vif with a physical
>device can be used by accelerator plugins as suggested by Kieran. That
>said, in the future it will be useful to add commands to list vifs and
>vmq mappings and to pin vifs to a vmq, in a similar way we list and
>pin vCPUs.
>
>Renato
>
>> -----Original Message-----
>> From: Zhao, Yu [mailto:yu.zhao@xxxxxxxxx]
>> Sent: Thursday, January 31, 2008 1:14 AM
>> To: Keir.Fraser@xxxxxxxxxxxx; Santos, Jose Renato G
>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Subject: [PATCH] Virtual machine queue NIC support in control panel
>>
>> This patch enables virtual machine queue NIC support in the
>> control panel (xm/xend), so the user can add or remove a
>> dedicated queue for a guest.
>>
>> Virtual machine queue is a technology for network devices,
>> intended to reduce the burden on the hypervisor while
>> improving network I/O performance on a virtualized platform.
>> Some vendors have launched their products, such as the
>> Intel(R) 82575/82598 (for more information on this technology:
>> http://www.intel.com/technology/platform-technology/virtualization/VMDq_whitepaper.pdf).
>>
>> This patch requires a vendor-specific utility to control the NIC.
>>
>> This patch could also be applied to netchannel2.
>>
>> Signed-off-by: Yu Zhao <yu.zhao@xxxxxxxxx>
>>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

