
Re: [Xen-devel] [PATCH 0 of 12] PV on HVM Xen



On Fri, May 28, 2010 at 03:25:34AM -0700, Boris Derzhavets wrote:
>    What is the advantage of PV on HVM?
>

Pure HVM guests using the QEMU-emulated disk and network devices are slow.
PV-on-HVM drivers make disk and network I/O fast for HVM guests.

>    Kernel 2.6.34 with Stefano's patches can, I believe, only be built for a
>    Linux HVM DomU.
>

Exactly. They're meant for an upstream kernel running as a Xen HVM guest.

>    At the same time, any recent Linux kernel (>= 2.6.24 or >= 2.6.26) supports
>    a PV guest install (it has been in mainline for a while).
>    What am I missing here?
> 

HVM guests might be faster than PV guests for some workloads, i.e. workloads
that spawn a lot of new processes all the time; kernel compilation could be
one example.

-- Pasi

>    Boris.
> 
>    --- On Mon, 5/24/10, Stefano Stabellini <Stefano.Stabellini@xxxxxxxxxxxxx>
>    wrote:
> 
>      From: Stefano Stabellini <Stefano.Stabellini@xxxxxxxxxxxxx>
>      Subject: [Xen-devel] [PATCH 0 of 12] PV on HVM Xen
>      To: "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>
>      Cc: "Stefano Stabellini" <Stefano.Stabellini@xxxxxxxxxxxxx>, "Jeremy
>      Fitzhardinge" <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx"
>      <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Don Dutile" <ddutile@xxxxxxxxxx>,
>      "Sheng Yang" <sheng@xxxxxxxxxxxxxxx>
>      Date: Monday, May 24, 2010, 2:25 PM
> 
>      Hi all,
>      this is another update of the PV on HVM Xen series that addresses
>      Jeremy's comments.
>      The platform_pci hooks have been removed; suspend/resume for HVM
>      domains is now much more similar to the PV case and shares the same
>      do_suspend function.
>      The alloc_xen_mmio_hook has been removed as well; the memory allocation
>      for the grant table is now done directly by the Xen platform PCI driver.
>      The per_cpu xen_vcpu variable is set by a cpu_notifier function, so that
>      secondary vcpus have the variable set correctly regardless of which Xen
>      features are available on the host.
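
A minimal sketch of what such a cpu_notifier could look like; the function
names and init hook below are illustrative assumptions, not code copied from
the patch:

    static int xen_hvm_cpu_notify(struct notifier_block *self,
                                  unsigned long action, void *hcpu)
    {
            int cpu = (long)hcpu;

            /* Point the per-cpu xen_vcpu pointer at this vcpu's slot in
             * the shared info page before the secondary vcpu comes up. */
            if (action == CPU_UP_PREPARE)
                    per_cpu(xen_vcpu, cpu) =
                            &HYPERVISOR_shared_info->vcpu_info[cpu];
            return NOTIFY_OK;
    }

    static struct notifier_block xen_hvm_cpu_notifier = {
            .notifier_call = xen_hvm_cpu_notify,
    };

    static int __init xen_hvm_smp_init(void)
    {
            /* registered once during early HVM guest setup */
            register_cpu_notifier(&xen_hvm_cpu_notifier);
            return 0;
    }
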
>      The kernel command line option xen_unplug has been renamed to
>      xen_emul_unplug and the code that makes use of it has been moved to a
>      separate file (arch/x86/xen/platform-pci-unplug.c).
>      xen_unplug_emulated_devices is now able to detect whether blkfront,
>      netfront and the Xen platform PCI driver have been compiled in, and
>      sets the default value of xen_emul_unplug accordingly.
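
A rough sketch of how such a boot parameter is typically wired up; the
variable name and the accepted value below are assumptions for illustration,
not taken from the series:

    /* hypothetical parser for the xen_emul_unplug= kernel parameter */
    static bool xen_unplug_all __initdata;

    static int __init parse_xen_emul_unplug(char *arg)
    {
            /* the set of values the real option accepts is defined by the
             * patch; "all" here is only an assumed example */
            if (arg && strcmp(arg, "all") == 0)
                    xen_unplug_all = true;
            return 0;
    }
    __setup("xen_emul_unplug=", parse_xen_emul_unplug);
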
>      The patch "Initialize xenbus device structs with ENODEV as
>      default" has been removed from the series and it will be sent
>      separately.
>      Finally the comments on most of the patches have been improved.
> 
>      The series is based on 2.6.34 and supports Xen PV frontends running
>      in an HVM domain, including netfront, blkfront and the VIRQ_TIMER.
> 
>      In order to use VIRQ_TIMER and to improve performance, you need a
>      patch to Xen that implements the vector callback mechanism for event
>      channel delivery.
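
As a rough illustration of the guest side of that mechanism, a guest could
ask for vector-based delivery through HVM_PARAM_CALLBACK_IRQ; the encoding
and helper name below are assumptions based on the public Xen headers, not
the patch itself:

    /* request that event channel notifications be injected as a direct
     * vector instead of an emulated PCI interrupt */
    static int set_callback_vector(u8 vector)
    {
            struct xen_hvm_param a;

            a.domid = DOMID_SELF;
            a.index = HVM_PARAM_CALLBACK_IRQ;
            /* via-type "vector" (2) in the top byte, vector number in the
             * low bits; encoding assumed from xen/interface/hvm/params.h */
            a.value = ((u64)2 << 56) | vector;
            return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
    }
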
> 
>      A git tree is also available here:
> 
>      git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git
> 
>      branch name 2.6.34-pvhvm-v2.
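
For example, one might fetch and check out the series with:

    git clone git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git
    cd linux-pvhvm
    git checkout 2.6.34-pvhvm-v2
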
> 
>      Cheers,
> 
>      Stefano
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

