WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] [PATCH][v2] Hybrid extension support in Xen

To: Sheng Yang <sheng@xxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH][v2] Hybrid extension support in Xen
From: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Date: Tue, 2 Feb 2010 13:52:44 +0000
Cc: Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 02 Feb 2010 05:53:07 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <201002022106.42451.sheng@xxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <201002021616.19189.sheng@xxxxxxxxxxxxxxx> <1265110015.2965.22432.camel@xxxxxxxxxxxxxxxxxxxxxx> <201002022106.42451.sheng@xxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, 2010-02-02 at 13:06 +0000, Sheng Yang wrote:
> On Tuesday 02 February 2010 19:26:55 Ian Campbell wrote:
> > On Tue, 2010-02-02 at 08:16 +0000, Sheng Yang wrote:
> > > +static hvm_hypercall_t *hvm_hypercall_hybrid64_table[NR_hypercalls] = {
> > > +    [ __HYPERVISOR_memory_op ] = (hvm_hypercall_t *)hvm_memory_op,
> > > +    [ __HYPERVISOR_grant_table_op ] = (hvm_hypercall_t *)hvm_grant_table_op,
> > > +    HYPERCALL(xen_version),
> > > +    HYPERCALL(console_io),
> > > +    HYPERCALL(vcpu_op),
> > > +    HYPERCALL(sched_op),
> > > +    HYPERCALL(event_channel_op),
> > > +    HYPERCALL(hvm_op),
> > > +};
> > 
> > Why not just expand the existing HVM hypercall table to incorporate these
> > new hypercalls?
> 
> I was just afraid that a normal HVM guest calling these hypercalls would
> result in some chaos, so I added a limitation (hybrid only) here. (I admit
> it doesn't do much to improve security against a malicious guest...)

I don't think this limitation adds any meaningful security or reduction
in chaos. A non-hybrid aware HVM guest simply won't make those
hypercalls.

> > In fact, why is hybrid mode considered a separate mode by the hypervisor
> > at all? Shouldn't it just be an extension to regular HVM mode which
> > guests may choose to use? This seems like it would eliminate a bunch of
> > the random conditionals.
> 
> There is still something different from a normal HVM guest. For example, to
> use the PV timer, we need to clear the TSC offset in the HVM domain; and for
> event delivery, we use a predefined VIRQ rather than the emulated
> IOAPIC/APIC. This code is mutually exclusive with the normal paths, so we
> wrapped it with a flag (which we called "hybrid"). The word "mode" here may
> be inaccurate; "extension" would be more appropriate. I will change the
> phrasing next time.

But the old mechanisms (emulated IOAPIC etc) are still present until the
enable_hybrid HVMOP is called, aren't they? Why can't you perform the
switch at the point at which the new feature is requested by the guest
e.g. when the VIRQ is configured?

It looks like you are using evtchns for all interrupt injection,
including any emulated or passthrough devices which may be present.
Using evtchns for PV devices obviously makes sense, but I think this
needs to coexist with emulated interrupt injection for non-PV devices, so
the IOAPIC/APIC should not be mutually exclusive with using PV evtchns.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
