
Re: [Xen-devel] [RFC PATCH 1/8]: PVH: Basic and preparatory changes



On Fri, Aug 17, 2012 at 08:47:02PM +0100, Ian Campbell wrote:
> On Fri, 2012-08-17 at 20:24 +0100, Mukesh Rathor wrote:
> > On Fri, 17 Aug 2012 09:35:54 +0100
> > Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > 
> > > On Thu, 2012-08-16 at 01:57 +0100, Mukesh Rathor wrote:
> > > 
> > > >      unsigned long ldt_base, ldt_ents;       /* LDT (linear address, # ents) */
> > > > -    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents) */
> > > > +    unsigned long gdt_frames[16], gdt_ents; /* GDT (machine frames, # ents).*
> > > > +                                            * PV in HVM: it's GDTR addr/sz */
> > > 
> > > I'm not sure I understand this comment. What is "GDTR addr/sz"? Do
> > > you mean that gdt_frames/gdt_ents has different semantics here?
> > > 
> > > Might be worth a union? Or finding some other way to expand this
> > > struct.
> > 
> > In the case of PVH, the field is used to send down the GDTR address
> > and size. Perhaps it would be better to just leave the comment out.
> 
> If the semantics of this field are totally different in the two modes
> then I think a union is warranted.
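
Right, something along these lines perhaps (just a sketch - the pvh
field names below are made up for illustration, not taken from the
patch):

    union {
        struct {
            /* PV: GDT (machine frames, # ents). */
            unsigned long gdt_frames[16], gdt_ents;
        } pv;
        struct {
            /* PVH: GDTR base address and size. */
            unsigned long gdtaddr, gdtsz;
        } pvh;
    } gdt;

That would keep the two meanings apart without growing the struct,
though it does rename the fields that existing PV users touch.
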
> 
> > > > -void __init xen_arch_setup(void)
> > > > +/* Normal PV domain not running in HVM container */
> > > 
> > > It's a bit of a shame to overload the "HVM" term this way, to mean
> > > both the traditional "providing a full PC-like environment" and "PV
> > > using hardware virtualisation facilities".
> > > 
> > > Perhaps:
> > >         /* Normal PV domain without PVH extensions */
> > 
> > Ok, HVM==Hardware Virtual Machine seems more appropriate here,
> 
> HVM in the context of Xen means more than that, though: it implies a
> Qemu and emulation of a complete "PC-like" environment and all of that
> stuff, which is why I think it is inappropriate/confusing to overload
> it to mean "PV with hardware assistance" too.
> 
> Xen's use of the term HVM is a bit unhelpful, exactly because it is a
> broad-sounding term with a very specific meaning, but we are kind of
> stuck with it.
> 
> > but I can remove the word HVM and go with 'PVH extensions'.
> 
> Please ;-)
> 
> > 
> > > >  
> > > >  #ifdef CONFIG_ACPI
> > > >         if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
> > > > diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> > > > index f58dca7..cdf269d 100644
> > > > --- a/arch/x86/xen/smp.c
> > > > +++ b/arch/x86/xen/smp.c
> > > > @@ -300,8 +300,6 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
> > > >         gdt = get_cpu_gdt_table(cpu);
> > > > -       BUG_ON((unsigned long)gdt & ~PAGE_MASK);
> > > > +               ctxt->ldt_ents = 0;
> > > 
> > > Something odd is going on with the indentation here (and below I've
> > > just noticed). I suspect lots of the changes aren't really changing
> > > anything other than whitespace?
> > 
> > Konrad wanted the indentation done first, without code changes, where
> > it made sense, so that the further patches make it easy to see the
> > statements being added.
> 
> I disagree that this is a useful way to structure a series unless the
> whitespace change is in a patch of *only* whitespace changes (and even
> then this would be an uncommon way to do things IMO).

That is the intention - otherwise it's darn hard to read what has
changed in the further patches.

> 
> But putting the whitespace changes associated with adding an if
> alongside unrelated actual semantic changes in a totally different patch
> is probably the most confusing and least helpful of all the possible
> options!

<nods> The patch should have been separate.
> 
> My personal preference would be to do the indent when adding the if and
> let people who want to see the differences without the indentation
> change use "diff -b".
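
For illustration, the end result being discussed is roughly this shape
(the xen_pvh_domain() predicate is hypothetical here, and this is not
the exact hunk):

    /* Wrapping the PV-only setup in a PVH check forces a reindent of
     * the body, even though the statements themselves are unchanged: */
    if (!xen_pvh_domain()) {
            gdt = get_cpu_gdt_table(cpu);
            BUG_ON((unsigned long)gdt & ~PAGE_MASK);
    }

Reviewing with "git diff -b" (or plain "diff -b") then hides the
whitespace-only lines, so only the genuinely new lines stand out.
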
> 
> Konrad is maintainer though, so if he likes this then fine.
> 
> Ian.
