
Re: [Xen-devel] [PATCH v13 1/2] vmx: VT-d posted-interrupt core logic handling




> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Wednesday, February 24, 2016 12:34 AM
> To: Wu, Feng <feng.wu@xxxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Dario Faggioli
> <dario.faggioli@xxxxxxxxxx>; George Dunlap <george.dunlap@xxxxxxxxxxxxx>;
> Tian, Kevin <kevin.tian@xxxxxxxxx>; xen-devel@xxxxxxxxxxxxx; Keir Fraser
> <keir@xxxxxxx>
> Subject: Re: [PATCH v13 1/2] vmx: VT-d posted-interrupt core logic handling
> 
> >>> On 23.02.16 at 09:34, <feng.wu@xxxxxxxxx> wrote:
> > +static void vmx_vcpu_block(struct vcpu *v)
> > +{
> > +    unsigned long flags;
> > +    unsigned int dest;
> > +    spinlock_t *old_lock = pi_blocking_list_lock(v);
> > +    spinlock_t *pi_blocking_list_lock = &vmx_pi_blocking_list_lock(v->processor);
> > +    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> > +
> > +    spin_lock_irqsave(pi_blocking_list_lock, flags);
> > +    old_lock = cmpxchg(&pi_blocking_list_lock(v), old_lock,
> > +                       &vmx_pi_blocking_list_lock(v->processor));
> 
> See my comment on v12.

Here is your comment on v12: "Why don't you use the local variable here?".
Here I need to assign a new value to
'v->arch.hvm_vmx.pi_blocking_vcpu_info.pi_blocking_list_lock', so I am not
sure how the local variable could be used here. Could you please elaborate
a bit more? Thanks a lot!
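
Or do you perhaps mean only the last argument, i.e. reusing the local that
was already initialised to the same address? Just my guess at what you mean:

    spin_lock_irqsave(pi_blocking_list_lock, flags);
    /*
     * New value: reuse the local variable instead of re-evaluating
     * &vmx_pi_blocking_list_lock(v->processor). Without a following '(',
     * the identifier names the local variable, not the macro.
     */
    old_lock = cmpxchg(&pi_blocking_list_lock(v), old_lock,
                       pi_blocking_list_lock);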

> > --- a/xen/include/asm-x86/hvm/hvm.h
> > +++ b/xen/include/asm-x86/hvm/hvm.h
> > @@ -565,6 +565,12 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
> >                             signed int cr0_pg);
> >  unsigned long hvm_cr4_guest_reserved_bits(const struct vcpu *v, bool_t restore);
> >
> > +#define arch_vcpu_block(v) ({                                              \
> > +    void (*func) (struct vcpu *) = (v)->domain->arch.hvm_domain.vmx.vcpu_block;\
> > +    if ( func )                                                            \
> > +        func(v);                                                           \
> > +})
> 
> See my comment on v12. The code structure actually was better
> there, and all you needed to do was introduce a local variable.

Do you mean something like the following:

+#define arch_vcpu_block(v) ({                                               \
+    struct vcpu *vcpu = v;                                                  \
+    if ( (vcpu)->domain->arch.hvm_domain.vmx.vcpu_block )                   \
+        (vcpu)->domain->arch.hvm_domain.vmx.vcpu_block((vcpu));             \
+})

Why is this better than the one in v12? Thanks!
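
For comparison, a slightly tightened variant of the above, parenthesising
the argument at its single evaluation point, could be (just a sketch on my
side):

#define arch_vcpu_block(v) ({                                              \
    struct vcpu *vcpu_ = (v);      /* evaluate the macro argument once */  \
    if ( vcpu_->domain->arch.hvm_domain.vmx.vcpu_block )                   \
        vcpu_->domain->arch.hvm_domain.vmx.vcpu_block(vcpu_);              \
})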

> 
> > @@ -101,6 +160,17 @@ struct pi_desc {
> >
> >  #define NR_PML_ENTRIES   512
> >
> > +#define pi_blocking_vcpu_list(v)    \
> > +    ((v)->arch.hvm_vmx.pi_blocking_vcpu_info.pi_blocking_vcpu_list)
> > +
> > +#define pi_blocking_list_lock(v)    \
> > +    ((v)->arch.hvm_vmx.pi_blocking_vcpu_info.pi_blocking_list_lock)
> 
> At the latest when writing this, it should have occurred to you
> that there are too many pi_blocking_ prefixes. Please strive to
> name things such that macros like these aren't really necessary.
> The same naturally applies to struct vmx_pi_blocking_vcpu, albeit
> there the VMX maintainers have the final say.

Using these macros shortens the code; with the full expression the lines are
hard to read, e.g.
'v->arch.hvm_vmx.pi_blocking_vcpu_info.pi_blocking_vcpu_list'.
Even if we change the member names in the structure, the accesses are still
very long:
        'v->arch.hvm_vmx.pi_blocking_vcpu_info.vcpu_list'
        'v->arch.hvm_vmx.pi_blocking_vcpu_info.list_lock'
In most cases this still exceeds the 80-character line limit, which makes the
code a little hard to read.
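
The only way I can see to get under the limit without accessor macros would
be to also shorten the container field itself, e.g. (hypothetical naming on
my side):
        'v->arch.hvm_vmx.pi_blocking.vcpu_list'
        'v->arch.hvm_vmx.pi_blocking.list_lock'
These direct accesses would fit within 80 columns, though at the cost of a
less descriptive name.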

Thanks,
Feng
> 
> Jan

