
Re: [Xen-devel] [PATCH v10 6/7] vmx: VT-d posted-interrupt core logic handling



Hi Jan,

Kevin and Dario have given some comments about this patch. I would like to
know whether you have any comments as well; your opinion would be highly
appreciated and is very important for getting this patch merged as soon as
possible. Thank you!

Thanks,
Feng

> -----Original Message-----
> From: Dario Faggioli [mailto:dario.faggioli@xxxxxxxxxx]
> Sent: Wednesday, December 23, 2015 10:21 AM
> To: Wu, Feng <feng.wu@xxxxxxxxx>; xen-devel@xxxxxxxxxxxxx
> Cc: Tian, Kevin <kevin.tian@xxxxxxxxx>; Keir Fraser <keir@xxxxxxx>; George
> Dunlap <george.dunlap@xxxxxxxxxxxxx>; Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH v10 6/7] vmx: VT-d posted-interrupt core logic
> handling
> 
> Here I am,
> 
> On Thu, 2015-12-03 at 16:35 +0800, Feng Wu wrote:
> > This is the core logic handling for VT-d posted-interrupts. Basically it
> > deals with how and when to update posted-interrupts during the following
> > scenarios:
> > - vCPU is preempted
> > - vCPU is put to sleep
> > - vCPU is blocked
> >
> > [..]
> >
> Thanks for changing the changelog; it now reads much more like a
> high-level description of what happens and why.
> 
> > v10:
> > - Check iommu_intpost first
> > - Remove pointless checking of has_hvm_container_vcpu(v)
> > - Rename 'vmx_pi_state_change' to 'vmx_pi_state_to_normal'
> > - Since vcpu_unblock() doesn't acquire 'pi_blocked_vcpu_lock', we
> >   don't need to use another list to save the vCPUs with 'ON' set, just
> >   directly call vcpu_unblock(v).
> >
> These were all my comments on v9 and, as I've verified in the patch, they
> have indeed all been addressed... Thanks for this.
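> 
> One follow-up on the last changelog item, just to be sure I am reading
> the v10 change correctly: I take it the wake-up vector handler now walks
> the per-CPU blocked list and calls vcpu_unblock() directly on every vCPU
> whose 'ON' bit is set, roughly as in the sketch below. This is untested
> and only meant to illustrate my reading: the handler name, pi_test_on()
> and the exact list handling are my own guesses, not taken from this hunk.
> 
>     static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
>     {
>         unsigned int cpu = smp_processor_id();
>         struct arch_vmx_struct *vmx, *tmp;
> 
>         spin_lock(&per_cpu(pi_blocked_vcpu_lock, cpu));
> 
>         /*
>          * vcpu_unblock() does not take pi_blocked_vcpu_lock, so it can
>          * be called right here while walking the per-CPU list, with no
>          * need to defer the wake-ups to a second list.
>          */
>         list_for_each_entry_safe ( vmx, tmp,
>                                    &per_cpu(pi_blocked_vcpu, cpu),
>                                    pi_blocked_vcpu_list )
>         {
>             if ( pi_test_on(&vmx->pi_desc) )
>             {
>                 list_del_init(&vmx->pi_blocked_vcpu_list);
>                 vcpu_unblock(container_of(vmx, struct vcpu,
>                                           arch.hvm_vmx));
>             }
>         }
> 
>         spin_unlock(&per_cpu(pi_blocked_vcpu_lock, cpu));
>     }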
> 
> This patch looks fine to me now. There are a few minor issues that I'll
> raise inline but, in general, I think this is a good design, and the
> patch does a fine job of implementing it.
> 
> Here are the detailed comments.
> 
> First of all, trying to apply it, I got:
> 
> <stdin>:105: trailing whitespace.
> void arch_vcpu_block(struct vcpu *v)
> <stdin>:106: trailing whitespace.
> {
> <stdin>:107: trailing whitespace.
>     if ( v->arch.vcpu_block )
> <stdin>:108: trailing whitespace.
>         v->arch.vcpu_block(v);
> <stdin>:109: trailing whitespace.
> }
> 
> This may not be accurate, though (i.e., it may be down to what I used to
> apply the patches), so please double check.
> 
> And there are also a couple of long lines, which you should split.
> 
> > +void vmx_vcpu_block(struct vcpu *v)
> > +{
> > +    unsigned long flags;
> > +    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> > +
> > +    if ( !has_arch_pdevs(v->domain) )
> > +        return;
> > +
> > +    ASSERT(v->arch.hvm_vmx.pi_block_cpu == NR_CPUS);
> > +
> > +    /*
> > +     * The vCPU is blocking, we need to add it to one of the per pCPU lists.
> > +     * We save v->processor to v->arch.hvm_vmx.pi_block_cpu and use it for
> > +     * the per-CPU list, we also save it to posted-interrupt descriptor and
> > +     * make it as the destination of the wake-up notification event.
> > +     */
> > +    v->arch.hvm_vmx.pi_block_cpu = v->processor;
> > +
> > +    spin_lock_irqsave(&per_cpu(pi_blocked_vcpu_lock,
> > +                      v->arch.hvm_vmx.pi_block_cpu), flags);
> > +    list_add_tail(&v->arch.hvm_vmx.pi_blocked_vcpu_list,
> > +                  &per_cpu(pi_blocked_vcpu, v->arch.hvm_vmx.pi_block_cpu));
> > +    spin_unlock_irqrestore(&per_cpu(pi_blocked_vcpu_lock,
> > +                           v->arch.hvm_vmx.pi_block_cpu), flags);
> > +
> > +    ASSERT(!pi_test_sn(pi_desc));
> > +
> > +    /*
> > +     * We don't need to set the 'NDST' field, since it should point to
> > +     * the same pCPU as v->processor.
> > +     */
> > +
> So, maybe we can ASSERT() that?
> 
> > +    write_atomic(&pi_desc->nv, pi_wakeup_vector);
> > +}
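> 
> To be explicit about the ASSERT() I'm suggesting above, something like
> the below is what I have in mind (untested, and only meant to show the
> idea; it mirrors the two 'NDST' encodings that vmx_pi_switch_to() writes):
> 
>     ASSERT(pi_desc->ndst ==
>            (x2apic_enabled ? cpu_physical_id(v->processor)
>                            : MASK_INSR(cpu_physical_id(v->processor),
>                                        PI_xAPIC_NDST_MASK)));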
> 
> > +static void vmx_pi_switch_from(struct vcpu *v)
> > +{
> > +    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> > +
> > +    if ( !iommu_intpost || !has_arch_pdevs(v->domain) ||
> > +         test_bit(_VPF_blocked, &v->pause_flags) )
> > +        return;
> > +
> > +    /*
> > +     * The vCPU has been preempted or went to sleep. We don't need to send
> > +     * notification event to a non-running vcpu, the interrupt information
> > +     * will be delivered to it before VM-ENTRY when the vcpu is scheduled
> > +     * to run next time.
> > +     */
> > +    pi_set_sn(pi_desc);
> > +}
> > +
> > +static void vmx_pi_switch_to(struct vcpu *v)
> > +{
> > +    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> > +
> > +    if ( !iommu_intpost || !has_arch_pdevs(v->domain) )
> > +        return;
> > +
> > +    if ( x2apic_enabled )
> > +        write_atomic(&pi_desc->ndst, cpu_physical_id(v->processor));
> > +    else
> > +        write_atomic(&pi_desc->ndst,
> > +                     MASK_INSR(cpu_physical_id(v->processor),
> > +                     PI_xAPIC_NDST_MASK));
> > +
> > +    pi_clear_sn(pi_desc);
> >
> No comment matching the one above (for pi_set_sn(), in
> vmx_pi_switch_from())? I don't feel too strongly about this, but it would
> be nice to have both (or none, but I'd prefer both! :-D).
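> 
> Something along these lines would do, I think (the wording is only a
> strawman, of course, feel free to rephrase):
> 
>     /*
>      * The vCPU is about to run again. Clear 'SN' so that notification
>      * events are delivered to it (rather than merely being recorded in
>      * 'PIR') while it is running.
>      */
>     pi_clear_sn(pi_desc);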
> 
> > +}
> > +
> > +static void vmx_pi_state_to_normal(struct vcpu *v)
> > +{
> >
> I'm still a bit puzzled by the name... But it's better than in the
> previous round, and I can't come up with an alternative that I'd like
> better myself... so I'm fine with this one.
> 
> Regards,
> Dario
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

