
Re: [Xen-devel] [PATCH v2 1/5] xentrace: add TRC_HVM_PI_LIST_ADD



On Mon, May 15, 2017 at 09:33:04AM +0800, Tian, Kevin wrote:
>> From: Gao, Chao
>> Sent: Thursday, May 11, 2017 2:04 PM
>> 
>> This patch adds TRC_HVM_PI_LIST_ADD to track adding one entry to
>> the per-pcpu blocking list. Also introduce a 'counter' to track
>> the number of entries in the list.
>> 
>> Signed-off-by: Chao Gao <chao.gao@xxxxxxxxx>
>> ---
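
For reference, the definition of the new event sits outside the hunks
quoted below; it gets wired up along the usual lines, roughly as follows.
The subclass value here is illustrative, not necessarily the one the full
patch uses:

    /* xen/include/public/trace.h -- value is a placeholder */
    #define TRC_HVM_PI_LIST_ADD      (TRC_HVM_HANDLER + 0x26)

    /* xen/include/asm-x86/hvm/trace.h -- pick a suitable default class */
    #define DO_TRC_HVM_PI_LIST_ADD   DEFAULT_HVM_MISC
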
>> @@ -119,6 +120,9 @@ static void vmx_vcpu_block(struct vcpu *v)
>>       */
>>      ASSERT(old_lock == NULL);
>> 
>> +    atomic_inc(&per_cpu(vmx_pi_blocking, v->processor).counter);
>> +    HVMTRACE_4D(PI_LIST_ADD, v->domain->domain_id, v->vcpu_id, v->processor,
>> +                atomic_read(&per_cpu(vmx_pi_blocking, v->processor).counter));
>>      list_add_tail(&v->arch.hvm_vmx.pi_blocking.list,
>>                    &per_cpu(vmx_pi_blocking, v->processor).list);
>>      spin_unlock_irqrestore(pi_blocking_list_lock, flags);
>> @@ -186,6 +190,8 @@ static void vmx_pi_unblock_vcpu(struct vcpu *v)
>>      {
>>          ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
>>          list_del(&v->arch.hvm_vmx.pi_blocking.list);
>> +        atomic_dec(&container_of(pi_blocking_list_lock,
>> +                                 struct vmx_pi_blocking_vcpu, lock)->counter);
>>          v->arch.hvm_vmx.pi_blocking.lock = NULL;
>>      }
>> 
>> @@ -234,6 +240,7 @@ void vmx_pi_desc_fixup(unsigned int cpu)
>>          if ( pi_test_on(&vmx->pi_desc) )
>>          {
>>              list_del(&vmx->pi_blocking.list);
>> +            atomic_dec(&per_cpu(vmx_pi_blocking, cpu).counter);
>>              vmx->pi_blocking.lock = NULL;
>>              vcpu_unblock(container_of(vmx, struct vcpu, arch.hvm_vmx));
>>          }
>> @@ -258,6 +265,8 @@ void vmx_pi_desc_fixup(unsigned int cpu)
>> 
>>              list_move(&vmx->pi_blocking.list,
>>                        &per_cpu(vmx_pi_blocking, new_cpu).list);
>> +            atomic_dec(&per_cpu(vmx_pi_blocking, cpu).counter);
>> +            atomic_inc(&per_cpu(vmx_pi_blocking, new_cpu).counter);
>
>Don't you also need a trace here?

Yes, it is needed.
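
Something mirroring the vmx_vcpu_block() hunk should do for the
list_move() path. A sketch (untested; v is recovered from vmx the same
way the vcpu_unblock() call in the earlier hunk does):

    struct vcpu *v = container_of(vmx, struct vcpu, arch.hvm_vmx);

    list_move(&vmx->pi_blocking.list,
              &per_cpu(vmx_pi_blocking, new_cpu).list);
    atomic_dec(&per_cpu(vmx_pi_blocking, cpu).counter);
    atomic_inc(&per_cpu(vmx_pi_blocking, new_cpu).counter);
    HVMTRACE_4D(PI_LIST_ADD, v->domain->domain_id, v->vcpu_id, new_cpu,
                atomic_read(&per_cpu(vmx_pi_blocking, new_cpu).counter));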

>
>and from completeness p.o.v, is it useful to trace both dec/inc?
>

I tried to do this. Assuming the log should show which pcpu's list was
decremented, I don't see a clean way to get the pcpu id in
vmx_pi_unblock_vcpu(): we can reach the per-cpu structure via
container_of() on pi_blocking_list_lock, but that structure doesn't
record which pcpu it belongs to.
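
One illustrative workaround, not something this patch does: store the
owning pcpu id in struct vmx_pi_blocking_vcpu itself when the per-cpu
data is initialised, so both the id and the counter fall out of the same
container_of(). The extra field and the PI_LIST_DEL event name below are
both made up for the sketch:

    struct vmx_pi_blocking_vcpu {
        struct list_head     list;
        spinlock_t           lock;
        atomic_t             counter;
        unsigned int         cpu;       /* owning pcpu, set at init time */
    };

    /* In vmx_pi_unblock_vcpu(), under pi_blocking_list_lock: */
    struct vmx_pi_blocking_vcpu *pi_blocking =
        container_of(pi_blocking_list_lock, struct vmx_pi_blocking_vcpu, lock);

    list_del(&v->arch.hvm_vmx.pi_blocking.list);
    atomic_dec(&pi_blocking->counter);
    HVMTRACE_4D(PI_LIST_DEL, v->domain->domain_id, v->vcpu_id,
                pi_blocking->cpu, atomic_read(&pi_blocking->counter));

Whether an extra per-cpu field is worth carrying just for a trace record
is debatable, though.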

Thanks
Chao


 

