Re: [Xen-devel] [PATCH 2/2] x86/time: update vtsc_last with cmpxchg and drop vtsc_lock
On 16/12/2019 10:00, Roger Pau Monné wrote:
> On Fri, Dec 13, 2019 at 10:48:02PM +0000, Igor Druzhinin wrote:
>> Now that vtsc_last is the only entity protected by vtsc_lock we can
>> simply update it using a single atomic operation and drop the spinlock
>> entirely. This is extremely important for the case of running nested
>> (e.g. a shim instance with lots of vCPUs assigned), since if preemption
>> happens somewhere inside the critical section, other vCPUs immediately
>> stop progressing (and are probably preempted as well) while waiting for
>> the spinlock to be freed.
>>
>> This fixes constant shim guest boot lockups with ~32 vCPUs if there is
>> vCPU overcommit present (which increases the likelihood of preemption).
>>
>> Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
>> ---
>>  xen/arch/x86/domain.c        |  1 -
>>  xen/arch/x86/time.c          | 16 ++++++----------
>>  xen/include/asm-x86/domain.h |  1 -
>>  3 files changed, 6 insertions(+), 12 deletions(-)
>>
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index bed19fc..94531be 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -539,7 +539,6 @@ int arch_domain_create(struct domain *d,
>>      INIT_PAGE_LIST_HEAD(&d->arch.relmem_list);
>>
>>      spin_lock_init(&d->arch.e820_lock);
>> -    spin_lock_init(&d->arch.vtsc_lock);
>>
>>      /* Minimal initialisation for the idle domain. */
>>      if ( unlikely(is_idle_domain(d)) )
>> diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
>> index 216169a..202446f 100644
>> --- a/xen/arch/x86/time.c
>> +++ b/xen/arch/x86/time.c
>> @@ -2130,19 +2130,15 @@ u64 gtsc_to_gtime(struct domain *d, u64 tsc)
>>
>>  uint64_t pv_soft_rdtsc(const struct vcpu *v, const struct cpu_user_regs *regs)
>>  {
>> -    s_time_t now = get_s_time();
>> +    s_time_t old, new, now = get_s_time();
>>      struct domain *d = v->domain;
>>
>> -    spin_lock(&d->arch.vtsc_lock);
>> -
>> -    if ( (int64_t)(now - d->arch.vtsc_last) > 0 )
>> -        d->arch.vtsc_last = now;
>> -    else
>> -        now = ++d->arch.vtsc_last;
>> -
>> -    spin_unlock(&d->arch.vtsc_lock);
>> +    do {
>> +        old = d->arch.vtsc_last;
>> +        new = (int64_t)(now - d->arch.vtsc_last) > 0 ? now : old + 1;
>
> Why do you need to do this subtraction? Isn't it easier to just do:
>
>     new = now > d->arch.vtsc_last ? now : old + 1;
>
> That avoids the cast and the subtraction.

I'm afraid I fell into the same trap as Jan. Given they are both signed,
I will change this in v2.

>> +    } while ( cmpxchg(&d->arch.vtsc_last, old, new) != old );
>
> I'm not sure if the following would be slightly better performance
> wise:
>
>     do {
>         old = d->arch.vtsc_last;
>         if ( d->arch.vtsc_last >= now )
>         {
>             new = atomic_inc_return(&d->arch.vtsc_last);
>             break;
>         }
>         else
>             new = now;
>     } while ( cmpxchg(&d->arch.vtsc_last, old, new) != old );
>
> In any case I'm fine with your version using cmpxchg exclusively.

That could be marginally better (atomic increment usually performs
better than cmpxchg), but it took me some time to work out that there
is no hidden race here. I'd request a third opinion on whether it's
worth changing.

Igor
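For reference, here is a minimal, self-contained sketch of the two update
patterns discussed in this thread. It is an illustration only, not the Xen
code: the GCC/Clang __atomic builtins stand in for Xen's cmpxchg() and
atomic_inc_return(), a file-scope vtsc_last stands in for d->arch.vtsc_last,
and get_s_time() is stubbed with a monotonic clock. The comparison uses the
plain signed form (now > old) agreed on for v2.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static int64_t vtsc_last;  /* stand-in for d->arch.vtsc_last */

/* Stand-in for Xen's get_s_time(): nanoseconds of monotonic time. */
static int64_t get_s_time(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
}

/*
 * The pattern from the patch (with the v2 comparison): hand out "now" if
 * it is ahead of the last value given to any vCPU, otherwise hand out
 * last + 1.  The CAS loop simply retries if another thread won the race,
 * so a preempted updater never blocks its peers.
 */
static int64_t soft_rdtsc_cmpxchg(void)
{
    int64_t old, new, now = get_s_time();

    do {
        old = __atomic_load_n(&vtsc_last, __ATOMIC_RELAXED);
        new = now > old ? now : old + 1;
    } while ( !__atomic_compare_exchange_n(&vtsc_last, &old, new, false,
                                           __ATOMIC_SEQ_CST,
                                           __ATOMIC_RELAXED) );
    return new;
}

/*
 * Roger's suggested variant: when the clock has not advanced past
 * vtsc_last, an unconditional atomic increment suffices.  This is safe
 * because vtsc_last only ever moves forward, so the incremented value
 * stays above "now" and remains unique even under concurrent updates --
 * the "no hidden race" Igor refers to above.
 */
static int64_t soft_rdtsc_inc(void)
{
    int64_t old, new, now = get_s_time();

    do {
        old = __atomic_load_n(&vtsc_last, __ATOMIC_RELAXED);
        if ( old >= now )
            return __atomic_add_fetch(&vtsc_last, 1, __ATOMIC_SEQ_CST);
        new = now;
    } while ( !__atomic_compare_exchange_n(&vtsc_last, &old, new, false,
                                           __ATOMIC_SEQ_CST,
                                           __ATOMIC_RELAXED) );
    return new;
}

int main(void)
{
    printf("%" PRId64 " %" PRId64 "\n",
           soft_rdtsc_cmpxchg(), soft_rdtsc_inc());
    return 0;
}

Either variant bounds the cost of an updater being preempted mid-update to
one failed-and-retried compare-exchange on the other vCPUs, instead of an
unbounded spin on a lock held by a descheduled vCPU, which is exactly the
boot lockup the patch addresses.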