
[Xen-devel] [PATCHv5 0/5] Use ticket locks for spinlocks



Use ticket locks for spin locks instead of the current byte locks.
Ticket locks are fair.  This is an important property for hypervisor
locks.
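For reference, the core of a ticket lock can be sketched as below.  This
is only an illustration with made-up names and a GCC builtin, not the
code added by this series: an atomic fetch-and-add hands out tickets in
order and waiters are served strictly FIFO, which is where the fairness
comes from.

    #include <stdint.h>

    /* Illustrative ticket lock -- not the series' actual implementation. */
    typedef struct {
        volatile uint16_t head;   /* ticket currently being served */
        volatile uint16_t tail;   /* next ticket to hand out */
    } ticket_lock_t;

    #define cpu_relax() __asm__ __volatile__ ( "pause" )  /* x86 spin hint */

    static void ticket_lock(ticket_lock_t *lock)
    {
        /* Atomically take the next ticket. */
        uint16_t ticket = __sync_fetch_and_add(&lock->tail, 1);

        /* Wait until our ticket is served; strict FIFO order gives fairness. */
        while ( lock->head != ticket )
            cpu_relax();
        /* A real implementation needs an acquire barrier here. */
    }

    static void ticket_unlock(ticket_lock_t *lock)
    {
        /* Serve the next waiter.  A real implementation needs a release
         * barrier and a sized/atomic increment here (cf. add_sized()). */
        lock->head++;
    }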

Note that spin_lock_irq() and spin_lock_irqsave() now spin with irqs
disabled (previously, they would spin with irqs enabled if possible).
This is required to prevent deadlocks when the irq handler tries to
take the same lock with a higher ticket.
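To make the deadlock concrete: if the caller took ticket N with irqs
still enabled and an interrupt then arrived on the same CPU, a handler
taking the same lock would draw ticket N+1 and spin forever, because
ticket N can never be served while the handler runs.  A hedged sketch,
reusing the illustrative ticket_lock() above (local_irq_disable() stands
in for the hypervisor's real irq-masking primitive):

    static void ticket_lock_irq(ticket_lock_t *lock)
    {
        /* Mask interrupts *before* taking a ticket, so an irq handler on
         * this CPU cannot queue up behind us on the same lock. */
        local_irq_disable();   /* placeholder for the real primitive */
        ticket_lock(lock);
    }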

We have analysed the effect of this series on interrupt latency (by
measuring the duration of irq disable/enable regions) and there is no
significant impact.

http://xenbits.xen.org/people/dvrabel/ticket-locks/busy_comp.png

Thanks to Jennifer Herbert for performing this analysis.

Performance results (using an earlier prototype) of aggregate network
throughput between 20 VIF pairs show a good improvement.

http://xenbits.xen.org/people/dvrabel/3336.png

This benchmark is limited by contention on the map track lock.

Changes in v5:
- _spin_barrier() fixes (re-add smp_mb(), profiling, cpu_relax()).
- Cleanup x86 add_sized().

Changes in v4:
- Use new add_sized() to increment ticket head.
- Add arch_lock_acquire_barrier() and arch_lock_release_barrier().
- Rename xadd() to arch_fetch_and_add().
- Shrink struct domain by 16 bytes.

Changes in v3:
- xadd() implementation for arm32 and arm64.
- Add barriers required on arm.
- Only wait for the current lock holder in _spin_barrier().

Changes in v2:
- Reimplemented in C, requiring only an arch-specific xadd() op
  (sketched below).
- Optimize spin_lock_recursive() to call spin_lock() since trylock
  loops are bad with ticket locks.
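
As a rough illustration of the arch-specific primitive mentioned in the
v2 and v4 changes, a fetch-and-add on x86 can be built on LOCK XADD
along the lines below.  This is a sketch of the intended semantics only,
with an invented name, not the arch_fetch_and_add() added by the series:

    /* Atomically add 'v' to '*ptr' and return the value *ptr held
     * before the addition (illustrative; x86 only). */
    static inline unsigned int fetch_and_add_sketch(volatile unsigned int *ptr,
                                                    unsigned int v)
    {
        __asm__ __volatile__ ( "lock; xaddl %0, %1"
                               : "+r" (v), "+m" (*ptr)
                               :
                               : "memory" );
        return v;   /* now holds the pre-add value */
    }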

David
