Re: [PATCH v4 08/12] xen/spinlock: add another function level
Hi Juergen,

On 13/12/2023 06:23, Juergen Gross wrote:
> On 12.12.23 20:10, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 12/12/2023 09:47, Juergen Gross wrote:
>>> Add another function level in spinlock.c hiding the spinlock_t layout
>>> from the low level locking code. This is done in preparation of
>>> introducing rspinlock_t for recursive locks without having to
>>> duplicate all of the locking code.
>>
>> So all the fields you pass are the ones from spinlock. Looking at
>> pahole after this series is applied, we have:
>>
>> ==== Debug + Lock profile ====
>>
>> struct spinlock {
>>         spinlock_tickets_t         tickets;              /*     0     4 */
>>         union lock_debug           debug;                /*     4     4 */
>>         struct lock_profile *      profile;              /*     8     8 */
>>
>>         /* size: 16, cachelines: 1, members: 3 */
>>         /* last cacheline: 16 bytes */
>> };
>>
>> struct rspinlock {
>>         spinlock_tickets_t         tickets;              /*     0     4 */
>>         uint16_t                   recurse_cpu;          /*     4     2 */
>>         uint8_t                    recurse_cnt;          /*     6     1 */
>>
>>         /* XXX 1 byte hole, try to pack */
>>
>>         union lock_debug           debug;                /*     8     4 */
>>
>>         /* XXX 4 bytes hole, try to pack */
>>
>>         struct lock_profile *      profile;              /*    16     8 */
>>
>>         /* size: 24, cachelines: 1, members: 5 */
>>         /* sum members: 19, holes: 2, sum holes: 5 */
>>         /* last cacheline: 24 bytes */
>> };
>>
>> ==== Debug ====
>>
>> struct spinlock {
>>         spinlock_tickets_t         tickets;              /*     0     4 */
>>         union lock_debug           debug;                /*     4     4 */
>>
>>         /* size: 8, cachelines: 1, members: 2 */
>>         /* last cacheline: 8 bytes */
>> };
>>
>> struct rspinlock {
>>         spinlock_tickets_t         tickets;              /*     0     4 */
>>         uint16_t                   recurse_cpu;          /*     4     2 */
>>         uint8_t                    recurse_cnt;          /*     6     1 */
>>
>>         /* XXX 1 byte hole, try to pack */
>>
>>         union lock_debug           debug;                /*     8     4 */
>>
>>         /* size: 12, cachelines: 1, members: 4 */
>>         /* sum members: 11, holes: 1, sum holes: 1 */
>>         /* last cacheline: 12 bytes */
>> };
>>
>> ==== Prod ====
>>
>> struct spinlock {
>>         spinlock_tickets_t         tickets;              /*     0     4 */
>>         union lock_debug           debug;                /*     4     0 */
>>
>>         /* size: 4, cachelines: 1, members: 2 */
>>         /* last cacheline: 4 bytes */
>> };
>>
>> struct rspinlock {
>>         spinlock_tickets_t         tickets;              /*     0     4 */
>>         uint16_t                   recurse_cpu;          /*     4     2 */
>>         uint8_t                    recurse_cnt;          /*     6     1 */
>>         union lock_debug           debug;                /*     7     0 */
>>
>>         /* size: 8, cachelines: 1, members: 4 */
>>         /* padding: 1 */
>>         /* last cacheline: 8 bytes */
>> };
>>
>> I think we could embed spinlock_t in rspinlock_t without increasing
>> rspinlock_t. Have you considered it? This could reduce the number of
>> function levels introduced in this series.
>
> That was the layout in the first version of this series. Jan requested
> to change it to the current layout [1].

Ah... Looking through the reasoning, I have to disagree with Jan's
argumentation. At least with the full series applied, there is no
increase of rspinlock_t in the debug build (if we compare to the version
you provided in this series).

Furthermore, this is going to remove at least patches #6 and #8. We
would still need nrspinlock_*, but they can just be wrappers around
spin_barrier(&lock->lock).

This should also solve his concern about unwieldy code:

 > + spin_barrier(&p2m->pod.lock.lock.lock);

Cheers,

-- 
Julien Grall
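[Editor's illustration] A minimal sketch of the embedding discussed above, assuming the field names visible in the pahole output; the layout shown corresponds to the "Debug + Lock profile" configuration, and the wrapper name nrspinlock_barrier is hypothetical, only showing how callers could avoid the lock.lock.lock chain:

/*
 * Illustrative sketch only, not the code from the series.  Field names are
 * taken from the pahole output quoted above; everything else is an
 * assumption.
 */
typedef struct spinlock {
    spinlock_tickets_t tickets;
    union lock_debug debug;
    struct lock_profile *profile;
} spinlock_t;

typedef struct rspinlock {
    spinlock_t lock;       /* embedded plain spinlock: 16 bytes here */
    uint16_t recurse_cpu;
    uint8_t recurse_cnt;   /* padded total stays 24 bytes, as in the series */
} rspinlock_t;

/* Hypothetical wrapper: non-recursive helpers stay trivial. */
static inline void nrspinlock_barrier(rspinlock_t *lock)
{
    spin_barrier(&lock->lock);
}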