
Re: [Xen-devel] [PATCH 1/2] xen-netfront: don't nest queue locks in xennet_connect()



On 07/02/2014 11:09 AM, David Vrabel wrote:
The nesting of the per-queue rx_lock and tx_lock in xennet_connect()
is confusing to both humans and lockdep.  The locking is safe because
this is the only place where the locks are nested in this way but
lockdep still warns.

Instead of adding the missing lockdep annotations, refactor the
locking to avoid the confusing nesting.  This is still safe, because
the xenbus connection state changes are all serialized by the xenwatch
thread.
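
For anyone reading the diff below without the surrounding code: before this
patch the first per-queue loop took rx_lock and tx_lock for each queue and
they were only dropped again in the second loop, so with more than one queue
all of the locks end up held at once, nested inside one another. Roughly (a
simplified sketch of the before/after, ring-rebuild and notify details elided):

    /* before: locks accumulate across the first loop ... */
    for (j = 0; j < num_queues; ++j) {
            queue = &np->queues[j];
            spin_lock_bh(&queue->rx_lock);
            spin_lock_irq(&queue->tx_lock);
            /* discard TX bufs, rebuild RX freelist and ring ... */
    }
    netif_carrier_on(np->netdev);
    /* ... and are only released in the second loop */
    for (j = 0; j < num_queues; ++j) {
            queue = &np->queues[j];
            /* notify backend, xennet_tx_buf_gc(), xennet_alloc_rx_buffers() */
            spin_unlock_irq(&queue->tx_lock);
            spin_unlock_bh(&queue->rx_lock);
    }

    /* after: each lock is held only around the work it actually protects,
     * one queue at a time, with no nesting */
    spin_lock_irq(&queue->tx_lock);
    xennet_release_tx_bufs(queue);
    spin_unlock_irq(&queue->tx_lock);

    spin_lock_bh(&queue->rx_lock);
    /* rebuild RX freelist and ring ... */
    spin_unlock_bh(&queue->rx_lock);

    /* (and likewise around xennet_tx_buf_gc() / xennet_alloc_rx_buffers()
     * in the second loop) */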

Tulin (a GSoC student) is working on adding some concurrency to the xenwatch thread (possibly eliminating it and replacing it with a workqueue), which means that we will need to add some sort of synchronization in watch callbacks wherever it is required.
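
Just to make that concrete: once watch callbacks can run concurrently, each
frontend would have to serialize its own connection state changes instead of
relying on the xenwatch thread doing it for free. A very rough sketch of what
that might look like (the mutex here is hypothetical and not part of this
patch; whether it would be global, per device, or something else is exactly
the open question):

    /* hypothetical lock, not part of this patch */
    static DEFINE_MUTEX(conn_state_mutex);

    static void netback_changed(struct xenbus_device *dev,
                                enum xenbus_state backend_state)
    {
            mutex_lock(&conn_state_mutex);
            /* ... existing state machine, xennet_connect() etc. ... */
            mutex_unlock(&conn_state_mutex);
    }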

-boris


Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
Reported-by: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
---
  drivers/net/xen-netfront.c |   16 ++++++++++++----
  1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 2ccb4a0..6a37d62 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -2046,13 +2046,15 @@ static int xennet_connect(struct net_device *dev)
        /* By now, the queue structures have been set up */
        for (j = 0; j < num_queues; ++j) {
                queue = &np->queues[j];
-               spin_lock_bh(&queue->rx_lock);
-               spin_lock_irq(&queue->tx_lock);
                /* Step 1: Discard all pending TX packet fragments. */
+               spin_lock_irq(&queue->tx_lock);
                xennet_release_tx_bufs(queue);
+               spin_unlock_irq(&queue->tx_lock);
                /* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+               spin_lock_bh(&queue->rx_lock);
+
                for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
                        skb_frag_t *frag;
                        const struct page *page;
@@ -2076,6 +2078,8 @@ static int xennet_connect(struct net_device *dev)
                }
                queue->rx.req_prod_pvt = requeue_idx;
+
+               spin_unlock_bh(&queue->rx_lock);
        }
        /*
@@ -2087,13 +2091,17 @@ static int xennet_connect(struct net_device *dev)
        netif_carrier_on(np->netdev);
        for (j = 0; j < num_queues; ++j) {
                queue = &np->queues[j];
+
                notify_remote_via_irq(queue->tx_irq);
                if (queue->tx_irq != queue->rx_irq)
                        notify_remote_via_irq(queue->rx_irq);
-               xennet_tx_buf_gc(queue);
-               xennet_alloc_rx_buffers(queue);
+               spin_lock_irq(&queue->tx_lock);
+               xennet_tx_buf_gc(queue);
                spin_unlock_irq(&queue->tx_lock);
+
+               spin_lock_bh(&queue->rx_lock);
+               xennet_alloc_rx_buffers(queue);
                spin_unlock_bh(&queue->rx_lock);
        }


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

