
Re: [Xen-devel] [PATCH v2 11/13] xen/pvcalls: implement release command



On Thu, 27 Jul 2017, Boris Ostrovsky wrote:
> > +int pvcalls_front_release(struct socket *sock)
> > +{
> > +   struct pvcalls_bedata *bedata;
> > +   struct sock_mapping *map;
> > +   int req_id, notify;
> > +   struct xen_pvcalls_request *req;
> > +
> > +   if (!pvcalls_front_dev)
> > +           return -EIO;
> > +   bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
> > +   if (!bedata)
> > +           return -EIO;
> 
> Some (all?) other ops don't check bedata validity. Should they all do?

No, I don't think they should: dev_set_drvdata is called in the probe
function (pvcalls_front_probe), so bedata is guaranteed to be valid by the
time any of the other ops can run. I'll remove the check here as well.
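
For reference, the probe side looks roughly like this (a condensed sketch,
not the exact code in the series; only the drvdata handling matters here):

static int pvcalls_front_probe(struct xenbus_device *dev,
                               const struct xenbus_device_id *id)
{
        struct pvcalls_bedata *bedata;

        bedata = kzalloc(sizeof(*bedata), GFP_KERNEL);
        if (!bedata)
                return -ENOMEM;

        /* ... allocate the ring, bind the event channel, init locks ... */

        dev_set_drvdata(&dev->dev, bedata);
        return 0;
}

Every op that runs after probe can therefore rely on dev_get_drvdata
returning a valid pointer.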


> > +
> > +   if (sock->sk == NULL)
> > +           return 0;
> > +
> > +   map = (struct sock_mapping *) READ_ONCE(sock->sk->sk_send_head);
> > +   if (map == NULL)
> > +           return 0;
> > +
> > +   spin_lock(&bedata->pvcallss_lock);
> > +   req_id = bedata->ring.req_prod_pvt & (RING_SIZE(&bedata->ring) - 1);
> > +   if (RING_FULL(&bedata->ring) ||
> > +       READ_ONCE(bedata->rsp[req_id].req_id) != PVCALLS_INVALID_ID) {
> > +           spin_unlock(&bedata->pvcallss_lock);
> > +           return -EAGAIN;
> > +   }
> > +   WRITE_ONCE(sock->sk->sk_send_head, NULL);
> > +
> > +   req = RING_GET_REQUEST(&bedata->ring, req_id);
> > +   req->req_id = req_id;
> > +   req->cmd = PVCALLS_RELEASE;
> > +   req->u.release.id = (uint64_t)sock;
> > +
> > +   bedata->ring.req_prod_pvt++;
> > +   RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&bedata->ring, notify);
> > +   spin_unlock(&bedata->pvcallss_lock);
> > +   if (notify)
> > +           notify_remote_via_irq(bedata->irq);
> > +
> > +   wait_event(bedata->inflight_req,
> > +           READ_ONCE(bedata->rsp[req_id].req_id) == req_id);
> > +
> > +   if (map->active_socket) {
> > +           /* 
> > +            * Set in_error and wake up inflight_conn_req to force
> > +            * recvmsg waiters to exit.
> > +            */
> > +           map->active.ring->in_error = -EBADF;
> > +           wake_up_interruptible(&map->active.inflight_conn_req);
> > +
> > +           mutex_lock(&map->active.in_mutex);
> > +           mutex_lock(&map->active.out_mutex);
> > +           pvcalls_front_free_map(bedata, map);
> > +           mutex_unlock(&map->active.out_mutex);
> > +           mutex_unlock(&map->active.in_mutex);
> > +           kfree(map);
> 
> Since you are locking here I assume you expect that someone else might
> also be trying to lock the map. But you are freeing it immediately after
> unlocking. Wouldn't that mean that whoever is trying to grab the lock
> might then dereference freed memory?

The mutexes are there to make sure no recvmsg or sendmsg is still in
progress. We also know that no newer sendmsg or recvmsg can end up waiting
for pvcalls_front_release to drop them, because we set sk_send_head to NULL
before sending the release request to the backend, so any caller arriving
after that point sees a NULL map and returns early.
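
To make the ordering explicit, the sendmsg side follows roughly this
pattern (a condensed sketch, not the actual code in the series;
__write_ring is just a placeholder for the ring copy):

int pvcalls_front_sendmsg(struct socket *sock, struct msghdr *msg,
                          size_t len)
{
        struct sock_mapping *map;
        int ret;

        /*
         * pvcalls_front_release clears sk_send_head before issuing
         * PVCALLS_RELEASE, so a caller arriving after that point sees
         * NULL here and never touches the map.
         */
        map = (struct sock_mapping *)READ_ONCE(sock->sk->sk_send_head);
        if (map == NULL)
                return -ENOTCONN;

        /*
         * A caller that got in before release holds out_mutex for the
         * whole transfer; release waits on the same mutex before
         * freeing the map, so the map cannot disappear under us.
         */
        mutex_lock(&map->active.out_mutex);
        ret = __write_ring(map, msg, len);      /* placeholder helper */
        mutex_unlock(&map->active.out_mutex);

        return ret;
}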


> > +   } else {
> > +           spin_lock(&bedata->pvcallss_lock);
> > +           list_del_init(&map->list);
> > +           kfree(map);
> > +           spin_unlock(&bedata->pvcallss_lock);
> > +   }
> > +   WRITE_ONCE(bedata->rsp[req_id].req_id, PVCALLS_INVALID_ID);
> > +
> > +   return 0;
> > +}
> > +
> >  static const struct xenbus_device_id pvcalls_front_ids[] = {
> >     { "pvcalls" },
> >     { "" }
> > diff --git a/drivers/xen/pvcalls-front.h b/drivers/xen/pvcalls-front.h
> > index 25e05b8..3332978 100644
> > --- a/drivers/xen/pvcalls-front.h
> > +++ b/drivers/xen/pvcalls-front.h
> > @@ -23,5 +23,6 @@ int pvcalls_front_recvmsg(struct socket *sock,
> >  unsigned int pvcalls_front_poll(struct file *file,
> >                             struct socket *sock,
> >                             poll_table *wait);
> > +int pvcalls_front_release(struct socket *sock);
> >  
> >  #endif
