
Re: [Xen-devel] [PATCH 08/18] xen/pvcalls: implement connect command



>>> +   ret = xenbus_map_ring_valloc(dev, &req->u.connect.ref, 1, &page);
>>> +   if (ret < 0) {
>>> +           sock_release(map->sock);
>>> +           kfree(map);
>>> +           goto out;
>>> +   }
>>> +   map->ring = page;
>>> +   map->ring_order = map->ring->ring_order;
>>> +   /* first read the order, then map the data ring */
>>> +   virt_rmb();
>>
>> Not sure I understand what the barrier is for here. I don't think the
>> compiler will reorder the ring_order access with the call.
> It's to avoid using the live version of ring_order to map the data ring
> pages (the other end could be changing that value at any time). We want
> to be sure that the compiler doesn't optimize away the local
> map->ring_order copy and re-read map->ring->ring_order instead.

Wouldn't WRITE_ONCE(map->ring_order, map->ring->ring_order) be the right
primitive then?
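
Something along these lines is what I have in mind (an untested sketch
against the field names already in this patch, not a drop-in replacement):

	map->ring = page;
	/* snapshot the shared order exactly once: the other end owns
	 * map->ring->ring_order and may change it under our feet */
	WRITE_ONCE(map->ring_order, READ_ONCE(map->ring->ring_order));
	if (map->ring_order > MAX_RING_ORDER) {
		ret = -EFAULT;
		goto out;
	}
	ret = xenbus_map_ring_valloc(dev, map->ring->ref,
				     (1 << map->ring_order), &page);

(The READ_ONCE() on the shared field is arguably the half that matters
for the "compiler re-reads the live value" concern.)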

And also: if the other side changes the ring size in the meantime, what
are we mapping then? The order we just read is already obsolete.

-boris

>
>
>>> +   if (map->ring_order > MAX_RING_ORDER) {
>>> +           ret = -EFAULT;
>>> +           goto out;
>>> +   }
>> If the barrier is indeed needed, this check belongs before it.
> I don't think so, see above.
>
>
>>
>>> +   ret = xenbus_map_ring_valloc(dev, map->ring->ref,
>>> +                                (1 << map->ring_order), &page);
>>> +   if (ret < 0) {
>>> +           sock_release(map->sock);
>>> +           xenbus_unmap_ring_vfree(dev, map->ring);
>>> +           kfree(map);
>>> +           goto out;
>>> +   }
>>> +   map->bytes = page;
>>>

